Similar documents
20 similar documents found (search time: 31 ms)
1.
Sanyu Zhou, Metrika (2017) 80(2): 187–200
A simultaneous confidence band is a useful statistical tool in a simultaneous inference procedure. In recent years several papers have considered various applications of simultaneous confidence bands; see, for example, Al-Saidy et al. (Biometrika 59:1056–1062, 2003), Liu et al. (J Am Stat Assoc 99:395–403, 2004), Piegorsch et al. (J R Stat Soc 54:245–258, 2005) and Liu et al. (Aust N Z J Stat 55(4):421–434, 2014). In this article, we provide methods for constructing one-sided hyperbolic simultaneous confidence bands for both the multiple regression model over a rectangular region and the polynomial regression model over an interval. These methods use numerical quadrature. Examples are included to illustrate the methods. These approaches can be applied to more general regression models, such as fixed-effect or random-effect generalized linear regression models, to construct large-sample approximate one-sided hyperbolic simultaneous confidence bands.

2.
Simultaneous confidence bands are versatile tools for visualizing estimation uncertainty for parameter vectors, such as impulse response functions. In linear models, it is known that the sup-t confidence band is narrower than commonly used alternatives such as Bonferroni and projection bands. We show that the same ranking applies asymptotically even in general nonlinear models, such as vector autoregressions (VARs). Moreover, we provide further justification for the sup-t band by showing that it is the optimal default choice when the researcher does not know the audience's preferences. Complementing existing plug-in and bootstrap implementations, we propose a computationally convenient Bayesian sup-t band with exact finite-sample simultaneous credibility. In an application to structural VAR impulse response function estimation, the sup-t band, which has been surprisingly overlooked in this setting, is at least 35% narrower than other off-the-shelf simultaneous bands.
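The sup-t construction described above can be sketched with a small Monte Carlo: take draws of the parameter vector (bootstrap or posterior) and calibrate a single critical value from the distribution of the maximum standardized deviation. The data-generating process, dimensions, and names below are illustrative, not from the paper.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: 2000 bootstrap draws of a 5-dimensional
# impulse-response vector, with noise growing over the horizon.
H, B = 5, 2000
true_irf = [1.0, 0.7, 0.4, 0.2, 0.1]
draws = [[theta + random.gauss(0, 0.1 * (h + 1))
          for h, theta in enumerate(true_irf)]
         for _ in range(B)]

means = [statistics.fmean(d[h] for d in draws) for h in range(H)]
sds = [statistics.stdev([d[h] for d in draws]) for h in range(H)]

# sup-t critical value: the 95% quantile of the maximum absolute
# t-statistic across all coordinates of each draw.
max_t = sorted(max(abs(d[h] - means[h]) / sds[h] for h in range(H))
               for d in draws)
crit = max_t[int(0.95 * B)]

# One common critical value yields simultaneous (joint) coverage.
band = [(means[h] - crit * sds[h], means[h] + crit * sds[h])
        for h in range(H)]
```

Because the critical value is calibrated on the joint distribution of the draws, the band adapts to correlation across coordinates, which is why it is narrower than Bonferroni in general.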

3.
p-Values are commonly transformed to lower bounds on Bayes factors, so-called minimum Bayes factors. For the linear model, a sample-size adjusted minimum Bayes factor over the class of g-priors on the regression coefficients has recently been proposed (Held & Ott, The American Statistician 70(4), 335–341, 2016). Here, we extend this methodology to logistic regression to obtain a sample-size adjusted minimum Bayes factor for 2 × 2 contingency tables. We then study the relationship between this minimum Bayes factor and two-sided p-values from Fisher's exact test, as well as less conservative alternatives, with a novel parametric regression approach. It turns out that, for all p-values considered, the maximal evidence against the point null hypothesis is inversely related to the sample size. The same qualitative relationship is observed for minimum Bayes factors over the more general class of symmetric prior distributions. For the p-values from Fisher's exact test, the minimum Bayes factors do not, on average, tend to the large-sample bound as the sample size becomes large, but for the less conservative alternatives the large-sample behaviour is as expected.
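As a concrete point of reference for the p-value-to-Bayes-factor idea, the classic Sellke-Bayarri-Berger calibration gives a minimum Bayes factor of -e*p*log(p); this is the well-known general bound, not the sample-size adjusted factor proposed in the paper.

```python
import math

def min_bayes_factor(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor for the
    point null given a two-sided p-value; valid for 0 < p < 1/e.
    (This is the classic calibration, not Held & Ott's adjusted one.)"""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound applies for 0 < p < 1/e")
    return -math.e * p * math.log(p)

# p = 0.05 maps to a minimum Bayes factor of about 0.41, i.e. the data
# are at best roughly 2.5:1 evidence against the null.
mbf = min_bayes_factor(0.05)
```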

4.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived, and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
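The sub-sample ratio tests mentioned above build on KPSS-type stationarity statistics; a minimal sketch of the level-stationarity version (with a naive variance estimate standing in for a HAC long-run variance, and illustrative data) is:

```python
def kpss_level(y):
    """KPSS-type level-stationarity statistic: large values indicate
    departure from stationarity. Uses the naive residual variance
    rather than a HAC long-run variance estimator (a simplification)."""
    n = len(y)
    mean = sum(y) / n
    e = [v - mean for v in y]
    # Cumulative partial sums of demeaned observations.
    partial, s = [], 0.0
    for v in e:
        s += v
        partial.append(s)
    s2 = sum(v * v for v in e) / n
    return sum(p * p for p in partial) / (n * n * s2)
```

A trending series produces drifting partial sums and a large statistic, while a mean-reverting series keeps the partial sums, and hence the statistic, small; the ratio tests exploit this contrast between sub-samples.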

5.
The main approach to dealing with regressor endogeneity is the instrumental variable estimator (IVE), where an instrumental variable (IV) m is required to be uncorrelated with the regression model error term u (COR(m,u)=0) and correlated with the endogenous regressor. If COR(m,u)≠0 is likely, then m gets discarded. But even when COR(m,u)≠0, one often has a good idea of the sign of COR(m,u). This article shows how to use the sign information on COR(m,u) to obtain a one-sided bound on the endogenous regressor coefficient, calling m a ‘generalized instrument’ or ‘generalized instrumental variable (GIV)’. If there are two GIVs m1 and m2, then a two-sided bound or an improved one-sided bound can be obtained. Our approach is simple, needing only IVE; no non-parametrics, nor any ‘tuning constants’. Specifically, the usual IVE is carried out, and the only necessary modification is that the estimate for the endogenous regressor coefficient is interpreted as a lower/upper bound, depending on the prior notion of the sign of COR(m,u) and some estimable moment. A real-data application to Korean household data with two or more children illustrates our approach for the issue of the child quantity–quality trade-off.

6.
We study the problem of building confidence sets for ratios of parameters from an identification robust perspective. In particular, we address the simultaneous confidence set estimation of a finite number of ratios. Results apply to a wide class of models suitable for estimation by consistent asymptotically normal procedures. Conventional methods (e.g. the delta method), derived by excluding the parameter discontinuity regions entailed by the ratio functions and typically yielding bounded confidence limits, break down even if the sample size is large (Dufour, 1997). One solution to this problem, which we take in this paper, is to use variants of Fieller's (1940, 1954) method. By inverting a joint test that does not require identifying the ratios, Fieller-based confidence regions are formed for the full set of ratios. Simultaneous confidence sets for individual ratios are then derived by applying projection techniques, which allow for possibly unbounded outcomes. In this paper, we provide simple explicit closed-form analytical solutions for projection-based simultaneous confidence sets in the case of linear transformations of ratios. Our solution further provides a formal proof for the expressions in Zerbe et al. (1982) pertaining to individual ratios. We apply the geometry of quadrics in a different although related context. The confidence sets so obtained are exact if the inverted test statistic admits a tractable exact distribution, for instance in the normal linear regression context. The proposed procedures are applied and assessed via illustrative Monte Carlo and empirical examples, with a focus on discrete choice models estimated by exact or simulation-based maximum likelihood. Our results underscore the superiority of Fieller-based methods.
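For a single ratio, Fieller's method amounts to inverting a quadratic inequality in the ratio, which can yield a bounded interval or an unbounded set when the denominator is not significantly different from zero. A minimal textbook sketch (not the paper's simultaneous projection machinery) is:

```python
import math

def fieller_interval(a, b, va, vb, vab, z=1.96):
    """Fieller confidence set for the ratio a/b, given estimates (a, b),
    their variances (va, vb) and covariance vab. Inverts
    (a - r*b)^2 <= z^2 * (va - 2*r*vab + r^2*vb) in r.
    Returns an interval, or None when the set is unbounded."""
    A = b * b - z * z * vb
    B = -2 * (a * b - z * z * vab)
    C = a * a - z * z * va
    disc = B * B - 4 * A * C
    if A > 0 and disc >= 0:
        r1 = (-B - math.sqrt(disc)) / (2 * A)
        r2 = (-B + math.sqrt(disc)) / (2 * A)
        return (r1, r2)
    return None  # denominator not significantly nonzero: unbounded set

# Illustrative inputs: precise estimates give a tight bounded interval.
interval = fieller_interval(2.0, 4.0, 0.01, 0.01, 0.0)
```

The None branch is exactly the feature the abstract stresses: unlike the delta method, Fieller-based sets can be unbounded, which preserves correct coverage under weak identification.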

7.
Abstract

We examine three tests of cross-sectional dependence and apply them to a Danish regional panel dataset with few time periods and a large cross-section: the CD test due to Pesaran (2004), the Schott test and the Liu–Lin–Shao test. We show that the CD test and the Schott test have good properties in a Monte Carlo study. When controlling for panel-specific and time-specific fixed effects, the Schott test is superior. Our application shows that there is cross-sectional dependence in regional employment growth across Danish regions. We also show that this dependence can be accounted for by time-specific fixed effects. Thus, the tests uncover new properties of the regional data.

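The CD test examined above is built from the pairwise correlations of panel residuals; a minimal sketch of the statistic (inputs are illustrative, and under the null of no cross-sectional dependence CD is approximately standard normal) is:

```python
import math

def cd_statistic(rho, T):
    """Pesaran-style CD statistic from the matrix rho of pairwise
    residual correlations across N panel units observed over T periods:
    CD = sqrt(2T / (N(N-1))) * sum of upper-triangular correlations."""
    N = len(rho)
    s = sum(rho[i][j] for i in range(N) for j in range(i + 1, N))
    return math.sqrt(2 * T / (N * (N - 1))) * s

# Illustrative: three units, T = 25, all pairwise correlations 0.5.
rho = [[1.0, 0.5, 0.5],
       [0.5, 1.0, 0.5],
       [0.5, 0.5, 1.0]]
cd = cd_statistic(rho, T=25)   # far in the tail of N(0,1): dependence
```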

8.
The cross-section average (CA) augmentation approach of Pesaran (A simple panel unit root test in presence of cross-section dependence. Journal of Applied Econometrics 2007; 22: 265–312) and Pesaran et al. (Panel unit root test in the presence of a multifactor error structure. Journal of Econometrics 2013; 175: 94–115), and the principal components-based panel analysis of non-stationarity in idiosyncratic and common components (PANIC) of Bai and Ng (A PANIC attack on unit roots and cointegration. Econometrica 2004; 72: 1127–1177; Panel unit root tests with cross-section dependence: a further investigation. Econometric Theory 2010; 26: 1088–1114) are among the most popular ‘second-generation’ approaches for cross-section correlated panels. One feature of these approaches is that they have different strengths and weaknesses. The purpose of the current paper is to develop PANICCA, a combined approach that exploits the strengths of both CA and PANIC. Copyright © 2015 John Wiley & Sons, Ltd.

9.
In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic that does not asymptotically depend on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend, and for the model with breaks in the level and in the time trend, Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175] showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the statistics proposed are derived under the null hypothesis and are shown to be normal. We show by simulations that our suggested tests generally perform well in finite samples, with the exception of the modified test. In an empirical application to the consumer prices of 22 OECD countries during the period from 1953 to 2003, we find evidence of stationarity once a structural break and cross-sectional dependence are accommodated.

10.
Acemoglu et al. (American Economic Review 2008; 98: 808–842) find no effect of income on democracy when controlling for fixed effects in a dynamic panel model. Work by Moral-Benito and Bartolucci (Economics Letters 2012; 117: 844–847) and Cervellati et al. (American Economic Review 2014; 104: 707–719) suggests that the original model might have been misspecified and proposes alternative specifications instead. We formally test these parametric specifications by implementing Lee's (Journal of Econometrics 2014; 178: 146–166) dynamic panel test of linear parametric specifications against a general class of nonlinear alternatives, and we robustly reject all of these specifications. However, using a more flexible model proposed by Cai and Li (Econometric Theory 2008; 24: 1321–1342), we find that the relationship between income and democracy appears to be mediated by education, although the results are not statistically significant. Copyright © 2016 John Wiley & Sons, Ltd.

11.
Ridge estimation (RE) is an alternative to ordinary least squares when there is a collinearity problem in a linear regression model. The variance inflation factor (VIF) is applied to test whether the problem exists in the original model, and it is also needed after applying the ridge estimate to check whether the chosen value of the parameter k has mitigated the collinearity problem. This paper shows that applying the original data when working with the ridge estimate leads to non-monotone VIF values. García et al. (2014) showed some problems with the traditional VIF used in RE. We propose an augmented VIF, VIFR(j,k), associated with RE, which is obtained by standardizing the data before augmenting the model. The VIFR(j,k) coincides with the VIF associated with the ordinary least squares estimator when k = 0. The augmented VIF has the very desirable properties of being continuous, monotone in the ridge parameter, and greater than one.

12.
In epidemiology and clinical research, there is often a proportion of unexposed individuals, resulting in zero values of exposure: some individuals are not exposed, while the exposure of those exposed follows some continuous distribution. Examples are smoking or alcohol consumption. We call these variables with a spike at zero (SAZ). In this paper, we perform a systematic investigation of how to model covariates with a SAZ and derive theoretical odds ratio functions for selected bivariate distributions. We consider the bivariate normal and bivariate log-normal distributions with a SAZ. Both confounding and effect modification can be elegantly described by formalizing the covariance matrix given the binary outcome variable Y. To model the effect of these variables, we use a procedure based on fractional polynomials, first introduced by Royston and Altman (1994, Applied Statistics 43: 429–467) and modified for the SAZ situation (Royston and Sauerbrei, 2008, Multivariable model-building: a pragmatic approach to regression analysis based on fractional polynomials for modelling continuous variables, Wiley; Becher et al., 2012, Biometrical Journal 54: 686–700). We aim to contribute to the theory, practical procedures, and application in epidemiology and clinical research of deriving multivariable models for variables with a SAZ. As an example, we use data from a case–control study on lung cancer.
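The two-part idea behind spike-at-zero modelling can be sketched as a simple coding step: each exposure value contributes a binary "exposed" indicator plus its continuous part. The helper below is hypothetical and only illustrates the coding; the paper's fractional-polynomial machinery then acts on the continuous part.

```python
def saz_code(x):
    """Two-part coding for a spike-at-zero exposure: a binary exposed
    indicator plus the continuous exposure among the exposed
    (zero for the unexposed). Illustrative helper, not from the paper."""
    exposed = 1 if x > 0 else 0
    return exposed, x if exposed else 0.0

# Example: cigarettes per day for a non-smoker and a smoker.
codes = [saz_code(0.0), saz_code(12.5)]
```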

13.
In the Stackelberg duopoly experiments in Huck et al. (2001), nearly half of the followers' behaviours are inconsistent with the conventional prediction. We use a test in which the conventional self-interested model is nested as a special case of an inequality aversion model. Maximum likelihood methods applied to the Huck et al. (2001) data set reject the self-interested model. We find that almost 40% of the players have disadvantageous inequality aversion that is statistically different from zero and economically significant, but advantageous inequality aversion is relatively unimportant. These estimates provide support for a more parsimonious model with no advantageous inequality aversion.
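A standard way to nest self-interest inside inequality aversion, in the spirit of the test described above, is the Fehr-Schmidt utility: setting both aversion parameters to zero recovers the self-interested model. The parameter values below are illustrative, not estimates from the paper.

```python
def fehr_schmidt_utility(own, other, alpha, beta):
    """Fehr-Schmidt inequality-aversion utility: alpha penalizes
    disadvantageous inequality (the other player ahead), beta penalizes
    advantageous inequality (own payoff ahead). With alpha = beta = 0
    this reduces to pure self-interest, so the two models are nested."""
    return (own
            - alpha * max(other - own, 0)
            - beta * max(own - other, 0))

# Illustrative parameters: strong disadvantageous aversion, weak
# advantageous aversion, as the abstract's estimates suggest.
u_equal = fehr_schmidt_utility(10.0, 10.0, alpha=0.5, beta=0.25)
u_behind = fehr_schmidt_utility(10.0, 20.0, alpha=0.5, beta=0.25)
u_ahead = fehr_schmidt_utility(20.0, 10.0, alpha=0.5, beta=0.25)
```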

14.
Suppose the observations (Xi, Yi), i=1,…,n, are ϕ-mixing. The strong uniform convergence and convergence rate of the estimator of the regression function have been studied by several authors, e.g. G. Collomb (1984) and L. Györfi et al. (1989). But the optimal convergence rates are not reached unless the Yi are bounded or the E exp(a|Yi|) are bounded for some a>0. Compared with the i.i.d. case, the convergence of the Nadaraya-Watson estimator under ϕ-mixing variables needs strong moment conditions. In this paper we study the strong uniform convergence and convergence rate for the improved kernel estimator of the regression function suggested by Cheng P. (1983). Compared with Theorem A in Y. P. Mack and B. Silverman (1982) or Theorem 3.3.1 in L. Györfi et al. (1989), we prove the convergence of this kind of estimator under weaker moment conditions. The optimal convergence rate for the improved kernel estimator is attained under almost the same conditions as Theorem 3.3.2 in L. Györfi et al. (1989). Received: September 1999
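For orientation, the plain Nadaraya-Watson estimator that the improved kernel estimator modifies is a kernel-weighted average of the responses; a minimal sketch with a Gaussian kernel and illustrative data (not Cheng's improved version) is:

```python
import math

def nadaraya_watson(x0, xs, ys, h):
    """Nadaraya-Watson kernel regression estimate at x0 with a Gaussian
    kernel and bandwidth h: a weighted average of the ys, with weights
    decaying in the distance of each x to x0. Textbook sketch only."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, ys)) / total

# Illustrative data, roughly y = x; the fit near x0 = 1 should be near 1.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.1, 0.6, 0.9, 1.6, 2.1]
m_hat = nadaraya_watson(1.0, xs, ys, h=0.3)
```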

15.
Most empirical studies in the banking literature assume that the alternative profit function is linearly homogeneous in input prices. We show that such an assumption is theoretically unwarranted and that its use may yield misleading results. We use Koetter et al. (Review of Economics and Statistics 2012; 94(2): 462–480) as a benchmark to showcase how empirical results can be sensitive to the linear homogeneity assumption. Contrary to Koetter et al., we find a positive relation between market power and profit efficiency when this assumption is dropped. This relation is slightly weakened after the wave of intrastate and interstate deregulation, but not enough to support the ‘quiet life’ hypothesis. Copyright © 2014 John Wiley & Sons, Ltd.

16.
Recent work (Seaman et al., 2013; Mealli & Rubin, 2015) attempts to clarify the not always well-understood difference between realised and everywhere definitions of missing at random (MAR) and missing completely at random (MCAR). Another branch of the literature (Mohan et al., 2013; Pearl & Mohan, 2013) exploits always-observed covariates to give variable-based definitions of MAR and MCAR. In this paper, we develop a unified taxonomy encompassing all approaches. In this taxonomy, the new concept of ‘complementary MAR’ is introduced, and its relationship with the concept of data observed at random is discussed. All relationships among these definitions are analysed and represented graphically. Conditional independence, both at the random variable and at the event level, is the formal language we adopt to connect all these definitions. Our paper covers both the univariate and the multivariate case, where attention is paid to monotone missingness and to the concept of sequential MAR. Specifically, for monotone missingness, we propose a sequential MAR definition that might be more appropriate than both everywhere and variable-based MAR to model dropout in certain contexts.

17.
It has long been known that the level of entrepreneurship, indicated as the percentage of incorporated and unincorporated nascent businesses relative to the labor force, differs strongly across countries. This variance is related to differences in levels of economic development (Wennekers et al. 2005), but also to diverging demographic, cultural, and institutional characteristics (Acs and Armington 2004; Busenitz et al. 2000; Fusari 1996; Karlsson and Duhlberg 2003; Rocha 2004; Thurik et al. 2006; Wong et al. 2005). Incorporating an institutional perspective, the aim of this research is to test whether culture, operationalized through World Values Survey (WVS) data, is a significant factor in predicting opportunity and necessity entrepreneurship rates at the country level. Opportunity and necessity entrepreneurship rates are averaged over 2001 to 2003 from the Global Entrepreneurship Monitor (GEM) and aggregated for 38 countries in this cross-sectional analysis.

18.
In this paper we present finite-T mean and variance correction factors and corresponding response surface regressions for the panel cointegration tests presented in Pedroni (1999, 2004), Westerlund (2005), Larsson et al. (2001) and Breitung (2005). For the single-equation tests, we consider up to 12 regressors, and for the system tests, vector autoregression dimensions of up to 12 variables. All commonly used specifications for the deterministic components are considered. The sample sizes considered are T ∈ {10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 500}.

19.
Pareto optimality (sometimes known as Pareto efficiency) is an important notion in the social sciences and related areas; see, e.g., Klaus (2006), Chun (2005), Hild (2004), Kibris (2003), Nandeibam (2003), Papai (2001), Peris and Sanchez (2001), Brams and Fishburn (2000), Denicolo (1999), Klaus et al. (1998), Peremans et al. (1997), and Vohra (1992). This notion invariably involves the comparison of the utility of one outcome versus another, i.e. the ratio of two utilities or, in general, the ratio of two random variables. In this note, we derive the exact distribution of the ratio X/(X+Y) when X and Y are Pareto random variables, the Pareto distribution being the first and most popular distribution used in the social sciences and related areas.

20.
L1 regularization, or regularization with an L1 penalty, is a popular idea in statistics and machine learning. This paper reviews the concept and application of L1 regularization for regression. It is not our aim to present a comprehensive list of the utilities of the L1 penalty in the regression setting. Rather, we focus on what we believe is the set of most representative uses of this regularization technique, which we describe in some detail. Thus, we deal with a number of L1-regularized methods for linear regression, generalized linear models, and time series analysis. Although this review targets practice rather than theory, we do give some theoretical details about L1-penalized linear regression, usually referred to as the least absolute shrinkage and selection operator (lasso).
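The mechanism behind the lasso's variable selection is the soft-thresholding operator that appears in its coordinate-descent solution: the L1 penalty shrinks coefficients toward zero and sets small ones exactly to zero. A minimal sketch:

```python
def soft_threshold(z, gamma):
    """Soft-thresholding operator, the building block of coordinate-
    descent lasso: shrinks z toward zero by gamma and maps values
    within [-gamma, gamma] exactly to zero (hence sparse solutions)."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

# A larger penalty zeroes out more coefficients (sparse selection):
coefs = [soft_threshold(z, 1.0) for z in [3.0, 0.5, -2.0, -0.3]]
```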
