Similar Literature
20 similar documents found (search time: 31 ms)
1.
In this paper several cumulative sum (CUSUM) charts for the mean of a multivariate time series are introduced. We extend the control schemes for independent multivariate observations of Crosier [Technometrics (1988) Vol. 30, pp. 187–194], Pignatiello and Runger [Journal of Quality Technology (1990) Vol. 22, pp. 173–186], and Ngai and Zhang [Statistica Sinica (2001) Vol. 11, pp. 747–766] to multivariate time series by taking into account the probability structure of the underlying stochastic process. We consider modified charts and residual schemes as well, and we analyze the conditions under which these charts are directionally invariant. In an extensive Monte Carlo study these charts are compared with the CUSUM scheme of Theodossiou [Journal of the American Statistical Association (1993) Vol. 88, pp. 441–448], the multivariate exponentially weighted moving-average (EWMA) chart of Kramer and Schmid [Sequential Analysis (1997) Vol. 16, pp. 131–154], and the control procedures of Bodnar and Schmid [Frontiers of Statistical Process Control (2006) Physica, Heidelberg]. As a measure of performance, the maximum expected delay is used.
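For readers unfamiliar with multivariate CUSUMs, here is a minimal sketch of the kind of recursion these charts build on: Crosier's (1988) scheme for independent observations, which the paper extends to time series. The reference value k, the in-control mean and covariance, and all function/variable names are our illustrative choices, not the paper's.

```python
import numpy as np

def crosier_mcusum(x, mean0, sigma, k=0.5):
    """Crosier-type multivariate CUSUM statistic MC_t for each observation.

    x: (T, p) array of observations; mean0: in-control mean; sigma: in-control
    covariance. Returns the array of MC_t values; a chart signals when MC_t
    exceeds a control limit h chosen to give the desired in-control run length.
    """
    sigma_inv = np.linalg.inv(sigma)
    s = np.zeros(len(mean0))
    out = []
    for xt in x:
        d = s + (xt - mean0)
        c = float(np.sqrt(d @ sigma_inv @ d))
        if c <= k:
            s = np.zeros(len(mean0))   # reset when the update is small
        else:
            s = d * (1.0 - k / c)      # shrink toward zero by the reference value
        out.append(float(np.sqrt(s @ sigma_inv @ s)))
    return np.array(out)
```

With in-control data the statistic stays at zero; under a sustained mean shift it accumulates, which is what the chart exploits.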

2.
This paper considers the problem of procuring truthful responses to estimate the proportion of qualitative characteristics. In order to protect the respondent's privacy, various techniques of generating randomized responses rather than direct responses are available in the literature. But the theory concerning them is generally developed with no attention to the required level of privacy protection. Illustrating two randomization devices, we show how optimal randomized response designs may be achieved. The optimal designs of the forced response procedure as well as the Bhargava and Singh [Statistica (2000) Vol. 6, pp. 315–321] procedure are shown to be special cases. In addition, the equivalent designs of the optimal Warner [Journal of the American Statistical Association (1965) Vol. 60, pp. 63–69] procedure are considered as well. It is also shown that stratification with proportional allocation is helpful for improving estimation efficiency.
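For context, Warner's (1965) device admits a closed-form unbiased estimator of the sensitive proportion. The sketch below assumes the classical design (each respondent answers the sensitive question with probability p and its complement with probability 1 - p); it is an illustration of the baseline procedure, not a design from the paper.

```python
def warner_estimate(lam_hat, p, n):
    """Unbiased estimator of the sensitive proportion pi under Warner's (1965)
    randomized response design.

    lam_hat: observed proportion of 'yes' answers; p: design probability of
    receiving the sensitive question (p != 0.5); n: sample size.
    Returns (pi_hat, estimated variance of pi_hat).
    """
    pi_hat = (lam_hat - (1.0 - p)) / (2.0 * p - 1.0)
    var = lam_hat * (1.0 - lam_hat) / (n * (2.0 * p - 1.0) ** 2)
    return pi_hat, var
```

If the true proportion is pi, the expected 'yes' rate is p*pi + (1-p)*(1-pi), and plugging that rate in recovers pi exactly, which is what makes the estimator unbiased.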

3.
In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic that does not asymptotically depend on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend and for the model with breaks in both the level and the time trend, Carrion-i-Silvestre et al. (2005) showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the proposed statistics are derived under the null hypothesis and are shown to be normal. We show by simulations that our suggested tests have in general good finite-sample performance, except the modified test. In an empirical application to the consumer prices of 22 OECD countries during the period from 1953 to 2003, we find evidence of stationarity once a structural break and cross-sectional dependence are accommodated.
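The building block being extended, Hadri's (2000) standardized statistic for the level-only, no-break case, can be sketched as follows. The moments 1/6 and 1/45 are the exact mean and variance of the limiting statistic for this case; the code is an illustrative reimplementation (no long-run variance correction), not the authors' software.

```python
import numpy as np

def hadri_z(panel):
    """Hadri (2000) panel stationarity Z statistic, level-only case.

    panel: (N, T) array of N cross-sectional units observed over T periods.
    Under the null of stationarity around a level, Z is asymptotically
    standard normal.
    """
    n, t = panel.shape
    lm = np.empty(n)
    for i in range(n):
        e = panel[i] - panel[i].mean()          # residuals from a level
        s = np.cumsum(e)                        # partial-sum process
        lm[i] = (s @ s) / (t ** 2 * (e @ e) / t)
    return np.sqrt(n) * (lm.mean() - 1.0 / 6.0) / np.sqrt(1.0 / 45.0)
```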

4.
This note presents some properties of the stochastic unit-root processes developed in Granger and Swanson [Journal of Econometrics (1997) Vol. 80, pp. 35–62] and Leybourne, McCabe and Tremayne [Journal of Business & Economic Statistics (1996) Vol. 14, pp. 435–446] that have not been discussed in the literature, or only implicitly.

5.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming] as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084] are used to compare the alternative models. A number of simple time-series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. 12, pp. 383–398] is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time-series models are added to the picture, we find that the DSGE models still fare very well, often winning our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.
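As a reference point, the Diebold–Mariano comparison used above can be sketched under squared-error loss. The truncated long-run variance estimator below is one common, simple choice among several; the function is our sketch, not the authors' implementation.

```python
import numpy as np

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano (1995) test statistic comparing two forecast error
    series under squared-error loss; h is the forecast horizon, used as the
    truncation lag for the long-run variance of the loss differential.
    Positive values indicate model 2 is the more accurate forecaster.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    t = len(d)
    dbar = d.mean()
    dd = d - dbar
    lrv = dd @ dd / t                                # gamma_0
    for k in range(1, h):
        lrv += 2.0 * (dd[k:] @ dd[:-k]) / t          # autocovariance terms
    return dbar / np.sqrt(lrv / t)                   # asymptotically N(0, 1)
```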

6.
Panel unit-root and no-cointegration tests that rely on cross-sectional independence of the panel units experience severe size distortions when this assumption is violated, as has, for example, been shown by Banerjee, Marcellino and Osbat [Econometrics Journal (2004), Vol. 7, pp. 322–340; Empirical Economics (2005), Vol. 30, pp. 77–91] via Monte Carlo simulations. Several studies have recently addressed this issue for panel unit-root tests using a common factor structure to model the cross-sectional dependence, but not much work has been done yet for panel no-cointegration tests. This paper proposes a model for panel no-cointegration using an unobserved common factor structure, following the study by Bai and Ng [Econometrica (2004), Vol. 72, pp. 1127–1177] for panel unit roots. We distinguish two important cases: (i) the case when the non-stationarity in the data is driven by a reduced number of common stochastic trends, and (ii) the case where we have common and idiosyncratic stochastic trends present in the data. We discuss the homogeneity restrictions on the cointegrating vectors resulting from the presence of common factor cointegration. Furthermore, we study the asymptotic behaviour of some existing residual-based panel no-cointegration tests, as suggested by Kao [Journal of Econometrics (1999), Vol. 90, pp. 1–44] and Pedroni [Econometric Theory (2004a), Vol. 20, pp. 597–625]. Under the data-generating processes (DGPs) used, the test statistics are no longer asymptotically normal, and convergence occurs at rate T, in contrast to the independent-panel case. We then examine the possibilities of testing for various forms of no-cointegration by extracting the common factors and individual components from the observed data directly and then testing for no-cointegration using residual-based panel tests applied to the defactored data.
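A sketch of the defactoring step described at the end, in the spirit of Bai and Ng's (2004) decomposition: difference the data, extract r factors by principal components, and cumulate back to levels. The implementation details (demeaning, SVD-based factor extraction, zero initial level) are our simplifications.

```python
import numpy as np

def defactor(x, r):
    """Split a non-stationary panel into common and idiosyncratic components.

    x: (T, N) panel in levels; r: number of common factors.
    Returns (common, idiosyncratic), both (T, N), with common + idiosyncratic
    reproducing the cumulated (demeaned) differences of x.
    """
    dx = np.diff(x, axis=0)                      # difference to stationarity
    dx = dx - dx.mean(axis=0)
    u, s, vt = np.linalg.svd(dx, full_matrices=False)
    f = u[:, :r] * s[:r]                         # estimated factors
    lam = vt[:r]                                 # estimated loadings
    common_d = f @ lam                           # common part of differences
    idio_d = dx - common_d
    zeros = np.zeros((1, x.shape[1]))
    common = np.cumsum(np.vstack([zeros, common_d]), axis=0)
    idio = np.cumsum(np.vstack([zeros, idio_d]), axis=0)
    return common, idio
```

Residual-based no-cointegration tests would then be applied to the defactored (idiosyncratic) series.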

7.
High multiplicity scheduling problems arise naturally in contemporary production settings where manufacturers combine economies of scale with high product variety. Despite their frequent occurrence in practice, the complexity of high multiplicity problems – as opposed to classical, single multiplicity problems – is in many cases not well understood. In this paper, we discuss various concepts and results that enable a better understanding of the nature and complexity of high multiplicity scheduling problems. The paper extends the framework presented in Brauner et al. [Journal of Combinatorial Optimization (2005) Vol. 9, pp. 313–323] for single machine, non-preemptive high multiplicity scheduling problems, to more general classes of problems.
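To illustrate what the compact high-multiplicity encoding buys, here is a toy single-machine example of our own (not from the paper): total completion time under shortest-processing-time order, computed per job type without expanding the instance into individual jobs. The input size is logarithmic in the multiplicities, which is exactly why complexity questions become subtle.

```python
def total_completion_time(jobs):
    """Total completion time of an SPT schedule on one machine, computed
    directly from the compact high-multiplicity encoding: a list of
    (processing_time, multiplicity) pairs, one per job type. Runs in
    O(k log k) for k types, independent of the multiplicities.
    """
    jobs = sorted(jobs)                           # SPT order by processing time
    total, elapsed = 0, 0
    for p, m in jobs:
        # the m copies complete at elapsed+p, elapsed+2p, ..., elapsed+m*p
        total += m * elapsed + p * m * (m + 1) // 2
        elapsed += m * p
    return total
```

For instance, two copies of a length-1 job and three copies of a length-2 job complete at times 1, 2, 4, 6, 8, summing to 21, which the closed form reproduces without building the five-job schedule.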

8.
Feenstra and Hanson [NBER Working Paper No. 6052 (1997)] propose a procedure to correct the standard errors in a two-stage regression with generated dependent variables. Their method has subsequently been used in two-stage mandated wage models [Feenstra and Hanson, Quarterly Journal of Economics (1999) Vol. 114, pp. 907–940; Haskel and Slaughter, The Economic Journal (2001) Vol. 111, pp. 163–187; Review of International Economics (2003) Vol. 11, pp. 630–650] and for the estimation of the sector bias of skill-biased technological change [Haskel and Slaughter, European Economic Review (2002) Vol. 46, pp. 1757–1783]. Unfortunately, the proposed correction is negatively biased (sometimes even resulting in negative estimated variances) and therefore leads to overestimation of the inferred significance. We present an unbiased correction procedure and apply it to the models reported by Feenstra and Hanson (1999) and Haskel and Slaughter (2002).

9.
Second-order analysis of inhomogeneous spatio-temporal point process data
Second-order methods provide a natural starting point for the analysis of spatial point process data. In this note we extend to the spatio-temporal setting a method proposed by Baddeley et al. [Statistica Neerlandica (2000) Vol. 54, pp. 329–350] for inhomogeneous spatial point process data, and apply the resulting estimator to data on the spatio-temporal distribution of human Campylobacter infections in an area of north-west England.
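A bare-bones version of the second-order estimator being extended might look as follows: count pairs closer than (r, t) in space and time, weighting each pair by the reciprocal product of the estimated first-order intensities. Edge correction, which a real analysis needs, is deliberately omitted, and the interface is our invention.

```python
import numpy as np

def st_inhom_k(xy, times, intensity, r, t):
    """Naive estimator of a space-time inhomogeneous K-function, in the
    spirit of the inhomogeneous K of Baddeley et al. (2000) extended to
    space-time (no edge correction, no normalization by the window size).

    xy: (n, 2) point locations; times: (n,) event times; intensity: (n,)
    estimated first-order intensity at each point; r, t: spatial and
    temporal lags.
    """
    n = len(times)
    acc = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            du = np.hypot(*(xy[i] - xy[j]))      # spatial distance
            dv = abs(times[i] - times[j])        # temporal distance
            if du <= r and dv <= t:
                acc += 1.0 / (intensity[i] * intensity[j])
    return acc
```

Because the indicator only accumulates, the estimator is non-decreasing in both lags, a useful sanity check on any implementation.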

10.
In this article, we investigate the behaviour of the stationarity tests proposed by Müller [Journal of Econometrics (2005) Vol. 128, pp. 195–213] and Harris et al. [Econometric Theory (2007) Vol. 23, pp. 355–363] under uncertainty over the trend and/or initial condition. As different tests are efficient for different magnitudes of the local trend and initial condition, following Harvey et al. [Journal of Econometrics (2012) Vol. 169, pp. 188–195], we propose a decision rule based on the rejection of the null hypothesis by multiple tests. Additionally, we propose a modification of this decision rule that relies on additional information about the magnitudes of the local trend and/or the initial condition obtained through pre-testing. The resulting modification has satisfactory size properties under both types of uncertainty.
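The union-of-rejections idea behind such decision rules can be sketched generically: reject the null if any of the individual tests rejects, with each critical value inflated by a common scaling constant so that the combined procedure keeps the desired overall size. In practice the constant is found by simulation; the interface below is our illustration, not the paper's procedure.

```python
def union_of_rejections(stats, crit_vals, scale=1.0):
    """Union-of-rejections decision rule for left-tailed tests: reject the
    null if ANY statistic falls below its (scaled) critical value.

    stats, crit_vals: sequences of test statistics and critical values
    (critical values negative for left-tailed tests); scale > 1 makes each
    individual rejection harder, offsetting the multiple-testing size
    distortion. Returns True if the null is rejected.
    """
    return any(s < scale * c for s, c in zip(stats, crit_vals))
```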

11.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 158–179]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
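A toy version of a ratio-type persistence-change statistic, in the spirit of Kim (2000), may help fix ideas: it uses naive sub-sample KPSS-type statistics rather than the maximal recursive-estimates or rescaled-range fluctuation statistics the paper actually proposes, and the trimming and variance choices are simplifications of ours.

```python
import numpy as np

def kpss_stat(e):
    """Sub-sample KPSS-type statistic: normalized sum of squared partial
    sums of demeaned data (naive variance, no long-run correction)."""
    e = e - e.mean()
    s = np.cumsum(e)
    t = len(e)
    return (s @ s) / (t ** 2 * (e @ e) / t)

def persistence_ratio(y, trim=0.2):
    """Maximum over candidate break fractions of the ratio of post-break to
    pre-break KPSS-type statistics. Large values point to a change in
    persistence from I(0) to I(1)."""
    t = len(y)
    best = 0.0
    for tb in range(int(trim * t), int((1 - trim) * t) + 1):
        best = max(best, kpss_stat(y[tb:]) / kpss_stat(y[:tb]))
    return best
```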

12.
Over the past decades, several analytic tools have become available for the analysis of reciprocal relations in a non-experimental context using structural equation modeling (SEM). The autoregressive latent trajectory (ALT) model is a recently proposed model [Bollen and Curran, Sociological Methods and Research (2004) Vol. 32, pp. 336–383; Curran and Bollen, New Methods for the Analysis of Change (2001) American Psychological Association, Washington, DC], which captures features of both the autoregressive (AR) cross-lagged model and the latent trajectory (LT) model. The present article discusses strengths and weaknesses and demonstrates how several of the problems can be solved by a continuous-time version: the continuous-time autoregressive latent trajectory (CALT) model. Using SEM to estimate the exact discrete model (EDM), the EDM/SEM continuous-time procedure is applied to a CALT model of reciprocal relations between antisocial behavior and depressive symptoms.

13.
Kim, Belaire-Franch and Amador [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66] present different percentiles for the same mean score test statistic. We find that the difference by a factor of 0.6 is due to systematically different sample analogues. Furthermore, we clarify which sample versions of the mean-exponential test statistic should be correctly used with which set of critical values. At the same time, we correct some of the limiting distributions found in the literature.

14.
The controversy over the selection of ‘growth regressions’ was precipitated by some remarkably numerous ‘estimation’ strategies, including two million regressions by Sala-i-Martin [American Economic Review (1997b) Vol. 87, pp. 178–183]. Only one regression is really needed, namely the general unrestricted model, appropriately reduced to a parsimonious encompassing, congruent representation. We corroborate the findings of Hoover and Perez [Oxford Bulletin of Economics and Statistics (2004) Vol. 66], who also adopt an automatic general-to-simple approach, despite the complications of data imputation. Such an outcome was also achieved in just one run of PcGets, within a few minutes of receiving the data set in Fernández, Ley and Steel [Journal of Applied Econometrics (2001) Vol. 16, pp. 563–576] from Professor Ley.
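A caricature of automatic general-to-simple reduction, far simpler than PcGets (which also runs diagnostic and encompassing checks at every step), is backward elimination on t-statistics from the general unrestricted model. Everything below (names, threshold, data) is our illustration only.

```python
import numpy as np

def gets_ols(y, x, names, t_crit=3.0):
    """Toy general-to-simple search: start from the general model with all
    regressors, repeatedly drop the regressor with the smallest absolute
    t-statistic until every remaining |t| exceeds t_crit.

    y: (n,) response; x: (n, k) regressor matrix; names: k regressor labels.
    Returns the labels of the retained regressors.
    """
    keep = list(range(x.shape[1]))
    while len(keep) > 1:
        xs = x[:, keep]
        beta, *_ = np.linalg.lstsq(xs, y, rcond=None)
        resid = y - xs @ beta
        s2 = resid @ resid / (len(y) - len(keep))
        se = np.sqrt(s2 * np.diag(np.linalg.inv(xs.T @ xs)))
        tstats = np.abs(beta / se)
        worst = int(np.argmin(tstats))
        if tstats[worst] >= t_crit:
            break                      # all survivors are significant
        keep.pop(worst)
    return [names[i] for i in keep]
```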

15.
Labour Economics (2006), Vol. 13, No. 5, pp. 571–587
This study uses a sample of young Australian twins to examine whether the findings reported by Ashenfelter and Krueger [‘Estimates of the Economic Return to Schooling from a New Sample of Twins’, American Economic Review (1994) Vol. 84, No. 5, pp. 1157–1173] and Miller, Mulvey and Martin [‘What Do Twins Studies Tell Us About the Economic Returns to Education? A Comparison of Australian and US Findings’, Western Australian Labour Market Research Centre Discussion Paper 94/4 (1994)] are robust to the choice of sample and dependent variable. The economic return to schooling in Australia is between 5 and 7 percent when account is taken of genetic and family effects using either fixed-effects models or the selection effects model of Ashenfelter and Krueger. Given the similarity of the findings in this and in related studies, it would appear that the models applied by Ashenfelter and Krueger (1994) are robust. Moreover, viewing the OLS and IV estimators as lower and upper bounds in the manner of Black, Berger and Scott [‘Bounding Parameter Estimates with Nonclassical Measurement Error’, Journal of the American Statistical Association (2000) Vol. 95, No. 451, pp. 739–748], it is shown that the bounds on the return to schooling in Australia are much tighter than in Ashenfelter and Krueger (1994), and the return is bounded at a much lower level than in the US.
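The within-twin-pair (fixed-effects) regression at the heart of such studies reduces to OLS on pair differences, which sweeps out shared genetic and family effects. The sketch below is generic, with hypothetical data shapes, not the authors' estimation code.

```python
import numpy as np

def within_twin_return(log_wages, schooling):
    """Fixed-effects estimate of the return to schooling from twin data:
    regress the within-pair difference in log wages on the within-pair
    difference in years of schooling (no-intercept OLS on differences).

    log_wages, schooling: (n_pairs, 2) arrays, one column per twin.
    """
    dy = log_wages[:, 0] - log_wages[:, 1]
    ds = schooling[:, 0] - schooling[:, 1]
    return (ds @ dy) / (ds @ ds)
```

Because any pair-level effect is identical for both twins, it cancels in the differences; only pairs whose schooling differs contribute to the estimate.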

16.
In this paper, we introduce several test statistics testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure and the trend. We derive analytical limiting distributions for all the tests. The power performance of the tests is compared with that of the unit-root tests by Phillips and Perron [Biometrika (1988), Vol. 75, pp. 335–346], and Leybourne, Newbold and Vougas [Journal of Time Series Analysis (1998), Vol. 19, pp. 83–97]. In the presence of a gradual change in the deterministics and in the dynamics, our tests are superior in terms of power.

17.
Bayesian and empirical Bayesian estimation methods are reviewed and proposed for the row and column parameters in two-way contingency tables without interaction. Rasch's multiplicative Poisson model for misreadings is discussed in an example. We treat the case where assumptions of exchangeability are reasonable a priori for the unknown parameters. Two different types of prior distributions are compared. It appears that gamma priors yield more tractable results than lognormal priors.
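The tractability of gamma priors comes from conjugacy with the Poisson likelihood: with a Gamma(a, b) prior (shape–rate parametrization, our choice of convention), the posterior after observing Poisson counts is again gamma, in closed form. A minimal sketch:

```python
def gamma_poisson_posterior(a, b, counts):
    """Conjugate update for a Poisson rate under a Gamma(a, b) prior
    (shape a, rate b): posterior is Gamma(a + sum(counts), b + n).

    Returns (posterior shape, posterior rate, posterior mean). No such
    closed form exists for a lognormal prior, which is the tractability
    advantage noted above.
    """
    shape = a + sum(counts)
    rate = b + len(counts)
    return shape, rate, shape / rate
```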

18.
The difference and system generalized method of moments (GMM) estimators are growing in popularity. As implemented in popular software, the estimators easily generate instruments that are numerous and, in system GMM, potentially suspect. A large instrument collection overfits endogenous variables even as it weakens the Hansen test of the instruments’ joint validity. This paper reviews the evidence on the effects of instrument proliferation, and describes and simulates simple ways to control it. It illustrates the dangers by replicating Forbes [American Economic Review (2000) Vol. 90, pp. 869–887] on income inequality and Levine et al. [Journal of Monetary Economics (2000) Vol. 46, pp. 31–77] on financial sector development. Results in both papers appear driven by previously undetected endogeneity.
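The instrument-collapsing remedy discussed in the paper can be illustrated for a single panel unit. The construction below is a simplified Arellano–Bond-style instrument matrix for the first-differenced equations (no covariates, our indexing conventions): the full matrix has one column per (period, lag) pair and grows quadratically in T, while the collapsed matrix has one column per lag.

```python
import numpy as np

def difference_gmm_instruments(y, collapse=False):
    """Instrument matrix for the first-differenced equations of one panel
    unit, using lagged levels of y as instruments.

    y: (T,) series. Rows are the T-2 differenced equations; for the
    equation at period t, valid instruments are the levels dated t-2 and
    earlier. collapse=True stacks each lag into a single column, the
    proliferation-control device simulated in the paper.
    """
    t = len(y)
    rows = t - 2
    if collapse:
        z = np.zeros((rows, t - 2))
        for r in range(rows):
            for lag in range(r + 1):
                z[r, lag] = y[r - lag]       # one column per lag depth
        return z
    cols = []
    for r in range(rows):
        for lag in range(r + 1):             # one column per (period, lag)
            col = np.zeros(rows)
            col[r] = y[r - lag]
            cols.append(col)
    return np.column_stack(cols)
```

With T = 6 the full matrix already has 10 columns against 4 when collapsed; the gap widens quadratically as T grows.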

19.
In empirical studies, the probit and logit models are often used without checks for their competing distributional specifications. It is also rare for econometric tests to be focused on this issue. Santos Silva [Journal of Applied Econometrics (2001), Vol. 16, pp. 577–597] is an important recent exception. By using the conditional moment test principle, we discuss a wide class of non-nested tests that can easily be applied to detect the competing distributions for the binary response models. This class of tests includes the test of Santos Silva (2001) for the same task as a particular example and provides other useful alternatives. We also compare the performance of these tests by a Monte Carlo simulation.

20.
This paper examines the importance of accounting for measurement error in total expenditure in the estimation of Engel curves, based on the 1994 Ethiopian Urban Household Survey. Using Lewbel's [Review of Economics and Statistics (1996), Vol. 78, pp. 718–725] estimator for demand models with correlated measurement errors in the dependent and independent variables, we find robust evidence of a quadratic relationship between food share and total expenditure in the capital city, and significant biases in various estimators that do not correct for correlated measurement errors.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号