Similar Documents
20 similar documents found.
1.
In this paper several cumulative sum (CUSUM) charts for the mean of a multivariate time series are introduced. We extend the control schemes for independent multivariate observations of Crosier [Technometrics (1988) Vol. 30, pp. 187–194], Pignatiello and Runger [Journal of Quality Technology (1990) Vol. 22, pp. 173–186], and Ngai and Zhang [Statistica Sinica (2001) Vol. 11, pp. 747–766] to multivariate time series by taking into account the probability structure of the underlying stochastic process. We consider modified charts and residual schemes as well. We analyze under which conditions these charts are directionally invariant. In an extensive Monte Carlo study these charts are compared with the CUSUM scheme of Theodossiou [Journal of the American Statistical Association (1993) Vol. 88, pp. 441–448], the multivariate exponentially weighted moving-average (EWMA) chart of Kramer and Schmid [Sequential Analysis (1997) Vol. 16, pp. 131–154], and the control procedures of Bodnar and Schmid [Frontiers of Statistical Process Control (2006) Physica, Heidelberg]. As a measure of performance, the maximum expected delay is used.
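As an illustration of the type of recursion involved, the following is a minimal sketch of a Crosier-style multivariate CUSUM for independent observations; the paper's charts modify recursions of this kind to account for the time-series structure. The function name and the default values of the reference value k and control limit h are illustrative choices, not taken from the paper.

```python
import numpy as np

def mcusum_signal(X, mu, Sigma, k=0.5, h=5.0):
    """Crosier-type multivariate CUSUM for independent observations (sketch).

    X     : (T, p) array of observations
    mu    : (p,) in-control mean vector
    Sigma : (p, p) in-control covariance matrix
    k, h  : reference value and control limit (user-chosen design constants)
    Returns the index of the first out-of-control signal, or None.
    """
    Sinv = np.linalg.inv(Sigma)
    S = np.zeros(len(mu))
    for t, x in enumerate(X):
        d = S + (x - mu)                   # tentative updated CUSUM vector
        C = np.sqrt(d @ Sinv @ d)          # its statistical distance from zero
        S = np.zeros_like(S) if C <= k else d * (1.0 - k / C)
        if np.sqrt(S @ Sinv @ S) > h:      # chart statistic exceeds control limit
            return t
    return None
```

Running the function on simulated in-control multivariate normal data gives a feel for the in-control run length implied by a given (k, h) pair.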

2.
For square contingency tables with ordered categories, Caussinus [Annales de la Faculté des Sciences de l'Université de Toulouse (1965) Vol. 29, pp. 77–182] and Agresti [Statistics and Probability Letters (1983) Vol. 1, pp. 313–316] considered the quasi-symmetry and the linear diagonal-parameter symmetry models, respectively, which have multiplicative forms for cell probabilities. This paper proposes two kinds of models that have similar multiplicative forms for the cumulative probabilities that an observation falls in row (column) category i or below and column (row) category j (> i) or above. The endometrial cancer data are analyzed using these models.
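For reference, the two multiplicative forms mentioned above can be written as follows; this is the standard cell-probability notation for these models, given only as an illustration, not the paper's proposed cumulative-probability versions.

```latex
% Quasi-symmetry (Caussinus, 1965), for cell probabilities p_{ij}:
p_{ij} = \alpha_i \beta_j \psi_{ij}, \qquad \psi_{ij} = \psi_{ji}.
% Linear diagonal-parameter symmetry (Agresti, 1983):
p_{ij} = \delta^{\,j-i}\, p_{ji} \qquad (i < j).
```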

3.
High multiplicity scheduling problems arise naturally in contemporary production settings where manufacturers combine economies of scale with high product variety. Despite their frequent occurrence in practice, the complexity of high multiplicity problems – as opposed to classical, single multiplicity problems – is in many cases not well understood. In this paper, we discuss various concepts and results that enable a better understanding of the nature and complexity of high multiplicity scheduling problems. The paper extends the framework presented in Brauner et al. [Journal of Combinatorial Optimization (2005) Vol. 9, pp. 313–323] for single machine, non-preemptive high multiplicity scheduling problems to more general classes of problems.
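The complexity subtlety comes from the compact input encoding; the following back-of-the-envelope comparison is standard in this literature and is given only to fix ideas, not as a result of the paper.

```latex
% A high-multiplicity instance lists k job types with multiplicities n_1,\dots,n_k,
% so its encoding length is of order
\sum_{j=1}^{k} \log n_j ,
% whereas writing out the equivalent single-multiplicity instance job by job needs
\sum_{j=1}^{k} n_j
% symbols, which can be exponentially larger. An algorithm that runs in time polynomial
% in the second quantity need not be polynomial in the first.
```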

4.
Second-order analysis of inhomogeneous spatio-temporal point process data (cited by 3: 0 self-citations, 3 by others)
Second-order methods provide a natural starting point for the analysis of spatial point process data. In this note we extend to the spatio-temporal setting a method proposed by Baddeley et al. [Statistica Neerlandica (2000) Vol. 54, pp. 329–350] for inhomogeneous spatial point process data, and apply the resulting estimator to data on the spatio-temporal distribution of human Campylobacter infections in an area of north-west England.  
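For orientation, the spatial estimator of Baddeley et al. that is being extended has the following standard form; the edge-correction weights w_{ij} and the intensity estimates are implementation choices, and the paper's contribution is the spatio-temporal analogue rather than this formula.

```latex
% Inhomogeneous K-function estimator over an observation window A:
\widehat{K}_{\mathrm{inhom}}(r) \;=\; \frac{1}{|A|}\sum_{i}\sum_{j \neq i}
\frac{\mathbf{1}\{\,\|x_i - x_j\| \le r\,\}}{\hat{\lambda}(x_i)\,\hat{\lambda}(x_j)}\, w_{ij},
% where \hat{\lambda}(\cdot) estimates the spatially varying intensity.
```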

5.
Feenstra and Hanson [NBER Working Paper No. 6052 (1997)] propose a procedure to correct the standard errors in a two-stage regression with generated dependent variables. Their method has subsequently been used in two-stage mandated wage models [Feenstra and Hanson, Quarterly Journal of Economics (1999) Vol. 114, pp. 907–940; Haskel and Slaughter, The Economic Journal (2001) Vol. 111, pp. 163–187; Review of International Economics (2003) Vol. 11, pp. 630–650] and for the estimation of the sector bias of skill-biased technological change [Haskel and Slaughter, European Economic Review (2002) Vol. 46, pp. 1757–1783]. Unfortunately, the proposed correction is negatively biased (sometimes even resulting in negative estimated variances) and therefore leads to overestimation of the inferred significance. We present an unbiased correction procedure and apply it to the models reported by Feenstra and Hanson (1999) and Haskel and Slaughter (2002).

6.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
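To fix ideas, the sub-sample statistics and the ratio form used by the existing tests can be sketched as follows; this is standard notation given only for illustration, and the new tests replace the KPSS-type ingredient with recursive-estimates and rescaled-range fluctuation statistics.

```latex
% Sub-sample statistics built from cumulated residuals \hat{e}_s of the fitted
% deterministic component (constant, or constant and trend) on each sub-sample:
\mathcal{K}_{0,\tau} = \frac{1}{\lfloor \tau T \rfloor^{2}}
\sum_{t=1}^{\lfloor \tau T \rfloor}\Bigl(\sum_{s=1}^{t}\hat{e}_s\Bigr)^{2},
\qquad
\mathcal{K}_{\tau,1} = \frac{1}{(T-\lfloor \tau T \rfloor)^{2}}
\sum_{t=\lfloor \tau T \rfloor+1}^{T}\Bigl(\sum_{s=\lfloor \tau T \rfloor+1}^{t}\hat{e}_s\Bigr)^{2};
% a persistence-change test then applies a functional (maximum, mean or mean-exponential
% over the candidate break fractions \tau) to the ratio
R(\tau) = \mathcal{K}_{\tau,1} \,/\, \mathcal{K}_{0,\tau}.
```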

7.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming] as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084] are used to compare the alternative models. A number of simple time-series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing the theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. XII, pp. 383–398] is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time-series models are added to the picture, we find that the DSGE models still fare very well, often coming out ahead in our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.
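As a reminder of the kind of comparison involved, the Diebold–Mariano statistic cited above takes the following standard form, shown here only for illustration; the density-based tests of Corradi and Swanson are more involved.

```latex
% Loss differential between two competing forecasts at time t, over P evaluation periods:
d_t = L(e_{1t}) - L(e_{2t}), \qquad
\bar{d} = \frac{1}{P}\sum_{t=1}^{P} d_t,
% Diebold–Mariano statistic, asymptotically standard normal under equal predictive
% accuracy, with \widehat{\mathrm{Var}}(\bar{d}) a long-run variance estimator:
\mathrm{DM} = \frac{\bar{d}}{\sqrt{\widehat{\mathrm{Var}}(\bar{d})}} .
```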

8.
In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic; these do not depend asymptotically on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend and for the model with breaks in the level and in the time trend, Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175] showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the statistics proposed are derived under the null hypothesis and are shown to be normally distributed. Simulations show that our suggested tests generally perform well in finite samples, with the exception of the modified test. In an empirical application to the consumer prices of 22 OECD countries over the period 1953 to 2003, we find evidence of stationarity once a structural break and cross-sectional dependence are accommodated.
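For context, the Hadri-type panel statistic being extended has the following familiar structure, in standard notation and shown only for orientation; the paper's contribution lies in the break-adjusted versions and their exact moments.

```latex
% Individual KPSS-type statistic with partial sums S_{it} = \sum_{s=1}^{t}\hat{e}_{is}
% of the residuals from the deterministic component for unit i:
\mathrm{LM}_i = \frac{1}{T^{2}\hat{\sigma}_i^{2}}\sum_{t=1}^{T} S_{it}^{2},
\qquad
\mathrm{LM} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{LM}_i,
% standardized statistic, asymptotically N(0,1) under the stationarity null, where
% \xi and \zeta^2 are the mean and variance of the limiting distribution of LM_i:
Z = \frac{\sqrt{N}\,(\mathrm{LM} - \xi)}{\zeta} \;\xrightarrow{d}\; N(0,1).
```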

9.
In this article, we investigate the behaviour of the stationarity tests proposed by Müller [Journal of Econometrics (2005) Vol. 128, pp. 195–213] and Harris et al. [Econometric Theory (2007) Vol. 23, pp. 355–363] under uncertainty over the trend and/or the initial condition. As different tests are efficient for different magnitudes of the local trend and initial condition, following Harvey et al. [Journal of Econometrics (2012) Vol. 169, pp. 188–195], we propose a decision rule based on the rejection of the null hypothesis by any of multiple tests. Additionally, we propose a modification of this decision rule that relies on additional information about the magnitudes of the local trend and/or the initial condition obtained through pre-testing. The resulting modification has satisfactory size properties under both types of uncertainty.
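A union-of-rejections decision rule of the kind referred to above can be sketched generically as follows; this is an illustrative reading of the Harvey et al. approach, not the exact rule proposed in the article, and the scaling constant ψ is calibrated so that the combined procedure has the desired asymptotic size.

```latex
% Given two stationarity tests S_1 and S_2 with individual critical values cv_1 and cv_2,
% the combined procedure rejects the null if either test rejects at scaled critical values:
\text{Reject } H_0 \quad \text{if} \quad
S_1 > \psi\, cv_1 \;\;\text{or}\;\; S_2 > \psi\, cv_2 ,
% where \psi \ge 1 is chosen so that the overall asymptotic size equals the nominal level.
```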

10.
Panel unit-root and no-cointegration tests that rely on cross-sectional independence of the panel units experience severe size distortions when this assumption is violated, as has, for example, been shown by Banerjee, Marcellino and Osbat [Econometrics Journal (2004), Vol. 7, pp. 322–340; Empirical Economics (2005), Vol. 30, pp. 77–91] via Monte Carlo simulations. Several studies have recently addressed this issue for panel unit-root tests using a common factor structure to model the cross-sectional dependence, but not much work has been done yet for panel no-cointegration tests. This paper proposes a model for panel no-cointegration using an unobserved common factor structure, following the study by Bai and Ng [Econometrica (2004), Vol. 72, pp. 1127–1177] for panel unit roots. We distinguish two important cases: (i) the case where the non-stationarity in the data is driven by a reduced number of common stochastic trends, and (ii) the case where common and idiosyncratic stochastic trends are both present in the data. We discuss the homogeneity restrictions on the cointegrating vectors resulting from the presence of common factor cointegration. Furthermore, we study the asymptotic behaviour of some existing residual-based panel no-cointegration tests, as suggested by Kao [Journal of Econometrics (1999), Vol. 90, pp. 1–44] and Pedroni [Econometric Theory (2004a), Vol. 20, pp. 597–625]. Under the data-generating processes (DGPs) used, the test statistics are no longer asymptotically normal, and convergence occurs at rate T rather than at the rate obtained for independent panels. We then examine the possibilities of testing for various forms of no-cointegration by extracting the common factors and individual components from the observed data directly and then testing for no-cointegration using residual-based panel tests applied to the defactored data.
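The common factor structure referred to above is, in the notation of Bai and Ng-type analyses, of the following generic form; this is a sketch of the decomposition the paper builds on, not its specific no-cointegration model.

```latex
% Observed series for unit i: deterministic terms, r common factors, idiosyncratic part
X_{it} = D_{it} + \lambda_i' F_t + e_{it},
% with non-stationarity allowed in the common factors F_t and/or the idiosyncratic
% components e_{it}; cases (i) and (ii) above correspond to whether the stochastic
% trends enter only through F_t or through both F_t and e_{it}.
```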

11.
This note presents some properties of the stochastic unit-root processes developed in Granger and Swanson [Journal of Econometrics (1997) Vol. 80, pp. 35–62] and Leybourne, McCabe and Tremayne [Journal of Business & Economic Statistics (1996) Vol. 14, pp. 435–446] that have not been discussed in the literature, or have been discussed only implicitly.
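A stochastic unit-root process of the kind studied in those papers can be written generically as follows; this is an illustrative formulation rather than the exact parameterisation of either paper.

```latex
% Autoregressive process whose root fluctuates randomly around one:
y_t = \rho_t\, y_{t-1} + \varepsilon_t, \qquad \rho_t = 1 + \delta_t,
% where \delta_t is a mean-zero random sequence, so the process behaves like a
% unit-root process "on average" while the root itself is stochastic.
```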

12.
The difference and system generalized method of moments (GMM) estimators are growing in popularity. As implemented in popular software, the estimators easily generate instruments that are numerous and, in system GMM, potentially suspect. A large instrument collection overfits endogenous variables even as it weakens the Hansen test of the instruments' joint validity. This paper reviews the evidence on the effects of instrument proliferation, and describes and simulates simple ways to control it. It illustrates the dangers by replicating Forbes [American Economic Review (2000) Vol. 90, pp. 869–887] on income inequality and Levine et al. [Journal of Monetary Economics (2000) Vol. 46, pp. 31–77] on financial sector development. Results in both papers appear driven by previously undetected endogeneity.
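To illustrate why instrument counts explode with the time dimension and what "collapsing" the instrument matrix does, here is a minimal sketch for a single panel unit in difference GMM; the function name and layout are illustrative and not the actual implementation in any GMM package.

```python
import numpy as np

def diff_gmm_instruments(y, collapse=False):
    """Instrument matrix for one panel unit in difference GMM (illustrative sketch).

    y : length-T array holding y_1, ..., y_T for one individual.
    Rows correspond to the first-differenced equations for periods t = 3, ..., T.
    Standard form: one column per (period, lag) pair -> O(T^2) columns.
    Collapsed form: one column per lag distance      -> O(T) columns.
    """
    T = len(y)
    periods = range(2, T)                    # 0-based indices of periods 3, ..., T
    if collapse:
        Z = np.zeros((T - 2, T - 2))
        for r, t in enumerate(periods):
            for lag in range(2, t + 1):      # instruments y_{t-2}, ..., y_1
                Z[r, lag - 2] = y[t - lag]
    else:
        cols = [(t, lag) for t in periods for lag in range(2, t + 1)]
        Z = np.zeros((T - 2, len(cols)))
        for r, t in enumerate(periods):
            for c, (tc, lag) in enumerate(cols):
                if tc == t:
                    Z[r, c] = y[t - lag]
    return Z

# With T = 10 the standard layout has 36 instrument columns against 8 when collapsed.
y = np.arange(1, 11, dtype=float)
print(diff_gmm_instruments(y).shape, diff_gmm_instruments(y, collapse=True).shape)
```

The quadratic growth of the standard layout in T is what drives the overfitting of endogenous variables and the weakened Hansen test discussed above.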

13.
Kim, Belaire-Franch and Amador [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66] present different percentiles for the same mean score test statistic. We find that the difference, a factor of 0.6, is due to systematically different sample analogues. Furthermore, we clarify which sample versions of the mean-exponential test statistic should be used with which set of critical values. At the same time, we correct some of the limiting distributions found in the literature.

14.
In empirical studies, the probit and logit models are often used without any check of their competing distributional specifications. It is also rare for econometric tests to focus on this issue. Santos Silva [Journal of Applied Econometrics (2001), Vol. 16, pp. 577–597] is an important recent exception. Using the conditional moment test principle, we discuss a wide class of non-nested tests that can easily be applied to detect the competing distributions for binary response models. This class of tests includes the test of Santos Silva (2001) for the same task as a particular example and provides other useful alternatives. We also compare the performance of these tests by Monte Carlo simulation.

15.
Carlos N. Bouza, Metrika (2009) Vol. 70(3), pp. 267–277
This paper is devoted to the estimation of the mean of a sensitive variable. The use of a randomized response (RR) procedure gives the interviewee confidence that their privacy is protected. We consider a simple random sampling with replacement design for selecting the sample. The behavior of the RR procedure when ranked set sampling is used as the design is developed under three different ranking criteria. The usual gain in accuracy associated with the use of ranked set sampling is exhibited by only one of the designs. The behavior of the models is illustrated using data from a study of samples of persons infected with AIDS.
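For readers unfamiliar with ranked set sampling, the basic (non-randomized-response) mean estimator has the following standard form, shown only for orientation; the paper combines estimators of this type with the RR mechanism.

```latex
% With set size m and r cycles, X_{(i)j} denotes the unit judged to have rank i
% in the i-th set of cycle j; the ranked set sampling estimator of the mean is
\hat{\mu}_{\mathrm{RSS}} = \frac{1}{mr}\sum_{i=1}^{m}\sum_{j=1}^{r} X_{(i)j},
% which is unbiased for \mu and has variance
\mathrm{Var}(\hat{\mu}_{\mathrm{RSS}}) = \frac{\sigma^{2}}{mr}
 - \frac{1}{m^{2}r}\sum_{i=1}^{m}\bigl(\mu_{(i)} - \mu\bigr)^{2},
% so it is never less efficient than the simple random sampling mean based on mr units.
```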

16.
Labour Economics (2006) Vol. 13(5), pp. 571–587
This study uses a sample of young Australian twins to examine whether the findings reported in [Ashenfelter, Orley and Krueger, Alan, (1994). ‘Estimates of the Economic Return to Schooling from a New Sample of Twins’, American Economic Review, Vol. 84, No. 5, pp. 1157–73] and [Miller, P.W., Mulvey, C. and Martin, N., (1994). ‘What Do Twins Studies Tell Us About the Economic Returns to Education?: A Comparison of Australian and US Findings’, Western Australian Labour Market Research Centre Discussion Paper 94/4] are robust to the choice of sample and dependent variable. The economic return to schooling in Australia is between 5 and 7 percent when account is taken of genetic and family effects using either fixed-effects models or the selection effects model of Ashenfelter and Krueger. Given the similarity of the findings in this and in related studies, it would appear that the models applied by Ashenfelter and Krueger (1994) are robust. Moreover, viewing the OLS and IV estimators as lower and upper bounds in the manner of [Black, Dan A., Berger, Mark C., and Scott, Frank C., (2000). ‘Bounding Parameter Estimates with Nonclassical Measurement Error’, Journal of the American Statistical Association, Vol. 95, No. 451, pp. 739–748], it is shown that the bounds on the return to schooling in Australia are much tighter than in Ashenfelter and Krueger (1994), and the return is bounded at a much lower level than in the US.

17.
This paper considers the problem of unbiasedly estimating the population proportion when the study variable is potentially sensitive in nature. In order to protect the respondent’s privacy, various techniques for generating a randomized response rather than a direct response are available in the literature. But the theory concerning them is developed only under the hypothesis of completely truthful reporting. In practice, however, untruthful reporting is a real prospect when dealing with highly sensitive matters such as abortion or socially deviant behaviors. Using Warner’s [Journal of the American Statistical Association (1965) Vol. 60, pp. 63–69] randomized response technique as an illustration, we show how unbiased estimation of the population proportion can be extended to cover the case where some respondents may lie.
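Under fully truthful reporting, Warner's original estimator takes the following well-known form, stated here for orientation; the paper's contribution is the extension to partially untruthful reporting.

```latex
% Each respondent answers the sensitive question with probability p and its complement
% with probability 1-p, so the probability of a "yes" answer is
\lambda = p\,\pi + (1-p)(1-\pi),
% and with \hat{\lambda} the observed proportion of "yes" answers among n respondents,
\hat{\pi}_W = \frac{\hat{\lambda} - (1-p)}{2p - 1}, \qquad p \neq \tfrac{1}{2},
% which is unbiased with variance
\mathrm{Var}(\hat{\pi}_W) = \frac{\pi(1-\pi)}{n} + \frac{p(1-p)}{n\,(2p-1)^{2}} .
```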

18.
Over the past decades, several analytic tools have become available for the analysis of reciprocal relations in a non-experimental context using structural equation modeling (SEM). The autoregressive latent trajectory (ALT) model is a recently proposed model [Bollen and Curran, Sociological Methods and Research (2004) Vol. 32, pp. 336–383; Curran and Bollen, New Methods for the Analysis of Change (2001) American Psychological Association, Washington, DC], which captures features of both the autoregressive (AR) cross-lagged model and the latent trajectory (LT) model. The present article discusses strengths and weaknesses and demonstrates how several of the problems can be solved by a continuous-time version: the continuous-time autoregressive latent trajectory (CALT) model. Using SEM to estimate the exact discrete model (EDM), the EDM/SEM continuous-time procedure is applied to a CALT model of reciprocal relations between antisocial behavior and depressive symptoms.
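The exact discrete model underlying the EDM/SEM procedure follows from the solution of a linear stochastic differential equation; a generic version without the latent trajectory terms is sketched below for illustration only.

```latex
% Continuous-time model for the state vector x(t):
dx(t) = \bigl(A\,x(t) + b\bigr)\,dt + G\,dW(t),
% whose exact discrete-time implication for observation interval \Delta_i is
x(t_{i+1}) = e^{A\Delta_i}\,x(t_i) + A^{-1}\bigl(e^{A\Delta_i} - I\bigr) b + w_i,
\qquad
\mathrm{Cov}(w_i) = \int_{0}^{\Delta_i} e^{As}\, G G' \, e^{A's}\, ds,
% so the discrete-time autoregressive matrices are nonlinear functions of the
% continuous-time drift matrix A and the interval length \Delta_i.
```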

19.
Helpman, Melitz and Rubinstein [Quarterly Journal of Economics (2008) Vol. 123, pp. 441–487] (HMR) present a rich theoretical model to study the determinants of bilateral trade flows across countries. The model is then empirically implemented through a two-stage estimation procedure. We argue that this estimation procedure is only valid under the strong distributional assumptions maintained in the article. Statistical tests using the HMR sample, however, clearly reject such assumptions. Moreover, we perform numerical experiments which show that the HMR two-stage estimator is very sensitive to departures from the assumption of homoskedasticity. These findings cast doubt on any inference drawn from the empirical implementation of the HMR model.

20.
Dickey and Fuller [Econometrica (1981) Vol. 49, pp. 1057–1072] suggested unit-root tests for an autoregressive model with a linear trend conditional on an initial observation. This paper considers a slightly different model with a random initial value, in which nuisance parameters can easily be eliminated by an invariant reduction of the model. We show that invariance arguments can also be used when comparing power within a conditional model. In the context of the conditional model, the Dickey–Fuller test is shown to be more stringent than a number of unit-root tests motivated by models with a random initial value. The power of the Dickey–Fuller test can be improved by making assumptions about the initial value. The practitioner therefore has to trade off robustness and power, as assumptions about initial values are hard to test but can give more power.
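For reference, the Dickey–Fuller test with a linear trend is based on the familiar regression below; this is the standard formulation, shown for orientation, and the paper's point concerns how the initial value is treated rather than the regression itself.

```latex
% Augmenting lags omitted for simplicity; the test regression with constant and trend is
\Delta y_t = \mu + \beta t + (\rho - 1)\, y_{t-1} + \varepsilon_t ,
% and the unit-root null H_0 : \rho = 1 is assessed with the t-statistic on (\rho - 1),
% compared against the Dickey–Fuller distribution for the trend case.
```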
