Similar Articles
20 similar articles retrieved.
1.
Stable autoregressive models are considered with martingale difference errors scaled by an unknown nonparametric time-varying function generating heterogeneity. An important special case involves structural change in the error variance, but in most practical cases the pattern of variance change over time is unknown and may involve shifts at unknown discrete points in time, continuous evolution or combinations of the two. This paper develops kernel-based estimators of the residual variances and associated adaptive least squares (ALS) estimators of the autoregressive coefficients. Simulations show that efficiency gains are achieved by the adaptive procedure.
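A minimal Python sketch of the two-step idea, not the authors' implementation: fit the autoregression by OLS, estimate the time-varying error variance by kernel smoothing of the squared residuals, and re-estimate the coefficient by adaptive (weighted) least squares. The Gaussian kernel, the bandwidth and the simulated variance path are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 500, 0.6
sigma = 0.5 + 1.5 * (np.arange(n) / n)           # assumed smooth variance change
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + sigma[t] * rng.standard_normal()

x, z = y[:-1], y[1:]                             # lagged regressor and response
rho_ols = np.sum(x * z) / np.sum(x * x)          # step 1: ordinary least squares
u2 = (z - rho_ols * x) ** 2                      # squared residuals

h = 0.1                                          # kernel bandwidth (assumption)
grid = np.arange(1, n) / n
k = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / h) ** 2)
sigma2_hat = k @ u2 / k.sum(axis=1)              # kernel estimate of the residual variance path

w = 1.0 / sigma2_hat                             # step 2: adaptive (weighted) least squares
rho_als = np.sum(w * x * z) / np.sum(w * x * x)
print('OLS:', rho_ols, 'ALS:', rho_als)
```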

2.
We consider tests of the null hypothesis of stationarity against a unit root alternative, when the series is subject to structural change at an unknown point in time. Three extant tests are reviewed which allow for an endogenously determined instantaneous structural break, and a related fourth procedure is introduced. We further propose tests which permit the structural change to be gradual rather than instantaneous, allowing the null hypothesis to be stationarity about a smooth transition in linear trend. The size and power properties of the tests are investigated, and the tests are applied to four economic time series.

3.
A new semi-parametric expected shortfall (ES) estimation and forecasting framework is proposed. The proposed approach is based on a two-step estimation procedure. The first step involves the estimation of value at risk (VaR) at different quantile levels through a set of quantile time series regressions. Then, the ES is computed as a weighted average of the estimated quantiles. The quantile weighting structure is parsimoniously parameterized by means of a beta weight function whose coefficients are optimized by minimizing a joint VaR and ES loss function of the Fissler–Ziegel class. The properties of the proposed approach are first evaluated with an extensive simulation study using two data generating processes. Two forecasting studies with different out-of-sample sizes are then conducted, one of which focuses on the 2008 Global Financial Crisis period. The proposed models are applied to seven stock market indices, and their forecasting performances are compared to those of a range of parametric, non-parametric, and semi-parametric models, including GARCH, conditional autoregressive expectile (CARE), joint VaR and ES quantile regression models, and a simple average of quantiles. The results of the forecasting experiments provide clear evidence in support of the proposed models.
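A hedged Python sketch of the second step only: ES is formed as a normalised beta-weighted average of VaR estimates at several quantile levels. The placeholder VaR values and the beta parameters (a, b) are assumptions for illustration; in the paper, (a, b) are optimised by minimising a joint VaR–ES loss of the Fissler–Ziegel class.

```python
import numpy as np
from scipy.stats import beta

# Hypothetical VaR estimates at 5 quantile levels at or below the target ES level (e.g. 2.5%),
# as would come out of the first-step quantile time series regressions.
var_hat = np.array([-4.1, -3.6, -3.2, -2.9, -2.7])

def es_from_quantiles(var_hat, a, b):
    """ES as a normalised beta-weighted average of the estimated quantiles."""
    k = len(var_hat)
    u = (np.arange(k) + 0.5) / k          # interior evaluation points in (0, 1)
    w = beta.pdf(u, a, b)                 # beta weight function
    return np.sum(w * var_hat) / np.sum(w)

print(es_from_quantiles(var_hat, a=2.0, b=5.0))   # (a, b) illustrative, not optimised
```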

4.
杜诗晨, 汪飞星. 价值工程 (Value Engineering), 2007, 26(4): 161–165.
Financial time series exhibit heavy-tailed distributions, volatility clustering and related features, so traditional methods have difficulty measuring their risk accurately. This paper applies a new two-stage method for estimating VaR and ES. In the first stage, GARCH-M-class models (GARCH-M, EGARCH-M and TGARCH-M) are fitted to the raw return data to obtain the residual series; in the second stage, extreme value analysis is applied to the tails of the residuals, yielding dynamic VaR and ES for the return series. Finally, the results obtained from the three models are compared.
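A minimal two-stage sketch in the same spirit, with clearly labelled substitutions: a plain GARCH(1,1) with constant mean from the arch package stands in for the GARCH-M/EGARCH-M/TGARCH-M specifications, the data are simulated Student-t returns, and the 95% threshold for the peaks-over-threshold step is an arbitrary choice.

```python
import numpy as np
from arch import arch_model
from scipy.stats import genpareto

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=2000)           # placeholder heavy-tailed returns

# Stage 1: fit a GARCH-type model and extract standardized residuals.
fit = arch_model(returns, mean='Constant', vol='GARCH', p=1, q=1).fit(disp='off')
z = -fit.std_resid                                  # standardized residuals on the loss scale

# Stage 2: fit a generalized Pareto distribution to the tail exceedances.
u = np.quantile(z, 0.95)                            # threshold (arbitrary choice)
exc = z[z > u] - u
xi, _, scale = genpareto.fit(exc, floc=0)
p = 0.99
n, n_u = len(z), len(exc)
z_var = u + (scale / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)   # residual-scale VaR
z_es = (z_var + scale - xi * u) / (1 - xi)                        # residual-scale ES

# Dynamic one-step-ahead risk: rescale by the forecast mean and volatility.
f = fit.forecast(horizon=1)
mu, sig = f.mean.values[-1, 0], np.sqrt(f.variance.values[-1, 0])
print('1-step VaR (return scale):', mu - sig * z_var)
print('1-step ES  (return scale):', mu - sig * z_es)
```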

5.
Journal of Econometrics, 2005, 126(1): 79–114.
We propose a hybrid estimation procedure that combines the least squares and nonparametric methods to estimate change points of volatility in time series models. Its main advantage is that it does not require any specific form of marginal or transitional densities of the process. We also establish the asymptotic properties of the estimators when the regression and conditional volatility functions are not known. The proposed tests for change points of volatility are shown to be consistent and more powerful than the nonparametric ones in the literature. Finally, we provide simulations and empirical results using the Hong Kong stock market index (HSI) series.

6.
We consider time series forecasting in the presence of ongoing structural change where both the time series dependence and the nature of the structural change are unknown. Methods that downweight older data, such as rolling regressions, forecast averaging over different windows and exponentially weighted moving averages, known to be robust to historical structural change, are found also to be useful in the presence of ongoing structural change in the forecast period. A crucial issue is how to select the degree of downweighting, usually defined by an arbitrary tuning parameter. We make this choice data-dependent by minimising the forecast mean square error, and provide a detailed theoretical analysis of our proposal. Monte Carlo results illustrate the methods. We examine their performance on 97 US macro series. Forecasts using data-based tuning of the data discount rate are shown to perform well.
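A hedged Python sketch of data-based downweighting: one-step-ahead forecasts from an exponentially weighted moving average, with the discount factor chosen by minimising the in-sample one-step forecast mean square error over a grid. The grid, the simulated series and the single structural shift are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
mu = np.where(np.arange(n) < 150, 0.0, 1.0)        # assumed structural change in the mean
y = mu + rng.standard_normal(n)

def ewma_forecasts(y, delta):
    """Recursive EWMA one-step forecasts: f[t] = delta * f[t-1] + (1 - delta) * y[t-1]."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = delta * f[t - 1] + (1 - delta) * y[t - 1]
    return f

grid = np.linspace(0.5, 0.99, 50)                  # candidate discount factors (assumption)
mse = [np.mean((y[1:] - ewma_forecasts(y, d)[1:]) ** 2) for d in grid]
delta_star = grid[int(np.argmin(mse))]             # data-based choice of the tuning parameter
print('chosen discount factor:', delta_star)
```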

7.
Maximum likelihood (ML) estimation of the autoregressive parameter of a dynamic panel data model with fixed effects is inconsistent under fixed time series sample size and large cross section sample size asymptotics. This paper proposes a general, computationally inexpensive method of bias reduction that is based on indirect inference, shows unbiasedness and analyzes efficiency. Monte Carlo studies show that our procedure achieves substantial bias reductions with only mild increases in variance, thereby substantially reducing root mean square errors. The method is compared with certain consistent estimators and is shown to have superior finite sample properties to the generalized method of moments (GMM) and the bias-corrected ML estimator.

8.
This article proposes a class of joint and marginal spectral diagnostic tests for parametric conditional means and variances of linear and nonlinear time series models. The use of joint and marginal tests is motivated from the fact that marginal tests for the conditional variance may lead to misleading conclusions when the conditional mean is misspecified. The new tests are based on a generalized spectral approach and do not need to choose a lag order depending on the sample size or to smooth the data. Moreover, the proposed tests are robust to higher order dependence of unknown form, in particular to conditional skewness and kurtosis. It turns out that the asymptotic null distributions of the new tests depend on the data generating process. Hence, we implement the tests with the assistance of a wild bootstrap procedure. A simulation study compares the finite sample performance of the proposed and competing tests, and shows that our tests can play a valuable role in time series modeling. Finally, an application to the S&P 500 highlights the merits of our approach.

9.
By contrasting endogenous growth models with facts, one is frequently confronted with the prediction that levels of economic variables, such as R&D expenditures, imply lasting effects on the growth rate of an economy. As stylized facts show, the research intensity in most advanced countries has increased dramatically, mostly faster than GDP. Yet growth rates have remained roughly constant or even declined. In this paper we modify the Romer endogenous growth model and test our variant of the model using time series data. We estimate the market version both for the US and Germany for the time period January 1962 to April 1996. Our results demonstrate that the model is compatible with the time series for aggregate data in those countries. All parameters fall into a reasonable range.

10.
This paper proposes an estimation method for a partial parametric model with multiple integrated time series. Our estimation procedure is based on the decomposition of the nonparametric part of the regression function into homogeneous and integrable components. It consists of two steps: In the first step we parameterize and fit the homogeneous component of the nonparametric part by the nonlinear least squares with other parametric terms in the model, and use in the second step the standard kernel method to nonparametrically estimate the integrable component of the nonparametric part from the residuals in the first step. We establish consistency and obtain the asymptotic distribution of our estimator. A simulation shows that our estimator performs well in finite samples. For the empirical illustration, we estimate the money demand functions for the US and Japan using our model and methodology.

11.
This paper proposes a fully nonparametric procedure to evaluate the effect of a counterfactual change in the distribution of some covariates on the unconditional distribution of an outcome variable of interest. In contrast to other methods, we do not restrict attention to the effect on the mean. In particular, our method can be used to conduct inference on the change of the distribution function as a whole, its moments and quantiles, inequality measures such as the Lorenz curve or Gini coefficient, and to test for stochastic dominance. The practical applicability of our procedure is illustrated via a simulation study and an empirical example.

12.
This paper proposes a new testing procedure for detecting error cross section dependence after estimating a linear dynamic panel data model with regressors using the generalised method of moments (GMM). The test is valid when the cross-sectional dimension of the panel is large relative to the time series dimension. Importantly, our approach allows one to examine whether any error cross section dependence remains after including time dummies (or after transforming the data in terms of deviations from time-specific averages), which will be the case under heterogeneous error cross section dependence. Finite sample simulation-based results suggest that our tests perform well, particularly the version based on the [Blundell, R., Bond, S., 1998. Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics 87, 115–143] system GMM estimator. In addition, it is shown that the system GMM estimator, based only on partial instruments consisting of the regressors, can be a reliable alternative to the standard GMM estimators under heterogeneous error cross section dependence. The proposed tests are applied to employment equations using UK firm data and the results show little evidence of heterogeneous error cross section dependence.

13.
We discuss a method to estimate the confidence bounds for average economic growth, which is robust to misspecification of the unit root property of a given time series. We derive asymptotic theory for the consequences of such misspecification. Our empirical method amounts to an implementation of the subsampling procedure advocated in Romano and Wolf (Econometrica, 2001, Vol. 69, p. 1283). Simulation evidence supports the theory and also indicates the practical relevance of the subsampling method. We use quarterly postwar US industrial production for illustration and show that non-robust approaches can lead to rather different conclusions about average economic growth than our robust approach.
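A minimal subsampling sketch in the spirit of Romano and Wolf, under simplifying assumptions: a known sqrt(n) convergence rate, an ad hoc block length, and a simulated persistent growth series. The paper's robust implementation instead accommodates the unknown rate induced by possible unit-root misspecification.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 240
# Placeholder persistent "growth" series around a mean of 0.8 (moving-average noise).
growth = 0.8 + np.convolve(rng.standard_normal(n + 3), np.ones(4) / 4, mode='valid')

theta_hat = growth.mean()                               # full-sample average growth
b = 24                                                  # subsample (block) length, assumption
blocks = np.array([growth[i:i + b].mean() for i in range(n - b + 1)])
roots = np.sqrt(b) * (blocks - theta_hat)               # subsample analogue of sqrt(n)(theta_hat - theta)
lo, hi = np.quantile(roots, [0.025, 0.975])
ci = (theta_hat - hi / np.sqrt(n), theta_hat - lo / np.sqrt(n))
print('95% subsampling CI for average growth:', ci)
```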

14.
Time series of financial asset values exhibit well-known statistical features such as heavy tails and volatility clustering. We propose a nonparametric extension of the classical Peaks-Over-Threshold method from extreme value theory to fit the time varying volatility in situations where the stationarity assumption may be violated by erratic changes of regime, say. As a result, we provide a method for estimating conditional risk measures applicable to both stationary and nonstationary series. A backtesting study for the UBS share price over the subprime crisis exemplifies our approach.

15.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 158–179]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.

16.
Perron [Perron, P., 1989. The great crash, the oil price shock and the unit root hypothesis. Econometrica 57, 1361–1401] introduced a variety of unit root tests that are valid when a break in the trend function of a time series is present. The motivation was to devise testing procedures that were invariant to the magnitude of the shift in level and/or slope. In particular, if a change is present it is allowed under both the null and alternative hypotheses. This analysis was carried out under the assumption of a known break date. The subsequent literature aimed to devise testing procedures valid in the case of an unknown break date. However, in doing so, most of the literature and, in particular the commonly used test of Zivot and Andrews [Zivot, E., Andrews, D.W.K., 1992. Further evidence on the great crash, the oil price shock and the unit root hypothesis. Journal of Business and Economic Statistics 10, 251–270], assumed that if a break occurs, it does so only under the alternative hypothesis of stationarity. This is undesirable since (a) it imposes an asymmetric treatment when allowing for a break, so that the test may reject when the noise is integrated but the trend is changing; (b) if a break is present, this information is not exploited to improve the power of the test. In this paper, we propose a testing procedure that addresses both issues. It allows a break under both the null and alternative hypotheses and, when a break is present, the limit distribution of the test is the same as in the case of a known break date, thereby allowing increased power while maintaining the correct size. Simulation experiments confirm that our procedure offers an improvement over commonly used methods in small samples.

17.
Multiple time series data may exhibit clustering over time and the clustering effect may change across different series. This paper is motivated by the Bayesian non-parametric modelling of the dependence between clustering effects in multiple time series analysis. We follow a Dirichlet process mixture approach and define a new class of multivariate dependent Pitman–Yor processes (DPY). The proposed DPY are represented in terms of vectors of stick-breaking processes which determine dependent clustering structures in the time series. We follow a hierarchical specification of the DPY base measure to account for various degrees of information pooling across the series. We discuss some theoretical properties of the DPY and use them to define Bayesian non-parametric repeated measurement and vector autoregressive models. We provide efficient Markov chain Monte Carlo algorithms for posterior computation of the proposed models and illustrate the effectiveness of the method with a simulation study and an application to the United States and the European Union business cycle.
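A minimal sketch of the stick-breaking construction underlying a single Pitman–Yor process, truncated at K atoms; the discount and strength parameters are illustrative assumptions, and the dependence structure across series developed in the paper is not reproduced here.

```python
import numpy as np

def pitman_yor_weights(d, theta, K, rng):
    """Truncated stick-breaking weights for a Pitman-Yor(d, theta) process:
    V_k ~ Beta(1 - d, theta + k d), w_k = V_k * prod_{j<k} (1 - V_j)."""
    v = np.array([rng.beta(1 - d, theta + (k + 1) * d) for k in range(K)])
    w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))
    return w

rng = np.random.default_rng(4)
w = pitman_yor_weights(d=0.25, theta=1.0, K=50, rng=rng)   # parameter values are illustrative
print(w[:5], w.sum())                                      # truncated weights sum to just below 1
```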

18.
Many structural break and regime-switching models have been used with macroeconomic and financial data. In this paper, we develop an extremely flexible modeling approach which can accommodate virtually any of these specifications. We build on earlier work showing the relationship between flexible functional forms and random variation in parameters. Our contribution is based around the use of priors on the time variation that is developed from considering a hypothetical reordering of the data and distance between neighboring (reordered) observations. The range of priors produced in this way can accommodate a wide variety of nonlinear time series models, including those with regime-switching and structural breaks. By allowing the amount of random variation in parameters to depend on the distance between (reordered) observations, the parameters can evolve in a wide variety of ways, allowing for everything from models exhibiting abrupt change (e.g. threshold autoregressive models or standard structural break models) to those which allow for a gradual evolution of parameters (e.g. smooth transition autoregressive models or time varying parameter models). Bayesian econometric methods for inference are developed for estimating the distance function and types of hypothetical reordering. Conditional on a hypothetical reordering and distance function, a simple reordering of the actual data allows us to estimate our models with standard state space methods by a simple adjustment to the measurement equation. We use artificial data to show the advantages of our approach, before providing two empirical illustrations involving the modeling of real GDP growth.

19.
The central concern of this paper is the provision in a time series moment condition framework of practical recommendations of confidence regions for parameters whose coverage probabilities are robust to the strength or weakness of identification. To this end we develop Pearson-type test statistics based on GEL implied probabilities formed from general kernel smoothed versions of the moment indicators. We also modify the statistics suggested in Guggenberger and Smith (2008) for a general kernel smoothing function. Importantly for our conclusions, we provide GEL time series counterparts to GMM and GEL conditional likelihood ratio statistics given in Kleibergen (2005) and Smith (2007). Our analysis not only demonstrates that these statistics are asymptotically (conditionally) pivotal under both classical asymptotic theory and weak instrument asymptotics of Stock and Wright (2000) but also provides asymptotic power results in the weakly identified time series context. Consequently, the empirical null rejection probabilities of the associated tests and, thereby, the coverage probabilities of the corresponding confidence regions, should not be affected greatly by the strength or otherwise of identification. A comprehensive Monte Carlo study indicates that a number of the tests proposed here represent very competitive choices in comparison with those suggested elsewhere in the literature.

20.
Trend breaks appear to be prevalent in macroeconomic time series, and unit root tests therefore need to make allowance for these if they are to avoid the serious effects that unmodelled trend breaks have on power. Carrion-i-Silvestre et al. (2009) propose a pre-test-based approach which delivers near asymptotically efficient unit root inference both when breaks do not occur and where multiple breaks occur, provided the break magnitudes are fixed. Unfortunately, however, the fixed magnitude trend break asymptotic theory does not predict well the finite sample power functions of these tests, and power can be very low for the magnitudes of trend breaks typically observed in practice. In response to this problem we propose a unit root test that allows for multiple breaks in trend, obtained by taking the infimum of the sequence (across all candidate break points in a trimmed range) of local GLS detrended augmented Dickey–Fuller-type statistics. We show that this procedure has power that is robust to the magnitude of any trend breaks, thereby retaining good finite sample power in the presence of plausibly-sized breaks. We also demonstrate that, unlike the OLS detrended infimum tests of Zivot and Andrews (1992), these tests display no tendency to spuriously reject in the limit when fixed magnitude trend breaks occur under the unit root null.
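A hedged sketch of the infimum-type construction: for each candidate break date in a trimmed range, detrend allowing a slope break at that date, compute an ADF-type t-statistic on the detrended series, and take the minimum. Assumptions: simple OLS detrending (the paper uses local GLS detrending), a single break, one augmentation lag and 15% trimming.

```python
import numpy as np

def adf_tstat(u, k=1):
    """t-statistic on rho in: diff(u)_t = rho*u_{t-1} + sum_j phi_j*diff(u)_{t-j} + e_t."""
    du = np.diff(u)
    y = du[k:]
    X = np.column_stack([u[k:-1]] + [du[k - j:-j] for j in range(1, k + 1)])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = e @ e / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return b[0] / se

rng = np.random.default_rng(5)
n = 300
t = np.arange(n)
# Simulated series: linear trend with a slope break at t = 150 plus an I(1) noise component.
y = 0.02 * t + 0.05 * np.maximum(t - 150, 0) + np.cumsum(rng.standard_normal(n))

stats = []
for tb in range(int(0.15 * n), int(0.85 * n)):                  # trimmed range of candidate breaks
    D = np.column_stack([np.ones(n), t, np.maximum(t - tb, 0)])  # level, trend, slope break
    u = y - D @ np.linalg.lstsq(D, y, rcond=None)[0]             # OLS detrending (assumption)
    stats.append(adf_tstat(u))
print('infimum ADF-type statistic:', min(stats))
```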
