Similar Documents
 20 similar documents found (search time: 49 ms)
1.
This article proposes a class of joint and marginal spectral diagnostic tests for parametric conditional means and variances of linear and nonlinear time series models. The use of joint and marginal tests is motivated by the fact that marginal tests for the conditional variance may lead to misleading conclusions when the conditional mean is misspecified. The new tests are based on a generalized spectral approach and do not need to choose a lag order depending on the sample size or to smooth the data. Moreover, the proposed tests are robust to higher-order dependence of unknown form, in particular to conditional skewness and kurtosis. It turns out that the asymptotic null distributions of the new tests depend on the data generating process. Hence, we implement the tests with the assistance of a wild bootstrap procedure. A simulation study compares the finite sample performance of the proposed and competing tests, and shows that our tests can play a valuable role in time series modeling. Finally, an application to the S&P 500 highlights the merits of our approach.
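As a toy illustration of the bootstrap step only (not of the generalized spectral statistic itself), the Python sketch below computes a generic wild-bootstrap p-value using external Rademacher draws. The stand-in statistic `acf1_stat` and all names are ours, not the paper's.

```python
import numpy as np

def wild_bootstrap_pvalue(resid, statistic, n_boot=499, seed=0):
    """Generic wild-bootstrap p-value: perturb residuals with external
    Rademacher draws and recompute the test statistic on each draw."""
    rng = np.random.default_rng(seed)
    t_obs = statistic(resid)
    exceed = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(resid))  # Rademacher weights
        if statistic(resid * w) >= t_obs:
            exceed += 1
    return (1 + exceed) / (1 + n_boot)

def acf1_stat(e):
    """|first-order autocorrelation|: a toy conditional-mean diagnostic."""
    e = e - e.mean()
    return abs(np.sum(e[1:] * e[:-1])) / np.sum(e ** 2)

e = np.random.default_rng(1).standard_normal(500)  # placeholder residuals
print(wild_bootstrap_pvalue(e, acf1_stat))
```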

2.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare "true" joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov-type testing, and other work on the evaluation of DSGEs aimed at comparing the second-order properties of historical and simulated time series. We begin by fixing a given model as the "benchmark" model, against which all "alternative" models are to be compared. We then test whether at least one of the alternative models provides a more "accurate" approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional squared error. Bootstrap critical values are discussed, and an illustrative example is given, in which alternative versions of a standard DSGE model whose calibrated parameters are allowed to vary slightly are shown to perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned implausible variances and/or distributional assumptions.
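A minimal sketch of the accuracy notion used here: the mean squared distance between two empirical CDFs evaluated on a grid. It is an illustrative stand-in for the paper's statistic, and the normal "models" below are placeholders, not DSGE output.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at each point of `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def distributional_sq_error(hist, sim, n_grid=200):
    """Mean squared distance between the empirical CDFs of historical
    and simulated series (illustrative stand-in for the paper's measure)."""
    grid = np.linspace(min(hist.min(), sim.min()),
                       max(hist.max(), sim.max()), n_grid)
    return np.mean((ecdf(hist, grid) - ecdf(sim, grid)) ** 2)

rng = np.random.default_rng(0)
hist = rng.normal(0, 1.0, 300)          # "historical" data
benchmark = rng.normal(0, 1.3, 5000)    # simulated from the benchmark model
alternative = rng.normal(0, 1.05, 5000) # simulated from an alternative model
print(distributional_sq_error(hist, benchmark),
      distributional_sq_error(hist, alternative))
```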

3.
We propose a new diagnostic tool for time series called the quantilogram. The tool can be used formally, with the inference tools we provide under general conditions, or as a simple graphical device. We apply our method to measure directional predictability and to test the hypothesis that a given time series has no directional predictability. The test is based on comparing the correlogram of quantile hits to a pointwise confidence interval or on comparing the cumulated squared autocorrelations with the corresponding critical value. We provide the distribution theory needed to conduct inference, propose some model-free upper bound critical values, and apply our methods to S&P 500 stock index return data. The empirical results suggest some directional predictability in returns. The evidence is strongest in mid-range quantiles such as 5–10% and for daily data. The evidence for predictability at the median is of comparable strength to the evidence around the mean, and is strongest at the daily frequency.
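The quantilogram itself is easy to sketch: compute the quantile-hit process and its autocorrelations, then compare each lag with a pointwise band. Below is a minimal Python version with our own illustrative names, using the simple i.i.d. benchmark band rather than the paper's general theory.

```python
import numpy as np

def quantilogram(y, alpha=0.05, max_lag=20):
    """Sample quantilogram: autocorrelations of the quantile-hit process
    psi_t = alpha - 1{y_t < q_alpha}."""
    q = np.quantile(y, alpha)              # unconditional alpha-quantile
    psi = alpha - (y < q).astype(float)    # quantile-hit (check) process
    psi = psi - psi.mean()
    denom = np.sum(psi ** 2)
    return np.array([np.sum(psi[k:] * psi[:-k]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)  # placeholder for index returns
rho = quantilogram(returns, alpha=0.05)
band = 1.96 / np.sqrt(len(returns))        # pointwise ~95% i.i.d. band
print(np.abs(rho) > band)                  # lags flagged as significant
```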

4.
We consider the problem of unobserved components in time series from a Bayesian non-parametric perspective. The identification conditions are treated as unknown and analyzed in a probabilistic framework. In particular, informative prior distributions force the spectral decomposition to be in an identifiable region. Then, the likelihood function adapts the prior decompositions to the data.

5.
When evaluating the performances of time series extrapolation methods, both researchers and practitioners typically focus on the average or median performance according to some specific error metric, such as the absolute error or the absolute percentage error. However, from a risk-assessment point of view, it is far more important to evaluate the distributions of such errors, and especially their tails. For instance, a lack of normality and symmetry in error distributions can have significant implications for decision making, such as in stock control. Moreover, frequently these distributions can only be constructed empirically, as they may be the result of a computationally intensive non-parametric approach, such as an artificial neural network. This study proposes an approach for evaluating the empirical distributions of forecasting methods and uses it to assess eleven popular time series extrapolation approaches across two different datasets (M3 and ForeDeCk). The results highlight some very interesting tales from the tails.
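A hedged sketch of the kind of tail-focused evaluation advocated here: summarize the empirical distribution of forecast errors by tail quantiles plus skewness and kurtosis. The helper name and the toy t-distributed errors are ours, not the study's.

```python
import numpy as np
from scipy import stats

def tail_summary(errors):
    """Summarize the empirical error distribution beyond its centre:
    tail quantiles plus skewness and excess kurtosis."""
    q = np.quantile(errors, [0.01, 0.05, 0.50, 0.95, 0.99])
    return {"q01": q[0], "q05": q[1], "median": q[2],
            "q95": q[3], "q99": q[4],
            "skew": stats.skew(errors),
            "excess_kurtosis": stats.kurtosis(errors)}

rng = np.random.default_rng(0)
errors = rng.standard_t(df=4, size=10_000)  # heavy-tailed toy errors
print(tail_summary(errors))
```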

6.
Using frequency distributions of daily closing price time series for several financial market indices, we investigate whether the bias away from an equiprobable sequence distribution found in the data, as predicted by algorithmic information theory, may account for some of the deviation of financial markets from log-normality, and if so, for how much of that deviation and over what sequence lengths. We do so by comparing the distributions of binary sequences from actual time series of financial markets and series built up from purely algorithmic means. Our discussion is a starting point for a further investigation of the market as a rule-based system with an algorithmic component, despite its apparent randomness, and the use of the theory of algorithmic probability with new tools that can be applied to the study of the market price phenomenon. The main discussion is cast in terms of assumptions common to areas of economics in agreement with an algorithmic view of the market.
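The comparison with an equiprobable benchmark can be illustrated compactly: binarize daily moves and tabulate the frequencies of length-n binary "words" against the uniform value 2^-n. A toy sketch, in which the random-walk prices are a placeholder for real index data:

```python
import numpy as np
from collections import Counter

def word_frequencies(prices, n=4):
    """Frequency distribution of length-n up/down 'words' in a price
    series, to be compared against the equiprobable benchmark 2**-n."""
    bits = (np.diff(prices) > 0).astype(int)   # 1 = up day, 0 = down day
    words = ["".join(map(str, bits[i:i + n]))
             for i in range(len(bits) - n + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in sorted(counts.items())}

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))  # toy walk
print(word_frequencies(prices, n=3), 2 ** -3)
# Systematic deviations from 1/8 would hint at non-equiprobable structure.
```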

7.
Efficient semiparametric and parametric estimates are developed for a spatial autoregressive model containing non-stochastic explanatory variables and innovations suspected to be non-normal. The main stress is on the case where the distribution is of unknown, nonparametric form, in which series nonparametric estimates of the score function are employed in adaptive estimates of parameters of interest. These estimates are as efficient as ones based on a correct parametric form; in particular, they are more efficient than pseudo-Gaussian maximum likelihood estimates at non-Gaussian distributions. Two different adaptive estimates are considered, relying on somewhat different regularity conditions. A Monte Carlo study of finite sample performance is included.

8.
We consider time series forecasting in the presence of ongoing structural change where both the time series dependence and the nature of the structural change are unknown. Methods that downweight older data (rolling regressions, forecast averaging over different windows, exponentially weighted moving averages), known to be robust to historical structural change, are found to be useful also in the presence of ongoing structural change in the forecast period. A crucial issue is how to select the degree of downweighting, usually defined by an arbitrary tuning parameter. We make this choice data-dependent by minimising the forecast mean square error, and provide a detailed theoretical analysis of our proposal. Monte Carlo results illustrate the methods. We examine their performance on 97 US macro series. Forecasts using data-based tuning of the data discount rate are shown to perform well.
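A minimal sketch of data-dependent downweighting in this spirit: an exponentially weighted moving-average forecast whose discount factor is chosen by minimising one-step forecast MSE. The function names and the candidate grid are ours; the paper's theory covers a broader class of weighting schemes.

```python
import numpy as np

def ewma_forecasts(y, lam):
    """One-step-ahead exponentially weighted moving-average forecasts:
    f_t uses data only up to t-1."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = lam * f[t - 1] + (1 - lam) * y[t - 1]
    return f

def tune_lambda(y, grid=np.linspace(0.50, 0.99, 50)):
    """Pick the discount factor by minimising one-step forecast MSE,
    a simple stand-in for the data-dependent tuning described above."""
    mses = [np.mean((y[1:] - ewma_forecasts(y, lam)[1:]) ** 2)
            for lam in grid]
    return grid[int(np.argmin(mses))]

rng = np.random.default_rng(0)
# A mid-sample level shift mimics ongoing structural change.
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 300)])
print(tune_lambda(y))
```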

9.
Detecting structural changes in volatility is important for understanding volatility dynamics and stylized facts observed for financial returns, such as volatility persistence. We propose modified CUSUM and LM tests that are built on a robust estimator of the long-run variance of the squared series. We establish conditions under which the new tests have standard null distributions and diverge faster than standard tests under the alternative. The theory allows for smooth and abrupt structural changes, which may be small. The smoothing parameter is selected automatically such that the proposed test has good finite-sample size while achieving decent power gains.
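The CUSUM idea is easy to sketch for a variance break: cumulate the demeaned squared series and normalise by a long-run variance estimate. The version below uses a naive i.i.d. variance estimate in place of the robust long-run estimator the paper actually proposes, so it is only a baseline illustration.

```python
import numpy as np

def cusum_variance(y):
    """CUSUM statistic for a break in unconditional variance, built on
    the squared series; the normalisation here is a naive i.i.d.
    variance, not the paper's robust long-run estimator."""
    x = y ** 2
    T = len(x)
    s = np.cumsum(x - x.mean())         # partial sums of demeaned squares
    lrv = np.var(x)                     # placeholder long-run variance
    return np.max(np.abs(s)) / np.sqrt(T * lrv)

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 2, 500)])
print(cusum_variance(y))  # compare with the sup-Brownian-bridge 5%
                          # critical value of about 1.36
```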

10.
The maximum eigenvalue (ME) test for seasonal cointegrating ranks is presented using the approach of Cubadda [Oxford Bulletin of Economics and Statistics (2001), Vol. 63, pp. 497–511], which is computationally more efficient than that of Johansen and Schaumburg [Journal of Econometrics (1999), Vol. 88, pp. 301–339]. The asymptotic distributions of the ME test statistics are obtained for several cases that depend on the nature of deterministic terms. Monte Carlo experiments are conducted to evaluate the relative performances of the proposed ME test and the trace test, and we illustrate these tests using a monthly time series.

11.
This paper establishes the asymptotic distributions of the impulse response functions in panel vector autoregressions with a fixed time dimension. It also proves the asymptotic validity of a bootstrap approximation to their sampling distributions. The autoregressive parameters are estimated using the GMM estimators based on the first differenced equations and the error variance is estimated using an extended analysis-of-variance type estimator. Contrary to the time series setting, we find that the GMM estimator of the autoregressive coefficients is not asymptotically independent of the error variance estimator. The asymptotic dependence calls for variance correction for the orthogonalized impulse response functions. Simulation results show that the variance correction improves the coverage accuracy of both the asymptotic confidence band and the studentized bootstrap confidence band for the orthogonalized impulse response functions.

12.
We present a new approach to trend/cycle decomposition of time series that follow regime-switching processes. The proposed approach, which we label the "regime-dependent steady-state" (RDSS) decomposition, is motivated as the appropriate generalization of the Beveridge and Nelson decomposition [Beveridge, S., Nelson, C.R., 1981. A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the business cycle. Journal of Monetary Economics 7, 151–174] to the setting where the reduced-form dynamics of a given series can be captured by a regime-switching forecasting model. For processes in which the underlying trend component follows a random walk with possibly regime-switching drift, the RDSS decomposition is optimal in a minimum mean-squared-error sense and is more broadly applicable than directly employing an Unobserved Components model.
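In the fixed-regime special case the Beveridge-Nelson decomposition has a closed form: if the first difference follows dy_t = mu + phi*(dy_{t-1} - mu) + e_t, the BN trend is y_t + phi/(1 - phi)*(dy_t - mu). A sketch of that special case follows; the RDSS generalization, which replaces mu and phi with regime-dependent values, is not reproduced here.

```python
import numpy as np

def bn_decomposition(y, phi, mu):
    """Beveridge-Nelson trend/cycle for a series whose first difference
    follows an AR(1): dy_t = mu + phi*(dy_{t-1} - mu) + e_t.
    Trend_t = y_t + phi/(1 - phi)*(dy_t - mu); cycle = y - trend."""
    dy = np.diff(y)
    trend = y[1:] + (phi / (1 - phi)) * (dy - mu)
    return trend, y[1:] - trend

# Simulate an I(1) series whose growth rate is AR(1), then decompose it.
rng = np.random.default_rng(0)
T, phi, mu = 1000, 0.4, 0.5
dy = np.empty(T)
dy[0] = mu
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.standard_normal()
y = np.cumsum(dy)
trend, cycle = bn_decomposition(y, phi=phi, mu=mu)
print(trend[-3:], cycle[-3:])
```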

13.
In this paper, we consider testing distributional assumptions in multivariate GARCH models based on empirical processes. Using the fact that the joint distribution carries the same information as the marginal together with the conditional distributions, we first transform the multivariate data into univariate independent data based on the marginal and conditional cumulative distribution functions. We then apply Khmaladze's martingale transformation (K-transformation) to the empirical process in the presence of estimated parameters. The K-transformation eliminates the effect of parameter estimation, allowing a distribution-free test statistic to be constructed. We show that the K-transformation takes a very simple form for testing multivariate normal and multivariate t-distributions. The procedure is applied to a multivariate financial time series data set.
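The transformation to univariate independent data is the Rosenblatt construction, and for a bivariate normal it takes a one-line form. A sketch assuming standard bivariate normal data with correlation rho (the function name is ours; the K-transformation step itself is not shown):

```python
import numpy as np
from scipy.stats import norm

def rosenblatt_bivariate_normal(x1, x2, rho):
    """Map (x1, x2) ~ standard bivariate normal with correlation rho
    into independent U(0,1) pairs via marginal and conditional CDFs."""
    u1 = norm.cdf(x1)                                   # marginal of x1
    u2 = norm.cdf((x2 - rho * x1) / np.sqrt(1 - rho ** 2))  # x2 given x1
    return u1, u2

rng = np.random.default_rng(0)
rho = 0.6
x1 = rng.standard_normal(5000)
x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(5000)
u1, u2 = rosenblatt_bivariate_normal(x1, x2, rho)
print(np.corrcoef(u1, u2)[0, 1])  # approximately zero under correct spec
```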

14.
Many key macroeconomic and financial variables are characterized by permanent changes in unconditional volatility. In this paper we analyse vector autoregressions with non-stationary (unconditional) volatility of a very general form, which includes single and multiple volatility breaks as special cases. We show that conventional rank statistics are potentially unreliable. In particular, their large-sample distributions depend on the integrated covariation of the underlying multivariate volatility process, which affects both the size and power of the associated co-integration tests, as we demonstrate numerically. A solution to the identified inference problem is provided by considering wild bootstrap-based implementations of the rank tests. These do not require the practitioner to specify a parametric model for volatility, or to assume that the pattern of volatility is common to, or independent across, the vector of series under analysis. The bootstrap is shown to perform very well in practice.

15.
We consider classes of multivariate distributions which can model skewness and are closed under orthogonal transformations. We review two classes of such distributions proposed in the literature and focus our attention on a particular, yet quite flexible, subclass of one of these classes. Members of this subclass are defined by affine transformations of univariate (skewed) distributions that ensure the existence of a set of coordinate axes along which there is independence and the marginals are known analytically. The choice of an appropriate m-dimensional skewed distribution is then restricted to the simpler problem of choosing m univariate skewed distributions. We introduce a Bayesian model comparison setup for selection of these univariate skewed distributions. The analysis does not rely on the existence of moments (allowing for any tail behaviour) and uses equivalent priors on the common characteristics of the different models. Finally, we apply this framework to multi-output stochastic frontiers using data from Dutch dairy farms.
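A minimal sketch of the construction described: draw independent univariate skewed variates along coordinate axes and rotate them with an orthogonal matrix. The skew-normal family and the specific shape parameters are our illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import skewnorm, ortho_group

m = 3
A = ortho_group.rvs(dim=m, random_state=0)   # random orthogonal matrix
shapes = [4.0, -2.0, 0.0]                    # one skewness per coordinate
# Independent skew-normal coordinates along the (rotated) axes...
z = np.column_stack([skewnorm.rvs(a, size=10_000, random_state=i)
                     for i, a in enumerate(shapes)])
x = z @ A.T                                  # ...rotated into observed space
# Along the axes defined by A the coordinates are independent with known
# skew-normal marginals; x is a flexible skewed m-variate sample.
print(x.mean(axis=0), np.round(np.corrcoef(x.T), 2))
```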

16.
This paper presents an inference approach for dependent data in time series, spatial, and panel data applications. The method involves constructing t and Wald statistics using a cluster covariance matrix estimator (CCE). We use an approximation that takes the number of clusters/groups as fixed and the number of observations per group to be large. The resulting limiting distributions of the t and Wald statistics are standard t and F distributions where the number of groups plays the role of sample size. Using a small number of groups is analogous to the 'fixed-b' asymptotics of Kiefer and Vogelsang (2002, 2005) (KV) for heteroskedasticity and autocorrelation consistent inference. We provide simulation evidence that demonstrates that the procedure substantially outperforms conventional inference procedures.
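The CCE itself is compact: sum the within-cluster score outer products and sandwich them between (X'X)^{-1}. A sketch for OLS, assuming the small-G t(G-1) reference distribution described above; all names are ours.

```python
import numpy as np

def cluster_se(X, resid, clusters):
    """Cluster covariance estimator (CCE) for OLS: sum score outer
    products within each cluster. With a small, fixed number of groups
    G, refer t-statistics to a t distribution with G - 1 degrees of
    freedom, as the approach described above suggests."""
    bread = np.linalg.inv(X.T @ X)
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        sg = X[clusters == g].T @ resid[clusters == g]  # cluster score
        meat += np.outer(sg, sg)
    return np.sqrt(np.diag(bread @ meat @ bread))

rng = np.random.default_rng(0)
n, G = 400, 8
clusters = np.repeat(np.arange(G), n // G)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta, cluster_se(X, y - X @ beta, clusters))
```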

17.
Understanding changes in the frequency, severity, and seasonality of daily temperature extremes is important for public policy decisions regarding heat waves and cold snaps. A heat wave is sometimes defined in terms of both the daily minimum and maximum temperature, which necessitates the generation of forecasts of their joint distribution. In this paper, we develop time series models with the aim of providing insight and producing forecasts of the joint distribution that can challenge the accuracy of forecasts based on ensemble predictions from a numerical weather prediction model. We use ensemble model output statistics to recalibrate the raw ensemble predictions for the marginal distributions, with ensemble copula coupling used to capture the dependency between the marginal distributions. In terms of time series modelling, we consider a bivariate VARMA-MGARCH model. We use daily Spanish data recorded over a 65-year period, and find that, for the 5-year out-of-sample period, the recalibrated ensemble predictions outperform the time series models in terms of forecast accuracy.

18.
Since the pioneering work by Granger (1969), many authors have proposed tests of causality between economic time series. Most of them are concerned only with "linear causality in mean", that is, whether one series linearly affects the (conditional) mean of the other series. This is no doubt of primary interest, but dependence between series may be nonlinear, and/or may operate through more than the conditional mean. Indeed, conditionally heteroskedastic models have been studied widely in recent years. The purpose of this paper is to propose a nonparametric test for possibly nonlinear causality. Taking into account that dependence in higher-order moments is becoming an important issue, especially in financial time series, we also consider a test for causality up to the Kth conditional moment. Statistically, we can also view this test as a nonparametric omitted-variable test in time series regression. A desirable property of the test is that it has nontrivial power against T^{1/2}-local alternatives, where T is the sample size. Also, we can form a test statistic accordingly if we have some knowledge of the alternative hypothesis. Furthermore, we show that the test statistic includes most omitted-variable test statistics as special cases asymptotically. The null asymptotic distribution is not normal, but the critical regions can easily be calculated by simulation. Monte Carlo experiments show that the proposed test has good size and power properties.

19.
The concept of causality introduced by Wiener [Wiener, N., 1956. The theory of prediction. In: E.F. Beckenback, ed., The Theory of Prediction, McGraw-Hill, New York (Chapter 8)] and Granger [Granger, C.W.J., 1969. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37, 424–459] is defined in terms of predictability one period ahead. This concept can be generalized by considering causality at any given horizon h, as well as tests for the corresponding non-causality [Dufour, J.-M., Renault, E., 1998. Short-run and long-run causality in time series: Theory. Econometrica 66, 1099–1125; Dufour, J.-M., Pelletier, D., Renault, É., 2006. Short run and long run causality in time series: Inference. Journal of Econometrics 132 (2), 337–362]. Instead of tests for non-causality at a given horizon, we study the problem of measuring causality between two vector processes. Existing causality measures have been defined only for horizon 1, and they fail to capture indirect causality. We propose generalizations to any horizon h of the measures introduced by Geweke [Geweke, J., 1982. Measurement of linear dependence and feedback between multiple time series. Journal of the American Statistical Association 77, 304–313]. Nonparametric and parametric measures of unidirectional causality and instantaneous effects are considered. On noting that the causality measures typically involve complex functions of model parameters in VAR and VARMA models, we propose a simple simulation-based method to evaluate these measures for any VARMA model. We also describe asymptotically valid nonparametric confidence intervals, based on a bootstrap technique. Finally, the proposed measures are applied to study causality relations at different horizons between macroeconomic, monetary and financial variables in the US.
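At horizon 1 with a single lag, Geweke's measure reduces to a log ratio of restricted and unrestricted residual variances. A least-squares sketch of that base case follows; the paper's contribution, extending the measure to any horizon h via simulation, is not reproduced here, and the function name is ours.

```python
import numpy as np

def geweke_measure(x, y):
    """Horizon-1 Geweke measure of causality from y to x with one lag:
    ln( var(residual | past x) / var(residual | past x and past y) )."""
    target = x[1:]
    X_r = np.column_stack([np.ones(len(target)), x[:-1]])   # restricted
    X_u = np.column_stack([X_r, y[:-1]])                    # unrestricted
    e_r = target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]
    e_u = target - X_u @ np.linalg.lstsq(X_u, target, rcond=None)[0]
    return np.log(np.var(e_r) / np.var(e_u))

rng = np.random.default_rng(0)
T = 2000
y = rng.standard_normal(T + 1)
x = np.zeros(T + 1)
for t in range(1, T + 1):
    x[t] = 0.3 * x[t - 1] + 0.5 * y[t - 1] + rng.standard_normal()
print(geweke_measure(x[1:], y[1:]))  # clearly positive: y causes x
```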

20.
In theory, time series models can provide more accurate predictions than econometric models. However, time series models do so by introducing bias. To marketing managers who are interested in planning as well as forecasting, lack of bias is more important than accuracy.
