Similar Articles
20 similar articles found.
1.
In this paper, we consider testing distributional assumptions in multivariate GARCH models based on empirical processes. Using the fact that the joint distribution carries the same amount of information as the marginal together with the conditional distributions, we first transform the multivariate data into univariate independent data based on the marginal and conditional cumulative distribution functions. We then apply Khmaladze's martingale transformation (K-transformation) to the empirical process in the presence of estimated parameters. The K-transformation eliminates the effect of parameter estimation, allowing a distribution-free test statistic to be constructed. We show that the K-transformation takes a very simple form for testing multivariate normal and multivariate t-distributions. The procedure is applied to a multivariate financial time series data set.
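As an illustration of the first step, the sketch below applies the marginal-then-conditional (Rosenblatt-type) transform to a bivariate Gaussian sample. The bivariate normal case and all parameter values are illustrative assumptions, and the subsequent K-transformation step is omitted.

```python
# Minimal sketch: map a bivariate normal vector to two independent U(0,1)
# variables via the marginal CDF of X1 and the conditional CDF of X2 | X1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0])
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal(mu, cov, size=5000)

# Marginal CDF of X1.
u1 = stats.norm.cdf(x[:, 0], loc=mu[0], scale=np.sqrt(cov[0, 0]))
# Conditional CDF of X2 given X1 = x1 for a bivariate normal.
cond_mean = mu[1] + rho * (x[:, 0] - mu[0])
cond_sd = np.sqrt(1.0 - rho ** 2)
u2 = stats.norm.cdf(x[:, 1], loc=cond_mean, scale=cond_sd)

# Under a correct distributional assumption, (u1, u2) are i.i.d. U(0,1);
# a univariate empirical-process test can then be applied to u1 and u2.
print(stats.kstest(u1, "uniform"))
print(stats.kstest(u2, "uniform"))
```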

2.
It has been documented that the random walk outperforms most economic structural and time series models in out-of-sample forecasts of the conditional mean dynamics of exchange rates. In this paper, we study whether the random walk has similar dominance in out-of-sample forecasts of the conditional probability density of exchange rates, since density forecasts are needed in many applications in economics and finance. We first develop a nonparametric portmanteau test for optimal density forecasts of univariate time series models in an out-of-sample setting and provide simulation evidence on its finite-sample performance. We then conduct a comprehensive empirical analysis of the out-of-sample performance of a wide variety of nonlinear time series models in forecasting the intraday probability densities of two major exchange rates, Euro/Dollar and Yen/Dollar. It is found that some sophisticated time series models that capture time-varying higher-order conditional moments, such as Markov regime-switching models, deliver better density forecasts for exchange rates than the random walk or the modified random walk with GARCH and Student-t innovations. This finding differs sharply from the results for mean forecasts and suggests that sophisticated time series models can be useful in out-of-sample applications involving the probability density.
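A minimal stand-in for this kind of out-of-sample density evaluation is the probability integral transform (PIT): under a correct forecast density the PITs are i.i.d. uniform. The sketch below is not the paper's portmanteau test; the GARCH data generating process, the misspecified normal forecast density and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T = 2000
omega, alpha, beta = 0.05, 0.08, 0.90               # illustrative GARCH(1,1)
h = np.empty(T); r = np.empty(T)
h[0] = omega / (1 - alpha - beta)
for t in range(T):
    if t > 0:
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    # "True" returns: GARCH variance with standardized t(5) innovations.
    r[t] = np.sqrt(h[t]) * stats.t.rvs(df=5, random_state=rng) / np.sqrt(5 / 3)

# Candidate forecast density: normal with the conditional variance (assumed known).
pit = stats.norm.cdf(r, scale=np.sqrt(h))

# A correct density forecast yields i.i.d. U(0,1) PITs: check uniformity and
# serial dependence of the normal scores.
print(stats.kstest(pit, "uniform"))
z = stats.norm.ppf(pit)
print("lag-1 autocorrelation of scores:", np.corrcoef(z[:-1], z[1:])[0, 1])
```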

3.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method uses sequences of importance-weighted Expectation Maximization steps to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach, which is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which approximates the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.
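The sketch below illustrates only the final stage of such a scheme: importance sampling with a mixture of Student-t densities as the candidate for a bimodal target kernel. The IS-weighted EM fitting step is omitted, and the two hand-picked components, the target kernel and all constants are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def target_kernel(x):
    # Bimodal, unnormalized target density (illustrative).
    return np.exp(-0.5 * (x - 3.0) ** 2) + np.exp(-0.5 * (x + 3.0) ** 2)

# Candidate: an equal-weight mixture of two Student-t densities, standing in
# for the output of the IS-weighted EM fitting step.
locs, scale, df, w = np.array([-3.0, 3.0]), 1.2, 5, np.array([0.5, 0.5])

def q_pdf(x):
    return sum(wi * stats.t.pdf(x, df, loc=li, scale=scale)
               for wi, li in zip(w, locs))

n = 20000
comp = rng.choice(2, size=n, p=w)                      # pick mixture components
draws = stats.t.rvs(df, loc=locs[comp], scale=scale, random_state=rng)
iw = target_kernel(draws) / q_pdf(draws)               # importance weights

print("IS estimate of the target mean (should be near 0):",
      np.sum(iw * draws) / np.sum(iw))
```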

4.
In practical econometric analysis we face the problem of how to specify structural equations. The conventional t-test of coefficients is apparently inappropriate. The smallest root, say λ, of a certain determinantal equation provides a basis for the test of overidentifying restrictions. A preliminary test based on λ may give a decision rule for choosing a structural equation from nested alternatives; however, ambiguity remains in specifying the significance level. We propose a decision method called the unbiased decision rule, unbiased in the sense that we reach a correct decision with probability greater than one half. The critical points are found as the medians of non-central F-distributions, whose degrees of freedom and non-centrality parameter are determined by the properties of the contending models. We also discuss the implications of the unbiased decision rule in the context of conventional pre-testing.
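A minimal sketch of the rule's critical point, assuming hypothetical degrees of freedom and noncentrality (in practice these are determined by the contending models):

```python
from scipy import stats

dfn, dfd, nc = 3, 40, 2.5   # hypothetical values, for illustration only
critical_point = stats.ncf.median(dfn, dfd, nc)
print("critical point (median of noncentral F):", critical_point)

# Decide between nested structural equations: reject the restriction when the
# observed statistic exceeds this median. By construction of the median, the
# decision is correct with probability above one half under either model.
```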

5.
I propose a new multivariate GARCH specification that maintains positive definiteness of the conditional covariance matrix. The idea is to specify the dynamics in the matrix logarithm of the conditional covariance. Because the matrix exponential transformation ensures positive definiteness, the dynamics can be specified without a positive definiteness constraint. This affords a variety of specifications and, in particular, the dynamics of the matrix logarithm can be specified element by element. I discuss specifications with leverage effects, estimation with multivariate Gaussian and t-distributions, and diagnostics that evaluate the appropriateness of the matrix exponential specification.
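A minimal sketch of the central device, with an arbitrary symmetric updating rule standing in for the paper's full element-by-element specification (all coefficients are illustrative):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(3)
H = np.array([[1.0, 0.3], [0.3, 2.0]])        # initial conditional covariance
A = np.real(logm(H))                          # its matrix logarithm
A_bar = A.copy()                              # long-run level (illustrative)

for t in range(5):
    eps = rng.standard_normal(2)
    shock = np.outer(eps, eps) - np.eye(2)    # symmetric, mean-zero news term
    # Unconstrained dynamics on the matrix log: any symmetric A is admissible.
    A = A_bar + 0.9 * (A - A_bar) + 0.05 * shock
    H = expm(A)                               # positive definite by construction
    assert np.all(np.linalg.eigvalsh(H) > 0)  # eigenvalues strictly positive
```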

6.
Small-sample properties of t-tests are compared with those of tests based on relative goodness-of-fit in the context of the first-order moving average time series model. Monte Carlo experiments reported in the paper suggest that the actual size of these t-tests greatly exceeds the theoretical large-sample significance levels, while the conformity of goodness-of-fit statistics to the appropriate chi-square or F-distributions is much closer. The evidence presented suggests that practitioners are well advised to employ goodness-of-fit tests as a check on the results of t-tests, particularly when the latter indicate 'significance'.

7.
Ya. Yu. Nikitin, Metrika, 2018, 81(6): 609–618
We consider two scale-free tests of normality based on the characterization of the symmetric normal law by Ahsanullah et al. (Normal and Student's t-Distributions and Their Applications, Springer, Berlin, 2014). Both tests have a U-empirical structure, but the first is of integral type while the second is of Kolmogorov type. We discuss the limiting behavior of the test statistics and calculate their local exact Bahadur efficiency for location, skew and contamination alternatives.

8.
This paper develops a testing framework for comparing the predictive accuracy of competing multivariate density forecasts with different predictive copulas, focusing on specific parts of the copula support. The tests are framed in the context of the Kullback–Leibler Information Criterion, using (out-of-sample) conditional likelihood and censored likelihood in order to focus the evaluation on the region of interest. Monte Carlo simulations document that the resulting test statistics have satisfactory size and power properties for realistic sample sizes. In an empirical application to daily changes of yields on government bonds of the G7 countries, we obtain insights into why the Student-t and Clayton mixture copula outperforms the other copulas considered; mixing the Clayton copula with the t-copula is particularly important for attaining high forecast accuracy in periods of jointly falling yields.
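The sketch below illustrates the censored likelihood idea in a univariate setting, assuming two simple forecast densities (normal and t(4)) and a left-tail region of interest; it uses a plain Diebold-Mariano-type t statistic rather than the paper's copula-based framework.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = stats.t.rvs(df=4, size=1000, random_state=rng)   # realizations (fat-tailed)
c = np.quantile(y, 0.10)                             # region of interest: y < c

def censored_logscore(pdf, cdf_at_c, y, c):
    # Keep the density inside the region; outside, lump all mass into 1 - F(c).
    return np.where(y < c, np.log(pdf(y)), np.log(1.0 - cdf_at_c))

s_norm = censored_logscore(stats.norm.pdf, stats.norm.cdf(c), y, c)
s_t4 = censored_logscore(lambda x: stats.t.pdf(x, df=4), stats.t.cdf(c, df=4), y, c)

d = s_t4 - s_norm                                    # score differences
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print("DM-type t statistic (positive favours the t(4) forecast):", t_stat)
```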

9.
This paper presents an inference approach for dependent data in time series, spatial, and panel data applications. The method involves constructing t and Wald statistics using a cluster covariance matrix estimator (CCE). We use an approximation that takes the number of clusters/groups as fixed and the number of observations per group as large. The resulting limiting distributions of the t and Wald statistics are standard t and F distributions, where the number of groups plays the role of the sample size. Using a small number of groups is analogous to the 'fixed-b' asymptotics of Kiefer and Vogelsang (2002, 2005) for heteroskedasticity and autocorrelation consistent inference. We provide simulation evidence demonstrating that the procedure substantially outperforms conventional inference procedures.
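A minimal sketch of a CCE t statistic with a small fixed number of groups G, compared against t(G-1) critical values; the data generating process is simulated, and the G/(G-1) small-sample scaling is one common choice rather than necessarily the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
G, n = 10, 200                                   # 10 clusters, 200 obs each
x = rng.standard_normal(G * n)
g = np.repeat(np.arange(G), n)                   # cluster labels
y = 1.0 + 0.5 * x + rng.standard_normal(G * n)   # true slope 0.5

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta

# Cluster covariance estimator: sum over clusters of (X_g' u_g)(X_g' u_g)'.
XtX_inv = np.linalg.inv(X.T @ X)
meat = sum(np.outer(X[g == j].T @ u[g == j], X[g == j].T @ u[g == j])
           for j in range(G))
V = XtX_inv @ meat @ XtX_inv * G / (G - 1)       # small-G scaling (one convention)

t_stat = (beta[1] - 0.5) / np.sqrt(V[1, 1])
# G plays the role of the sample size: compare with t(G-1), not N(0,1).
print("t =", t_stat, " 5% critical value:", stats.t.ppf(0.975, G - 1))
```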

10.
This paper introduces tests for residual serial correlation in cointegrating regressions. The tests are devised in the frequency domain using spectral measure estimates. The asymptotic distributions of the tests are derived and test consistency is established. The asymptotic distributions are obtained using assumptions and methods different from those of Grenander and Rosenblatt (1957) and Durlauf (1991). Small-scale simulation results are reported to illustrate the finite-sample performance of the tests under various distributional assumptions on the data generating process; the distributions considered are the normal and t-distributions. The tests are shown to have stable size at sample sizes of 50 or 100 and to be reasonably powerful against ARMA residuals. An empirical application of the tests to 'weak-form' efficiency in the foreign exchange market is also reported.

11.
We consider classes of multivariate distributions that can model skewness and are closed under orthogonal transformations. We review two classes of such distributions proposed in the literature and focus on a particular, yet quite flexible, subclass of one of these classes. Members of this subclass are defined by affine transformations of univariate (skewed) distributions that ensure the existence of a set of coordinate axes along which there is independence and the marginals are known analytically. The choice of an appropriate m-dimensional skewed distribution is thereby reduced to the simpler problem of choosing m univariate skewed distributions. We introduce a Bayesian model comparison setup for the selection of these univariate skewed distributions. The analysis does not rely on the existence of moments (allowing for any tail behaviour) and uses equivalent priors on the common characteristics of the different models. Finally, we apply this framework to multi-output stochastic frontiers using data from Dutch dairy farms.
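A minimal sketch of the subclass, assuming skew-normal marginals (one possible choice of univariate skewed family) and a random orthogonal matrix for the affine transformation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
m, n = 3, 5000
skews = [4.0, -2.0, 0.0]                        # one skewness choice per axis

# m independent univariate skewed variates: along these coordinate axes
# there is independence and the marginals are known analytically.
z = np.column_stack([stats.skewnorm.rvs(a, size=n, random_state=rng)
                     for a in skews])

A = stats.ortho_group.rvs(m, random_state=rng)  # random orthogonal transformation
b = np.zeros(m)                                 # location shift (zero here)
x = z @ A.T + b                                 # skewed multivariate sample

# Model choice reduces to choosing the m univariate skewed laws.
print("sample skewness along the original axes:", stats.skew(z, axis=0))
```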

12.
A small-scale vector autoregression (VAR) is used to shed light on the roles of extreme shocks and non-linearities during stress events observed in the economy. The model focuses on the link between credit/financial markets and the real economy and is estimated on US quarterly data for the period 1984–2013. Extreme shocks are accounted for by assuming t-distributed reduced-form shocks; non-linearity is allowed for through a possible regime switch in the shock propagation mechanism. Strong evidence for fat tails in the error distributions is found. Moreover, the results suggest that it is accounting for extreme shocks, rather than explicitly modeling non-linearity, that contributes to the explanatory power of the model. Finally, it is shown that the accuracy of density forecasts improves when non-linearities and fat-tailed shock distributions are considered.
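A toy simulation of the two ingredients, fat-tailed shocks and a regime switch in propagation; the propagation matrices, the t(4) shocks and the regime trigger are all illustrative assumptions, not the estimated model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
T, nu = 400, 4                                    # fat tails: t(4) shocks
A_calm = np.array([[0.5, 0.1], [0.0, 0.6]])       # propagation in calm regime
A_stress = np.array([[0.8, 0.4], [0.3, 0.7]])     # stronger spillovers in stress

y = np.zeros((T, 2))
for t in range(1, T):
    stress = abs(y[t - 1, 0]) > 2.0               # crude regime indicator
    A = A_stress if stress else A_calm
    # Standardize the t shocks to unit variance.
    shock = stats.t.rvs(nu, size=2, random_state=rng) * np.sqrt((nu - 2) / nu)
    y[t] = A @ y[t - 1] + shock

print("excess kurtosis of series 1:", stats.kurtosis(y[:, 0]))
```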

13.
We consider pseudo-panel data models constructed from repeated cross sections in which the number of individuals per group is large relative to the number of groups and time periods. First, we show that, when time-invariant group fixed effects are neglected, the OLS estimator does not converge in probability to a constant but rather to a random variable. Second, we show that, while the fixed-effects (FE) estimator is consistent, the usual t statistic is not asymptotically normally distributed, and we propose a new robust t statistic whose asymptotic distribution is standard normal. Third, we propose efficient GMM estimators using the orthogonality conditions implied by grouping, and we provide t tests that are valid even in the presence of time-invariant group effects. Our Monte Carlo results show that the proposed GMM estimator is more precise than the FE estimator and that our new t test has good size and is powerful.
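A minimal sketch of the pseudo-panel construction: simulated repeated cross sections are collapsed into cohort-by-period cell means and the within (FE) transformation is applied. The data generating process is illustrative, and the paper's robust t statistic and GMM estimators are not implemented here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
G, T, n_cell = 8, 6, 500                       # few cohorts/periods, many individuals
rows = []
for g in range(G):
    alpha_g = rng.standard_normal()            # time-invariant group effect
    for t in range(T):
        x = rng.standard_normal(n_cell)
        y = alpha_g + 0.7 * x + rng.standard_normal(n_cell)
        rows.append(pd.DataFrame({"g": g, "t": t, "x": x, "y": y}))
df = pd.concat(rows, ignore_index=True)

# Collapse repeated cross sections into cohort-by-period cell means.
cell = df.groupby(["g", "t"], as_index=False)[["x", "y"]].mean()

# Within (FE) transformation removes the group effects alpha_g.
dem = cell.groupby("g")[["x", "y"]].transform(lambda s: s - s.mean())
beta_fe = (dem["x"] * dem["y"]).sum() / (dem["x"] ** 2).sum()
print("FE estimate of beta (true value 0.7):", beta_fe)
```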

14.
This article deals with the estimation of the parameters of an α-stable distribution by indirect inference, using the skewed-t distribution as the auxiliary model. The latter is a good candidate since it has the same number of parameters as the α-stable distribution, with each parameter playing a similar role. To improve the finite-sample properties of the estimator, we use constrained indirect inference. A Monte Carlo study shows that this method delivers estimators with good finite-sample properties. We provide an empirical application to the distribution of jumps in S&P 500 index returns.

15.
We analyze the asymptotic distributions associated with the seasonal unit root tests of Hylleberg et al. (1990) for quarterly data when the innovations follow a moving average process. Although both the t- and F-type tests suffer from scale and shift effects relative to the presumed null distributions when a fixed order of autoregressive augmentation is applied, these effects disappear when the order of augmentation is sufficiently large. However, as found by Burridge and Taylor (2001) for the autoregressive case, the individual t-ratio tests at the semi-annual frequency are not pivotal even with high orders of augmentation, although the corresponding joint F-type statistic is pivotal. Monte Carlo simulations verify the importance of the order of augmentation for finite samples generated by seasonally integrated moving average processes.
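A minimal sketch of the HEGY regression construction for quarterly data (level filters plus OLS t-ratios); the random-walk input series and the omission of augmentation lags, deterministic terms and critical values are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 200
y = np.cumsum(rng.standard_normal(N))           # illustrative I(1) input series

# HEGY level filters (y1 zero frequency, y2 semi-annual, y3 annual pair);
# the filtered series start at t = 3, 3 and 2 respectively.
y1 = y[3:] + y[2:-1] + y[1:-2] + y[:-3]         #  (1 + L + L^2 + L^3) y_t
y2 = -(y[3:] - y[2:-1] + y[1:-2] - y[:-3])      # -(1 - L + L^2 - L^3) y_t
y3 = -(y[2:] - y[:-2])                          # -(1 - L^2) y_t

# Regress the seasonal difference D4 y_t on y1_{t-1}, y2_{t-1}, y3_{t-1}, y3_{t-2}.
t = np.arange(5, N)
d4 = y[t] - y[t - 4]
X = np.column_stack([
    np.ones(len(t)),
    y1[t - 4],                                  # y1_{t-1} (index offset 3)
    y2[t - 4],                                  # y2_{t-1}
    y3[t - 3],                                  # y3_{t-1} (index offset 2)
    y3[t - 4],                                  # y3_{t-2}
])
beta, *_ = np.linalg.lstsq(X, d4, rcond=None)
resid = d4 - X @ beta
s2 = resid @ resid / (len(t) - X.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
print("HEGY t-ratios (pi1..pi4):", beta[1:] / se[1:])
```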

16.
Maximum entropy autoregressive conditional heteroskedasticity model
In many applications, it has been found that the autoregressive conditional heteroskedasticity (ARCH) model under conditional normal or Student's t distributions is not general enough to account for the excess kurtosis in the data. Moreover, asymmetry in financial data is rarely modeled in a systematic way. In this paper, we suggest a general density function based on the maximum entropy (ME) approach that takes account of asymmetry, excess kurtosis and high peakedness. The ME principle is based on the efficient use of available information, and, as is well known, many of the standard families of distributions can be derived from the ME approach. We demonstrate how to extract information functionals from the data in the form of moment functions, and we propose a test procedure for selecting appropriate moment functions. Our procedure is illustrated with an application to NYSE stock returns. The empirical results reveal that the ME approach with a few moment functions leads to a model that captures the stylized facts quite effectively.
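A minimal sketch of the ME idea: fit a density p(x) proportional to exp(sum_k lambda_k g_k(x)) so that given moment conditions hold, by minimizing the convex dual on a grid. The moment functions (x, x^2, |x|) and the t(5) sample are illustrative assumptions, not the paper's choices, and the ARCH dynamics are omitted.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(10)
data = stats.t.rvs(df=5, size=5000, random_state=rng)   # leptokurtic sample
g = lambda x: np.column_stack([x, x ** 2, np.abs(x)])   # illustrative moment functions
m_hat = g(data).mean(axis=0)                            # sample moments to match

grid = np.linspace(-15.0, 15.0, 4001)
dx = grid[1] - grid[0]
G = g(grid)

def dual(lam):
    # Convex dual of the ME problem: log partition function minus lam . m_hat.
    if lam[1] >= 0:                                     # coefficient on x^2 must be
        return np.inf                                   # negative for integrability
    return np.log(np.exp(G @ lam).sum() * dx) - lam @ m_hat

res = minimize(dual, x0=np.array([0.0, -0.5, 0.0]), method="Nelder-Mead")
p = np.exp(G @ res.x)
p /= p.sum() * dx                                       # fitted ME density on the grid
print("target moments:", m_hat)
print("fitted moments:", (G * p[:, None]).sum(axis=0) * dx)
```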

17.
In this paper, we propose residual-based tests for cointegration in general panels with cross-sectional dependency, endogeneity and various heterogeneities. The residuals are obtained from the usual least squares estimation of the postulated cointegrating relationships for each individual unit, and the nonlinear IV panel unit root testing procedure is applied to the panels of fitted residuals, using as instruments nonlinear transformations of the adaptively fitted lagged residuals. The t-ratio, based on the nonlinear IV estimator, is then constructed to test for a unit root in the fitted residuals for each cross-section. We show that these nonlinear IV t-ratios are asymptotically normal and cross-sectionally independent under the null hypothesis of no cointegration. The average or the minimum of the IV t-ratios can therefore be used to test the null of a fully non-cointegrated panel against the alternative of a mixed panel, i.e., a panel in which only some units are cointegrated. We also consider the maximum of the IV t-ratios to test a mixed panel against a fully cointegrated panel. The critical values of the minimum, maximum and average tests are easily obtained from the standard normal distribution function. Our simulation results indicate that the residual-based tests for cointegration perform quite well in finite samples.
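A minimal sketch of the second-stage t-ratio for a single cross-section, assuming the instrument F(x) = x·exp(-c|x|), one standard integrable transformation used in nonlinear IV unit root testing; the adaptive fitting of lagged residuals and the panel combination step are omitted, and c is illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
u = np.cumsum(rng.standard_normal(300))        # fitted residuals under no cointegration
du, ulag = np.diff(u), u[:-1]
c = 3.0 / np.std(ulag)                         # illustrative scaling of the instrument
F = ulag * np.exp(-c * np.abs(ulag))           # integrable instrument F(u_{t-1})

# IV estimate of rho in: du_t = rho * u_{t-1} + e_t, instrumented by F.
rho_iv = (F @ du) / (F @ ulag)
e = du - rho_iv * ulag
s2 = e @ e / (len(e) - 1)
t_iv = rho_iv * np.sqrt((F @ ulag) ** 2 / (s2 * (F @ F)))

# Under the null this t-ratio is asymptotically N(0,1), so average, minimum or
# maximum statistics across cross-sections use standard normal critical values.
print("nonlinear IV t-ratio:", t_iv)
```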

18.
Poly-t densities are defined by the property that their kernel is a product, or a ratio of products, of multivariate t-density kernels. As discussed in Drèze (1977), these densities arise as Bayesian posterior densities for regression coefficients under a variety of specifications for the prior density and the data generating process. We have therefore developed methods and computer algorithms to evaluate integrating constants and other characteristics of poly-t densities with no more than a single quadratic form in the numerator (Section 2). As a by-product of our analysis, we have also derived an algorithm for computing the moments of positive definite quadratic forms in normal variables (Section 3). In Section 4 we discuss inference on the sampling variances associated with the models discussed in Drèze (1977).

19.
Approximately normal tests for equal predictive accuracy in nested models
Forecast evaluation often compares a parsimonious null model to a larger model that nests it. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) of the parsimonious model is therefore expected to be smaller than that of the larger model, and we describe how to adjust the MSPEs to account for this noise. We propose applying the standard methods of West (1996, Econometrica 64, 1067–1084) to test whether the adjusted mean squared error difference is zero. We refer to the nonstandard limiting distributions derived in Clark and McCracken (2001, Journal of Econometrics 105, 85–110; 2005a, Econometric Reviews 24, 369–404) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure.
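A minimal sketch of the MSPE adjustment and the resulting approximately normal t-test; the no-change null forecast and the noisy larger-model forecast are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
P = 150                                          # number of out-of-sample forecasts
y = rng.standard_normal(P)                       # null: the parsimonious model is true
f1 = np.zeros(P)                                 # nested (no-change) forecast
f2 = 0.2 * rng.standard_normal(P)                # larger model adds estimation noise

# Adjusted MSPE difference: the larger model's squared error is reduced by
# (f1 - f2)^2, the term reflecting estimation noise under the null.
adj = (y - f1) ** 2 - ((y - f2) ** 2 - (f1 - f2) ** 2)
t_stat = adj.mean() / (adj.std(ddof=1) / np.sqrt(P))

# One-sided test; standard normal critical values give actual size close to,
# but slightly below, nominal.
print("t =", t_stat, " 10% one-sided critical value:", stats.norm.ppf(0.90))
```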

20.
We propose a novel semi-nonparametric distribution that is feasibly parameterized to represent the non-Gaussianities of asset return distributions. Our Moments Expansion (ME) density gains in simplicity from its innovative polynomials, which are defined by the difference between the nth power of the random variable and the nth moment of the density used as the basis. We show that the Gram–Charlier distribution is a particular case of the ME-type densities, the latter being more tractable and easier to implement when quadratic transformations are used to ensure positiveness. In an empirical application to asset returns, the ME model outperforms both standard and non-Gaussian GARCH models along several risk-forecasting dimensions.
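For reference, a sketch of the Gram–Charlier special case mentioned above: a standard normal density adjusted by Hermite polynomials in the skewness and excess kurtosis parameters (values illustrative; positivity is not enforced here).

```python
import numpy as np
from scipy import stats

def gram_charlier_pdf(x, skew=-0.3, exkurt=1.0):
    # Standard normal adjusted by probabilists' Hermite polynomials He3, He4.
    he3 = x ** 3 - 3 * x
    he4 = x ** 4 - 6 * x ** 2 + 3
    return stats.norm.pdf(x) * (1 + skew / 6 * he3 + exkurt / 24 * he4)

x = np.linspace(-5, 5, 1001)
p = gram_charlier_pdf(x)
# The Hermite terms integrate to zero against the normal, so p integrates to 1.
print("integrates to ~1:", p.sum() * (x[1] - x[0]))
```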
