Similar articles
20 similar articles found (search time: 31 ms)
1.
Continuous-time stochastic volatility models are becoming an increasingly popular way to describe moderate and high-frequency financial data. Barndorff-Nielsen and Shephard (2001a) proposed a class of models where the volatility behaves according to an Ornstein–Uhlenbeck (OU) process, driven by a positive Lévy process without a Gaussian component. These models introduce discontinuities, or jumps, into the volatility process. They also consider superpositions of such processes, and we extend that to the inclusion of a jump component in the returns. In addition, we allow for leverage effects and we introduce separate risk pricing for the volatility components. We design and implement practically relevant inference methods for such models, within the Bayesian paradigm. The algorithm is based on Markov chain Monte Carlo (MCMC) methods and we use a series representation of Lévy processes. MCMC methods for such models are complicated by the fact that parameter changes will often induce a change in the distribution of the representation of the process and the associated problem of overconditioning. We avoid this problem by dependent thinning methods. An application to stock price data shows the models perform very well, even in the face of data with rapid changes, especially if a superposition of processes with different risk premiums and a leverage effect is used.
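As a concrete illustration of the volatility specification (not the paper's MCMC inference scheme), the sketch below simulates a Gamma-OU volatility path on a discrete grid: the background driving Lévy process is compound Poisson with exponential jumps and no Gaussian component, so the jumps in the volatility are visible directly. The parameter values (nu, alpha, lam) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters: stationary marginal sigma2 ~ Gamma(nu, 1/alpha),
# mean-reversion rate lam of the OU process.
nu, alpha, lam = 4.0, 8.0, 0.05
n, dt = 2000, 1.0                        # grid points and step length

sigma2 = np.empty(n)
sigma2[0] = rng.gamma(nu, 1.0 / alpha)   # start from the stationary law
for i in range(1, n):
    # The BDLP of a Gamma-OU process is compound Poisson: jump intensity
    # lam * nu per unit time, Exp(alpha) jump sizes, no Gaussian part.
    k = rng.poisson(lam * nu * dt)
    jump_times = rng.uniform(0.0, dt, size=k)
    jumps = rng.exponential(1.0 / alpha, size=k)
    sigma2[i] = sigma2[i - 1] * np.exp(-lam * dt) + np.sum(
        jumps * np.exp(-lam * (dt - jump_times)))

# Discretized returns with integrated variance approximated by sigma2 * dt
# (no leverage effect and no return jumps in this minimal version).
returns = rng.normal(0.0, np.sqrt(sigma2 * dt))
```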

2.
We propose methods for testing hypotheses of non-causality at various horizons, as defined in Dufour and Renault (Econometrica, 66 (1998), 1099–1125). We study in detail the case of VAR models and we propose linear methods based on running vector autoregressions at different horizons. While the hypotheses considered are nonlinear, the proposed methods only require linear regression techniques as well as standard Gaussian asymptotic distributional theory. Bootstrap procedures are also considered. For the case of integrated processes, we propose extended regression methods that avoid nonstandard asymptotics. The methods are applied to a VAR model of the US economy.  
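A minimal version of the horizon-h regression idea is sketched below: to test that x does not cause y at horizon h, regress y_{t+h} on p lags of both variables and test the joint nullity of the x-lag coefficients. The function name and the plain F-statistic are illustrative simplifications; for h > 1 the regression errors are MA(h-1), so in practice a HAC covariance or the bootstrap procedures considered in the paper should be used.

```python
import numpy as np
from scipy import stats

def horizon_noncausality_ftest(y, x, p=4, h=2):
    """F-type test of 'x does not cause y at horizon h': regress y_{t+h}
    on p lags of y and x and test the x-lag block (illustrative sketch)."""
    T = len(y)
    X, Y = [], []
    for t in range(p, T - h + 1):
        X.append(np.r_[1.0, y[t - p:t][::-1], x[t - p:t][::-1]])
        Y.append(y[t + h - 1])
    X, Y = np.asarray(X), np.asarray(Y)
    n, k = X.shape
    b_u = np.linalg.lstsq(X, Y, rcond=None)[0]
    rss_u = np.sum((Y - X @ b_u) ** 2)
    Xr = X[:, :1 + p]                          # restricted: drop the x lags
    b_r = np.linalg.lstsq(Xr, Y, rcond=None)[0]
    rss_r = np.sum((Y - Xr @ b_r) ** 2)
    F = ((rss_r - rss_u) / p) / (rss_u / (n - k))
    return F, 1.0 - stats.f.cdf(F, p, n - k)
```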

3.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel) in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.
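The full MitISEM algorithm fits a mixture of Student-t densities; the sketch below shows only the core building block, an importance-sampling-weighted EM update for a single t component, assuming scipy's multivariate_t. The function, its defaults, and the fixed degrees of freedom are illustrative simplifications, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def is_weighted_em_t(log_kernel, mu, Sigma, nu=5.0, n_draws=5000,
                     n_iter=10, rng=None):
    """Fit a Student-t candidate to a target known only up to a kernel, via
    importance-sampling-weighted EM steps (one-component simplification)."""
    rng = rng or np.random.default_rng(0)
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    d = len(mu)
    for _ in range(n_iter):
        cand = stats.multivariate_t(loc=mu, shape=Sigma, df=nu)
        x = cand.rvs(n_draws, random_state=rng)
        logw = np.array([log_kernel(xi) for xi in x]) - cand.logpdf(x)
        w = np.exp(logw - logw.max())
        w /= w.sum()                         # normalized importance weights
        # E-step: expected latent scales of the t distribution
        diff = x - mu
        delta = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
        u = (nu + d) / (nu + delta)
        # M-step: importance-weighted location and scale updates
        mu = (w * u) @ x / np.sum(w * u)
        diff = x - mu
        Sigma = (diff * (w * u)[:, None]).T @ diff / np.sum(w)
    return mu, Sigma
```

The fitted (mu, Sigma) can then be used as the IS or MH candidate; the paper's mixture version adds components where the weights indicate a poor fit.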

4.
This paper is concerned with estimating preference functionals for choice under risk from the choice behaviour of individuals. We note that there is heterogeneity in behaviour between individuals and within individuals. By ‘heterogeneity between individuals’ we mean that people are different, in terms of both their preference functionals and their parameters for these functionals. By ‘heterogeneity within individuals’ we mean that the behaviour may be different even by the same individual for the same choice problem. We propose methods of taking into account all forms of heterogeneity, concentrating particularly on using a Mixture Model to capture the heterogeneity of preference functionals.

5.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be ‘close’ to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models. The results indicate the possible usefulness of the neural network approach.

6.
In situations where a regression model is subject to one or more breaks, it is shown that it can be optimal to use pre-break data to estimate the parameters of the model used to compute out-of-sample forecasts. The issue of how best to exploit the trade-off that might exist between bias and forecast error variance is explored and illustrated for the multivariate regression model under the assumption of strictly exogenous regressors. In practice, when this assumption cannot be maintained and both the time and size of the breaks are unknown, the optimal choice of the observation window will be subject to further uncertainties that make exploiting the bias–variance trade-off difficult. To that end we propose a new set of cross-validation methods for selection of a single estimation window, and weighting or pooling methods for combination of forecasts based on estimation windows of different lengths. Monte Carlo simulations are used to show when these procedures work well compared with methods that ignore the presence of breaks.
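A minimal sketch of the window-selection idea, under simplifying assumptions (a single-equation model and a simple recursive pseudo-out-of-sample exercise standing in for the paper's cross-validation methods):

```python
import numpy as np

def select_window_by_cv(y, X, windows, holdout=20):
    """For each candidate window length m, forecast the last `holdout`
    observations recursively using only the most recent m observations,
    and pick the window with the smallest mean squared forecast error."""
    T = len(y)
    msfe = {}
    for m in windows:
        errs = []
        for t in range(T - holdout, T):
            lo = max(0, t - m)                     # estimation window [lo, t)
            b = np.linalg.lstsq(X[lo:t], y[lo:t], rcond=None)[0]
            errs.append(y[t] - X[t] @ b)
        msfe[m] = np.mean(np.square(errs))
    return min(msfe, key=msfe.get), msfe
```

In the same spirit, the combination alternative weights or pools the forecasts from several window lengths, for example with weights inversely proportional to each window's MSFE.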

7.
We provide an accessible introduction to graph-theoretic methods for causal analysis. Building on the work of Swanson and Granger (Journal of the American Statistical Association, Vol. 92, pp. 357–367, 1997), and generalizing to a larger class of models, we show how to apply graph-theoretic methods to selecting the causal order for a structural vector autoregression (SVAR). We evaluate the PC (causal search) algorithm in a Monte Carlo study. The PC algorithm uses tests of conditional independence to select among the possible causal orders, or at least to reduce the admissible causal orders to a narrow equivalence class. Our findings suggest that graph-theoretic methods may prove to be a useful tool in the analysis of SVARs.

8.
Two new methodologies are introduced to improve inference in the evaluation of mutual fund performance against benchmarks. First, the benchmark models are estimated using panel methods with both fund and time effects. Second, the non-normality of individual mutual fund returns is accounted for by using panel bootstrap methods. We also augment the standard benchmark factors with fund-specific characteristics, such as fund size. Using a dataset of UK equity mutual fund returns, we find that fund size has a negative effect on the average fund manager's benchmark-adjusted performance. Further, when we allow for time effects and the non-normality of fund returns, we find no evidence that even the best performing fund managers can significantly outperform the augmented benchmarks after fund management charges are taken into account.
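The sketch below shows the bootstrap logic for the "best fund" question in a deliberately simplified form: fund-by-fund OLS alphas rather than the paper's panel estimation with fund and time effects, and i.i.d. resampling of whole time periods (which preserves cross-fund dependence and the non-normality of returns). Names and defaults are illustrative.

```python
import numpy as np

def bootstrap_max_alpha(R, F, n_boot=999, rng=None):
    """R: (T, N) fund excess returns; F: (T, K) benchmark factors.
    Test whether the best fund's alpha exceeds what luck alone would
    produce, by resampling time periods under the null of zero alphas."""
    rng = rng or np.random.default_rng(0)
    T, N = R.shape
    X = np.column_stack([np.ones(T), F])
    B = np.linalg.lstsq(X, R, rcond=None)[0]      # (1+K, N) coefficients
    alphas = B[0]
    resid = R - X @ B
    fitted_null = X[:, 1:] @ B[1:]                # fitted values with alpha = 0
    max_alpha_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)          # resample whole periods
        Rb = fitted_null[idx] + resid[idx]
        Bb = np.linalg.lstsq(X[idx], Rb, rcond=None)[0]
        max_alpha_boot[b] = Bb[0].max()           # best fund under the null
    pval = np.mean(max_alpha_boot >= alphas.max())
    return alphas, pval
```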

9.
Graph-theoretic methods of causal search based on the ideas of Pearl (2000), Spirtes et al. (2000), and others have been applied by a number of researchers to economic data, particularly by Swanson and Granger (1997) to the problem of finding a data-based contemporaneous causal order for the structural vector autoregression, rather than, as is typically done, assuming a weakly justified Choleski order. Demiralp and Hoover (2003) provided Monte Carlo evidence that such methods were effective, provided that signal strengths were sufficiently high. Unfortunately, in applications to actual data, such Monte Carlo simulations are of limited value, as the causal structure of the true data-generating process is necessarily unknown. In this paper, we present a bootstrap procedure that can be applied to actual data (i.e. without knowledge of the true causal structure). We show with an applied example and a simulation study that the procedure is an effective tool for assessing our confidence in causal orders identified by graph-theoretic search algorithms.

10.
We propose new information criteria for impulse response function matching estimators (IRFMEs). These estimators yield sampling distributions of the structural parameters of dynamic stochastic general equilibrium (DSGE) models by minimizing the distance between sample and theoretical impulse responses. First, we propose an information criterion to select only the responses that produce consistent estimates of the true but unknown structural parameters: the Valid Impulse Response Selection Criterion (VIRSC). The criterion is especially useful for mis-specified models. Second, we propose a criterion to select the impulse responses that are most informative about DSGE model parameters: the Relevant Impulse Response Selection Criterion (RIRSC). These criteria can be used in combination to select the subset of valid impulse response functions with minimal dimension that yields asymptotically efficient estimators. The criteria are general enough to apply to impulse responses estimated by VARs, local projections, and simulation methods. We show that the use of our criteria significantly affects estimates and inference about key parameters of two well-known new Keynesian DSGE models. Monte Carlo evidence indicates that the criteria yield gains in terms of finite sample bias, as well as test statistics whose behavior is better approximated by first-order asymptotic theory. Thus, our criteria improve existing methods used to implement IRFMEs.
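For readers unfamiliar with IRF matching itself, a toy sketch: choose parameters to minimize the weighted distance between estimated responses and the model-implied responses. Which responses enter the vector is exactly what VIRSC/RIRSC formalize; here the selection, the AR(1) example, and the identity weighting matrix are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def irf_match(irf_hat, W, model_irf, theta0):
    """Minimize (irf_hat - model_irf(theta))' W (irf_hat - model_irf(theta)).
    W would typically be the inverse covariance of the estimated IRFs."""
    def loss(theta):
        g = irf_hat - model_irf(theta)
        return g @ W @ g
    return minimize(loss, theta0, method="Nelder-Mead")

# Toy example: match an AR(1) coefficient to responses at horizons 1..5,
# where the model-implied response at horizon h is theta**h.
horizons = np.arange(1, 6)
irf_hat = np.array([0.52, 0.28, 0.16, 0.07, 0.05])   # hypothetical estimates
res = irf_match(irf_hat, np.eye(5), lambda th: th[0] ** horizons, [0.3])
print(res.x)
```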

11.
In this paper we derive a semiparametric efficient adaptive estimator of an asymmetric GARCH model. Applying some general results from Drost et al. [1997. The Annals of Statistics 25, 786–818], we first estimate the unknown density function of the disturbances by kernel methods, then apply a one-step Newton–Raphson method to obtain a more efficient estimator than the quasi-maximum likelihood estimator. The proposed semiparametric estimator is adaptive for parameters appearing in the conditional standard deviation model with respect to the unknown distribution of the disturbances.

12.
In this paper, we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints, we draw on ideas from the dynamic model averaging literature which achieve reductions in the computational burden through the use of forgetting factors. We then extend the TVP-VAR so that its dimension can change over time. For instance, we can have a large TVP-VAR as the forecasting model at some points in time, but a smaller TVP-VAR at others. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output and interest rates demonstrates the feasibility and usefulness of our approach.
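The forgetting-factor device is easy to see in a single-equation sketch: in place of a state-equation variance matrix, the filtered state covariance is inflated by 1/lambda each period, so no state simulation is needed. The values of lambda and kappa, and the EWMA measurement-variance update, are illustrative assumptions.

```python
import numpy as np

def tvp_forgetting_filter(y, X, lam=0.99, kappa=0.96):
    """Kalman-style filter for a time-varying parameter regression
    y[t] = X[t] @ theta_t + e_t, using a forgetting factor lam for the
    state covariance and an EWMA (factor kappa) for the error variance."""
    T, k = X.shape
    theta = np.zeros(k)
    P = np.eye(k)
    H = np.var(y[:10]) if T >= 10 else 1.0
    thetas = np.empty((T, k))
    for t in range(T):
        P = P / lam                          # forgetting: no Q matrix needed
        e = y[t] - X[t] @ theta              # one-step-ahead forecast error
        S = X[t] @ P @ X[t] + H
        K = P @ X[t] / S                     # Kalman gain
        theta = theta + K * e
        P = P - np.outer(K, X[t]) @ P
        H = kappa * H + (1 - kappa) * e**2   # EWMA measurement variance
        thetas[t] = theta
    return thetas
```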

13.
In this paper, we use Monte Carlo (MC) testing techniques for testing linearity against smooth transition models. The MC approach allows us to introduce a new test that differs in two respects from the tests existing in the literature. First, the test is exact in the sense that the probability of rejecting the null when it is true is always less than or equal to the nominal size of the test. Second, the test is not based on an auxiliary regression obtained by replacing the model under the alternative by approximations based on a Taylor expansion. We also apply MC testing methods for size-correcting the test proposed by Luukkonen, Saikkonen and Teräsvirta (Biometrika, Vol. 75, 1988, p. 491). The results show that the power loss implied by the auxiliary regression-based test is non-existent compared with a supremum-based test, but is more substantial when compared with the three other tests under consideration.
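The exactness property comes from the rank-based Monte Carlo p-value (Dwass/Barnard): for a statistic that is pivotal under the null, the observed statistic and the simulated ones are exchangeable, so the test has exact size at conventional levels. A minimal generic sketch (the null-simulation function is the user's, and ties are ignored here for simplicity):

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_null_stat, n_rep=99, rng=None):
    """Monte Carlo test p-value. `simulate_null_stat(rng)` must return one
    draw of the test statistic computed on data generated under H0.
    With n_rep = 99, rejecting when p <= 0.05 gives an exact 5% test
    for a continuous, pivotal statistic."""
    rng = rng or np.random.default_rng(0)
    sims = np.array([simulate_null_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)
```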

14.
Regression discontinuity designs: A guide to practice
In regression discontinuity (RD) designs for evaluating causal effects of interventions, assignment to a treatment is determined at least partly by the value of an observed covariate lying on either side of a fixed threshold. These designs were first introduced in the evaluation literature by Thistlethwaite and Campbell [1960. Regression-discontinuity analysis: an alternative to the ex-post facto experiment. Journal of Educational Psychology 51, 309–317]. With the exception of a few unpublished theoretical papers, these methods did not attract much attention in the economics literature until recently. Starting in the late 1990s, there has been a large number of studies in economics applying and extending RD methods. In this paper we review some of the practical and theoretical issues in the implementation of RD methods.
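A standard sharp-RD estimator, sketched under simplifying assumptions (triangular kernel, bandwidth taken as given, although bandwidth choice is one of the practical issues such guides discuss):

```python
import numpy as np

def rd_local_linear(x, y, cutoff=0.0, bandwidth=1.0):
    """Sharp RD effect: local linear regressions on each side of the cutoff;
    the treatment effect is the jump between the two intercepts."""
    def side_fit(mask):
        xs, ys = x[mask] - cutoff, y[mask]
        w = np.maximum(0.0, 1.0 - np.abs(xs) / bandwidth)  # triangular kernel
        X = np.column_stack([np.ones_like(xs), xs])
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ ys)        # weighted LS
        return beta[0]                   # intercept = limit at the cutoff
    keep = np.abs(x - cutoff) < bandwidth
    right = side_fit(keep & (x >= cutoff))
    left = side_fit(keep & (x < cutoff))
    return right - left
```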

15.
In this paper, we consider bootstrapping cointegrating regressions. It is shown that the bootstrap, if properly implemented, generally yields consistent estimators and test statistics for cointegrating regressions. For cointegrating regression models driven by general linear processes, we employ the sieve bootstrap based on approximated finite-order vector autoregressions for the regression errors and the first differences of the regressors. In particular, we establish bootstrap consistency for the OLS method. The bootstrap method can thus be used to correct for the finite sample bias of the OLS estimator and to approximate the asymptotic critical values of the OLS-based test statistics in general cointegrating regressions. The bootstrap OLS procedure, however, is not efficient. For efficient estimation and hypothesis testing, we consider the procedures proposed by Saikkonen [1991. Asymptotically efficient estimation of cointegration regressions. Econometric Theory 7, 1–21] and Stock and Watson [1993. A simple estimator of cointegrating vectors in higher order integrated systems. Econometrica 61, 783–820], which rely on the regression augmented with leads and lags of the differenced regressors. The bootstrap versions of their procedures are shown to be consistent and can be used to conduct asymptotically valid inference. A Monte Carlo study is conducted to investigate the finite sample performance of the proposed bootstrap methods.
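A bivariate sketch of the sieve-bootstrap OLS step, under assumed simplifications (scalar regressor, fixed sieve order p): fit the cointegrating regression, fit a VAR(p) to the residuals and the differenced regressor, resample the VAR innovations i.i.d., rebuild the pseudo-data, and re-estimate the slope on each pseudo-sample. The resulting distribution can be used for bias correction or critical values.

```python
import numpy as np

def sieve_bootstrap_coint(y, x, p=2, n_boot=499, rng=None):
    """Sieve bootstrap for the OLS slope of y_t = a + b*x_t + u_t."""
    rng = rng or np.random.default_rng(0)
    T = len(y)
    Z = np.column_stack([np.ones(T), x])
    a, b = np.linalg.lstsq(Z, y, rcond=None)[0]
    u = y - a - b * x
    w = np.column_stack([u[1:], np.diff(x)])        # w_t = (u_t, dx_t)
    n = len(w)
    Y = w[p:]
    X = np.hstack([w[p - j:n - j] for j in range(1, p + 1)])
    A = np.linalg.lstsq(X, Y, rcond=None)[0]        # stacked VAR(p) coefficients
    eps = Y - X @ A                                 # sieve innovations
    betas = np.empty(n_boot)
    for ib in range(n_boot):
        e = eps[rng.integers(0, len(eps), size=n)]  # i.i.d. resampling
        rows = [w[t] for t in range(p)]             # initialize with sample values
        for t in range(p, n):
            lags = np.concatenate([rows[t - j] for j in range(1, p + 1)])
            rows.append(lags @ A + e[t])
        wb = np.asarray(rows)
        xb = np.concatenate([[x[0]], x[0] + np.cumsum(wb[:, 1])])
        yb = a + b * xb + np.concatenate([[u[0]], wb[:, 0]])
        Zb = np.column_stack([np.ones(T), xb])
        betas[ib] = np.linalg.lstsq(Zb, yb, rcond=None)[0][1]
    return b, betas
```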

16.
In this work, we analyze the performance of production units using the directional distance function, which measures the distance to the frontier of the production set along any direction in the input/output space. We show that this distance can be expressed as a simple transformation of the radial or hyperbolic distance. This formulation allows us to define robust directional distances along the lines of α-quantile or order-m partial frontiers, and also conditional directional distance functions, conditional on environmental factors. We propose simple methods of estimation and derive the asymptotic properties of our estimators.
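For reference, the full-frontier directional distance under variable returns to scale is the value of a linear program; the sketch below solves it with scipy's linprog. The paper's contribution is the robust (order-m, α-quantile) and conditional versions, which replace this full frontier by partial frontiers; this sketch covers only the standard full-frontier case.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, x0, y0, gx, gy):
    """Directional distance of unit (x0, y0) to the DEA frontier of the
    observed units (rows of X: inputs, rows of Y: outputs), direction
    (-gx, +gy), VRS technology:
        max beta  s.t.  X'l + beta*gx <= x0,  Y'l - beta*gy >= y0,
                        sum(l) = 1,  l >= 0."""
    n = X.shape[0]
    c = np.r_[np.zeros(n), -1.0]                # variables (l, beta); maximize beta
    A_ub = np.vstack([
        np.hstack([X.T, gx[:, None]]),          # input constraints
        np.hstack([-Y.T, gy[:, None]]),         # output constraints (flipped sign)
    ])
    b_ub = np.r_[x0, -y0]
    A_eq = np.r_[np.ones(n), 0.0][None, :]      # convexity: sum(l) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[-1] if res.success else np.nan
```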

17.
Dynamic Stochastic General Equilibrium (DSGE) models are now considered attractive by the profession not only from the theoretical perspective but also from an empirical standpoint. As a consequence of this development, methods for diagnosing the fit of these models are being proposed and implemented. In this article we illustrate how the concept of statistical identification, which was introduced and used by Spanos [Spanos, Aris, 1990. The simultaneous-equations model revisited: Statistical adequacy and identification. Journal of Econometrics 44, 87–105] to criticize traditional evaluation methods of Cowles Commission models, could be relevant for DSGE models. We conclude that the recently proposed model evaluation method, based on the DSGE–VAR(λ), might not satisfy the condition for statistical identification. However, our application also shows that the adoption of a FAVAR as a statistically identified benchmark leaves unaltered the support of the data for the DSGE model, and that a DSGE–FAVAR can be an optimal forecasting model.

18.
This paper uses free-knot and fixed-knot regression splines in a Bayesian context to develop methods for the nonparametric estimation of functions subject to shape constraints in models with log-concave likelihood functions. The shape constraints we consider include monotonicity, convexity and functions with a single minimum. A computationally efficient MCMC sampling algorithm is developed that converges faster than previous methods for non-Gaussian models. Simulation results indicate the monotonically constrained function estimates have good small sample properties relative to (i) unconstrained function estimates, and (ii) function estimates obtained from other constrained estimation methods when such methods exist. Also, asymptotic results show the methodology provides consistent estimates for a large class of smooth functions. Two detailed illustrations exemplify the ideas.

19.
The methods listed in the title are compared by means of a simulation study and a real-world application. The aspects compared via simulations are the performance of the tests for the cointegrating rank and the quality of the estimated cointegrating space. The subspace algorithm method, formulated in the state space framework and thus applicable to vector autoregressive moving average (VARMA) processes, performs at least comparably to the Johansen method. Both the Johansen procedure and the subspace algorithm cointegration analysis perform significantly better than Bierens' method. The real-world application is an investigation of the long-run properties of the one-sector neoclassical growth model for Austria. The results do not fully support the implications of the model with respect to cointegration. Furthermore, the results differ greatly between the different methods. The amount of variability depends strongly upon the number of variables considered, and huge differences occur for the full system with six variables. Therefore we conclude that the results of such applications with about five or six variables and 100 observations, which are typical in the applied literature, should possibly be interpreted with more caution than is commonly done.
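Of the compared methods, the Johansen procedure has a standard open-source implementation; a minimal rank-selection sketch, assuming statsmodels' coint_johansen (the Bierens and subspace-algorithm methods have no comparably standard Python implementation), follows.

```python
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def johansen_rank(data, det_order=0, k_ar_diff=1, level_col=1):
    """Estimate the cointegrating rank of `data` (T x k) by the Johansen
    trace test: test H0: rank <= r sequentially for r = 0, 1, ... and stop
    at the first r that is not rejected. `level_col` indexes the critical
    value columns (90%, 95%, 99%); det_order=0 includes a constant."""
    res = coint_johansen(data, det_order, k_ar_diff)
    for r, (stat, cvs) in enumerate(zip(res.lr1, res.cvt)):
        if stat < cvs[level_col]:        # fail to reject H0: rank <= r
            return r
    return data.shape[1]                 # all hypotheses rejected: full rank
```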

20.
We propose a new diagnostic tool for time series called the quantilogram. The tool can be used formally, and we provide the inference tools to do this under general conditions; it can also be used as a simple graphical device. We apply our method to measure directional predictability and to test the hypothesis that a given time series has no directional predictability. The test is based on comparing the correlogram of quantile hits to a pointwise confidence interval, or on comparing the cumulated squared autocorrelations with the corresponding critical value. We provide the distribution theory needed to conduct inference, propose some model-free upper bound critical values, and apply our methods to S&P500 stock index return data. The empirical results suggest some directional predictability in returns. The evidence is strongest in mid-range quantiles, like 5–10%, and for daily data. The evidence for predictability at the median is of comparable strength to the evidence around the mean, and is strongest at the daily frequency.
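The basic object is easy to compute: the correlogram of quantile hits. A minimal in-sample sketch follows; the naive 1.96/sqrt(T) band is only a rough graphical guide and is not the paper's model-free upper bound critical values.

```python
import numpy as np

def quantilogram(y, alpha=0.05, max_lag=20):
    """Autocorrelations of the quantile-hit process
    psi_t = 1{y_t < q_alpha} - alpha, at lags 1..max_lag."""
    q = np.quantile(y, alpha)
    psi = (y < q).astype(float) - alpha
    psi = psi - psi.mean()
    T = len(psi)
    denom = np.sum(psi ** 2)
    rho = np.array([np.sum(psi[k:] * psi[:-k]) / denom
                    for k in range(1, max_lag + 1)])
    band = 1.96 / np.sqrt(T)          # rough pointwise band for plotting
    return rho, band
```

Plotting rho against the band gives the graphical use of the tool; large spikes at low lags for small alpha indicate directional predictability in the lower tail.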
