Similar literature (20 results)
1.
This paper develops a pure simulation-based approach for computing maximum likelihood estimates in latent state variable models using Markov Chain Monte Carlo methods (MCMC). Our MCMC algorithm simultaneously evaluates and optimizes the likelihood function without resorting to gradient methods. The approach relies on data augmentation, with insights similar to simulated annealing and evolutionary Monte Carlo algorithms. We prove a limit theorem in the degree of data augmentation and use this to provide standard errors and convergence diagnostics. The resulting estimator inherits the sampling asymptotic properties of maximum likelihood. We demonstrate the approach on two latent state models central to financial econometrics: a stochastic volatility model and a multivariate jump-diffusion model. We find that convergence to the MLE is fast, requiring only a small degree of augmentation.
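A minimal sketch of the underlying idea, not the paper's algorithm: MCMC draws that target the likelihood raised to a power K (mimicking K copies of augmented data) concentrate around the MLE as K grows, in the spirit of simulated annealing. The model, sample size, and tuning values below are illustrative assumptions.

```python
# Toy illustration: random-walk Metropolis targeting L(theta)^K concentrates on
# the MLE as the augmentation level K grows. Model: y_i ~ N(theta, 1), so the
# MLE is the sample mean.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=1.0, size=200)

def log_lik(theta):
    return -0.5 * np.sum((y - theta) ** 2)

def mh_power_likelihood(K, n_iter=6000):
    """Random-walk Metropolis targeting a density proportional to L(theta)^K."""
    step = 1.0 / np.sqrt(K * y.size)          # scale the proposal to the target spread
    theta, draws = 0.0, []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        if np.log(rng.uniform()) < K * (log_lik(prop) - log_lik(theta)):
            theta = prop
        draws.append(theta)
    return np.array(draws[n_iter // 2:])      # discard burn-in

for K in (1, 5, 20):
    d = mh_power_likelihood(K)
    print(f"K={K:2d}  mean of draws={d.mean():.4f}  sd={d.std():.4f}")
print("MLE (sample mean):", round(y.mean(), 4))
```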

2.
We propose a finite sample approach to some of the most common limited dependent variables models. The method rests on the maximized Monte Carlo (MMC) test technique proposed by Dufour [1998. Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and nonstandard asymptotics. Journal of Econometrics, this issue]. We provide a general way for implementing tests and confidence regions. We show that the decision rule associated with a MMC test may be written as a Mixed Integer Programming problem. The branch-and-bound algorithm yields a global maximum in finite time. An appropriate choice of the statistic yields a consistent test, while fulfilling the level constraint for any sample size. The technique is illustrated with numerical data for the logit model.
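A hedged sketch of the maximized Monte Carlo idea: compute a Monte Carlo p-value at each candidate value of the nuisance parameter and reject only if the maximized p-value falls below the level. A simple grid search stands in for the mixed integer programming / branch-and-bound optimization described above, and the model and statistic are illustrative assumptions.

```python
# MMC-style test sketch: testing H0: mean = 0 when the scale is an unknown
# nuisance parameter, using |sample mean| as the test statistic.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=0.3, scale=2.0, size=30)          # observed sample
stat_obs = abs(y.mean())

def mc_pvalue(stat_obs, nuisance_scale, n_obs, n_rep=499):
    """Monte Carlo p-value of |mean| under H0 for a given nuisance scale."""
    sims = np.array([abs(rng.normal(0, nuisance_scale, n_obs).mean())
                     for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

scale_grid = np.linspace(0.5, 4.0, 15)               # grid over the nuisance space
mmc_p = max(mc_pvalue(stat_obs, s, len(y)) for s in scale_grid)
print(f"observed statistic = {stat_obs:.3f}, MMC p-value = {mmc_p:.3f}")
print("reject H0 at 5%:", mmc_p <= 0.05)
```

Maximizing the p-value over the nuisance space is what guarantees the level constraint for any sample size, at the cost of some conservativeness.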

3.
This article proposes a class of joint and marginal spectral diagnostic tests for parametric conditional means and variances of linear and nonlinear time series models. The use of joint and marginal tests is motivated from the fact that marginal tests for the conditional variance may lead to misleading conclusions when the conditional mean is misspecified. The new tests are based on a generalized spectral approach and do not need to choose a lag order depending on the sample size or to smooth the data. Moreover, the proposed tests are robust to higher order dependence of unknown form, in particular to conditional skewness and kurtosis. It turns out that the asymptotic null distributions of the new tests depend on the data generating process. Hence, we implement the tests with the assistance of a wild bootstrap procedure. A simulation study compares the finite sample performance of the proposed and competing tests, and shows that our tests can play a valuable role in time series modeling. Finally, an application to the S&P 500 highlights the merits of our approach.

4.
Bayesian analysis of a Tobit quantile regression model
This paper develops a Bayesian framework for Tobit quantile regression. Our approach is organized around a likelihood function that is based on the asymmetric Laplace distribution, a choice that turns out to be natural in this context. We discuss families of prior distributions on the quantile regression vector that lead to proper posterior distributions with finite moments. We show how the posterior distribution can be sampled and summarized by Markov chain Monte Carlo methods. A method for comparing alternative quantile regression models is also developed and illustrated. The techniques are illustrated with both simulated and real data. In particular, in an empirical comparison, our approach outperformed two other common classical estimators.
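A hedged sketch of the core ingredient: the asymmetric Laplace working likelihood for quantile regression, sampled here with a plain random-walk Metropolis step under a flat prior. The censoring step that makes the model a Tobit (e.g. augmenting latent uncensored responses) is omitted, and the data-generating process and tuning constants are illustrative assumptions.

```python
# Asymmetric Laplace working likelihood for quantile regression plus a
# random-walk Metropolis sampler for the regression vector.
import numpy as np

rng = np.random.default_rng(2)
n, tau = 300, 0.5                                 # sample size, target quantile
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

def ald_loglik(beta, tau):
    """Asymmetric Laplace log-likelihood: minus the summed check loss (constants dropped)."""
    u = y - X @ beta
    return -np.sum(u * (tau - (u < 0)))

beta, draws = np.zeros(2), []
for _ in range(10000):
    prop = beta + 0.05 * rng.normal(size=2)
    if np.log(rng.uniform()) < ald_loglik(prop, tau) - ald_loglik(beta, tau):
        beta = prop
    draws.append(beta.copy())
post = np.array(draws[5000:])
print("posterior means:", post.mean(axis=0))      # close to beta_true at tau = 0.5
```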

5.
Adaptive combining is generally a desirable approach for forecasting but has rarely been explored for discrete response time series. In this paper, we propose an adaptively combined forecasting method for such discrete response data. We demonstrate in theory that the proposed forecast achieves the desired adaptation with respect to the widely used squared risk and other important risk functions under mild conditions. Furthermore, we study the issue of adaptation for the proposed forecasting method in the presence of model screening, which is often useful in applications. Our simulation study and two real-world data examples show promise for the proposed approach.
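A hedged sketch of one common adaptive combining scheme (exponential weighting of forecasters by cumulative past squared error, in the AFTER style); the paper's exact weights and its discrete-response and model-screening refinements are not reproduced, and the toy models below are assumptions.

```python
# Adaptive forecast combination: weights updated each period from past squared losses.
import numpy as np

def adaptive_combine(y, forecasts, eta=1.0):
    """y: (T,) realized series; forecasts: (T, M) one-step forecasts from M models.
    Returns the combined forecasts and the weight paths."""
    T, M = forecasts.shape
    cum_loss = np.zeros(M)
    combined, weights = np.empty(T), np.empty((T, M))
    for t in range(T):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))   # stabilize before normalizing
        w /= w.sum()
        weights[t] = w
        combined[t] = forecasts[t] @ w
        cum_loss += (forecasts[t] - y[t]) ** 2           # update after observing y_t
    return combined, weights

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.6, size=200).astype(float)          # a discrete (0/1) response
f = np.column_stack([np.full(200, 0.6), np.full(200, 0.3)])  # a good and a poor forecaster
comb, w = adaptive_combine(y, f)
print("final weights:", w[-1])                             # most weight on the good model
```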

6.
Many structural break and regime-switching models have been used with macroeconomic and financial data. In this paper, we develop an extremely flexible modeling approach which can accommodate virtually any of these specifications. We build on earlier work showing the relationship between flexible functional forms and random variation in parameters. Our contribution centers on priors for the time variation that are developed by considering a hypothetical reordering of the data and the distance between neighboring (reordered) observations. The range of priors produced in this way can accommodate a wide variety of nonlinear time series models, including those with regime-switching and structural breaks. By allowing the amount of random variation in parameters to depend on the distance between (reordered) observations, the parameters can evolve in a wide variety of ways, allowing for everything from models exhibiting abrupt change (e.g. threshold autoregressive models or standard structural break models) to those which allow for a gradual evolution of parameters (e.g. smooth transition autoregressive models or time-varying parameter models). Bayesian econometric methods for inference are developed for estimating the distance function and the type of hypothetical reordering. Conditional on a hypothetical reordering and distance function, a simple reordering of the actual data allows us to estimate our models with standard state space methods through a simple adjustment to the measurement equation. We use artificial data to show the advantages of our approach, before providing two empirical illustrations involving the modeling of real GDP growth.

7.
This paper addresses the problem of data errors in discrete variables. When data errors occur, the observed variable is a misclassified version of the variable of interest, whose distribution is not identified. Inferential problems caused by data errors have been conceptualized through convolution and mixture models. This paper introduces the direct misclassification approach. The approach is based on the observation that in the presence of classification errors, the relation between the distribution of the ‘true’ but unobservable variable and its misclassified representation is given by a linear system of simultaneous equations, in which the coefficient matrix is the matrix of misclassification probabilities. Formalizing the problem in these terms allows one to incorporate any prior information into the analysis through sets of restrictions on the matrix of misclassification probabilities. Such information can have strong identifying power. The direct misclassification approach fully exploits it to derive identification regions for any real functional of the distribution of interest. A method for estimating the identification regions and constructing their confidence sets is given, and illustrated with an empirical analysis of the distribution of pension plan types using data from the Health and Retirement Study.
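A hedged two-by-two sketch of the linear relation the approach exploits: the observed distribution satisfies p_obs = Pi p_true, where Pi collects misclassification probabilities. Tracing out all matrices Pi that satisfy an assumed restriction (here, error rates bounded by q, a purely illustrative assumption) yields an identification region for a functional such as P(true = 1).

```python
# Direct misclassification sketch for a binary variable: enumerate admissible
# misclassification matrices and collect the implied true distributions.
import numpy as np

p_obs = np.array([0.35, 0.65])        # observed distribution (illustrative numbers)
q = 0.10                               # assumed upper bound on each error rate

region = []
for a in np.linspace(0.0, q, 51):      # a = P(observe 1 | true 0)
    for b in np.linspace(0.0, q, 51):  # b = P(observe 0 | true 1)
        Pi = np.array([[1 - a, b],
                       [a, 1 - b]])    # columns correspond to true = 0, true = 1
        p_true = np.linalg.solve(Pi, p_obs)
        if np.all(p_true >= 0) and np.all(p_true <= 1):
            region.append(p_true[1])   # functional of interest: P(true = 1)

print(f"identification region for P(true=1): "
      f"[{min(region):.3f}, {max(region):.3f}]")
```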

8.
This paper investigates the effect that covariate measurement error has on a treatment effect analysis built on an unconfoundedness restriction in which there is conditioning on error free covariates. The approach uses small parameter asymptotic methods to obtain the approximate effects of measurement error for estimators of average treatment effects. The approximations can be estimated using data on observed outcomes, the treatment indicator and error contaminated covariates without employing additional information from validation data or instrumental variables. The results can be used in a sensitivity analysis to probe the potential effects of measurement error on the evaluation of treatment effects.

9.
This paper proposes a new approach to handle nonparametric stochastic frontier (SF) models. It is based on local maximum likelihood techniques. The model is presented as encompassing some anchorage parametric model in a nonparametric way. First, we derive asymptotic properties of the estimator for the general case (local linear approximations). Then the results are tailored to a SF model where the convoluted error term (efficiency plus noise) is the sum of a half normal and a normal random variable. The parametric anchorage model is a linear production function with a homoscedastic error term. The local approximation is linear for both the production function and the parameters of the error terms. The performance of our estimator is then established in finite samples using simulated data sets as well as with cross-sectional data on US commercial banks. The methods appear to be robust, numerically stable and particularly useful for investigating a production process and the derived efficiency scores.
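A hedged sketch of the local maximum likelihood ingredient: the composed-error density for normal noise plus half-normal inefficiency, f(e) = (2/sigma) phi(e/sigma) Phi(-e lambda/sigma) with sigma^2 = sigma_u^2 + sigma_v^2 and lambda = sigma_u/sigma_v, is maximized with kernel weights centred at an evaluation point. A local-constant fit is used for brevity, whereas the paper works with local-linear approximations; the data-generating process and bandwidth are assumptions.

```python
# Local (kernel-weighted) maximum likelihood for a normal/half-normal
# stochastic frontier, evaluated at a single point x0.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 400
x = rng.uniform(0, 1, n)
u = np.abs(rng.normal(0, 0.3, n))                 # half-normal inefficiency
v = rng.normal(0, 0.2, n)                         # two-sided noise
y = 1.0 + 2.0 * x + v - u                         # frontier minus inefficiency

def neg_local_loglik(theta, x0, h):
    a, ls_v, ls_u = theta                         # local level, log sigma_v, log sigma_u
    s_v, s_u = np.exp(ls_v), np.exp(ls_u)
    sig = np.hypot(s_v, s_u)
    lam = s_u / s_v
    e = y - a                                     # composed error at the local level a
    ll = np.log(2 / sig) + norm.logpdf(e / sig) + norm.logcdf(-e * lam / sig)
    w = norm.pdf((x - x0) / h)                    # Gaussian kernel weights
    return -np.sum(w * ll)

x0, h = 0.5, 0.15
fit = minimize(neg_local_loglik, x0=np.array([y.mean(), -1.5, -1.0]),
               args=(x0, h), method="Nelder-Mead")
a_hat, s_v_hat, s_u_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(f"local frontier at x0={x0}: {a_hat:.3f}  sigma_v={s_v_hat:.3f}  sigma_u={s_u_hat:.3f}")
```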

10.
We present a Bayesian approach for analyzing aggregate level sales data in a market with differentiated products. We consider the aggregate share model proposed by Berry et al. [Berry, Steven, Levinsohn, James, Pakes, Ariel, 1995. Automobile prices in market equilibrium. Econometrica 63 (4), 841–890], which introduces a common demand shock into an aggregated random coefficient logit model. A full likelihood approach is possible with a specification of the distribution of the common demand shock. We introduce a reparameterization of the covariance matrix to improve the performance of the random walk Metropolis for covariance parameters. We illustrate the usefulness of our approach with both actual and simulated data. Sampling experiments show that our approach performs well relative to the GMM estimator even in the presence of a mis-specified shock distribution. We view our approach as useful for those who are willing to trade off one additional distributional assumption for increased efficiency in estimation.

11.
Recent literature on panel data emphasizes the importance of accounting for time-varying unobservable individual effects, which may stem from either omitted individual characteristics or macro-level shocks that affect each individual unit differently. In this paper, we propose a simple specification test of the null hypothesis that the individual effects are time-invariant against the alternative that they are time-varying. Our test is an application of Hausman's (1978) testing procedure and can be used for any generalized linear model for panel data that admits a sufficient statistic for the individual effect. This is a wide class of models which includes the Gaussian linear model and a variety of nonlinear models typically employed for discrete or categorical outcomes. The basic idea of the test is to compare two alternative estimators of the model parameters based on two different formulations of the conditional maximum likelihood method. Our approach does not require assumptions on the distribution of unobserved heterogeneity, nor does it require the latter to be independent of the regressors in the model. We investigate the finite sample properties of the test through a set of Monte Carlo experiments. Our results show that the test performs well, with small size distortions and good power properties. We use a health economics example based on data from the Health and Retirement Study to illustrate the proposed test.
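A hedged sketch of the generic Hausman-type contrast on which such a test is built: two estimators that are both consistent under the null are compared through a quadratic form, with the covariance of the contrast approximated by the difference of their covariance matrices. The numbers below are illustrative, not estimates from the paper.

```python
# Generic Hausman statistic and its chi-square p-value.
import numpy as np
from scipy.stats import chi2

def hausman_statistic(b1, V1, b2, V2):
    """b1/V1: estimator efficient under H0; b2/V2: estimator robust under H1."""
    d = b2 - b1
    V_diff = V2 - V1                              # covariance of the contrast under H0
    stat = float(d @ np.linalg.pinv(V_diff) @ d)  # pseudo-inverse guards against singularity
    df = len(d)
    return stat, 1 - chi2.cdf(stat, df)

b1, V1 = np.array([0.50, -0.20]), np.diag([0.010, 0.020])
b2, V2 = np.array([0.55, -0.35]), np.diag([0.015, 0.030])
stat, pval = hausman_statistic(b1, V1, b2, V2)
print(f"Hausman statistic = {stat:.2f}, p-value = {pval:.3f}")
```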

12.
This paper proposes a new testing procedure for detecting error cross section dependence after estimating a linear dynamic panel data model with regressors using the generalised method of moments (GMM). The test is valid when the cross-sectional dimension of the panel is large relative to the time series dimension. Importantly, our approach allows one to examine whether any error cross section dependence remains after including time dummies (or after transforming the data in terms of deviations from time-specific averages), which will be the case under heterogeneous error cross section dependence. Finite sample simulation-based results suggest that our tests perform well, particularly the version based on the [Blundell, R., Bond, S., 1998. Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics 87, 115–143] system GMM estimator. In addition, it is shown that the system GMM estimator, based only on partial instruments consisting of the regressors, can be a reliable alternative to the standard GMM estimators under heterogeneous error cross section dependence. The proposed tests are applied to employment equations using UK firm data and the results show little evidence of heterogeneous error cross section dependence.

13.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.
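A hedged sketch of the final step of such an approach: importance sampling for a posterior kernel using a Student-t candidate density. The EM-based construction of a full mixture of Student-t components (MitISEM itself) is replaced here by a single t candidate matched to a crude location and scale guess; the target kernel and tuning values are assumptions.

```python
# Importance sampling with a fat-tailed Student-t candidate for a skewed target kernel.
import numpy as np
from scipy.stats import norm, t as student_t

def log_kernel(theta):
    """Target kernel known only up to a constant: a skewed two-component mixture."""
    return np.log(0.7 * norm.pdf(theta, -1.0, 0.5) + 0.3 * norm.pdf(theta, 2.0, 1.0))

loc, scale, df = -1.0, 1.5, 5                  # crude candidate near the dominant mode
draws = student_t.rvs(df, loc=loc, scale=scale, size=20000, random_state=5)
log_w = log_kernel(draws) - student_t.logpdf(draws, df, loc=loc, scale=scale)
w = np.exp(log_w - log_w.max())
w /= w.sum()

post_mean = np.sum(w * draws)
ess = 1.0 / np.sum(w ** 2)                     # effective sample size diagnostic
print(f"IS posterior mean = {post_mean:.3f} (true mean -0.10), ESS = {ess:.0f}")
```

Replacing the single t candidate by an adaptively fitted mixture of t densities is what allows the method to cope with multimodality and skewness.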

14.
We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts of simulated data, US macroeconomic time series and surveys of stock market prices. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete, and it is able to detect parameter instability when the model set includes the true model that has generated subsamples of data. Also, substantial uncertainty appears in the weights when predictors are similar; residual uncertainty reduces when the model set is complete; and learning reduces this uncertainty. For the macro series we find that incompleteness of the models is relatively large in the 1970s, the beginning of the 1980s and during the recent financial crisis, and lower during the Great Moderation; the predicted probabilities of recession accurately compare with the NBER business cycle dating; model weights have substantial uncertainty attached. With respect to returns of the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model in the early 1990s and switches to giving more weight to the professional forecasts over time. Information on the complete predictive distribution and not just on some moments turns out to be very important, above all during turbulent times such as the recent financial crisis. More generally, the proposed distributional state space representation offers great flexibility in combining densities.

15.
We analyze the properties of the implied volatility, the commonly used volatility estimator obtained by direct option price inversion. It is found that the implied volatility is subject to a systematic bias in the presence of pricing errors, which makes it an inconsistent estimator of the underlying volatility. We propose an estimator of the underlying volatility by first estimating nonparametrically the option price function, followed by inverting the nonparametrically estimated price. It is shown that the approach removes the adverse impacts of the pricing errors and produces a consistent volatility estimator for a wide range of option price models. We demonstrate the effectiveness of the proposed approach by numerical simulation and empirical analysis on S&P 500 option data.
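A hedged sketch of the two-step idea: smooth noisy option prices across strikes and then invert the pricing formula at the smoothed price rather than at the raw, error-contaminated price. A simple Nadaraya-Watson smoother stands in for the paper's nonparametric price estimator, Black-Scholes is used as the pricing model, and all market values below are assumptions.

```python
# Smooth-then-invert implied volatility: compare inversion at a raw noisy price
# with inversion at a kernel-smoothed price.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, r, T, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(6)
S, r, T, true_sigma = 100.0, 0.02, 0.5, 0.25
strikes = np.linspace(80, 120, 41)
noisy = bs_call(S, strikes, r, T, true_sigma) + rng.normal(0, 0.30, strikes.size)

def nw_smooth(K0, h=4.0):
    """Nadaraya-Watson smoother of the noisy prices at strike K0."""
    w = norm.pdf((strikes - K0) / h)
    return np.sum(w * noisy) / np.sum(w)

K0 = 100.0
for label, price in [("raw", np.interp(K0, strikes, noisy)),
                     ("smoothed", nw_smooth(K0))]:
    iv = brentq(lambda s: bs_call(S, K0, r, T, s) - price, 1e-4, 3.0)
    print(f"{label:9s} price = {price:6.3f}  implied vol = {iv:.4f}")
```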

16.
We develop a sequential procedure to test the adequacy of jump-diffusion models for return distributions. We rely on intraday data and nonparametric volatility measures, along with a new jump detection technique and appropriate conditional moment tests, for assessing the importance of jumps and leverage effects. A novel robust-to-jumps approach is utilized to alleviate microstructure frictions for realized volatility estimation. Size and power of the procedure are explored through Monte Carlo methods. Our empirical findings support the jump-diffusive representation for S&P 500 futures returns but reveal that it is critical to account for leverage effects and jumps to maintain the underlying semi-martingale assumption.
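A hedged sketch of a standard bipower-variation jump test in the spirit of the nonparametric jump detection described above (a Barndorff-Nielsen/Shephard-type ratio statistic); the paper's own detection technique and its microstructure-robust refinements are not reproduced, and the simulated returns are assumptions.

```python
# Realized variance vs. bipower variation ratio test for within-day jumps.
import numpy as np
from scipy.special import gamma
from scipy.stats import norm

def jump_ratio_test(r):
    """r: intraday returns for one day. Returns (RV, BV, z-statistic, p-value)."""
    n = r.size
    mu1 = np.sqrt(2 / np.pi)                      # E|Z| for standard normal Z
    rv = np.sum(r ** 2)                           # realized variance
    bv = (1 / mu1 ** 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))   # bipower variation
    mu43 = 2 ** (2 / 3) * gamma(7 / 6) / gamma(1 / 2)
    tq = n * mu43 ** (-3) * np.sum(
        (np.abs(r[2:]) * np.abs(r[1:-1]) * np.abs(r[:-2])) ** (4 / 3))  # tripower quarticity
    z = ((rv - bv) / rv) / np.sqrt(
        ((np.pi / 2) ** 2 + np.pi - 5) / n * max(1.0, tq / bv ** 2))
    return rv, bv, z, 1 - norm.cdf(z)

rng = np.random.default_rng(7)
n = 390                                            # e.g. one-minute returns
diffusive = rng.normal(0, 0.01 / np.sqrt(n), n)
with_jump = diffusive.copy()
with_jump[200] += 0.02                             # inject a single jump
for label, ret in [("no jump", diffusive), ("with jump", with_jump)]:
    rv, bv, z, p = jump_ratio_test(ret)
    print(f"{label:9s} RV={rv:.6f} BV={bv:.6f} z={z:.2f} p={p:.3f}")
```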

17.
We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445–458]. The test is based on the general Pareto–Koopmans notion of efficiency, and does not require price data. Statistical inference is based on the sampling distribution of the L norm of errors. The test statistic can be computed using a simple enumeration algorithm. The finite sample properties of the test are analyzed by means of a Monte Carlo simulation using real-world data of large EU commercial banks.

18.
In this paper, we propose a simple extension to the panel case of the covariate-augmented Dickey–Fuller (CADF) test for unit roots developed in Hansen (1995). The panel test we propose is based on a p-value combination approach that takes into account cross-section dependence. We show that the test has good size properties and gives power gains with respect to other popular panel approaches. An empirical application is carried out for illustration purposes on international data to test the purchasing power parity (PPP) hypothesis.
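A hedged sketch of a p-value combination panel unit root test: unit-by-unit Dickey–Fuller p-values are pooled through a Fisher-type statistic, which is chi-square with 2N degrees of freedom under cross-section independence. Plain ADF p-values stand in for Hansen's covariate-augmented (CADF) version, and the correction for cross-section dependence developed in the paper is omitted; the simulated panel is an assumption.

```python
# Fisher-type combination of unit-by-unit ADF p-values in a panel.
import numpy as np
from scipy.stats import chi2
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(8)
N, T = 10, 200
panel = np.cumsum(rng.normal(size=(N, T)), axis=1)   # N independent random walks (H0 true)

pvals = np.array([adfuller(panel[i], autolag="AIC")[1] for i in range(N)])
fisher_stat = -2 * np.sum(np.log(pvals))
p_combined = 1 - chi2.cdf(fisher_stat, df=2 * N)
print(f"Fisher statistic = {fisher_stat:.2f}, combined p-value = {p_combined:.3f}")
```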

19.
This paper proposes an empirical Bayes approach for Markov switching autoregressions that can constrain some of the state-dependent parameters (regression coefficients and error variances) to be approximately equal across regimes. By flexibly reducing the dimension of the parameter space, this can help to ensure regime separation and to detect the Markov switching nature of the data. The permutation sampler with a hierarchical prior is used for choosing the prior moments, the identification constraint, and the parameters governing prior state dependence. The empirical relevance of the methodology is illustrated with an application to quarterly and monthly real interest rate data.

20.
We develop a Bayesian semi-parametric approach to the instrumental variable problem. We assume linear structural and reduced form equations, but model the error distributions non-parametrically. A Dirichlet process prior is used for the joint distribution of structural and instrumental variable equations errors. Our implementation of the Dirichlet process prior uses a normal distribution as a base model. It can therefore be interpreted as modeling the unknown joint distribution with a mixture of normal distributions with a variable number of mixture components. We demonstrate that this procedure is both feasible and sensible using actual and simulated data. Sampling experiments compare inferences from the non-parametric Bayesian procedure with those based on procedures from the recent literature on weak instrument asymptotics. When errors are non-normal, our procedure is more efficient than standard Bayesian or classical methods.
