Similar literature
Found 20 similar documents (search time: 31 ms)
1.
This paper is a survey of estimation techniques for stationary and ergodic diffusion processes observed at discrete points in time. The reader is introduced to the following techniques: (i) estimating functions with special emphasis on martingale estimating functions and so-called simple estimating functions; (ii) analytical and numerical approximations of the likelihood function which can in principle be made arbitrarily accurate; (iii) Bayesian analysis and MCMC methods; and (iv) indirect inference and EMM which both introduce auxiliary (but wrong) models and correct for the implied bias by simulation.
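As a minimal illustration of approach (ii), the sketch below fits the mean-reversion rate of a simulated Ornstein-Uhlenbeck diffusion by maximizing the Euler (Gaussian) pseudo-likelihood over a grid. The Euler transition is the crudest member of the family of likelihood approximations the survey covers; all parameter values and names here are illustrative, not from the paper.

```python
import numpy as np

# Illustrative sketch (not from the survey): Euler pseudo-likelihood for a
# discretely observed Ornstein-Uhlenbeck diffusion dX = -theta*X dt + sigma dW.
rng = np.random.default_rng(0)
theta_true, sigma, dt, n = 1.0, 0.5, 0.1, 5000

# Simulate from the exact OU transition so the data come from the true model.
x = np.empty(n)
x[0] = 0.0
a = np.exp(-theta_true * dt)
s = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta_true))
for t in range(1, n):
    x[t] = a * x[t - 1] + s * rng.standard_normal()

def euler_loglik(theta):
    """Log-likelihood under the Euler (Gaussian) transition approximation."""
    mean = x[:-1] * (1.0 - theta * dt)
    var = sigma**2 * dt
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

grid = np.linspace(0.2, 2.0, 181)
theta_hat = grid[int(np.argmax([euler_loglik(th) for th in grid]))]
```

With a sampling interval of 0.1 the Euler approximation already recovers the drift parameter up to a small discretization bias; the refinements surveyed in the paper reduce that bias further.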

2.
Several authors have proposed stochastic and non‐stochastic approximations to the maximum likelihood estimate (MLE) for Gibbs point processes in modelling spatial point patterns with pairwise interactions. The approximations are necessary because of the difficulty of evaluating the normalizing constant. In this paper, we first provide a review of methods which yield crude approximations to the MLE. We also review methods based on Markov chain Monte Carlo techniques for which exact MLE has become feasible. We then present a comparative simulation study of the performance of such methods of estimation based on two simulation techniques, the Gibbs sampler and the Metropolis‐Hastings algorithm, carried out for the Strauss model.
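The appeal of MCMC here is that the Metropolis-Hastings acceptance ratio needs the target density only up to its normalizing constant, which is exactly the quantity that is intractable for Gibbs point processes such as the Strauss model. A minimal one-dimensional sketch (an illustrative toy target, not the Strauss process itself):

```python
import numpy as np

# Random-walk Metropolis-Hastings on an unnormalized target: only the ratio of
# target values enters, so the normalizing constant never needs to be computed.
rng = np.random.default_rng(1)

def log_unnorm(x):
    return -0.5 * x**2  # standard normal target, normalizing constant dropped

x = 0.0
draws = []
for _ in range(20000):
    prop = x + 1.5 * rng.standard_normal()   # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_unnorm(prop) - log_unnorm(x):
        x = prop                              # accept; otherwise keep x
    draws.append(x)
draws = np.array(draws[5000:])                # discard burn-in
```

The retained draws should have mean near 0 and variance near 1, the moments of the (normalized) target.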

3.
This paper uses free-knot and fixed-knot regression splines in a Bayesian context to develop methods for the nonparametric estimation of functions subject to shape constraints in models with log-concave likelihood functions. The shape constraints we consider include monotonicity, convexity and functions with a single minimum. A computationally efficient MCMC sampling algorithm is developed that converges faster than previous methods for non-Gaussian models. Simulation results indicate the monotonically constrained function estimates have good small sample properties relative to (i) unconstrained function estimates, and (ii) function estimates obtained from other constrained estimation methods when such methods exist. Also, asymptotic results show the methodology provides consistent estimates for a large class of smooth functions. Two detailed illustrations exemplify the ideas.

4.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
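The link the paper exploits can be checked in a toy example: maximizing an asymmetric Laplace (AL) likelihood in a location parameter is equivalent to minimizing the quantile check loss, so the usual quantile estimator is the working-likelihood MLE. The sketch below uses illustrative values only and does not show the paper's posterior covariance adjustment.

```python
import numpy as np

# Minimizing the check loss rho_tau(u) = u * (tau - 1{u < 0}) in a location
# parameter is the same as maximizing the AL likelihood (up to constants).
rng = np.random.default_rng(2)
y = rng.exponential(size=2001)
tau = 0.75

def check_loss(b):
    """Quantile check loss; equals minus the AL log-likelihood up to constants."""
    u = y - b
    return np.sum(u * (tau - (u < 0)))

grid = np.linspace(0.5, 2.5, 2001)
b_hat = grid[int(np.argmin([check_loss(b) for b in grid]))]
```

The minimizer should coincide (up to grid resolution) with the sample 0.75-quantile.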

5.
Qualitative response models (QRM's) are analyzed from the Bayesian point of view, using diffuse and informative prior distributions. Exact finite-sample Bayesian and large-sample Bayesian and non-Bayesian estimation results are compared. In addition, the paper provides: (1) plots and discussion of the properties of likelihood functions for QRM's, (2) posterior distributions for logit models' derivatives and elasticities, (3) Bayesian prediction procedures for QRM's, (4) new estimates for the median and other fractiles of the logistic distribution, (5) posterior odds ratios for model selection problems, and (6) comparisons of two alternative Monte Carlo numerical integration procedures. It is concluded that asymptotic approximations are not accurate for small- to moderate-sized samples even when only a single input variable is used, and that operational Bayesian methods are available for providing both exact small-sample and approximate large-sample inferences for QRM's.

6.
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
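A canonical instance of Gibbs sampling with data augmentation is the Albert-Chib scheme for the probit model: conditional on latent Gaussian utilities the model is linear-Gaussian, so both full conditionals are standard even though the marginal likelihood involves n integrals. A minimal intercept-only sketch, assuming a flat prior on the intercept for simplicity:

```python
import numpy as np
from scipy.stats import norm

# Data augmentation for an intercept-only probit: y_i = 1{z_i > 0},
# z_i ~ N(mu, 1). Given z the conditional for mu is Gaussian; given mu the
# z_i are truncated normals, sampled here by the inverse-CDF method.
rng = np.random.default_rng(3)
mu_true, n = 0.5, 2000
y = (rng.standard_normal(n) + mu_true > 0).astype(float)

mu = 0.0
draws = []
for it in range(2000):
    u = rng.uniform(size=n)
    a = norm.cdf(-mu)                          # P(z_i <= 0 | mu)
    v = np.where(y == 1.0, a + u * (1.0 - a), u * a)
    z = mu + norm.ppf(np.clip(v, 1e-12, 1 - 1e-12))
    # mu | z ~ N(mean(z), 1/n) under a flat prior
    mu = z.mean() + rng.standard_normal() / np.sqrt(n)
    draws.append(mu)
post_mean = float(np.mean(draws[500:]))
```

The posterior mean should be close to the value used to generate the data, without ever evaluating the probit likelihood itself.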

7.
This paper considers two empirical likelihood-based estimation, inference, and specification testing methods for quantile regression models. First, we apply the method of conditional empirical likelihood (CEL) by Kitamura et al. [2004. Empirical likelihood-based inference in conditional moment restriction models. Econometrica 72, 1667–1714] and Zhang and Gijbels [2003. Sieve empirical likelihood and extensions of the generalized least squares. Scandinavian Journal of Statistics 30, 1–24] to quantile regression models. Second, to avoid practical problems of the CEL method induced by the discontinuity of the CEL criterion in the parameters, we propose a smoothed counterpart of CEL, called smoothed conditional empirical likelihood (SCEL). We derive asymptotic properties of the CEL and SCEL estimators, parameter hypothesis tests, and model specification tests. Important features are (i) the CEL and SCEL estimators are asymptotically efficient and do not require preliminary weight estimation; (ii) by inverting the CEL and SCEL ratio parameter hypothesis tests, asymptotically valid confidence intervals can be obtained without estimating the asymptotic variances of the estimators; and (iii) in contrast to CEL, the SCEL method can be implemented by some standard Newton-type optimization. Simulation results demonstrate that the SCEL method in particular compares favorably with existing alternatives.
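The discontinuity problem and its smoothing fix can be seen in miniature: the quantile moment function 1{y ≤ b} − τ is a step function in b, while replacing the indicator with a kernel CDF gives a smooth estimating equation that Newton-type solvers can handle. An intercept-only sketch, using a normal-CDF smoother with an illustrative bandwidth (not the paper's SCEL criterion itself):

```python
import numpy as np
from scipy.stats import norm

# Smooth the quantile moment E[1{y <= b} - tau] by replacing the indicator
# with a normal CDF of bandwidth h, then solve the smoothed equation for b.
rng = np.random.default_rng(7)
y = rng.standard_normal(5000)
tau, h = 0.5, 0.05

def smoothed_moment(b):
    """Smoothed estimating function; increasing and differentiable in b."""
    return norm.cdf((b - y) / h).mean() - tau

lo, hi = -3.0, 3.0                 # bisection for the root (Newton also works)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if smoothed_moment(mid) < 0 else (lo, mid)
b_hat = 0.5 * (lo + hi)
```

The smoothed root should land on the sample τ-quantile up to a bias of order h².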

8.
Here we consider record data from the two-parameter bathtub-shaped distribution. First, we develop simplified forms for the single moments, variances and covariances of records. These distributional properties are quite useful in obtaining the best linear unbiased estimators of the location and scale parameters which can be included in the model. The estimation of the unknown shape parameters and the prediction of future unobserved records based on some observed ones are discussed. Frequentist and Bayesian analyses are adopted for the estimation and prediction problems. The likelihood method, the moment-based method, bootstrap methods and Bayesian sampling techniques are applied for the inference problems. Point predictors and credible intervals of future record values based on an informative set of records can be developed. Monte Carlo simulations are performed to compare the methods so developed, and one real data set is analyzed for illustrative purposes.

9.
This paper presents a new approximation to the exact sampling distribution of the instrumental variables estimator in simultaneous equations models. It differs from many of the approximations currently available, Edgeworth expansions for example, in that it is specifically designed to work well when the concentration parameter is small. The approximation is remarkable in that simultaneously: (i) it has an extremely simple final form; (ii) in situations for which it is designed it is typically much more accurate than is the large sample normal approximation; and (iii) it is able to capture most of those stylized facts that characterize lack of identification and weak instrument scenarios. The development leading to the approximation is also novel in that it introduces techniques of some independent interest not seen in this literature hitherto.

10.
This paper develops a new method for the analysis of stochastic volatility (SV) models. Since volatility is a latent variable in SV models, it is difficult to evaluate the exact likelihood. In this paper, a non-linear filter which yields the exact likelihood of SV models is employed. Solving a series of integrals in this filter by piecewise linear approximations with randomly chosen nodes produces the likelihood, which is maximized to obtain estimates of the SV parameters. A smoothing algorithm for volatility estimation is also constructed. Monte Carlo experiments show that the method performs well with respect to both parameter estimates and volatility estimates. We illustrate our model by analysing daily stock returns on the Tokyo Stock Exchange. Since the method can be applied to more general models, the SV model is extended so that several characteristics of daily stock returns are allowed, and this more general model is also estimated. Copyright © 1999 John Wiley & Sons, Ltd.
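The idea of evaluating the exact SV likelihood by numerical integration over the latent log-volatility can be sketched with a fixed-grid filter. The paper's piecewise linear approximation with randomly chosen nodes is replaced here by a simple discretized grid, and all parameter values are illustrative:

```python
import numpy as np

# Grid-based integration filter for a basic SV model:
#   y_t = exp(h_t / 2) * eps_t,  h_t = mu + phi*(h_{t-1} - mu) + s_eta*eta_t.
rng = np.random.default_rng(10)
mu, phi, s_eta, T = -1.0, 0.95, 0.2, 500
h = np.empty(T)
h[0] = mu + s_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + s_eta * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)

grid = np.linspace(mu - 4.0, mu + 4.0, 200)       # nodes for log-volatility
sd0 = s_eta / np.sqrt(1 - phi**2)
p = np.exp(-0.5 * ((grid - mu) / sd0) ** 2)
p /= p.sum()                                       # stationary prior on the grid
mean_next = mu + phi * (grid[:, None] - mu)        # E[h_{t+1} | h_t = grid_i]
trans = np.exp(-0.5 * ((grid[None, :] - mean_next) / s_eta) ** 2)
trans /= trans.sum(axis=1, keepdims=True)          # row-stochastic transition

ll = 0.0
for t in range(T):
    # p(y_t | h) = N(0, e^h), proportional to exp(-(h + y^2 e^{-h}) / 2)
    obs = np.exp(-0.5 * (grid + y[t] ** 2 * np.exp(-grid)))
    joint = p * obs
    ll += np.log(joint.sum()) - 0.5 * np.log(2 * np.pi)
    p = (joint / joint.sum()) @ trans              # filter, then propagate
```

Maximizing `ll` over the parameters would give the integration-based MLE; here the filter is only run once at the true values to show the recursion.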

11.
In this paper we investigate a spatial Durbin error model with finite distributed lags and consider the Bayesian MCMC estimation of the model with a smoothness prior. We also study the corresponding Bayesian model selection procedure for the spatial Durbin error model, the spatial autoregressive (SAR) model and the matrix exponential spatial specification (MESS) model. We derive expressions for the marginal likelihood of the three models, which greatly simplify the model selection procedure. Simulation results suggest that the Bayesian estimates of high order spatial distributed lag coefficients are more precise than the maximum likelihood estimates. When the data are generated with a general declining pattern or a unimodal pattern for lag coefficients, the spatial Durbin error model can better capture the pattern than the SAR and MESS models in most cases. We apply the procedure to study the effect of right to work (RTW) laws on manufacturing employment.

12.
In the context of univariate GARCH models we show how analytic first and second derivatives of the log-likelihood can be successfully employed for estimation purposes. Maximum likelihood GARCH estimation usually relies on the numerical approximation to the log-likelihood derivatives, on the grounds that an exact analytic differentiation is much too burdensome. We argue that this is not the case and that the computational benefit of using the analytic derivatives (first and second) may be substantial. Furthermore, we make a comparison of various gradient algorithms that are used for the maximization of the GARCH Gaussian likelihood. We suggest the implementation of a globally efficient computation algorithm that is obtained by suitably combining the use of the estimated information matrix with that of the exact Hessian during the maximization process. As this would appear a straightforward extension, we then study the finite sample performance of the exact Hessian and its approximations (that is, the estimated information, outer products and misspecification robust matrices) in inference.
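The tractability of the analytic derivatives comes from the fact that dh_t/dθ obeys the same kind of recursion as h_t itself. The sketch below computes the analytic score of a Gaussian GARCH(1,1) log-likelihood with respect to ω and verifies it against a central finite difference (illustrative parameter values; α and β are held fixed for brevity):

```python
import numpy as np

# Simulate a Gaussian GARCH(1,1): h_t = omega + alpha*e_{t-1}^2 + beta*h_{t-1}.
rng = np.random.default_rng(4)
omega, alpha, beta, n = 0.1, 0.05, 0.9, 1000
e = np.empty(n)
h = omega / (1 - alpha - beta)
e[0] = np.sqrt(h) * rng.standard_normal()
for t in range(1, n):
    h = omega + alpha * e[t - 1] ** 2 + beta * h
    e[t] = np.sqrt(h) * rng.standard_normal()

def loglik_and_score(om):
    """Log-likelihood and its analytic derivative with respect to omega."""
    ht = om / (1 - alpha - beta)        # h_0 at its unconditional value
    dht = 1.0 / (1 - alpha - beta)      # d h_0 / d omega
    ll = score = 0.0
    for t in range(1, n):
        ht = om + alpha * e[t - 1] ** 2 + beta * ht
        dht = 1.0 + beta * dht          # same recursion structure as h_t
        ll += -0.5 * (np.log(ht) + e[t] ** 2 / ht)
        score += -0.5 * (1.0 / ht - e[t] ** 2 / ht ** 2) * dht
    return ll, score

ll0, g_analytic = loglik_and_score(omega)
eps = 1e-6
g_numeric = (loglik_and_score(omega + eps)[0]
             - loglik_and_score(omega - eps)[0]) / (2 * eps)
```

The analytic score costs one extra scalar recursion per observation, which is the computational benefit the paper quantifies.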

13.
Journal of Econometrics, 1999, 88(2), 341–363
Optimal estimation of missing values in ARMA models is typically performed by using the Kalman filter for likelihood evaluation, ‘skipping’ in the computations the missing observations, obtaining the maximum likelihood (ML) estimators of the model parameters, and using some smoothing algorithm. The same type of procedure has been extended to nonstationary ARIMA models in Gómez and Maravall (1994). An alternative procedure suggests filling in the holes in the series with arbitrary values and then performing ML estimation of the ARIMA model with additive outliers (AO). When the model parameters are not known the two methods differ, since the AO likelihood is affected by the arbitrary values. We develop the proper likelihood for the AO approach in the general non-stationary case and show the equivalence of this and the skipping method. Finally, the two methods are compared through simulation, and their relative advantages assessed; the comparison also includes the AO method with the uncorrected likelihood.
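The 'skipping' device is easy to state in code: run the Kalman filter and simply omit the measurement update at missing time points. For a pure AR(1) observed without noise, the filter with no missing values reproduces the exact Gaussian likelihood, which makes the sketch below checkable (an illustrative special case, not the general ARIMA setting of the paper):

```python
import numpy as np

# AR(1) as a state-space model; NaN observations are skipped in the filter.
rng = np.random.default_rng(5)
phi, q, n = 0.8, 1.0, 300
y = np.empty(n)
y[0] = np.sqrt(q / (1 - phi**2)) * rng.standard_normal()
for t in range(1, n):
    y[t] = phi * y[t - 1] + np.sqrt(q) * rng.standard_normal()

def kalman_loglik(obs):
    """Gaussian log-likelihood; NaN entries are treated as missing (skipped)."""
    m, P = 0.0, q / (1 - phi**2)           # stationary prior for the state
    ll = 0.0
    for t in range(len(obs)):
        if t > 0:                           # time update
            m, P = phi * m, phi**2 * P + q
        if not np.isnan(obs[t]):            # measurement update; skipped if missing
            ll += -0.5 * (np.log(2 * np.pi * P) + (obs[t] - m) ** 2 / P)
            m, P = obs[t], 0.0              # y_t observes the state exactly
    return ll

def exact_ar1_loglik(obs):
    """Direct Gaussian AR(1) likelihood with a stationary initial condition."""
    v0 = q / (1 - phi**2)
    ll = -0.5 * (np.log(2 * np.pi * v0) + obs[0] ** 2 / v0)
    r = obs[1:] - phi * obs[:-1]
    return ll - 0.5 * np.sum(np.log(2 * np.pi * q) + r ** 2 / q)

y_miss = y.copy()
y_miss[50:60] = np.nan                      # a hole in the series
ll_miss = kalman_loglik(y_miss)
```

With missing values, the uncertainty simply accumulates through the time updates until the next observed point, which is the behaviour the AO likelihood must reproduce to be equivalent.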

14.
We propose a new dynamic copula model in which the parameter characterizing dependence follows an autoregressive process. As this model class includes the Gaussian copula with stochastic correlation process, it can be viewed as a generalization of multivariate stochastic volatility models. Despite the complexity of the model, the decoupling of marginals and dependence parameters facilitates estimation. We propose estimation in two steps, where first the parameters of the marginal distributions are estimated, and then those of the copula. Parameters of the latent processes (volatilities and dependence) are estimated using efficient importance sampling. We discuss goodness‐of‐fit tests and ways to forecast the dependence parameter. For two bivariate stock index series, we show that the proposed model outperforms standard competing models. Copyright © 2010 John Wiley & Sons, Ltd.
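The decoupling the authors rely on can be illustrated with a static Gaussian copula: marginals are fitted first (here nonparametrically, via empirical CDFs), and the dependence parameter is then estimated from the probability-integral transforms. The normal-scores correlation below is a simple stand-in for the copula step; the paper's dynamic AR dependence process is not modelled.

```python
import numpy as np
from scipy.stats import norm

# Two-step copula estimation on simulated data with a known Gaussian copula.
rng = np.random.default_rng(9)
n, rho = 4000, 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
y = np.exp(z)                                   # nonlinear (log-normal) marginals

# Step 1: probability-integral transform via empirical CDFs (ranks)
ranks = np.argsort(np.argsort(y, axis=0), axis=0)
u = (ranks + 0.5) / n

# Step 2: dependence parameter from the normal scores of the uniforms
scores = norm.ppf(u)
rho_hat = np.corrcoef(scores.T)[0, 1]
```

Because the rank transform is invariant to the monotone marginal distortion, the dependence parameter is recovered without ever modelling the marginals correctly.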

15.
In this paper we develop new Markov chain Monte Carlo schemes for the estimation of Bayesian models. One key feature of our method, which we call the tailored randomized block Metropolis–Hastings (TaRB-MH) method, is the random clustering of the parameters at every iteration into an arbitrary number of blocks. Then each block is sequentially updated through an M–H step. Another feature is that the proposal density for each block is tailored to the location and curvature of the target density based on the output of simulated annealing, following Chib and Ergashev (in press). We also provide an extended version of our method for sampling multi-modal distributions in which at a pre-specified mode jumping iteration, a single-block proposal is generated from one of the modal regions using a mixture proposal density, and this proposal is then accepted according to an M–H probability of move. At the non-mode jumping iterations, the draws are obtained by applying the TaRB-MH algorithm. We also discuss how the approaches of Chib (1995) and Chib and Jeliazkov (2001) can be adapted to these sampling schemes for estimating the model marginal likelihood. The methods are illustrated in several problems. In the DSGE model of Smets and Wouters (2007), for example, which involves a 36-dimensional posterior distribution, we show that the autocorrelations of the sampled draws from the TaRB-MH algorithm decay to zero within 30–40 lags for most parameters. In contrast, the sampled draws from the random-walk M–H method, the algorithm that has been used to date in the context of DSGE models, exhibit significant autocorrelations even at lags 2500 and beyond. Additionally, the RW-MH does not explore the same high density regions of the posterior distribution as the TaRB-MH algorithm. Another example concerns the model of An and Schorfheide (2007) where the posterior distribution is multi-modal.
While the RW-MH algorithm is unable to jump from the low modal region to the high modal region, and vice versa, we show that the extended TaRB-MH method explores the posterior distribution globally in an efficient manner.
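The randomized-block idea can be sketched on a toy target: at every iteration the coordinates are randomly permuted and split into a random number of blocks, and each block is updated by an M–H step given the rest. The tailoring of proposals via simulated annealing is replaced here by a plain random walk, so this illustrates the blocking alone:

```python
import numpy as np

# Randomized-block random-walk Metropolis on a d-dimensional standard normal.
rng = np.random.default_rng(11)
d = 6

def log_target(x):
    return -0.5 * x @ x                          # toy target density (log scale)

x = np.zeros(d)
draws = []
for _ in range(20000):
    perm = rng.permutation(d)                    # random clustering of coordinates
    n_blocks = int(rng.integers(1, d + 1))       # arbitrary number of blocks
    for block in np.array_split(perm, n_blocks):
        prop = x.copy()
        prop[block] += 0.8 * rng.standard_normal(block.size)
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop                             # accept the block update
    draws.append(x.copy())
draws = np.array(draws[2000:])
```

Randomizing the blocking avoids committing to one fixed partition of the parameters, which is what helps when parameters are strongly correlated in unknown patterns.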

16.
Newey and Powell [2003. Instrumental variable estimation of nonparametric models. Econometrica 71, 1565–1578] and Ai and Chen [2003. Efficient estimation of models with conditional moment restrictions containing unknown functions. Econometrica 71, 1795–1843] propose sieve minimum distance (SMD) estimation of both a finite dimensional parameter (θ) and an infinite dimensional parameter (h) that are identified through a conditional moment restriction model, in which h could depend on endogenous variables. This paper modifies their SMD procedure to allow for different conditioning variables to be used in different equations, and derives the asymptotic properties when the model may be misspecified. Under low-level sufficient conditions, we show that: (i) the modified SMD estimators of both θ and h converge to some pseudo-true values in probability; (ii) the SMD estimators of smooth functionals, including the θ estimator and the average derivative estimator, are asymptotically normally distributed; and (iii) the estimators for the asymptotic covariances of the SMD estimators of smooth functionals are consistent and easy to compute. These results allow for asymptotically valid tests of various hypotheses on the smooth functionals regardless of whether the semiparametric model is correctly specified or not.

17.
Monte Carlo evidence has made it clear that asymptotic tests based on generalized method of moments (GMM) estimation have disappointing size. The problem is exacerbated when the moment conditions are serially correlated. Several block bootstrap techniques have been proposed to correct the problem, including Hall and Horowitz (1996) and Inoue and Shintani (2006). We propose an empirical likelihood block bootstrap procedure to improve inference where models are characterized by nonlinear moment conditions that are serially correlated, possibly of infinite order. Combining the ideas of Kitamura (1997) and Brown and Newey (2002), the parameters of a model are initially estimated by GMM; these estimates are then used to compute the empirical likelihood probability weights of the blocks of moment conditions. The probability weights serve as the multinomial distribution used in resampling. The first-order asymptotic validity of the proposed procedure is proven, and a series of Monte Carlo experiments shows it may improve test sizes over conventional block bootstrapping.
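The weighting step can be sketched for a scalar mean restriction: moments are formed at the block level, empirical likelihood weights are computed so that the weighted moment condition holds exactly, and blocks are then resampled from the implied multinomial. This follows the Brown-Newey idea in its simplest form; the hypothesized value θ0, the block length, and all sizes are illustrative.

```python
import numpy as np

# Empirical likelihood weights on non-overlapping blocks of a scalar moment.
rng = np.random.default_rng(6)
theta0 = 0.0                          # hypothesized value in the moment x - theta0
data = rng.standard_normal(400) + 0.05
L = 8
blocks = data.reshape(-1, L)          # 50 non-overlapping blocks of length 8
g = blocks.mean(axis=1) - theta0      # block-level moment conditions

def el_lambda(g):
    """Solve sum_i g_i / (1 + lam*g_i) = 0 by bisection (g must change sign)."""
    lo = -1.0 / g.max() + 1e-9
    hi = -1.0 / g.min() - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.sum(g / (1.0 + mid * g)) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = el_lambda(g)
w = 1.0 / (len(g) * (1.0 + lam * g))      # EL probability weights on blocks
w /= w.sum()
idx = rng.choice(len(g), size=len(g), p=w)  # multinomial resampling of blocks
boot_sample = blocks[idx].ravel()
```

By construction the weighted moment condition is zero in the resampling distribution, which is the recentering that conventional block bootstraps achieve by explicit mean adjustment.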

18.
Ten empirical models of travel behavior are used to measure the variability of structural equation model goodness-of-fit as a function of sample size, multivariate kurtosis, and estimation technique. The estimation techniques are maximum likelihood, asymptotic distribution free, bootstrapping, and the Mplus approach. The results highlight the divergence of these techniques when sample sizes are small and/or multivariate kurtosis is high. Recommendations include using multiple estimation techniques and, when sample sizes are large, sampling the data and re-estimating the models, both to test the robustness of the specifications and to quantify, to some extent, the large-sample bias inherent in the χ² test statistic.

19.
The paper compares the pseudo real‐time forecasting performance of three dynamic factor models: (i) the standard principal component model introduced by Stock and Watson in 2002; (ii) the model based on generalized principal components, introduced by Forni, Hallin, Lippi, and Reichlin in 2005; (iii) the model recently proposed by Forni, Hallin, Lippi, and Zaffaroni in 2015. We employ a large monthly dataset of macroeconomic and financial time series for the US economy, which includes the Great Moderation, the Great Recession and the subsequent recovery (an update of the so‐called Stock and Watson dataset). Using a rolling window for estimation and prediction, we find that model (iii) significantly outperforms models (i) and (ii) in the Great Moderation period for both industrial production and inflation, and that model (iii) is also the best method for inflation over the full sample. However, model (iii) is outperformed by models (ii) and (i) over the full sample for industrial production.
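Method (i), the Stock-Watson principal-components estimator, reduces to an SVD of the demeaned panel. A minimal sketch on simulated one-factor data (dimensions, loadings, and the noise level are illustrative):

```python
import numpy as np

# Principal-components factor extraction: x_it = lambda_i * f_t + e_it.
rng = np.random.default_rng(8)
T, N = 200, 100
f = rng.standard_normal(T)                       # latent common factor
lam = rng.standard_normal(N)                     # loadings
X = np.outer(f, lam) + 0.5 * rng.standard_normal((T, N))
X -= X.mean(axis=0)                              # demean each series

U, s, Vt = np.linalg.svd(X, full_matrices=False)
f_hat = U[:, 0] * s[0]                           # first principal component
corr = abs(np.corrcoef(f, f_hat)[0, 1])
```

The extracted component recovers the factor up to sign and scale, which is all that forecasting regressions on estimated factors require.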

20.
This paper is concerned with the Bayesian estimation and comparison of flexible, high dimensional multivariate time series models with time varying correlations. The model proposed and considered here combines features of the classical factor model with that of the heavy tailed univariate stochastic volatility model. A unified analysis of the model, and its special cases, is developed that encompasses estimation, filtering and model choice. The centerpieces of the estimation algorithm (which relies on MCMC methods) are: (1) a reduced blocking scheme for sampling the free elements of the loading matrix and the factors and (2) a special method for sampling the parameters of the univariate SV process. The resulting algorithm is scalable in terms of series and factors and simulation-efficient. Methods for estimating the log-likelihood function and the filtered values of the time-varying volatilities and correlations are also provided. The performance and effectiveness of the inferential methods are extensively tested using simulated data where models up to 50 dimensions and 688 parameters are fit and studied. The performance of our model, in relation to various multivariate GARCH models, is also evaluated using a real data set of weekly returns on a set of 10 international stock indices. We consider the performance along two dimensions: the ability to correctly estimate the conditional covariance matrix of future returns and the unconditional and conditional coverage of the 5% and 1% value-at-risk (VaR) measures of four pre-defined portfolios.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号