Similar Documents
20 similar documents found.
1.
Several authors have proposed stochastic and non-stochastic approximations to the maximum likelihood estimate (MLE) for Gibbs point processes in modelling spatial point patterns with pairwise interactions. The approximations are necessary because of the difficulty of evaluating the normalizing constant. In this paper, we first provide a review of methods which yield crude approximations to the MLE. We also review methods based on Markov chain Monte Carlo techniques for which exact MLE has become feasible. We then present a comparative simulation study of the performance of such methods of estimation based on two simulation techniques, the Gibbs sampler and the Metropolis-Hastings algorithm, carried out for the Strauss model.
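As a concrete illustration of the simulation techniques named in entry 1, here is a minimal birth-death Metropolis-Hastings sampler for a Strauss process on the unit square. The parameter values (beta, gamma, R) and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Strauss process density (unnormalized): f(x) ∝ beta^n(x) * gamma^s(x),
# where s(x) counts pairs of points closer than R. Parameters are illustrative.
BETA, GAMMA, R = 100.0, 0.5, 0.05
AREA = 1.0  # area of the unit-square window

def n_close(u, pts):
    """Number of points in `pts` within distance R of point u."""
    if len(pts) == 0:
        return 0
    return int(np.sum(np.linalg.norm(pts - u, axis=1) < R))

def birth_death_mh(n_iter=20000):
    pts = np.empty((0, 2))
    for _ in range(n_iter):
        n = len(pts)
        if rng.random() < 0.5:                 # birth proposal: uniform new point
            u = rng.uniform(size=2)
            t = n_close(u, pts)
            if rng.random() < BETA * GAMMA**t * AREA / (n + 1):
                pts = np.vstack([pts, u])
        elif n > 0:                            # death proposal: remove random point
            i = rng.integers(n)
            rest = np.delete(pts, i, axis=0)
            t = n_close(pts[i], rest)
            if rng.random() < n / (BETA * GAMMA**t * AREA):
                pts = rest
    return pts

pts = birth_death_mh()
print(len(pts))  # realized number of points (repulsive: fewer than Poisson(BETA))
```

With gamma < 1 the acceptance ratio penalizes births near existing points, producing the regular (inhibitive) patterns the Strauss model is used for.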

2.
Maximum Likelihood Estimation of the Effective Spread
The effective spread is an important measure of the transaction costs of financial assets. Based on Roll's price model, this paper exploits the approximately normal distribution of the log-price range to propose an approximate maximum likelihood estimator of the effective spread, and uses numerical simulation to compare the accuracy of this new estimator with Roll's covariance estimator, the Bayesian estimator, and the High-Low estimator proposed in the earlier literature under a variety of conditions. The simulation results show that, both in the ideal setting of continuous trading and in the non-ideal setting where trading is discontinuous and prices cannot be fully observed, the maximum likelihood and High-Low estimators are more accurate than the covariance and Bayesian estimators; when volatility is relatively small, the maximum likelihood estimator is more accurate than the High-Low estimator; moreover, in the non-ideal setting, the maximum likelihood estimator is more robust than the High-Low estimator.
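Roll's covariance estimator, the benchmark this abstract compares against, follows from the fact that in Roll's model the serial covariance of price changes equals -s²/4, so s = 2·sqrt(-Cov(Δp_t, Δp_{t-1})). A minimal sketch; the spread, volatility, and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate Roll's price model: observed price = efficient price plus
# half-spread times trade direction (+1 buy, -1 sell).
true_spread, sigma_m, T = 0.10, 0.01, 100_000
m = np.cumsum(rng.normal(0.0, sigma_m, T))   # efficient (log-)price random walk
q = rng.choice([-1.0, 1.0], size=T)          # i.i.d. trade directions
p = m + 0.5 * true_spread * q                # observed transaction price

def roll_spread(prices):
    """Roll's covariance estimator: s = 2 * sqrt(-Cov(dp_t, dp_{t-1}))."""
    dp = np.diff(prices)
    cov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(-cov) if cov < 0 else np.nan

est = roll_spread(p)
print(est)  # close to true_spread = 0.10 for this sample size
```

In small samples the estimated covariance can turn positive, in which case the estimator is undefined (returned as NaN here); this fragility motivates the alternatives the abstract evaluates.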

3.
This paper develops a pure simulation-based approach for computing maximum likelihood estimates in latent state variable models using Markov chain Monte Carlo (MCMC) methods. Our MCMC algorithm simultaneously evaluates and optimizes the likelihood function without resorting to gradient methods. The approach relies on data augmentation, with insights similar to simulated annealing and evolutionary Monte Carlo algorithms. We prove a limit theorem in the degree of data augmentation and use this to provide standard errors and convergence diagnostics. The resulting estimator inherits the asymptotic sampling properties of maximum likelihood. We demonstrate the approach on two latent state models central to financial econometrics: a stochastic volatility model and a multivariate jump-diffusion model. We find that convergence to the MLE is fast, requiring only a small degree of augmentation.

4.
Numerical Tools for the Bayesian Analysis of Stochastic Frontier Models
In this paper we describe the use of modern numerical integration methods for making posterior inferences in composed error stochastic frontier models for panel data or individual cross-sections. Two Monte Carlo methods have been used in practical applications. We survey these two methods in some detail and argue that Gibbs sampling methods can greatly reduce the computational difficulties involved in analyzing such models.

5.
This paper presents results from a Monte Carlo study concerning inference with spatially dependent data. We investigate the impact of location/distance measurement errors upon the accuracy of parametric and nonparametric estimators of asymptotic variances. Nonparametric estimators are quite robust to such errors, method of moments estimators perform surprisingly well, and MLE estimators are very poor. We also present and evaluate a specification test based on a parametric bootstrap that has good power properties for the types of measurement error we consider.

6.
For reasons of time constraint and cost reduction, censoring is commonly employed in practice, especially in reliability engineering. Among various censoring schemes, progressive Type-I right censoring provides not only the practical advantage of known termination time but also greater flexibility to the experimenter in the design stage by allowing for the removal of test units at non-terminal time points. In this article, we consider a progressively Type-I censored life-test under the assumption that the lifetime of each test unit is exponentially distributed. For small to moderate sample sizes, a practical modification is proposed to the censoring scheme in order to guarantee a feasible life-test under progressive Type-I censoring. Under this setup, we obtain the maximum likelihood estimator (MLE) of the unknown mean parameter and derive the exact sampling distribution of the MLE under the condition that its existence is ensured. Using the exact distribution of the MLE as well as its asymptotic distribution and the parametric bootstrap method, we then discuss the construction of confidence intervals for the mean parameter and their performance is assessed through Monte Carlo simulations. Finally, an example is presented in order to illustrate all the methods of inference discussed here.
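Under the exponential assumption in entry 6, the MLE of the mean takes the familiar closed form "total time on test divided by number of observed failures", and it exists only when at least one failure is observed. A minimal sketch for ordinary (non-progressive) Type-I right censoring; the true mean, sample size, and censoring time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Exponential lifetimes observed under Type-I right censoring at time tau.
theta_true, n, tau = 10.0, 200, 8.0
life = rng.exponential(theta_true, n)
observed = np.minimum(life, tau)          # what the experimenter records
failed = life <= tau                      # indicator of an observed failure

d = int(failed.sum())                     # number of observed failures
ttt = observed.sum()                      # total time on test
theta_hat = ttt / d if d > 0 else np.nan  # MLE exists only if d >= 1
print(theta_hat)  # close to theta_true for this sample size
```

The same total-time-on-test formula carries over to progressive censoring, with removed units contributing their time on test up to the point of removal.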

7.
This paper considers the estimation of Kumbhakar et al. (J Prod Anal. doi:10.1007/s11123-012-0303-1, 2012) (KLH) four random components stochastic frontier (SF) model using MLE techniques. We derive the log-likelihood function of the model using results from the closed-skew normal distribution. Our Monte Carlo analysis shows that MLE is more efficient and less biased than the multi-step KLH estimator. Moreover, we obtain closed-form expressions for the posterior expected values of the random effects, used to estimate short-run and long-run (in)efficiency as well as random-firm effects. The model is general enough to nest most of the currently used panel SF models; hence, its appropriateness can be tested. This is exemplified by analyzing empirical results from three different applications.

8.
In Bayesian analysis of vector autoregressive models, and especially in forecasting applications, the Minnesota prior of Litterman is frequently used. In many cases other prior distributions provide better forecasts and are preferable from a theoretical standpoint. Several of these priors require numerical methods in order to evaluate the posterior distribution. Different ways of implementing Monte Carlo integration are considered. It is found that Gibbs sampling performs as well as, or better than, importance sampling and that the Gibbs sampling algorithms are less adversely affected by model size. We also report on the forecasting performance of the different prior distributions. © 1997 by John Wiley & Sons, Ltd.

9.
Markov chain Monte Carlo methods are frequently used in the analyses of genetic data on pedigrees for the estimation of probabilities and likelihoods which cannot be calculated by existing exact methods. In the case of discrete data, the underlying Markov chain may be reducible and care must be taken to ensure that reliable estimates are obtained. Potential reducibility thus has implications for the analysis of the mixed inheritance model, for example, where genetic variation is assumed to be due to one single locus of large effect and many loci each with a small effect. Similarly, reducibility arises in the detection of quantitative trait loci from incomplete discrete marker data. This paper aims to describe the estimation problem in terms of simple discrete genetic models and the single-site Gibbs sampler. Reducibility of the Gibbs sampler is discussed and some current methods for circumventing the problem outlined.

10.
Many applied researchers have to deal with spatially autocorrelated residuals (SAR). Available tests that identify spatial spillovers, as captured by a significant SAR parameter, are based on either maximum likelihood (MLE) or generalized method of moments (GMM) estimates. This paper illustrates the properties of various tests for the null hypothesis of a zero SAR parameter in a comprehensive Monte Carlo study. The main finding is that Wald tests generally perform well regarding both size and power even in small samples. The GMM-based Wald test is correctly sized even for non-normally distributed disturbances and small samples, and it exhibits a similar power as its MLE-based counterpart. Hence, for the applied researcher the GMM Wald test can be recommended, because it is easy to implement.

11.
We address the issue of modelling and forecasting macroeconomic variables using rich datasets by adopting the class of Vector Autoregressive Moving Average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (IOLS) estimator. We establish the consistency and asymptotic distribution of the estimator for weak and strong VARMA(p,q) models. Monte Carlo results show that IOLS is consistent and feasible for large systems, outperforming the MLE and other linear regression based efficient estimators under alternative scenarios. Our empirical application shows that VARMA models are feasible alternatives when forecasting with many predictors. We show that VARMA models outperform the AR(1), ARMA(1,1), Bayesian VAR, and factor models, considering different model dimensions.

12.
This paper proposes a two-step maximum likelihood estimation (MLE) procedure to deal with the problem of endogeneity in Markov-switching regression models. A joint estimation procedure provides us with an asymptotically most efficient estimator, but it is not always feasible, due to the ‘curse of dimensionality’ in the matrix of transition probabilities. A two-step estimation procedure, which ignores potential correlation between the latent state variables, suffers less from the ‘curse of dimensionality’, and it provides a reasonable alternative to the joint estimation procedure. In addition, our Monte Carlo experiments show that the two-step estimation procedure can be more efficient than the joint estimation procedure in finite samples, when there is zero or low correlation between the latent state variables.

13.
In this paper, we introduce the one-step generalized method of moments (GMM) estimation methods considered in Lee (2007a) and Liu, Lee, and Bollinger (2010) to spatial models that impose a spatial moving average process for the disturbance term. First, we determine the set of best linear and quadratic moment functions for GMM estimation. Second, we show that the optimal GMM estimator (GMME) formulated from this set is the most efficient estimator within the class of GMMEs formulated from the set of linear and quadratic moment functions. Our analytical results show that the one-step GMME can be more efficient than the quasi maximum likelihood estimator (QMLE), when the disturbance term is simply i.i.d. With an extensive Monte Carlo study, we compare its finite sample properties against the MLE, the QMLE and the estimators suggested in Fingleton (2008a).

14.
Differences in the size of spatial units and in other economic characteristics often give rise to spatial heteroskedasticity. This paper presents three estimation methods for the heteroskedasticity problem in general spatial models. The first parameterizes the form of the heteroskedasticity to overcome the shortage of degrees of freedom and is implemented via ML estimation. When the form of the heteroskedasticity is unknown, an iterative GMM estimator based on 2SLS and a more direct MCMC sampling approach are employed instead, with the MCMC method proving particularly elegant. Monte Carlo simulations show that, given the form of the heteroskedasticity, ML estimation via the parameterization still achieves good results; when the form is unknown, the other two methods converge to the ML results as the sample size grows.

15.
Recent developments in Markov chain Monte Carlo [MCMC] methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
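A classic instance of the Gibbs-sampling-with-data-augmentation strategy described in entry 15 is Bayesian probit regression in the style of Albert and Chib: augmenting with latent z_i ~ N(x_i'β, 1), y_i = 1{z_i > 0}, makes both full conditionals standard distributions. A minimal sketch under a flat prior; the sample size, coefficients, and chain length are illustrative assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

# Simulated probit data: y = 1{X beta + eps > 0}, eps ~ N(0, 1).
n, beta_true = 500, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def gibbs_probit(X, y, n_draws=1500, burn=500):
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta, draws = np.zeros(k), []
    for it in range(n_draws):
        mu = X @ beta
        # z_i | beta, y_i: N(mu_i, 1) truncated to (0, inf) if y=1, (-inf, 0) if y=0
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # beta | z: N((X'X)^{-1} X'z, (X'X)^{-1}) under a flat prior
        beta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
        if it >= burn:
            draws.append(beta)
    return np.array(draws)

draws = gibbs_probit(X, y)
print(draws.mean(axis=0))  # posterior means, close to beta_true
```

Note that the probit likelihood itself involves normal CDF terms, yet neither conditional above requires evaluating them, which is exactly the computational advantage the abstract highlights.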

16.
In this paper, we introduce a threshold stochastic volatility model with explanatory variables. The Bayesian method is considered in estimating the parameters of the proposed model via the Markov chain Monte Carlo (MCMC) algorithm. Gibbs sampling and Metropolis–Hastings sampling methods are used for drawing the posterior samples of the parameters and the latent variables. In the simulation study, the accuracy of the MCMC algorithm, the sensitivity of the algorithm to model assumptions, and the robustness of the posterior distribution under different priors are considered. Simulation results indicate that our MCMC algorithm converges fast and that the posterior distribution is robust under different priors and model assumptions. A real data example is analyzed to explain the asymmetric behavior of stock markets.

17.
Monte Carlo simulation is an important method for analyzing risk in mining investment. This paper introduces the idea behind the Monte Carlo method and its concrete steps, together with common ways of generating random numbers; it then describes successful applications of the method, and finally gives a brief analysis of its advantages as well as problems encountered in its current use.
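The steps this abstract refers to — specify distributions for the uncertain inputs, draw random samples, evaluate the project outcome for each draw, and summarize the resulting distribution — can be sketched as follows. The project figures and input distributions are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo risk analysis for a hypothetical mining project: simulate NPV
# under uncertain price, operating cost, and production volume.
N = 100_000
price = rng.normal(50.0, 8.0, N)             # $/tonne of ore (assumed)
cost = rng.normal(30.0, 5.0, N)              # $/tonne operating cost (assumed)
output = rng.triangular(0.8, 1.0, 1.1, N)    # Mt/year production (assumed)
capex, years, r = 120.0, 10, 0.08            # $M investment, mine life, discount rate

annual_cash = (price - cost) * output        # $M per year, one draw per scenario
annuity = (1 - (1 + r) ** -years) / r        # present-value factor for a level annuity
npv = annual_cash * annuity - capex

print(f"mean NPV: {npv.mean():.1f} $M")
print(f"P(NPV < 0): {(npv < 0).mean():.3f}")
```

The output is a full distribution of project value rather than a single point estimate, so the probability of loss and other tail measures come out of the same simulation for free.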

18.
Parameter uncertainty has fuelled criticism of the robustness of results from computable general equilibrium models. This has led to the development of alternative sensitivity analysis approaches. Researchers have used Monte Carlo analysis for systematic sensitivity analysis because of its flexibility, but Monte Carlo analysis may yield biased simulation results. Gaussian quadratures have also been widely applied, although they can be difficult to apply in practice. This paper applies an alternative approach to systematic sensitivity analysis, Monte Carlo filtering, and examines how its results compare to both the Monte Carlo and Gaussian quadrature approaches. It does so via an application to rural development policies in Aberdeenshire, Scotland. We find that Monte Carlo filtering outperforms the conventional Monte Carlo approach and is a viable alternative when a Gaussian quadrature approach cannot be applied or is too complex to implement.
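Monte Carlo filtering, the approach entry 18 advocates, can be illustrated in miniature: sample the uncertain parameters, run the model, split the runs into "behavioural" and "non-behavioural" sets according to an output criterion, and compare the two input samples parameter by parameter (commonly with a two-sample Kolmogorov-Smirnov test). The toy model, threshold, and parameter names below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)

# Toy model with one influential parameter (a) and one nearly irrelevant one (b).
N = 5000
a = rng.uniform(0, 1, N)
b = rng.uniform(0, 1, N)
output = 3.0 * a + 0.1 * b + rng.normal(0, 0.2, N)

behavioural = output > 1.5   # acceptance criterion on the model output

ks = {}
for name, par in [("a", a), ("b", b)]:
    # Large KS distance between the two conditional input samples flags
    # the parameter as driving the behavioural/non-behavioural split.
    stat, pval = ks_2samp(par[behavioural], par[~behavioural])
    ks[name] = stat
    print(f"{name}: KS statistic = {stat:.3f}")
```

Here the filtered distributions of `a` separate sharply while those of `b` barely differ, which is how the method ranks parameters by influence without any quadrature design.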

19.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units; for example, with employment of a sampling step that selects geographically indexed clusters of units. A sampling-weighted pseudo-posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between coclustered units inflates the scale of the resulting pseudo-posterior covariance matrix, which has been shown to induce undercoverage of the credibility sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions are different between each of the pseudo-maximum likelihood estimate (MLE), the pseudo-posterior and the MLE under simple random sampling. Through insights from survey-sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast postprocessing step to Markov chain Monte Carlo draws of the pseudo-posterior distribution. This adjustment projects the pseudo-posterior covariance matrix such that the nominal coverage is approximately achieved. We make an application to the National Survey on Drug Use and Health as a motivating example and we demonstrate the efficacy of our scale and shape projection procedure on synthetic data on several common archetypes of survey designs.

20.
Effective linkage detection and gene mapping requires analysis of data jointly on members of extended pedigrees, jointly at multiple genetic markers. Exact likelihood computation is then often infeasible, but Markov chain Monte Carlo (MCMC) methods permit estimation of posterior probabilities of genome sharing among relatives, conditional upon marker data. In principle, MCMC also permits estimation of linkage analysis location score curves, but in practice effective MCMC samplers are hard to find. Although the whole-meiosis Gibbs sampler (M-sampler) performs well in some cases, for extended pedigrees and tightly linked markers better samplers are needed. However, using the M-sampler as a proposal distribution in a Metropolis-Hastings algorithm does allow genetic interference to be incorporated into the analysis.
