Similar Literature
20 similar documents found (search time: 46 ms).
1.
ABSTRACT

Parameter uncertainty has fuelled criticism of the robustness of results from computable general equilibrium models, which has led to the development of alternative sensitivity analysis approaches. Researchers have used Monte Carlo analysis for systematic sensitivity analysis because of its flexibility, but Monte Carlo analysis may yield biased simulation results. Gaussian quadratures have also been widely applied, although they can be difficult to use in practice. This paper applies an alternative approach to systematic sensitivity analysis, Monte Carlo filtering, and examines how its results compare to both the Monte Carlo and Gaussian quadrature approaches. It does so via an application to rural development policies in Aberdeenshire, Scotland. We find that Monte Carlo filtering outperforms the conventional Monte Carlo approach and is a viable alternative when a Gaussian quadrature approach cannot be applied or is too complex to implement.
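The filtering step this approach relies on can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's CGE implementation: draws of the uncertain parameters are split into "behavioural" and "non-behavioural" sets according to a threshold on the simulated output, and a two-sample Kolmogorov–Smirnov statistic measures how strongly each parameter drives that split. The model function, parameter ranges and threshold below are all invented for the sketch.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical stand-in for a CGE model: one output driven by two uncertain parameters.
def model(elasticity, subsidy):
    return 100 * elasticity - 20 * subsidy**2 + rng.normal(0, 5, size=elasticity.shape)

n = 10_000
elasticity = rng.uniform(0.5, 1.5, n)    # uncertain parameter 1
subsidy = rng.uniform(0.0, 2.0, n)       # uncertain parameter 2
output = model(elasticity, subsidy)

# Monte Carlo filtering: classify each draw as behavioural (B) or non-behavioural (NB)
# according to a policy-relevant condition on the simulated output.
behavioural = output > 60.0              # illustrative threshold

for name, draws in [("elasticity", elasticity), ("subsidy", subsidy)]:
    # A large KS distance between the B and NB subsamples of a parameter
    # indicates that the parameter is influential for the condition.
    res = ks_2samp(draws[behavioural], draws[~behavioural])
    print(f"{name}: KS={res.statistic:.3f}, p={res.pvalue:.3g}")
```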

2.
3.
Today quasi-Monte Carlo methods are used successfully in computational finance and economics as an alternative to the Monte Carlo method. One drawback of these methods, however, is the lack of a practical way of estimating the error. To address this issue, several researchers introduced so-called randomized quasi-Monte Carlo methods over the last decade. In this paper we present a survey of randomized quasi-Monte Carlo methods and compare their efficiency with that of the Monte Carlo method in pricing certain securities. We also investigate the effects of the Box–Muller and inverse transformation techniques when they are applied to low-discrepancy sequences.
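As a hedged sketch of the comparison described above (the integrand is a toy example, not one of the paper's securities): scipy's scrambled Sobol' generator provides a randomized low-discrepancy sequence; the inverse transform maps its coordinates to normals one-for-one, preserving the low-discrepancy structure, while Box–Muller consumes the coordinates in pairs, which can degrade that structure.

```python
import numpy as np
from scipy.stats import norm, qmc

dim = 4
sobol = qmc.Sobol(d=dim, scramble=True, seed=42)  # scrambling = randomization
u = sobol.random_base2(m=12)                      # 2^12 points in [0, 1)^4

# Inverse transform: apply the normal quantile function coordinate-wise.
z_inv = norm.ppf(u)

# Box-Muller: consume the coordinates in pairs (dim must be even).
u1, u2 = u[:, 0::2], u[:, 1::2]
r = np.sqrt(-2.0 * np.log(np.clip(u1, 1e-12, None)))
z_bm = np.concatenate([r * np.cos(2 * np.pi * u2),
                       r * np.sin(2 * np.pi * u2)], axis=1)

# Toy integrand: E[exp(mean(Z))] = exp(1 / (2 * dim)) in closed form.
exact = np.exp(1.0 / (2 * dim))
for label, z in [("inverse transform", z_inv), ("Box-Muller", z_bm)]:
    est = np.exp(z.mean(axis=1)).mean()
    print(f"{label}: estimate={est:.6f}, abs. error={abs(est - exact):.2e}")
```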

4.
Monte Carlo simulation is an important method for analysing risk in mining investment. This paper introduces the idea behind the Monte Carlo simulation method and its concrete steps, along with common ways of generating random numbers; it then describes a successful application of the method, and finally gives a brief analysis of the method's advantages and of the problems that currently arise in its use.
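A minimal sketch of this kind of risk analysis, with invented distributions for a hypothetical project (every number below is illustrative, not from the paper): each trial draws the risk factors, computes a net present value, and the resulting NPV distribution summarizes the investment risk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, years, rate = 100_000, 10, 0.08          # trials, project life, discount rate

# Illustrative (invented) distributions of the risk factors.
price = rng.normal(60.0, 10.0, n)           # metal price per tonne
grade = rng.triangular(0.8, 1.0, 1.3, n)    # ore-grade multiplier
opcost = rng.uniform(30.0, 45.0, n)         # operating cost per tonne
capex = 150.0                               # fixed initial investment (millions)

annual_cash = 0.5 * (price * grade - opcost)            # millions/year at 0.5 Mt/yr
discount = np.sum(1.0 / (1.0 + rate) ** np.arange(1, years + 1))
npv = annual_cash * discount - capex

print(f"mean NPV: {npv.mean():.1f}M, P(NPV < 0): {(npv < 0).mean():.1%}")
print(f"5%-95% range: [{np.quantile(npv, 0.05):.1f}, {np.quantile(npv, 0.95):.1f}]M")
```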

5.
An article by Chan et al. (2013) published in the Journal of Business and Economic Statistics introduces a new model for trend inflation, in which the trend is allowed to evolve according to a bounded random walk. In order to draw the latent states from their respective conditional posteriors, the authors use accept–reject Metropolis–Hastings procedures. We reproduce their results using particle Markov chain Monte Carlo (PMCMC), which approaches the drawing of the latent states from a different technical angle by combining Markov chain Monte Carlo and sequential Monte Carlo methods. To conclude: we are able to reproduce the results of Chan et al. (2013).
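A minimal sketch of a bounded random walk of the kind described (the band, innovation scale and starting value are invented, and this is forward simulation only, not the authors' PMCMC estimation): innovations are redrawn whenever a step would leave the band, so the trend stays inside [a, b].

```python
import numpy as np

rng = np.random.default_rng(7)
T, a, b, sigma = 200, 0.0, 5.0, 0.2   # horizon, bounds, innovation s.d. (illustrative)

tau = np.empty(T)
tau[0] = 2.0
for t in range(1, T):
    # Draw from the random-walk transition truncated to [a, b]:
    # redraw the innovation until the proposed step stays inside the band.
    while True:
        prop = tau[t - 1] + rng.normal(0.0, sigma)
        if a <= prop <= b:
            tau[t] = prop
            break

print(f"trend stays in [{tau.min():.2f}, {tau.max():.2f}]")
```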

6.
A simulation-based non-linear filter is developed for prediction and smoothing in non-linear and/or non-normal structural time-series models. Recursive algorithms for the weighting functions are derived by applying Monte Carlo integration. Through Monte Carlo experiments, it is shown that (1) for a small number of random draws (or nodes) our simulation-based density estimator using Monte Carlo integration (SDE) performs better than Kitagawa's numerical integration procedure (KNI), and (2) SDE and KNI give less biased parameter estimates than the extended Kalman filter (EKF). Finally, estimation of per capita final consumption data is presented as an application of the non-linear filtering problem.

7.
An earlier paper [Kloek and Van Dijk (1978)] is extended in three ways. First, Monte Carlo integration is performed in a nine-dimensional parameter space of Klein's model I [Klein (1950)]. Second, Monte Carlo is used as a tool for the elicitation of a uniform prior on a finite region by making use of several types of prior information. Third, special attention is given to procedures for the construction of importance functions which make use of nonlinear optimization methods.

8.
This article reviews the application of some advanced Monte Carlo techniques in the context of multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations, which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error relative to i.i.d. sampling from this most accurate approximation. All of these ideas originated for cases where exact sampling from the coupled pairs in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider some Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies that facilitate the application of MLMC within these methods.
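The telescoping representation referred to above writes the most accurate approximation as $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}]$, so cheap coarse levels absorb most of the variance while the coupled corrections shrink with level. Below is a minimal sketch under assumed dynamics (Euler discretization of geometric Brownian motion, levels coupled by sharing Brownian increments); the per-level sample allocation is simplified rather than the optimal one.

```python
import numpy as np

rng = np.random.default_rng(3)
T, mu, sig, s0 = 1.0, 0.05, 0.2, 1.0

def euler_pair(n_paths, level):
    """Coupled payoffs (fine, coarse) at one level, sharing Brownian increments."""
    nf = 2 ** level                      # fine steps; coarse uses nf // 2
    dt = T / nf
    dw = rng.normal(0.0, np.sqrt(dt), (n_paths, nf))
    sf = np.full(n_paths, s0)
    for k in range(nf):                  # fine Euler path
        sf = sf * (1 + mu * dt + sig * dw[:, k])
    if level == 0:
        return sf, np.zeros(n_paths)
    sc = np.full(n_paths, s0)
    dwc = dw[:, 0::2] + dw[:, 1::2]      # coarse increments = summed fine ones
    for k in range(nf // 2):             # coarse Euler path
        sc = sc * (1 + mu * 2 * dt + sig * dwc[:, k])
    return sf, sc

L = 5
estimate = 0.0
for level in range(L + 1):
    n = 40_000 // 2 ** level             # simplified (not optimal) allocation
    fine, coarse = euler_pair(n, level)
    # Level 0 estimates E[P_0]; higher levels estimate E[P_l - P_{l-1}].
    estimate += fine.mean() if level == 0 else (fine - coarse).mean()

print(f"MLMC estimate of E[S_T]: {estimate:.4f}  (exact {s0 * np.exp(mu * T):.4f})")
```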

9.
This paper compares two methods for undertaking likelihood-based inference in dynamic equilibrium economies: a sequential Monte Carlo filter and the Kalman filter. The sequential Monte Carlo filter exploits the nonlinear structure of the economy and evaluates the likelihood function of the model by simulation methods. The Kalman filter estimates a linearization of the economy around the steady state. We report two main results. First, both for simulated and for real data, the sequential Monte Carlo filter delivers a substantially better fit of the model to the data as measured by the marginal likelihood. This is true even for a nearly linear case. Second, the differences in terms of point estimates, although relatively small in absolute values, have important effects on the moments of the model. We conclude that the nonlinear filter is a superior procedure for taking models to the data.
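As a hedged illustration of how a sequential Monte Carlo (particle) filter evaluates a likelihood by simulation, here is a minimal bootstrap filter for a linear Gaussian state-space model, the case where the Kalman filter would be exact and the two can be compared; the model and parameter values are invented for the sketch, not the paper's economy.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
T, phi, sig_x, sig_y = 100, 0.9, 0.5, 1.0    # illustrative parameters

# Simulate a linear Gaussian state space: x_t = phi x_{t-1} + e_t, y_t = x_t + v_t.
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0, sig_x)
y = x + rng.normal(0, sig_y, T)

def bootstrap_loglik(y, n_particles=2000):
    """Particle-filter estimate of the log-likelihood."""
    p = rng.normal(0, sig_x / np.sqrt(1 - phi**2), n_particles)  # stationary init
    loglik = 0.0
    for t in range(len(y)):
        p = phi * p + rng.normal(0, sig_x, n_particles)      # propagate particles
        w = norm.pdf(y[t], loc=p, scale=sig_y)               # weight by likelihood
        loglik += np.log(w.mean())                           # marginal contribution
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        p = p[idx]                                           # multinomial resampling
    return loglik

print(f"particle-filter log-likelihood: {bootstrap_loglik(y):.2f}")
```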

10.
Environmental sustainability problems frequently require decision-making in situations containing considerable uncertainty. Monte Carlo simulation methods have been used in a wide array of environmental planning settings to incorporate these uncertain features, with simulation-generated outputs commonly displayed as probability distributions. Recently, simulation decomposition (SD) has enhanced the visualization of the cause-effect relationships of multi-variable combinations of inputs on the corresponding simulated outputs. SD partitions sub-distributions of the Monte Carlo outputs by pre-classifying selected input variables into states, grouping combinations of these states into scenarios, and then collecting the simulated outputs attributable to each multi-variable input scenario. Since it is straightforward to visually project the contribution of the subdivided scenarios onto the overall output, SD can illuminate previously unidentified connections between multi-variable combinations of inputs and the outputs. SD generalizes to any Monte Carlo method with negligible additional computational overhead and can therefore be readily extended to most environmental analyses that use simulation models. This study demonstrates the efficacy of SD for environmental sustainability decision-making in a carbon footprint analysis case for wooden pallets.
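A minimal sketch of the simulation-decomposition idea, with an invented two-input carbon-footprint model (all names and numbers are illustrative, not the study's case data): the two inputs are pre-classified into states, the state combinations form scenarios, and the Monte Carlo outputs are collected per scenario so each scenario's contribution to the overall output distribution can be displayed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Invented inputs for a pallet carbon-footprint model.
distance = rng.uniform(50, 500, n)           # transport distance, km
reuse = rng.integers(1, 21, n)               # number of reuse cycles
footprint = 25.0 / reuse + 0.02 * distance   # kg CO2e per pallet use (toy model)

# Pre-classify each input variable into states ...
dist_state = np.where(distance < 250, "short-haul", "long-haul")
reuse_state = np.where(reuse < 10, "low-reuse", "high-reuse")
# ... and group the state combinations into scenarios.
scenarios = np.char.add(np.char.add(dist_state, " / "), reuse_state)

# Collect the simulated outputs attributable to each scenario.
for s in np.unique(scenarios):
    sub = footprint[scenarios == s]
    print(f"{s:<25} share={len(sub) / n:5.1%}  "
          f"mean={sub.mean():6.2f}  p95={np.quantile(sub, 0.95):6.2f}")
```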

11.
We describe in this paper a variance reduction method based on control variates. The technique uses the fact that, if all stochastic assets but one are replaced in the payoff function by their mean, the resulting integral can most often be evaluated in closed form. We exploit this idea by using the univariate payoff as a control variate and develop a general Monte Carlo procedure, called Mean Monte Carlo (MMC). The method is then tested on a variety of multifactor options and compared to other Monte Carlo approaches and numerical techniques. The method is easy to apply, broadly applicable, and gives good results, especially in low to medium dimensions and in high-volatility environments.
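A hedged sketch of the MMC idea for a two-asset payoff (all parameter values invented): replacing the second asset by its mean yields a univariate payoff whose expectation can be computed accurately — here by one-dimensional quadrature, standing in for the closed form the abstract mentions — and that payoff serves as the control variate.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

rng = np.random.default_rng(4)
n, K = 50_000, 2.0
m1, s1, m2, s2 = 0.0, 0.3, 0.0, 0.3       # lognormal parameters (illustrative)

z1, z2 = rng.normal(size=(2, n))
a1, a2 = np.exp(m1 + s1 * z1), np.exp(m2 + s2 * z2)   # two lognormal assets
mean_a2 = np.exp(m2 + s2**2 / 2)

payoff = np.maximum(a1 + a2 - K, 0.0)                 # target: basket call
control = np.maximum(a1 + mean_a2 - K, 0.0)           # univariate control variate

# Expectation of the control: a one-dimensional integral over the density of z1.
ev_control, _ = quad(lambda z: max(np.exp(m1 + s1 * z) + mean_a2 - K, 0.0)
                     * norm.pdf(z), -10, 10)

beta = np.cov(payoff, control)[0, 1] / control.var()  # control-variate coefficient
mmc = payoff.mean() - beta * (control.mean() - ev_control)

resid = payoff - beta * control
print(f"plain MC: {payoff.mean():.5f} +/- {payoff.std() / np.sqrt(n):.5f}")
print(f"MMC:      {mmc:.5f} +/- {resid.std() / np.sqrt(n):.5f}")
```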

12.
ABSTRACT This paper investigates, through Monte Carlo experiments, the size and power properties of a bootstrapped trace statistic in two prototypical DGPs. The Monte Carlo results indicate that the ordinary bootstrap has size and power properties similar to those of inference procedures based on asymptotic critical values. Considering empirical size, the stationary bootstrap is found to provide a uniform improvement over the ordinary bootstrap when the dynamics are underspecified, and its use as a diagnostic tool is suggested. In two illustrative examples this seems to work, and again it appears that the bootstrap incorporates the finite-sample correction required for the asymptotic critical values to apply.
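A minimal sketch of the stationary bootstrap resampling scheme used above, applied here to a toy AR(1) series and a simple statistic rather than the paper's trace statistic; the resampling construction itself (geometric block lengths with wrap-around, as in Politis and Romano) is standard.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy AR(1) series standing in for the data behind the trace statistic.
T, phi = 200, 0.7
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()

def stationary_bootstrap(x, p=0.1):
    """One stationary-bootstrap resample: geometric block lengths, wrap-around."""
    T = len(x)
    out, pos = np.empty(T), rng.integers(T)
    for t in range(T):
        out[t] = x[pos % T]
        # With prob. p start a new block at a random point, else extend the block.
        pos = rng.integers(T) if rng.random() < p else pos + 1
    return out

# Bootstrap distribution of a statistic (here the lag-1 autocorrelation).
stats = [np.corrcoef(b[:-1], b[1:])[0, 1]
         for b in (stationary_bootstrap(x) for _ in range(500))]
print(f"lag-1 autocorr: sample={np.corrcoef(x[:-1], x[1:])[0, 1]:.3f}, "
      f"bootstrap 90% CI=({np.quantile(stats, 0.05):.3f}, "
      f"{np.quantile(stats, 0.95):.3f})")
```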

13.
14.
In this paper we present an exact maximum likelihood treatment for the estimation of a Stochastic Volatility in Mean (SVM) model based on Monte Carlo simulation methods. The SVM model incorporates the unobserved volatility as an explanatory variable in the mean equation. The same extension has been developed elsewhere for Autoregressive Conditional Heteroscedastic (ARCH) models, known as the ARCH in Mean (ARCH-M) model. The estimation of ARCH models is relatively easy compared with that of the Stochastic Volatility (SV) model; however, efficient Monte Carlo simulation methods have been developed to overcome the difficulties of SV estimation. The details of the modifications required for estimating the volatility-in-mean effect are presented in this paper, together with a Monte Carlo study investigating the finite sample properties of the SVM estimators. Taking these developments in estimation methods into account, we regard SV and SVM models as practical alternatives to their ARCH counterparts, and it is therefore of interest to study and compare the two classes of volatility models. We present an empirical study of the intertemporal relationship between stock index returns and their volatility for the United Kingdom, the United States and Japan. This phenomenon has been discussed in the financial economic literature but has proved hard to find empirically. We provide evidence of a negative but weak relationship between returns and contemporaneous volatility, which is indirect evidence of a positive relation between the expected components of the return and the volatility process.
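A common way of writing the volatility-in-mean idea — given here as a generic sketch whose notation need not match the paper's — adds the unobserved volatility state to the return equation:

```latex
% Stochastic Volatility in Mean: generic sketch, notation illustrative
\begin{aligned}
  y_t &= \mu + d\,\sigma_t^2 + \sigma_t \varepsilon_t,
      &\quad \varepsilon_t &\sim \mathcal{N}(0,1),\\
  \ln \sigma_{t+1}^2 &= \gamma + \phi \ln \sigma_t^2 + \sigma_\eta \eta_t,
      &\quad \eta_t &\sim \mathcal{N}(0,1).
\end{aligned}
```

The coefficient d carries the volatility-in-mean effect; the negative but weak relationship reported above corresponds to a negative estimate of the coefficient on contemporaneous volatility.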

15.
Statistica Neerlandica (1960), 22(3), 179–198
Summary: This paper describes an experiment with "importance sampling" to show how much reduction in computation time and sample size can be achieved in comparison with the usual Monte Carlo method. A comparison is made between each of three methods of "importance sampling" and the usual Monte Carlo method by the determination of the expression

Of the three methods A, B and C, the first uses the shifted exponential distribution, the second the gamma distribution, and the third the exponential distribution with a modified parameter. All three methods have smaller variances, ranges and sample sizes than the usual Monte Carlo method; their order of preference is A, B, C. With respect to computing time, only method A is significantly better, so only method A is an improvement in respect of both sample size and computing time.
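A hedged reconstruction of the mechanics behind a method-A-style scheme (the quantity being estimated is invented here, since the paper's expression did not survive extraction): to estimate a tail expectation under an exponential law, one samples from an exponential distribution shifted into the region of interest and reweights each draw by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(8)
n, c = 20_000, 5.0                 # sample size, tail threshold (illustrative)

# Target: E[X * 1{X > c}] for X ~ Exp(1); exact value (c + 1) * exp(-c).
exact = (c + 1) * np.exp(-c)

# Usual Monte Carlo: most draws fall below c and contribute nothing.
x = rng.exponential(1.0, n)
mc = (x * (x > c)).mean()

# Shifted-exponential importance sampling: proposal supported on [c, inf).
y = c + rng.exponential(1.0, n)           # density g(y) = exp(-(y - c)), y > c
weights = np.exp(-y) / np.exp(-(y - c))   # likelihood ratio f(y)/g(y) = exp(-c)
is_est = (y * weights).mean()

print(f"exact={exact:.4e}, plain MC={mc:.4e}, importance sampling={is_est:.4e}")
```

Every proposal draw lands in the region that matters, so the importance-sampling estimator attains the same accuracy with a far smaller sample, which is the reduction the experiment measures.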

16.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space, which may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods such as importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be 'close' to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models, and the results indicate the possible usefulness of the neural network approach.

17.
This article presents a formal explanation of the forecast combination puzzle: simple combinations of point forecasts are repeatedly found to outperform sophisticated weighted combinations in empirical applications. The explanation lies in the effect of finite-sample error in estimating the combining weights. A small Monte Carlo study and a reappraisal of an empirical study by Stock and Watson [Federal Reserve Bank of Richmond Economic Quarterly (2003) Vol. 89/3, pp. 71–90] support this explanation. The Monte Carlo evidence, together with a large-sample approximation to the variance of the combining weight, also supports the popular recommendation to ignore forecast error covariances in estimating the weight.
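The puzzle can be reproduced in a few lines. In this hedged sketch (an invented error covariance, not Stock and Watson's data), two unbiased forecasts are combined with the infeasible optimal weight, with a weight estimated from a short training window, and with a simple average; the sampling error in the estimated weight can more than offset its theoretical advantage over the 50/50 average.

```python
import numpy as np

rng = np.random.default_rng(9)
reps, train, test = 2_000, 30, 200
s1, s2, rho = 1.0, 1.1, 0.3                          # error s.d.s and correlation
cov = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
w_opt = (cov[1, 1] - cov[0, 1]) / (cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])

mse = {"optimal": 0.0, "estimated": 0.0, "simple average": 0.0}
for _ in range(reps):
    e = rng.multivariate_normal([0, 0], cov, train + test)
    etr, ete = e[:train], e[train:]
    chat = np.cov(etr.T)                              # estimated error covariance
    w_hat = ((chat[1, 1] - chat[0, 1])
             / (chat[0, 0] + chat[1, 1] - 2 * chat[0, 1]))
    for name, w in [("optimal", w_opt), ("estimated", w_hat),
                    ("simple average", 0.5)]:
        # Combined forecast error: w * e1 + (1 - w) * e2, evaluated out of sample.
        mse[name] += np.mean((w * ete[:, 0] + (1 - w) * ete[:, 1]) ** 2) / reps

for name, v in mse.items():
    print(f"{name:>14}: MSE={v:.4f}")
```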

18.
This paper develops estimators for dynamic microeconomic models with serially correlated unobserved state variables using sequential Monte Carlo methods to estimate the parameters and the distribution of the unobservables. If persistent unobservables are ignored, the estimates can be subject to a dynamic form of sample selection bias. We focus on single-agent dynamic discrete-choice models and dynamic games of incomplete information. We propose a full-solution maximum likelihood procedure and a two-step method and use them to estimate an extended version of the capital replacement model of Rust with the original data and in a Monte Carlo study.

19.
In this paper we use a Monte Carlo study to investigate the finite sample properties of the Bayesian estimator obtained by the Gibbs sampler and of its classical counterpart (the MLE) for a stochastic frontier model. Our Monte Carlo results show that the MSE performance of the Gibbs-sampling estimates is substantially better than that of the MLE.
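As a hedged, toy-scale illustration of the Gibbs side of such a comparison (a conjugate normal model, not the paper's stochastic frontier model): the sampler alternates between the full conditionals of the mean and the variance, and the posterior mean is compared with the MLE.

```python
import numpy as np

rng = np.random.default_rng(10)
data = rng.normal(2.0, 1.5, 30)          # toy data; small n, where shrinkage matters
n, ybar = len(data), data.mean()

# Priors: mu ~ N(m0, t0sq), sigma^2 ~ InvGamma(a0, b0)   (illustrative values)
m0, t0sq, a0, b0 = 0.0, 10.0, 2.0, 2.0

draws, mu, sig2 = [], 0.0, 1.0
for it in range(6_000):
    # mu | sigma^2, y : normal full conditional
    var = 1.0 / (n / sig2 + 1.0 / t0sq)
    mu = rng.normal(var * (n * ybar / sig2 + m0 / t0sq), np.sqrt(var))
    # sigma^2 | mu, y : inverse-gamma full conditional
    sig2 = 1.0 / rng.gamma(a0 + n / 2,
                           1.0 / (b0 + 0.5 * np.sum((data - mu) ** 2)))
    if it >= 1_000:                       # discard burn-in
        draws.append(mu)

print(f"Gibbs posterior mean of mu: {np.mean(draws):.3f}  (MLE: {ybar:.3f})")
```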

