Similar Literature (20 results)
1.
The generalized likelihood ratio (GLR) method is a recently introduced gradient estimation method for handling discontinuities in a wide range of sample performances. We put the GLR methods from previous work into a single framework, simplify the regularity conditions that justify the unbiasedness of GLR, and relax some of those conditions that are difficult to verify in practice. Moreover, we combine GLR with conditional Monte Carlo methods and randomized quasi-Monte Carlo methods to reduce the variance. Numerical experiments show that the variance reduction can be significant in various applications.
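The conditional Monte Carlo idea mentioned in the abstract can be illustrated with a minimal, hypothetical sketch (not the GLR estimator itself): to estimate P(X + Y > c) for independent standard normals, the discontinuous indicator is replaced by its smooth conditional expectation given X, which has the same mean but lower variance. The threshold c and the distributions are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
c = 2.5  # hypothetical threshold for this toy example

x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Plain estimator of P(X + Y > c): an indicator, hence discontinuous in the inputs.
plain = (x + y > c).mean()

# Conditional Monte Carlo: integrate Y out analytically given X, replacing
# the indicator by the smooth conditional probability
#   P(Y > c - X | X) = 1 - Phi(c - X),
# which has the same expectation but lower variance.
Phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
cond = (1.0 - Phi(c - x)).mean()

print(plain, cond)
```

Both estimates target P(N(0, 2) > 2.5) ≈ 0.0385; the conditional estimator averages a smooth function, which is also what makes gradient estimation through it tractable.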

2.
Monte Carlo methods are used to compare the small-sample properties of several approaches to seasonal adjustment when the objective is to estimate regression coefficients. The methods compared are band spectrum regression, the dummy variable method, and the moving average method. The results indicate that band spectrum regression has superior small-sample properties compared with the dummy variable method, which in turn seems preferable to ordinary least squares and to the moving average method.

3.
《Statistica Neerlandica》1960,22(3):179-198
Summary  This paper describes an experiment with "importance sampling", to show how much reduction of the computation time and sample size can be achieved in comparison with the usual Monte Carlo method. A comparison is made between each of the three methods of "importance sampling" and the usual Monte Carlo method by the determination of the expression

Of the three methods A, B and C, the first uses the shifted exponential distribution, the second the gamma distribution, and the third the exponential distribution with a modified parameter. All three methods have smaller variances, ranges and sample sizes than the usual Monte Carlo method. Their order of preference is A, B, C. With respect to computing time, only method A is significantly better, so only method A is an improvement in both sample size and computing time.
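The shifted-exponential idea behind method A can be sketched in a few lines. The sketch below is a hypothetical reconstruction (the integral studied in the paper is not reproduced in this record): it estimates the tail probability P(X > c) for X ~ Exp(1). Shifting the proposal into the rare region makes the likelihood ratio constant, so for this particular integrand the importance-sampling estimator is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
c = 3.0  # hypothetical threshold; target is P(X > c) for X ~ Exp(1)

# Plain Monte Carlo: sample X ~ Exp(1) directly and count exceedances.
x = rng.exponential(1.0, n)
plain = (x > c).mean()

# Importance sampling with a shifted exponential proposal Y = c + Exp(1),
# so every draw lands in the rare region; reweight by the likelihood ratio
#   f(y)/g(y) = exp(-y) / exp(-(y - c)) = exp(-c)   for y > c.
y = c + rng.exponential(1.0, n)
weights = np.exp(-c) * np.ones(n)  # constant ratio here => zero-variance estimator
is_est = weights.mean()

print(plain, is_est)
```

The constant weight is special to this toy target; in general the ratio varies with y and the variance reduction, while often large, is not total.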

4.
This article reviews the application of some advanced Monte Carlo techniques in the context of multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations, which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider some Markov chain Monte Carlo and sequential Monte Carlo methods, which have been introduced in the literature, and we describe different strategies that facilitate the application of MLMC within these methods.
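The telescoping representation can be illustrated with a minimal, self-contained sketch under toy assumptions: the level-l "discretization" below is simply a quantization of the input on a grid of width 2^-l (a stand-in for an SDE discretization), and the estimator sums coupled level differences, spending fewer samples at the expensive fine levels.

```python
import numpy as np

rng = np.random.default_rng(1)

def payoff(x, level):
    # Level-l approximation: quantize the input on a grid of width 2**-level.
    # (Toy stand-in for a discretized dynamics; the bias vanishes as level grows.)
    h = 2.0 ** -level
    return (np.floor(x / h) * h) ** 2

L = 6
# Fewer samples at the (notionally costlier) fine levels, many at coarse levels.
N = [40_000 // (2 ** l) + 1000 for l in range(L + 1)]

# Telescoping sum: E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
# each difference estimated with *coupled* samples (same x at both levels).
est = 0.0
for l in range(L + 1):
    x = rng.standard_normal(N[l])
    if l == 0:
        est += payoff(x, 0).mean()
    else:
        est += (payoff(x, l) - payoff(x, l - 1)).mean()

print(est)  # approximates E[X^2] = 1 for X ~ N(0, 1)
```

The coupling (reusing the same x at levels l and l-1) is what makes the difference terms low-variance; the article's subject is precisely what to do when such exact coupled sampling is unavailable.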

5.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood, or several marginal likelihoods, for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset variable models must be computed. The proposed estimators use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require the specific structure or form of the MCMC sampling algorithm used to generate the sample to be known. The theoretical properties of the proposed method are examined in detail. Its applicability and usefulness are demonstrated via ordinal data probit regression models, and a real dataset involving ordinal outcomes further illustrates the proposed methodology.

6.
7.

8.
Many cases of strategic interaction between agents involve a continuous set of choices, so it is natural to model these problems as continuous-space games, with the population of agents represented by a density function defined over the continuous set of strategy choices. Simulating evolutionary dynamics on continuous strategy spaces is a challenging problem: the classic approach of discretizing the strategy space is ineffective for multidimensional strategy spaces. We present a principled approach to simulating adaptive dynamics in continuous-space games using sequential Monte Carlo methods, which use a set of weighted random samples, called particles, to represent density functions over multidimensional spaces and provide computationally efficient ways of computing their evolution. We employ resampling and smoothing steps to prevent the particle degeneracy problem associated with particle estimates. The resulting algorithm can be interpreted as an agent-based simulation with elements of natural selection, regression to the mean, and mutation. We illustrate the performance of the proposed simulation technique with two examples: a continuous version of the repeated prisoner's dilemma and the evolution of bidding functions in first-price sealed-bid auctions.
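A minimal sketch of this selection-resampling-mutation loop, under an assumed one-dimensional strategy space and a hypothetical payoff landscape (the games in the article are richer than this):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 5000
particles = rng.uniform(-2.0, 2.0, n)  # initial population of strategies

def fitness(x):
    # Hypothetical payoff landscape with a single optimum at x = 1.
    return np.exp(-(x - 1.0) ** 2)

for _ in range(30):
    # Selection: reweight particles by fitness (replicator-style update).
    w = fitness(particles)
    w /= w.sum()
    # Resampling: kills low-payoff strategies and duplicates high-payoff ones,
    # which also prevents weight degeneracy.
    particles = rng.choice(particles, size=n, p=w)
    # Mutation / smoothing: small Gaussian jitter keeps the density smooth.
    particles += 0.05 * rng.standard_normal(n)

print(particles.mean())  # population concentrates near the optimum
```

Each pass is one generation: reweighting plays the role of natural selection, resampling the role of reproduction, and the jitter the role of mutation, so the particle cloud tracks the evolving strategy density.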

9.
Estimation and prediction in high-dimensional multivariate factor stochastic volatility models is an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually done by Markov chain Monte Carlo methods (often by particle Markov chain Monte Carlo methods), which are usually slow for high-dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose a fast and accurate variational Bayes method to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets, and shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, but much faster.

10.
We describe in this paper a variance reduction method based on control variates. The technique uses the fact that, if all stochastic assets but one are replaced in the payoff function by their mean, the resulting integral can most often be evaluated in closed form. We exploit this idea by applying the univariate payoff as control variate and develop a general Monte Carlo procedure, called Mean Monte Carlo (MMC). The method is then tested on a variety of multifactor options and compared to other Monte Carlo approaches and numerical techniques. The method is easy and broad in applicability and gives good results, especially in low to medium dimensions and in high-volatility environments.
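The MMC idea can be sketched on a toy two-factor payoff (an illustrative assumption, not one of the options tested in the article): freezing one normal factor at its mean yields a univariate payoff with a closed-form expectation, which then serves as the control variate.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
K = 1.0  # hypothetical strike for a toy two-factor payoff

def Phi(x):  # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
payoff = np.maximum(x1 + x2 - K, 0.0)  # two-factor payoff to be priced

# Control variate: freeze x2 at its mean (0); the resulting univariate payoff
# has the closed-form mean E[(Z - K)^+] = -K*Phi(-K) + phi(-K) for Z ~ N(0,1).
control = np.maximum(x1 - K, 0.0)
control_mean = -K * Phi(-K) + math.exp(-K ** 2 / 2) / math.sqrt(2 * math.pi)

# Standard control-variate correction with estimated coefficient beta.
beta = np.cov(payoff, control)[0, 1] / control.var()
mmc = payoff.mean() - beta * (control - control_mean).mean()

print(payoff.mean(), mmc)
```

Both estimates target E[(X1 + X2 - K)^+] ≈ 0.1995 here; the correction term shrinks the Monte Carlo error because the univariate payoff is highly correlated with the full one.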

11.
Regressor and random-effects dependencies in multilevel models
The objectives of this paper are (1) to review methods that can be used to test for different types of random-effects and regressor dependencies, (2) to present results from Monte Carlo studies designed to investigate the performance of these methods, and (3) to discuss estimation methods that can be used when some, but not all, of the random-effects and regressor independence assumptions are violated. Because current methods are limited in various ways, we also present a list of open problems and suggest solutions for some of them. As we show, the issue of regressor random-effects independence has received some attention in the econometrics literature, but this important work has had little impact on current research practices in the social and behavioral sciences.

12.
Monte Carlo simulation is a very important method of risk analysis for mining investment. This paper introduces the ideas and concrete steps of the Monte Carlo simulation method and the common ways of generating random numbers, then describes successful applications of the method, and finally briefly analyzes its advantages and the problems that currently arise in its use.

13.
Abstract.  This survey presents the set of methods available in the literature on selection bias correction when selection is specified as a multinomial logit model. It contrasts the underlying assumptions made by the different methods and shows results from a set of Monte Carlo experiments. We find that, in many cases, the approach initiated by Dubin and McFadden (1984) as well as the semi-parametric alternative recently proposed by Dahl (2002) are to be preferred to the most commonly used Lee (1983) method. We also find that a restriction imposed in the original Dubin and McFadden paper can be waived to achieve more robust estimators. Monte Carlo experiments also show that selection bias correction based on the multinomial logit model can provide fairly good correction for the outcome equation, even when the IIA hypothesis is violated.

14.
An article by Chan et al. (2013) published in the Journal of Business and Economic Statistics introduces a new model for trend inflation. They allow the trend inflation to evolve according to a bounded random walk. In order to draw the latent states from their respective conditional posteriors, they use accept-reject Metropolis-Hastings procedures. We reproduce their results using particle Markov chain Monte Carlo (PMCMC), which approaches drawing the latent states from a different technical point of view by relying on combining Markov chain Monte Carlo and sequential Monte Carlo methods. To conclude: we are able to reproduce the results of Chan et al. (2013). Copyright © 2015 John Wiley & Sons, Ltd.

15.
In this review paper, we discuss the theoretical background of multiple imputation, describe how to build an imputation model and how to create proper imputations. We also present the rules for making repeated imputation inferences. Three widely used multiple imputation methods, the propensity score method, the predictive model method and the Markov chain Monte Carlo (MCMC) method, are presented and discussed.
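The repeated-imputation inference rules referred to above (Rubin's combining rules) are short enough to state in code; the per-imputation estimates and variances below are made-up numbers for illustration only.

```python
import numpy as np

# Hypothetical per-imputation results: point estimate and within-imputation
# variance from m = 5 completed datasets.
estimates = np.array([2.10, 2.05, 2.20, 1.95, 2.15])
within = np.array([0.040, 0.038, 0.045, 0.041, 0.039])

m = len(estimates)
q_bar = estimates.mean()        # combined point estimate
w_bar = within.mean()           # average within-imputation variance
b = estimates.var(ddof=1)       # between-imputation variance
t = w_bar + (1 + 1 / m) * b     # Rubin's total variance for q_bar

print(q_bar, t)
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations; inference then proceeds with q_bar and standard error sqrt(t) on a t reference distribution.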

16.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be ‘close’ to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models. The results indicate the possible usefulness of the neural network approach.

17.
The US and other national governments invest in research and development to spur competitiveness in their domestic manufacturing industries. However, few studies identify the research efforts that would have the largest possible return on investment, resulting in suboptimal returns. Manufacturers commonly measure production time in order to identify areas for efficiency improvement, but this is typically not applied at the national level, where efficiency issues may cross between enterprises and industries; such methods and results could be used to prioritize efficiency improvement efforts at an industry supply-chain level. This paper utilizes data on manufacturing inventory along with data on inter-industry interactions to develop a method for tracking industry-level flow time and identifying bottlenecks in US manufacturing. As a proof of concept, the method is applied to the production of three commodities: aircraft, automobiles/trucks, and computers. The robustness of the bottleneck identification is tested using Monte Carlo techniques.

18.
Nonlinear taxes create econometric difficulties when estimating labor supply functions. One estimation method that tackles these problems accounts for the complete form of the budget constraint and uses the maximum likelihood method to estimate parameters. Another method linearizes budget constraints and uses instrumental variables techniques. Using Monte Carlo simulations, I investigate the small-sample properties of these estimation methods and how they are affected by measurement errors in independent variables. No estimator is uniformly best. Hence, in actual estimation the choice of estimator should depend on the sample size and the type of measurement errors in the data. Complementing actual estimates with a Monte Carlo study of the estimator used, given the type of measurement errors that characterize the data, would often help in interpreting the estimates. This paper shows how such a study can be performed.

19.
This article develops a new portfolio selection method using Bayesian theory. The proposed method accounts for the uncertainties in estimation parameters and the model specification itself, both of which are ignored by the standard mean-variance method. The critical issue in constructing an appropriate predictive distribution for asset returns is evaluating the goodness of individual factors and models. This problem is investigated from a statistical point of view; we propose using the Bayesian predictive information criterion. Two Bayesian methods and the standard mean-variance method are compared through Monte Carlo simulations and in a real financial data set. The Bayesian methods perform very well compared to the standard mean-variance method.

20.
The Monte Carlo method of exploring the properties of econometric estimators and significance tests has yielded a considerable amount of information that has practical value in guiding choice of technique in applied research. This paper presents a bibliography of such Monte Carlo studies over the period 1948–1972. About 150 citations are listed alphabetically by author, and also under a detailed subject-matter classification scheme.
