Similar Documents
1.
This paper presents an approach to constructing confidence intervals by means of Monte Carlo simulation. The technique attempts to incorporate the uncertainty involved in projecting human populations by letting the fertility and net immigration rates vary as random variables with specified distributions. Since fertility and migration are by far the most volatile, and therefore the most critical, components of population forecasting, the technique can account for this uncertainty if the subjective distributions are specified with enough care. Considering the results of the model for the U.S. in 2082, for example, it is shown that the population will number between 255 million and 355 million with a probability of 90 percent.
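As a rough illustration of the idea (not the paper's actual cohort model), the sketch below draws the annual growth contributions of fertility and net immigration from assumed subjective distributions, holds each draw fixed along its simulated path, and reads a 90 percent interval off the simulated distribution; the base population and all distribution parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical base-year population and subjective distributions for
# the annual growth contributions of fertility and net immigration.
base_pop = 230e6
years, n_sims = 100, 100_000

fert = rng.normal(0.002, 0.002, n_sims)   # fertility-driven growth rate
migr = rng.normal(0.001, 0.001, n_sims)   # net-immigration-driven growth rate
finals = base_pop * (1.0 + fert + migr) ** years   # one path per draw

lo, hi = np.percentile(finals, [5, 95])   # central 90% of simulated outcomes
print(f"90% interval: {lo / 1e6:.0f}M to {hi / 1e6:.0f}M")
```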

2.
The paper considers the problem of discriminating between the autoregressive forms of a Koyck distributed lag model and a regression model with autocorrelated disturbances. Several interpretations of an ad hoc rule of thumb suggested by Griliches are compared with Bayesian posterior odds analysis in a Monte Carlo experiment. The Bayesian analysis is generally superior to the rules of thumb, which exhibit large probabilities of Type I error and low power. The rules of thumb excessively favour the distributed lag model, while the Bayesian method is free from such bias. All methods improve with increased sample size.

3.
The generalized likelihood ratio (GLR) method is a recently introduced gradient estimation method for handling discontinuities in a wide range of sample performances. We put the GLR methods from previous work into a single framework, simplify the regularity conditions that justify the unbiasedness of GLR, and relax some of those conditions that are difficult to verify in practice. Moreover, we combine GLR with conditional Monte Carlo methods and randomized quasi-Monte Carlo methods to reduce the variance. Numerical experiments show that the variance reduction can be significant in various applications.

4.
High-dimensional data pose various problems for the application of data envelopment analysis (DEA). In this study, we focus on a "big data" problem related to the considerably large dimensions of the input-output data. The four most widely used approaches to dimension reduction in DEA are compared via Monte Carlo simulation: principal component analysis (PCA-DEA), which is based on the idea of aggregating inputs and outputs, and efficiency contribution measurement (ECM), the average efficiency measure (AEC), and regression-based detection (RB), which are based on the idea of variable selection. We compare the performance of these methods under different scenarios, using a new benchmark for the simulation test. In addition, we discuss the effect of initial variable selection in RB for the first time. Based on the results, we offer more reliable guidelines on how to choose an appropriate method.

5.
The statistical power and Type I error rates of several homogeneity tests usually applied in meta-analysis are compared using Monte Carlo simulation: (1) the chi-square test applied to standardized mean differences, correlation coefficients, and Fisher's r-to-Z transformations, and (2) the S&H-75 (and 90 percent) procedure applied to standardized mean differences and correlation coefficients. The chi-square tests kept Type I error rates at the nominal significance level, while the S&H procedures showed higher rates and, consequently, greater statistical power. In all conditions the statistical power was very low, particularly when the meta-analysis included few studies, small sample sizes, and small differences between the parametric effect sizes. Finally, criteria for selecting homogeneity tests are discussed.
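The chi-square test referred to here is the usual homogeneity Q statistic: with inverse-variance weights w_i, Q = sum_i w_i (d_i - d_bar)^2 is compared with a chi-square distribution on k - 1 degrees of freedom. A minimal sketch with made-up effect sizes:

```python
import numpy as np
from scipy.stats import chi2

def q_homogeneity(effects, variances, alpha=0.05):
    """Chi-square homogeneity (Q) test for k independent effect sizes."""
    d = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    d_bar = np.sum(w * d) / np.sum(w)              # weighted mean effect
    q = np.sum(w * (d - d_bar) ** 2)               # ~ chi2(k-1) under homogeneity
    p = chi2.sf(q, df=len(d) - 1)
    return q, p, p < alpha

# five hypothetical standardized mean differences and their variances
print(q_homogeneity([0.30, 0.45, 0.10, 0.52, 0.25],
                    [0.04, 0.05, 0.03, 0.06, 0.04]))
```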

6.
One- and two-factor stochastic volatility models are assessed over three sets of stock returns data: S&P 500, DJIA, and Nasdaq. Estimation is done by simulated maximum likelihood using techniques that are computationally efficient, robust, straightforward to implement, and easy to adapt to different models. The models are evaluated using standard, easily interpretable time-series tools. The results are broadly similar across the three data sets. The tests provide no evidence that even the simple single-factor models fail to capture the dynamics of volatility adequately; the problem is getting the shape of the conditional returns distribution right. None of the models comes close to matching the tails of this distribution, and including a second factor provides only a relatively small improvement over the single-factor models. Fitting this aspect of the data is important for option pricing and risk management.

7.
Andrieu et al. (2010) prove that Markov chain Monte Carlo samplers still converge to the correct posterior distribution of the model parameters when the likelihood estimated by the particle filter (with a finite number of particles) is used instead of the exact likelihood. A critical issue for performance is the choice of the number of particles. We add the following contributions. First, we provide analytically derived, practical guidelines on the optimal number of particles to use. Second, we show that a fully adapted auxiliary particle filter is unbiased and can drastically decrease computing time compared to a standard particle filter. Third, we introduce a new estimator of the likelihood based on the output of the auxiliary particle filter and use the framework of Del Moral (2004) to provide a direct proof of the unbiasedness of the estimator. Fourth, we show that the results in the article apply more generally to Markov chain Monte Carlo sampling schemes with the likelihood estimated in an unbiased manner.
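The pseudo-marginal idea underneath is compact enough to sketch: run Metropolis-Hastings, but plug an unbiased particle-filter estimate of the likelihood into the acceptance ratio and reuse the stored estimate for the current state. The toy below uses a bootstrap filter on an AR(1)-plus-noise model rather than the fully adapted auxiliary filter the paper analyses; the model, particle count, proposal scale, and flat prior are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model: x_t = phi*x_{t-1} + N(0,1), y_t = x_t + N(0,1)
T, phi_true = 100, 0.8
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

def pf_loglik(phi, y, n_part=100):
    """Bootstrap particle filter estimate of log p(y | phi), up to a constant."""
    parts = rng.normal(size=n_part)                      # rough initialisation
    ll = 0.0
    for obs in y:
        parts = phi * parts + rng.normal(size=n_part)    # propagate
        logw = -0.5 * (obs - parts) ** 2                 # N(0,1) obs density
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                       # likelihood increment
        parts = parts[rng.choice(n_part, n_part, p=w / w.sum())]  # resample
    return ll

# Pseudo-marginal random-walk Metropolis over phi, flat prior on (-1, 1)
phi, ll = 0.5, pf_loglik(0.5, y)
draws = []
for _ in range(1000):
    prop = phi + 0.05 * rng.normal()
    if abs(prop) < 1:
        ll_prop = pf_loglik(prop, y)      # fresh noisy estimate for the proposal
        if np.log(rng.uniform()) < ll_prop - ll:
            phi, ll = prop, ll_prop       # keep the stored estimate otherwise
    draws.append(phi)
print("posterior mean of phi ~", np.mean(draws[200:]))
```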

8.
Journal of Econometrics, 1986, 31(2): 219-231
Szroeter's asymptotically normal test outperforms the Goldfeld-Quandt test, the Breusch-Pagan Lagrange multiplier test, and BAMSET when it is possible to order the observations according to increasing variance. With no prior information on variance ordering, BAMSET is best. Some observations concerning the degree of heteroscedasticity and model specification are made.
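Of the tests compared, the Goldfeld-Quandt test is the easiest to sketch: with the observations ordered by the variable suspected of driving the variance, drop a central block, fit the regression separately on the two remaining halves, and compare residual variances with an F test. A minimal sketch on toy data (the 20 percent drop fraction is an illustrative choice):

```python
import numpy as np
from scipy.stats import f as f_dist

def goldfeld_quandt(X, y, drop_frac=0.2):
    """F test of equal error variance in the first vs. last block of
    observations; assumes rows are already ordered by suspected variance."""
    n = len(y)
    k = (n - int(n * drop_frac)) // 2        # size of each outer block
    def rss(Xs, ys):
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        e = ys - Xs @ beta
        return e @ e
    df = k - X.shape[1]
    F = (rss(X[-k:], y[-k:]) / df) / (rss(X[:k], y[:k]) / df)
    return F, f_dist.sf(F, df, df)

# toy data whose error standard deviation grows with x
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 100))
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.2 * x)   # heteroscedastic errors
X = np.column_stack([np.ones_like(x), x])
print(goldfeld_quandt(X, y))
```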

9.
Parameter uncertainty has fuelled criticism of the robustness of results from computable general equilibrium models, which has led to the development of alternative sensitivity analysis approaches. Researchers have used Monte Carlo analysis for systematic sensitivity analysis because of its flexibility, but Monte Carlo analysis may yield biased simulation results. Gaussian quadratures have also been widely applied, although they can be difficult to apply in practice. This paper applies an alternative approach to systematic sensitivity analysis, Monte Carlo filtering, and examines how its results compare with both the Monte Carlo and Gaussian quadrature approaches, via an application to rural development policies in Aberdeenshire, Scotland. We find that Monte Carlo filtering outperforms the conventional Monte Carlo approach and is a viable alternative when a Gaussian quadrature approach cannot be applied or is too complex to implement.
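Monte Carlo filtering itself is simple to sketch: sample the uncertain parameters, run the model once per draw, split the runs into "behavioural" and "non-behavioural" by a criterion on the output, and test which parameters separate the two groups (typically with a two-sample Kolmogorov-Smirnov statistic). The model, parameters, and acceptance criterion below are toy stand-ins, not the paper's CGE model.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

n = 5000
elasticity = rng.uniform(0.1, 2.0, n)    # hypothetical uncertain parameters
subsidy = rng.uniform(0.0, 1.0, n)

# stand-in for one model run per parameter draw
output = elasticity * (1.0 + subsidy) + rng.normal(0.0, 0.1, n)

behavioural = output > 1.5               # acceptance criterion on the output
for name, param in [("elasticity", elasticity), ("subsidy", subsidy)]:
    stat, p = ks_2samp(param[behavioural], param[~behavioural])
    print(f"{name}: KS = {stat:.3f}, p = {p:.3g}")   # small p => influential input
```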

10.
Computationally efficient methods for Bayesian analysis of seemingly unrelated regression (SUR) models are described and applied, involving a direct Monte Carlo (DMC) approach to calculating Bayesian estimation and prediction results under diffuse or informative priors. The DMC approach is employed to compute Bayesian marginal posterior densities, moments, intervals, and other quantities, using data simulated from known models and data from an empirical example involving firms’ sales. The results obtained by the DMC approach are compared to those yielded by a Markov chain Monte Carlo (MCMC) approach. It is concluded from these comparisons that the DMC approach is worthwhile and applicable to many SUR and other problems.

11.
张利容. 《价值工程》, 2014, (34): 40-42
Transmission line projects are heavily affected by external factors, so the quantities in the tender bill often differ from the as-built quantities. This invites bidders to submit unbalanced bids, which can push the settlement beyond the budget estimate and increases the owner's management risk. Building on an in-depth analysis of the risk variables of transmission line projects, a risk assessment model based on Monte Carlo simulation is constructed; a worked example gives an intuitive description of the risk probabilities of the tender prices, and constructive suggestions are offered.
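In outline, such a model can be as small as the following hypothetical sketch: simulate as-built quantities scattered around the tender quantities, price them at the bid unit rates, and estimate the probability that the settlement exceeds the approved budget estimate. All prices, quantities, and dispersions are made up.

```python
import numpy as np

rng = np.random.default_rng(9)

# hypothetical bill of quantities for a transmission line project
unit_price = np.array([120.0, 45.0, 300.0])    # bid unit prices per item
tender_qty = np.array([1000.0, 5000.0, 200.0])
budget_est = 1.05 * (unit_price * tender_qty).sum()   # approved estimate

# as-built quantities drift around the tender quantities, item by item
n_sims = 100_000
qty = tender_qty * rng.normal(1.0, [0.10, 0.05, 0.20], (n_sims, 3))
settlement = qty @ unit_price

print("P(settlement exceeds budget estimate) =",
      (settlement > budget_est).mean())
```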

12.
Based on reliability models of the graphite carbon sleeves in zones R and H of a continuous annealing line, and taking the minimization of overall replacement and maintenance cost as the objective, this paper combines full-factorial experimental design with Monte Carlo simulation to build an optimization simulation model for replacing multiple graphite sleeves in a multi-component series system. A program written in C# simulates the sleeve replacement strategies for zones R and H of the annealing furnace; the results show that the new replacement strategy significantly reduces the number of furnace shutdowns and replacement...

13.
Monte Carlo analysis is used to compare the size and power of eight jump-detection tests from the literature, with particular attention to how jump type, market microstructure noise, zero intraday returns, and intraday periodic volatility patterns affect each test. Furthermore, because applying the nonparametric test statistics directly overstates the number of detected jumps due to multiple testing, FDR threshold theory is extended to all eight jump tests. Applying FDR thresholds to the false-detection problem in price-jump testing shows that the method reduces false detections to some extent, keeping the false detection rate within the FDR bound.
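The FDR thresholding step typically amounts to the Benjamini-Hochberg step-up procedure applied to the per-period p-values of a jump statistic; the sketch below uses synthetic p-values rather than any of the eight test statistics.

```python
import numpy as np

def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up rule: controls the false discovery
    rate at level q across m simultaneous tests."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m   # p_(i) <= i*q/m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.where(below)[0].max()                  # largest such i
        reject[order[:k + 1]] = True
    return reject

# synthetic p-values: 950 no-jump periods, 50 genuine jumps
rng = np.random.default_rng(3)
p = np.concatenate([rng.uniform(size=950), rng.uniform(0, 1e-3, 50)])
print("naive 5% detections:", (p < 0.05).sum(),
      "| BH detections:", bh_reject(p).sum())
```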

14.
An initial exploration of teaching the Monte Carlo method to undergraduate nuclear science majors
向东, 李小华, 王振华, 宋碧英. 《价值工程》, 2010, 29(29): 182-183
The Monte Carlo method is a numerical technique grounded in probability and statistics and implemented through computer programming; it is widely applied in nuclear science and technology, among other fields. As a first attempt, a Monte Carlo methods course has recently been offered to undergraduate nuclear science majors, weaving probabilistic and statistical thinking together with programming practice; the teaching has achieved some initial success.
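A typical first exercise in such a course, combining exactly these two ingredients, is estimating π by sampling points uniformly in the unit square and counting those falling inside the quarter circle:

```python
import random

def estimate_pi(n=1_000_000):
    """Monte Carlo estimate of pi; the error shrinks like 1/sqrt(n)."""
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

print(estimate_pi())   # ~3.14
```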

15.
The analysis of aggregate economic phenomena by VARs, as suggested by Sims, often results in a small sample relative to the number of estimated parameters. Since the model is identified by a dimensionality criterion, the small-sample properties of the available criteria are important. This paper presents a Monte Carlo study of the small-sample properties of six criteria. It is found that no criterion performs well and that underfitting of models may be quite common.
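A miniature version of such an experiment: simulate short samples from a known AR(2), pick the lag order with an information criterion, and count how often the order is underfitted. AIC stands in here for the six criteria studied in the paper; the sample size and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def aic_order(y, max_p=4):
    """Choose an AR order by AIC from OLS fits."""
    n, best_p, best_aic = len(y), 0, np.inf
    for p in range(1, max_p + 1):
        X = np.column_stack([y[p - i - 1:n - i - 1] for i in range(p)])
        yy = y[p:]
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        rss = np.sum((yy - X @ beta) ** 2)
        aic = len(yy) * np.log(rss / len(yy)) + 2 * p
        if aic < best_aic:
            best_p, best_aic = p, aic
    return best_p

underfit = 0
for _ in range(500):
    y = np.zeros(60)                      # deliberately small sample
    for t in range(2, 60):
        y[t] = 0.5 * y[t - 1] - 0.4 * y[t - 2] + rng.normal()
    underfit += aic_order(y) < 2          # true order is 2
print("share of runs that underfit:", underfit / 500)
```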

16.
We present a new non-nested approach for computing additive upper bounds for callable derivatives using Monte Carlo simulation. It relies on regression of Greeks computed using adjoint methods. We also show that paths can be terminated early once points of optimal exercise have been reached. A natural control variate for the multiplicative upper bound is introduced, which renders it competitive with the additive one. In addition, a new bi-iterative family of upper bounds is introduced, taking a stopping time, an upper bound, and a martingale as inputs.

17.
The paper analyses the behaviour of a battery of non-survey techniques for constructing regional input-output (I-O) tables when estimating impact. To this end, a Monte Carlo simulation based on the generation of 'true' multiregional I-O tables was carried out. By aggregating the multiregional I-O tables, national I-O tables were obtained; from the latter, indirect regional tables were derived through various regionalisation methods, and the resulting multipliers were compared with the 'true' multipliers using a set of statistics. Three aspects of the behaviour of the methods were analysed: performance in reproducing the 'true' multipliers, variability of the simulation error, and direction of bias. The results demonstrate that the Flegg et al. location quotient (FLQ) and its augmented version (AFLQ) are an effective improvement over conventional location-quotient techniques, both in reproducing 'true' multipliers and in generating more stable simulation errors. In addition, the results confirm a tendency of the methods to over- or underestimate impact; for the FLQ and the AFLQ, this tendency depends on the value of the parameter δ.
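For reference, the FLQ adjustment as usually stated in this literature scales the cross-industry location quotient CILQ_ij by λ* = [log2(1 + TRE/TNE)]^δ, where TRE and TNE are total regional and national employment, and a coefficient is only ever scaled down (FLQ values above 1 are capped). A sketch with made-up employment figures and an illustrative δ:

```python
import numpy as np

nat_emp = np.array([500.0, 300.0, 200.0])   # national employment by sector
reg_emp = np.array([40.0, 35.0, 5.0])       # regional employment by sector
delta = 0.3                                 # illustrative free parameter

slq = (reg_emp / reg_emp.sum()) / (nat_emp / nat_emp.sum())
cilq = slq[:, None] / slq[None, :]          # CILQ_ij = SLQ_i / SLQ_j
np.fill_diagonal(cilq, slq)                 # convention: CILQ_ii = SLQ_i
lam = np.log2(1.0 + reg_emp.sum() / nat_emp.sum()) ** delta
flq = np.minimum(cilq * lam, 1.0)           # never scale a coefficient up

A_national = np.full((3, 3), 0.2)           # toy national technical coefficients
A_regional = A_national * flq               # regionalised coefficients
print(A_regional.round(3))
```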

18.
James E. Bruno. 《Socio》, 1970, 4(4): 415-428
Providing substitute teachers in large school districts usually requires both an extensive organizational structure, to match substitute teacher demand with available supply, and a large operating budget. In addition, the temporary nature of the substitute teacher position tends to preclude effective integration of substitutes into regular school district in-service programs. Replacing present substitute teacher procurement practices with a permanent pool of substitute teachers has been suggested as one possible way to improve substitute teacher quality and morale while reducing school district cost. This study demonstrates how Monte Carlo techniques might be used to determine the optimal size of a substitute teacher pool in a given district, where optimal size is defined as (1) the size required to minimize cost to the district, (2) the size required to maintain present substitute teacher service levels in the district, and (3) the size needed to break even financially with the current substitute teacher program. Pool sizes both uniform and varying by day of the school week were calculated for a school district using this technique. The results strongly suggest that permanent substitute teacher pools might not only be better integrated into regular school district programs but might also yield substantial cost savings.
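A stripped-down version of the cost-minimizing calculation might look as follows: daily absences are drawn as Poisson demand, the permanent pool is paid every day regardless, and any overflow is covered by more expensive day hires; scanning pool sizes then traces out the cost curve. All wage and demand figures are hypothetical, not Bruno's data.

```python
import numpy as np

rng = np.random.default_rng(5)

mean_absences = 30                          # hypothetical daily demand
pool_wage, day_hire_wage = 90.0, 120.0      # pool paid daily; overflow costs more
days = 180 * 100                            # many simulated school days

absences = rng.poisson(mean_absences, days)
for s in range(20, 45, 5):
    overflow = np.maximum(absences - s, 0)  # demand the pool cannot cover
    daily_cost = s * pool_wage + overflow.mean() * day_hire_wage
    print(f"pool size {s}: mean daily cost {daily_cost:,.0f}")
```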

19.
In this paper we use a Monte Carlo study to investigate the finite-sample properties of the Bayesian estimator obtained by the Gibbs sampler and of its classical counterpart (the MLE) for a stochastic frontier model. Our Monte Carlo results show that the MSE performance of the Gibbs sampling estimates is substantially better than that of the MLE.
