Similar documents
20 similar documents found (search time: 0 ms)
1.
This article reviews the application of some advanced Monte Carlo techniques in the context of multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations, which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations, which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider some Markov chain Monte Carlo and sequential Monte Carlo methods, which have been introduced in the literature, and we describe different strategies that facilitate the application of MLMC within these methods.
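The telescoping construction can be sketched in a few lines for the simplest setting, where adjacent levels are coupled exactly by sharing Brownian increments. The SDE, its parameters, and the sample sizes below are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_level_estimate(level, n_samples, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Monte Carlo mean of P_l - P_{l-1} for geometric Brownian motion, where
    P_l is X_T under an Euler scheme with 2**level steps and the fine/coarse
    paths share the same Brownian increments (the coupling)."""
    n_fine = 2 ** level
    dt = T / n_fine
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
    xf = np.full(n_samples, x0)                  # fine path
    for k in range(n_fine):
        xf = xf + mu * xf * dt + sigma * xf * dw[:, k]
    if level == 0:
        return xf.mean()                         # base level: plain estimate
    xc = np.full(n_samples, x0)                  # coarse path, half the steps
    dw_coarse = dw[:, 0::2] + dw[:, 1::2]        # pairwise-summed increments
    for k in range(n_fine // 2):
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dw_coarse[:, k]
    return (xf - xc).mean()

# Telescoping sum: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}]
L = 5
mlmc_estimate = sum(coupled_level_estimate(l, n_samples=20000)
                    for l in range(L + 1))
# True value is x0 * exp(mu * T), roughly 1.051
```

Because the correction terms have small variance, far fewer samples are needed at the expensive fine levels than i.i.d. sampling from the finest discretization alone would require; the MCMC and SMC variants surveyed in the article address the case where this exact coupling is unavailable.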

2.
This paper first reviews, from the perspectives of econometrics and of dynamic capital structure adjustment, the six main estimation methods for the partial adjustment model of capital structure: pooled OLS, Fama-MacBeth, fixed effects, GMM, long-difference (LD), and two-sided censored Tobit estimation. These six methods are then tested empirically on a sample of Chinese listed companies. On this basis, Monte Carlo simulation, calibrated to the observed distribution of Chinese listed companies' capital structure, is used to identify which of the six methods estimate the speed of capital structure adjustment effectively. We find that the traditional pooled OLS and Fama-MacBeth estimators are in fact effective methods for testing the dynamic adjustment of capital structure among Chinese listed companies, yielding an estimated average adjustment speed of market-value leverage of 16.3%-20.1%.

3.
金婷  秦学志 《价值工程》2007,26(12):10-14
When Monte Carlo simulation is used to estimate operational risk in the banking industry, the time-varying nature of the parameters of the event-frequency forecasting model is usually ignored, leading to sizeable forecast bias. To address this shortcoming, a grey dynamic residual GM(1,1) model is built to estimate and forecast the frequency of loss events, and the forecasting model is further improved by comparing and selecting the starting time point and by correcting the residuals. Monte Carlo simulation of the loss severity is then combined with the frequency forecasting model to obtain the operational risk loss of a commercial bank, which serves as a basis for determining regulatory capital and for measures to reduce commercial banks' operational risk.
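A minimal sketch of the frequency-forecasting step: a plain GM(1,1) model, without the residual-correction and starting-point refinements described in the abstract, fitted to a hypothetical series of loss-event counts. All data are illustrative:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a grey GM(1,1) model to the series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])            # mean generating sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)   # grey parameters a, b
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[n:]

# Hypothetical loss-event counts per period
freq = [12, 13, 15, 16, 18]
next_freq = gm11_forecast(freq, steps=1)[0]
```

The forecast frequency would then drive the number of severity draws in the Monte Carlo simulation of loss amounts.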

4.
The Fourier flexible form possesses desirable asymptotic properties that are not shared by other flexible forms such as the translog, generalized Leontief, and generalized Box-Cox. One of them is that an elasticity of substitution can be estimated with negligible bias in sufficiently large samples regardless of what the true form actually is, save that it be smooth enough. This article reports the results of an experiment designed to determine whether or not this property obtains in samples of the sizes customarily encountered in practice. A three-input, homothetic version of the generalized Box-Cox cost function was used to generate technologies that were oriented in a two-dimensional design space according to a central composite rotatable design; the two factors of the design were the Box-Cox parameter and a measure of the dispersion of the substitution matrix. The Fourier cost function was used to estimate the substitution elasticities at each design point, and the bias at each point was estimated using the Monte Carlo method. A response surface over the entire design space was fitted to these estimates. An examination of the surface reveals that the bias is small over the entire design space. Roughly speaking, the estimates of elasticities of substitution are unbiased to three significant digits using the Fourier flexible form no matter what the true technology. Our conclusion is that the small bias property of the Fourier form does obtain in samples of reasonable size; this claim must be tempered by the usual caveats associated with inductive inference.

5.
6.
Building on a number of refinements of the Libor market model, this paper first adds a Heston stochastic volatility process to the standard market model, constructing a new Libor market model under a stochastic volatility assumption. Next, the local volatility and the parameters of the stochastic volatility process in this Libor market model are calibrated and estimated using Black's backward-induction calibration method and MCMC parameter estimation, followed by an empirical simulation. The study concludes that introducing a stochastic volatility process into the single-factor Libor market model greatly improves the explanatory power of the interest rate model.

7.
Good statistical practice dictates that summaries in Monte Carlo studies should always be accompanied by standard errors. Those standard errors are easy to provide for summaries that are sample means over the replications of the Monte Carlo output: for example, bias estimates, power estimates for tests and mean squared error estimates. But often more complex summaries are of interest: medians (often displayed in boxplots), sample variances, ratios of sample variances and non‐normality measures such as skewness and kurtosis. In principle, standard errors for most of these latter summaries may be derived from the Delta Method, but that extra step is often a barrier for standard errors to be provided. Here, we highlight the simplicity of using the jackknife and bootstrap to compute these standard errors, even when the summaries are somewhat complicated. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
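As a sketch of the idea, here is a bootstrap standard error for a median over Monte Carlo replications; the simulated "replications" below are a stand-in for real Monte Carlo output:

```python
import numpy as np

def bootstrap_se(values, stat, n_boot=2000, seed=0):
    """Bootstrap standard error of summary `stat` over Monte Carlo replications."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # Resample the replications with replacement, recompute the summary each time
    boot = np.array([stat(rng.choice(values, size=len(values)))
                     for _ in range(n_boot)])
    return boot.std(ddof=1)

rng = np.random.default_rng(1)
replications = rng.normal(0.0, 1.0, size=500)   # stand-in Monte Carlo output
se_median = bootstrap_se(replications, np.median)
# For N(0,1) output with n = 500, the true SE of the median is about 0.056
```

The same resampling loop works for any summary (a variance ratio, a skewness estimate), which is the point of the paper: no Delta Method derivation is needed.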

8.
9.
Researchers commonly use co-occurrence counts to assess the similarity of objects. This paper illustrates how traditional association measures can lead to misguided significance tests of co-occurrence in settings where the usual multinomial sampling assumptions do not hold. I propose a Monte Carlo permutation test that preserves the original distributions of the co-occurrence data. I illustrate the test on a dataset of organizational categorization, in which I investigate the relations between organizational categories (such as “Argentine restaurants” and “Steakhouses”).
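A sketch of such a permutation test on synthetic data: shuffling one membership vector preserves both marginal frequencies, so the null distribution respects the original distributions of the data rather than a multinomial sampling assumption. The category names and probabilities below are invented for illustration:

```python
import numpy as np

def cooccurrence_pvalue(a, b, n_perm=5000, seed=0):
    """One-sided Monte Carlo permutation p-value for the co-occurrence count
    of two binary membership vectors; permuting `b` keeps both marginals."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a)
    b = np.asarray(b)
    observed = int(np.sum(a & b))
    null_counts = np.empty(n_perm, dtype=int)
    for i in range(n_perm):
        null_counts[i] = np.sum(a & rng.permutation(b))
    # Add-one correction so the p-value is never exactly zero
    return (1 + np.sum(null_counts >= observed)) / (1 + n_perm)

# Hypothetical memberships, e.g. "Argentine restaurant" and "Steakhouse"
rng = np.random.default_rng(2)
arg = rng.random(200) < 0.3
steak = np.where(arg, rng.random(200) < 0.8, rng.random(200) < 0.1)
p = cooccurrence_pvalue(arg, steak)
```

With the strong association built into the synthetic data, the p-value is far below conventional significance levels.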

10.
11.
《价值工程》2013,(4):196-198
A complete approach for solving and ranking fuzzy DEA models is proposed. The basic idea is to convert the α-cut into a linear variable, so that solving the model yields not only the extreme values of the objective function in the optimistic and pessimistic cases but also the most suitable α value for each variable; Monte Carlo simulation is then introduced to rank the units. A numerical example illustrates the effectiveness of the method.

12.
The paper aims to analyse the behaviour of a battery of non-survey techniques of constructing regional I-O tables in estimating impact. For this aim, a Monte Carlo simulation, based on the generation of ‘true’ multiregional I-O tables, was carried out. By aggregating multi-regional I-O tables, national I-O tables were obtained. From the latter, indirect regional tables were derived through the application of various regionalisation methods and the relevant multipliers were compared with the ‘true’ multipliers using a set of statistics. Three aspects of the behaviour of the methods have been analysed: performances to reproduce ‘true’ multipliers, variability of simulation error and direction of bias. The results have demonstrated that the Flegg et al. Location Quotient (FLQ) and its augmented version (AFLQ) represent an effective improvement of conventional techniques based on the use of location quotients in both reproducing ‘true’ multipliers and generating more stable simulation errors. In addition, the results have confirmed the existence of a tendency of the methods to over/underestimate impact. In the cases of the FLQ and the AFLQ, this tendency depends on the value of the parameter δ.
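A simplified sketch of the FLQ regionalization step (using simple location quotients throughout, including on the diagonal, which glosses over some refinements of the published formula); the sector outputs and the national coefficient matrix are hypothetical:

```python
import numpy as np

def flq_coefficients(a_nat, x_region, x_nation, delta=0.3):
    """Regionalize national I-O coefficients with a simplified FLQ:
    FLQ_ij = (LQ_i / LQ_j) * [log2(1 + E_R / E_N)]**delta,
    regional coefficient = a_nat_ij * min(1, FLQ_ij)."""
    x_region = np.asarray(x_region, dtype=float)
    x_nation = np.asarray(x_nation, dtype=float)
    lq = (x_region / x_region.sum()) / (x_nation / x_nation.sum())
    cilq = lq[:, None] / lq[None, :]          # cross-industry quotient LQ_i/LQ_j
    lam = np.log2(1 + x_region.sum() / x_nation.sum()) ** delta
    flq = np.minimum(1.0, cilq * lam)         # never scale a coefficient up
    return a_nat * flq

# Hypothetical 3-sector example
a_nat = np.array([[0.10, 0.05, 0.02],
                  [0.20, 0.15, 0.10],
                  [0.05, 0.10, 0.20]])
x_region = np.array([50.0, 30.0, 20.0])      # regional sector outputs
x_nation = np.array([300.0, 400.0, 300.0])   # national sector outputs
a_reg = flq_coefficients(a_nat, x_region, x_nation)
```

The paper's finding that performance depends on δ corresponds to the `delta` exponent here, which governs how strongly regional smallness scales down the national coefficients.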

13.
Markov chain Monte Carlo methods are frequently used in the analyses of genetic data on pedigrees for the estimation of probabilities and likelihoods which cannot be calculated by existing exact methods. In the case of discrete data, the underlying Markov chain may be reducible and care must be taken to ensure that reliable estimates are obtained. Potential reducibility thus has implications for the analysis of the mixed inheritance model, for example, where genetic variation is assumed to be due to one single locus of large effect and many loci each with a small effect. Similarly, reducibility arises in the detection of quantitative trait loci from incomplete discrete marker data. This paper aims to describe the estimation problem in terms of simple discrete genetic models and the single-site Gibbs sampler. Reducibility of the Gibbs sampler is discussed and some current methods for circumventing the problem outlined.

14.
"This paper presents an approach of constructing confidence intervals by means of Monte Carlo simulation. This technique attempts to incorporate the uncertainty involved in projecting human populations by letting the fertility and net immigration rates vary as a random variable with a specific distribution. Since fertility and migration are by far the most volatile, and therefore, the most critical components to population forecasting, this technique has the potential of accounting for this uncertainty, if the subjective distributions are specified with enough care. Considering the results of the model for the U.S. in 2082, for example, it is shown that the population will number between 255 million and 355 million with a probability of 90 percent."  相似文献   

15.
This paper presents an approach of constructing confidence intervals by means of Monte Carlo simulation. This technique attempts to incorporate the uncertainty involved in projecting human populations by letting the fertility and net immigration rates vary as a random variable with a specific distribution. Since fertility and migration are by far the most volatile, and therefore, the most critical components to population forecasting, this technique has the potential of accounting for this uncertainty, if the subjective distributions are specified with enough care. Considering the results of the model for the U.S. in 2082, for example, it is shown that the population will number between 255 million and 355 million with a probability of 90 percent.
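The approach can be sketched as follows, with purely illustrative starting population, horizon, and subjective distributions (these are not the paper's inputs, and a serious projection would be cohort-based rather than a single aggregate rate):

```python
import numpy as np

rng = np.random.default_rng(3)

def project_population(p0, years, growth_rates, migration):
    """Project population forward; each trajectory draws its own random
    natural growth rate and annual net migration level."""
    pop = np.full(len(growth_rates), p0, dtype=float)
    for _ in range(years):
        pop = pop * (1 + growth_rates) + migration
    return pop

n_sims = 10000
# Hypothetical subjective distributions for the uncertain components
growth = rng.normal(0.001, 0.0025, n_sims)    # natural growth rate per year
net_mig = rng.normal(0.5e6, 0.2e6, n_sims)    # net immigrants per year

final = project_population(232e6, 100, growth, net_mig)
lo, hi = np.percentile(final, [5, 95])        # a 90 percent interval
```

The 5th and 95th percentiles of the simulated trajectories play the role of the 255-355 million interval quoted in the abstract.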

16.
赵东  俞泓莹 《价值工程》2022,41(9):160-162
Demand in China for projects involving nuclear facility decommissioning and radioactive waste management is growing. To determine whether the basic contingency rates prescribed for each design stage in current regulations meet actual engineering needs, this paper computes the basic contingency cost of a low-level solid radioactive waste disposal site project by Monte Carlo simulation using the CRYSTAL BALL software. Two schemes are simulated, with a Beta-PERT distribution and a normal distribution respectively, yielding basic contingency costs and rates at different confidence intervals, which are then verified. The resulting basic contingency rates broadly match the actual conditions of current projects, demonstrating that Monte Carlo simulation is a feasible way to estimate the basic contingency cost at the feasibility study stage of low-level solid waste disposal site projects and has good prospects for application and wider adoption.
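Without CRYSTAL BALL, the Beta-PERT variant of the calculation can be sketched directly; the three cost items and the 85% confidence level below are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def pert_samples(a, m, b, size):
    """Sample a Beta-PERT(a, m, b) distribution via a scaled Beta, where a, m, b
    are the minimum, most likely, and maximum cost of an item."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

# Hypothetical cost items (million yuan): (min, most likely, max)
items = [(8, 10, 15), (4, 5, 9), (18, 20, 28)]
n = 50000
total = sum(pert_samples(a, m, b, n) for a, m, b in items)

base_cost = sum(m for _, m, _ in items)       # deterministic point estimate: 35
p85 = np.percentile(total, 85)                # cost covered with 85% confidence
contingency_rate = (p85 - base_cost) / base_cost
```

The contingency rate is the extra margin, over the deterministic estimate, needed to reach the chosen confidence level; the paper compares such rates against the fixed rates in current regulations.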

17.
Burnout is a consequence of unobservable predictive variables. This paper describes a methodology for estimating mortgage prepayment models which corrects for burnout. The paper generalizes the approach of Deng, Quigley, and Van Order (Econometrica, 68, 275–307, 1998) and Stanton (Rev. Finan. Stud.8, 677–708, 1995) in modeling the impact of unobservable variables as a probability distribution. The estimator is applied to a sample of loan histories and the results compared to a conventional logit analysis of the data. Predictions and simulations from both models are compared to illustrate the properties of the new estimator.

18.
Monte Carlo Evidence on Cointegration and Causation
The small sample performance of Granger causality tests under different model dimensions, degree of cointegration, direction of causality, and system stability are presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.

19.
Count data models have found a wide variety of applications not only in applied economics and finance but also in diverse fields ranging from biometrics to political science. Poisson and negative binomial (NB) models have been extensively used in count data analysis. Two particular NB model specifications, NBI and NBII, have been especially popular. However, these models impose arbitrary restrictions on the relation between the conditional mean and variance of the dependent variable, limiting their generality. This study proposes tests for selection among the Poisson and NB models by formally demonstrating that the log likelihood function (LLF) of a general NB model parametrically nests the LLF of the Poisson, NBI and NBII as testable special cases. It also proposes estimation of the general NB model since it allows greater flexibility in the relationship between the mean and variance of the dependent variable than NBI and NBII. The empirical application, which uses micro-level data on recreational boating, provides support for the paper's main theme. Tests clearly reject not only the Poisson, but also NBI and NBII, in favour of a different NB model, underscoring the importance of the general model specification.
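The nesting claim can be checked numerically. The sketch below uses one common "general NB" parameterization, NB-P with Var(y) = mu + alpha * mu**p (an assumption; the paper's exact specification may differ), and verifies that the Poisson log likelihood is recovered as alpha approaches 0, here with p = 2 (the NBII case):

```python
import math

def nbp_loglik(y, mu, alpha, p):
    """Log likelihood of an NB-P model with Var(y) = mu + alpha * mu**p.
    p = 1 gives NBI, p = 2 gives NBII, and alpha -> 0 recovers the Poisson."""
    theta = mu ** (2.0 - p) / alpha           # NB dispersion ("size") parameter
    ll = 0.0
    for yi in y:
        ll += (math.lgamma(yi + theta) - math.lgamma(theta)
               - math.lgamma(yi + 1)
               + theta * math.log(theta / (theta + mu))
               + yi * math.log(mu / (theta + mu)))
    return ll

def poisson_loglik(y, mu):
    return sum(yi * math.log(mu) - mu - math.lgamma(yi + 1) for yi in y)

y = [2, 4, 1, 3, 0, 5, 2, 3]                  # hypothetical count data
ll_pois = poisson_loglik(y, mu=2.5)
ll_nb = nbp_loglik(y, mu=2.5, alpha=1e-9, p=2.0)
# ll_nb is numerically indistinguishable from ll_pois
```

In the paper's testing framework, the restrictions alpha = 0 or p in {1, 2} are the testable special cases of the general model.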

20.
The generalized likelihood ratio (GLR) method is a recently introduced gradient estimation method for handling discontinuities in a wide range of sample performances. We put the GLR methods from previous work into a single framework, simplify regularity conditions to justify the unbiasedness of GLR, and relax some of those conditions that are difficult to verify in practice. Moreover, we combine GLR with conditional Monte Carlo methods and randomized quasi-Monte Carlo methods to reduce the variance. Numerical experiments show that variance reduction could be significant in various applications.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号