Similar Documents
 20 similar documents found.
1.
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. It presents two alternative approaches that can be implemented straightforwardly with Gibbs sampling and that handle model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. A simulation study shows that the variable selection approaches tend to outperform existing Bayesian model averaging techniques in both in-sample predictive performance and computational efficiency. The alternative approaches are compared in an empirical application using data on economic growth for European NUTS-2 regions.
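A minimal sketch of the kind of Gibbs-based variable selection this abstract refers to — here George–McCulloch-style stochastic search variable selection (SSVS) for an ordinary linear regression. Extending it to a spatial autoregressive model would add a draw for the spatial lag parameter; all priors and hyperparameters below are illustrative assumptions, not the authors' specification.

```python
# Stochastic search variable selection (SSVS) via Gibbs sampling -- a stylized
# linear-regression sketch; the SAR extension needs an extra step for rho.
import numpy as np

rng = np.random.default_rng(0)

def ssvs_gibbs(y, X, n_iter=2000, spike=0.01, slab=10.0, p_incl=0.5, a0=1.0, b0=1.0):
    n, k = X.shape
    beta = np.zeros(k)
    gamma = np.ones(k, dtype=int)          # inclusion indicators
    sigma2 = 1.0
    incl_draws = np.zeros((n_iter, k))
    for it in range(n_iter):
        # 1) beta | gamma, sigma2, y  ~  N(m, V)
        prior_var = np.where(gamma == 1, slab, spike)
        V = np.linalg.inv(X.T @ X / sigma2 + np.diag(1.0 / prior_var))
        m = V @ (X.T @ y) / sigma2
        beta = rng.multivariate_normal(m, V)
        # 2) gamma_j | beta_j  ~  Bernoulli (slab vs spike density ratio)
        dens_slab = np.exp(-0.5 * beta**2 / slab) / np.sqrt(slab)
        dens_spike = np.exp(-0.5 * beta**2 / spike) / np.sqrt(spike)
        prob = p_incl * dens_slab / (p_incl * dens_slab + (1 - p_incl) * dens_spike)
        gamma = rng.binomial(1, prob)
        # 3) sigma2 | beta, y  ~  Inverse-Gamma
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
        incl_draws[it] = gamma
    return incl_draws.mean(axis=0)          # posterior inclusion probabilities

# toy data: only the first two of five regressors matter
X = rng.normal(size=(200, 5))
y = X[:, 0] * 1.5 - X[:, 1] + rng.normal(size=200)
print(ssvs_gibbs(y, X))
```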

2.
To relax the restrictions in current random-coefficient dynamic panel models — fixed initial values for the endogenous variables, stationary individual autoregressive coefficients, and the absence of structural breaks — this paper is the first to use a hierarchical Bayesian approach to detect and estimate a random-coefficient dynamic panel model with unknown structural breaks. The initial values are allowed to be correlated with the individual effects, the autoregressive coefficients follow a logit-normal distribution to guarantee stationarity, and posterior density estimates of the unknown structural breaks and random coefficients are obtained. An empirical analysis of monthly total export data for five Chinese provinces and municipalities from 1995 to 2012 detects four structural breaks. Comparing the periods before and after the breaks reveals three main features of total exports: they grow steadily, but the gaps between provinces and municipalities gradually widen; major external demand shocks have a significant impact on exports; and the structural breaks in total exports show a clear seasonal pattern.
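The stationarity device mentioned in the abstract can be illustrated in a few lines: drawing each unit's autoregressive coefficient on the logit scale and mapping it back into (0, 1) guarantees a stationary AR(1) process for every unit. The hyperparameter values below are made up for illustration.

```python
# Unit-specific AR(1) coefficients drawn from a logit-normal distribution:
# phi_i = logistic(eta_i) lies in (0, 1), so each unit's process is stationary.
import numpy as np

rng = np.random.default_rng(1)
mu, tau = 0.5, 0.8                      # hypothetical hierarchical hyperparameters
eta = rng.normal(mu, tau, size=5)       # latent coefficients on the logit scale
phi = 1.0 / (1.0 + np.exp(-eta))        # logit-normal draws, all inside (0, 1)

# simulate one short panel series per unit with its own phi_i
T = 12
y = np.zeros((5, T))
for i in range(5):
    for t in range(1, T):
        y[i, t] = phi[i] * y[i, t - 1] + rng.normal()
print(np.round(phi, 3))
```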

3.
In this paper, we derive restrictions for Granger noncausality in MS‐VAR models and show under what conditions a variable does not affect the forecast of the hidden Markov process. To assess the noncausality hypotheses, we apply Bayesian inference. The computational tools include a novel block Metropolis–Hastings sampling algorithm for the estimation of the underlying models. We analyze a system of monthly US data on money and income. The results of testing in MS‐VARs contradict those obtained with linear VARs: the money aggregate M1 helps in forecasting industrial production and in predicting the next period's state.

4.
Identification in most sample selection models depends on the independence of the regressors and the error terms conditional on the selection probability. All quantile and mean functions are parallel in these models, so quantile estimators cannot reveal any heterogeneity, which is ruled out by assumption. Quantile estimators are nevertheless useful for testing the conditional independence assumption because they are consistent under the null hypothesis. We propose tests of the Kolmogorov–Smirnov type based on the conditional quantile regression process. Monte Carlo simulations show that their size is satisfactory and their power sufficient to detect deviations under plausible data‐generating processes. We apply our procedures to female wage data from the 2011 Current Population Survey and show that homogeneity is clearly rejected.
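A stylized illustration of the implication the test exploits: under conditional independence the conditional quantile functions are parallel, so slope coefficients should not vary across quantiles. The sup-type statistic below is a simplification of the authors' procedure, which works with the full conditional quantile regression process and incorporates the selection correction; treat it only as a sketch.

```python
# Stylized check of "parallel quantile curves": fit quantile regressions on a
# grid of quantiles and take the largest deviation of the slope from its mean.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
# heteroscedastic DGP: slopes genuinely differ across quantiles
y = 1.0 + 0.5 * x + (1.0 + 0.8 * x**2) ** 0.5 * rng.normal(size=n)

X = sm.add_constant(x)
taus = np.arange(0.1, 0.91, 0.1)
slopes = np.array([QuantReg(y, X).fit(q=t).params[1] for t in taus])

ks_like = np.max(np.abs(slopes - slopes.mean()))   # sup-type discrepancy
print(np.round(slopes, 3), round(ks_like, 3))
```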

5.
王芳 《审计月刊》2005,(10):31-31
When selecting samples, auditors should ensure that every item in the audit population has a chance of being selected. There are four main sampling methods:

6.
This paper argues that the mean-variance efficiency of a portfolio P can be assessed through the posterior distribution of an efficiency index ρ. Applying Bayes' rule, we obtain the posterior distribution of ρ using the Shanghai Composite Index as the sample. The results show that, when the portfolio set contains a risk-free asset and the prior belief takes the form of a standard non-informative (standard diffuse) prior, the Shanghai Composite Index lacks mean-variance efficiency; this indicates that adding a risk-free asset to the portfolio set reduces mean-variance efficiency, which is consistent with the theoretical derivation.

7.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R × C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade-off computational intensity for statistical efficiency.

8.
The paper takes up inference in the stochastic frontier model with gamma distributed inefficiency terms, without restricting the gamma distribution to known integer values of its shape parameter (the Erlang form). The paper shows that Gibbs sampling with data augmentation can be used in a computationally efficient way to explore the posterior distribution of the model and conduct inference regarding parameters as well as functions of interest related to technical inefficiency.
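The data-augmentation step is easiest to see in the exponential special case (gamma shape equal to one), where each inefficiency term has a full conditional that is a normal distribution truncated at zero. The sketch below shows only that step under that simplifying assumption; the paper's contribution is handling a general, non-integer gamma shape, which requires an additional Metropolis-type update not shown here.

```python
# One Gibbs data-augmentation step for a frontier y = X b - u + v,
# v ~ N(0, s_v^2), u ~ Exponential(lam): u_i | rest is N(mu_i, s_v^2) truncated at 0.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

def draw_u(y, X, beta, sigma_v, lam):
    # full-conditional mean combines the residual and the exponential prior rate
    mu = X @ beta - y - lam * sigma_v**2
    a = (0.0 - mu) / sigma_v                    # standardized lower bound at zero
    return truncnorm.rvs(a, np.inf, loc=mu, scale=sigma_v, random_state=rng)

# toy data
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.7])
u = rng.exponential(0.3, size=n)
y = X @ beta_true - u + rng.normal(0, 0.2, size=n)

u_draw = draw_u(y, X, beta_true, sigma_v=0.2, lam=1 / 0.3)
print(round(float(u_draw.mean()), 3), round(float(u.mean()), 3))
```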

9.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
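The flavour of such a covariance correction can be sketched as a sandwich-type rescaling of the raw posterior covariance from the chain. The formula below is the standard sandwich form for quantile regression with the asymmetric Laplace scale fixed at one and complete data; the paper's adjustment covers estimated scale and censoring, so this is an illustration only, not the authors' exact procedure.

```python
# Sandwich-style adjustment of an asymmetric-Laplace posterior covariance:
# V_adj = tau * (1 - tau) * Sigma_post @ (X'X) @ Sigma_post
# (draws assumed to come from an AL-likelihood MCMC run with scale fixed at 1).
import numpy as np

def adjusted_covariance(draws, X, tau):
    """draws: (n_draws, k) posterior beta draws; X: (n, k) design matrix."""
    sigma_post = np.cov(draws, rowvar=False)        # raw posterior covariance
    return tau * (1 - tau) * sigma_post @ (X.T @ X) @ sigma_post

# usage sketch with placeholder draws (in practice these come from the MCMC chain)
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
draws = rng.normal(size=(1000, 2)) * 0.1 + np.array([1.0, 0.5])
V_adj = adjusted_covariance(draws, X, tau=0.5)
se_adj = np.sqrt(np.diag(V_adj))                    # adjusted posterior standard errors
print(np.round(se_adj, 3))
```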

10.
11.
The need for new methods to deal with big data is a common theme in most scientific fields, although its definition tends to vary with the context. Statistical ideas are an essential part of this, and as a partial response, a thematic program on statistical inference, learning and models in big data was held in 2015 in Canada, under the general direction of the Canadian Statistical Sciences Institute, with major funding from, and most activities located at, the Fields Institute for Research in Mathematical Sciences. This paper gives an overview of the topics covered, describing challenges and strategies that seem common to many different areas of application and including some examples of applications to make these challenges and strategies more concrete.

12.
Inference in Cointegrating Models: UK M1 Revisited
The paper addresses the practical determination of cointegration rank. This is difficult for many reasons: deterministic terms play a crucial role in limiting distributions, and systems may not be formulated to ensure similarity to nuisance parameters; finite-sample critical values may differ from asymptotic equivalents; dummy variables alter critical values, often greatly; multiple cointegration vectors must be identified to allow inference; the data may be I(2) rather than I(1), altering distributions; and conditioning must be done with care. These issues are illustrated by an empirical application of multivariate cointegration analysis to a small model of narrow money, prices, output and interest rates in the UK.
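One concrete ingredient of the rank-determination exercise, the Johansen trace test, can be run with statsmodels. The paper's points about deterministic terms, dummies and finite-sample corrections all go beyond this default call, so the snippet below is only a starting point on simulated data.

```python
# Johansen trace test for cointegration rank on a small simulated system.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(5)
T = 300
common = np.cumsum(rng.normal(size=T))              # shared stochastic trend
data = np.column_stack([
    common + rng.normal(scale=0.5, size=T),
    0.8 * common + rng.normal(scale=0.5, size=T),
    np.cumsum(rng.normal(size=T)),                  # an independent I(1) series
])

res = coint_johansen(data, det_order=0, k_ar_diff=1)
for r, (stat, cvs) in enumerate(zip(res.lr1, res.cvt)):
    print(f"H0: rank <= {r}: trace stat = {stat:.2f}, 95% critical value = {cvs[1]:.2f}")
```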

13.
This paper applies a large number of models to three previously-analyzed data sets, and compares the point estimates and confidence intervals for technical efficiency levels. Classical procedures include multiple comparisons with the best, based on the fixed effects estimates; a univariate version, marginal comparisons with the best; bootstrapping of the fixed effects estimates; and maximum likelihood given a distributional assumption. Bayesian procedures include a Bayesian version of the fixed effects model, and various Bayesian models with informative priors for efficiencies. We find that fixed effects models generally perform poorly; there is a large payoff to distributional assumptions for efficiencies. We do not find much difference between Bayesian and classical procedures, in the sense that the classical MLE based on a distributional assumption for efficiencies gives results that are rather similar to a Bayesian analysis with the corresponding prior.
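The fixed-effects route the comparison starts from is easy to illustrate: estimate firm-specific intercepts in a dummy-variable regression and measure each firm's technical efficiency relative to the best intercept. The confidence-interval machinery (multiple comparisons with the best, bootstrap) is not reproduced in this sketch, and the data are simulated.

```python
# Fixed-effects efficiency estimates: alpha_i from a dummy-variable regression,
# efficiency_i = exp(alpha_i - max_j alpha_j), so the "best" firm scores 1.
import numpy as np

rng = np.random.default_rng(6)
n_firms, T = 10, 20
firm = np.repeat(np.arange(n_firms), T)
x = rng.normal(size=n_firms * T)
alpha_true = rng.normal(0, 0.3, size=n_firms)
y = alpha_true[firm] + 0.6 * x + rng.normal(0, 0.1, size=n_firms * T)  # log-output

# dummy-variable (LSDV) estimation of the firm effects and the slope
D = np.eye(n_firms)[firm]                      # firm dummies
Z = np.column_stack([D, x])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
alpha_hat = coef[:n_firms]

efficiency = np.exp(alpha_hat - alpha_hat.max())
print(np.round(efficiency, 3))
```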

14.
This article considers spatial autoregressive (SAR) models. We propose a Bayesian method to estimate the parameters and compare it with the maximum likelihood (ML) method. The Bayesian estimators are evaluated in Monte Carlo studies, and we find them to be as efficient as the ML estimators.

15.
To address the inability of the traditional Bühlmann–Straub credibility model to handle missing data effectively, this paper uses Bayesian statistical methods to construct a new class of Bayesian credibility models and introduces Markov chain Monte Carlo (MCMC) computation based on Gibbs sampling. A hierarchical normal posterior model for claims is built for an empirical analysis that demonstrates the model's validity. The results show that the MCMC-based Bayesian credibility model can dynamically simulate the posterior distributions of the model parameters and improve estimation precision, which is of practical significance for improving insurers' experience rating methods.

16.
Vast amounts of data that could be used in the development and evaluation of policy for the benefit of society are collected by statistical agencies. It is therefore no surprise that there is very strong demand from analysts, within business, government, universities and other organisations, to access such data. When allowing access to micro‐data, a statistical agency is obliged, often legally, to ensure that access is unlikely to result in the disclosure of information about a particular person or organisation. Managing the risk of disclosure is referred to as statistical disclosure control (SDC). This paper describes an approach to SDC for output from analysis using generalised linear models, including estimates of regression parameters and their variances, diagnostic statistics and plots. The Australian Bureau of Statistics has implemented the approach in a remote analysis system, which returns analysis output from remotely submitted queries. A framework for measuring disclosure risk associated with a remote server is proposed. The disclosure risk and utility of the approach are measured in two real‐life case studies and in simulation.

17.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units; for example, with employment of a sampling step that selects geographically indexed clusters of units. A sampling‐weighted pseudo‐posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between co-clustered units inflates the scale of the resulting pseudo‐posterior covariance matrix, which has been shown to induce undercoverage of the credibility sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions are different between each of the pseudo‐maximum likelihood estimate (MLE), the pseudo‐posterior and the MLE under simple random sampling. Through insights from survey‐sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast postprocessing step to Markov chain Monte Carlo draws of the pseudo‐posterior distribution. This adjustment projects the pseudo‐posterior covariance matrix such that the nominal coverage is approximately achieved. We make an application to the National Survey on Drug Use and Health as a motivating example and demonstrate the efficacy of our scale and shape projection procedure on synthetic data for several common archetypes of survey designs.
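A generic version of such a post-processing step is an affine rescaling of the pseudo-posterior draws so that their covariance matches a design-based (for example, replication-type) variance estimate. This is only a schematic stand-in for the authors' projection, and the design-based covariance supplied below is a placeholder input, not something estimated from a real survey design.

```python
# Rescale pseudo-posterior draws so their covariance matches a supplied
# design-based covariance estimate (schematic post-processing step).
import numpy as np

def rescale_draws(draws, V_design):
    """draws: (n_draws, k) pseudo-posterior draws; V_design: (k, k) target covariance."""
    center = draws.mean(axis=0)
    L_post = np.linalg.cholesky(np.cov(draws, rowvar=False))
    L_target = np.linalg.cholesky(V_design)
    A = L_target @ np.linalg.inv(L_post)         # maps posterior scale onto target scale
    return center + (draws - center) @ A.T

# usage sketch with placeholder inputs
rng = np.random.default_rng(7)
draws = rng.multivariate_normal([0.0, 1.0], [[0.04, 0.0], [0.0, 0.09]], size=2000)
V_design = np.array([[0.10, 0.02], [0.02, 0.16]])  # e.g. from replicate weights
adjusted = rescale_draws(draws, V_design)
print(np.round(np.cov(adjusted, rowvar=False), 3))
```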

18.
This paper presents a full Bayesian analysis of a nonparametric spatial lag model, covering both estimation of the parameters and fitting of the unknown link function with free-knot splines. The proposed Bayesian approach is implemented via a reversible-jump Markov chain Monte Carlo (RJMCMC) algorithm. In the Bayesian analysis, conjugate normal–inverse-gamma priors are assigned to the spline coefficients and the error variance, from which the marginal posterior distributions of the remaining unknowns are obtained; in addition, a simple but general random-walk Metropolis sampler is designed to draw from the conditional posterior distribution of the spatial weight parameter. The proposed method is illustrated with numerical simulations.
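The random-walk Metropolis step mentioned here has a generic form: propose from a normal distribution centred at the current value and accept with the usual ratio of unnormalised conditional posterior densities. The target density below is a placeholder on (-1, 1), not the model's actual conditional posterior for the spatial parameter.

```python
# Generic random-walk Metropolis sampler for a scalar parameter, given an
# unnormalised conditional log-posterior (here a placeholder target on (-1, 1)).
import numpy as np

rng = np.random.default_rng(8)

def log_post(rho):
    # placeholder: a Beta(5, 3)-shaped density for (rho + 1)/2 on (-1, 1)
    if not -1.0 < rho < 1.0:
        return -np.inf
    u = (rho + 1.0) / 2.0
    return 4.0 * np.log(u) + 2.0 * np.log(1.0 - u)

def rw_metropolis(log_post, start=0.0, n_iter=5000, step=0.2):
    draws = np.empty(n_iter)
    current, current_lp = start, log_post(start)
    for i in range(n_iter):
        proposal = current + step * rng.normal()
        proposal_lp = log_post(proposal)
        if np.log(rng.random()) < proposal_lp - current_lp:   # accept/reject
            current, current_lp = proposal, proposal_lp
        draws[i] = current
    return draws

draws = rw_metropolis(log_post)
print(round(draws[1000:].mean(), 3))    # posterior mean after burn-in
```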

19.
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
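The FDH estimator itself is simple to state: a firm's input efficiency is the smallest uniform contraction of its inputs that is still dominated by some observed firm producing at least as much of every output. The sketch below computes these scores on simulated data; the bias corrections and bootstrap confidence intervals discussed in the paper are not included.

```python
# Input-oriented FDH efficiency scores: for each firm, the smallest factor by
# which its inputs could be scaled down while staying dominated by an observed
# firm that produces at least as much of every output.
import numpy as np

def fdh_input_efficiency(X, Y):
    """X: (n, p) inputs, Y: (n, q) outputs; returns one score in (0, 1] per firm."""
    n = X.shape[0]
    scores = np.ones(n)
    for i in range(n):
        dominating = np.all(Y >= Y[i], axis=1)            # firms with at least y_i
        ratios = np.max(X[dominating] / X[i], axis=1)     # input contraction needed
        scores[i] = ratios.min()
    return scores

rng = np.random.default_rng(9)
X = rng.uniform(1, 10, size=(30, 2))                      # two inputs
Y = (X.prod(axis=1) ** 0.4 * rng.uniform(0.5, 1.0, size=30)).reshape(-1, 1)
print(np.round(fdh_input_efficiency(X, Y), 3))
```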

20.
