Similar articles
20 similar articles found (search time: 31 ms)
1.
Poly-t densities are defined by the property that their kernel is a product, or a ratio of products, of multivariate t-density kernels. As discussed in Drèze (1977), these densities arise as Bayesian posterior densities for regression coefficients under a variety of specifications for the prior density and the data generating process. We have therefore developed methods and computer algorithms to evaluate integrating constants and other characteristics of poly-t densities with no more than a single quadratic form in the numerator (section 2). As a by-product of our analysis we have also derived an algorithm for the computation of moments of positive definite quadratic forms in Normal variables (section 3). In section 4 we discuss inference on the sampling variances associated with the models discussed in Drèze (1977).
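The moments in section 3 rest on the standard identity E[x'Ax] = tr(AΣ) + μ'Aμ for x ~ N(μ, Σ). As a hedged illustration (a Monte Carlo check of the identity in the diagonal case, not the paper's algorithm; all parameter values are made up):

```python
import random

def quad_form_mean_mc(mu, sigma2, a, n_draws=200_000, seed=0):
    """Monte Carlo estimate of E[x'Ax] for diagonal A = diag(a) and
    independent x_i ~ N(mu_i, sigma2_i)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        total += sum(ai * rng.gauss(mi, si ** 0.5) ** 2
                     for ai, mi, si in zip(a, mu, sigma2))
    return total / n_draws

def quad_form_mean_exact(mu, sigma2, a):
    """Closed form E[x'Ax] = tr(A Sigma) + mu'A mu, which for the
    diagonal case reduces to sum_i a_i * (sigma2_i + mu_i^2)."""
    return sum(ai * (si + mi ** 2) for ai, mi, si in zip(a, mu, sigma2))

mu, sigma2, a = [1.0, -0.5], [2.0, 0.5], [1.0, 3.0]
exact = quad_form_mean_exact(mu, sigma2, a)  # 1*(2+1) + 3*(0.5+0.25) = 5.25
mc = quad_form_mean_mc(mu, sigma2, a)
```

The two values agree up to Monte Carlo error, which is the kind of sanity check one would run before trusting a moment algorithm.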

2.
Journal of Econometrics, 2002, 106(1): 119–142
The entropy principle yields, for a given set of moments, a density that involves the smallest amount of prior information. We first show how entropy densities may be constructed in a numerically efficient way as the minimization of a potential. Next, for the case where the first four moments are given, we characterize the skewness–kurtosis domain for which densities are defined. This domain is found to be much larger than for Hermite or Edgeworth expansions. Finally, we show how this technique can be used to estimate a GARCH model in which skewness and kurtosis are time varying. We find little predictability of skewness and kurtosis for weekly data.
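The potential-minimization idea can be sketched in a minimal discretized form: the maximum-entropy density matching given raw moments is an exponential family, whose parameters minimize a convex dual potential. This is a toy version under illustrative choices (grid, step size, targets), not the authors' implementation:

```python
import math

def entropy_density(grid, moments, n_iter=5000, lr=0.01):
    """Maximum-entropy density on a discrete grid subject to given raw
    moments: p(x) ∝ exp(sum_k lam_k x^(k+1)), with lam found by plain
    gradient descent on the convex dual potential
    Psi(lam) = log Z(lam) - lam . moments."""
    K = len(moments)
    lam = [0.0] * K
    for _ in range(n_iter):
        w = [math.exp(sum(lam[k] * x ** (k + 1) for k in range(K))) for x in grid]
        z = sum(w)
        p = [wi / z for wi in w]
        # gradient of the dual: model moments minus target moments
        grad = [sum(pi * x ** (k + 1) for pi, x in zip(p, grid)) - moments[k]
                for k in range(K)]
        lam = [lam[k] - lr * grad[k] for k in range(K)]
    return p

grid = [i / 10 for i in range(-50, 51)]
p = entropy_density(grid, [0.0, 1.0])            # target mean 0, E[x^2] = 1
mean = sum(pi * x for pi, x in zip(p, grid))
m2 = sum(pi * x * x for pi, x in zip(p, grid))   # both close to the targets
```

With only the first two moments constrained, the fitted density is close to a discretized standard normal, as theory predicts; the four-moment case adds skewness and kurtosis terms to the same machinery.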

3.
This paper is concerned with the construction of prior probability measures for parametric families of densities where the framework is such that only beliefs or knowledge about a single observable data point is required. We pay particular attention to the parameter that minimizes a measure of divergence to the distribution providing the data. The prior distribution reflects this attention and we discuss the application of the Bayes rule from this perspective. Our framework is fundamentally non-parametric and we are able to interpret prior distributions on the parameter space using ideas of matching loss functions, one coming from the data model and the other from the prior.

4.
The classes of monotone or convex (and necessarily monotone) densities on (0, ∞) can be viewed as special cases of the classes of k-monotone densities on (0, ∞). These classes bridge the gap between the classes of monotone (1-monotone) and convex decreasing (2-monotone) densities, for which asymptotic results are known, and the class of completely monotone (∞-monotone) densities on (0, ∞). In this paper we consider non-parametric maximum likelihood and least squares estimators of a k-monotone density g_0. We prove existence of the estimators and give characterizations. We also establish consistency properties, and show that the estimators are splines of degree k − 1 with simple knots. We further provide asymptotic minimax risk lower bounds for estimating the derivatives of g_0 at a fixed point x_0, under smoothness assumptions on g_0 at x_0.

5.
In this paper, we propose a Bayesian estimation and forecasting procedure for noncausal autoregressive (AR) models. Specifically, we derive the joint posterior density of the past and future errors and the parameters, yielding predictive densities as a by-product. We show that the posterior model probabilities provide a convenient model selection criterion in discriminating between alternative causal and noncausal specifications. As an empirical application, we consider US inflation. The posterior probability of noncausality is found to be high (over 98%). Furthermore, the purely noncausal specifications yield more accurate inflation forecasts than alternative causal and noncausal AR models. Copyright © 2010 John Wiley & Sons, Ltd.
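The model-selection step rests on the generic identity p(M_i | y) ∝ p(y | M_i) p(M_i). A hedged sketch of that computation, done stably on the log scale (the log marginal likelihood values below are invented for illustration, not taken from the paper):

```python
import math

def posterior_model_probs(log_marglik, prior=None):
    """Posterior model probabilities p(M_i|y) ∝ p(y|M_i) p(M_i),
    computed from log marginal likelihoods with the log-sum-exp trick
    to avoid underflow."""
    n = len(log_marglik)
    prior = prior or [1.0 / n] * n
    logs = [lm + math.log(pi) for lm, pi in zip(log_marglik, prior)]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]
    s = sum(w)
    return [wi / s for wi in w]

# hypothetical causal vs noncausal AR, equal prior model probabilities
probs = posterior_model_probs([-340.2, -336.1])
```

A log marginal likelihood gap of about 4 already translates into a posterior probability above 98% for the favored model, which is the order of magnitude the abstract reports for noncausality.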

6.
The purpose of this paper is to provide a critical discussion on real-time estimation of dynamic generalized linear models. We describe and contrast three estimation schemes, the first of which is based on conjugate analysis and linear Bayes methods, the second based on posterior mode estimation, and the third based on sequential Monte Carlo sampling methods, also known as particle filters. For the first scheme, we give a summary of inference components, such as prior/posterior and forecast densities, for the most common response distributions. Using data on tourist arrivals in Cyprus, we illustrate the Poisson model and provide a comparative analysis of the three schemes.
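The third scheme can be sketched as a bootstrap particle filter for a toy dynamic Poisson model. This is a minimal illustrative implementation under assumed dynamics (AR(1) log-intensity with made-up parameters), not the conjugate or mode-based schemes, and not the paper's Cyprus application:

```python
import math, random

def poisson_particle_filter(ys, phi=0.9, tau=0.3, n_part=2000, seed=1):
    """Bootstrap particle filter for a toy dynamic Poisson model:
    latent log-intensity a_t = phi * a_{t-1} + N(0, tau^2),
    observation y_t ~ Poisson(exp(a_t)). Returns filtered means of a_t."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_part)]
    means = []
    for y in ys:
        # propagate, weight by the Poisson likelihood (up to a constant), resample
        parts = [phi * a + rng.gauss(0.0, tau) for a in parts]
        logw = [y * a - math.exp(a) for a in parts]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        parts = rng.choices(parts, weights=w, k=n_part)
        means.append(sum(parts) / n_part)
    return means

means = poisson_particle_filter([3, 4, 2, 6, 5, 4])
```

Multinomial resampling at every step is the simplest choice; practical filters often resample adaptively and use better proposals, which is part of what a comparative analysis of the three schemes would weigh.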

7.
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases factor methods have been traditionally used, but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic dataset containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Typically, we find that the simple Minnesota prior forecasts well in medium and large VARs, which makes this prior attractive relative to computationally more demanding alternatives. Our empirical results show the importance of using forecast metrics based on the entire predictive density, instead of relying solely on those based on point forecasts. Copyright © 2011 John Wiley & Sons, Ltd.
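One common form of the Minnesota prior shrinks each equation toward a univariate random walk, with prior variances that decay with the lag and are tighter on cross-variable coefficients. A sketch of that variance rule (this is one of several parameterizations in the literature, not necessarily the one the paper uses; lam and theta values are illustrative):

```python
def minnesota_prior_var(i, j, l, sigma, lam=0.2, theta=0.5):
    """Prior variance of the coefficient on lag l of variable j in
    equation i under a standard Minnesota prior:
      (lam / l)^2                               for own lags (i == j),
      (lam * theta / l)^2 * s_i^2 / s_j^2       for cross lags,
    where sigma holds residual scale estimates s_i."""
    if i == j:
        return (lam / l) ** 2
    return (lam * theta / l) ** 2 * (sigma[i] ** 2 / sigma[j] ** 2)

sigma = [1.0, 2.0]
v_own = minnesota_prior_var(0, 0, 1, sigma)    # (0.2/1)^2 = 0.04
v_cross = minnesota_prior_var(0, 1, 2, sigma)  # (0.2*0.5/2)^2 * 1/4 = 0.000625
```

Because every prior moment is available in closed form, this prior scales to medium and large VARs without the heavy computation the abstract contrasts it against.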

8.
We model a regression density flexibly so that at each value of the covariates the density is a mixture of normals with the means, variances and mixture probabilities of the components changing smoothly as a function of the covariates. The model extends the existing models in two important ways. First, the components are allowed to be heteroscedastic regressions as the standard model with homoscedastic regressions can give a poor fit to heteroscedastic data, especially when the number of covariates is large. Furthermore, we typically need fewer components, which makes it easier to interpret the model and speeds up the computation. The second main extension is to introduce a novel variable selection prior into all the components of the model. The variable selection prior acts as a self-adjusting mechanism that prevents overfitting and makes it feasible to fit flexible high-dimensional surfaces. We use Bayesian inference and Markov Chain Monte Carlo methods to estimate the model. Simulated and real examples are used to show that the full generality of our model is required to fit a large class of densities, but also that special cases of the general model are interesting models for economic data.
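The likelihood side of such a model can be sketched with a single covariate: each component's mean, log-standard deviation, and softmax mixing score are linear in x, so all three ingredients move smoothly with the covariate. A toy parameterization (the functional forms and parameter values are illustrative, not the paper's specification):

```python
import math

def smooth_mixture_density(y, x, params):
    """Conditional density p(y|x): a mixture of heteroscedastic normal
    regressions in which component k has mean a_k + b_k*x, log-standard
    deviation c_k + d_k*x, and a softmax mixing weight with score
    u_k + v_k*x."""
    scores = [u + v * x for (_, _, _, _, u, v) in params]
    m = max(scores)
    ew = [math.exp(s - m) for s in scores]
    z = sum(ew)
    dens = 0.0
    for (a, b, c, d, _, _), wk in zip(params, ew):
        mu = a + b * x
        sd = math.exp(c + d * x)
        dens += (wk / z) * math.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return dens

# two components: (a, b, c, d, u, v) per component, values illustrative
params = [(0.0, 1.0, 0.0, 0.1, 0.0, 0.0),
          (2.0, -0.5, -0.5, 0.0, 0.5, -1.0)]
val = smooth_mixture_density(1.0, 0.5, params)
```

Estimation then amounts to placing priors (including the variable selection prior) on the a, b, c, d, u, v coefficients and sampling them by MCMC.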

9.
This paper re-evaluates the telecommunication policies often applied to create regional dispersion of services in developing countries. We observe that failure to consider the complexities of the regional telecommunication systems in creating policies and investment strategies has increased the telecom gap between urban and rural regions worldwide. In particular, the teledensities of rural telecommunications in developing countries have remained very low in spite of support through universal service obligation fees and cross-subsidization from international services. As traditional methods for economic analysis and modeling have failed to identify mechanisms that improve telephone dispersion in these countries, we use a system dynamics modeling approach to deal with complexities of the situation in order to evaluate how Universal Service Obligations (USOs) and International Cross-Subsidy (ICS) policies affect telephone densities. We demonstrate that these policies may be counterproductive due to the structure of the telecom system itself. We also show that, when market-clearing pricing is combined with USOs once the urban telephone density reaches a minimum threshold, the dispersion of rural telecommunications can be considerably improved.

10.
We evaluate conditional predictive densities for US output growth and inflation using a number of commonly-used forecasting models that rely on large numbers of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly-used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can cause point forecasts to either improve or deteriorate, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be approximated correctly by a normal density: the simple, equal average when predicting output growth, and the Bayesian model average when predicting inflation.
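Calibration of predictive densities of this kind is often checked through probability integral transforms (PITs): if the predictive density is correct, the PITs of the realizations are i.i.d. uniform on (0, 1). A minimal sketch on synthetic data (not the paper's tests or data), comparing a correct normal predictive density with one whose spread is misspecified:

```python
import math, random

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pit_values(realized, means, sds):
    """Probability integral transforms of realizations under normal
    predictive densities N(mean_t, sd_t^2)."""
    return [normal_cdf((y - m) / s) for y, m, s in zip(realized, means, sds)]

def ks_stat_uniform(u):
    """Kolmogorov-Smirnov distance of the PITs from the uniform CDF."""
    u = sorted(u)
    n = len(u)
    return max(max(abs(u[i] - i / n), abs(u[i] - (i + 1) / n)) for i in range(n))

rng = random.Random(0)
ys = [rng.gauss(0.0, 1.0) for _ in range(500)]
u_good = pit_values(ys, [0.0] * 500, [1.0] * 500)  # correct predictive density
u_bad = pit_values(ys, [0.0] * 500, [2.0] * 500)   # too-dispersed density
```

The too-dispersed density produces PITs that pile up near 0.5, so its KS distance from uniformity is visibly larger, which is the kind of higher-moment failure point forecasts cannot reveal.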

11.
In the context of either Bayesian or classical sensitivity analyses of over-parametrized models for incomplete categorical data, it is well known that prior dependence of posterior inferences for nonidentifiable parameters, or the choice of too-parsimonious over-parametrized models, may lead to erroneous conclusions. Nevertheless, some authors either pay no attention to which parameters are nonidentifiable or do not appropriately account for possible prior dependence. We review the literature on this topic and consider simple examples to emphasize that in both inferential frameworks, the subjective components can influence results in nontrivial ways, irrespective of the sample size. Specifically, we show that prior distributions commonly regarded as slightly informative or noninformative may actually be too informative for nonidentifiable parameters, and that the choice of over-parametrized models may drastically impact the results, suggesting that a careful examination of their effects should be considered before drawing conclusions.

12.
We consider the problem of unobserved components in time series from a Bayesian non-parametric perspective. The identification conditions are treated as unknown and analyzed in a probabilistic framework. In particular, informative prior distributions force the spectral decomposition to be in an identifiable region. Then, the likelihood function adapts the prior decompositions to the data.

13.
It is very common in applied frequentist ("classical") statistics to carry out a preliminary statistical (i.e. data-based) model selection by, for example, using preliminary hypothesis tests or minimizing AIC. This is usually followed by the inference of interest, using the same data, based on the assumption that the selected model had been given to us a priori. This assumption is false and can lead to inaccurate and misleading inference. We consider the important case that the inference of interest is a confidence region. We review the literature that shows that the resulting confidence regions typically have very poor coverage properties. We also briefly review the closely related literature that describes the coverage properties of prediction intervals after preliminary statistical model selection. A possible motivation for preliminary statistical model selection is a wish to utilize uncertain prior information in the inference of interest. We review the literature in which the aim is to utilize uncertain prior information directly in the construction of confidence regions, without requiring the intermediate step of a preliminary statistical model selection. We also point out this aim as a future direction for research.
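The coverage failure is easy to reproduce in a toy normal-mean version of the problem (a deliberately simplified caricature under assumed parameter values, not any specific procedure from the reviewed literature): pretest mu = 0 at the 5% level, then report the usual interval as if no selection had occurred.

```python
import random

def pretest_coverage(mu=0.3, n=20, reps=20_000, seed=0):
    """Coverage of the naive 95% interval for a normal mean after a
    preliminary 5%-level test of mu = 0: if the test does not reject,
    the 'selected model' asserts mu = 0; if it rejects, the usual
    xbar +/- 1.96/sqrt(n) interval is reported, ignoring the selection."""
    rng = random.Random(seed)
    half = 1.96 / n ** 0.5
    covered = 0
    for _ in range(reps):
        xbar = rng.gauss(mu, 1.0 / n ** 0.5)
        if abs(xbar) < half:                     # pretest keeps mu = 0
            covered += (mu == 0.0)
        else:                                    # naive post-selection interval
            covered += (xbar - half <= mu <= xbar + half)
    return covered / reps

cov = pretest_coverage()   # far below the nominal 0.95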

14.
Do house prices reflect fundamentals? Aggregate and panel data evidence
We investigate whether the recently high, and subsequently rapidly declining, U.S. house prices have been justified by fundamental factors such as personal income, population, house rent, stock market wealth, building costs, and mortgage rates. We first conduct standard unit root and cointegration tests with aggregate data. Nationwide analysis potentially suffers from the low power of stationarity tests and from ignoring dependence among regional housing markets. Therefore, we also employ panel data stationarity tests that are robust to cross-sectional dependence. Contrary to previous panel studies of the U.S. housing market, we consider several fundamental factors, not just one. Our results confirm that panel data unit root tests have greater power than univariate tests; however, the overall conclusions are the same for both methodologies. House prices do not align with the fundamentals in sub-samples prior to 1996 or from 1997 to 2006. It appears that real estate prices take long swings away from their fundamental value, and it can take decades before they revert to it. The most recent correction (a collapsed bubble) occurred around 2006.

15.
Generalized least squares estimators, with estimated variance-covariance matrices, and maximum likelihood estimators have been proposed in the literature to deal with the problem of estimating autoregressive models with autocorrelated disturbances. In this paper we compare the small-sample efficiencies of these estimators with those of some approximate Bayes estimators. The comparison is carried out in a sampling experiment for a particular model specification. Although these Bayes estimators utilize very weak prior information, they outperform the sampling theory estimators in every case we consider.

16.
Bayesian techniques for samples from classical, generalized and multivariate Pareto distributions are described. We place emphasis on choosing proper prior distributions that do not lead to anomalous posterior densities.
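For the classical Pareto with known scale, a proper Gamma prior on the shape parameter gives a conjugate posterior that remains proper even for tiny samples, which illustrates the point about avoiding anomalous posteriors. A sketch under those assumptions (the data and hyperparameters are illustrative):

```python
import math

def pareto_shape_posterior(data, xm, a0=2.0, b0=1.0):
    """Conjugate update for the shape alpha of a classical Pareto(alpha, xm)
    with known scale xm: with a Gamma(a0, b0) prior (shape/rate), the
    posterior is Gamma(a0 + n, b0 + sum_i log(x_i / xm)). A proper prior
    (a0, b0 > 0) keeps the posterior proper for any sample size."""
    n = len(data)
    s = sum(math.log(x / xm) for x in data)
    a_post, b_post = a0 + n, b0 + s
    return a_post, b_post, a_post / b_post   # posterior mean of alpha

a_post, b_post, post_mean = pareto_shape_posterior([1.5, 2.0, 4.0, 8.0], xm=1.0)
```

With an improper prior (a0 = b0 = 0) the same formulas can yield an improper posterior when the data carry little information, which is exactly the anomaly a proper prior rules out.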

17.
In this research, we disentangle the relationship between several key aspects of a team leader's experience and the likelihood of improvement project success. Using the lens of socio-technical systems, we argue that the effect of team leader experience derives from the social system as well as the technical system. The aspects of team leader experience we examine include team leader social capital (part of the social system) and team leader experience leading projects of the same type (part of the technical system). We examine four different, yet related, dimensions of a team leader's social capital, which we motivate from the social networks literature. One dimension, team leader familiarity, suggests that social capital is created when team leaders have experience working with current team members on prior improvement projects, and that such social capital increases the likelihood of improvement project success. We develop three additional dimensions, using social network analysis (SNA), to capture the idea that the improvement team leader's social capital extends beyond the current team to include everyone the leader has previously worked with on improvement projects. Contrasting our SNA-based dimensions with team leader familiarity enables us to better understand the impact of a team leader's social capital both inside and beyond the team. We also examine the effect of a team leader's experience leading prior projects of the same type, and consider the extent to which organizational experience may moderate the impact of both team leader social capital and same-type project experience. Based on an analysis of archival data on six sigma projects spanning six years at a Fortune 500 consumer products manufacturer, we find that two of our SNA-based dimensions of team leader social capital, as well as experience leading projects of the same type, increase the likelihood of project success. In addition, we show that organizational experience moderates the relationship between team leader same-type project experience and project success, but not the relationship between the dimensions of team leader social capital and project success. These results provide insights into how dimensions of team leader experience and organizational experience collectively impact the operational performance of improvement teams.

18.
We deal with general mixtures of hierarchical models of the form m(x) = ∫ f(x|θ) g(θ) dθ, where g(θ) and m(x) are called the mixing and mixed (or compound) densities respectively, and θ is called the mixing parameter. The usual statistical application of these models arises when we have data x_i, i = 1, …, n, with densities f(x_i|θ_i) for given θ_i, and the θ_i are independent with common density g(θ). For a certain well-known class of densities f(x|θ), we present a sample-based approach to reconstruct g(θ). We first provide theoretical results, and then, in an empirical Bayes spirit, we use the first four moments of the data to estimate the first four moments of g(θ). Using sampling techniques we proceed in a fully Bayesian fashion to obtain any posterior summaries of interest. Simulations investigating the operating characteristics of the proposed methodology are presented. We illustrate the approach using data from mixed Poisson and mixed exponential densities.
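For the mixed Poisson case, the moment-matching step exploits E[x] = E[θ] and Var(x) = E[θ] + Var(θ), so moments of the mixing density g follow from sample moments of the data. A minimal sketch restricted to the first two moments (the abstract's approach uses the first four; the two-point mixing density below is an invented check):

```python
import math, random

def poisson_draw(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

def mixing_moments_poisson(xs):
    """Moment matching for x|theta ~ Poisson(theta), theta ~ g: since
    E[x] = E[theta] and Var(x) = E[theta] + Var(theta), the first two
    moments of g follow from the first two sample moments of the data."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean, max(var - mean, 0.0)   # estimates of E[theta], Var(theta)

# synthetic check: two-point mixing density with E[theta] = 2, Var(theta) = 1
rng = random.Random(0)
xs = [poisson_draw(rng.choice([1.0, 3.0]), rng) for _ in range(50_000)]
m_theta, v_theta = mixing_moments_poisson(xs)
```

The recovered moments of g are then available as inputs to the fully Bayesian sampling step, which produces whatever posterior summaries are of interest.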

19.
We consider Bayesian estimation of a stochastic production frontier with ordered categorical output, where the inefficiency error is assumed to follow an exponential distribution, and where output, conditional on the inefficiency error, is modelled as an ordered probit model. Gibbs sampling algorithms are provided for estimation with both cross-sectional and panel data, with panel data being our main focus. A Monte Carlo study and a comparison of results from an example where data are used in both continuous and categorical form supports the usefulness of the approach. New efficiency measures are suggested to overcome a lack-of-invariance problem suffered by traditional efficiency measures. Potential applications include health and happiness production, university research output, financial credit ratings, and agricultural output recorded in broad bands. In our application to individual health production we use data from an Australian panel survey to compute posterior densities for marginal effects, outcome probabilities, and a number of within-sample and out-of-sample efficiency measures.

20.
