Similar Literature
20 similar documents retrieved.
1.
    
How much can be learned from a noisy signal about the state of the world depends not only on the accuracy of the signal but also on the prior distribution. We therefore define a general information system as a tuple consisting of both a signal technology and a prior. In this paper we develop a learning order for general information systems and characterize the order in two different ways: first, in terms of the dispersion of posterior beliefs about state quantiles and, second, in terms of the value of learning for two different classes of decision makers. The first class includes all agents with quasi-linear quantile preferences, and the second class contains all agents with supermodular quantile preferences.

2.
Using computer simulations, the finite-sample performance of a number of classical and Bayesian wavelet shrinkage estimators for Poisson counts is examined. For the purpose of comparison, a variety of intensity functions, background intensity levels, sample sizes, primary resolution levels, wavelet filters and performance criteria are employed. The use of some of the estimators is demonstrated on a data set arising in high-energy astrophysics. Following the philosophy of reproducible research, the Matlab programs and the real-life data example used in this study are made freely available.
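For orientation, the following is a minimal sketch of one classical approach to wavelet shrinkage for Poisson counts: an Anscombe variance-stabilizing transform followed by universal-threshold soft shrinkage via PyWavelets. It is not any of the specific estimators compared in the paper, and the intensity function, filter and threshold are illustrative assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(42)

# Illustrative piecewise-constant intensity and Poisson counts (n = 256 bins).
n = 256
t = np.linspace(0, 1, n)
intensity = 5 + 20 * (t > 0.3) * (t < 0.6)           # hypothetical signal
counts = rng.poisson(intensity)

# Anscombe transform: approximately variance-stabilizes Poisson data (sigma ~= 1).
y = 2.0 * np.sqrt(counts + 3.0 / 8.0)

# Wavelet decomposition, universal-threshold soft shrinkage, reconstruction.
coeffs = pywt.wavedec(y, "haar", level=4)
thresh = np.sqrt(2.0 * np.log(n))                     # universal threshold with sigma = 1
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
y_hat = pywt.waverec(coeffs, "haar")[:n]

# Crude inverse Anscombe to return to the intensity scale.
intensity_hat = (y_hat / 2.0) ** 2 - 3.0 / 8.0
print(np.round(intensity_hat[:10], 2))
```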

3.
The present paper provides the original formulation of, and a joint response by a group of statistically trained scientists to, fourteen cryptic issues for discussion that Professor D.R. Cox handed out to the public after his 1997 Bernoulli Lecture at Groningen University.

4.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R × C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least squares while retaining good frequentist properties. The two approaches are illustrated with simulated data as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches that trade off computational intensity for statistical efficiency.
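As a concrete reference point, here is a minimal data-generating sketch of a binomial-beta hierarchical model in the 2×2 case, in the spirit of King, Rosen and Tanner (1999): only aggregate counts are observed, while group-specific rates are latent. The precinct sizes, group shares and Beta hyperparameters are illustrative assumptions, and the MCMC estimation step is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2x2 ecological setup: in precinct i, a fraction X[i] of voters belongs
# to group b; beta_b[i] and beta_w[i] are the unobserved group-specific turnout rates,
# and only the aggregate count T[i] out of N[i] voters is observed.
n_precincts = 100
N = rng.integers(200, 1000, size=n_precincts)         # voters per precinct
X = rng.uniform(0.1, 0.9, size=n_precincts)           # group-b share

c_b, d_b, c_w, d_w = 4.0, 2.0, 2.0, 4.0               # illustrative Beta hyperparameters
beta_b = rng.beta(c_b, d_b, size=n_precincts)
beta_w = rng.beta(c_w, d_w, size=n_precincts)

theta = X * beta_b + (1.0 - X) * beta_w               # aggregate turnout probability
T = rng.binomial(N, theta)                             # observed aggregate counts

print("mean observed turnout:", np.round((T / N).mean(), 3))
```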

5.
Fisher and "Student" quarreled in the early days of statistics about the design of experiments meant to measure the difference in yield between two breeds of corn. The discussion comes down to randomization versus model building. More than half a century has passed since then, but the different views remain. In this paper the discussion is framed in terms of artificial randomization and natural randomization, the latter being what remains after appropriate modeling. The Bayesian position is also discussed. An example based on the old corn-breeding discussion is given, showing that a simple robust model may lead to inference and experimental design that far outperform inference from randomized experiments. Finally, similar possibilities are suggested for statistical auditing.

6.
Bayesian approaches to the estimation of DSGE models are becoming increasingly popular. Prior knowledge is normally formalized either directly on the values of the deep parameters (‘microprior’) or indirectly, on macroeconomic indicators such as moments of observable variables (‘macroprior’). We introduce a non-parametric macroprior elicited from impulse response functions and assess its performance in shaping posterior estimates. We find that using a macroprior can lead to substantially different posterior estimates. We probe into the details of this result, showing that model misspecification is likely to be responsible for it. In addition, we assess to what extent the use of macropriors is impaired by the need to calibrate some hyperparameters.

7.
This paper criticizes the use of regression in audit samples to obtain confidence intervals for the error rate. It also criticizes the methodology of evaluating methods through simulation studies using real-life populations. This is done from a Bayesian viewpoint, which goes as far as stating that in this type of research the wrong questions are answered. A fundamental discussion of the role of model building, illustrated by the use of models in auditing, forms the centre of the paper.

8.
    
I thoroughly enjoyed reading the article by Bhadra et al. (2020) and convey my congratulations to the authors for providing a comprehensive and coherent review of horseshoe-based regularization approaches for machine learning models. I am thankful to the editors for providing this opportunity to write a discussion of this useful article, which I expect will turn out to be a good guide in the future for statisticians and practitioners alike. It is quite amazing to see the rapid progress and the magnitude of work advancing the horseshoe regularization approach since the seminal paper by Carvalho et al. (2010); the current review article is a testimony to this. While I have been primarily working with continuous spike-and-slab priors for high-dimensional Bayesian modeling, I have been following the literature on horseshoe regularization with keen interest. In my comments on this article, I will focus on some comparisons between these two approaches, particularly in terms of model building and methodology, and on some computational considerations. I would first like to provide some comments on performing valid inference within the horseshoe prior framework.

9.
    
In this paper, we provide an intensive review of recent developments in semiparametric and fully nonparametric panel data models that are linearly separable in the innovation and the individual-specific term. We analyze these developments under two alternative model specifications: fixed-effects and random-effects panel data models. More precisely, in the random-effects setting, we focus our attention on efficiency issues connected with the so-called working independence condition, an assumption introduced when estimating the asymptotic variance-covariance matrix of nonparametric estimators. In the fixed-effects setting, to cope with the so-called incidental parameters problem, we consider two different estimation approaches: profiling techniques and differencing methods. Furthermore, we are also interested in the endogeneity problem and how instrumental variables are used in this context. In addition, for practitioners, we show different ways of avoiding the so-called curse of dimensionality in pure nonparametric models; here, semiparametric and additive models appear as a solution when the number of explanatory variables is large.

10.
    
Estimation and prediction in high-dimensional multivariate factor stochastic volatility models is an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually carried out by Markov chain Monte Carlo methods (often particle Markov chain Monte Carlo methods), which are typically slow for high-dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose a fast and accurate variational Bayes method to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets and are shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, while being much faster.
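To fix ideas about the model class, the following is a minimal simulation sketch of a one-factor stochastic volatility data-generating process of the kind such models describe. The dimensions and parameter values are illustrative assumptions, and the variational Bayes estimation itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, k = 500, 10, 2                          # time points, observed series, latent factors
B = rng.normal(scale=0.5, size=(p, k))        # factor loadings (illustrative)
mu, phi, sig = -1.0, 0.95, 0.2                # AR(1) parameters of the log-volatilities

# Latent log-volatilities follow independent AR(1) processes.
h = np.full((T, k), mu)
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal(size=k)

f = np.exp(h / 2) * rng.normal(size=(T, k))   # latent factors with stochastic volatility
y = f @ B.T + 0.1 * rng.normal(size=(T, p))   # observed multivariate series

print(y.shape, np.round(y.std(axis=0), 2))
```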

11.
The Dirichlet-multinomial process can be seen as the generalisation of the binomial model with a beta prior distribution to the case where the number of categories is larger than two. In such a scenario, setting informative prior distributions becomes difficult when the number of categories is large, so the need for an objective approach arises. However, what does objective mean in the Dirichlet-multinomial process? To address this question, we study the sensitivity of the posterior distribution to the choice of an objective Dirichlet prior from among those presented in the available literature. We illustrate the impact of the choice of prior distribution in several scenarios and discuss the most sensible ones.
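A minimal sketch of the kind of sensitivity check described: with multinomial counts n and a Dirichlet(a) prior, the posterior is Dirichlet(a + n) by conjugacy, so posterior means can be compared across several commonly proposed "objective" choices of a. The counts below are illustrative, and the prior labels are the generic ones, not necessarily those examined in the paper.

```python
import numpy as np

counts = np.array([12, 3, 0, 1, 0])          # hypothetical multinomial data, K = 5 categories
K = counts.size

# Common "objective" Dirichlet priors discussed in the literature.
priors = {
    "uniform (a=1)":    np.ones(K),
    "Jeffreys (a=1/2)": np.full(K, 0.5),
    "Perks (a=1/K)":    np.full(K, 1.0 / K),
}

for name, a in priors.items():
    post = a + counts                          # Dirichlet posterior parameters
    post_mean = post / post.sum()              # posterior mean of the category probabilities
    print(f"{name:18s} -> {np.round(post_mean, 3)}")
```

Sparse counts, as in the zero cells above, are exactly where the choice among these priors visibly shifts the posterior means.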

12.
    
This paper considers how the concepts of likelihood and identification became part of Bayesian theory, which makes for a nice case study in the development of concepts in statistical theory. Likelihood slipped in easily, but there was a protracted debate about how identification should be treated. Initially there was no agreement on whether identification involved the prior, the likelihood or the posterior.

13.
The problem of forecasting a time series with only a small amount of data is addressed within a Bayesian framework. The quantity to be predicted is the accumulated value of a positive and continuous variable for which partially accumulated data are available. These conditions arise naturally in many situations. A simple model is proposed to describe the relationship between the partial and total values of the variable to be forecast, assuming stable seasonality, which is specified in stochastic terms. Analytical results are obtained for both the point forecast and the entire posterior predictive distribution. The proposed technique does not involve approximations. It allows the use of non-informative priors, so that implementation may be automatic. The procedure works well when standard methods cannot be applied due to the reduced number of observations, and it improves on previous results published by the authors. Some real examples are included.

14.
Croston’s method is generally viewed as being superior to exponential smoothing when demand is intermittent, but it has the drawbacks of bias and an inability to deal with obsolescence, where the demand for an item ceases altogether. Several variants have been reported, some of which are unbiased on certain types of demand, but only one recent variant addresses the problem of obsolescence. We describe a new hybrid of Croston’s method and Bayesian inference called Hyperbolic-Exponential Smoothing, which is unbiased on non-intermittent and stochastic intermittent demand, decays hyperbolically when obsolescence occurs, and performs well in experiments.
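For context, here is a minimal sketch of classical Croston's method, the baseline the paper builds on: nonzero demand sizes and inter-demand intervals are smoothed separately, and the per-period forecast is their ratio. This is not the Hyperbolic-Exponential Smoothing variant proposed in the paper; the smoothing constant and demand series are illustrative.

```python
import numpy as np

def croston(demand, alpha=0.1):
    """Classical Croston forecasts: per-period demand rate = smoothed size / smoothed interval."""
    z = None                                   # smoothed nonzero demand size
    p = None                                   # smoothed inter-demand interval
    q = 1                                      # periods since the last nonzero demand
    forecasts = np.full(len(demand), np.nan)
    for t, d in enumerate(demand):
        if d > 0:
            if z is None:                      # initialise on the first nonzero demand
                z, p = d, q
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
        if z is not None:
            forecasts[t] = z / p               # forecast made at the end of period t
    return forecasts

demand = np.array([0, 0, 3, 0, 0, 0, 5, 0, 2, 0, 0, 4])   # hypothetical intermittent demand
print(np.round(croston(demand), 3))
```

Note that these forecasts never decay towards zero after demand stops, which is exactly the obsolescence problem the paper's hyperbolic decay is designed to address.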

15.
The use of stochastic models and performance measures for the analysis of real-life queuing scenarios is based on the fundamental premise that parameter values are known. This is rarely the case: more often than not, parameters are unknown and must be estimated. This paper presents techniques for doing so from a Bayesian perspective. The queue we deal with is the M/M/1 queuing model. Several closed-form expressions for posterior inference and prediction are presented, which can be readily implemented using standard spreadsheet tools. Previous work in this direction resulted in the non-existence of posterior moments; a way out is suggested. Interval estimates and tests of hypotheses on performance measures are also presented.
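A minimal sketch of conjugate Bayesian inference for an M/M/1 queue under independent Gamma priors on the arrival rate λ and the service rate μ: exponential inter-arrival and service times yield Gamma posteriors, and performance measures such as the traffic intensity ρ = λ/μ can then be examined by posterior simulation. The data and hyperparameters are illustrative, and this is not necessarily the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed data: inter-arrival times and service times.
interarrivals = rng.exponential(1 / 0.8, size=50)      # true lambda = 0.8 (for simulation only)
services = rng.exponential(1 / 1.2, size=50)           # true mu = 1.2

# Independent Gamma(a, b) priors; exponential likelihood gives conjugate Gamma posteriors.
a_lam, b_lam, a_mu, b_mu = 1.0, 1.0, 1.0, 1.0
post_lam = (a_lam + len(interarrivals), b_lam + interarrivals.sum())   # (shape, rate)
post_mu = (a_mu + len(services), b_mu + services.sum())

# Posterior simulation of the traffic intensity rho = lambda / mu.
lam = rng.gamma(post_lam[0], 1 / post_lam[1], size=100_000)
mu = rng.gamma(post_mu[0], 1 / post_mu[1], size=100_000)
rho = lam / mu

print("posterior mean of rho:", round(rho.mean(), 3))
print("P(queue is stable, rho < 1):", round((rho < 1).mean(), 3))
```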

16.
Hyperparameter estimation in dynamic linear models leads to inference that is not available analytically. Recently, the most common approach has been through MCMC approximations. A number of sampling schemes that have been proposed in the literature are compared; they differ basically in their blocking structure. In this paper, a comparison between the most common schemes is performed in terms of different efficiency criteria, including the efficiency ratio and processing time. A sample of time series was simulated to reflect different relevant features, such as series length and system volatility.

17.
Andrieu et al. (2010) prove that Markov chain Monte Carlo samplers still converge to the correct posterior distribution of the model parameters when the likelihood estimated by the particle filter (with a finite number of particles) is used in place of the exact likelihood. A critical issue for performance is the choice of the number of particles. We add the following contributions. First, we provide analytically derived, practical guidelines on the optimal number of particles to use. Second, we show that a fully adapted auxiliary particle filter is unbiased and can drastically decrease computing time compared to a standard particle filter. Third, we introduce a new estimator of the likelihood based on the output of the auxiliary particle filter and use the framework of Del Moral (2004) to provide a direct proof of the unbiasedness of the estimator. Fourth, we show that the results in the article apply more generally to Markov chain Monte Carlo sampling schemes in which the likelihood is estimated in an unbiased manner.
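To illustrate the estimated likelihood that pseudo-marginal MCMC plugs in for the exact one, here is a minimal sketch of a standard bootstrap particle filter for an AR(1)-plus-noise model. It is not the fully adapted auxiliary particle filter analysed in the article, and the model, parameters and particle count are illustrative assumptions.

```python
import numpy as np

def pf_loglik(y, phi, sig_x, sig_y, n_particles=500, seed=0):
    """Bootstrap particle filter log-likelihood estimate for an AR(1)-plus-noise model."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sig_x / np.sqrt(1 - phi**2), size=n_particles)   # stationary initial draw
    loglik = 0.0
    for yt in y:
        x = phi * x + sig_x * rng.normal(size=n_particles)               # propagate particles
        logw = -0.5 * ((yt - x) / sig_y) ** 2 - np.log(sig_y * np.sqrt(2 * np.pi))
        w = np.exp(logw - logw.max())
        # Log of the average unnormalized weight; the product of these averages over time
        # is an unbiased estimate of the likelihood (on the natural scale).
        loglik += logw.max() + np.log(w.mean())
        x = rng.choice(x, size=n_particles, replace=True, p=w / w.sum()) # multinomial resampling
    return loglik

# Simulate illustrative data and evaluate the estimator at the true parameters.
rng = np.random.default_rng(1)
phi, sig_x, sig_y, T = 0.9, 0.5, 1.0, 200
x, y = 0.0, np.empty(T)
for t in range(T):
    x = phi * x + sig_x * rng.normal()
    y[t] = x + sig_y * rng.normal()

print(round(pf_loglik(y, phi, sig_x, sig_y), 2))
```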

18.
In this paper we compare classical econometrics, calibration and Bayesian inference in the context of the empirical analysis of factor demands. Our application is based on a popular flexible functional form for the firm's cost function, namely Diewert's Generalized Leontief function, and uses the well-known Berndt and Wood 1947–1971 KLEM data on the US manufacturing sector. We illustrate how the Gibbs sampling methodology can easily be used to calibrate parameter values and elasticities on the basis of previous knowledge from alternative studies on the same data but with different functional forms. We rely on a system of mixed priors: non-informative diffuse priors for some key parameters and informative tight priors for others. Within the Gibbs sampler, we employ rejection sampling to incorporate parameter restrictions that are suggested by economic theory but generally rejected by economic data. Our results show that the values of the parameters assigned non-informative priors are almost equal to the standard SUR estimates, whereas differences emerge for the parameters assigned informative priors. Moreover, discrepancies can be observed in some crucial parameter estimates obtained with and without rejection sampling.
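To show the mechanics of rejection sampling inside a Gibbs sweep, here is a toy sketch on a correlated bivariate normal "posterior" with a sign restriction standing in for a theory-imposed constraint. It is not the paper's cost-function model; the target distribution, restriction and correlation are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Gibbs sampling for (theta1, theta2) ~ bivariate standard normal with correlation rho,
# subject to the restriction theta2 >= 0, enforced by rejection within the sweep.
rho, n_iter = 0.8, 5000
theta1, theta2 = 0.0, 1.0
draws = np.empty((n_iter, 2))
for i in range(n_iter):
    # Full conditional of theta1 | theta2 (unrestricted).
    theta1 = rng.normal(rho * theta2, np.sqrt(1 - rho**2))
    # Full conditional of theta2 | theta1, redrawn until the restriction holds.
    while True:
        theta2 = rng.normal(rho * theta1, np.sqrt(1 - rho**2))
        if theta2 >= 0:
            break
    draws[i] = theta1, theta2

print("share of theta1 draws below zero:", round((draws[:, 0] < 0).mean(), 3))
```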

19.
Despite the state of flux in media today, television remains the dominant player globally for advertising spending. Since television advertising time is purchased on the basis of projected future ratings, and ad costs have skyrocketed, there is increasing pressure to forecast television ratings accurately. The forecasting methods that have been used in the past are generally not very reliable, many have not been validated, and, even more distressingly, none have been tested in today’s multichannel environment. In this study we compare eight different forecasting models, ranging from a naïve empirical method to a state-of-the-art Bayesian model-averaging method. Our data come from a recent time period, 2004-2008, in a market with over 70 channels, making them more typical of today’s viewing environment. The simple models that are commonly used in industry do not forecast as well as the econometric models. Furthermore, time series methods are not applicable, as many programs are broadcast only once. However, we find that a relatively straightforward random effects regression model often performs as well as more sophisticated Bayesian models in out-of-sample forecasting. Finally, we demonstrate that making improvements in ratings forecasts could save the television industry between $250 million and $586 million per year.
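A minimal sketch of a random-intercept regression of the general kind the abstract mentions, fitted with statsmodels MixedLM on synthetic episode-level data. The variable names (prime_time, lead_in, channel) and effect sizes are hypothetical placeholders, not the study's actual covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical panel of episode-level ratings with channel-specific random intercepts
# plus fixed effects for time slot and a lead-in rating.
n_channels, n_obs = 30, 1200
channel = rng.integers(0, n_channels, size=n_obs)
channel_effect = rng.normal(0, 0.5, size=n_channels)
prime_time = rng.integers(0, 2, size=n_obs)
lead_in = rng.normal(2.0, 1.0, size=n_obs)
rating = (1.0 + 0.8 * prime_time + 0.4 * lead_in
          + channel_effect[channel] + rng.normal(0, 0.3, size=n_obs))

df = pd.DataFrame({"rating": rating, "prime_time": prime_time,
                   "lead_in": lead_in, "channel": channel})

# Random-intercept regression: observations grouped by channel.
model = smf.mixedlm("rating ~ prime_time + lead_in", df, groups=df["channel"])
result = model.fit()
print(result.params[["Intercept", "prime_time", "lead_in"]].round(3))

# Fixed-effects forecast for a hypothetical new episode.
new_episode = pd.DataFrame({"prime_time": [1], "lead_in": [2.5]})
print(result.predict(new_episode).round(3))
```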

20.
Dynamic stochastic general equilibrium (DSGE) models have recently become standard tools for policy analysis. Nevertheless, their forecasting properties have still barely been explored. In this article, we address this problem by examining the quality of forecasts of the key U.S. economic variables: the three-month Treasury bill yield, the GDP growth rate and GDP price index inflation, from a small-size DSGE model, trivariate vector autoregression (VAR) models and the Philadelphia Fed Survey of Professional Forecasters (SPF). The ex post forecast errors are evaluated on the basis of data from the period 1994–2006. We apply the Philadelphia Fed “Real-Time Data Set for Macroeconomists” to ensure that the data used in estimating the DSGE and VAR models are comparable to the information available to the SPF. Overall, the results are mixed. When comparing root mean squared errors over some forecast horizons, the DSGE model appears to outperform the other methods in forecasting the GDP growth rate; however, this advantage turns out to be statistically insignificant. Most of the SPF's forecasts of GDP price index inflation and the short-term interest rate are better than those from the DSGE and VAR models.
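A minimal sketch of the kind of accuracy comparison described: root mean squared errors for two competing methods plus a simple Diebold-Mariano-type test of equal squared-error loss (one-step horizon, no HAC correction). The forecast-error series are simulated placeholders, not the paper's DSGE, VAR or SPF errors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Placeholder forecast errors for two competing methods over the same evaluation periods.
e_dsge = rng.normal(0.0, 1.0, size=52)
e_var = rng.normal(0.1, 1.1, size=52)

def rmse(e):
    return np.sqrt(np.mean(e**2))

print("RMSE DSGE:", round(rmse(e_dsge), 3), " RMSE VAR:", round(rmse(e_var), 3))

# Diebold-Mariano-type test on the squared-error loss differential (h = 1, no HAC correction).
d = e_dsge**2 - e_var**2
dm_stat = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
p_value = 2 * (1 - stats.norm.cdf(abs(dm_stat)))
print("DM statistic:", round(dm_stat, 3), " p-value:", round(p_value, 3))
```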
