Similar Documents
20 similar documents found (search time: 99 ms)
1.
This note develops the Bayesian estimation of the parameters of Solow's distributed lag model with implicit autocorrelation of disturbances in its autoregressive form. The estimation technique extends Chetty's method for independent disturbances. The results of some Monte Carlo experiments are given comparing point estimates from the posterior distributions with the maximum likelihood estimates. The characteristics of the Bayesian and maximum likelihood estimates are very similar.

2.
Much work in econometrics and statistics has been concerned with comparing Bayesian and non-Bayesian estimation results, while much less has involved comparisons of Bayesian and non-Bayesian analyses of hypotheses. Some issues arising in this latter area that are mentioned and discussed in the paper are: (1) Is it meaningful to associate probabilities with hypotheses? (2) What concept of probability is to be employed in analyzing hypotheses? (3) Is a separate theory of testing needed? (4) Must a theory of testing be capable of treating both sharp and non-sharp hypotheses? (5) How is prior information incorporated in testing? (6) Does the use of power functions in practice necessitate the use of prior information? (7) How are significance levels determined when sample sizes are large, and what are the interpretations of P-values and tail areas? (8) How are conflicting results provided by asymptotically equivalent testing procedures to be reconciled? (9) What is the rationale for the ‘5% accept-reject syndrome’ that afflicts econometrics and applied statistics? (10) Does it make sense to test a null hypothesis with no alternative hypothesis present? and (11) How are the results of analyses of hypotheses to be combined with estimation and prediction procedures? Brief discussions of these issues with references to the literature are provided. Since there is much controversy concerning how hypotheses are actually analyzed in applied work, the results of a small survey relating to 22 articles employing empirical data published in leading economic and econometric journals in 1978 are presented. The major results of this survey indicate that there is widespread use of the 1% and 5% levels of significance in non-Bayesian testing, with no systematic relation between choice of significance level and sample size. Also, power considerations are not generally discussed in empirical studies. In fact, there was a discussion of power in only one of the articles surveyed.
Further, there was very little formal or informal use of prior information employed in testing hypotheses, and practically no attention was given to the effects of tests or pre-tests on the properties of subsequent tests or estimation results. These results indicate that there is much room for improvement in applied analyses of hypotheses. Given the findings of the survey of applied studies, it is suggested that Bayesian procedures for analyzing hypotheses may be helpful in improving applied analyses. In this connection, the paper presents a review of some Bayesian procedures and results for analyzing sharp and non-sharp hypotheses with explicit use of prior information. In general, Bayesian procedures have good sampling properties and enable investigators to compute posterior probabilities and posterior odds ratios associated with alternative hypotheses quite readily. The relationships of several posterior odds ratios to usual non-Bayesian testing procedures are clearly demonstrated. Also, a relation between the P-value or tail area and a posterior odds ratio is described in detail in the important case of hypotheses about a mean of a normal distribution. Other examples covered in the paper include posterior odds ratios for the hypotheses that (1) βi>0 and βi<0, where βi is a regression coefficient, (2) data are drawn from either of two alternative distributions, (3) θ=0, θ>0 and θ<0, where θ is the mean of a normal distribution, (4) β=0 and β≠0, where β is a vector of regression coefficients, and (5) β2=0 vs. β2≠0, where β′ = (β1′, β2′) is a vector of regression coefficients and β1's value is unrestricted. In several cases, tabulations of odds ratios are provided. Bayesian versions of the Chow test for equality of regression coefficients and of the Goldfeld-Quandt test for equality of disturbance variances are given.
Also, an application of Bayesian posterior odds ratios to a regression model selection problem utilizing the Hald data is reported. In summary, the results reported in the paper indicate that operational Bayesian procedures for analyzing many hypotheses encountered in model selection problems are available. These procedures yield posterior odds ratios and posterior probabilities for competing hypotheses. These posterior odds ratios represent the weight of the evidence supporting one model or hypothesis relative to another. Given a loss structure, one can, as is well known, choose among hypotheses so as to minimize expected loss. Also, with posterior probabilities available and an estimation or prediction loss function, it is possible to choose a point estimate or prediction that minimizes expected loss by averaging over alternative hypotheses or models. Thus it is seen that the Bayesian approach for analyzing competing models or hypotheses provides a unified framework that is extremely useful in solving a number of model selection problems.
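The P-value versus posterior-odds relation discussed in the abstract can be made concrete for the normal-mean case. The sketch below computes the Bayes factor for H0: μ=0 against H1: μ ~ N(0, τ²) with x̄ ~ N(μ, σ²/n); the prior variance ratio is an illustrative choice, not a value from the paper.

```python
import math

def bf01_normal_mean(z, n, tau2_over_sigma2=1.0):
    """Bayes factor for H0: mu = 0 vs H1: mu ~ N(0, tau^2),
    where xbar ~ N(mu, sigma^2 / n) and z = sqrt(n) * xbar / sigma."""
    r = n * tau2_over_sigma2   # ratio of prior variance to sampling variance
    return math.sqrt(1.0 + r) * math.exp(-0.5 * z * z * r / (1.0 + r))

# The same z-score of 1.96 ("significant at the 5% level") supports H0
# more and more strongly as n grows, illustrating why a fixed
# significance level with no regard for sample size is problematic.
for n in (10, 100, 10000):
    print(n, bf01_normal_mean(1.96, n))
```

The growing Bayes factor at a fixed z is the Jeffreys-Lindley effect that underlies the abstract's point about choosing significance levels as a function of sample size.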

3.
This paper considers a two-equation linear model when a subset of the parameters in one of the equations is subject to zero constraints. Inference procedures are presented both in the Bayesian and sampling theory framework. Specifically, posterior distributions and confidence distributions of the parameters, as well as various test procedures, are derived and illustrated with examples. The effects of the zero constraints on these procedures are discussed, and a comparison of the Bayesian with the sampling results is given.

4.
The present investigation is concerned with deriving Bayesian statistical inferences for the bivariate exponential (BVE) distribution of Marshall and Olkin (1967), applied as a failure model for a two-component parallel system. In this paper, joint posterior distributions for the BVE parameters and marginal posterior densities for the individual parameters are developed. The posterior distributions are derived for the case of informative prior knowledge. Bayesian estimators for the BVE parameters and the corresponding reliability are derived in closed form. Approximate Bayesian credibility intervals (‘confidence’ intervals) for the parameters are derived by utilizing a gamma approximation to the marginal posterior densities.

5.
6.
This paper presents a straightforward set of Bayesian techniques for analyzing models involving limited dependent variables; the techniques are demonstrated in an analysis of Kennan's (1985) data on contract strikes in US manufacturing. The data are analyzed by deriving posterior distributions—including probability distributions—of hazard functions for strike duration using numerical Monte Carlo methods. The distributions are employed to derive coverage intervals for hazard functions, to assess the relative plausibility of nonnested hypotheses concerning the shape of the functions, and to assess the impact of industrial production on duration.
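The coverage intervals described above are straightforward once posterior draws are in hand. A minimal sketch, assuming a Weibull duration model with hypothetical posterior draws standing in for the output of a Monte Carlo sampler fitted to strike data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for a Weibull duration model
# (alpha = shape, lam = scale); in practice these would come from a
# numerical Monte Carlo / MCMC sampler, not from this toy generator.
alpha = rng.normal(0.9, 0.05, size=5000)
lam = rng.normal(0.02, 0.002, size=5000)

def hazard(t, alpha, lam):
    # Weibull hazard: h(t) = lam * alpha * t**(alpha - 1)
    return lam * alpha * t ** (alpha - 1)

t_grid = np.arange(1, 61)          # strike length in days
draws = hazard(t_grid[None, :], alpha[:, None], lam[:, None])
lo, med, hi = np.percentile(draws, [5, 50, 95], axis=0)
# pointwise 90% coverage interval for the hazard at t = 30 days:
print(lo[29], med[29], hi[29])
```

Each row of `draws` is one posterior realization of the whole hazard function, so the percentile bands are pointwise coverage intervals of the kind the abstract reports.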

7.
This paper presents a Bayesian limited-information estimation method that can be used to estimate a single nonlinear equation that forms part of a system of simultaneous equations. The method can be looked upon as the Bayesian counterpart of Amemiya's nonlinear limited-information maximum-likelihood estimator as well as a generalization of Drèze's Bayesian limited-information estimator for linear simultaneous equations systems. The method is illustrated by applying it to the problem of estimating a CES production function that forms part of a complete model of firm behavior.

8.
A Bayesian procedure is proposed for the estimation of the weights of the alternatives in a multi-criteria decision model with data that stem from pair-wise comparison of alternatives. The prior information restricts the weights to the unit simplex. The posterior results are computed by Monte Carlo integration procedures based on importance sampling. The Bayesian procedure is applied to a case study concerning the choice of a professor of Operations Research (OR). Results are: (1) according to the Bayesian procedure a different candidate would be chosen as professor of OR than according to the maximum likelihood procedure; (2) given the prior and data information, there exists a substantial probability of taking the wrong decision; (3) there exists a ranking of the candidates with a posterior probability greater than one half.
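A minimal sketch of the Monte Carlo integration step, assuming a toy log-posterior in place of the pairwise-comparison likelihood: a Dirichlet proposal has the unit simplex as its support, so the prior restriction on the weights is built into the sampler, and self-normalised importance weights deliver posterior means. The target density and Dirichlet parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in log-posterior for the criterion weights w on the unit
# simplex, concentrated near a hypothetical weight vector.
target = np.array([0.5, 0.3, 0.2])
def log_post(w):
    return -200.0 * np.sum((w - target) ** 2, axis=-1)

# Importance sampling with a Dirichlet(2, 2, 2) proposal.
w = rng.dirichlet([2.0, 2.0, 2.0], size=20000)
log_q = np.sum(np.log(w), axis=1)   # Dirichlet(2,2,2) log-density, up to a constant
lw = log_post(w) - log_q
lw -= lw.max()                      # stabilise before exponentiating
iw = np.exp(lw)
iw /= iw.sum()                      # self-normalised importance weights
post_mean = iw @ w                  # posterior means of the weights
print(post_mean)
```

The normalising constants of the proposal and the posterior cancel in the self-normalised weights, which is what makes the procedure operational when the posterior is known only up to a constant.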

9.
This paper investigates, for a one-sector model in which the consumer-investor's preferences are randomly changing, the relationship between an economy's size and variability. It is observed that means and variances of the stationary probability distributions on capital stock and output move together for given parameter changes unless some specific form of government intervention is introduced.

10.
Some sampling properties of Zellner's (1978) MELO estimates of structural coefficients of linear simultaneous equation models are examined by a series of sampling experiments. The MELO estimates appear to have more pronounced biases in estimating structural coefficients than the 2SLS estimates. However, MELO is found to outperform 2SLS according to several criteria, including MSE and MAE in a wide range of situations generated by varying structural coefficients, the variance-covariance matrix of structural disturbances, and the sample size. The magnitude of absolute sampling errors, the estimation of the variance of structural disturbances, and the large-sample standard errors are also compared among OLS, 2SLS, and MELO.
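The sampling-experiment design described above (repeated draws, then bias, MSE and MAE per estimator) can be sketched as follows. This is a generic skeleton using OLS and 2SLS in a one-equation setup with one instrument; it does not implement the MELO estimator itself, and the coefficient and error-correlation values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

beta, n, reps = 1.0, 50, 2000
err_ols = np.empty(reps)
err_2sls = np.empty(reps)

for r in range(reps):
    z = rng.normal(size=n)              # instrument
    v = rng.normal(size=n)
    u = 0.8 * v + rng.normal(size=n)    # disturbance correlated with x
    x = z + v                           # endogenous regressor
    y = beta * x + u
    err_ols[r] = x @ y / (x @ x) - beta       # OLS estimation error
    err_2sls[r] = z @ y / (z @ x) - beta      # 2SLS (IV) estimation error

for name, err in (("OLS", err_ols), ("2SLS", err_2sls)):
    print(name, "bias:", err.mean(),
          "MSE:", (err ** 2).mean(),
          "MAE:", np.abs(err).mean())
```

Swapping MELO (or any other estimator) into the loop and varying the structural parameters across experiment cells reproduces the comparison design of the abstract.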

11.
12.
Bayesian techniques for samples from classical, generalized and multivariate Pareto distributions are described. We place emphasis on choosing proper prior distributions that do not lead to anomalous posterior densities.
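For the classical Pareto case with known scale, a proper Gamma prior on the shape is conjugate and yields a well-behaved posterior, which illustrates the point about proper priors. The prior parameters and sample below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

# Classical Pareto with known scale x_m: the Gamma(a, b) prior on the
# shape alpha is conjugate, with posterior
#   alpha | x ~ Gamma(a + n, b + sum(log(x_i / x_m))).
xm, alpha_true, n = 1.0, 2.5, 500
x = xm * (1 - rng.random(n)) ** (-1 / alpha_true)   # inverse-CDF sampling
a, b = 2.0, 1.0                                     # illustrative proper prior
a_post = a + n
b_post = b + np.sum(np.log(x / xm))
post_mean = a_post / b_post
print(post_mean)   # should be near alpha_true
```

Because the prior is proper, the posterior is a genuine Gamma density for any sample size, avoiding the anomalous posteriors that some improper priors produce.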

13.
The main goal of both Bayesian model selection and classical hypotheses testing is to make inferences with respect to the state of affairs in a population of interest. The main differences between both approaches are the explicit use of prior information by Bayesians, and the explicit use of null distributions by the classicists. Formalization of prior information in prior distributions is often difficult. In this paper two practical approaches (encompassing priors and training data) to specify prior distributions will be presented. The computation of null distributions is relatively easy. However, as will be illustrated, a straightforward interpretation of the resulting p-values is not always easy. Bayesian model selection can be used to compute posterior probabilities for each of a number of competing models. This provides an alternative for the currently prevalent testing of hypotheses using p-values. Both approaches will be compared and illustrated using case studies. Each case study fits in the framework of the normal linear model, that is, analysis of variance and multiple regression.
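Posterior model probabilities of the kind described above can be sketched with a common large-sample stand-in for the marginal likelihood, the BIC approximation, rather than the paper's encompassing-prior or training-data constructions. With equal prior odds, normalising exp(-BIC/2) across the candidate regressions gives the probabilities; the data-generating values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.8 * x1 + rng.normal(size=n)   # x2 is irrelevant by construction

def bic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

ones = np.ones(n)
models = {
    "intercept": np.column_stack([ones]),
    "x1": np.column_stack([ones, x1]),
    "x1+x2": np.column_stack([ones, x1, x2]),
}
b = np.array([bic(X, y) for X in models.values()])
w = np.exp(-(b - b.min()) / 2)             # subtract min for stability
probs = dict(zip(models, w / w.sum()))
print(probs)
```

Unlike a p-value, each number here is directly interpretable as the (approximate) posterior probability of that model given the data and equal prior odds.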

14.
This paper is concerned with the Bayesian analysis of stochastic volatility (SV) models with leverage. Specifically, the paper shows how the often used Kim et al. [1998. Stochastic volatility: likelihood inference and comparison with ARCH models. Review of Economic Studies 65, 361–393] method that was developed for SV models without leverage can be extended to models with leverage. The approach relies on the novel idea of approximating the joint distribution of the outcome and volatility innovations by a suitably constructed ten-component mixture of bivariate normal distributions. The resulting posterior distribution is summarized by MCMC methods, and the small approximation error in working with the mixture approximation is corrected by a reweighting procedure. The overall procedure is fast and highly efficient. We illustrate the ideas on daily returns of the Tokyo Stock Price Index. Finally, extensions of the method are described for superposition models (where the log-volatility is made up of a linear combination of heterogeneous and independent autoregressions) and heavy-tailed error distributions (Student-t and log-normal).
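The reason a normal-mixture approximation is needed at all can be seen from the linearised observation equation. A small check of the non-Gaussian term, using known moments of the log chi-squared distribution (this is the setup behind the Kim et al. approach, not the ten-component construction itself):

```python
import numpy as np

rng = np.random.default_rng(3)

# Linearising the SV observation equation: with y_t = exp(h_t / 2) * eps_t,
#   log y_t^2 = h_t + log eps_t^2.
# The term log eps_t^2 (a log chi-squared with 1 df) is non-Gaussian;
# it is this distribution that the mixture-of-normals approximates.
eps = rng.normal(size=1_000_000)
z = np.log(eps ** 2)
# Known theoretical moments: mean -(gamma + log 2) = -1.2704,
# variance pi^2 / 2 = 4.9348.
print(z.mean(), z.var())
```

The heavy left skew of this distribution is why a single normal is a poor approximation and a carefully constructed mixture (with a reweighting correction, as in the paper) is used instead.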

15.
Standard model‐based small area estimates perform poorly in presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method to handle outliers in unit‐level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit‐level error to model outliers and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Application of our method to a data set which is suspected to contain an outlier confirms this suspicion, correctly identifies the suspected outlier and produces robust predictors and posterior standard deviations of the small area means. Evaluation of several procedures including the M‐quantile method of Chambers & Tzavidis (2006) via simulations shows that our proposed method is as good as other procedures in terms of bias, variability and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, while our method and the Sinha–Rao method perform similarly, they improve over the other methods. This superior performance of our procedure shows its dual (Bayes and frequentist) dominance, which should make it attractive to all practitioners of small area estimation, Bayesian and frequentist alike.
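The unit-level error model above is a two-component normal scale mixture. A quick simulation shows why such a mixture captures outliers: a small contaminating fraction with inflated variance thickens the tails far beyond the normal. The mixing fraction and 5x scale below are illustrative choices, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-component normal scale mixture for the unit-level error:
# most observations from N(0, 1), a small fraction from N(0, 25).
p, n = 0.05, 100_000
is_out = rng.random(n) < p
e = np.where(is_out, rng.normal(0.0, 5.0, n), rng.normal(0.0, 1.0, n))
print("variance:", e.var())                    # about (1-p)*1 + p*25 = 2.2
print("P(|e| > 3):", np.mean(np.abs(e) > 3))   # far above the N(0,1) value 0.0027
```

In the Bayesian fit, the posterior probability that an observation belongs to the inflated-variance component is what identifies it as an outlier, which is how the suspected outlier in the application is flagged.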

16.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility models (SV), allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures—whose level can be controlled in finite samples—as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against a SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests.
Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor’s Composite Price Index.
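The local Monte Carlo (parametric bootstrap) procedure mentioned above has a simple skeleton: simulate the test statistic under the null and use the rank-based p-value, which controls the level exactly in finite samples for a continuous statistic. The statistic and null model below are crude illustrative stand-ins for the homoskedasticity-vs-SV problem, not the paper's statistics.

```python
import numpy as np

rng = np.random.default_rng(5)

def stat(x):
    # crude dispersion-of-squared-observations statistic
    return np.var(x ** 2)

def mc_pvalue(x, n_sim=99):
    # Local Monte Carlo p-value: simulate the statistic under the null
    # (here i.i.d. N(0,1) returns) and use
    #   (1 + #{S_sim >= S_obs}) / (n_sim + 1),
    # which gives an exactly level-controlled test for a continuous statistic.
    s_obs = stat(x)
    sims = np.array([stat(rng.normal(size=x.size)) for _ in range(n_sim)])
    return (1 + np.sum(sims >= s_obs)) / (n_sim + 1)

x_null = rng.normal(size=200)
x_alt = np.concatenate([rng.normal(0, 1, 100), rng.normal(0, 3, 100)])
print("null p-value:", mc_pvalue(x_null))
print("alternative p-value:", mc_pvalue(x_alt))
```

The MMC variant of the paper maximizes this p-value over the nuisance parameters consistent with the null, which is what keeps the level controlled even under identification failure.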

17.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/“interval estimator”) to estimate a parameter of interest. A very simple question is: Can we also use a distribution function (“distribution estimator”) to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a “distribution estimator”. The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p‐value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. 
Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re‐open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
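The "distribution estimator" idea has a textbook instance: for a normal mean with known sigma, the confidence distribution is H_n(theta) = Phi(sqrt(n)(theta - xbar)/sigma), whose median is the usual point estimator and whose quantiles reproduce the usual confidence intervals. A minimal sketch (the data here are simulated for illustration):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(6)
nd = NormalDist()

# Confidence distribution for a normal mean with known sigma:
#   H_n(theta) = Phi( sqrt(n) * (theta - xbar) / sigma )
sigma, n = 2.0, 25
x = rng.normal(1.0, sigma, size=n)
xbar = float(x.mean())

def cd(theta):
    return nd.cdf(np.sqrt(n) * (theta - xbar) / sigma)

def cd_quantile(p):
    return xbar + sigma / np.sqrt(n) * nd.inv_cdf(p)

print("point estimate (CD median):", cd_quantile(0.5))
print("95% interval:", (cd_quantile(0.025), cd_quantile(0.975)))
```

Everything here is frequentist: H_n is a function of the data with a uniform distribution at the true parameter, yet it can be read, used and plotted like a posterior.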

18.
The problem of incentives for correct revelation is studied as a game with incomplete information where players have individual beliefs concerning others' types. General conditions on the beliefs are given which are shown to be sufficient for the existence of a Pareto-efficient mechanism for which truth-telling is a Bayesian equilibrium.
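A toy instance of truth-telling as an equilibrium, not the paper's general construction: in a sealed-bid second-price auction where each bidder believes the opponent's bid is uniform on a grid, bidding one's true valuation maximises expected utility. All numbers below are illustrative.

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 101)   # possible bids / valuations
opp = grid                          # believed opponent bids (uniform belief)

def exp_util(value, bid):
    # win when our bid is higher; pay the opponent's bid (ties ignored)
    win = bid > opp
    return float(np.mean(win * (value - opp)))

# For each true valuation, the expected-utility-maximising bid on the
# grid coincides with the valuation (up to one grid step).
best_bid = {v: max(grid, key=lambda b: exp_util(v, b)) for v in (0.2, 0.5, 0.9)}
print(best_bid)
```

The paper's contribution is to identify belief conditions under which such truthful revelation can be sustained as a Bayesian equilibrium of a Pareto-efficient mechanism in general, not just in this special auction.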

19.
Due to weaknesses in traditional tests, a Bayesian approach is developed to investigate whether unit roots exist in macroeconomic time-series. Bayesian posterior odds comparing unit root models to stationary and trend-stationary alternatives are calculated using informative priors. Two classes of reference priors which are informative but require minimal subjective prior input are used. In this sense the Bayesian unit root tests developed here are objective. Bayesian procedures are carried out on the Nelson–Plosser and Shiller data sets as well as on generated data. The conclusion is that the failure of classical procedures to reject the unit root hypothesis is not necessarily proof that a unit root is present with high probability.

20.
The traditional rationale for differencing time series data is to attain stationarity. For a nearly non-stationary first-order autoregressive process—AR (1) with positive slope parameter near unity—we were led to a complementary rationale. If one suspects near non-stationarity of the AR (1) process, if the sample size is ‘small’ or ‘moderate’, and if good one-step-ahead prediction performance is the goal, then it is wise to difference the data and treat the differences as observations on a stationary AR (1) process. Estimation by Ordinary Least Squares then appears to be at least as satisfactory as nonlinear least squares. Use of differencing for an already stationary process can be motivated by Bayesian concepts: differencing can be viewed as an easy way to incorporate non-diffuse prior judgement—that the process is nearly non-stationary—into one's analysis. Random walks and near random walks are often encountered in economics. Unless one's sample size is large, the same statistical analyses apply to either.
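The one-step-ahead comparison described above is easy to replicate in a small Monte Carlo. The sketch below uses illustrative values (rho = 0.98, n = 30) and compares OLS on levels with OLS on first differences; under near non-stationarity and a small sample, the differenced forecasts should be at least competitive, as the abstract argues.

```python
import numpy as np

rng = np.random.default_rng(8)

rho, n, reps = 0.98, 30, 3000
err_levels = np.empty(reps)
err_diff = np.empty(reps)

for r in range(reps):
    e = rng.normal(size=n + 1)
    y = np.empty(n + 1)
    y[0] = e[0] / np.sqrt(1 - rho ** 2)   # start from the stationary distribution
    for t in range(1, n + 1):
        y[t] = rho * y[t - 1] + e[t]
    train, future = y[:n], y[n]

    # (a) OLS AR(1) fitted to levels
    x, yl = train[:-1], train[1:]
    rho_hat = x @ yl / (x @ x)
    err_levels[r] = future - rho_hat * train[-1]

    # (b) OLS AR(1) fitted to first differences
    d = np.diff(train)
    phi_hat = d[:-1] @ d[1:] / (d[:-1] @ d[:-1])
    err_diff[r] = future - (train[-1] + phi_hat * d[-1])

print("MSPE, levels:", (err_levels ** 2).mean())
print("MSPE, differences:", (err_diff ** 2).mean())
```

Differencing here acts as the non-diffuse prior judgement the abstract describes: it commits the forecast to near non-stationarity instead of relying on a heavily biased small-sample estimate of rho.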
