Similar Literature
20 similar documents found.
1.
There has been considerable and controversial research over the past two decades into how successfully random effects misspecification in mixed models (i.e. assuming normality for the random effects when the true distribution is non-normal) can be diagnosed and what its impacts are on estimation and inference. However, much of this research has focused on fixed effects inference in generalised linear mixed models. In this article, motivated by the increasing number of applications of mixed models where interest is on the variance components, we study the effects of random effects misspecification on random effects inference in linear mixed models, for which there is considerably less literature. Our findings are surprising and contrary to general belief: for point estimation, maximum likelihood estimation of the variance components under misspecification is consistent, although in finite samples both the bias and the mean squared error can be substantial. For inference, we show through theory and simulation that under misspecification, standard likelihood ratio tests of truly non-zero variance components can suffer from severely inflated type I errors, and confidence intervals for the variance components can exhibit considerable undercoverage. Furthermore, neither of these problems vanishes asymptotically as the number of clusters or the cluster size increases. These results have major implications for random effects inference, especially if the true random effects distribution is heavier tailed than the normal. Fortunately, simple graphical and goodness-of-fit measures of the random effects predictions appear to have reasonable power at detecting misspecification. We apply linear mixed models to a survey of more than 4,000 high school students within 100 schools and analyse how mathematics achievement scores vary with student attributes and across different schools. The application demonstrates the sensitivity of mixed model inference to the true but unknown random effects distribution.
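A minimal simulation sketch of the setting studied above, assuming a heavy-tailed truth: the random intercepts are drawn from a scaled t-distribution with 3 degrees of freedom (variance 1), while the fitted model assumes normality. The sample sizes, coefficients, and use of statsmodels' MixedLM are illustrative choices, not the authors' design.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n_groups, n_per = 100, 40
groups = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)

# Misspecification: true random intercepts are scaled t_3 (heavy-tailed),
# rescaled to variance 1; the fitted Gaussian LMM assumes normality.
b = stats.t.rvs(df=3, size=n_groups, random_state=rng) / np.sqrt(3.0)
y = 1.0 + 0.5 * x + b[groups] + rng.normal(size=n_groups * n_per)

fit = sm.MixedLM(y, sm.add_constant(x), groups=groups).fit(reml=True)
print(fit.cov_re)  # estimated random-intercept variance (true value 1.0)
print(fit.scale)   # estimated residual variance (true value 1.0)
```

Repeating this over many replications and recording likelihood ratio tests of the variance component would reproduce the kind of type I error study described in the abstract.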

2.
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
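As a concrete sketch of Gibbs sampling with data augmentation, consider the probit model in the style of Albert and Chib: the latent utilities are drawn from truncated normals given the coefficients, and the coefficients are then drawn from their Gaussian conditional given the latent utilities, so the integral over the latent variables never has to be computed. A flat prior on beta is assumed here for brevity; this is an illustrative example, not code from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, rng=None):
    """Gibbs sampler for probit regression via data augmentation (flat prior on beta)."""
    rng = rng or np.random.default_rng()
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        mu = X @ beta
        # Augmentation step: z_i | beta, y_i is N(mu_i, 1) truncated to the
        # positive half-line if y_i = 1 and to the negative half-line if y_i = 0.
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # Given z, the conditional posterior of beta is Gaussian.
        beta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
        draws[it] = beta
    return draws
```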

3.
The familiar logit and probit models provide convenient settings for many binary response applications, but a larger class of link functions may occasionally be desirable. Two parametric families of link functions are investigated: the Gosset link based on the Student t latent variable model, with the degrees of freedom parameter controlling the tail behavior, and the Pregibon link based on the (generalized) Tukey λ family, with two shape parameters controlling skewness and tail behavior. Both Bayesian and maximum likelihood methods for estimation and inference are explored, compared and contrasted. In applications, like the propensity score matching problem discussed below, where it is critical to have accurate estimates of the conditional probabilities, we find that misspecification of the link function can create serious bias. Bayesian point estimation via MCMC performs quite competitively with MLE methods; however, nominal coverage of Bayes credible regions is somewhat more problematic.
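A sketch of the maximum likelihood side for the Gosset link, with the degrees of freedom estimated jointly with the coefficients (the log parameterisation keeping ν positive is our choice for illustration):

```python
import numpy as np
from scipy import optimize, stats

def gosset_negloglik(params, y, X):
    """Negative log-likelihood for a binary response with a Student-t (Gosset) link."""
    beta, log_nu = params[:-1], params[-1]
    p = stats.t.cdf(X @ beta, df=np.exp(log_nu))
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard the log against exact 0 and 1
    return -np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

# Usage sketch: start near a probit-like region (nu around 7) and minimise.
# start = np.r_[np.zeros(X.shape[1]), np.log(7.0)]
# fit = optimize.minimize(gosset_negloglik, start, args=(y, X), method="BFGS")
```

Small ν thickens the tails of the link (approaching the cauchit case at ν = 1), while large ν recovers something close to the probit.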

4.
The paper discusses the asymptotic validity of posterior inference of pseudo-Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under-recognised that the stationary distribution of the resulting posterior chain does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
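For reference, the working likelihood in question is the asymmetric Laplace density. With the check function $\rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\})$, the density at quantile level $\tau$ is

```latex
f(y \mid \mu, \sigma) \;=\; \frac{\tau(1-\tau)}{\sigma}
\exp\!\left\{-\rho_\tau\!\left(\frac{y-\mu}{\sigma}\right)\right\},
```

so maximising it over $\beta$ with $\mu_i = x_i'\beta$ minimises $\sum_i \rho_\tau(y_i - x_i'\beta)$, which is exactly the usual quantile regression criterion.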

5.
Predicting the evolution of mortality rates plays a central role for life insurance and pension funds. Various stochastic frameworks have been developed to model mortality patterns by taking into account the main stylized facts driving these patterns. However, relying on the prediction of one specific model can be too restrictive and can lead to some well-documented drawbacks, including model misspecification, parameter uncertainty, and overfitting. To address these issues, we first consider mortality modeling in a Bayesian negative-binomial framework to account for overdispersion and the uncertainty about the parameter estimates in a natural and coherent way. Model averaging techniques are then considered as a response to model misspecification. In this paper, we propose two methods based on leave-future-out validation and compare them to standard Bayesian model averaging (BMA) based on marginal likelihood. An intensive numerical study is carried out over a large range of simulation setups to compare the performances of the proposed methodologies. An illustration is then proposed on real-life mortality datasets, along with a sensitivity analysis to a Covid-type scenario. Overall, we found that both methods based on an out-of-sample criterion outperform the standard BMA approach in terms of prediction performance and robustness.
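For orientation, standard BMA weights each model's predictive by its posterior probability, which is driven by the marginal likelihood:

```latex
p(\tilde{y} \mid y) \;=\; \sum_{m=1}^{M} p(\tilde{y} \mid y, M_m)\, p(M_m \mid y),
\qquad
p(M_m \mid y) \;\propto\; p(y \mid M_m)\, p(M_m).
```

The two leave-future-out methods proposed in the paper replace the in-sample marginal likelihood $p(y \mid M_m)$ with an out-of-sample predictive criterion when forming the weights.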

6.
To study the influence of a bandwidth parameter in inference with conditional moments, we propose a new class of estimators and establish an asymptotic representation of our estimator as a process indexed by a bandwidth, which can vary within a wide range including bandwidths independent of the sample size. We study its behavior under misspecification. We also propose an efficient version of our estimator. We develop a procedure based on a distance metric statistic for testing restrictions on parameters as well as a bootstrap technique to account for the bandwidth’s influence. Our new methods are simple to implement, apply to non-smooth problems, and perform well in our simulations.

7.
Growing-dimensional data with the likelihood function unavailable are often encountered in various fields. This paper presents a penalized exponentially tilted (PET) likelihood for variable selection and parameter estimation in growing-dimensional unconditional moment models in the presence of correlation among variables and model misspecification. Under some regularity conditions, we establish the consistency and oracle properties of the PET estimators of parameters, and show that the constrained PET likelihood ratio statistic for testing a contrast hypothesis asymptotically follows the chi-squared distribution. Theoretical results reveal that the PET likelihood approach is robust to model misspecification. We also study high-order asymptotic properties of the proposed PET estimators. Simulation studies are conducted to investigate the finite-sample performance of the proposed methodologies, and an example from the Boston Housing Study is given as an illustration.

8.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2 × 2 case to the R × C case. As in the 2 × 2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.

9.
The paper takes up Bayesian inference in time series models when essentially nothing is known about the distribution of the dependent variable given past realizations or other covariates. It proposes the use of kernel quasi likelihoods upon which formal inference can be based. Gibbs sampling with data augmentation is used to perform the computations related to numerical Bayesian analysis of the model. The method is illustrated with artificial and real data sets.

10.
We develop an omnibus specification test for multivariate continuous-time models using the conditional characteristic function, which often has a convenient closed form or can be accurately approximated for many multivariate continuous-time models in finance and economics. The proposed test fully exploits the information in the joint conditional distribution of the underlying economic processes and hence is expected to have good power in a multivariate context. A class of easy-to-interpret diagnostic procedures is supplemented to gauge possible sources of model misspecification. Our tests are also applicable to discrete-time distribution models. Simulation studies show that the tests provide reliable inference in finite samples.
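For reference, the conditional characteristic function of the process $X_t$ under the model with parameter $\theta$ is

```latex
\varphi(u \mid \mathcal{F}_t, \theta) \;=\; \mathbb{E}_\theta\!\left[\, e^{\,i u' X_{t+\Delta}} \,\middle|\, \mathcal{F}_t \,\right],
```

which fully characterises the joint conditional distribution and, for affine models in particular, is available in closed form even when the transition density is not.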

11.
Information criteria (IC) are often used to decide between forecasting models. Commonly used criteria include Akaike's IC and Schwarz's Bayesian IC. They involve the sum of two terms: the model's log likelihood and a penalty for the number of model parameters. The likelihood is calculated with equal weight being given to all observations. We propose that greater weight should be put on more recent observations in order to reflect recent forecast accuracy more strongly. This seems particularly pertinent when selecting among exponential smoothing methods, as they are based on an exponential weighting principle. In this paper, we use exponential weighting within the calculation of the log likelihood for the IC. Our empirical analysis uses supermarket sales and call centre arrivals data. The results show that basing model selection on the new exponentially weighted IC can outperform both individual models and selection based on the standard IC.
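A minimal sketch of the idea (the precise weighting and rescaling scheme here is an assumed illustration, not necessarily the authors' specification):

```python
import numpy as np

def weighted_aic(loglik_terms, n_params, alpha=0.95):
    """AIC with exponentially weighted per-observation log-likelihoods.

    loglik_terms: per-observation log-likelihood contributions, oldest first.
    alpha: discount factor in (0, 1]; alpha = 1 recovers the standard AIC.
    """
    T = len(loglik_terms)
    w = alpha ** np.arange(T - 1, -1, -1)  # weight 1 on the newest observation
    w *= T / w.sum()                       # rescale so the weights sum to T
    return -2.0 * np.sum(w * np.asarray(loglik_terms)) + 2.0 * n_params
```

Model selection then proceeds as usual: compute the criterion for each candidate exponential smoothing model and pick the smallest value.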

12.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization, in which the characteristic polynomial of the AR model is factorized in the complex domain and the inference questions of interest and their solutions are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots, and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is being carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.
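The latent root factorization mentioned above writes the AR characteristic polynomial in terms of its reciprocal roots:

```latex
\phi(B) \;=\; 1 - \phi_1 B - \cdots - \phi_p B^p \;=\; \prod_{j=1}^{p} \bigl(1 - \alpha_j B\bigr),
```

where stationarity corresponds to $|\alpha_j| < 1$ for every reciprocal root $\alpha_j$, $|\alpha_j| = 1$ identifies a sustained (possibly periodic) unit-root component, and a complex conjugate pair $\alpha_j = r e^{\pm i\omega}$ contributes quasi-cyclical behaviour at frequency $\omega$.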

13.
We provide a methodology for testing a polynomial model hypothesis by generalizing the approach and results of Baek, Cho, and Phillips (Journal of Econometrics, 2015, 187, 376–384; BCP), who test for neglected nonlinearity using power transforms of regressors against arbitrary nonlinearity. We use the BCP quasi-likelihood ratio test and deal with the new multifold identification problem that arises under the null of the polynomial model. The approach leads to convenient asymptotic theory for inference, has omnibus power against general nonlinear alternatives, and allows estimation of an unknown polynomial degree in a model by way of sequential testing, a technique that is useful in the application of sieve approximations. Simulations show good performance of the sequential test procedure in both identifying and estimating the unknown polynomial order. The approach, which can be used empirically to test for misspecification, is applied to a Mincer (Journal of Political Economy, 1958, 66, 281–302; Schooling, Experience and Earnings, Columbia University Press, 1974) equation using data from Card (in Christofides, Grant, and Swidinsky (Eds.), Aspects of Labour Market Behaviour: Essays in Honour of John Vanderkamp, University of Toronto Press, 1995, 201–222) and Bierens and Ginther (Empirical Economics, 2001, 26, 307–324). The results confirm that the standard Mincer log earnings equation is readily shown to be misspecified. The applications consider different datasets and examine the impact of nonlinear effects of experience and schooling on earnings, allowing for flexibility in the respective polynomial representations.
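The baseline specification under test is the standard Mincer log earnings equation, quadratic in experience:

```latex
\log w_i \;=\; \beta_0 + \beta_1 s_i + \beta_2 x_i + \beta_3 x_i^2 + \varepsilon_i,
```

with $s_i$ years of schooling and $x_i$ years of experience (our notation). The sequential test asks whether polynomial terms beyond this specification are needed, which is what the rejections reported above indicate.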

14.
Two types of probability are discussed, one of which is additive whilst the other is non-additive. Popular theories that attempt to justify the importance of the additivity of probability are then critically reviewed. Under certain assumptions, the two types of probability put forward are used to justify a method of inference in which betting preferences are revised in light of the data. This method of inference can be viewed as a justification for a weighted likelihood approach, where the plausibility of different values of a parameter θ given data x is measured by the quantity q(θ) = l(x, θ)w(θ), where l(x, θ) is the likelihood function and w(θ) is a weight function. Even though, unlike Bayesian inference, the method has the disadvantageous property that the measure q(θ) is generally non-additive, it is argued that the method has other properties which may be considered very desirable and which, when everything is taken into account, make it a serious alternative to the Bayesian approach in many situations. The methodology that is developed is applied to both a toy example and a real example.
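A toy numerical illustration of the weighted likelihood measure (the data, the binomial model and the weight function are all invented for illustration):

```python
import numpy as np
from scipy import stats

theta = np.linspace(0.01, 0.99, 981)   # grid of parameter values
lik = stats.binom.pmf(7, 10, theta)    # l(x, theta): 7 successes in 10 trials
w = np.where(theta <= 0.5, 1.0, 0.5)   # w(theta): down-weight theta > 0.5
q = lik * w                            # q(theta) = l(x, theta) * w(theta)
print(theta[np.argmax(q)])             # most plausible theta under q
```

Note that q is used only to rank parameter values by plausibility; unlike a posterior density it does not integrate to one, which is the non-additivity the abstract refers to.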

15.
This paper proposes a testing strategy for the null hypothesis that a multivariate linear rational expectations (LRE) model has a unique stable solution (determinacy) against the alternative of multiple stable solutions (indeterminacy). The testing problem is addressed by a misspecification-type approach in which the overidentifying restrictions test obtained from the estimation of the system of Euler equations of the LRE model through the generalized method of moments is combined with a likelihood-based test for the cross-equation restrictions that the model places on its reduced form solution under determinacy. The resulting test has no power against a particular class of indeterminate equilibria, hence non-rejection of the null hypothesis cannot be interpreted conclusively as evidence of determinacy. On the other hand, this test (i) circumvents the nonstandard inferential problem generated by the presence of the auxiliary parameters that appear under indeterminacy and are not identifiable under determinacy, (ii) does not involve inequality parametric restrictions and hence the use of nonstandard inference, (iii) is consistent against dynamic misspecification of the LRE model, and (iv) is computationally simple. Monte Carlo simulations show that the suggested testing strategy delivers reasonable size coverage and power against dynamic misspecification in finite samples. An empirical illustration focuses on the determinacy/indeterminacy of a New Keynesian monetary business cycle model of the US economy.
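The first ingredient of the strategy is the familiar GMM overidentifying restrictions (Hansen J) statistic,

```latex
J \;=\; n\,\bar{g}_n(\hat{\theta})'\,\hat{S}^{-1}\,\bar{g}_n(\hat{\theta})
\;\xrightarrow{d}\; \chi^2_{m-k},
```

where $\bar{g}_n$ is the sample average of the $m$ moment conditions implied by the Euler equations, $\hat{S}$ is a consistent estimator of their long-run variance, and $k$ is the number of estimated parameters.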

16.
In two recent articles, Sims (1988) and Sims and Uhlig (1988/1991) question the value of much of the ongoing literature on unit roots and stochastic trends. They characterize the seeds of this literature as ‘sterile ideas’, the application of nonstationary limit theory as ‘wrongheaded and unenlightening’, and the use of classical methods of inference as ‘unreasonable’ and ‘logically unsound’. They advocate in place of classical methods an explicit Bayesian approach to inference that utilizes a flat prior on the autoregressive coefficient. DeJong and Whiteman adopt a related Bayesian approach in a group of papers (1989a,b,c) that seek to re-evaluate the empirical evidence from historical economic time series. Their results appear to be conclusive in turning around the earlier, influential conclusions of Nelson and Plosser (1982) that most aggregate economic time series have stochastic trends. So far these criticisms of unit root econometrics have gone unanswered; the assertions about the impropriety of classical methods and the superiority of flat prior Bayesian methods have been unchallenged; and the empirical re-evaluation of evidence in support of stochastic trends has been left without comment. This paper breaks that silence and offers a new perspective. We challenge the methods, the assertions, and the conclusions of these articles on the Bayesian analysis of unit roots. Our approach is also Bayesian but we employ what are known in the statistical literature as objective ignorance priors in our analysis. These are developed in the paper to accommodate explicitly time series models in which no stationarity assumption is made. Ignorance priors are intended to represent a state of ignorance about the value of a parameter and in many models are very different from flat priors. We demonstrate that in time series models flat priors do not represent ignorance but are actually informative (sic) precisely because they neglect generically available information about how autoregressive coefficients influence observed time series characteristics. Contrary to their apparent intent, flat priors unwittingly bias inferences towards stationary and i.i.d. alternatives where they do represent ignorance, as in the linear regression model. This bias helps to explain the outcome of the simulation experiments in Sims and Uhlig and some of the empirical results of DeJong and Whiteman. Under both flat priors and ignorance priors this paper derives posterior distributions for the parameters in autoregressive models with a deterministic trend and an arbitrary number of lags. Marginal posterior distributions are obtained by using the Laplace approximation for multivariate integrals along the lines suggested by the author (Phillips, 1983) in some earlier work. The bias towards stationary models that arises from the use of flat priors is shown in our simulations to be substantial; and we conclude that it is unacceptably large in models with a fitted deterministic trend, for which the expected posterior probability of a stochastic trend is found to be negligible even though the true data generating mechanism has a unit root. Under ignorance priors, Bayesian inference is shown to accord more closely with the results of classical methods. An interesting outcome of our simulations and our empirical work is the bimodal Bayesian posterior, which demonstrates that Bayesian confidence sets can be disjoint, just like classical confidence intervals that are based on asymptotic theory. 
The paper concludes with an empirical application of our Bayesian methodology to the Nelson–Plosser series. Seven of the 14 series show evidence of stochastic trends under ignorance priors, whereas under flat priors on the coefficients all but three of the series appear trend stationary. The latter result corresponds closely with the conclusion reached by DeJong and Whiteman (1989b) (based on truncated flat priors). We argue that the DeJong–Whiteman inferences are biased towards trend stationarity through the use of flat priors on the autoregressive coefficients, and that their inferences for some of the series (especially stock prices) are fragile (i.e. not robust) not only to the prior but also to the lag length chosen in the time series specification.
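To see the mechanics in the simplest case (an AR(1) with known error variance, conditioned on the initial observation; a textbook simplification, not the paper's general model): a flat prior on $\rho$ makes the posterior proportional to the likelihood,

```latex
p(\rho \mid y) \;\propto\; \exp\!\left\{-\frac{1}{2\sigma^2}
\sum_{t=1}^{T}\bigl(y_t - \rho\, y_{t-1}\bigr)^2\right\},
```

which is Gaussian and centred at the OLS estimate $\hat{\rho}$ in every sample, whereas the classical sampling distribution of $\hat{\rho}$ is skewed near the unit root. That asymmetry is at the heart of the flat-prior versus ignorance-prior debate summarised above.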

17.
This paper proposes a two-equation price–wage model that makes it possible to test whether the inflationary pressure on the wage rate is present only when the rate of inflation is greater than some threshold value. Since the likelihood function for this model is very nonstandard, we develop a small-sample Bayesian approach to estimate its parameters. Our empirical results for Poland, 1962–1993, support the hypothesis of a price–wage spiral with a positive threshold value of inflation.
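One schematic way to encode the threshold hypothesis in the wage equation (an illustrative form, not the authors' exact specification) is

```latex
\Delta w_t \;=\; \alpha + \beta\,\max\bigl(0,\; \pi_t - \pi^{*}\bigr) + u_t,
```

so that inflation $\pi_t$ feeds into wage growth only above the threshold $\pi^{*}$; the kink at the unknown $\pi^{*}$ is one source of the nonstandard likelihood the abstract refers to.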

18.
This paper outlines an approach to Bayesian semiparametric regression in multiple equation models which can be used to carry out inference in seemingly unrelated regressions or simultaneous equations models with nonparametric components. The approach treats the points on each nonparametric regression line as unknown parameters and uses a prior on the degree of smoothness of each line to ensure valid posterior inference despite the fact that the number of parameters is greater than the number of observations. We develop an empirical Bayesian approach that allows us to estimate the prior smoothing hyperparameters from the data. An advantage of our semiparametric model is that it is written as a seemingly unrelated regressions model with an independent normal–Wishart prior. Since this model is a common one, textbook results for posterior inference, model comparison, prediction and posterior computation are immediately available. We use this model in an application involving a two-equation structural model drawn from the labour and returns to schooling literatures.
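One common way to express a prior on the smoothness of a regression line whose points are treated as parameters (a sketch of the general device, not necessarily the paper's exact prior) is to penalize second differences at the ordered design points:

```latex
y_i \;=\; f(x_i) + \varepsilon_i,
\qquad
f(x_{i+1}) - 2 f(x_i) + f(x_{i-1}) \;\sim\; N\!\bigl(0,\ \eta^{-1}\bigr),
```

where the hyperparameter $\eta$ controls the degree of smoothing and, in the empirical Bayes step described above, can itself be estimated from the data.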

19.
We develop a Bayesian semi-parametric approach to the instrumental variable problem. We assume linear structural and reduced form equations, but model the error distributions non-parametrically. A Dirichlet process prior is used for the joint distribution of the structural and instrumental variable equation errors. Our implementation of the Dirichlet process prior uses a normal distribution as a base model. It can therefore be interpreted as modeling the unknown joint distribution with a mixture of normal distributions with a variable number of mixture components. We demonstrate that this procedure is both feasible and sensible using actual and simulated data. Sampling experiments compare inferences from the non-parametric Bayesian procedure with those based on procedures from the recent literature on weak instrument asymptotics. When errors are non-normal, our procedure is more efficient than standard Bayesian or classical methods.
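In symbols, the prior on the joint error distribution is

```latex
(\varepsilon_i, v_i) \mid G \;\sim\; G,
\qquad
G \;\sim\; \mathrm{DP}(\alpha, G_0),
\qquad
G_0 = N(0, \Sigma_0),
```

where $(\varepsilon_i, v_i)$ denotes the structural and instrument equation errors (our notation) and $G_0$ is the normal base model; integrating out $G$ gives the mixture-of-normals representation with a random number of components described above.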

20.
The likelihood of the parameters in structural macroeconomic models typically has non-identification regions over which it is constant. When sufficiently diffuse priors are used, the posterior piles up in such non-identification regions. Use of informative priors can lead to the opposite, so both can generate spurious inference. We propose priors/posteriors on the structural parameters that are implied by priors/posteriors on the parameters of an embedding reduced-form model. An example of such a prior is the Jeffreys prior. We use it to conduct Bayesian limited-information inference on the New Keynesian Phillips curve with a VAR reduced form for US data.
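The Jeffreys prior mentioned as an example is, on the reduced-form parameters $\phi$,

```latex
p(\phi) \;\propto\; \sqrt{\det I(\phi)},
\qquad
I(\phi) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^2 \log L(\phi)}{\partial \phi\,\partial \phi'}\right],
```

the square root of the determinant of the Fisher information; the implied prior on the structural parameters then follows by the change of variables.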
