Similar Literature
20 similar documents found (search time: 234 ms)
1.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. Our study, on the other hand, builds a bridge between the classical and Bayesian approaches to statistical inference under possible model misspecification. We establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same "sandwich"-type second moment. Based on the LI-QL we develop inference methods that are useful for practical applications under possible model misspecification. In particular, we develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we develop a Bayesian method for analyzing model specification based on an LI-QL.
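As a hedged illustration of the "sandwich" second moment mentioned above, the sketch below computes White's (1982) robust covariance A⁻¹BA⁻¹ for a Gaussian quasi-likelihood fitted by OLS to heteroskedastic (hence misspecified) data; the data-generating process and all names are invented for the example.

```python
# Illustrative sketch (not from the paper): White's (1982) sandwich covariance
# A^{-1} B A^{-1} for a Gaussian quasi-likelihood estimated by OLS.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Heteroskedastic errors, so the Gaussian QL is misspecified.
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * np.exp(X[:, 1])

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # the QMLE coincides with OLS here
u = y - X @ beta_hat

A = X.T @ X / n                                   # "bread": minus the average Hessian
S = (X * u[:, None]).T @ (X * u[:, None]) / n     # "meat": outer product of scores
sandwich = np.linalg.inv(A) @ S @ np.linalg.inv(A) / n

print("robust std. errors:", np.sqrt(np.diag(sandwich)))
```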

2.
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements.

3.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization, in which the characteristic polynomial of the AR model is factorized in the complex domain and inference questions of interest and their solutions are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection, involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.
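A minimal sketch of the latent root factorization described above, for a hypothetical AR(2): the characteristic polynomial is factorized numerically, and the moduli and implied periods of the reciprocal roots are inspected (a modulus near one for a complex pair would signal a sustained periodic component).

```python
# Illustrative sketch (assumed AR(2), not the paper's code): factorize the AR
# characteristic polynomial 1 - phi_1 z - phi_2 z^2 and inspect reciprocal roots.
import numpy as np

phi = np.array([1.5, -0.9])              # hypothetical AR(2) coefficients
poly = np.r_[1.0, -phi]                  # coefficients of 1 - phi_1 z - phi_2 z^2 (low to high degree)
roots = np.roots(poly[::-1])             # np.roots expects highest degree first

recip = 1.0 / roots                      # reciprocal roots; all |recip| < 1 <=> stationarity
modulus = np.abs(recip)
# Implied cycle length for a complex pair (angle is nonzero here by construction).
period = 2 * np.pi / np.abs(np.angle(recip))

print("moduli:", modulus)                # a modulus of 1 would flag a (periodic) unit root
print("periods:", period)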

4.
In this study we focus attention on model selection in the presence of panel data. Our approach is eclectic in that it combines both classical and Bayesian techniques. It is also novel in that we address not only model selection but also model occurrence, i.e., the process by which 'nature' chooses a statistical framework in which to generate the data of interest. For a given data subset, there exist competing models, each of which has an ex ante positive probability of being the correct model, but for any one generated sample, ex post exactly one such model is the basis for the observed data set. Attention focuses on how the underlying occurrence probabilities of the competing models depend on characteristics of the environments in which the data subsets are generated. Classical, Bayesian, and mixed estimation approaches are developed. Bayesian approaches are shown to be especially attractive whenever the models are nested.

5.
Consider the design problem for the approximately linear model with serially correlated errors. The correlation structure is a qth-order moving average process, MA(q), especially for q = 1, 2. The optimal design is derived using a Bayesian approach. The Bayesian designs derived under various priors are compared with the classical designs with respect to some specific correlation structures. The results show that any prior knowledge about the sign of the MA(q) process parameters leads to designs that are considerably more efficient than the classical ones based on the homoscedasticity assumption.

6.
The mixed logit model is widely used in applied econometrics. Researchers typically rely on the free choice between the classical and Bayesian estimation approach. However, empirical evidence of the similarity of their parameter estimates is sparse. The presumed similarity is mainly based on one empirical study that analyzes a single dataset (Huber J, Train KE. 2001. On the similarity of classical and Bayesian estimates of individual mean partworths. Marketing Letters 12 (3): 259–269). Our replication study offers a generalization of their results by comparing classical and Bayesian parameter estimates from six additional datasets, specifically for panel versus cross-sectional data. In general, our results suggest that the two methods provide similar results, with less similarity for cross-sectional data than for panel data.

7.
This paper presents a Bayesian approach to regression models with time-varying parameters, or state vector models. Unlike most previous research in this field the model allows for multiple observations for each time period. Bayesian estimators and their properties are developed for the general case where the regression parameters follow an ARMA(s,q) process over time. This methodology is applied to the estimation of time-varying price elasticity for a consumer product, using biweekly sales data for eleven domestic markets. The parameter estimates and forecasting performance of the model are compared with various alternative approaches.

8.
This paper considers the issue of selecting the number of regressors and the number of structural breaks in multivariate regression models in the possible presence of multiple structural changes. We develop a modified Akaike information criterion (AIC), a modified Mallows' Cp criterion and a modified Bayesian information criterion (BIC). The penalty terms in these criteria are shown to be different from the usual terms. We prove that the modified BIC consistently selects the regressors and the number of breaks whereas the modified AIC and the modified Cp criterion tend to overfit with positive probability. The finite sample performance of these criteria is investigated through Monte Carlo simulations and it turns out that our modification is successful in comparison to the classical model selection criteria and the sequential testing procedure robust to heteroskedasticity and autocorrelation.

9.
In this paper, we revisit the well-known UK inflation model by Hendry (Journal of Applied Econometrics, 2001, 16, 255–275). We replicate the results in a narrow sense using the gretl and PcGive programs. In a wide sense, we extend the study of model uncertainty using the Bayesian averaging of classical estimates (BACE) approach as an automatic model reduction strategy. We consider three different specifications to compare BACE variable selection with Hendry's reduction. We find that the BACE method can recover the path of Hendry's nontrivial reduction strategy.
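For readers unfamiliar with BACE, the sketch below illustrates the style of approximate posterior model weight it uses, proportional to T^(-k/2)·SSE^(-T/2) as in Sala-i-Martin, Doppelhofer and Miller (2004); the simulated data and candidate models are invented, and this is not the paper's replication code.

```python
# Illustrative sketch (assumed data/models): BACE-style posterior model weights,
# proportional to T^(-k/2) * SSE^(-T/2), computed in logs for stability.
import itertools
import numpy as np

rng = np.random.default_rng(1)
T, K = 120, 4
X = rng.normal(size=(T, K))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=T)

models, log_w = [], []
for k in range(1, K + 1):
    for cols in itertools.combinations(range(K), k):
        Z = np.column_stack([np.ones(T), X[:, list(cols)]])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        log_w.append(-(Z.shape[1] / 2) * np.log(T) - (T / 2) * np.log(resid @ resid))
        models.append(cols)

log_w = np.array(log_w)
w = np.exp(log_w - log_w.max())
w /= w.sum()
for cols, wi in sorted(zip(models, w), key=lambda t: -t[1])[:3]:
    print(cols, round(float(wi), 3))   # mass should concentrate on models containing {0, 2}
```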

10.
11.
We describe procedures for Bayesian estimation and testing in cross-sectional, panel data and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement—for example, estimation, testing and smoothing parameter selection can be carried out analytically in the cross-sectional smooth coefficient model.
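A minimal sketch of the smoothing device described above, assuming Gaussian noise and a fixed smoothing parameter: the points on the curve are the parameters, a random-walk prior on adjacent differences penalizes roughness, and the posterior mean reduces to a penalized least-squares solve.

```python
# Illustrative sketch (assumptions: Gaussian noise, known smoothing parameter):
# treat points on the curve as unknowns with a prior on differences between
# adjacent points; the posterior mean is then a penalized least-squares smoother.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

D = np.diff(np.eye(n), axis=0)          # first-difference matrix: (D f)_i = f_{i+1} - f_i
lam = 50.0                              # sigma^2 / tau^2; in practice selectable from the data
f_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)   # posterior mean of the curve points
```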

12.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
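To make the extraction step concrete, here is a hedged sketch for the bivariate case with a hypothetical two-component fit: the conditional density p(y|x) re-weights each mixture component by its marginal at x and conditions each normal in closed form.

```python
# Illustrative sketch (hypothetical fitted parameters, not the paper's code):
# extract the conditional density p(y | x) from a mixture of bivariate normals.
import numpy as np
from scipy.stats import norm

# Assumed fitted mixture: weights, means (y, x) and covariance matrices.
pis = [0.6, 0.4]
mus = [np.array([0.0, 0.0]), np.array([3.0, 2.0])]
covs = [np.array([[1.0, 0.5], [0.5, 1.0]]), np.array([[1.0, -0.3], [-0.3, 0.5]])]

def conditional_density(y, x):
    num, den = 0.0, 0.0
    for pi_k, mu, S in zip(pis, mus, covs):
        wx = pi_k * norm.pdf(x, mu[1], np.sqrt(S[1, 1]))   # weight rescaled by marginal of x
        m = mu[0] + S[0, 1] / S[1, 1] * (x - mu[1])        # conditional mean of y given x
        v = S[0, 0] - S[0, 1] ** 2 / S[1, 1]               # conditional variance
        num += wx * norm.pdf(y, m, np.sqrt(v))
        den += wx
    return num / den

print(conditional_density(y=1.0, x=0.5))
```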

13.
Two methods are given for adapting a kernel density estimate to obtain an estimate of a density function with bias O(h^p) for any given p, where h = h(n) is the bandwidth and n is the sample size. The first method is standard. The second method is new and involves use of Bell polynomials. The second method is shown to yield smaller biases and smaller mean squared errors than classical kernel density estimates and those due to Jones et al. (Biometrika 82:327–338, 1995).
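The Bell-polynomial method is specific to the paper, but the "standard" first method can be illustrated with a generic higher-order kernel; the sketch below uses the fourth-order Gaussian-based kernel K4(u) = (3 - u²)φ(u)/2, which lowers the bias from O(h²) to O(h⁴) at the cost of possibly negative estimates in the tails.

```python
# Illustrative sketch (generic higher-order kernel, not the paper's method):
# KDE with the fourth-order Gaussian-based kernel K4(u) = 0.5 * (3 - u^2) * phi(u).
import numpy as np
from scipy.stats import norm

def kde4(x_grid, data, h):
    u = (x_grid[:, None] - data[None, :]) / h
    k4 = 0.5 * (3.0 - u ** 2) * norm.pdf(u)   # integrates to 1, zero second moment
    return k4.mean(axis=1) / h

rng = np.random.default_rng(3)
data = rng.normal(size=400)
grid = np.linspace(-3, 3, 61)
f_hat = kde4(grid, data, h=0.6)               # bias O(h^4); may dip below zero in the tails
```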

14.
W. L. Pearn & Chien-Wei Wu, Metrika (2005) 61(2): 221–234
Process capability indices have been proposed in the manufacturing industry to provide numerical measures of process reproduction capability; they are effective tools for quality assurance and guidance for process improvement. In process capability analysis, the usual practice for testing capability indices from sample data is based on the traditional frequentist approach. Bayesian statistical techniques are an alternative. Shiau, Chiang and Hung (1999) applied the Bayesian method to the index Cpm and to the index Cpk, but under the restriction that the process mean μ equals the midpoint of the two specification limits, m. We note that this restriction is a rather impractical assumption for most factory applications, since in this case Cpk reduces to Cp. In this paper, we consider testing the most popular capability index, Cpk, in the general situation with no restriction on the process mean, based on the Bayesian approach. The results obtained are more general and practical for real applications. We derive the posterior probability, p, that the process under investigation is capable and propose accordingly a Bayesian procedure for capability testing. To make this Bayesian procedure practical for in-plant applications, we tabulate the minimum values of Ĉpk for which the posterior probability p reaches desirable confidence levels at various pre-specified capability levels.
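The paper's tabulated critical values are not reproduced here, but the posterior probability p can be illustrated by simulation; the sketch below assumes a standard noninformative prior for (μ, σ²) and invented specification limits and data.

```python
# Illustrative sketch (assumed noninformative prior; not the paper's tabulation):
# posterior probability that the process is capable, P(Cpk > c | data), where
# Cpk = min(USL - mu, mu - LSL) / (3 * sigma).
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(10.1, 0.3, size=100)       # hypothetical in-plant sample
LSL, USL, c = 9.0, 11.0, 1.0                 # spec limits and required capability level

n, xbar, s2 = len(data), data.mean(), data.var(ddof=1)
draws = 20_000
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, draws)   # sigma^2 | data
mu = rng.normal(xbar, np.sqrt(sigma2 / n))            # mu | sigma^2, data
cpk = np.minimum(USL - mu, mu - LSL) / (3 * np.sqrt(sigma2))

print("P(Cpk > c | data) =", (cpk > c).mean())
```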

15.
Much work in econometrics and statistics has been concerned with comparing Bayesian and non-Bayesian estimation results, while much less has involved comparisons of Bayesian and non-Bayesian analyses of hypotheses. Some issues arising in this latter area that are mentioned and discussed in the paper are: (1) Is it meaningful to associate probabilities with hypotheses? (2) What concept of probability is to be employed in analyzing hypotheses? (3) Is a separate theory of testing needed? (4) Must a theory of testing be capable of treating both sharp and non-sharp hypotheses? (5) How is prior information incorporated in testing? (6) Does the use of power functions in practice necessitate the use of prior information? (7) How are significance levels determined when sample sizes are large, and what are the interpretations of P-values and tail areas? (8) How are conflicting results provided by asymptotically equivalent testing procedures to be reconciled? (9) What is the rationale for the '5% accept-reject syndrome' that afflicts econometrics and applied statistics? (10) Does it make sense to test a null hypothesis with no alternative hypothesis present? And (11) How are the results of analyses of hypotheses to be combined with estimation and prediction procedures? Brief discussions of these issues with references to the literature are provided.

Since there is much controversy concerning how hypotheses are actually analyzed in applied work, the results of a small survey relating to 22 articles employing empirical data published in leading economic and econometric journals in 1978 are presented. The major results of this survey indicate that there is widespread use of the 1% and 5% levels of significance in non-Bayesian testing, with no systematic relation between the choice of significance level and sample size. Also, power considerations are not generally discussed in empirical studies; in fact, there was a discussion of power in only one of the articles surveyed. Further, there was very little formal or informal use of prior information in testing hypotheses, and practically no attention was given to the effects of tests or pre-tests on the properties of subsequent tests or estimation results. These results indicate that there is much room for improvement in applied analyses of hypotheses.

Given the findings of the survey of applied studies, it is suggested that Bayesian procedures for analyzing hypotheses may be helpful in improving applied analyses. In this connection, the paper presents a review of some Bayesian procedures and results for analyzing sharp and non-sharp hypotheses with explicit use of prior information. In general, Bayesian procedures have good sampling properties and enable investigators to compute posterior probabilities and posterior odds ratios associated with alternative hypotheses quite readily. The relationships of several posterior odds ratios to usual non-Bayesian testing procedures are clearly demonstrated. Also, a relation between the P-value or tail area and a posterior odds ratio is described in detail in the important case of hypotheses about the mean of a normal distribution.

Other examples covered in the paper include posterior odds ratios for the hypotheses that (1) βi > 0 and βi < 0, where βi is a regression coefficient; (2) the data are drawn from either of two alternative distributions; (3) θ = 0, θ > 0 and θ < 0, where θ is the mean of a normal distribution; (4) β = 0 and β ≠ 0, where β is a vector of regression coefficients; and (5) β2 = 0 vs. β2 ≠ 0, where β′ = (β′1, β′2) is a vector of regression coefficients and β1's value is unrestricted. In several cases, tabulations of odds ratios are provided. Bayesian versions of the Chow test for equality of regression coefficients and of the Goldfeld-Quandt test for equality of disturbance variances are given. Also, an application of Bayesian posterior odds ratios to a regression model selection problem utilizing the Hald data is reported.

In summary, the results reported in the paper indicate that operational Bayesian procedures for analyzing many hypotheses encountered in model selection problems are available. These procedures yield posterior odds ratios and posterior probabilities for competing hypotheses. These posterior odds ratios represent the weight of the evidence supporting one model or hypothesis relative to another. Given a loss structure, as is well known, one can choose among hypotheses so as to minimize expected loss. Also, with posterior probabilities available and an estimation or prediction loss function, it is possible to choose a point estimate or prediction that minimizes expected loss by averaging over alternative hypotheses or models. Thus the Bayesian approach to analyzing competing models or hypotheses provides a unified framework that is extremely useful in solving a number of model selection problems.
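As a concrete instance of the posterior odds ratios discussed above, the following sketch computes the textbook odds for H0: θ = 0 against H1: θ ≠ 0 with a normal prior under the alternative; the numbers are invented, and this is not Zellner's tabulation.

```python
# Illustrative sketch (textbook case): posterior odds for H0: theta = 0 versus
# H1: theta ~ N(0, tau^2), with the sample mean ybar ~ N(theta, sigma^2 / n).
import numpy as np
from scipy.stats import norm

n, sigma, tau = 50, 1.0, 1.0
ybar = 0.25                                   # hypothetical sample mean
prior_odds = 1.0                              # P(H0) / P(H1)

m0 = norm.pdf(ybar, 0.0, sigma / np.sqrt(n))              # marginal of ybar under H0
m1 = norm.pdf(ybar, 0.0, np.sqrt(tau**2 + sigma**2 / n))  # marginal of ybar under H1
posterior_odds = prior_odds * m0 / m1

print("Bayes factor B01 =", m0 / m1)
print("P(H0 | data)     =", posterior_odds / (1 + posterior_odds))
```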

16.
We propose a nonparametric Bayesian approach to estimating time-varying grouped patterns of heterogeneity in linear panel data models. Unlike the classical approach in Bonhomme and Manresa (Econometrica, 2015, 83, 1147–1184), our approach selects the optimal number of groups jointly with model estimation and can be readily extended to quantify uncertainty in the estimated group structure. The proposed approach performs well in Monte Carlo simulations. Using our approach, we successfully replicate the estimated relationship between income and democracy in Bonhomme and Manresa, as well as the group characteristics, when we use the same number of groups. Furthermore, we find that the optimal number of groups can depend on the model specification for heteroskedasticity, and we discuss ways to choose models in practice.

17.
18.
This paper analyzes the empirical performance of two alternative ways in which multi-factor models with time-varying risk exposures and premia may be estimated. The first method echoes the seminal two-pass approach introduced by Fama and MacBeth (1973). The second approach is based on a Bayesian latent mixture model with breaks in risk exposures and idiosyncratic volatility. Our application to monthly, 1980–2010 U.S. data on stock, bond, and publicly traded real estate returns shows that the classical, two-stage approach that relies on a nonparametric, rolling window estimation of time-varying betas yields results that are unreasonable. There is evidence that most portfolios of stocks, bonds, and REITs have been grossly over-priced. On the contrary, the Bayesian approach yields sensible results and a few factor risk premia are precisely estimated with a plausible sign. Predictive log-likelihood scores indicate that discrete breaks in both risk exposures and variances are required to fit the data.
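The paper's Bayesian latent mixture model is beyond a short sketch, but the first, two-pass method can be illustrated; the version below uses simulated data and constant betas rather than the paper's rolling-window, time-varying estimates.

```python
# Illustrative sketch (simulated data, constant betas): the two-pass
# Fama-MacBeth (1973) procedure with a single factor.
import numpy as np

rng = np.random.default_rng(5)
T, N = 360, 25
f = rng.normal(0.005, 0.04, size=T)                  # factor returns
beta = rng.uniform(0.5, 1.5, size=N)
R = 0.002 + np.outer(f, beta) + rng.normal(0, 0.05, size=(T, N))

# Pass 1: time-series regressions of each asset on the factor -> beta estimates.
X1 = np.column_stack([np.ones(T), f])
b_hat = np.linalg.lstsq(X1, R, rcond=None)[0][1]     # slope for each of the N assets

# Pass 2: period-by-period cross-sectional regressions of returns on the betas.
X2 = np.column_stack([np.ones(N), b_hat])
gammas = np.array([np.linalg.lstsq(X2, R[t], rcond=None)[0] for t in range(T)])

lam = gammas.mean(axis=0)                            # [zero-beta rate, factor risk premium]
se = gammas.std(axis=0, ddof=1) / np.sqrt(T)         # Fama-MacBeth standard errors
print("risk premium:", lam[1], "t-stat:", lam[1] / se[1])
```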

19.
In standard regression analysis the relationship between the (response) variable and a set of (explanatory) variables is investigated. In the classical framework the response is affected by probabilistic uncertainty (randomness) and is thus treated as a random variable. However, the data can also be subject to other kinds of uncertainty, such as imprecision. A possible way to manage all of these uncertainties is the concept of a fuzzy random variable (FRV). The most common class of FRVs is the LR family (LR FRV), which allows us to express every FRV in terms of three random variables, namely the center, the left spread and the right spread. In this work, limiting our attention to the LR FRV class, we consider the linear regression problem in the presence of one or more imprecise random elements. Procedures for estimating the model parameters and the coefficient of determination are discussed, and the hypothesis testing problem is addressed following a bootstrap approach. Furthermore, in order to illustrate how the proposed model works in practice, the results of a real-life example are given together with a comparison with those obtained by applying classical regression analysis.

20.
An F test [Nelson (1976)] of Parzen's prediction variance horizon [Parzen (1982)] for an ARMA model yields the number of steps ahead for which forecasts contain information (short memory). A special 10-year pattern in Finnish GDP is introduced as a 'seasonal' in an ARMA model. Forecasts three years ahead are statistically informative, but exploiting the complete 10-year pattern raises doubts about both model memory and model validity.
