Similar articles
A total of 20 similar articles were found (search time: 0 ms).
1.
We present a Bayesian approach for analyzing aggregate-level sales data in a market with differentiated products. We consider the aggregate share model proposed by Berry et al. [Berry, Steven, Levinsohn, James, Pakes, Ariel, 1995. Automobile prices in market equilibrium. Econometrica 63 (4), 841–890], which introduces a common demand shock into an aggregated random coefficient logit model. A full likelihood approach is possible with a specification of the distribution of the common demand shock. We introduce a reparameterization of the covariance matrix to improve the performance of the random walk Metropolis for covariance parameters. We illustrate the usefulness of our approach with both actual and simulated data. Sampling experiments show that our approach performs well relative to the GMM estimator even in the presence of a mis-specified shock distribution. We view our approach as useful for those who are willing to trade off one additional distributional assumption for increased efficiency in estimation.
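As a rough illustration of the aggregation step in such models, the sketch below (Python/NumPy; all dimensions, parameter values, and the function name aggregate_shares are hypothetical, not the authors' implementation) simulates consumer-level random coefficient logit choice probabilities and averages them into aggregate market shares, with a common demand shock entering every consumer's utility.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical market: J products, K characteristics, R simulation draws.
J, K, R = 5, 3, 1000
X = rng.normal(size=(J, K))                 # product characteristics
beta_mean = np.array([1.0, -0.5, 0.3])      # mean tastes
beta_sd = np.array([0.5, 0.2, 0.1])         # taste heterogeneity
xi = rng.normal(scale=0.3, size=J)          # common demand shock, one per product

def aggregate_shares(X, beta_mean, beta_sd, xi, R, rng):
    """Simulate aggregate shares of a random coefficient logit with demand shock xi."""
    draws = rng.normal(size=(R, len(beta_mean)))            # consumer taste draws
    betas = beta_mean + draws * beta_sd                     # (R, K) individual coefficients
    utilities = betas @ X.T + xi                            # (R, J) utilities
    expu = np.exp(utilities)
    probs = expu / (1.0 + expu.sum(axis=1, keepdims=True))  # outside good utility normalised to 0
    return probs.mean(axis=0)                               # average over simulated consumers

shares = aggregate_shares(X, beta_mean, beta_sd, xi, R, rng)
print(shares, shares.sum())
```

In a full likelihood or GMM treatment these simulated shares would be matched to observed market shares; the sketch shows only the share computation itself.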

2.
Scanner data for fast-moving consumer goods typically amount to panels of time series where both N and T are large. To reduce the number of parameters and to shrink parameters towards plausible and interpretable values, Hierarchical Bayes models turn out to be useful. Such models contain, at the second level, a stochastic model that describes the parameters in the first level.

3.
This paper develops a pure simulation-based approach for computing maximum likelihood estimates in latent state variable models using Markov chain Monte Carlo (MCMC) methods. Our MCMC algorithm simultaneously evaluates and optimizes the likelihood function without resorting to gradient methods. The approach relies on data augmentation, with insights similar to simulated annealing and evolutionary Monte Carlo algorithms. We prove a limit theorem in the degree of data augmentation and use this to provide standard errors and convergence diagnostics. The resulting estimator inherits the sampling asymptotic properties of maximum likelihood. We demonstrate the approach on two latent state models central to financial econometrics: a stochastic volatility model and a multivariate jump-diffusion model. We find that convergence to the MLE is fast, requiring only a small degree of augmentation.

4.
This paper considers a panel data model with time-varying individual effects. The data are assumed to contain a large number of cross-sectional units repeatedly observed over a fixed number of time periods. The model has a feature of the fixed-effects model in that the effects are assumed to be correlated with the regressors. The unobservable individual effects are assumed to have a factor structure. For consistent estimation of the model, it is important to estimate the true number of individual effects. We propose a generalized method of moments procedure by which both the number of individual effects and the regression coefficients can be consistently estimated. Some important identification issues are also discussed. Our simulation results indicate that the proposed methods produce reliable estimates.

5.
This brief article first investigates key dimensions underlying the progress realized by data envelopment analysis (DEA) methodologies. The resulting perspective is then used to encourage reflection on future paths for the field. Borrowing from the social sciences literature, we distinguish between problematization and gap identification in suggesting strategies to push the DEA research envelope. Emerging evidence of a declining number of influential methodological (theory-based) publications and a flattening diffusion of applications implies an unfolding maturity of the field. Such findings suggest that focusing on known limitations of DEA, and/or of its applications, while searching for synergistic partnerships with other methodologies, can create new and fertile grounds for research. Possible future directions might thus include ‘DEA in practice’, ‘opening the black-box of production’, ‘rationalizing inefficiency’, and ‘the productivity dilemma’. What we are therefore proposing is a strengthening of the methodology's contribution to fields of endeavor both including, and beyond, those considered in the past.

6.
The structural consumer demand methods used to estimate the parameters of collective household models are typically either very restrictive and easy to implement or very general and difficult to estimate. In this paper, we provide a middle ground. We adapt the very general framework of [Browning, M., Chiappori, P.A., Lewbel, A., 2004. Estimating Consumption Economies of Scale, Adult Equivalence Scales, and Household Bargaining Power, Boston College Working Papers in Economics 588] by adding a simple restriction that recasts the empirical model from a highly nonlinear demand system with price variation to a slightly nonlinear Engel curve system. Our restriction has an interpretation in terms of the behaviour of household scale economies and is testable. Our method identifies the levels of (not just changes in) household resource shares, and a variant of equivalence scales called indifference scales. We apply our methodology to Canadian expenditure data.

7.
Dynamic Stochastic General Equilibrium (DSGE) models are now considered attractive by the profession not only from the theoretical perspective but also from an empirical standpoint. As a consequence of this development, methods for diagnosing the fit of these models are being proposed and implemented. In this article we illustrate how the concept of statistical identification, which was introduced and used by Spanos [Spanos, Aris, 1990. The simultaneous-equations model revisited: Statistical adequacy and identification. Journal of Econometrics 44, 87–105] to criticize traditional evaluation methods of Cowles Commission models, could be relevant for DSGE models. We conclude that the recently proposed model evaluation method, based on the DSGE–VAR(λ), might not satisfy the condition for statistical identification. However, our application also shows that the adoption of a FAVAR as a statistically identified benchmark leaves unaltered the support of the data for the DSGE model and that a DSGE–FAVAR can be an optimal forecasting model.

8.
We describe procedures for Bayesian estimation and testing in cross-sectional, panel data and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement—for example, estimation, testing and smoothing parameter selection can be carried out analytically in the cross-sectional smooth coefficient model.
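To illustrate the idea of placing priors on differences between adjacent points, here is a minimal sketch (Python/NumPy, with made-up data and variances treated as known; not the paper's algorithm) in which the fitted values at ordered covariate points carry a Gaussian first-difference (random walk) prior, so the posterior mean is available analytically as the solution of a linear system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: noisy observations of a smooth curve at ordered covariate points.
n = 100
z = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * z) + rng.normal(scale=0.3, size=n)

# Prior: differences between adjacent fitted points are N(0, tau2),
# giving prior precision D'D / tau2, where D is the first-difference matrix.
D = np.diff(np.eye(n), axis=0)        # (n-1, n) first-difference operator
tau2, sigma2 = 0.01, 0.3 ** 2         # smoothing and noise variances (assumed known here)

# Gaussian likelihood + Gaussian prior: the posterior mean solves a linear system.
precision = np.eye(n) / sigma2 + D.T @ D / tau2
posterior_mean = np.linalg.solve(precision, y / sigma2)
```

Smaller values of tau2 penalise adjacent differences more heavily and therefore produce a smoother fitted curve.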

9.
In this paper we propose an approach to both estimate and select unknown smooth functions in an additive model with potentially many functions. Each function is written as a linear combination of basis terms, with coefficients regularized by a proper linearly constrained Gaussian prior. Given any potentially rank deficient prior precision matrix, we show how to derive linear constraints so that the corresponding effect is identified in the additive model. This allows for the use of a wide range of bases and precision matrices in priors for regularization. By introducing indicator variables, each constrained Gaussian prior is augmented with a point mass at zero, thus allowing for function selection. Posterior inference is calculated using Markov chain Monte Carlo and the smoothness in the functions is both the result of shrinkage through the constrained Gaussian prior and model averaging. We show how using non-degenerate priors on the shrinkage parameters enables the application of substantially more computationally efficient sampling schemes than would otherwise be the case. We show the favourable performance of our approach when compared to two contemporary alternative Bayesian methods. To highlight the potential of our approach in high-dimensional settings we apply it to estimate two large seemingly unrelated regression models for intra-day electricity load. Both models feature a variety of different univariate and bivariate functions which require different levels of smoothing, and where component selection is meaningful. Priors for the error disturbance covariances are selected carefully and the empirical results provide a substantive contribution to the electricity load modelling literature in their own right.

10.
Discrete choice experiments are widely used to learn about the distribution of individual preferences for product attributes. Such experiments are often designed and conducted deliberately for the purpose of designing new products. There is a long-standing literature on nonparametric and Bayesian modelling of preferences for the study of consumer choice when there is a market for each product, but this work does not apply when such markets fail to exist, as is the case with most product attributes. This paper takes up the common case in which attributes can be quantified and preferences over these attributes are monotone. It shows that monotonicity is the only shape constraint appropriate for a utility function in these circumstances. The paper models components of utility using a Dirichlet prior distribution and demonstrates that all monotone nondecreasing utility functions are supported by the prior. It develops a Markov chain Monte Carlo algorithm for posterior simulation that is reliable and practical given the number of attributes, choices and sample sizes characteristic of discrete choice experiments. The paper uses the algorithm to demonstrate the flexibility of the model in capturing heterogeneous preferences and applies it to a discrete choice experiment that elicits preferences for different auto insurance policies.
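One simple way to see how a Dirichlet prior can support monotone nondecreasing utilities is sketched below (Python/NumPy; the grid size, concentration parameter, and normalisation to [0, 1] are illustrative assumptions rather than the paper's exact construction): utility increments over ordered attribute levels are drawn from a Dirichlet distribution, so their cumulative sum is automatically nondecreasing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical attribute measured on an ordered grid of m levels.
m = 10
alpha = np.ones(m)                    # Dirichlet concentration parameters (illustrative choice)

increments = rng.dirichlet(alpha)     # nonnegative increments that sum to one
utility = np.concatenate(([0.0], np.cumsum(increments)))   # nondecreasing, u at the lowest level = 0, at the highest = 1
```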

11.
We introduce a class of instrumental quantile regression methods for heterogeneous treatment effect models and simultaneous equations models with nonadditive errors and offer computable methods for estimation and inference. These methods can be used to evaluate the impact of endogenous variables or treatments on the entire distribution of outcomes. We describe an estimator of the instrumental variable quantile regression process and the set of inference procedures derived from it. We focus our discussion of inference on tests of distributional equality, constancy of effects, conditional dominance, and exogeneity. We apply the procedures to characterize the returns to schooling in the U.S.
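For intuition, the sketch below (Python with NumPy and statsmodels; the data-generating process, grid, and variable names are invented for illustration) implements the familiar inverse quantile regression idea behind such estimators: for each candidate treatment effect, run a quantile regression of the transformed outcome on the covariates and the instrument, and pick the value that drives the instrument's coefficient towards zero. It is a stylized sketch, not the authors' full estimation and inference procedure.

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)

# Hypothetical data: outcome y, endogenous treatment d, instrument z, exogenous x.
n = 1000
x = rng.normal(size=n)
z = rng.binomial(1, 0.5, size=n)
u = rng.normal(size=n)
d = 0.5 * z + 0.5 * u + rng.normal(size=n)   # treatment is correlated with the error u
y = 1.0 * d + x + u                          # true treatment effect is 1.0 in this toy design

tau = 0.5                                    # quantile of interest
grid = np.linspace(0.0, 2.0, 21)             # candidate treatment effects

def instrument_coef(alpha):
    """Quantile regression of y - d*alpha on (1, x, z); return |coefficient on z|."""
    exog = np.column_stack([np.ones(n), x, z])
    res = QuantReg(y - d * alpha, exog).fit(q=tau)
    return abs(res.params[2])

# The point estimate is the candidate alpha that makes the instrument irrelevant.
alpha_hat = grid[np.argmin([instrument_coef(a) for a in grid])]
print(alpha_hat)
```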

12.
This paper develops a maximum likelihood (ML) method to estimate partially observed diffusion models based on data sampled at discrete times. The method combines two techniques recently proposed in the literature in two separate steps. In the first step, the closed form approach of Aït-Sahalia (2008) is used to obtain a highly accurate approximation to the joint transition probability density of the latent and the observed states. In the second step, the efficient importance sampling technique of Richard and Zhang (2007) is used to integrate out the latent states, thereby yielding the likelihood function. Using both simulated and real data, we show that the proposed ML method works better than alternative methods. The new method does not require the underlying diffusion to have an affine structure and does not involve infill simulations. Therefore, the method has a wide range of applicability and its computational cost is moderate.

13.
Within the affiliated private-values paradigm, we develop a tractable empirical model of equilibrium behaviour at first-price, sealed-bid auctions. The model is non-parametrically identified, but the rate of convergence in estimation is slow when the number of bidders is even moderately large, so we develop a semiparametric estimation strategy, focusing on the Archimedean family of copulae and implementing this framework using particular members—the Clayton, Frank, and Gumbel copulae. We apply our framework to data from low-price, sealed-bid auctions used by the Michigan Department of Transportation to procure road-resurfacing services, rejecting the hypothesis of independence and finding significant (and high) affiliation in cost signals.
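For readers unfamiliar with the Archimedean family, the sketch below (Python/NumPy) writes out the bivariate Clayton, Frank, and Gumbel copula CDFs named in the abstract; it is a generic textbook illustration of these copulae, not the authors' semiparametric estimator.

```python
import numpy as np

# Bivariate Archimedean copula CDFs; theta is the dependence parameter.
def clayton(u, v, theta):            # theta > 0
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def gumbel(u, v, theta):             # theta >= 1
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def frank(u, v, theta):              # theta != 0
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

# Larger theta means stronger positive dependence between the two uniform margins.
u, v = 0.3, 0.7
print(clayton(u, v, 2.0), gumbel(u, v, 2.0), frank(u, v, 5.0))
```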

14.
We introduce a new class of models that has both stochastic volatility and moving average errors, where the conditional mean has a state space representation. Having a moving average component, however, means that the errors in the measurement equation are no longer serially independent, and estimation becomes more difficult. We develop a posterior simulator that builds upon recent advances in precision-based algorithms for estimating these new models. In an empirical application involving US inflation, we find that these moving average stochastic volatility models provide better in-sample fit and out-of-sample forecast performance than the standard variants with only stochastic volatility.
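The complication introduced by the moving average component can be seen directly: with MA(1) measurement errors whose innovations have stochastic volatility, the error covariance is banded rather than diagonal, which is the kind of sparse structure precision-based samplers exploit. A minimal sketch follows (Python/NumPy; the MA coefficient and log-volatilities are arbitrary illustrative values, and the initial innovation is set to zero for simplicity).

```python
import numpy as np

rng = np.random.default_rng(4)

T, psi = 8, 0.6                            # sample length and MA(1) coefficient (illustrative)
h = rng.normal(scale=0.5, size=T)          # log-volatilities, e.g. from a latent random walk

# u_t = eps_t + psi * eps_{t-1}, eps_t ~ N(0, exp(h_t)); stacked, u = H eps with eps_0 = 0.
H = np.eye(T) + psi * np.eye(T, k=-1)
Sigma_u = H @ np.diag(np.exp(h)) @ H.T     # banded, not diagonal: adjacent errors are correlated

print(np.round(Sigma_u, 2))
```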

15.
Bayesian hypothesis testing in latent variable models
Hypothesis testing using Bayes factors (BFs) is known not to be well defined under improper priors. In the context of latent variable models, an additional problem with BFs is that they are difficult to compute. In this paper, a new Bayesian method, based on decision theory and the EM algorithm, is introduced to test a point hypothesis in latent variable models. The new statistic is a by-product of the Bayesian MCMC output and, hence, easy to compute. It is shown that the new statistic is appropriately defined under improper priors because the method employs a continuous loss function. In addition, it is easy to interpret. The method is illustrated using a one-factor asset pricing model and a stochastic volatility model with jumps.

16.
The purpose of the present paper is to clarify the relation between choice theory for individual consumers, i.e., the observed demand behavior, and the preference ordering ≿ of that individual. Specifically, we study how concavifiability (i.e., representability of ≿ by a concave utility function) is expressed by quantities (cross-coefficients) appearing in revealed preference theory. We present a sequence of rather explicit necessary conditions for concavifiability. All these conditions are quantitative asymptotic strengthenings of the strong axiom of revealed preference. The results and concepts are illustrated by means of examples in which expenditure data are defined by providing the generating utility function.

17.
In the spirit of Smale’s work, we consider pure exchange economies with general consumption sets. In this paper, the consumption set of each household is described in terms of a function called possibility function. The main innovation comes from the dependency of each possibility function with respect to the individual endowments. We prove that, generically in the space of endowments and possibility functions, economies are regular. A regular economy has a finite number of equilibria, which locally depend on endowments and possibility functions in a continuous manner.

18.
Traditionally, the codomain of a utility function is the set of real numbers. This choice has the advantage of ensuring the existence of a continuous representation but does not allow one to represent many preference structures that are relevant to utility theory. Recently, some authors have started a systematic study of utility representations that are not real-valued, introducing the notion of a Debreu chain. We continue their analysis by defining two Debreu-like properties, which are connected to a local continuity of a utility representation. The classes of locally Debreu and pointwise Debreu chains introduced here enlarge the class of Debreu chains. We give several examples and analyze some properties of these two classes of chains, with particular attention to lexicographic products.

19.
A novel Bayesian method for inference in dynamic regression models is proposed where both the values of the regression coefficients and the importance of the variables are allowed to change over time. We focus on forecasting and so the parsimony of the model is important for good performance. A prior is developed which allows the shrinkage of the regression coefficients to suitably change over time and an efficient Markov chain Monte Carlo method for posterior inference is described. The new method is applied to two forecasting problems in econometrics: equity premium prediction and inflation forecasting. The results show that this method outperforms current competing Bayesian methods.

20.
The objective of this article is to propose a Bayesian method for estimating a system of Engel functions using survey data that include zero expenditures. We deal explicitly with the problem of zero expenditures in the model and estimate a system of Engel functions that satisfies the adding-up condition. Furthermore, using a Markov chain Monte Carlo method, we estimate unobservable parameters, including consumption of commodities, total consumption and the equivalence scale, and use their posterior distributions to calculate inequality measures and total consumption elasticities.
