Similar articles
20 similar articles found (search time: 15 ms)
1.
Equilibrium business cycle models typically have fewer shocks than variables. As pointed out by Altug (1989, International Economic Review 30(4), 889–920) and Sargent (1989, The Journal of Political Economy 97(2), 251–287), if variables are measured with error, this characteristic implies that the model solution for measured variables has a factor structure. This paper compares estimation performance for the impulse response coefficients based on a VAR approximation to this class of models with an estimation method that explicitly takes into account the restrictions implied by the factor structure. Bias and mean-squared error for both factor- and VAR-based estimates of impulse response functions are quantified using, as the data-generating process, a calibrated standard equilibrium business cycle model. We show that, at short horizons, VAR estimates of impulse response functions are less accurate than factor estimates, while the two methods perform similarly at medium and long horizons.

2.
We present examples based on actual and synthetic datasets to illustrate how simulation methods can mask identification problems in the estimation of discrete choice models such as mixed logit. Simulation methods approximate an integral (without a closed form) by taking draws from the underlying distribution of the random variable of integration. Our examples reveal how a low number of draws can generate estimates that appear identified but are in fact either not theoretically identified by the model or not empirically identified by the data. For the particular case of maximum simulated likelihood estimation, we investigate the underlying source of the problem by focusing on the shape of the simulated log-likelihood function under different conditions.
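Since the identification issue here hinges on the shape of the simulated log-likelihood when only a handful of draws are used, a minimal sketch may help fix ideas. The example below is my own Python illustration with made-up data, a single assumed random coefficient, and an arbitrary helper name `simulated_loglik`; it shows how a simulated likelihood built from R draws is evaluated and how the number of draws can be varied, and is not the authors' code.

```python
import numpy as np

def simulated_loglik(params, y, x, draws):
    """Simulated log-likelihood for a binary mixed logit with one
    random coefficient b_i ~ N(mu, sigma^2).  `draws` holds R standard
    normal draws per observation (shape: n x R)."""
    mu, log_sigma = params
    b = mu + np.exp(log_sigma) * draws          # n x R coefficient draws
    util = b * x[:, None]                       # n x R utilities
    p_draw = 1.0 / (1.0 + np.exp(-util))        # choice probability per draw
    p_sim = p_draw.mean(axis=1)                 # average over the R draws
    p_sim = np.clip(p_sim, 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p_sim) + (1 - y) * np.log(1 - p_sim))

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
true_b = 1.0 + 0.5 * rng.normal(size=n)          # heterogeneous coefficients
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_b * x))).astype(float)

# With very few draws (R = 5) the simulated log-likelihood can look
# informative about sigma even when the data carry little information
# about it; a larger R (say 1000) exposes the flatness.
for R in (5, 1000):
    draws = rng.normal(size=(n, R))
    grid = [simulated_loglik((1.0, np.log(s)), y, x, draws) for s in (0.1, 0.5, 1.0)]
    print(R, np.round(grid, 2))
```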

3.
Maximum likelihood (ML) estimation of probit models with correlated errors typically requires high-dimensional truncated integration. Prominent examples of such models are multinomial probit models and binomial panel probit models with serially correlated errors. In this paper we propose to use a generic procedure known as Efficient Importance Sampling (EIS) for the evaluation of likelihood functions for probit models with correlated errors. Our proposed EIS algorithm covers the standard GHK probability simulator as a special case. We perform a set of Monte Carlo experiments to illustrate the relative performance of both procedures for the estimation of a multinomial multiperiod probit model. Our results indicate substantial numerical efficiency gains for ML estimates based on the GHK–EIS procedure relative to those obtained by using the GHK procedure.
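The GHK probability simulator mentioned in the abstract can be sketched in a few lines. The code below is a generic illustration of GHK for a single rectangle probability P(Z < b) with Z multivariate normal, using the Cholesky factor and sequential truncated-normal draws; the covariance matrix, bounds and draw count are illustrative assumptions, and the EIS refinement studied in the paper is not reproduced.

```python
import numpy as np

def ghk_orthant_prob(Sigma, upper, R=1000, rng=None):
    """GHK simulator for P(Z < upper) with Z ~ N(0, Sigma), sampling the
    latent vector one truncated coordinate at a time via its Cholesky factor."""
    from scipy.stats import norm
    rng = rng or np.random.default_rng(0)
    L = np.linalg.cholesky(Sigma)
    m = len(upper)
    prob = np.ones(R)
    eta = np.zeros((R, m))
    for j in range(m):
        # conditional upper bound for the j-th standardized component
        cond = (upper[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
        pj = norm.cdf(cond)
        prob *= pj
        # draw a standard normal truncated below `cond` by inverse-CDF
        u = rng.uniform(size=R)
        eta[:, j] = norm.ppf(np.clip(u * pj, 1e-12, 1 - 1e-12))
    return prob.mean()

Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
print(ghk_orthant_prob(Sigma, upper=np.array([0.0, 0.0, 0.0])))
```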

4.
We develop new methods for the estimation of time-varying risk-neutral jump tails in asset returns. In contrast to existing procedures based on tightly parameterized models, our approach imposes far fewer structural assumptions, relying on extreme-value theory approximations together with short-maturity options. The new estimation approach explicitly allows the parameters characterizing the shape of the right and left tails to differ and, importantly, allows the tail shape parameters to change over time. Implementing the procedures on a panel of S&P 500 options, our estimates clearly suggest highly statistically significant temporal variation in both tails. We further relate this temporal variation in the shape and magnitude of the jump tails to the underlying return variation through simple time series models for the tail parameters.

5.
The familiar logit and probit models provide convenient settings for many binary response applications, but a larger class of link functions may occasionally be desirable. Two parametric families of link functions are investigated: the Gosset link, based on the Student t latent variable model with the degrees-of-freedom parameter controlling tail behavior, and the Pregibon link, based on the (generalized) Tukey λ family, with two shape parameters controlling skewness and tail behavior. Both Bayesian and maximum likelihood methods for estimation and inference are explored, compared and contrasted. In applications, such as the propensity score matching problem discussed below, where it is critical to have accurate estimates of the conditional probabilities, we find that misspecification of the link function can create serious bias. Bayesian point estimation via MCMC performs quite competitively with MLE methods; however, nominal coverage of Bayes credible regions is somewhat more problematic.

6.
The economic theory of option pricing imposes constraints on the structure of call functions and state price densities. Except in a few polar cases, it does not prescribe functional forms. This paper proposes a nonparametric estimator of option pricing models which incorporates various restrictions (such as monotonicity and convexity) within a single least squares procedure. The bootstrap is used to produce confidence intervals for the call function and its first two derivatives and to calibrate a residual regression test of shape constraints. We apply the techniques to option pricing data on the DAX.

7.
A commonly used defining property of long memory time series is the power-law decay of the autocovariance function. Some alternative methods of deriving this property are considered, working from the alternative definition in terms of a fractional pole in the spectrum at the origin. The methods considered involve (i) Fourier transforms of generalized functions, (ii) asymptotic expansions of Fourier integrals with singularities, (iii) direct evaluation using hypergeometric function algebra, and (iv) conversion to a simple gamma integral. The paper is largely pedagogical, but some novel methods and results involving complete asymptotic series representations are presented. The formulae are useful in many ways, including the calculation of long-run variation matrices for multivariate time series with long memory and the econometric estimation of such models.
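For the benchmark fractionally integrated noise ARFIMA(0, d, 0), the power-law decay can be checked numerically from the standard gamma-function expression for the autocovariance, γ(k) = σ²Γ(1−2d)Γ(k+d)/[Γ(d)Γ(1−d)Γ(k+1−d)]. The sketch below is my own illustration of that standard result, not material taken from the paper; it verifies that γ(k)·k^{1−2d} flattens out as k grows, i.e. γ(k) behaves like C·k^{2d−1}.

```python
import numpy as np
from scipy.special import gammaln

def arfima0d0_acov(k, d, sigma2=1.0):
    """Autocovariance at lag k of (1-L)^d x_t = eps_t (standard closed form)."""
    log_g = (gammaln(1 - 2 * d) + gammaln(k + d)
             - gammaln(d) - gammaln(1 - d) - gammaln(k + 1 - d))
    return sigma2 * np.exp(log_g)

d = 0.3
for k in (10, 100, 1000, 10000):
    gamma_k = arfima0d0_acov(k, d)
    # gamma(k) ~ C * k^(2d-1), so gamma(k) * k^(1-2d) should level off
    print(k, gamma_k * k ** (1 - 2 * d))
```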

8.
This paper proposes a common and tractable framework for analyzing fixed and random effects models, in particular constant-slope, variable-intercept designs. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or (iii) are allowed to be correlated with the regressors, when the same prior information on idiosyncratic parameters is introduced into all estimation methods, the resulting common slope estimator is the same across methods. These results are illustrated using the Grünfeld investment data with different prior distributions. Random effects estimates are shown to be more efficient than fixed effects estimates. This efficiency gain, however, comes at the cost of neglecting information obtained in the computation of the prior unknown variance of the idiosyncratic parameters.

9.
We show that exact computation of a family of ‘max weighted score’ estimators, including Manski’s max score estimator, can be achieved efficiently by reformulating them as mixed integer programs (MIPs) with disjunctive constraints. The advantage of our MIP formulation is that estimates are exact and can be computed in reasonable time using widely available solvers. In a classic work-trip mode choice application, our method delivers exact estimates that lead to a different economic interpretation of the data than previous heuristic estimates. In a small Monte Carlo study we find that our approach is computationally efficient for typical estimation problem sizes.
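A minimal big-M mixed-integer formulation of the maximum score problem can be written with an off-the-shelf modeller. The sketch below uses the open-source PuLP package with a generic big-M constraint and an assumed scale normalization β₁ = 1; it illustrates the MIP idea rather than reproducing the paper's disjunctive-constraint formulation, and the simulated data, bounds B and M, and tolerance eps are arbitrary.

```python
import numpy as np
import pulp

rng = np.random.default_rng(1)
n, k = 100, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, 0.5, -0.8])            # first coefficient normalized to 1
y = (X @ beta_true + rng.logistic(size=n) >= 0).astype(int)
s = 2 * y - 1                                      # +1 / -1 outcomes

B, eps = 5.0, 1e-4
M = float(np.abs(X).sum(axis=1).max() * B + 1.0)   # big-M bound on |x_i' beta|

prob = pulp.LpProblem("max_score", pulp.LpMaximize)
beta = [pulp.LpVariable(f"b{j}", lowBound=-B, upBound=B) for j in range(k)]
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]

prob += pulp.lpSum(z)                              # maximize number of correct signs
prob += beta[0] == 1                               # assumed scale normalization
for i in range(n):
    xb = pulp.lpSum(X[i, j] * beta[j] for j in range(k))
    # z_i = 1 is feasible only when s_i * x_i' beta >= eps
    prob += int(s[i]) * xb >= eps - M * (1 - z[i])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("score:", int(pulp.value(prob.objective)),
      "beta:", [round(pulp.value(b), 3) for b in beta])
```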

10.
We propose new information criteria for impulse response function matching estimators (IRFMEs). These estimators yield sampling distributions of the structural parameters of dynamic stochastic general equilibrium (DSGE) models by minimizing the distance between sample and theoretical impulse responses. First, we propose an information criterion to select only the responses that produce consistent estimates of the true but unknown structural parameters: the Valid Impulse Response Selection Criterion (VIRSC). The criterion is especially useful for mis-specified models. Second, we propose a criterion to select the impulse responses that are most informative about DSGE model parameters: the Relevant Impulse Response Selection Criterion (RIRSC). These criteria can be used in combination to select the subset of valid impulse response functions with minimal dimension that yields asymptotically efficient estimators. The criteria are general enough to apply to impulse responses estimated by VARs, local projections, and simulation methods. We show that the use of our criteria significantly affects estimates and inference about key parameters of two well-known new Keynesian DSGE models. Monte Carlo evidence indicates that the criteria yield gains in terms of finite-sample bias and offer test statistics whose behavior is better approximated by first-order asymptotic theory. Thus, our criteria improve existing methods used to implement IRFMEs.

11.
We introduce the matrix exponential as a way of modelling spatially dependent data. The matrix exponential spatial specification (MESS) simplifies the log-likelihood, allowing a closed-form solution to the maximum likelihood estimation problem, and greatly simplifies Bayesian estimation of the model. The MESS can produce estimates and inferences similar to those from conventional spatial autoregressive models, but has analytical, computational, and interpretive advantages. We present maximum likelihood and Bayesian approaches to the estimation of this spatial specification, along with methods of model comparison over different explanatory variables and spatial specifications.
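The computational advantage stems from the Jacobian term: ln|e^{αW}| = α·tr(W), which is zero for a zero-diagonal weight matrix, so the MESS log-likelihood needs no determinant evaluation. The sketch below is an illustration under assumed toy data and weights, not the authors' code; it concentrates the log-likelihood in α and searches it on a grid.

```python
import numpy as np
from scipy.linalg import expm

def mess_concentrated_loglik(alpha, y, X, W):
    """Concentrated log-likelihood of the MESS model  e^{alpha W} y = X beta + eps.
    ln|e^{alpha W}| = alpha * tr(W) = 0 when W has a zero diagonal, so no
    Jacobian/determinant term appears."""
    n = len(y)
    Sy = expm(alpha * W) @ y
    beta = np.linalg.lstsq(X, Sy, rcond=None)[0]
    resid = Sy - X @ beta
    sigma2 = resid @ resid / n
    return -0.5 * n * (np.log(2 * np.pi) + 1 + np.log(sigma2))

# toy row-standardized weight matrix with a zero diagonal
rng = np.random.default_rng(2)
n = 50
W = rng.uniform(size=(n, n)) * (rng.uniform(size=(n, n)) < 0.1)
np.fill_diagonal(W, 0.0)
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)

# a one-dimensional search over alpha is all that remains
alphas = np.linspace(-1.0, 1.0, 21)
best = max(alphas, key=lambda a: mess_concentrated_loglik(a, y, X, W))
print("alpha maximizing the concentrated log-likelihood:", best)
```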

12.
In this paper we propose an approach to both estimate and select unknown smooth functions in an additive model with potentially many functions. Each function is written as a linear combination of basis terms, with coefficients regularized by a proper linearly constrained Gaussian prior. Given any potentially rank-deficient prior precision matrix, we show how to derive linear constraints so that the corresponding effect is identified in the additive model. This allows a wide range of bases and precision matrices to be used in priors for regularization. By introducing indicator variables, each constrained Gaussian prior is augmented with a point mass at zero, thus allowing for function selection. Posterior inference is carried out using Markov chain Monte Carlo, and the smoothness of the functions results both from shrinkage through the constrained Gaussian prior and from model averaging. We show how using non-degenerate priors on the shrinkage parameters enables substantially more computationally efficient sampling schemes than would otherwise be possible. We show the favourable performance of our approach when compared to two contemporary alternative Bayesian methods. To highlight the potential of our approach in high-dimensional settings, we apply it to estimate two large seemingly unrelated regression models for intra-day electricity load. Both models feature a variety of univariate and bivariate functions that require different levels of smoothing and for which component selection is meaningful. Priors for the error disturbance covariances are selected carefully, and the empirical results provide a substantive contribution to the electricity load modelling literature in their own right.

13.
This paper provides a probabilistic and statistical comparison of the log-GARCH and EGARCH models, both of which rely on multiplicative volatility dynamics without positivity constraints. We compare the main probabilistic properties (strict stationarity, existence of moments, tails) of the EGARCH model, which are already known, with those of an asymmetric version of the log-GARCH. The quasi-maximum likelihood estimator of the log-GARCH parameters is shown to be strongly consistent and asymptotically normal. Similar estimation results are only available for the EGARCH(1,1) model, and under much stronger assumptions. The comparison is pursued via simulation experiments and estimation on real data.
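To make the multiplicative, positivity-free volatility dynamics concrete, the sketch below simulates one symmetric log-GARCH(1,1) path and one EGARCH(1,1) path; the parameter values are arbitrary illustrations, and the asymmetric log-GARCH version analysed in the paper is not reproduced.

```python
import numpy as np

def simulate_log_garch(n, omega, alpha, beta, rng):
    """log sigma_t^2 = omega + alpha*log eps_{t-1}^2 + beta*log sigma_{t-1}^2"""
    eps = np.zeros(n)
    log_s2 = 0.0
    for t in range(n):
        z = rng.standard_normal()
        eps[t] = np.exp(0.5 * log_s2) * z
        log_s2 = omega + alpha * np.log(eps[t] ** 2 + 1e-300) + beta * log_s2
    return eps

def simulate_egarch(n, omega, alpha, gamma, beta, rng):
    """log sigma_t^2 = omega + beta*log sigma_{t-1}^2 + alpha*(|z|-E|z|) + gamma*z"""
    eabs = np.sqrt(2 / np.pi)                    # E|z| for standard normal z
    eps = np.zeros(n)
    log_s2 = 0.0
    for t in range(n):
        z = rng.standard_normal()
        eps[t] = np.exp(0.5 * log_s2) * z
        log_s2 = omega + beta * log_s2 + alpha * (abs(z) - eabs) + gamma * z
    return eps

rng = np.random.default_rng(3)
x1 = simulate_log_garch(5000, omega=-0.1, alpha=0.05, beta=0.9, rng=rng)
x2 = simulate_egarch(5000, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95, rng=rng)
print("kurtosis log-GARCH:", round(float(np.mean(x1**4) / np.mean(x1**2)**2), 2))
print("kurtosis EGARCH:   ", round(float(np.mean(x2**4) / np.mean(x2**2)**2), 2))
```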

14.
Insufficient price variation seriously hampers many applications of consumer demand models. This paper examines the empirical performance of a potential remedy for this problem suggested by Lewbel (1989, 'Identification and estimation of equivalence scales under weak separability', Review of Economic Studies 56, 311–316): the construction of individual-specific price indices for bundles of goods. These individual-specific price indices allow for a population with heterogeneous preferences for goods within a given bundle. We confine ourselves to heterogeneous Cobb–Douglas within-bundle preferences, while between bundles we allow for several parametric and even general nonparametric specifications. In a variety of settings, we show that such prices produce better empirical results than those obtained through the traditional practice of using aggregate price indices. Our empirical analysis is based on the British Family Expenditure Survey data and uses several categories of food. In both parametric and nonparametric models, we obtain more precise estimates of parameters or functions, as well as economically more plausible results.

15.
We propose two new types of nonparametric tests for investigating multivariate regression functions. The tests are based on cumulative sums coupled with either minimum volume sets or inverse regression ideas, and involve no multivariate nonparametric regression estimation. The proposed methods facilitate investigating whether a multivariate regression function is (i) constant, (ii) of a bathtub shape, or (iii) of a given parametric form. The inference based on these tests may be further enhanced through associated diagnostic plots. Although the potential use of these ideas is much wider, in this paper we focus on inference for multivariate volatility functions, i.e. we test for (i) heteroscedasticity, (ii) the so-called ‘smiling effect’, and (iii) some parametric volatility models. The asymptotic behavior of the proposed tests is investigated, and practical feasibility is shown via simulation studies. We further illustrate our methods with real financial data.

16.
Microeconometric treatments of discrete choice under risk are typically homoscedastic latent variable models. Specifically, choice probabilities are given by preference functional differences (based on expected utility, rank-dependent utility, etc.) embedded in cumulative distribution functions. This approach has a problem: estimated utility function parameters meant to represent agents’ degree of risk aversion in the sense of Pratt (1964) do not imply the suggested “stochastically more risk averse” relation within such models. A new heteroscedastic model called “contextual utility” remedies this, and estimates in one data set suggest it explains (and especially predicts) as well as or better than other stochastic models.

17.
Under a conditional mean restriction, Das et al. (2003) considered nonparametric estimation of sample selection models. However, their method can only identify the outcome regression function up to a constant. In this paper we strengthen the conditional mean restriction to a symmetry restriction under which selection biases due to selection on unobservables can be eliminated through proper matching of propensity scores; consequently, we are able to identify and obtain consistent estimators for the average treatment effects and the structural regression functions. The results from a simulation study suggest that our estimators perform satisfactorily.

18.
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density whose first and second moments approximate the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density; for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.

19.
In this paper we investigate a spatial Durbin error model with finite distributed lags and consider Bayesian MCMC estimation of the model with a smoothness prior. We also study the corresponding Bayesian model selection procedure for the spatial Durbin error model, the spatial autoregressive (SAR) model, and the matrix exponential spatial specification (MESS) model. We derive expressions for the marginal likelihood of the three models, which greatly simplify the model selection procedure. Simulation results suggest that the Bayesian estimates of high-order spatial distributed lag coefficients are more precise than the maximum likelihood estimates. When the data are generated with a generally declining pattern or a unimodal pattern for the lag coefficients, the spatial Durbin error model captures the pattern better than the SAR and MESS models in most cases. We apply the procedure to study the effect of right-to-work (RTW) laws on manufacturing employment.

20.
Housing tenure choice has been the subject of a very large literature. Many studies have sought to estimate the effect of household income on the likelihood of home ownership. To date, however, no study has disaggregated the household income of married couples into its separate labor income components to see whether one partner’s income has a different effect than the other’s. Using a derived likelihood function to control for censoring in the wife’s income, this paper estimates the effect of the separate incomes on housing tenure choice, accounting for possible endogeneity of the wife’s income. For comparison, the paper also estimates the standard IV models, 2SLS and IV probit. While the results show no endogeneity of the wife’s income, ignoring the censoring of the endogenous variable (when a large fraction of observations are censored) can lead to biased coefficient estimates. The paper also confirms the importance of total household income, which has a larger effect than the disaggregated components.
