Similar Articles
20 similar articles found (search time: 46 ms)
1.
The problem of estimating the weights associated with mixture distributions subject to several constraints, such as percentile and/or moment constraints, is analyzed using the general theory of polyhedral convex cones and systems of inequalities. We address three problems associated with constrained mixture distributions: (a) compatibility: a set of inequalities is obtained to check whether or not a given set of constraints leads to a feasible solution for the weights; (b) feasible solutions: a general expression is obtained for building feasible solutions for the weights associated with the given constraints; and (c) equivalence: the set of all feasible weights is obtained. In addition, the problem is shown to lead to a new mixture distribution without extra constraints. This new mixture distribution can then be easily used for statistical analysis (e.g. estimation and hypothesis testing) instead of the original mixture distribution with extra constraints. The proposed methods are illustrated by numerical examples.
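As a hedged illustration of problem (a), the compatibility check amounts to asking whether the polytope defined by linear equality and inequality constraints on the weights is non-empty, which a linear-programming solver with a zero objective answers directly. The component CDF values, moments, and bounds below are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example: a 3-component mixture with known component CDFs and
# means.  Constraints on the weights w: w >= 0, sum(w) = 1, a percentile
# constraint sum_k w_k * F_k(x0) = p0, and a moment constraint
# sum_k w_k * mu_k <= m_max.  Feasibility reduces to a linear program.

F_at_x0 = np.array([0.30, 0.55, 0.90])   # component CDFs at x0 (assumed)
mu = np.array([1.0, 2.5, 4.0])           # component means (assumed)
p0, m_max = 0.60, 2.8

A_eq = np.vstack([np.ones(3), F_at_x0])  # sum-to-one and percentile constraints
b_eq = np.array([1.0, p0])
A_ub = mu.reshape(1, -1)                 # moment upper bound
b_ub = np.array([m_max])

# Zero objective: we only ask whether the constraint polytope is non-empty.
res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
print("feasible:", res.success, "example weights:", res.x)
```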

2.
The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy‐to‐implement semiparametric estimation framework with parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root‐n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data‐generating processes. We then provide an empirical illustration where we estimate the effect of health insurance on doctor visits. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.

3.
Collusion and heterogeneity across firms may introduce asymmetry in bidding games. A major difficulty in asymmetric auctions is that the Bayesian Nash equilibrium strategies are solutions of an intractable system of differential equations. We propose a simple method for estimating asymmetric first‐price auctions with affiliated private values. Considering two types of bidders, we show that these differential equations can be rewritten using the observed bid distribution. We establish the identification of the model, characterize its theoretical restrictions, and propose a two‐step non‐parametric estimation procedure for estimating the private value distributions. An empirical analysis of joint bidding in OCS auctions is provided. Copyright © 2003 John Wiley & Sons, Ltd.
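As a rough sketch of the flavor of such two-step procedures: in the symmetric independent-private-values benchmark of Guerre, Perrigne and Vuong, the first step recovers pseudo private values from the observed bid distribution; the paper extends this logic to asymmetric bidders with affiliated values, using type-specific bid distributions in place of the common G and g below. Everything here is a simplified illustration, not the paper's estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

# First step in the spirit of Guerre-Perrigne-Vuong: recover pseudo private
# values from observed bids.  Shown for the symmetric IPV case with N bidders;
# the asymmetric, affiliated-values version replaces G and g with
# type-specific (conditional) bid distributions.

def pseudo_values(bids, n_bidders):
    bids = np.asarray(bids, float)
    g = gaussian_kde(bids)                             # kernel density of bids
    G = np.array([np.mean(bids <= b) for b in bids])   # empirical bid CDF
    return bids + G / ((n_bidders - 1) * g(bids))

rng = np.random.default_rng(0)
values = rng.uniform(0, 1, 500)
bids = values * (3 - 1) / 3          # equilibrium bids under uniform IPV, N=3
v_hat = pseudo_values(bids, n_bidders=3)
print("corr(recovered, true):", np.corrcoef(v_hat, values)[0, 1].round(3))
```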

4.
This article considers some of the technical issues involved in using the global vector autoregression (GVAR) approach to construct a multi‐country rational expectations (RE) model, and illustrates them with a new Keynesian model for 33 countries estimated with quarterly data over the period 1980–2011. The issues considered are: the measurement of steady states; the determination of exchange rates and the specification of the short‐run country‐specific models; the identification and estimation of the model subject to the theoretical constraints required for a determinate rational expectations solution; the solution of a large RE model; and the structure and estimation of the covariance matrix and the simulation of shocks. The model used as an illustration shows that global demand and supply shocks are the most important drivers of output, inflation and interest rates in the long run. By contrast, monetary or exchange rate shocks have only a short‐run impact on the evolution of the world economy. The article also shows the importance of international connections, directly as well as indirectly through spillover effects. Overall, ignoring global inter‐connections, as country‐specific models do, could give rise to misleading conclusions.

5.
We propose a new dynamic copula model in which the parameter characterizing dependence follows an autoregressive process. As this model class includes the Gaussian copula with stochastic correlation process, it can be viewed as a generalization of multivariate stochastic volatility models. Despite the complexity of the model, the decoupling of marginals and dependence parameters facilitates estimation. We propose estimation in two steps, where first the parameters of the marginal distributions are estimated, and then those of the copula. Parameters of the latent processes (volatilities and dependence) are estimated using efficient importance sampling. We discuss goodness‐of‐fit tests and ways to forecast the dependence parameter. For two bivariate stock index series, we show that the proposed model outperforms standard competing models. Copyright © 2010 John Wiley & Sons, Ltd.
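A minimal sketch of the two-step logic in its static special case, assuming normal marginals and a Gaussian copula: step one fits the marginals; step two maximizes the copula likelihood over the probability-integral transforms. The paper's dynamic model additionally lets the dependence parameter follow a latent autoregressive process estimated by efficient importance sampling, which this sketch does not attempt.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
x, y = z[:, 0], z[:, 1]

# Step 1: marginal estimation (here, normal marginals fit by moments).
u = norm.cdf(x, loc=x.mean(), scale=x.std())
v = norm.cdf(y, loc=y.mean(), scale=y.std())

# Step 2: maximize the Gaussian copula log-likelihood over rho, using the
# probability-integral transforms u and v from step 1.
def neg_copula_loglik(rho):
    a, b = norm.ppf(u), norm.ppf(v)
    return -np.sum(-0.5 * np.log(1 - rho**2)
                   - (rho**2 * (a**2 + b**2) - 2 * rho * a * b)
                     / (2 * (1 - rho**2)))

res = minimize_scalar(neg_copula_loglik, bounds=(-0.99, 0.99), method="bounded")
print("estimated rho:", round(res.x, 3))
```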

6.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units; for example, through a sampling step that selects geographically indexed clusters of units. A sampling‐weighted pseudo‐posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between co‐clustered units inflates the scale of the resulting pseudo‐posterior covariance matrix, which has been shown to induce undercoverage of the credible sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions are different between each of the pseudo‐maximum likelihood estimate (MLE), the pseudo‐posterior and the MLE under simple random sampling. Through insights from survey‐sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast postprocessing step to Markov chain Monte Carlo draws of the pseudo‐posterior distribution. This adjustment projects the pseudo‐posterior covariance matrix such that the nominal coverage is approximately achieved. We make an application to the National Survey on Drug Use and Health as a motivating example, and we demonstrate the efficacy of our scale and shape projection procedure on synthetic data for several common archetypes of survey designs.
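A hedged sketch of the postprocessing idea: an affine map of the MCMC draws about their mean so that their covariance matches a sandwich-type, design-consistent target. The plug-in matrices H_hat and J_hat below are stand-ins for design-based estimates of the weighted Hessian and the variance of the weighted score; the paper's exact construction of these matrices is not reproduced here.

```python
import numpy as np

def project_draws(draws, H_hat, J_hat):
    """Affine map of draws about their mean so cov(draws) -> H^-1 J H^-1."""
    theta_bar = draws.mean(axis=0)
    centered = draws - theta_bar
    Hinv = np.linalg.inv(H_hat)
    target = Hinv @ J_hat @ Hinv                 # sandwich covariance target
    L_t = np.linalg.cholesky(target)
    L_d = np.linalg.cholesky(np.cov(draws, rowvar=False))
    R = L_t @ np.linalg.inv(L_d)                 # maps current scale to target
    return theta_bar + centered @ R.T

# Hypothetical usage with fake draws and invented plug-in matrices:
rng = np.random.default_rng(2)
draws = rng.multivariate_normal([0.5, -1.0], [[0.04, 0.01], [0.01, 0.09]], 2000)
H_hat = np.array([[30.0, 5.0], [5.0, 20.0]])
J_hat = np.array([[45.0, 8.0], [8.0, 35.0]])
print(np.cov(project_draws(draws, H_hat, J_hat), rowvar=False).round(3))
```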

7.
Estimation of spatial autoregressive panel data models with fixed effects
This paper establishes asymptotic properties of quasi-maximum likelihood estimators for spatial autoregressive (SAR) panel data models with fixed effects and SAR disturbances. A direct approach is to estimate all the parameters, including the fixed effects. Because of the incidental parameter problem, some parameter estimators may be inconsistent or their distributions may not be properly centered. We propose an alternative estimation method based on transformation which yields consistent estimators with properly centered distributions. For the model with individual effects only, the direct approach does not yield a consistent estimator of the variance parameter unless T is large, but the estimators for the other common parameters are the same as those of the transformation approach. We also consider the estimation of the model with both individual and time effects.

8.
We develop new tests of the capital asset pricing model that take account of and are valid under the assumption that the distribution generating returns is elliptically symmetric; this assumption is necessary and sufficient for the validity of the CAPM. Our test is based on semiparametric efficient estimation procedures for a seemingly unrelated regression model where the multivariate error density is elliptically symmetric, but otherwise unrestricted. The elliptical symmetry assumption allows us to avoid the curse of dimensionality problem that typically arises in multivariate semiparametric estimation procedures, because the multivariate elliptically symmetric density function can be written as a function of a scalar transformation of the observed multivariate data. The elliptically symmetric family includes a number of thick‐tailed distributions and so is potentially relevant in financial applications. Our estimated betas are lower than the OLS estimates, and our parameter estimates are much less consistent with the CAPM restrictions than the corresponding OLS estimates. Copyright © 2002 John Wiley & Sons, Ltd.

9.
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small‐sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey‐based probability forecasts. Copyright © 2009 John Wiley & Sons, Ltd.
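As an illustration of the idea under quadratic (Brier) loss, one simple encompassing-style statistic regresses the first forecast's error on the difference between the rival and first forecasts, and tests whether the combination weight on the rival is zero. This is a sketch of the general logic, not the paper's exact statistic or its limiting distribution.

```python
import numpy as np
from scipy import stats

def encompassing_test(y, p1, p2):
    # Combined forecast (1 - lam) * p1 + lam * p2; under quadratic loss the
    # optimal lam is the no-intercept OLS slope of (y - p1) on (p2 - p1).
    x, e = p2 - p1, y - p1
    lam = np.sum(x * e) / np.sum(x**2)
    resid = e - lam * x
    se = np.sqrt(np.sum(resid**2) / (len(y) - 1) / np.sum(x**2))
    t_stat = lam / se
    return lam, t_stat, 2 * stats.norm.sf(abs(t_stat))

# Hypothetical data: a well-calibrated forecast vs an uninformative rival.
rng = np.random.default_rng(3)
p_true = rng.uniform(0.1, 0.9, 400)
y = (rng.uniform(size=400) < p_true).astype(float)
p1 = np.clip(p_true + rng.normal(0, 0.05, 400), 0.01, 0.99)
p2 = np.clip(0.5 + rng.normal(0, 0.10, 400), 0.01, 0.99)
print("lambda, t, p-value:", encompassing_test(y, p1, p2))
```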

10.
We study the filtering problem for the Heston stochastic volatility model using nonlinear estimation theory. To solve the estimation problem for the stochastic volatility process, we use the random time change method. The basic filtering equation derived is the so-called Zakai equation, and a numerical algorithm for solving it is proposed with the aid of the splitting-up method. For the European call option problem, the identification of the market price of volatility risk is also studied. Numerical simulation studies are presented to demonstrate the advantage of the proposed method.

11.
We consider a utility‐consistent static labor supply model with flexible preferences and a nonlinear and possibly non‐convex budget set. Stochastic error terms are introduced to represent optimization and reporting errors, stochastic preferences, and heterogeneity in wages. Coherency conditions on the parameters and the support of error distributions are imposed for all observations. The complexity of the model makes it impossible to write down the probability of participation. Hence we use simulation techniques in the estimation. We compare our approach with various simpler alternatives proposed in the literature. Both in Monte Carlo experiments and for real data the various estimation methods yield very different results. Copyright © 2008 John Wiley & Sons, Ltd.

12.
Our aim is to analyze the link between optimism and risk aversion in a subjective expected utility setting and to estimate the average level of optimism when weighted by risk tolerance. Its estimation leads to a non‐trivial statistical problem. We start from a large lottery survey (1536 individuals). We assume that individuals have true unobservable characteristics. We adopt a Bayesian approach and use a hybrid MCMC approximation method to numerically estimate the distributions of the unobservable characteristics. We find that individuals are on average pessimistic and that pessimism and risk tolerance are positively correlated. Copyright © 2008 John Wiley & Sons, Ltd.

13.
Applied econometricians often fail to impose economic regularity constraints in the exact form economic theory prescribes. We show how the Singular Value Decomposition (SVD) Theorem and Markov Chain Monte Carlo (MCMC) methods can be used to rigorously impose time‐ and firm‐varying equality and inequality constraints. To illustrate the technique we estimate a system of translog input demand functions subject to all the constraints implied by economic theory, including observation‐varying symmetry and concavity constraints. Results are presented in the form of characteristics of the estimated posterior distributions of functions of the parameters. Copyright © 2001 John Wiley & Sons, Ltd.

14.
Copulas are distributions with uniform marginals. Non‐parametric copula estimates may violate the uniformity condition in finite samples. We look at whether it is possible to obtain valid piecewise linear copula densities by triangulation. The copula property imposes strict constraints on design points, making an equi‐spaced grid a natural starting point. However, the mixed‐integer nature of the problem makes a pure triangulation approach impractical on fine grids. As an alternative, we study ways of approximating copula densities with triangular functions that guarantee the estimator is a valid copula density. The family of resulting estimators can be viewed as a non‐parametric MLE of B‐spline coefficients on possibly non‐equally spaced grids under simple linear constraints. As such, it can be easily solved using standard convex optimization tools and allows for a degree of localization. A simulation study shows the attractive performance of the estimator in small samples and compares it with some of the leading alternatives. We demonstrate the empirical relevance of our approach using three applications. In the first application, we investigate how the body mass index of children depends on that of their parents. In the second application, we construct a bivariate copula underlying the Gibson paradox from macroeconomics. In the third application, we show the benefit of using our approach in testing the null of independence against the alternative of an arbitrary dependence pattern.
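A hedged, zero-degree analogue of the estimator: where the paper fits piecewise linear B-spline copula densities by constrained maximum likelihood, the sketch below builds a piecewise-constant ("checkerboard") copula density by projecting empirical cell frequencies, in Kullback-Leibler divergence, onto the set of mass matrices with uniform marginals via iterative proportional fitting. It illustrates how uniformity can be enforced on a grid, not the paper's linear-spline estimator.

```python
import numpy as np
from scipy.stats import norm

def checkerboard_copula(u, v, m=8, iters=500):
    counts, _, _ = np.histogram2d(u, v, bins=m, range=[[0, 1], [0, 1]])
    theta = counts + 1e-8                    # cell masses; avoid empty cells
    for _ in range(iters):                   # IPF toward uniform marginals
        theta *= (1 / m) / theta.sum(axis=1, keepdims=True)   # fix row masses
        theta *= (1 / m) / theta.sum(axis=0, keepdims=True)   # fix col masses
    return theta * m**2                      # masses -> density values

rng = np.random.default_rng(4)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
c_hat = checkerboard_copula(norm.cdf(z[:, 0]), norm.cdf(z[:, 1]))
print("max |row mass - 1/m|:", np.abs(c_hat.sum(axis=1) / 8**2 - 1 / 8).max())
```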

15.
We propose two methods to choose the variables to be used in the estimation of the structural parameters of a singular DSGE model. The first selects the vector of observables that optimizes parameter identification; the second selects the vector that minimizes the informational discrepancy between the singular and non‐singular model. An application to a standard model is discussed and the estimation properties of different setups compared. Practical suggestions for applied researchers are provided. Copyright © 2014 John Wiley & Sons, Ltd.

16.
This paper considers estimation of discrete choice models when agents report their ranking of the alternatives (or some of them) rather than just the utility‐maximizing alternative. We investigate the parametric conditional rank‐ordered logit model. We show that the conditions for identification do not change even if we observe rankings. Moreover, we fill a gap in the literature and show, analytically and by Monte Carlo simulations, that efficiency increases as we use the additional information in the ranking.
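A minimal sketch of the likelihood behind the conditional rank-ordered ("exploded") logit: the probability of an observed ranking is a product of standard logit choice probabilities, with already-ranked alternatives removed from the choice set at each stage. The data and dimensions below are invented for illustration.

```python
import numpy as np

def rank_ordered_logit_loglik(beta, X, rankings):
    """X: (n, J, K) covariates; rankings: (n, J) alternatives best-to-worst."""
    ll = 0.0
    for Xi, rank in zip(X, rankings):
        util = Xi @ beta                     # systematic utilities, shape (J,)
        remaining = list(rank)
        for chosen in rank[:-1]:             # last stage is degenerate
            u = util[remaining]
            # Stable log of the logit probability of 'chosen' among 'remaining'.
            ll += util[chosen] - u.max() - np.log(np.exp(u - u.max()).sum())
            remaining.remove(chosen)
    return ll

# Hypothetical usage: 3 alternatives, 2 covariates, fully ranked choices.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3, 2))
beta_true = np.array([1.0, -0.5])
eps = rng.gumbel(size=(200, 3))
rankings = np.argsort(-(X @ beta_true + eps), axis=1)   # best to worst
print("log-likelihood at true beta:", rank_ordered_logit_loglik(beta_true, X, rankings))
```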

17.
We develop in this paper a novel portfolio selection framework with a feature of double robustness, in both return distribution modeling and portfolio optimization. While predicting future return distributions always represents the most compelling challenge in investment, any underlying distribution can always be well approximated by a mixture distribution, provided the component list of the mixture includes all possible distributions corresponding to the scenario analysis of potential market modes. Adopting a mixture distribution enables us to (1) reduce the problem of distribution prediction to a parameter estimation problem, in which the mixture weights are estimated under a Bayesian learning scheme and the corresponding credible regions of the mixture weights are obtained as well, and (2) harmonize information from different channels, such as historical data, market-implied information and investors' subjective views. We further formulate a robust mean-CVaR portfolio selection problem to deal with the inherent uncertainty in predicting future return distributions. By employing duality theory, we show that the robust portfolio selection problem via learning with a mixture model can be reformulated as a linear program or a second-order cone program, which can be effectively solved in polynomial time. We present the results of simulation analyses and preliminary empirical tests to illustrate the significance of the proposed approach and demonstrate its pros and cons.
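A minimal sketch of the scenario-based mean-CVaR problem in its standard Rockafellar-Uryasev linear-programming form, with scenarios drawn from a hypothetical two-mode mixture ("calm" vs "stressed" markets). The paper's full machinery (Bayesian learning of the mixture weights and robustification over their credible region) is omitted here; all parameters are assumptions for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
S, n, beta = 500, 4, 0.95
calm = rng.normal(0.001, 0.01, (S, n))          # assumed market-mode returns
stress = rng.normal(-0.002, 0.03, (S, n))
r = np.where(rng.uniform(size=(S, 1)) < 0.8, calm, stress)  # scenarios (S, n)
mu, target = r.mean(axis=0), r.mean()           # require >= equal-weight mean

# Decision vector x = [w (n), alpha (1), z (S)]; CVaR = alpha + mean(z)/(1-beta).
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1 / ((1 - beta) * S))])
A_ub = np.vstack([
    np.hstack([-r, -np.ones((S, 1)), -np.eye(S)]),    # z_s >= -r_s.w - alpha
    np.concatenate([-mu, np.zeros(S + 1)])[None, :],  # mean return >= target
])
b_ub = np.append(np.zeros(S), -target)
A_eq = np.concatenate([np.ones(n), np.zeros(S + 1)])[None, :]  # full investment
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * n + [(None, None)] + [(0, None)] * S)
print("weights:", res.x[:n].round(3), " CVaR:", round(res.fun, 5))
```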

18.
This article deals with heterogeneity and spatial dependence in economic growth analysis by developing a two‐stage strategy that identifies clubs by a mapping analysis and estimates a club convergence model with spatial dependence. Since estimation of this class of convergence models in the presence of regional heterogeneity poses both identification and collinearity problems, we develop an entropy‐based estimation procedure that simultaneously takes account of ill‐posed and ill‐conditioned inference problems. The two‐step strategy is applied to assess the existence of club convergence and to estimate a two‐club spatial convergence model across Italian regions over the period 1970 to 2000.

19.
We present examples based on actual and synthetic datasets to illustrate how simulation methods can mask identification problems in the estimation of discrete choice models such as mixed logit. Simulation methods approximate an integral (without a closed form) by taking draws from the underlying distribution of the random variable of integration. Our examples reveal how a low number of draws can generate estimates that appear identified but are, in fact, either not theoretically identified by the model or not empirically identified by the data. For the particular case of maximum simulated likelihood estimation, we investigate the underlying source of the problem by focusing on the shape of the simulated log-likelihood function under different conditions.
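A minimal sketch of the mechanism, assuming a binary mixed logit with one random coefficient: the choice probability is an integral over the coefficient distribution, approximated by averaging logit probabilities over R draws. With few draws, the simulated probability moves with a parameter through simulation noise, which can make a weakly identified parameter appear identified.

```python
import numpy as np

def simulated_choice_prob(x, mean_b, sd_b, R, seed=0):
    # Average binary-logit probabilities over R draws of the random coefficient.
    draws = mean_b + sd_b * np.random.default_rng(seed).normal(size=R)
    return float(np.mean(1 / (1 + np.exp(-draws * x))))

x = 1.0
for R in (10, 10_000):
    probs = [simulated_choice_prob(x, 0.5, sd, R) for sd in (0.0, 1.0, 2.0)]
    print(f"R={R:>6}: simulated P(choice) for sd_b in (0, 1, 2):",
          np.round(probs, 4))
# At R = 10, much of the apparent sensitivity to sd_b reflects simulation
# noise rather than genuine curvature of the likelihood in that parameter.
```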

20.
Recently, single‐equation estimation by the generalized method of moments (GMM) has become popular in the monetary economics literature for estimating forward‐looking models with rational expectations. We discuss a method for analysing the empirical identification of such models that exploits their dynamic structure and the assumption of rational expectations. This allows us to judge the reliability of the resulting GMM estimation and inference and reveals the potential sources of weak identification. With reference to the New Keynesian Phillips curve of Galí and Gertler [Journal of Monetary Economics (1999) Vol. 44, 195] and the forward‐looking Taylor rules of Clarida, Galí and Gertler [Quarterly Journal of Economics (2000) Vol. 115, 147], we demonstrate that the usual ‘weak instruments’ problem can arise naturally when the predictable variation in inflation is small relative to unpredictable future shocks (news). Hence, we conclude that these models are less reliably estimated over periods when inflation has been under effective policy control.
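A minimal sketch of the weak-instrument phenomenon in a just-identified linear GMM (IV) setting, purely simulated: when the instrument explains little of the endogenous regressor's variation, the analogue of inflation having little predictable variation, the estimator becomes badly behaved. This illustrates the general point, not the paper's Phillips-curve or Taylor-rule specifications.

```python
import numpy as np

def iv_estimate(y, x, z):
    return (z @ y) / (z @ x)       # just-identified GMM/IV estimator

rng = np.random.default_rng(7)
n, true_beta = 500, 1.0
for strength in (1.0, 0.05):       # strong vs weak instrument
    est = []
    for _ in range(500):
        z = rng.normal(size=n)
        common = rng.normal(size=n)             # endogeneity source
        x = strength * z + common + rng.normal(size=n)
        y = true_beta * x + common + rng.normal(size=n)
        est.append(iv_estimate(y, x, z))
    print(f"instrument strength {strength}: median {np.median(est):+.3f}, "
          f"IQR {np.subtract(*np.percentile(est, [75, 25])):.3f}")
# With the weak instrument, the IV estimates are dispersed and median-biased
# toward the inconsistent OLS value, despite the moment condition being valid.
```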
