Similar Articles
20 similar articles retrieved.
1.
We show how the dynamic logit model for binary panel data may be approximated by a quadratic exponential model. Under the approximating model, simple sufficient statistics exist for the subject-specific parameters introduced to capture the unobserved heterogeneity between subjects. The latter must be distinguished from the state dependence which is accounted for by including the lagged response variable among the regressors. By conditioning on the sufficient statistics, we derive a pseudo conditional likelihood estimator of the structural parameters of the dynamic logit model, which is simple to compute. Asymptotic properties of this estimator are studied in detail. Simulation results show that the estimator is competitive in terms of efficiency with estimators recently proposed in the econometric literature.
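For orientation, the dynamic logit model referred to here is conventionally written as follows (a sketch in our own notation, not necessarily the authors'):
\[
p(y_{it}=1 \mid \alpha_i, x_{it}, y_{i,t-1}) = \frac{\exp(\alpha_i + x_{it}'\beta + y_{i,t-1}\gamma)}{1 + \exp(\alpha_i + x_{it}'\beta + y_{i,t-1}\gamma)},
\]
where \alpha_i is the subject-specific parameter capturing unobserved heterogeneity, \gamma measures state dependence through the lagged response, and (\beta, \gamma) are the structural parameters targeted by the pseudo conditional likelihood estimator.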

2.
The minimum discrimination information principle is used to identify an appropriate parametric family of probability distributions and the corresponding maximum likelihood estimators for binary response models. Estimators in the family subsume the conventional logit model and form the basis for a set of parametric estimation alternatives with the usual asymptotic properties. Sampling experiments are used to assess finite sample performance.

3.
High dimensionality comparable to sample size is common in many statistical problems. We examine covariance matrix estimation in the asymptotic framework in which the dimensionality p tends to ∞ as the sample size n increases. Motivated by the Arbitrage Pricing Theory in finance, a multi-factor model is employed to reduce dimensionality and to estimate the covariance matrix. The factors are observable and the number of factors K is allowed to grow with p. We investigate the impact of p and K on the performance of the model-based covariance matrix estimator. Under mild assumptions, we establish convergence rates and asymptotic normality of the model-based estimator. Its performance is compared with that of the sample covariance matrix. We identify situations under which the factor approach increases performance substantially or marginally. The impacts of covariance matrix estimation on optimal portfolio allocation and portfolio risk assessment are studied. The asymptotic results are supported by a thorough simulation study.
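As a rough sketch of the factor structure in question (our own notation, with p assets and K observable factors):
\[
y_{it} = b_{i1} f_{1t} + \cdots + b_{iK} f_{Kt} + \varepsilon_{it},
\qquad
\Sigma = B\,\operatorname{cov}(f)\,B' + \Sigma_{\varepsilon},
\]
where B is the p \times K matrix of factor loadings and \Sigma_{\varepsilon} is the covariance of the idiosyncratic errors; plugging in estimates of B, \operatorname{cov}(f) and \Sigma_{\varepsilon} gives the model-based covariance estimator whose behavior is compared with the sample covariance matrix as p and K grow with n.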

4.
In this paper, we introduce a new Poisson mixture model for count panel data where the underlying Poisson process intensity is determined endogenously by consumer latent utility maximization over a set of choice alternatives. This formulation accommodates the choice and count in a single random utility framework with desirable theoretical properties. Individual heterogeneity is introduced through a random coefficient scheme with a flexible semiparametric distribution. We deal with the analytical intractability of the resulting mixture by recasting the model as an embedding of infinite sequences of scaled moments of the mixing distribution, and newly derive their cumulant representations along with bounds on their rate of numerical convergence. We further develop an efficient recursive algorithm for fast evaluation of the model likelihood within a Bayesian Gibbs sampling scheme. We apply our model to a recent household panel of supermarket visit counts. We estimate the nonparametric density of three key variables of interest (price, driving distance, and their interaction) while controlling for a range of consumer demographic characteristics. We use this econometric framework to assess the opportunity cost of time and analyze the interaction between store choice, trip frequency, search intensity, and household and store characteristics. We also conduct a counterfactual welfare experiment and compute the compensating variation for a 10%–30% increase in Walmart prices.

5.
In this paper, we introduce a new flexible mixed model for multinomial discrete choice where the key individual- and alternative-specific parameters of interest are allowed to follow an assumption-free nonparametric density specification, while other alternative-specific coefficients are assumed to be drawn from a multivariate Normal distribution, which eliminates the independence of irrelevant alternatives assumption at the individual level. A hierarchical specification of our model allows us to break down a complex data structure into a set of submodels with the desired features that are naturally assembled in the original system. We estimate the model, using a Bayesian Markov Chain Monte Carlo technique with a multivariate Dirichlet Process (DP) prior on the coefficients with nonparametrically estimated density. We employ a “latent class” sampling algorithm, which is applicable to a general class of models, including non-conjugate DP base priors. The model is applied to supermarket choices of a panel of Houston households whose shopping behavior was observed over a 24-month period in years 2004–2005. We estimate the nonparametric density of two key variables of interest: the price of a basket of goods based on scanner data, and driving distance to the supermarket based on their respective locations. Our semi-parametric approach allows us to identify a complex multi-modal preference distribution, which distinguishes between inframarginal consumers and consumers who strongly value either lower prices or shopping convenience.
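A stylized version of such a specification, in our own notation (a sketch only, not the authors' exact model), is
\[
U_{ij} = x_{ij}'\beta_i + z_{ij}'\gamma_i + \varepsilon_{ij},
\qquad
\beta_i \sim G, \quad G \sim \mathrm{DP}(\alpha, G_0), \quad \gamma_i \sim N(\mu, \Omega),
\]
where household i chooses the alternative j with the highest utility U_{ij}, the distribution G of the key coefficients (here, on price and driving distance) is left nonparametric via the Dirichlet Process prior, and the Normally distributed coefficients \gamma_i relax the independence of irrelevant alternatives at the individual level.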

6.
7.
    
We correct the limit theory presented in an earlier paper by Hu and Phillips [2004a. Nonstationary discrete choice. Journal of Econometrics 120, 103–138] for nonstationary time series discrete choice models with multiple choices and thresholds. The new limit theory shows that, in contrast to the binary choice model with nonstationary regressors and a zero threshold where there are dual rates of convergence (n^{1/4} and n^{3/4}), all parameters including the thresholds converge at the rate n^{3/4}. The presence of nonzero thresholds therefore materially affects rates of convergence. Dual rates of convergence reappear when stationary variables are present in the system. Some simulation evidence is provided, showing how the magnitude of the thresholds affects finite sample performance. A new finding is that predicted probabilities and marginal effect estimates have finite sample distributions that manifest a pile-up, or increasing density, towards the limits of the domain of definition.

8.
    
We analyze the finite sample behavior of particular least squares (LS) estimators and a range of (generalized) method of moments (MM) estimators in panel data models with individual effects, a lagged dependent variable among the regressors, and another explanatory variable, which may itself be affected by lagged feedback from the dependent variable. Asymptotic expansions indicate how the order of magnitude of the bias of MM estimators tends to increase with the number of moment conditions exploited. They also provide analytic evidence on how the bias of the various estimators depends on the feedback and on other model characteristics, such as the prominence of the individual effects and the correlation between observed and unobserved heterogeneity. Simulation results corroborate the theoretical findings and reveal that, in small samples of models with dynamic feedback, none of the techniques examined dominates in terms of bias and mean squared error across all parametrizations considered.
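The generic setting can be sketched, in our own notation, as
\[
y_{it} = \gamma\, y_{i,t-1} + \beta\, x_{it} + \eta_i + \varepsilon_{it},
\]
where the individual effect \eta_i is removed by first-differencing and GMM-type estimators exploit moment conditions such as E[y_{i,t-s}\,\Delta\varepsilon_{it}] = 0 for s \ge 2 (with analogous conditions in x when it is predetermined rather than strictly exogenous). Since the number of such conditions grows with the number of time periods, the bias results above speak directly to the common practice of using all available instruments.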

9.
10.
    
In this paper, we investigate the effect of mean-nonstationarity on the first-difference generalized method of moments (FD-GMM) estimator in dynamic panel data models. We find that when data is mean-nonstationary and the variance of individual effects is significantly larger than that of disturbances, the FD-GMM estimator performs quite well. We demonstrate that this is because the correlation between the lagged dependent variable and instruments gets larger owing to the unremoved individual effects, i.e., instruments become strong. This implies that, under mean-nonstationarity, the FD-GMM estimator does not always suffer from the weak instruments problem even when data is persistent.

11.
Ornstein–Uhlenbeck models are continuous-time processes with broad applications in finance, for example as volatility processes in stochastic volatility models or as spread models in spread options and pairs trading. The paper presents a least squares estimator for the model parameter in a multivariate Ornstein–Uhlenbeck model driven by a multivariate regularly varying Lévy process with infinite variance. We show that the estimator is consistent. Moreover, we derive its asymptotic behavior and test statistics. The results are compared to the finite variance case. For the proof we require some new results on multivariate regular variation of products of random vectors and central limit theorems. Furthermore, we embed this model in the setup of a co-integrated model in continuous time.
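A minimal sketch of the setup, in our own notation: the multivariate Ornstein–Uhlenbeck process solves
\[
\mathrm{d}X_t = -\Lambda X_t\,\mathrm{d}t + \mathrm{d}L_t,
\]
where L is the driving Lévy process. Sampled on a grid of width \Delta, the process satisfies X_{k\Delta} = \mathrm{e}^{-\Lambda\Delta} X_{(k-1)\Delta} + \text{noise}, so a least squares regression of X_{k\Delta} on X_{(k-1)\Delta} is a natural estimator of \mathrm{e}^{-\Lambda\Delta} and hence of \Lambda; the heavy tails of L are what change its limit behavior relative to the finite variance case.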

12.
    
This paper derives an approximation of the mean square error (MSE) of the GMM estimator in dynamic panel data models. The approximation is based on higher-order asymptotic theory under double asymptotics. While first-order theory under double asymptotics provides information about the bias, it does not provide enough information about the variance of the estimator. Higher-order theory enables us to obtain information about the variance. From this result, a procedure for choosing the number of instruments is proposed. The simulations confirm that the proposed procedure improves the precision of the estimator.

13.
Common breaks in means and variances for panel data
This paper establishes the consistency of the estimated common break point in panel data. Consistency is obtainable even when a regime contains a single observation, making it possible to quickly identify the onset of a new regime. We also propose a new framework for developing the limiting distribution for the estimated break point, and show how to construct confidence intervals. The least squares method is used for estimating breaks in means, and the quasi-maximum likelihood (QML) method is used to estimate breaks in means and in variances. QML is shown to be more efficient than least squares even if there is no change in the variances.
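As a rough illustration of the least squares idea for a common break in panel means, here is a hypothetical minimal sketch (our own code, not the authors'): the common break date is estimated by minimizing the pooled sum of squared residuals over all candidate dates, allowing each series its own pre- and post-break mean.

import numpy as np

def common_break_in_means(y):
    # y: (N, T) panel; each series has its own pre- and post-break mean,
    # but the break date is common to all series.
    # Returns the 0-based index of the first post-break period.
    N, T = y.shape
    best_k, best_ssr = None, np.inf
    for k in range(1, T):  # candidate split: periods 0..k-1 vs. k..T-1
        pre, post = y[:, :k], y[:, k:]
        ssr = (((pre - pre.mean(axis=1, keepdims=True)) ** 2).sum()
               + ((post - post.mean(axis=1, keepdims=True)) ** 2).sum())
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

# Toy usage: 50 series of length 40 with a common mean shift at period 25.
rng = np.random.default_rng(0)
y = rng.normal(size=(50, 40))
y[:, 25:] += 1.0
print(common_break_in_means(y))  # typically prints 25

Because k ranges over 1, ..., T-1, a regime containing a single observation is allowed, mirroring the consistency claim above that the onset of a new regime can be identified quickly.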

14.
We study regression models that involve data sampled at different frequencies. We derive the asymptotic properties of the NLS estimators of such regression models and compare them with the LS estimators of a traditional model that aggregates, or equally weights, the data to estimate the model at a single sampling frequency. In addition, we propose new tests to examine the null hypothesis of equal weights in aggregating time series in a regression model. We explore the above theoretical aspects and verify them via an extensive Monte Carlo simulation study and an empirical application.
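One common way to write such a mixed-frequency regression, in our own notation, is
\[
y_t = \beta_0 + \beta_1 \sum_{j=0}^{m-1} w_j(\theta)\, x^{(m)}_{t-j/m} + u_t,
\qquad \sum_{j} w_j(\theta) = 1,
\]
where x^{(m)} is observed m times per low-frequency period and the weights w_j(\theta) are estimated jointly with the slope by NLS; the flat-weight case w_j = 1/m corresponds to simple aggregation of the high-frequency data, which is the null hypothesis examined by the proposed tests.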

15.
    
We propose a general two-step estimator for a popular Markov discrete choice model that includes a class of Markovian games with continuous observable state space. Our estimation procedure generalizes the computationally attractive methodology of Pesendorfer and Schmidt-Dengler (2008) that assumed finite observable states. This extension is non-trivial as the policy value functions are solutions to some type II integral equations. We show that the inverse problem is well-posed. We provide a set of primitive conditions to ensure root-T consistent estimation for the finite dimensional structural parameters and the distribution theory for the value functions in a time series framework.

16.
This paper investigates the properties of the well-known maximum likelihood estimator in the presence of stochastic volatility and market microstructure noise, by extending the classic asymptotic results of quasi-maximum likelihood estimation. When estimating the integrated volatility and the variance of noise, this parametric approach remains consistent, efficient and robust as a quasi-estimator under misspecified assumptions. Moreover, it shares the model-free feature with nonparametric alternatives, for instance realized kernels, while being advantageous over them in terms of finite sample performance. In light of its quadratic representation, this estimator behaves asymptotically like an iterative exponential realized kernel. Comparisons with a variety of implementations of the Tukey–Hanning2 kernel are provided using Monte Carlo simulations, and an empirical study with Euro/US Dollar futures illustrates its application in practice.

17.
    
We introduce a modified conditional logit model that takes account of uncertainty associated with mis-reporting in revealed preference experiments estimating willingness-to-pay (WTP). Like Hausman et al. [Journal of Econometrics (1998) Vol. 87, pp. 239–269], our model captures the extent and direction of uncertainty by respondents. Using a Bayesian methodology, we apply our model to a choice modelling (CM) data set examining UK consumer preferences for non-pesticide food. We compare the results of our model with the Hausman model. WTP estimates are produced for different groups of consumers and we find that modified estimates of WTP, which take account of mis-reporting, are substantially revised downwards. We find a significant proportion of respondents mis-reporting in favour of the non-pesticide option. Finally, with this data set, Bayes factors suggest that our model is preferred to the Hausman model.
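For context, the mis-reporting device of Hausman et al. for a binary response can be written, in our notation, as
\[
\Pr(\tilde{y}_i = 1 \mid x_i) = \alpha_0 + (1 - \alpha_0 - \alpha_1)\, F(x_i'\beta),
\]
where \tilde{y}_i is the reported response, F is the logistic cdf, \alpha_0 is the probability of reporting 1 when the true response is 0, and \alpha_1 is the probability of reporting 0 when the true response is 1; the estimated (\alpha_0, \alpha_1) capture the extent and direction of mis-reporting, an idea the modified conditional logit above carries over to the multi-alternative choice setting.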

18.
    
Let S be the number of components in a finite discrete mixing distribution. We prove that having at least 2S waves of panel data is a sufficient condition for global identification of a dynamic binary choice model in which all the parameters are heterogeneous. This model results in a mixture of S binary first-order Markov chains.

19.
    
This paper analyzes the S&P 500 index return variance dynamics and the variance risk premium by combining information in variance swap rates constructed from options and quadratic variation estimators constructed from tick data on S&P 500 index futures. Estimation shows that the index return variance jumps. The jump arrival rate is not constant over time, but is proportional to the variance rate level. The variance jumps are not rare events but arrive frequently. Estimation also identifies a strongly negative variance risk premium, the absolute magnitude of which is proportional to the variance rate level.

20.
We consider pseudo-panel data models constructed from repeated cross sections in which the number of individuals per group is large relative to the number of groups and time periods. First, we show that, when time-invariant group fixed effects are neglected, the OLS estimator does not converge in probability to a constant but rather to a random variable. Second, we show that, while the fixed-effects (FE) estimator is consistent, the usual t statistic is not asymptotically normally distributed, and we propose a new robust t statistic whose asymptotic distribution is standard normal. Third, we propose efficient GMM estimators using the orthogonality conditions implied by grouping and we provide t tests that are valid even in the presence of time-invariant group effects. Our Monte Carlo results show that the proposed GMM estimator is more precise than the FE estimator and that our new t test has good size and is powerful.
