Similar Literature
20 similar documents found
1.
Abstract

This paper considers the problem of prediction in a panel data regression model with spatial autocorrelation in the context of a simple demand equation for liquor. This is based on a panel of 43 states over the period 1965–1994. The spatial autocorrelation due to neighbouring states and the individual heterogeneity across states is taken explicitly into account. We compare the performance of several predictors of the states’ demand for liquor for 1 year and 5 years ahead. The estimators whose predictions are compared include OLS, fixed effects ignoring spatial correlation, fixed effects with spatial correlation, random-effects GLS estimator ignoring spatial correlation and random-effects estimator accounting for the spatial correlation. Based on RMSE forecast performance, estimators that take into account spatial correlation and heterogeneity across the states perform the best for forecasts 1 year ahead. However, for forecasts 2–5 years ahead, estimators that take into account the heterogeneity across the states yield the best forecasts.

2.
This paper proposes a framework to model welfare effects that are associated with a price change in a population of heterogeneous consumers. The framework is similar to that of Hausman and Newey (Econometrica, 1995, 63, 1445–1476), but allows for more general forms of heterogeneity. Individual demands are characterized by a general model that is nonparametric in the regressors, as well as monotonic in unobserved heterogeneity, allowing us to identify the distribution of welfare effects. We first argue why a decision maker should care about this distribution. Then we establish constructive identification, propose a sample counterparts estimator, and analyze its large-sample properties. Finally, we apply all concepts to measuring the heterogeneous effect of a change of gasoline price using US consumer data and find very substantial differences in individual effects across quantiles.
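As a minimal illustration of the kind of object this entry studies, the sketch below computes the Marshallian consumer-surplus loss from a price increase for a single consumer with a known demand curve. The demand specification, income value, and prices are hypothetical, and the paper's exact nonparametric, heterogeneity-robust welfare measure is not implemented here.

```python
# Hypothetical illustration: consumer-surplus loss from a price increase,
# computed by integrating an assumed individual demand curve over the price change.
# This is NOT the paper's nonparametric estimator; demand form and prices are assumptions.
from scipy.integrate import quad

def demand(p, income=50000.0):
    """Assumed log-linear gasoline demand (gallons/year) for one consumer."""
    return 1200.0 * p ** (-0.4) * (income / 50000.0) ** 0.3

p_old, p_new = 3.00, 3.60  # hypothetical price change (dollars/gallon)

# Marshallian surplus change: integral of demand between the two prices.
loss, _ = quad(demand, p_old, p_new)
print(f"Approximate welfare loss for this consumer: ${loss:.0f} per year")
```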

3.
In recent years, we have seen an increased interest in the penalized likelihood methodology, which can be efficiently used for shrinkage and selection purposes. This strategy can also result in unbiased, sparse, and continuous estimators. However, the performance of the penalized likelihood approach depends on the proper choice of the regularization parameter, so it is important to select it appropriately. To this end, the generalized cross-validation method is commonly used. In this article, we first propose new estimates of the norm of the error in the generalized linear models framework, through the use of Kantorovich inequalities. These estimates are then used to derive a tuning parameter selector for penalized generalized linear models. Unlike the standard methods, the proposed method does not depend on resampling and therefore yields a considerable gain in computational time while producing improved results. A thorough simulation study is conducted to support the theoretical findings, and a comparison of the penalized methods with the L1, the hard thresholding, and the smoothly clipped absolute deviation penalty functions is performed for the cases of penalized logistic regression and penalized Poisson regression. A real data example is analyzed, and a discussion follows. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.
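For orientation only, the sketch below shows the standard generalized cross-validation (GCV) criterion being minimized over a penalty grid for ridge regression on synthetic data. It is the familiar baseline the abstract refers to, not the authors' Kantorovich-inequality-based selector, and the data and grid are assumptions.

```python
# Baseline illustration of generalized cross-validation (GCV) for choosing the
# ridge penalty in a linear model; synthetic data, not the paper's
# Kantorovich-inequality-based selector for penalized GLMs.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))
y = X @ beta + rng.standard_normal(n)

def gcv(lam):
    """GCV(lambda) = mean squared residual / (1 - tr(H)/n)^2 for ridge."""
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
    resid = y - H @ y
    df = np.trace(H)                                          # effective dof
    return (resid @ resid / n) / (1.0 - df / n) ** 2

grid = np.logspace(-3, 3, 50)
best = min(grid, key=gcv)
print(f"GCV-selected ridge penalty: {best:.4g}")
```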

4.
In dynamic panel regression, when the variance ratio of individual effects to disturbance is large, the system-GMM estimator will have large asymptotic variance and poor finite sample performance. To deal with this variance ratio problem, we propose a residual-based instrumental variables (RIV) estimator, which uses as the instrument for the level equation the residual from a regression of Δy_{i,t−1}. The proposed RIV estimator is consistent and asymptotically normal under general assumptions. More importantly, its asymptotic variance is almost unaffected by the variance ratio of individual effects to disturbance. Monte Carlo simulations show that the RIV estimator has better finite sample performance than alternative estimators. The RIV estimator generates less finite sample bias than difference-GMM, system-GMM, collapsing-GMM and Level-IV estimators in most cases. Under RIV estimation, the variance ratio problem is well controlled, and the empirical distribution of its t-statistic is similar to the standard normal distribution for moderate sample sizes.

5.
This paper explores the possibility of using market data to identify consumer preferences. A utility function composed of ‘homogeneous’ characteristics and goods-specific effects is used as a basic link between the goods space and the characteristics space. The functional form for the hedonic price equation, the data requirements and issues of measurement errors for estimating demand and supply of characteristics are discussed. We illustrate the methodology by considering US automobile demand using 1969–86 data compiled from Consumer Reports and Ward's Automotive Yearbook.

6.
In a bid to reduce greenhouse gas emissions, several countries worldwide are implementing policies to promote electric vehicles (EVs). However, contrary to expectations, the diffusion speed of EVs has been rather slow in South Korea. This study analyzes consumer preferences for the technological and environmental attributes of EVs and derives policy and environmental implications to promote market diffusion of EVs in South Korea. We conduct a choice-based conjoint survey of 1,008 consumers in South Korea and estimate the consumer utility function using a mixed logit model considering consumer heterogeneity. Based on the consumer utility function, we analyze consumers' willingness-to-pay (WTP) for EV attributes such as driving range, charging method, charging time, autonomous driving function, carbon dioxide (CO2) reduction rate, and purchase price. The results indicate that the current low acceptance of EVs is due to their relatively high price and lack of a battery charging technology that satisfies consumers' expectations of the charging method and time. One interesting finding is that Korean consumers have a relatively higher WTP for the CO2 reduction rate of EVs than consumers in other countries; however, they do not consider CO2 reduction over other technological attributes when choosing EVs. This implies that the rate of CO2 reduction of EVs is not an important factor for South Korean consumers when buying EVs. We also calculate the effect of CO2 reduction with the market penetration of EVs and find that CO2 reduction through the diffusion of EVs depends on the country's electricity generation mix.
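The WTP calculation this entry relies on reduces, for a logit-type utility, to the ratio of an attribute coefficient to the negative of the price coefficient. The sketch below shows only that final step; all coefficient values and units are hypothetical placeholders, not estimates from the study.

```python
# Sketch of the willingness-to-pay (WTP) step used with a (mixed) logit utility:
# WTP for an attribute = attribute coefficient / (- price coefficient).
# All coefficient values below are hypothetical placeholders.
mean_coefs = {
    "driving_range_per_100km": 0.45,
    "fast_charging": 0.80,
    "autonomous_driving": 0.30,
    "co2_reduction_per_10pct": 0.12,
    "price_per_million_krw": -0.06,   # price enters utility negatively
}

price_coef = mean_coefs["price_per_million_krw"]
for attr, coef in mean_coefs.items():
    if attr == "price_per_million_krw":
        continue
    wtp = -coef / price_coef  # in million KRW, the assumed price unit
    print(f"WTP for {attr}: {wtp:.2f} million KRW")
```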

7.
We analyse the finite sample properties of maximum likelihood estimators for dynamic panel data models. In particular, we consider transformed maximum likelihood (TML) and random effects maximum likelihood (RML) estimation. We show that TML and RML estimators are solutions to a cubic first-order condition in the autoregressive parameter. Furthermore, in finite samples both likelihood estimators might lead to a negative estimate of the variance of the individual-specific effects. We consider different approaches taking into account the non-negativity restriction for the variance. We show that these approaches may lead to a solution different from the unique global unconstrained maximum. In an extensive Monte Carlo study we find that this issue is non-negligible for small values of T and that different approaches might lead to different finite sample properties. Furthermore, we find that the Likelihood Ratio statistic provides size control in small samples, albeit with low power due to the flatness of the log-likelihood function. We illustrate these issues modelling US state level unemployment dynamics.

8.
We study the panel dynamic ordinary least squares (DOLS) estimator of a homogeneous cointegration vector for a balanced panel of N individuals observed over T time periods. Allowable heterogeneity across individuals includes individual-specific time trends, individual-specific fixed effects and time-specific effects. The estimator is fully parametric, computationally convenient, and more precise than the single equation estimator. For fixed N as T→∞, the estimator converges to a function of Brownian motions and the Wald statistic for testing a set of s linear constraints has a limiting χ2(s) distribution. The estimator also has a Gaussian sequential limit distribution that is obtained first by letting T→∞ and then letting N→∞. In a series of Monte Carlo experiments, we find that the asymptotic distribution theory provides a reasonably close approximation to the exact finite sample distribution. We use panel DOLS to estimate coefficients of the long-run money demand function from a panel of 19 countries with annual observations that span from 1957 to 1996. The estimated income elasticity is 1.08 (asymptotic s.e. = 0.26) and the estimated interest rate semi-elasticity is −0.02 (asymptotic s.e. = 0.01).
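To make the mechanics concrete, the sketch below runs a single-equation DOLS regression on simulated cointegrated data: the level of y is regressed on the level of x plus leads and lags of Δx. The panel version described in the abstract, with individual/time effects and pooling across units, is not reproduced; the data-generating process and lead/lag order are assumptions.

```python
# Minimal single-equation DOLS sketch: regress y_t on x_t plus leads and lags
# of Delta x_t to correct for endogeneity of the I(1) regressor.
# Simulated cointegrated data; not the paper's panel DOLS with fixed/time effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T, k = 200, 2                                   # sample size and number of leads/lags

x = np.cumsum(rng.standard_normal(T))           # I(1) regressor
y = 0.5 + 1.0 * x + rng.standard_normal(T)      # cointegrated with true slope 1

dx = np.diff(x, prepend=x[0])                   # first differences of x
rows = np.arange(k, T - k)                      # usable time periods
Z = np.column_stack([dx[rows + j] for j in range(-k, k + 1)])   # leads/lags of dx
X = sm.add_constant(np.column_stack([x[rows], Z]))

res = sm.OLS(y[rows], X).fit()
print(f"DOLS estimate of the cointegrating slope: {res.params[1]:.3f}")
```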

9.
This paper proposes maximum likelihood estimators for panel seemingly unrelated regressions with both spatial lag and spatial error components. We study the general case where spatial effects are incorporated via spatial error terms and via a spatial lag dependent variable, and where the heterogeneity in the panel is incorporated via an error component specification. We generalize the approach of Wang and Kockelman (2007) and propose joint and conditional Lagrange multiplier tests for spatial autocorrelation and random effects for this spatial SUR panel model. The small sample performance of the proposed estimators and tests is examined using Monte Carlo experiments. An empirical application to hedonic housing prices in Paris illustrates these methods. The proposed specification uses a system of three SUR equations corresponding to three types of flats within 80 districts of Paris over the period 1990–2003. We test for spatial effects and heterogeneity and find reasonable estimates of the shadow prices for housing characteristics.

10.
The present paper shows that cross-section demeaning with respect to time fixed effects is more useful than commonly appreciated, in that it enables consistent and asymptotically normal estimation of interactive effects models with heterogeneous slope coefficients when the number of time periods, T, is small and only the number of cross-sectional units, N, is large. This is important when using OLS but also when using more sophisticated estimators of interactive effects models whose validity does not require demeaning, a point that to the best of our knowledge has not been made before in the literature. As an illustration, we consider the problem of estimating the average treatment effect in the presence of unobserved time-varying heterogeneity. Gobillon and Magnac (2016) recently considered this problem. They employed a principal components-based approach designed to deal with general unobserved heterogeneity, which does not require fixed effects demeaning. The approach does, however, require that T is large, which is typically not the case in practice, and the results reported here confirm that the performance can be extremely poor in small-T samples. The exception is when the approach is applied to data that have been demeaned with respect to fixed effects.

11.
We consider the Case 1 interval censoring approach for right-censored survival data. An important feature of the model is that right-censored event times are not observed exactly, but at some inspection times. The model covers as particular cases right-censored data, current status data, and life table survival data with a single inspection time. We discuss the nonparametric estimation approach and consider three nonparametric estimators for the survival function of failure time: maximum likelihood, pseudolikelihood, and the naïve estimator. We establish strong consistency of the estimators with the L1 rate of convergence. Simulation results confirm consistency of the estimators.
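A standard way to compute a nonparametric estimate of the failure-time distribution from current status data is an isotonic regression of the status indicators on the inspection times. The sketch below uses that well-known reduction on simulated data; it is not the authors' implementation of the three estimators they compare, and the simulation design is assumed.

```python
# Sketch: nonparametric estimate of the failure-time distribution from current
# status data via isotonic regression of the status indicators on the inspection
# times (a standard reduction; simulated data, not the authors' code).
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
n = 500
failure_time = rng.exponential(scale=2.0, size=n)    # never observed directly
inspection = rng.uniform(0.0, 5.0, size=n)           # observed inspection times
delta = (failure_time <= inspection).astype(float)   # status at inspection

order = np.argsort(inspection)
F_hat = IsotonicRegression(y_min=0.0, y_max=1.0).fit_transform(
    inspection[order], delta[order]
)  # estimated CDF of failure time at the sorted inspection times

print("Estimated F at the median inspection time:", round(F_hat[n // 2], 3))
```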

12.
Standard jackknife confidence intervals for a quantile Q_y(β) are usually preferred to confidence intervals based on analytical variance estimators due to their operational simplicity. However, the standard jackknife confidence intervals can give undesirable coverage probabilities for small sample sizes and large or small values of β. In this paper, confidence intervals for a population quantile based on several existing estimators of a quantile are derived. These intervals are based on an approximation for the cumulative distribution function of a studentized quantile estimator. The confidence intervals are empirically evaluated by using real data, and some applications are illustrated. Results derived from simulation studies show that the proposed confidence intervals are narrower than confidence intervals based on the standard jackknife technique, which relies on a normal approximation. The proposed confidence intervals also achieve coverage probabilities above their nominal level. This study indicates that the proposed method can be an alternative to the asymptotic confidence intervals, which can be unknown in practice, and the standard jackknife confidence intervals, which can have poor coverage probabilities and give wider intervals.
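For reference, the sketch below implements the benchmark this entry argues against: a delete-1 jackknife variance estimate for a sample quantile combined with a normal-approximation interval. The data, quantile level, and confidence level are assumptions; the paper's studentized-approximation intervals are not implemented.

```python
# Standard delete-1 jackknife confidence interval for a population quantile
# (the benchmark the paper compares against, not the proposed intervals).
# Simulated data; normal approximation for the interval.
import numpy as np

rng = np.random.default_rng(3)
y = rng.lognormal(mean=0.0, sigma=1.0, size=300)
beta = 0.9                                   # quantile level

q_hat = np.quantile(y, beta)

# Delete-1 jackknife replicates of the quantile estimator
n = len(y)
reps = np.array([np.quantile(np.delete(y, i), beta) for i in range(n)])
var_jack = (n - 1) / n * np.sum((reps - reps.mean()) ** 2)

z = 1.96                                     # 95% normal critical value
lo, hi = q_hat - z * np.sqrt(var_jack), q_hat + z * np.sqrt(var_jack)
print(f"Jackknife 95% CI for the {beta:.0%} quantile: [{lo:.3f}, {hi:.3f}]")
```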

13.
Typical data that arise from surveys, experiments, and observational studies include continuous and discrete variables. In this article, we study the interdependence among a mixed (continuous, count, ordered categorical, and binary) set of variables via graphical models. We propose an ℓ1-penalized extended rank likelihood with an ascent Monte Carlo expectation maximization approach for copula Gaussian graphical models and establish near conditional independence relations and zero elements of a precision matrix. In particular, we focus on high-dimensional inference where the number of observations is of the same order as, or smaller than, the number of variables under consideration. To illustrate how to infer networks for mixed variables through conditional independence, we consider two datasets: one in the area of sports and the other concerning breast cancer.
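As a rough stand-in, the sketch below transforms mixed-type variables to normal scores via ranks and then fits a graphical lasso to recover zeros of the precision matrix. This is a cruder approximation than the ℓ1-penalized extended rank likelihood with Monte Carlo EM described above, which treats discrete margins more carefully; the simulated data and penalty level are assumptions.

```python
# Rough stand-in: rank-based normal-scores transform of mixed variables,
# followed by a graphical lasso to estimate the sparsity pattern of the
# precision matrix. Not the paper's extended-rank-likelihood MCEM approach.
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(4)
n, p = 300, 6
cov = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # AR(1)-type
latent = rng.multivariate_normal(np.zeros(p), cov, size=n)

X = latent.copy()
X[:, 0] = (X[:, 0] > 0).astype(float)                # binary margin
X[:, 1] = np.digitize(X[:, 1], [-1.0, 0.0, 1.0])     # ordered categorical margin

# Normal-scores transform based on ranks (ties handled by average ranks)
Z = norm.ppf(rankdata(X, axis=0) / (n + 1))

model = GraphicalLasso(alpha=0.1).fit(Z)             # alpha is an assumed penalty
print("Estimated nonzero pattern of the precision matrix:")
print((np.abs(model.precision_) > 1e-4).astype(int))
```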

14.
This paper considers estimation of censored panel-data models with individual-specific slope heterogeneity. The slope heterogeneity may be random (random slopes model) or related to covariates (correlated random slopes model). Maximum likelihood and censored least-absolute-deviations estimators are proposed for both models. The estimators are simple to implement and, in the case of maximum likelihood, lead to straightforward estimation of partial effects. The rescaled bootstrap suggested by Andrews (Econometrica, 2000, 68, 399–405) is used to deal with the possibility of variance parameters being equal to zero. The methodology is applied to an empirical study of Dutch household portfolio choice, where the outcome variable (portfolio share in safe assets) has corner solutions at zero and one. As predicted by economic theory, there is strong evidence of correlated random slopes for the age profiles, indicating a heterogeneous age profile of portfolio adjustment that varies significantly with other household characteristics. Copyright © 2013 John Wiley & Sons, Ltd.

15.
We contrast the forecasting performance of alternative panel estimators, divided into three main groups: homogeneous, heterogeneous and shrinkage/Bayesian. Via a series of Monte Carlo simulations, the comparison is performed using different levels of heterogeneity and cross-sectional dependence, alternative panel structures in terms of T and N, and the specification of the dynamics of the error term. To assess the predictive performance, we use traditional measures of forecast accuracy (Theil's U statistic, RMSE and MAE), the Diebold–Mariano test, and Pesaran and Timmermann's statistic on the capability of forecasting turning points. The main finding of our analysis is that when the level of heterogeneity is high, shrinkage/Bayesian estimators are preferred, whilst when there is low or mild heterogeneity, homogeneous estimators have the best forecast accuracy.
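The accuracy measures named in this entry are straightforward to compute; the sketch below evaluates RMSE, MAE, and Theil's U1 for two hypothetical competing forecast series and adds a simple one-step Diebold-Mariano statistic without the long-run-variance correction used for multi-step horizons. All series are simulated, not results from the study.

```python
# Sketch of the forecast-accuracy measures mentioned above (RMSE, MAE, Theil's U1)
# plus a simple one-step Diebold-Mariano statistic, for two hypothetical forecasts.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
y = rng.standard_normal(50).cumsum() + 10          # actuals
f1 = y + rng.normal(0.0, 0.5, size=50)             # forecast from estimator 1
f2 = y + rng.normal(0.2, 0.8, size=50)             # forecast from estimator 2

def rmse(e):
    return np.sqrt(np.mean(e ** 2))

def mae(e):
    return np.mean(np.abs(e))

def theil_u1(y, f):
    return rmse(f - y) / (np.sqrt(np.mean(y ** 2)) + np.sqrt(np.mean(f ** 2)))

e1, e2 = f1 - y, f2 - y
print("RMSE:", round(rmse(e1), 3), round(rmse(e2), 3))
print("MAE: ", round(mae(e1), 3), round(mae(e2), 3))
print("U1:  ", round(theil_u1(y, f1), 3), round(theil_u1(y, f2), 3))

# Diebold-Mariano on squared-error loss, one-step horizon
# (no long-run-variance correction in this simple version).
d = e1 ** 2 - e2 ** 2
dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
print("DM statistic:", round(dm, 3), " p-value:", round(2 * norm.sf(abs(dm)), 3))
```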

16.
Estimation of spatial autoregressive panel data models with fixed effects
This paper establishes asymptotic properties of quasi-maximum likelihood estimators for SAR panel data models with fixed effects and SAR disturbances. A direct approach is to estimate all the parameters, including the fixed effects. Because of the incidental parameter problem, some parameter estimators may be inconsistent or have distributions that are not properly centered. We propose an alternative estimation method based on transformation, which yields consistent estimators with properly centered distributions. For the model with individual effects only, the direct approach does not yield a consistent estimator of the variance parameter unless T is large, but the estimators of the other common parameters are the same as those of the transformation approach. We also consider the estimation of the model with both individual and time effects.

17.
The effect of technological innovation on employment is of major concern for workers and their unions, policy makers and academic researchers. We meta-analyse 570 estimates from 35 primary studies that estimate a derived labour demand model. We contribute to existing attempts at evidence synthesis by addressing the risks of selection bias and data dependence in observational studies. Our findings indicate that: (i) hierarchical meta-regression models are sufficiently versatile for addressing both selection bias and data dependence in observational data; (ii) innovation's effect on employment is positive but small and highly heterogeneous; (iii) only a small part of residual heterogeneity is explained by moderating factors; (iv) selection bias tends to reflect preference for upholding prevalent hypotheses on the employment effects of process and product innovations; (v) country-specific effect-size estimates are related to labour market and product market regulation in six OECD countries in a U-shaped fashion; and (vi) OLS estimates reflect an upward bias whereas those based on time-differenced or within estimators reflect a downward bias. Our findings point to a range of data quality and modelling issues that should be addressed in future research.
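As a simple point of reference for the pooling step in any meta-analysis, the sketch below computes a basic DerSimonian-Laird random-effects summary of hypothetical effect sizes. The effect sizes and standard errors are invented, and the paper's hierarchical meta-regression models, which additionally handle selection bias and data dependence, are not implemented here.

```python
# Basic DerSimonian-Laird random-effects meta-analysis of hypothetical effect
# sizes; shown for reference only, not the paper's hierarchical models.
import numpy as np

# Hypothetical primary-study estimates of the employment effect of innovation
# and their standard errors (not taken from the 35 studies in the paper).
effects = np.array([0.02, 0.05, -0.01, 0.08, 0.03, 0.00, 0.06])
ses = np.array([0.02, 0.03, 0.02, 0.04, 0.01, 0.03, 0.05])

w = 1.0 / ses ** 2                                   # fixed-effect weights
fe = np.sum(w * effects) / np.sum(w)                 # fixed-effect mean
Q = np.sum(w * (effects - fe) ** 2)                  # Cochran's Q
k = len(effects)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1.0 / (ses ** 2 + tau2)                       # random-effects weights
re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"Random-effects mean effect: {re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.4f}")
```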

18.
This paper considers the identification and estimation of an extension of Roy's (1951) model of sectoral choice, which includes a non-pecuniary component in the selection equation and allows for uncertainty about potential earnings. We focus on the identification of the non-pecuniary component, which is key to disentangling the relative importance of monetary incentives versus preferences in the context of sorting across sectors. By making the most of the structure of the selection equation, we show that this component is point identified from knowledge of the covariate effects on earnings, as soon as one covariate is continuous. Notably, and in contrast to most results on the identification of Roy models, this implies that identification can be achieved without any exclusion restriction or large support condition on the covariates. As a by-product, bounds are obtained on the distribution of the ex ante monetary returns. We propose a three-stage semiparametric estimation procedure for this model, which yields root-n consistent and asymptotically normal estimators. Finally, we apply our results to the educational context, providing new evidence from French data that non-pecuniary factors are a key determinant of higher education attendance decisions.

19.
We present a nonparametric study of current status data in the presence of death. Such data arise from biomedical investigations in which patients are examined for the onset of a certain disease, for example, tumor progression, but may die before the examination. A key difference between such studies on human subjects and the survival–sacrifice model in animal carcinogenicity experiments is that, due to ethical and perhaps technical reasons, deceased human subjects are not examined, so that the information on their disease status is lost. We show that, for current status data with death, only the overall and disease-free survival functions can be identified, whereas the cumulative incidence of the disease is not identifiable. We describe a fast and stable algorithm to estimate the disease-free survival function by maximizing a pseudo-likelihood with plug-in estimates for the overall survival rates. It is then proved that the global rate of convergence for the nonparametric maximum pseudo-likelihood estimator is equal to O_p(n^{-1/3}) or the convergence rate of the estimated overall survival function, whichever is slower. Simulation studies show that the nonparametric maximum pseudo-likelihood estimators are fairly accurate in small- to medium-sized samples. Real data from breast cancer studies are analyzed as an illustration.

20.
When modeling demand for differentiated products, it is vital to adequately capture consumer taste heterogeneity, but there is no clearly preferred approach. Here, we compare the performance of six alternative models. Currently, the most popular are mixed logit (MIXL), particularly the version with normal mixing (N-MIXL), and latent class (LC), which assumes discrete consumer types. Recently, several alternative models have been developed. The 'generalized multinomial logit' (G-MNL) extends N-MIXL by allowing for heterogeneity in the logit scale coefficient. Scale heterogeneity logit (S-MNL) is a special case of G-MNL with scale heterogeneity only. The 'mixed-mixed' logit (MM-MNL) assumes a discrete mixture-of-normals heterogeneity distribution. Finally, one can modify N-MIXL by imposing theoretical sign constraints on vertical attributes; we call this 'T-MIXL'. We find that none of these models dominates the others, but G-MNL, MM-MNL and T-MIXL typically outperform the popular N-MIXL and LC models. Copyright © 2012 John Wiley & Sons, Ltd.

