Similar Articles
1.
Equilibrium business cycle models typically have fewer shocks than variables. As pointed out by Altug (1989) International Economic Review 30 (4) 889–920 and Sargent (1989) The Journal of Political Economy 97 (2) 251–287, if variables are measured with error, this characteristic implies that the model solution for measured variables has a factor structure. This paper compares estimation performance for the impulse response coefficients based on a VAR approximation to this class of models and an estimation method that explicitly takes into account the restrictions implied by the factor structure. Bias and mean-squared error for both factor- and VAR-based estimates of impulse response functions are quantified using, as the data-generating process, a calibrated standard equilibrium business cycle model. We show that, at short horizons, VAR estimates of impulse response functions are less accurate than factor estimates, while the two methods perform similarly at medium and long horizons.
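The VAR side of this comparison can be illustrated with a toy sketch (not the paper's calibrated model): simulate a bivariate VAR(1), estimate its coefficient matrix by OLS, and read impulse responses off powers of the estimated matrix. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=2)

# OLS estimate of A: regress y_t on y_{t-1}
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# With unit shocks, the impulse response at horizon h is simply A^h
irf_h2 = np.linalg.matrix_power(A_hat, 2)
print(np.round(A_hat, 2))
```

A structural analysis would additionally orthogonalize the error covariance before reading off responses; the factor-based estimator the paper studies imposes the measurement-error factor structure instead of the unrestricted VAR above.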

2.
As a result of novel data collection technologies, it is now common to encounter data in which the number of explanatory variables collected is large, while the number of variables that actually contribute to the model remains small. Thus, a method that can identify the variables with an impact on the model, without including noneffective ones, makes analysis much more efficient. Many methods have been proposed to resolve model selection problems under such circumstances; however, it is still unknown how large a sample size is sufficient to identify the "effective" variables. In this paper, we apply a sequential sampling method so that the effective variables can be identified efficiently, and sampling is stopped as soon as the "effective" variables are identified and their corresponding regression coefficients are estimated with satisfactory accuracy, which is new to sequential estimation. Both fixed and adaptive designs are considered. The asymptotic properties of estimates of the number of effective variables and their coefficients are established, and the proposed sequential estimation procedure is shown to be asymptotically optimal. Simulation studies are conducted to illustrate the performance of the proposed estimation method, and a diabetes data set is used as an example.
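The stopping idea, sampling until the quantity of interest is estimated to a preset accuracy, can be sketched with a toy rule for a single mean (far simpler than the authors' procedure for regression coefficients); the function name and thresholds are illustrative.

```python
import random
import statistics

random.seed(1)

def sequential_mean(draw, half_width=0.05, z=1.96, n_min=30):
    """Toy sequential stopping rule: keep sampling until the z-based
    confidence half-width for the mean falls below `half_width`."""
    sample = [draw() for _ in range(n_min)]
    while True:
        sd = statistics.stdev(sample)
        if z * sd / len(sample) ** 0.5 < half_width:
            return statistics.mean(sample), len(sample)
        sample.append(draw())

# Sample from N(2, 1) until the mean is pinned down to +/- 0.05
mean_hat, n_used = sequential_mean(lambda: random.gauss(2.0, 1.0))
print(mean_hat, n_used)
```

The sample size is not fixed in advance: it is determined by the data, which is the defining feature of sequential estimation that the paper extends to variable selection.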

3.
This review surveys a number of common model selection algorithms (MSAs), discusses how they relate to each other and identifies factors that explain their relative performances. At the heart of MSA performance is the trade-off between type I and type II errors. Some relevant variables will be mistakenly excluded, and some irrelevant variables will be retained by chance. A successful MSA will find the optimal trade-off between the two types of errors for a given data environment. Whether a given MSA will be successful in a given environment depends on the relative costs of these two types of errors. We use Monte Carlo experimentation to illustrate these issues. We confirm that no MSA does best in all circumstances. Even the worst MSA in terms of overall performance – the strategy of including all candidate variables – sometimes performs best (viz., when all candidate variables are relevant). We also show how (1) the ratio of relevant to total candidate variables and (2) data-generating process noise affect relative MSA performance. Finally, we discuss a number of issues complicating the task of MSAs in producing reliable coefficient estimates.
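The type I/type II trade-off can be illustrated with a minimal Monte Carlo in the spirit the review describes. This toy reduces selection to thresholding t-statistics at a critical value, which is a stand-in for, not any specific MSA from, the survey; all numbers are illustrative.

```python
import random

random.seed(0)

def one_trial(n_relevant=3, n_irrelevant=7, signal=3.0, crit=1.96):
    # t-statistics: relevant variables centered at `signal`, noise at 0
    t_rel = [random.gauss(signal, 1) for _ in range(n_relevant)]
    t_irr = [random.gauss(0, 1) for _ in range(n_irrelevant)]
    type2 = sum(abs(t) <= crit for t in t_rel)   # relevant excluded
    type1 = sum(abs(t) > crit for t in t_irr)    # irrelevant retained
    return type1, type2

reps = 2000
t1 = t2 = 0
for _ in range(reps):
    a, b = one_trial()
    t1 += a
    t2 += b

# Type I rate is about the nominal 5%; type II depends on signal strength
print(t1 / (7 * reps), t2 / (3 * reps))
```

Raising the critical value trades fewer type I errors for more type II errors, and the include-everything strategy sets the type II rate to zero at the cost of retaining every irrelevant variable, which is exactly the trade-off the review organizes MSAs around.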

4.
The classical exploratory factor analysis (EFA) finds estimates for the factor loadings matrix and the matrix of unique factor variances which give the best fit to the sample correlation matrix with respect to some goodness-of-fit criterion. Common factor scores can be obtained as a function of these estimates and the data. As an alternative to the classical EFA, the EFA model can be fitted directly to the data, which yields factor loadings and common factor scores simultaneously. Recently, new algorithms were introduced for the simultaneous least squares estimation of all EFA model unknowns. The new methods are based on the numerical procedure for singular value decomposition of matrices and work equally well when the number of variables exceeds the number of observations. This paper provides an account that is intended as an expository review of methods for simultaneous parameter estimation in EFA. The methods are illustrated on Harman's five socio-economic variables data and a high-dimensional data set from genome research.
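The idea of fitting the model directly to the data can be sketched with a plain truncated-SVD least-squares fit, which recovers scores and loadings simultaneously. This is a simplified stand-in for the algorithms the review covers (no unique-variance matrix, no rotation); the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Data generated from a 2-factor model plus noise
n, p, k = 50, 8, 2
F_true = rng.normal(size=(n, k))
L_true = rng.normal(size=(p, k))
X = F_true @ L_true.T + 0.1 * rng.normal(size=(n, p))
X -= X.mean(axis=0)

# Rank-k truncated SVD gives the least-squares fit of scores and loadings
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :k] * s[:k]        # common factor scores (n x k)
loadings = Vt[:k].T              # factor loadings (p x k)
fitted = scores @ loadings.T
rel_err = np.linalg.norm(X - fitted) / np.linalg.norm(X)
print(rel_err)
```

Because the SVD is computed on the data matrix itself rather than on the correlation matrix, the same code runs unchanged when p exceeds n, the high-dimensional case the review emphasizes.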

5.
In this article, we merge two strands from the recent econometric literature. First, factor models based on large sets of macroeconomic variables, which have generally proven useful for forecasting; however, there is some disagreement in the literature as to the appropriate estimation method. Second, forecast methods based on mixed-frequency data sampling (MIDAS). This regression technique can handle the unbalanced datasets that emerge from publication lags of high- and low-frequency indicators, a problem practitioners have to cope with in real time. In this article, we introduce Factor MIDAS, an approach for nowcasting and forecasting low-frequency variables like gross domestic product (GDP) exploiting information in a large set of higher-frequency indicators. We consider three alternative MIDAS approaches (basic, smoothed and unrestricted) that provide harmonized projection methods and allow for a comparison of the alternative factor estimation methods with respect to nowcasting and forecasting. Common to all the factor estimation methods employed here is that they can handle unbalanced datasets, as typically faced in real-time forecast applications owing to publication lags. In particular, we focus on variants of static and dynamic principal components as well as Kalman filter estimates in state-space factor models. As an empirical illustration of the technique, we use a large monthly dataset of the German economy to nowcast and forecast quarterly GDP growth. We find that the factor estimation methods do not differ substantially, whereas the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the Factor MIDAS models, which confirms the usefulness of the mixed-frequency techniques that can exploit timely information from business cycle indicators.
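The unrestricted MIDAS variant is easy to sketch: each monthly observation within the quarter enters the quarterly regression with its own coefficient, estimated by plain OLS. This toy omits the factor-extraction step and all real-time ragged-edge handling; the data and coefficients are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monthly indicator and a quarterly target built from its three months
n_q = 200
x_m = rng.normal(size=3 * n_q)                 # monthly series
X = x_m.reshape(n_q, 3)                        # months within each quarter
beta = np.array([0.6, 0.3, 0.1])               # true within-quarter weights
y_q = X @ beta + 0.1 * rng.normal(size=n_q)    # quarterly variable (e.g. GDP growth)

# Unrestricted MIDAS: one free coefficient per monthly lag, plain OLS
Z = np.column_stack([np.ones(n_q), X])
coef = np.linalg.lstsq(Z, y_q, rcond=None)[0]
print(np.round(coef[1:], 2))
```

The basic and smoothed MIDAS variants instead restrict the lag coefficients to follow a low-dimensional weighting function, which is the parsimony the article finds pays off; in Factor MIDAS the regressors would be estimated factors rather than a raw indicator.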

6.
This paper considers identification and estimation of structural interaction effects in a social interaction model. The model allows unobservables in the group structure, which may be correlated with included regressors. We show that both the endogenous and exogenous interaction effects can be identified if there are sufficient variations in group sizes. We consider the estimation of the model by the conditional maximum likelihood and instrumental variables methods. For the case with large group sizes, the possible identification can be weak in the sense that the estimates converge in distribution at low rates.

7.
Whilst statistics occupy a prominent place in the social science research toolkit, some old problems associated with them have not been fully resolved. These problems include bias through the inclusion of irrelevant variation and the exclusion of relevant variation, which may lead to hidden and spurious correlations in more extreme, though not at all unthinkable, cases. These issues have been addressed by Ragin, who builds a case for the use of fuzzy set theory in social science. In this paper, we take a complementary view, insofar as we incorporate fuzzy set theory into current statistical analyses. Apart from shedding new light on the main issues associated with (population-based) statistics, this approach also offers interesting prospects for the falsification of theories, rather than single relations between variables, in the social sciences.

8.
We introduce a method for estimating multiple class regression models when class membership is uncertain. The procedure, local polynomial regression clustering, first estimates a nonparametric model via local polynomial regression, and then identifies the underlying classes by aggregating sample observations into data clusters with similar estimates of the (local) functional relationships between dependent and independent variables. Finally, parametric functions specific to each class are estimated. The technique is applied to the estimation of a multiple-class hedonic model for wine, resulting in the identification of four distinct wine classes based on differences in implicit prices of the attributes. Copyright © 2009 John Wiley & Sons, Ltd.

9.
Factor analysis models are used in data dimensionality reduction problems where the variability among observed variables can be described through a smaller number of unobserved latent variables. This approach is often used to estimate the multidimensionality of well-being. We employ factor analysis models and use the multivariate empirical best linear unbiased predictor (EBLUP) under a unit-level small area estimation approach to predict a vector of means of factor scores representing well-being for small areas. We compare this approach with the standard approach, whereby we use small area estimation (univariate and multivariate) to estimate a dashboard of EBLUPs of the means of the original variables, which are then averaged. Our simulation study shows that the use of factor scores provides estimates with lower variability than weighted and simple averages of standardised multivariate EBLUPs and univariate EBLUPs. Moreover, we find that when the correlation in the observed data is taken into account before small area estimates are computed, multivariate modelling does not provide large improvements in the precision of the estimates over univariate modelling. We close with an application using European Union Statistics on Income and Living Conditions data.

10.
Economic Systems, 2023, 47(1), 101007
The paper studies the fall of the labor income share in Mexico, contrasting the role of trade and factor intensity as transmission channels of the China shock of 2001. It finds that, while the skill, technological and, more surprisingly, trade intensity of Mexican industries were largely irrelevant, capital intensity played a key role: in particular, the higher an industry's initial capital intensity, the more vulnerable it was to the transmission of the global shock to labor. The finding is consistent with the proposition that industrial integration, concentrated in industries that are capital-intensive from the perspective of developing countries, facilitated the transmission of the shock. Results come from the estimation of panel equations for the annual change in the labor share across Mexican manufacturing industries, where transmission is measured by the correlation between changes in United States and Mexican industry labor shares.

11.
Labour Economics, 2007, 14(1), 73–98
Regression models of wage determination are typically estimated by ordinary least squares using the logarithm of the wage as the dependent variable. These models provide consistent estimates of the proportional impact of wage determinants only under the assumption that the distribution of the error term is independent of the regressors, an assumption that can be violated by the presence of heteroskedasticity, for example. Failure of this assumption is particularly relevant in the estimation of the impact of union status on wages. Alternative wage-equation estimators based on quasi-maximum-likelihood methods are consistent under weaker assumptions about the dependence between the error term and the regressors. They also provide the ability to check the specification of the underlying wage model. Applying this approach to a standard data set, I find that the impact of unions on wages is overstated by 20–30 percent when estimates from log-wage regressions are used for inference.

12.
Small area estimation is a widely used indirect estimation technique for micro-level geographic profiling. Three unit-level small area estimation techniques, the ELL or World Bank method, empirical best prediction (EBP) and M-quantile (MQ), can estimate micro-level Foster, Greer, & Thorbecke (FGT) indicators: poverty incidence, gap and severity, using both unit-level survey and census data. However, they use different assumptions. The effects of using model-based unit-level census data reconstructed from cross-tabulations and having no cluster-level contextual variables for models are discussed, as are effects of small area and cluster-level heterogeneity. A simulation-based comparison of ELL, EBP and MQ uses a model-based reconstruction of 2000/2001 data from Bangladesh and compares bias and mean square error. A three-level ELL method is applied for comparison with the standard two-level ELL that lacks a small area level component. An important finding is that the larger number of small areas for which ELL has been able to produce sufficiently accurate estimates in comparison with EBP and MQ has been driven more by the type of census data available or utilised than by the model per se.

13.
In this paper, I discuss three issues related to the bias of OLS estimators in a general multivariate setting. First, I discuss the bias that arises from omitting relevant variables. I offer a geometric interpretation of such bias and derive sufficient conditions, in terms of sign restrictions, that allow us to determine the direction of the bias. Second, I show that inclusion of some omitted variables will not necessarily reduce the magnitude of the bias as long as some others remain omitted. Third, I show that inclusion of irrelevant variables in a model with omitted variables can also affect the bias of OLS estimators. I use a running example of a simple wage regression to illustrate my arguments.
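The first issue follows the textbook omitted-variable bias formula: the short-regression slope converges to beta1 + beta2*delta, where delta is the slope from regressing the omitted variable on the included one. A minimal simulated check (illustrative numbers, not the paper's wage example):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100_000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)   # omitted variable, correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)

# Short regression of y on x1 alone (no intercept: everything is mean zero)
b_short = (x1 @ y) / (x1 @ x1)

# OVB formula: beta1 + beta2 * delta, with delta from the x2-on-x1 regression
delta = (x1 @ x2) / (x1 @ x1)
print(b_short, 1.0 + 2.0 * delta)
```

Here the direction of the bias is pinned down by the signs of beta2 and delta (both positive, so b_short overstates beta1), which is the kind of sign restriction the paper generalizes to the multivariate case.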

14.
We discuss structural equation models for non-normal variables. In this situation, the maximum likelihood and generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample size is not large, for instance smaller than about 1000, asymptotic distribution-free estimation methods are also not applicable. This paper assumes that the observed variables are transformed to normally distributed variables: the non-normally distributed variables are transformed with a Box–Cox function. Estimation of the model parameters and the transformation parameters is done by the maximum likelihood method. Furthermore, the test statistics (i.e. standard deviations) of these parameters are derived, which makes it possible to show the importance of the transformations. Finally, an empirical example is presented.
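A Box–Cox transformation with its parameter chosen by maximum likelihood can be sketched by profiling the normal log-likelihood over a grid of lambda values. This toy estimates only the transformation parameter for a single positive variable, not the full structural equation model; the grid and sample are illustrative.

```python
import math
import random
import statistics

random.seed(5)

def boxcox(y, lam):
    # Box-Cox transform: (y^lam - 1)/lam, with the log as the lam = 0 limit
    return math.log(y) if lam == 0 else (y ** lam - 1) / lam

def profile_loglik(ys, lam):
    # Normal profile log-likelihood (up to a constant), with Jacobian term
    z = [boxcox(y, lam) for y in ys]
    var = statistics.pvariance(z)
    n = len(ys)
    return -0.5 * n * math.log(var) + (lam - 1) * sum(math.log(y) for y in ys)

# Lognormal data: the true normalizing transform is lambda = 0 (the log)
ys = [math.exp(random.gauss(0, 1)) for _ in range(2000)]
grid = [i / 10 for i in range(-10, 11)]
lam_hat = max(grid, key=lambda lam: profile_loglik(ys, lam))
print(lam_hat)
```

In the paper's setting lambda would be estimated jointly with the structural parameters, but the profile-likelihood shape above is what drives the "importance of the transformations" check.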

15.
The past forty years have seen a great deal of research into the construction and properties of nonparametric estimates of smooth functions. This research has focused primarily on two sides of the smoothing problem: nonparametric regression and density estimation. Theoretical results for these two situations are similar, and multivariate density estimation was an early justification for the Nadaraya-Watson kernel regression estimator.
A third, less well-explored, strand of applications of smoothing is to the estimation of probabilities in categorical data. In this paper the position of categorical data smoothing as a bridge between nonparametric regression and density estimation is explored. Nonparametric regression provides a paradigm for the construction of effective categorical smoothing estimates, and use of an appropriate likelihood function yields cell probability estimates with many desirable properties. Such estimates can be used to construct regression estimates when one or more of the categorical variables are viewed as response variables. They also lead naturally to the construction of well-behaved density estimates using local or penalized likelihood estimation, which can then be used in a regression context. Several real data sets are used to illustrate these points.

16.
Journal of Econometrics, 2005, 128(2), 301–323
Gauss–Hermite quadrature is often used to evaluate and maximize the likelihood for random component probit models. Unfortunately, the estimates are biased for large cluster sizes and/or intraclass correlations. We show that adaptive quadrature largely overcomes these problems. We then extend the adaptive quadrature approach to general random coefficient models with limited and discrete dependent variables. The models can include several nested random effects (intercepts and coefficients) representing unobserved heterogeneity at different levels of a hierarchical dataset. The required multivariate integrals are evaluated efficiently using spherical quadrature rules. Simulations show that adaptive quadrature performs well in a wide range of situations.
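Plain (non-adaptive) Gauss–Hermite quadrature, the starting point of the paper, takes a few lines: after the change of variables x = sqrt(2) t, the rule integrates expectations under a standard normal. The adaptive refinement, which recenters and rescales the nodes around each cluster's posterior mode, is not shown here.

```python
import math
import numpy as np

# Nodes and weights for integrals of the form  int e^{-t^2} f(t) dt
nodes, weights = np.polynomial.hermite.hermgauss(20)

def gauss_hermite_expect(f):
    """E[f(X)] for X ~ N(0, 1) via 20-node Gauss-Hermite quadrature."""
    return sum(w * f(math.sqrt(2) * t)
               for t, w in zip(nodes, weights)) / math.sqrt(math.pi)

# Sanity checks on moments of the standard normal
print(gauss_hermite_expect(lambda x: x ** 2))   # second moment, exactly 1
print(gauss_hermite_expect(lambda x: x ** 4))   # fourth moment, exactly 3
```

In a random-intercept probit likelihood, f would be the product of probit probabilities within a cluster; it is precisely when that integrand is sharply peaked away from zero (large clusters, high intraclass correlation) that the fixed nodes above fail and adaptation helps.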

17.
We consider methods for estimating the means of survey variables in domains of a finite population where sample sizes are too small to obtain reliable direct estimates. We construct generalized compositions from the direct and traditional design-based synthetic estimators and propose a methodology for evaluating their coefficients. This methodology measures similarities among sample elements and estimates of the domain means. We propose the compositions for two cases of auxiliary information: in the first, domain-level characteristics are available, namely the true means of auxiliary variables for the estimation domains; in the second, unit-level auxiliary vectors are additionally known for the sample elements. In a simulation study, we show where the generalized compositions improve the traditional synthetic and composite estimators.
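A composition in its simplest form is a convex combination of a direct and a synthetic estimator. The sketch below uses a fixed illustrative weight, whereas the paper's methodology estimates the composition coefficients from similarities among sample elements; domains, data and the weight are all made up for illustration.

```python
import random
import statistics

random.seed(9)

# Two domains with different true means and small samples; the synthetic
# estimator borrows the overall mean, the composite blends the two
true_means = {"A": 10.0, "B": 14.0}
samples = {d: [random.gauss(m, 2.0) for _ in range(8)]
           for d, m in true_means.items()}

overall = statistics.mean(v for s in samples.values() for v in s)

def composite(domain, weight=0.7):
    direct = statistics.mean(samples[domain])   # unbiased but noisy
    synthetic = overall                         # stable but biased per domain
    return weight * direct + (1 - weight) * synthetic

print(composite("A"), composite("B"))
```

The weight governs the bias-variance trade-off: weight 1 returns the direct estimator, weight 0 the synthetic one, and a good composition picks the point between them that minimizes mean squared error for each domain.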

18.
Many studies have estimated the trade effect of the euro, but their results vary greatly. This meta-analysis collects 3323 estimates of the euro effect, along with 28 characteristics of estimation design, from almost 60 studies and quantitatively examines the literature. The results show evidence of publication bias, but they also suggest that the bias decreases over time. After correcting for the bias, the meta-analysis shows that the literature is consistent with an effect ranging between 2% and 6%. The results from Bayesian model averaging, which takes model uncertainty into account, show that the differences among estimates are systematically driven by data sources, data structure, control variables, and estimation techniques. The mean reported estimate of the euro's trade effect conditional on a best-practice approach is 3%, but it is not statistically different from zero.

19.
In a letter to the author, Arrow makes an important acknowledgement regarding the question of irrelevant alternatives, expressing his view that alternatives which are not among the superior ones can, in fact, affect the choice of the best alternative (the example in question is choosing the best chess player). It is then clarified how such alternatives become relevant in group decisions under Borda's method.
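Borda's method itself is easy to state: with m alternatives, each voter's first choice earns m-1 points, the second m-2, and so on, and the highest total wins. The sketch below also shows how removing a non-winning alternative can reverse the outcome, which is the sense in which such alternatives are not irrelevant; the ballot profile is illustrative.

```python
def borda(rankings):
    """Borda count: with m candidates, first place earns m-1 points,
    second place m-2, and so on; the highest total wins."""
    m = len(rankings[0])
    scores = {c: 0 for c in rankings[0]}
    for ranking in rankings:
        for position, c in enumerate(ranking):
            scores[c] += m - 1 - position
    return scores

# Three voters rank a > b > c, two voters rank b > c > a: b wins by Borda
with_c = borda([["a", "b", "c"]] * 3 + [["b", "c", "a"]] * 2)

# Drop the non-winning alternative c and the winner flips from b to a
without_c = borda([["a", "b"]] * 3 + [["b", "a"]] * 2)
print(with_c, without_c)
```

With c on the ballot the scores are a: 6, b: 7, c: 2, so b wins; without c they are a: 3, b: 2, so a wins, even though a majority preferred a to b all along.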

20.
This paper presents estimation methods for dynamic nonlinear models with correlated random effects (CRE) in unbalanced panels. Unbalancedness is often encountered in applied work, and ignoring it in dynamic nonlinear models produces inconsistent estimates even if the unbalancedness process is completely at random. We show that selecting a balanced panel from the sample can produce efficiency losses or even inconsistent estimates of the average marginal effects. We allow the process that determines the unbalancedness structure of the data to be correlated with the permanent unobserved heterogeneity. We discuss how to address estimation by maximizing the likelihood function for the whole sample, and we also propose a minimum distance approach, which is computationally simpler and asymptotically equivalent to the maximum likelihood estimation. Our Monte Carlo experiments and empirical illustration show that the issue is relevant: our proposed solutions perform better, in terms of both bias and RMSE, than approaches that ignore the unbalancedness or balance the sample.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号