Similar Articles
20 similar articles found (search time: 62 ms)
1.
We consider estimation of panel data models with sample selection when the equation of interest contains endogenous explanatory variables as well as unobserved heterogeneity. Assuming that appropriate instruments are available, we propose several tests for selection bias and two estimation procedures that correct for selection in the presence of endogenous regressors. The tests are based on the fixed effects two-stage least squares estimator, thereby permitting arbitrary correlation between unobserved heterogeneity and explanatory variables. The first correction procedure is parametric and is valid under the assumption that the errors in the selection equation are normally distributed. The second procedure estimates the model parameters semiparametrically using series estimators. In the proposed testing and correction procedures, the error terms may be heterogeneously distributed and serially dependent in both selection and primary equations. Because these methods allow for a rather flexible structure of the error variance and do not impose any nonstandard assumptions on the conditional distributions of explanatory variables, they provide a useful alternative to the existing approaches presented in the literature.

2.
The various approaches to the construction of causal models are compared from a probabilistic point of view. Although all methods are equivalent in the mathematical manipulation of the equations of a model, three distinct approaches are discernible, depending on how numerical values of the coefficients are calculated. All rely to a greater or lesser extent on a deterministic base, as a result of considering the equations simultaneously. The problems of polytomous (nominal and ordinal) variables, of omitted variables, and of nonlinearity are discussed and solutions proposed, before going on to investigate the uses of interaction effects in such models. The interpretation of interactions and their relationship to paths and chains is discussed in detail. One step in the analysis of a model describing the relationships of student attitudes to home and to school environments is presented in detail to illustrate the probabilistic concepts. These results are compared with those which might have been obtained if a causal model based on path analysis with least squares linear regression had been applied.

3.
Financial crises pose unique challenges for forecast accuracy. Using the IMF's Monitoring of Fund Arrangements (MONA) database, we conduct the most comprehensive evaluation of IMF forecasts to date for countries in times of crises. We examine 29 macroeconomic variables in terms of bias, efficiency, and information content, and find that IMF forecasts add substantial informational value, as they consistently outperform naive forecast approaches. However, we also document that there is room for improvement: two-thirds of the key macroeconomic variables that we examine are forecast inefficiently, and six variables (growth of nominal GDP, public investment, private investment, the current account, net transfers, and government expenditures) exhibit significant forecast biases. The forecasts for low-income countries are the main drivers of forecast biases and inefficiency, perhaps reflecting larger shocks and lower data quality. When we decompose the forecast errors into their sources, we find that forecast errors for private consumption growth are the key contributor to GDP growth forecast errors. Similarly, forecast errors for non-interest expenditure growth and tax revenue growth are crucial determinants of the forecast errors in the growth of fiscal budgets. Forecast errors for balance of payments growth are influenced significantly by forecast errors in goods import growth. The results highlight which macroeconomic aggregates require further attention in future forecast models for countries in crises.

4.
We introduce a class of instrumental quantile regression methods for heterogeneous treatment effect models and simultaneous equations models with nonadditive errors and offer computable methods for estimation and inference. These methods can be used to evaluate the impact of endogenous variables or treatments on the entire distribution of outcomes. We describe an estimator of the instrumental variable quantile regression process and the set of inference procedures derived from it. We focus our discussion of inference on tests of distributional equality, constancy of effects, conditional dominance, and exogeneity. We apply the procedures to characterize the returns to schooling in the U.S.
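The estimating restriction behind such instrumental quantile regression methods can be stated in one common form (given here as an illustration; the notation is not necessarily the paper's exact one):

```latex
P\left[\, Y \le D'\alpha(\tau) + X'\beta(\tau) \;\middle|\; X, Z \,\right] = \tau,
\qquad \tau \in (0,1),
```

where \(Y\) is the outcome, \(D\) the endogenous variables or treatments, \(X\) the covariates, and \(Z\) the instruments. The quantile regression process then collects the coefficient functions \(\tau \mapsto (\alpha(\tau), \beta(\tau))\), on which the distributional tests operate.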

5.
This paper proposes a new method for combining forecasts based on complete subset regressions. For a given set of potential predictor variables we combine forecasts from all possible linear regression models that keep the number of predictors fixed. We explore how the choice of model complexity, as measured by the number of included predictor variables, can be used to trade off the bias and variance of the forecast errors, generating a setup akin to the efficient frontier known from modern portfolio theory. In an application to predictability of stock returns, we find that combinations of subset regressions can produce more accurate forecasts than conventional approaches based on equal-weighted forecasts (which fail to account for the dimensionality of the underlying models), combinations of univariate forecasts, or forecasts generated by methods such as bagging, ridge regression or Bayesian Model Averaging.
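The basic combination step can be sketched as follows: fit an OLS regression for every size-`k` subset of the predictors and average the resulting forecasts. This is a minimal illustration, not the authors' code; the function name `csr_forecast` and the equal-weighted averaging are assumptions for exposition.

```python
import itertools
import numpy as np

def csr_forecast(X, y, x_new, k):
    """Equal-weighted complete subset regression forecast:
    average the OLS forecasts from all size-k subsets of the predictors."""
    n, p = X.shape
    preds = []
    for subset in itertools.combinations(range(p), k):
        # OLS with intercept on the chosen subset of predictors
        Xs = np.column_stack([np.ones(n), X[:, list(subset)]])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        xs_new = np.concatenate(([1.0], x_new[list(subset)]))
        preds.append(xs_new @ beta)
    return np.mean(preds)
```

Varying `k` traces out the complexity / forecast-error tradeoff the abstract describes: small `k` gives heavily shrunk (biased, low-variance) combinations, `k = p` reduces to the single kitchen-sink regression.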

6.
This paper examines the importance of accounting for measurement error in total expenditure in the estimation of Engel curves, based on the 1994 Ethiopian Urban Household Survey. Using Lewbel's [Review of Economics and Statistics (1996), Vol. 78, pp. 718–725] estimator for demand models with correlated measurement errors in the dependent and independent variables, we find robust evidence of a quadratic relationship between food share and total expenditure in the capital city, and significant biases in various estimators that do not correct for correlated measurement errors.

7.
Aiting Shen & Andrei Volodin, Metrika (2017), 80(6–8): 605–625
In this paper, the Marcinkiewicz–Zygmund type moment inequality for extended negatively dependent (END) random variables is established. Under suitable conditions of uniform integrability, the \(L_r\) convergence, the weak law of large numbers and the strong law of large numbers for usual normed sums and weighted sums of arrays of rowwise END random variables are investigated using the Marcinkiewicz–Zygmund type moment inequality. In addition, some applications of the \(L_r\) convergence and the weak and strong laws of large numbers to nonparametric regression models based on END errors are provided. The results obtained in the paper generalize or improve some corresponding ones for negatively associated random variables and negatively orthant dependent random variables.
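For reference, the classical Marcinkiewicz–Zygmund / Rosenthal-type moment bound for independent mean-zero random variables takes the form below; the paper establishes an analogue for END sequences, in which the constant also depends on the dependence (dominating) coefficient:

```latex
E\left|\sum_{i=1}^{n} X_i\right|^{p}
\le C_p \left\{ \sum_{i=1}^{n} E|X_i|^{p}
+ \left( \sum_{i=1}^{n} E X_i^{2} \right)^{p/2} \right\},
\qquad p \ge 2,
```

where \(C_p\) depends only on \(p\). Bounds of this form are the standard engine behind the \(L_r\) convergence and laws of large numbers the abstract mentions.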

8.
Hierarchical Models in Environmental Science
Environmental systems are complicated. They include very intricate spatio-temporal processes, interacting on a wide variety of scales. There are increasingly vast amounts of data for such processes from geographical information systems, remote sensing platforms, monitoring networks, and computer models. In addition, often there is a great variety of scientific knowledge available for such systems, from partial differential equations based on first principles to panel surveys. It is argued that it is not generally adequate to consider such processes from a joint perspective. Instead, the processes often must be considered as a coherently linked system of conditional models. This paper provides a brief overview of hierarchical approaches applied to environmental processes. The key elements of such models can be considered in three general stages: the data stage, process stage, and parameter stage. In each stage, complicated dependence structure is mitigated by conditioning. For example, the data stage can incorporate measurement errors as well as multiple datasets with varying supports. The process and parameter stages can allow spatial and spatio-temporal processes as well as the direct inclusion of scientific knowledge. The paper concludes with a discussion of some outstanding problems in hierarchical modelling of environmental systems, including the need for new collaboration approaches.

9.
We consider how to estimate the trend and cycle of a time series, such as real gross domestic product, given a large information set. Our approach makes use of the Beveridge–Nelson decomposition based on a vector autoregression, but with two practical considerations. First, we show how to determine which conditioning variables span the relevant information by directly accounting for the Beveridge–Nelson trend and cycle in terms of contributions from different forecast errors. Second, we employ Bayesian shrinkage to avoid overfitting in finite samples when estimating models that are large enough to include many possible sources of information. An empirical application with up to 138 variables covering various aspects of the US economy reveals that the unemployment rate, inflation, and, to a lesser extent, housing starts, aggregate consumption, stock prices, real money balances, and the federal funds rate contain relevant information beyond that in output growth for estimating the output gap, with estimates largely robust to substituting some of these variables or incorporating additional variables.
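The univariate analogue of the Beveridge–Nelson decomposition the paper builds on can be sketched as follows, assuming purely for illustration that the first differences of the series follow an AR(1); the paper's actual decomposition conditions on a large Bayesian VAR, not a single equation.

```python
import numpy as np

def bn_decompose_ar1(y):
    """Beveridge-Nelson decomposition of a series y under the (illustrative)
    assumption that its first differences follow an AR(1).
    Returns (trend, cycle) aligned with y[1:]."""
    dy = np.diff(y)
    mu = dy.mean()
    x = dy[:-1] - mu
    z = dy[1:] - mu
    phi = (x @ z) / (x @ x)                  # OLS estimate of AR(1) coefficient
    # BN cycle for AR(1) growth: c_t = -phi/(1-phi) * (dy_t - mu)
    cycle = -(phi / (1.0 - phi)) * (dy - mu)
    trend = y[1:] - cycle                    # BN trend = long-horizon forecast level
    return trend, cycle
```

By construction the trend and cycle sum back to the observed series; in the VAR case the same long-horizon-forecast logic applies, with contributions attributable to each conditioning variable's forecast errors.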

10.
We present a discussion of the different dimensions of the ongoing controversy about the analysis of ordinal variables. The source of this controversy is traced to the earliest possible stage, measurement theory. Three major approaches to analyzing ordinal variables, called the non-parametric, the parametric, and the underlying variable approach, are identified, and the merits and drawbacks of each of these approaches are pointed out. We show that the controversy over the exact definition of an ordinal variable causes problems with regard to defining ordinal association, and therefore with the interpretation of many recently designed models for ordinal variables, e.g., structural equation models using polychoric correlations, latent class models and ordinal response models. We conclude that the discussion of ordinal variable modeling can only be fruitful if one makes a distinction between different types of ordinal variables. Five types of ordinal variables are identified. The problems concerning the analysis of these five types of ordinal variables are solved in some cases and remain open in others.

11.
The use of indicator variables to construct predictions and to estimate the variances of prediction errors is illustrated for systems of equations, nonlinear regressions and autoregressions.

12.
The generalized method of moments (GMM) estimation technique is discussed for count data models with endogenous regressors. Count data models can be specified with additive or multiplicative errors. It is shown that, in general, a set of instruments is not orthogonal to both error types. Simultaneous equations with a dependent count variable often do not have a reduced form which is a simple function of the instruments. However, a simultaneous model with a count and a binary variable can only be logically consistent when the system is triangular. The GMM estimator is used in the estimation of a model explaining the number of visits to doctors, with a self-reported binary health index as a possible endogenous regressor. Further, a model is estimated, in stages, that includes latent health instead of the binary health index. © 1997 John Wiley & Sons, Ltd.
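A minimal sketch of GMM under the multiplicative-error specification mentioned above: with conditional mean exp(x'b) and a multiplicative error, the moment condition is E[(y·exp(−x'b) − 1)·z] = 0. The function below is an illustrative one-step GMM with an identity weighting matrix, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_count(y, X, Z):
    """One-step GMM for an exponential-mean count model with a
    multiplicative error: sample analogue of E[(y*exp(-x'b) - 1) z] = 0."""
    def crit(b):
        # clip the exponent to guard against overflow during line search
        resid = y * np.exp(np.clip(-X @ b, -30.0, 30.0)) - 1.0
        g = Z.T @ resid / len(y)     # sample moment vector
        return g @ g                 # identity weighting matrix
    res = minimize(crit, np.zeros(X.shape[1]), method="BFGS")
    return res.x
```

With an additive error the residual would instead be y − exp(x'b), and, as the abstract notes, a given set of instruments will generally not be orthogonal to both residual types at once.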

13.
We consider the possibility that demographic variables are measured with errors which arise because household surveys measure demographic structures at a point‐in‐time, whereas household composition evolves throughout the survey period. We construct and estimate sharp bounds on household size and find that the degree of these measurement errors is non‐trivial. These errors have the potential to resolve the Deaton–Paxson paradox, but fail to do so.

14.
When constructing unconditional point forecasts, both direct and iterated multistep (DMS and IMS) approaches are common. However, in the context of producing conditional forecasts, IMS approaches based on vector autoregressions are far more common than simpler DMS models. This is despite the fact that there are theoretical reasons to believe that DMS models are more robust to misspecification than are IMS models. In the context of unconditional forecasts, Marcellino et al. (Journal of Econometrics, 2006, 135, 499–526) investigate the empirical relevance of these theories. In this paper, we extend that work to conditional forecasts. We do so based on linear bivariate and trivariate models estimated using a large dataset of macroeconomic time series. Over comparable samples, our results reinforce those in Marcellino et al.: the IMS approach is typically a bit better than DMS with significant improvements only at longer horizons. In contrast, when we focus on the Great Moderation sample we find a marked improvement in the DMS approach relative to IMS. The distinction is particularly clear when we forecast nominal rather than real variables where the relative gains can be substantial.
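The DMS/IMS distinction can be illustrated with a scalar AR(1), a deliberately simplified stand-in for the paper's bivariate and trivariate models: IMS fits the one-step model and iterates it forward h times, while DMS regresses y_{t+h} directly on y_t.

```python
import numpy as np

def ims_forecast(y, h):
    """Iterated multistep: fit a one-step AR(1), then iterate it h steps."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    (c, phi), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    f = y[-1]
    for _ in range(h):
        f = c + phi * f
    return f

def dms_forecast(y, h):
    """Direct multistep: regress y_{t+h} on y_t and forecast in one step."""
    X = np.column_stack([np.ones(len(y) - h), y[:-h]])
    (c, phi), *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    return c + phi * y[-1]
```

Under correct specification the two approaches target the same h-step conditional mean; the robustness argument in the abstract is that DMS degrades more gracefully when the one-step model is misspecified, because its projection is tailored to the h-step horizon.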

15.
Near-term forecasts, also called nowcasts, are most challenging but also most important when the economy experiences an abrupt change. In this paper, we explore the performance of models with different information sets and data structures in order to best nowcast US initial unemployment claims in spring of 2020 in the midst of the COVID-19 pandemic. We show that the best model, particularly near the structural break in claims, is a state-level panel model that includes dummy variables to capture the variation in timing of state-of-emergency declarations. Autoregressive models perform poorly at first but catch up relatively quickly. The state-level panel model, exploiting the variation in timing of state-of-emergency declarations, also performs better than models including Google Trends. Our results suggest that in times of structural change there is a bias–variance tradeoff. Early on, simple approaches to exploit relevant information in the cross sectional dimension improve forecasts, but in later periods the efficiency of autoregressive models dominates.

16.
The performance of six classes of models in forecasting different types of economic series is evaluated in an extensive pseudo out‐of‐sample exercise. One of these forecasting models, regularized data‐rich model averaging (RDRMA), is new in the literature. The findings can be summarized in four points. First, RDRMA is difficult to beat in general and generates the best forecasts for real variables. This performance is attributed to the combination of regularization and model averaging, and it confirms that a smart handling of large data sets can lead to substantial improvements over univariate approaches. Second, the ARMA(1,1) model emerges as the best to forecast inflation changes in the short run, while RDRMA dominates at longer horizons. Third, the returns on the S&P 500 index are predictable by RDRMA at short horizons. Finally, the forecast accuracy and the optimal structure of the forecasting equations are quite unstable over time.

17.
Sign restrictions have become increasingly popular for identifying shocks in structural vector autoregressive (SVAR) models. So far there are no techniques for validating the shocks identified via such restrictions. Although in an ideal setting the sign restrictions specify shocks of interest, sign restrictions may be invalidated by measurement errors, data adjustments or omitted variables. We model changes in the volatility of the shocks via a Markov switching (MS) mechanism and use this device to give the data a chance to object to sign restrictions. The approach is illustrated by considering a small model for the market of crude oil. Earlier findings that oil supply shocks explain only a very small fraction of movements in the price of oil are confirmed, and it is found that the importance of aggregate demand shocks for oil price movements has declined since the mid-1980s. Copyright © 2013 John Wiley & Sons, Ltd.

18.
A recursive dynamic disaggregated computable general equilibrium model of the Spanish economy is used to compare the model predictions of endogenous variables with their observed values over the period 1991–1997. It includes 12 producers, 12 households, government and 2 external sectors. There are four types of labour, and real wages depend on unemployment rates. Private investment is determined by private savings and public and external surpluses. Domestic products and imports are imperfect substitutes. All exogenous variables and tax parameters are updated every year with the best available information. The model provides rather accurate predictions in 1991, a normal year, but it underestimates the intensity of the 1992–1993 recession. It also predicts dramatic reversals of trade balances in response to devaluations. These results suggest both that savings-driven investment models provide useful insights in the medium term but underestimate the consequences of downturns, and that the Armington elasticities typically assumed may be too large.

19.
This paper provides a control function estimator to adjust for endogeneity in the triangular simultaneous equations model where there are no available exclusion restrictions to generate suitable instruments. Our approach is to exploit the dependence of the errors on exogenous variables (e.g. heteroscedasticity) to adjust the conventional control function estimator. The form of the error dependence on the exogenous variables is subject to restrictions, but is not parametrically specified. In addition to providing the estimator and deriving its large-sample properties, we present simulation evidence which indicates the estimator works well.

20.
The standard generalized method of moments (GMM) estimation of Euler equations in heterogeneous‐agent consumption‐based asset pricing models is inconsistent under fat tails because the GMM criterion is asymptotically random. To illustrate this, we generate asset returns and consumption data from an incomplete‐market dynamic general equilibrium model that is analytically solvable and exhibits power laws in consumption. Monte Carlo experiments suggest that the standard GMM estimation is inconsistent and susceptible to Type II errors (incorrect nonrejection of false models). Estimating an overidentified model by dividing agents into age cohorts appears to mitigate Type I and II errors.
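The moment conditions underlying such GMM estimation of consumption Euler equations are typically of the form (illustrative notation, not necessarily the paper's exact setup):

```latex
E\!\left[ z_t \left( \beta \left( \frac{C_{t+1}}{C_t} \right)^{-\gamma} R_{t+1} - 1 \right) \right] = 0,
```

where \(\beta\) is the discount factor, \(\gamma\) the coefficient of relative risk aversion, \(R_{t+1}\) a gross asset return, and \(z_t\) instruments in the agent's information set. The paper's point is that when consumption growth has power-law (fat) tails, the sample analogue of this moment need not converge, so the GMM criterion remains random in the limit.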


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号