Similar literature
 20 similar articles found.
1.
This paper studies the joint dynamics of U.S. output and the unemployment rate in a non-linear VAR model. The non-linearity is introduced through a feedback variable that endogenously augments the output lags of the VAR in recessionary phases. Sufficient conditions for the ergodicity of the model, potentially applying to a larger class of threshold models, are provided. The linear specification is rejected in favour of our threshold VAR. However, in estimation the feedback is found to be statistically significant only for unemployment, while it transmits to output through their cross-correlation. This feedback effect from recessions generates important asymmetries in the propagation of shocks, a possible key to interpreting the divergence in measures of persistence in the literature. The regime-dependent persistence also explains the finding that the feedback from recessions exerts a positive effect on the long-run growth rate of the economy, an empirical validation of Schumpeterian macroeconomic theories. Copyright © 2001 John Wiley & Sons, Ltd.
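For intuition, the sketch below simulates and estimates a toy two-regime VAR in which a recession indicator augments the output lags, loosely in the spirit of the specification described above; the threshold rule, lag length, and coefficients are illustrative assumptions rather than the authors' model.

```python
# Illustrative sketch only: a two-regime (threshold) VAR(1) in output growth (y)
# and the unemployment rate (u), where a recession indicator augments the output
# lags. The threshold rule and parameters are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
T = 400
y = np.zeros(T)   # output growth
u = np.zeros(T)   # unemployment rate (deviation from mean)

# Simulate data with a recession feedback term that switches on when y_{t-1} < 0
for t in range(1, T):
    rec = 1.0 if y[t - 1] < 0.0 else 0.0          # assumed threshold rule
    y[t] = 0.5 * y[t - 1] - 0.2 * u[t - 1] - 0.3 * rec * y[t - 1] + rng.normal(0, 1)
    u[t] = 0.8 * u[t - 1] - 0.1 * y[t - 1] + 0.2 * rec + rng.normal(0, 0.5)

# Estimate each equation by OLS, including the regime interaction terms
rec_lag = (y[:-1] < 0.0).astype(float)
X = np.column_stack([np.ones(T - 1), y[:-1], u[:-1], rec_lag * y[:-1], rec_lag])
beta_y, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
beta_u, *_ = np.linalg.lstsq(X, u[1:], rcond=None)
print("output equation:      ", np.round(beta_y, 2))
print("unemployment equation:", np.round(beta_u, 2))
```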

2.
This paper uses state-level data to estimate the effect of government spending shocks during expansions and recessions. By employing a mixed-frequency framework, we are able to include a long span of annual state-level government spending data in our nonlinear quarterly panel VAR model. We find evidence that for the average state the fiscal multiplier is larger during recessions. However, there is substantial heterogeneity across the cross-section. The degree of nonlinearity in the effect of spending shocks is larger in states that are subject to a higher degree of financial frictions. In contrast, states with a prevalence of manufacturing, mining and agricultural industries tend to have multipliers that are more similar across business cycle phases.

3.
Potential output plays a central role in monetary policy and short-term macroeconomic policy making. Yet characterizing the output gap involves a trend-cycle decomposition, and unobserved component estimates are typically subject to large uncertainty at the sample end. An important consequence is that output gap estimates can be quite inaccurate in real time, as recently highlighted by Orphanides and van Norden (2002), and this causes a serious problem for policy makers. For the cases of the US, EU-11 and two EU countries, we evaluate the benefits of using inflation data for improving the accuracy of real-time estimates. Copyright © 2004 John Wiley & Sons, Ltd.

4.
We consider how to estimate the trend and cycle of a time series, such as real gross domestic product, given a large information set. Our approach makes use of the Beveridge–Nelson decomposition based on a vector autoregression, but with two practical considerations. First, we show how to determine which conditioning variables span the relevant information by directly accounting for the Beveridge–Nelson trend and cycle in terms of contributions from different forecast errors. Second, we employ Bayesian shrinkage to avoid overfitting in finite samples when estimating models that are large enough to include many possible sources of information. An empirical application with up to 138 variables covering various aspects of the US economy reveals that the unemployment rate, inflation, and, to a lesser extent, housing starts, aggregate consumption, stock prices, real money balances, and the federal funds rate contain relevant information beyond that in output growth for estimating the output gap, with estimates largely robust to substituting some of these variables or incorporating additional variables.
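As a rough illustration of the Beveridge–Nelson logic, the sketch below computes the BN cycle implied by an estimated VAR(1) on simulated data; it omits the Bayesian shrinkage and the large conditioning set, so it is a minimal sketch of the decomposition rather than the paper's procedure.

```python
# Minimal sketch of a Beveridge-Nelson (BN) cycle from a small VAR on simulated,
# already demeaned data. The variables, lag order, and absence of shrinkage are
# simplifying assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
T, n = 300, 2
X = np.zeros((T, n))                       # [output growth, conditioning variable]
A_true = np.array([[0.3, -0.2], [0.1, 0.6]])
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + rng.normal(0, 1, n)

# Estimate VAR(1) coefficients by OLS
coef, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = coef.T

# BN cycle of the level of output: c_t = -e1' A (I - A)^{-1} x_t,
# where x_t stacks the demeaned VAR variables and e1 selects output growth
e1 = np.array([1.0, 0.0])
long_run = A_hat @ np.linalg.inv(np.eye(n) - A_hat)
bn_cycle = -(X @ long_run.T) @ e1
print("last five BN cycle values:", np.round(bn_cycle[-5:], 2))
```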

5.
By using a dynamic factor model, we can substantially improve the reliability of real-time output gap estimates for the U.S. economy. First, we use a factor model to extract a series for the common component in GDP from a large panel of monthly real-time macroeconomic variables. This series is immune to revisions to the extent that revisions are due to unbiased measurement errors or idiosyncratic news. Second, our model is able to handle the unbalanced arrival of the data. This yields favorable nowcasting properties and thus starting conditions for the filtering of data into a trend and deviations from a trend. Combined with the method of augmenting data with forecasts prior to filtering, this greatly reduces the end-of-sample imprecision in the gap estimate. The increased precision has economic importance for real-time policy decisions and improves real-time inflation forecasts.
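The snippet below conveys the common-component idea with plain principal components on a simulated balanced panel; the paper's dynamic factor model additionally handles mixed frequencies and the ragged edge of real-time data, which this sketch does not attempt.

```python
# Simplified illustration: extract a common component from a balanced panel of
# indicators by principal components. The data are simulated; the full model in
# the paper is a dynamic factor model with unbalanced, real-time data.
import numpy as np

rng = np.random.default_rng(2)
T, N = 240, 50                                   # months x indicators
f = np.cumsum(rng.normal(0, 1, T)) * 0.1         # latent, persistent common factor
f = f - f.mean()
loadings = rng.normal(1.0, 0.3, N)
panel = np.outer(f, loadings) + rng.normal(0, 1, (T, N))   # noisy indicators

# Standardize and take the first principal component as the common component
Z = (panel - panel.mean(0)) / panel.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
factor_hat = U[:, 0] * S[0] / np.sqrt(N)

# The sign of a factor is not identified; align it with the true factor to compare
if np.corrcoef(factor_hat, f)[0, 1] < 0:
    factor_hat = -factor_hat
print("correlation with true factor:", round(np.corrcoef(factor_hat, f)[0, 1], 2))
```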

6.
Direct and indirect translog utility functions provide budget share equations which are both flexible and consistent with the theory of utility maximization. These forms are attractive for modelling consumer behavior. Because of their flexibility they are ideal for testing hypotheses such as additivity of preferences. In this paper we use the translog methodology to analyze U.S. consumption of the four principal categories of meat: fish, beef, poultry, and pork. We decisively reject the hypothesis of additivity. However, further testing for partial additivity reveals that (beef) and (fish, poultry, pork) are additively separable subgroups of meat.
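For reference, budget shares derived from the indirect translog utility function take the standard form below (generic notation, not necessarily the paper's exact parameterization):

$$ w_i \;=\; \frac{\alpha_i + \sum_j \beta_{ij}\,\ln(p_j/M)}{\sum_k \alpha_k + \sum_k \sum_j \beta_{kj}\,\ln(p_j/M)}, $$

where $p_j$ denotes the price of good $j$, $M$ total expenditure, and $w_i$ the budget share of good $i$. Additivity of preferences corresponds to restricting the cross terms $\beta_{ij}$, $i \neq j$, to zero, which is the kind of restriction tested and rejected in the paper.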

7.
This paper proposes the use of forecast combination to improve predictive accuracy in forecasting the U.S. business cycle index, as published by the Business Cycle Dating Committee of the NBER. It focuses on one-step-ahead, out-of-sample monthly forecasts utilising the well-established coincident-indicator and yield-curve models, allowing for dynamics and real-time data revisions. Forecast combinations use log-score and quadratic-score based weights, which change over time. This paper finds that forecast accuracy improves when combining the probability forecasts of both the coincident-indicator model and the yield-curve model, compared to each model's own forecasting performance.
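A minimal sketch of log-score-weighted combination of two probability forecasts is given below; the rolling window, the weighting formula, and the simulated forecasts are assumptions for illustration, not the paper's exact scheme.

```python
# Hedged sketch: combine two probability forecasts of a binary recession indicator
# with weights based on recent log scores. Window length, weighting rule, and the
# toy forecasts are assumptions, not the paper's procedure.
import numpy as np

rng = np.random.default_rng(3)
T = 200
state = (rng.uniform(size=T) < 0.2).astype(float)      # 1 = recession month
p_coincident = np.clip(0.2 + 0.6 * state + rng.normal(0, 0.15, T), 0.01, 0.99)
p_yield = np.clip(0.2 + 0.4 * state + rng.normal(0, 0.2, T), 0.01, 0.99)

def log_score(p, y):
    return y * np.log(p) + (1 - y) * np.log(1 - p)

window = 24
combined = np.full(T, np.nan)
for t in range(window, T):
    # Cumulative log scores over a rolling window determine the combination weight
    s1 = log_score(p_coincident[t - window:t], state[t - window:t]).sum()
    s2 = log_score(p_yield[t - window:t], state[t - window:t]).sum()
    w1 = 1.0 / (1.0 + np.exp(s2 - s1))                 # = exp(s1)/(exp(s1)+exp(s2))
    combined[t] = w1 * p_coincident[t] + (1 - w1) * p_yield[t]

print("avg log score of combination:",
      round(np.mean(log_score(combined[window:], state[window:])), 3))
```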

8.
This paper illustrates, based on an example, the importance of consistency between empirical measurement and the concept of variables in estimated macroeconomic models. Since standard New Keynesian models do not account for demographic trends and sectoral shifts, I propose adjusting the hours worked per capita series used to estimate such models accordingly, to enhance the consistency between the data and the model. Without this adjustment, low-frequency shifts in hours lead to unreasonable trends in the output gap, caused by the close link between hours and the output gap in such models. The retirement wave of baby boomers, for example, lowers US aggregate hours per capita, which leads to erroneous, permanently negative output gap estimates following the Great Recession. After correcting hours for changes in the age composition, the estimated output gap instead closes gradually in the years after the Great Recession.
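One simple way to see the composition effect is to hold age-group population shares fixed at a base period, as in the toy calculation below; the age groups and numbers are invented, and the paper's actual adjustment may well differ.

```python
# Toy illustration of removing a shifting age composition from aggregate hours per
# capita by holding population shares fixed at a base period. All numbers invented.
import numpy as np

age_groups = ["16-24", "25-54", "55-64", "65+"]
# Average weekly hours per person within each age group, two periods
hours = {"2005": np.array([20.0, 32.0, 26.0, 5.0]),
         "2015": np.array([19.5, 31.8, 26.5, 5.2])}
# Population shares of each age group (2015 shares reflect boomer retirement)
shares = {"2005": np.array([0.18, 0.55, 0.15, 0.12]),
          "2015": np.array([0.16, 0.50, 0.17, 0.17])}

for year in ("2005", "2015"):
    raw = hours[year] @ shares[year]            # observed hours per capita
    fixed = hours[year] @ shares["2005"]        # composition held at base period
    print(year, "raw:", round(raw, 2), "age-adjusted:", round(fixed, 2))
```

In this example the raw series falls mainly because the 65+ share rises, while the age-adjusted series is nearly flat, mirroring the argument above.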

9.
This paper is motivated by the claim that promotion probabilities are lower for women than men. Using data from the 1984 and 1989 National Longitudinal Youth Surveys, this paper tests this claim and two related hypotheses concerning training and ability. It is found that females are less likely to be promoted than males, and females receive less training than males. The relationship between promotion and gender varies across occupations, however, suggesting that the alleged glass ceiling faced by women and other minorities in the workplace is not uniform across all labor markets. Copyright © 2002 John Wiley & Sons, Ltd.

10.
Journal of Econometrics, 2005, 127(2): 131–164
We analyze labor productivity in coal mining in the United States using indices of productivity change associated with the concepts of panel data modeling. This approach is valuable when there is extensive heterogeneity in production units, as with coal mines. We find substantial returns to scale for coal mining in all geographical regions, and the estimated fixed effects point to smooth technical progress. We carry out a variety of diagnostic analyses of our basic model and primary modeling assumptions, using recently proposed methods for addressing 'errors-in-variables' and 'weak instrument bias' problems in linear and nonlinear models.
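The snippet below sketches a within (fixed-effects) estimator for a log-linear relation on a simulated panel of heterogeneous units, with returns to scale read off as the sum of input elasticities; the two-input Cobb–Douglas form and the data are illustrative assumptions, and the errors-in-variables and weak-instrument corrections discussed in the paper are not shown.

```python
# Minimal sketch of the within (fixed-effects) estimator on a simulated panel of
# heterogeneous production units. Variables and functional form are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_units, n_years = 50, 20
alpha = rng.normal(0, 0.5, n_units)              # unit-specific fixed effects
labor = rng.normal(0, 1, (n_units, n_years))     # log labor input
capital = rng.normal(0, 1, (n_units, n_years))   # log capital input
output = (alpha[:, None] + 0.7 * labor + 0.5 * capital
          + rng.normal(0, 0.2, (n_units, n_years)))   # log output, true RTS = 1.2

# Within transformation: demean each unit's series to sweep out the fixed effects
def within(x):
    return x - x.mean(axis=1, keepdims=True)

X = np.column_stack([within(labor).ravel(), within(capital).ravel()])
y = within(output).ravel()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("elasticities:", np.round(beta, 2), "returns to scale:", round(beta.sum(), 2))
```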

11.
A government delegates a build-operate-transfer project to a private firm. In the contracting stage, the operating cost is unknown. The firm can increase the likelihood of facing a low cost, rather than a high cost, by exerting costly effort when building the infrastructure. Once the infrastructure is in place, the firm learns the true cost and begins to operate. Under limited commitment, either partner may renege on the contract at any moment thereafter. The novelty with respect to incentive theory is that the contractual length is stipulated in the contract in such a way that it depends on the cost realization. Our main result is that, if the break-up of the partnership is sufficiently costly to the government and/or adverse selection and moral hazard are sufficiently severe, then the efficient contract is not robust to renegotiation unless it has a longer duration when the realized cost is low. This result is at odds with the literature on flexible-term contracts, which recommends a longer duration when operating conditions are unfavorable, albeit in a different setting, where demand is uncertain and the cash flow is exogenous.

12.
13.
A model of U.K. manufacturing employment is estimated in which output expectations data are derived from Confederation of British Industry (CBI) survey information. The output expectations terms are highly significant, and equations including them encompass more traditional models that use current and lagged output. In addition, the equations also successfully predict the sharp falls in manufacturing employment that occurred after 1979. One interesting implication of these equations is that the decline in 'cyclically adjusted' productivity around 1975, and the subsequent improvement around 1980, can be largely explained in terms of a prolonged period of over-optimistic output expectations by U.K. firms.

14.
DEA (data envelopment analysis) is a technique for determining the efficiency frontier (the envelope) of the inputs and outputs of a collection of individual corporations or other productive units. DEA is here employed to estimate the intertemporal productive efficiency of U.S. computer manufacturers, using financial data drawn from earnings statements and balance sheets. The results indicate that a few corporations, including Apple Computer Inc., Compaq Computer Corp., and Seagate Technology, were able to stay at the productive efficiency frontier throughout the time period investigated. But not all successful corporations did; sometimes subefficiency (= disequilibrium) actually goes together with very rapid growth. A new Malmquist-type productivity index is calculated for each corporation, measuring shifts of the estimated intertemporal efficiency frontier.
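A minimal input-oriented, constant-returns DEA sketch is given below, solving one small linear program per unit; the inputs and outputs are invented stand-ins for financial-statement items, and the paper's exact DEA variant and Malmquist index are not reproduced.

```python
# Hedged sketch: input-oriented, constant-returns-to-scale DEA efficiency scores,
# one small linear program per unit. The data are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# rows = firms; inputs (e.g. assets, employees) and outputs (e.g. revenue)
X = np.array([[4.0, 140], [2.0, 90], [5.0, 200], [3.0, 100]])   # inputs
Y = np.array([[8.0], [5.0], [9.0], [7.0]])                      # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for o in range(n):
    # decision variables: theta, lambda_1..lambda_n ; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[o].reshape(m, 1), X.T]       # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((s, 1)), -Y.T]        # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    print(f"firm {o}: efficiency = {res.x[0]:.3f}")
```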

15.
16.
17.
18.
A vector-autoregressive model of actual output and expected output obtained from surveys is used to test for information rigidities and to provide a characterisation of output dynamics that accommodates these information structures. News on actual and expected outputs is decomposed to identify innovations understood to have short-lived effects, and these are used with the model to derive a 'news-adjusted output gap' measure. The approach is applied to US data over 1970q1–2014q2, and the new gap measure is shown to provide a good leading indicator of inflation.
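One standard way to test for information rigidities, sketched below on simulated data, regresses ex-post forecast errors on forecast revisions; this is a common diagnostic rather than necessarily the paper's identification, and the rigidity parameter and data are assumptions.

```python
# Common information-rigidity diagnostic (not necessarily the paper's test):
# regress forecast errors on forecast revisions. Under full information the slope
# is zero; a positive slope points to sticky or noisy information. Simulated data.
import numpy as np

rng = np.random.default_rng(5)
T = 180
actual = rng.normal(2.0, 1.0, T)                        # realised output growth
lam = 0.4                                               # assumed degree of rigidity
forecast = np.zeros(T)
for t in range(1, T):
    # sticky-information style updating: only partial adjustment toward the outcome
    forecast[t] = lam * forecast[t - 1] + (1 - lam) * (actual[t] + rng.normal(0, 0.5))

error = actual[2:] - forecast[2:]                       # forecast error
revision = forecast[2:] - forecast[1:-1]                # forecast revision
X = np.column_stack([np.ones_like(revision), revision])
beta, *_ = np.linalg.lstsq(X, error, rcond=None)
print("slope on revision:", round(beta[1], 2))          # > 0 suggests rigidity
```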

19.
The U.S. Bureau of Labor Statistics (BLS) publishes measures of multifactor productivity, which are patterned after the Solow residual. Inputs of capital must be aggregated with inputs of labor. The theory requires a measure of the capital service flow, a rather abstract notion that is rarely observable. The BLS procedures for capital measurement are summarized, and the rationale for these procedures is explored. Implicit measures of capital services are derived from data on property income and on historical investments, detailed by type of asset.
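For reference, multifactor productivity measures of this kind are computed as a Solow-type residual of the generic form (notation here is mine, not the BLS's exact published formulas):

$$ \Delta \ln A_t \;=\; \Delta \ln Y_t \;-\; s_K\,\Delta \ln K_t \;-\; s_L\,\Delta \ln L_t, $$

where $Y_t$ is output, $K_t$ the flow of capital services, $L_t$ labor input, and $s_K$ and $s_L$ are income shares summing to one; the measurement problem discussed above is obtaining $K_t$ and $s_K$ from observable property income and investment data.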

20.
This paper presents two different estimates of the output loss resulting from allocative inefficiency in the Soviet Union and the United States. Surprisingly, the evidence from our examination of nine industrial sectors during the period 1960–1984 shows only small differences in measured allocative inefficiency between the United States and Soviet economies. Instead of immediately rejecting this result as the product of unreliable data and insurmountable methodological difficulties, we present a plausible explanation for the unexpectedly strong performance of Soviet-type economies in the allocation of labor and capital across sectors. If true, the finding of relatively low levels of resource misallocation implies that the source of poor economic performance in Soviet-type economies must be due to technical inefficiency, slow technological change, and/or production of the wrong mix of outputs.
