Similar Documents
20 similar documents found.
1.
One important question in the DSGE literature is whether we should detrend data when estimating the parameters of a DSGE model using the moment method. It has been common in the literature to detrend data in the same way the model is detrended. Doing so works relatively well with linear models, in part because in such cases the information that disappears from the data is usually related to the parameters that also disappear from the detrended model. Unfortunately, in highly nonlinear DSGE models, parameters rarely disappear from detrended models, but information does disappear from the detrended data. Using a simple real business cycle model, we show that both the moment-method estimators of parameters and the estimated responses of endogenous variables to a technological shock can be seriously inaccurate when detrended data are used in the estimation process. Using a dynamic stochastic general equilibrium model and U.S. data, we show that detrending the data before estimating the parameters may result in a seriously misleading response of endogenous variables to monetary shocks. We suggest building the moment conditions using raw data, irrespective of the trend observed in the data.

2.
Stakeholders argue that the information barrier is the major obstacle restricting firms from adopting Energy Efficiency Technologies (EETs) in Europe. The present work examines the processes of information gathering with regard to EETs and explores the factors affecting the level of information acquired by EET adopters. Empirical evidence is provided by a data set of Greek manufacturing firms that have adopted EETs. In conclusion, we propose policy measures that could promote the adoption of EETs by overcoming the information barrier.

3.
Markku Lehmus, Economic Modelling, 2009, 26(5): 926–933
This paper provides a review of the empirical macroeconomic model (EMMA) built for forecasting purposes at the Finnish Labour Institute for Economic Research. The model is quite small, consisting of 71 endogenous and 70 exogenous variables. The number of behavioural equations is 15. The basis of the model is Keynesian, although the model has some novel properties: the treatment of the supply side and of prices follows the neoclassical synthesis. The parameters of the model are estimated from quarterly data covering the years 1990–2005. The model also contains a Kalman-filtered variable to control for the deep recession in Finland at the beginning of the 1990s. This special feature brings the model closer to newer calibrated models.

4.
Predicting life expectancy has become of utmost importance in society. Pension providers, insurance companies, government bodies and individuals in the developed world have a vested interest in understanding how long people will live. This desire to better understand life expectancy has resulted in an explosion of stochastic mortality models, many of which identify linear trends in mortality rates over time. In using such models for forecasting, we rely on the assumption that the direction of the linear trend (determined from the data used for fitting) will not change in the future; recent literature has begun to question this assumption. In this article, we carry out a comprehensive investigation of these types of models using male and female data from 30 countries, applying the theory of structural breaks to identify changes in the extracted trends over time. We find that structural breaks are present in a substantial number of cases, that they are more prevalent in male data than in female data, that the introduction of additional period factors into the model reduces their presence, and that allowing for changes in the trend improves the fit and forecast substantially.
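The trend-break idea described above can be sketched as a toy grid search: simulate a period index whose slope changes at an unknown date, then pick the break date that minimizes the combined sum of squared residuals of two separate linear-trend fits. All numbers and the simulation design here are illustrative assumptions, not the article's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mortality period index: slope -1.0, then -2.0 after a break.
T = 100
true_break = 60
t = np.arange(T)
slope = np.where(t < true_break, -1.0, -2.0)
kappa = np.cumsum(slope) + rng.normal(0.0, 1.0, T)

def ssr_linear(y):
    """Sum of squared residuals from an OLS linear-trend fit."""
    x = np.arange(len(y))
    resid = y - np.polyval(np.polyfit(x, y, 1), x)
    return float(resid @ resid)

# Require a minimum segment length so both regressions are well posed.
candidates = list(range(10, T - 10))
ssr = [ssr_linear(kappa[:b]) + ssr_linear(kappa[b:]) for b in candidates]
best_break = candidates[int(np.argmin(ssr))]
print("estimated break date:", best_break)
```

With a slope change this pronounced, the minimum-SSR date lands close to the true break; in real mortality data the article's structural-break tests must also account for the uncertainty in the extracted trend.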

5.
In this paper we address the following question: To what extent is the hypothesis that voters vote “ideologically” (i.e., they always vote for the candidate who is ideologically “closest” to them) testable or falsifiable? We show that using data only on how individuals vote in a single election, the hypothesis that voters vote ideologically is irrefutable, regardless of the number of candidates competing in the election. On the other hand, using data on how the same individuals vote in multiple elections, the hypothesis that voters vote ideologically is potentially falsifiable, and we provide general conditions under which the hypothesis can be tested.

6.
Construction and Application of a Credit Scoring Model for Personal Loans at Commercial Banks
Liu Liya, 《财经研究》 (Journal of Finance and Economics), 2007, 33(2): 26–36
Against the backdrop of global economic and financial integration and the intense competition it has brought, personal lending will be a key area in the current and future development of China's banking industry. To this end, the article first constructs an overall analytical framework for product-level credit scoring models based on four elements: the borrower, the loan scheme, the loan purpose and risk mitigation, and applies the framework to residential mortgage products. On this basis, taking into account the current state of China's banking industry and the practical feasibility of scoring models, it designs a hybrid residential-mortgage credit scoring model in which scores from expert judgment and from a quantitative model cross-validate each other, carries out partial validation using sample loan data collected from a joint-stock commercial bank, and outlines directions for further research.

7.
Choice behavior is typically evaluated by assuming that the data is generated by one latent decision-making process or another. What if there are two (or more) latent decision-making processes generating the observed choices? Some choices might then be better characterized as being generated by one process, and other choices by the other process. A finite mixture model can be used to estimate the parameters of each decision process while simultaneously estimating the probability that each process applies to the sample. We consider the canonical case of lottery choices in a laboratory experiment and assume that the data is generated by expected utility theory and prospect theory decision rules. We jointly estimate the parameters of each theory as well as the fraction of choices characterized by each. The methodology provides the wedding invitation, and the data consummates the ceremony followed by a decent funeral for the representative agent model that assumes only one type of decision process. The evidence suggests support for each theory, and goes further to identify under what demographic domains one can expect to see one theory perform better than the other. We therefore propose a reconciliation of the debate over two of the dominant theories of choice under risk, at least for the tasks and samples we consider. The methodology is broadly applicable to a range of debates over competing theories generated by experimental and non-experimental data.
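A stylized sketch of the finite-mixture machinery described above, with made-up choice probabilities rather than the paper's EUT and prospect theory rules: each observed binary choice comes from latent process A with probability π, otherwise from process B, and the mixing weight is recovered by maximizing the mixture likelihood. For clarity the two processes' choice probabilities are treated as known; the paper estimates them jointly with π.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Assumed (illustrative) choice probabilities and mixing weight.
p_a, p_b, pi_true, n = 0.8, 0.3, 0.6, 5000
from_a = rng.random(n) < pi_true
p_choice = np.where(from_a, p_a, p_b)
choices = (rng.random(n) < p_choice).astype(float)

def neg_loglik(pi):
    # Mixture likelihood: each observation is a pi-weighted average of the
    # two processes' Bernoulli densities.
    like = pi * (p_a ** choices) * ((1 - p_a) ** (1 - choices)) \
         + (1 - pi) * (p_b ** choices) * ((1 - p_b) ** (1 - choices))
    return -np.log(like).sum()

pi_hat = minimize_scalar(neg_loglik, bounds=(0.01, 0.99), method="bounded").x
print("estimated mixing weight:", round(pi_hat, 3))
```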

8.
Applied Economics, 2012, 44(21): 2667–2677
Childhood obesity and food insecurity are major public health concerns in the United States and other developed countries. Research on the relationship between the two has provided mixed results across a variety of data sets and empirical methods. Common throughout this research, however, is the use of parametric frameworks for empirical analyses. This study moves beyond parametric methods by examining the relationship between childhood obesity and food insecurity among low-income children with nonparametric regression techniques. We examine data from the Child Development Supplement (CDS) of the Panel Study of Income Dynamics (PSID), a nationally representative data set from the US. Consistent with recent work, our parametric analyses indicate that there is no statistically significant relationship between childhood obesity and food insecurity. In contrast, our nonparametric results indicate that the probability of being obese varies markedly with the level of food insecurity being experienced by the child. Moreover, this relationship differs across relevant subgroups including those defined by gender, race/ethnicity and income. Fully understanding the relationship between childhood obesity and food insecurity has significant policy implications.
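The contrast above, where a parametric model finds nothing but a nonparametric one does, can be illustrated with a minimal Nadaraya-Watson kernel regression on simulated data (the U-shaped relationship and all numbers are assumptions for illustration, not the study's estimates). A nonmonotone relationship averages out to a near-zero OLS slope, while the kernel fit tracks the curve.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated outcome that is U-shaped in a centered "food insecurity" score.
n = 400
x = rng.uniform(-1.0, 1.0, n)
y = x ** 2 + rng.normal(0.0, 0.05, n)

def nw_fit(x0, x, y, h=0.1):
    """Gaussian-kernel local average of y at point x0 with bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return float(w @ y / w.sum())

ols_slope = np.polyfit(x, y, 1)[0]
print("OLS slope:", round(ols_slope, 3))            # near zero by symmetry
print("kernel fit at 0.0:", round(nw_fit(0.0, x, y), 3))
print("kernel fit at 0.9:", round(nw_fit(0.9, x, y), 3))
```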

9.
The paper provides a comparison of alternative univariate time series models that are advocated for the analysis of seasonal data. Consumption and income series from (West) Germany, the United Kingdom, Japan and Sweden are investigated. The performance of competing models in forecasting is used to assess the adequacy of a specific model. To account for nonstationarity, first and annual differences of the series are investigated. In addition, time series models assuming periodic integration are evaluated. To describe the stationary dynamics, (standard) time-invariant parametrizations are compared with periodic time series models conditioning the data-generating process on the season. Periodic models improve the in-sample fit considerably, but in most cases under study this model class involves a loss in ex-ante forecasting relative to nonperiodic models. Inference on unit roots indicates that the nonstationary characteristics of consumption and income data may differ. For German and Swedish data, forecasting exercises yield a unique recommendation of unit roots in consumption and income data, which is an important (initial) result for multivariate analysis. Time series models assuming periodic integration are parsimonious to specify but often involve correlated one-step-ahead forecast errors. First version received: April 1996/Final version received: January 1998

10.
Rangan Gupta, Applied Economics, 2013, 45(33): 4677–4697
This article considers the ability of large-scale (involving 145 fundamental variables) time-series models, estimated by dynamic factor analysis and Bayesian shrinkage, to forecast real house price growth rates of the four US census regions and the aggregate US economy. Besides the standard Minnesota prior, we also use additional priors that constrain the sum of coefficients of the VAR models. We compare 1- to 24-months-ahead forecasts of the large-scale models over an out-of-sample horizon of 1995:01–2009:03, based on an in-sample of 1968:02–1994:12, relative to a random walk model, a small-scale VAR model comprising just the five real house price growth rates and a medium-scale VAR model containing 36 of the 145 fundamental variables besides the five real house price growth rates. In addition to the forecast comparison exercise across small-, medium- and large-scale models, we also look at the ability of the ‘optimal’ model (i.e. the model that produces the minimum average mean squared forecast error) for a specific region in predicting ex ante real house prices (in levels) over the period of 2009:04 to 2012:02. Factor-based models (classical or Bayesian) perform the best for the North East, Mid-West and West census regions and the aggregate US economy, and perform equally well relative to a small-scale VAR for the South region. The ‘optimal’ factor models also tend to predict the downward trend in the data when we conduct an ex ante forecasting exercise. Our results highlight the importance of the information content in a large number of fundamentals in predicting house prices accurately.

11.
Several studies using observational data suggest that ethnic discrimination increases in downturns of the economy. We investigate whether ethnic discrimination depends on labour market tightness using data from correspondence studies. We utilize three correspondence studies of the Swedish labour market and two different measures of labour market tightness. These two measures produce qualitatively similar results and, contrary to the observational studies, suggest that ethnic discrimination in hiring decreases in downturns of the economy.

12.
Nonlinear models with panel data
Panel data play an important role in empirical economics. With panel data one can answer questions about microeconomic dynamic behavior that could not be answered with cross-sectional data. Panel data techniques are also useful for analyzing cross-sectional data with grouping. This paper discusses some issues related to specification and estimation of nonlinear models using panel data. JEL Classification: C230. The research behind this paper was supported by the National Science Foundation, the Gregory C. Chow Econometric Research Program at Princeton University, and the Danish National Research Foundation (through CAM at the University of Copenhagen). The author thanks Ekaterini Kyriazidou, Hong Li, Marina Sallustro, and the editors for helpful suggestions.

13.
Output Variability and Economic Growth: the Japanese Case
We examine the empirical relationship between output variability and output growth using quarterly data for the 1961–2000 period for the Japanese economy. Using three different specifications of GARCH models, namely Bollerslev's model, Taylor/Schwert's model, and Nelson's EGARCH model, we obtain two important results. First, we find robust evidence that the “in-mean” coefficient is not statistically significant. This evidence is consistent with Speight's (1999) analysis of UK data and implies that output variability does not affect output growth. In other words, this finding supports several real business cycle theories of economic fluctuations. Second, we find no evidence of asymmetry between output variability and growth, a result consistent with Hamori (2000).
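The building block of the variability measures above is Bollerslev's GARCH(1,1) recursion, h_t = ω + α·e²_{t-1} + β·h_{t-1}, whose fitted conditional variance enters the growth equation in an in-mean specification. A minimal sketch, with assumed parameters treated as known rather than estimated as in the paper, simulates such a process and recovers its unconditional variance ω/(1-α-β):

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed GARCH(1,1) parameters (illustrative, not the paper's estimates).
omega, alpha, beta = 0.1, 0.1, 0.8
T = 500
e = np.zeros(T)  # shocks (e.g. demeaned output growth)
h = np.zeros(T)  # conditional variances
h[0] = omega / (1 - alpha - beta)  # start at the unconditional variance
e[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, T):
    h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
    e[t] = np.sqrt(h[t]) * rng.normal()

uncond_var = omega / (1 - alpha - beta)
print("unconditional variance:", uncond_var)
print("sample variance of e:", round(float(e.var()), 2))
```

In the in-mean variant, sqrt(h_t) (or h_t) would be added as a regressor in the growth equation; the paper's first result is that its coefficient is insignificant.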

14.
We analyze the effect of heteroskedasticity on log-linear aggregation and its implications for the pooling of cross-section and aggregate time series data. An empirical analysis of food consumption, based on US family budget survey data and aggregate time series data, illustrates the results.

15.
This paper tests the predictive value of subjective labour supply data for adjustments in working hours over time. The idea is that if subjective labour supply data help to predict working hours, the subjective data must contain at least some information on individual labour supply preferences. In this paper, I formulate a partial-adjustment model that allows for measurement error in the observed variables. Applying estimation methods that are developed for dynamic panel data models, I find evidence for a predictive power of subjective labour supply data concerning desired working hours in the German Socio-Economic Panel 1988–1995. I wish to thank John Haisken-DeNew, Astrid Kunze, Markus Pannenberg, Winfried Pohlmeier, Frank Windmeijer, Rainer Winkelmann, my former colleagues at IZA and two anonymous referees for their valuable comments. The author gratefully acknowledges DIW for providing the data. First version received: December 2001/Final version received: December 2003

16.
The structural Quantal Response Equilibrium (QRE) generalizes the Nash equilibrium by augmenting payoffs with random elements that are not removed in some limit. This approach has been widely used both as a theoretical framework to study comparative statics of games and as an econometric framework to analyze experimental and field data. The framework of structural QRE is flexible: it can be applied to arbitrary finite games and incorporate very general error structures. Restrictions on the error structure are needed, however, to place testable restrictions on the data (Haile et al., 2004). This paper proposes a reduced-form approach, based on quantal response functions that replace the best-response functions underlying the Nash equilibrium. We define a regular QRE as a fixed point of quantal response functions that satisfies four axioms: continuity, interiority, responsiveness, and monotonicity. We show that these conditions are not vacuous and demonstrate with an example that they imply economically sensible restrictions on data consistent with laboratory observations. The reduced-form approach allows for a richer set of regular quantal response functions, which has proven useful for estimation purposes. JEL Classification: D62, C73
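One familiar quantal response function satisfying the four axioms above is the logit rule, where each player's mixing probability is a logistic function of the expected-payoff difference. A minimal sketch, using symmetric matching pennies and an assumed precision λ = 1 (both choices are illustrative, not from the paper), finds the QRE as a fixed point by simple iteration:

```python
import numpy as np

# Logit quantal response for symmetric matching pennies: the row player
# wins (payoff 1) on a match, the column player wins on a mismatch.
lam = 1.0  # logit precision; as lam grows, play approaches best response

def qre_fixed_point(lam, iters=500):
    p, q = 0.9, 0.2  # prob. row plays Heads, prob. column plays Heads
    for _ in range(iters):
        # Row's expected-payoff difference between Heads and Tails: 2q - 1.
        d_row = 2.0 * q - 1.0
        # Column wants a mismatch, so the difference is 1 - 2p.
        d_col = 1.0 - 2.0 * p
        # Simultaneous logit updates of both players' mixing probabilities.
        p = 1.0 / (1.0 + np.exp(-lam * d_row))
        q = 1.0 / (1.0 + np.exp(-lam * d_col))
    return p, q

p, q = qre_fixed_point(lam)
print(round(p, 3), round(q, 3))  # converges to the unique QRE (0.5, 0.5)
```

In this symmetric game the logit QRE coincides with the Nash mixed equilibrium; with asymmetric payoffs the QRE would instead be pulled away from Nash, which is the comparative-statics content the paper's regularity axioms discipline.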

17.
This paper investigates the effects of data transformation on nonlinearity by means of a simulation analysis based on empirical threshold models for the unemployment rate. Unemployment rate series are particularly suitable because they exhibit a number of interesting features: business cycle asymmetries, persistence, long memory and seasonality. The main finding is that evidence of nonlinearity is not independent of the form in which data are analysed and that most data transformations result in a loss of nonlinearity. This is particularly the case for seasonal adjustment transformations, which remove not only seasonality but also nonlinear features, as shown for the commonly applied Census X12 method.

18.
Given the existence of nonnormality and nonlinearity in the data-generating process of real house price returns over the period of 1831–2013, this article compares the ability of various univariate copula models, relative to standard benchmarks (naive and autoregressive models), in forecasting real US house prices over the annual out-of-sample period of 1874–2013, based on an in-sample of 1831–1873. Overall, our results provide overwhelming evidence in favour of the copula models (Normal, Student's t, Clayton, Frank, Gumbel, Joe and Ali-Mikhail-Haq) relative to linear benchmarks, and especially for the Student's t-copula, which outperforms all other models both in terms of in-sample and out-of-sample predictability results. Our results highlight the importance of accounting for nonnormality and nonlinearity in the data-generating process of real house price returns for the US economy for nearly two centuries of data.
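The advantage of a fat-tailed specification like the Student's t over a normal benchmark, central to the result above, can be sketched on simulated data (the sample here is synthetic, not the article's 1831–2013 house price series): fit both densities by maximum likelihood and compare log-likelihoods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic fat-tailed "returns": Student's t with 4 degrees of freedom.
returns = stats.t.rvs(df=4, scale=0.02, size=1000, random_state=rng)

# Maximum-likelihood fits of the t and normal densities.
df_hat, loc_t, scale_t = stats.t.fit(returns)
loc_n, scale_n = stats.norm.fit(returns)

ll_t = stats.t.logpdf(returns, df_hat, loc_t, scale_t).sum()
ll_n = stats.norm.logpdf(returns, loc_n, scale_n).sum()
print("t log-likelihood exceeds normal:", ll_t > ll_n)
```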

19.
This paper draws on the author's experience as a research fellow on the Southampton Econometric Model Building Unit in suggesting that there is a case for large-scale, structural, disaggregated econometric models. This is particularly so if, like SEMBU, the aim is to improve both economic theory and economic policy advice rather than simply to forecast actual futures. It is argued that for such an approach to model building to be successful, funding agencies should adopt a long-term view and the Central Statistical Office and other data generating bodies should become much more closely involved with the model-building unit. This latter cooperation could prove mutually beneficial.

20.
Robots are the most important innovation to have affected the production process in the last three decades. Thanks to the latest advances in technology, they have been able to perform an ever-increasing number of tasks, eventually replacing human work within the whole production process. However, because of the scarcity of suitable data, the extent of this potentially disruptive process is not fully assessed. This paper makes up for the lack of empirical evidence on the effect of robotization on labour dislocation using data collected by the International Federation of Robotics (IFR) on the number of robots installed in the different manufacturing industries of 16 OECD countries over the period 2011–2016. We show that at the industry level a 1% growth in the number of robots reduces the growth rate of worked hours by 0.16, as well as the selling prices and the real values of the compensations of employees. Moreover, we show that a given sector is more likely to be robotized when it is expanding both in terms of relative prices and employee compensations. We conclude that, at least in the selected countries, the introduction of robots plays a key role in slowing down the growth of human labour and compensation.

