Similar Literature
20 similar documents found.
1.
This paper demonstrates that the class of conditionally linear and Gaussian state-space models offers a general and convenient framework for simultaneously handling nonlinearity, structural change and outliers in time series. Many popular nonlinear time series models, including threshold, smooth transition and Markov-switching models, can be written in state-space form. It is then straightforward to add components that capture parameter instability and intervention effects. We advocate a Bayesian approach to estimation and inference, using an efficient implementation of Markov Chain Monte Carlo sampling schemes for such linear dynamic mixture models. The general modelling framework and the Bayesian methodology are illustrated by means of several examples. An application to quarterly industrial production growth rates for the G7 countries demonstrates the empirical usefulness of the approach.
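The conditionally linear Gaussian structure means that, conditional on the mixture indicators, estimation reduces to standard Kalman filtering. Below is a minimal sketch of that recursion for a univariate observation series; the model and all parameter values in the example are hypothetical, not those of the paper.

```python
import numpy as np

def kalman_filter(y, T, Z, Q, H, a0, P0):
    """Kalman filter for the linear Gaussian state-space model with a
    univariate observation:
        y_t = Z a_t + e_t,        e_t ~ N(0, H)
        a_t = T a_{t-1} + u_t,    u_t ~ N(0, Q).
    Returns filtered state means and the Gaussian log-likelihood."""
    n, m = len(y), len(a0)
    a, P = a0.astype(float), P0.astype(float)
    filtered, loglik = np.zeros((n, m)), 0.0
    for t in range(n):
        a = T @ a                          # state prediction
        P = T @ P @ T.T + Q
        v = y[t] - Z @ a                   # one-step-ahead forecast error
        F = Z @ P @ Z + H                  # forecast error variance
        K = P @ Z / F                      # Kalman gain
        a = a + K * v                      # state update
        P = P - np.outer(K, Z @ P)
        filtered[t] = a
        loglik += -0.5 * (np.log(2.0 * np.pi * F) + v**2 / F)
    return filtered, loglik

# Hypothetical example: AR(1) state observed with noise
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.8 * x[t-1] + rng.normal(0.0, np.sqrt(0.5))
y = x + rng.normal(0.0, np.sqrt(0.3), size=200)
filt, ll = kalman_filter(y, T=np.array([[0.8]]), Z=np.array([1.0]),
                         Q=np.array([[0.5]]), H=0.3,
                         a0=np.zeros(1), P0=np.eye(1))
```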

2.
This paper introduces a quasi maximum likelihood approach based on the central difference Kalman filter to estimate non‐linear dynamic stochastic general equilibrium (DSGE) models with potentially non‐Gaussian shocks. We argue that this estimator can be expected to be consistent and asymptotically normal for DSGE models solved up to third order. These properties are verified in a Monte Carlo study for a DSGE model solved to second and third order with structural shocks that are Gaussian, Laplace distributed, or display stochastic volatility.

3.
Skepticism toward traditional identifying assumptions based on exclusion restrictions has led to a surge in the use of structural VAR models in which structural shocks are identified by restricting the sign of the responses of selected macroeconomic aggregates to these shocks. Researchers commonly report the vector of pointwise posterior medians of the impulse responses as a measure of central tendency of the estimated response functions, along with pointwise 68% posterior error bands. It can be shown that this approach cannot be used to characterize the central tendency of the structural impulse response functions. We propose an alternative method of summarizing the evidence from sign-identified VAR models designed to enhance their practical usefulness. Our objective is to characterize the most likely admissible model(s) within the set of structural VAR models that satisfy the sign restrictions. We show how the set of most likely structural response functions can be computed from the posterior mode of the joint distribution of admissible models both in the fully identified and in the partially identified case, and we propose a highest-posterior density credible set that characterizes the joint uncertainty about this set. Our approach can also be used to resolve the long-standing problem of how to conduct joint inference on sets of structural impulse response functions in exactly identified VAR models. We illustrate the differences between our approach and the traditional approach for the analysis of the effects of monetary policy shocks and of the effects of oil demand and oil supply shocks.
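For context, the admissible set referred to above is typically constructed by rotating a Cholesky factor of the reduced-form error covariance and retaining draws whose impact responses satisfy the sign restrictions. The sketch below implements only this standard construction (with hypothetical names and a made-up two-variable pattern), not the authors' posterior-mode summary of the set.

```python
import numpy as np

def admissible_impact_matrices(sigma_u, sign_pattern, n_draws=1000, seed=0):
    """Draw candidate structural impact matrices B = chol(Sigma_u) @ Q with Q
    orthogonal, keeping those whose impact responses match the requested sign
    pattern (+1, -1, or 0 for unrestricted). B @ B.T = Sigma_u by construction."""
    rng = np.random.default_rng(seed)
    n = sigma_u.shape[0]
    C = np.linalg.cholesky(sigma_u)
    kept = []
    for _ in range(n_draws):
        Q, R = np.linalg.qr(rng.standard_normal((n, n)))
        Q = Q @ np.diag(np.sign(np.diag(R)))          # normalize the QR draw
        B = C @ Q
        mask = sign_pattern != 0
        if np.all(np.sign(B[mask]) == sign_pattern[mask]):
            kept.append(B)
    return kept

# Hypothetical 2-variable example: shock 1 raises both variables on impact
sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
pattern = np.array([[1, 0], [1, 0]])
models = admissible_impact_matrices(sigma, pattern)
```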

4.
In this paper, I interpret a time series spatial model (T-SAR) as a constrained structural vector autoregressive (SVAR) model. Based on these restrictions, I propose a minimum distance approach to estimate the (row-standardized) network matrix and the overall network influence parameter of the T-SAR from the SVAR estimates. I also develop a Wald-type test to assess the distance between these two models. To implement the methodology, I discuss machine learning methods as one possible identification strategy of SVAR models. Finally, I illustrate the methodology through an application to volatility spillovers across major stock markets using daily realized volatility data for 2004–2018.
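A minimal sketch of the classical minimum distance step is given below: structural parameters are chosen so that a known mapping of them matches the unrestricted SVAR estimates as closely as possible. The mapping, weight matrix and all numbers are hypothetical stand-ins, not the paper's actual T-SAR restrictions.

```python
import numpy as np
from scipy.optimize import minimize

def min_distance(theta_hat, W, g, phi0):
    """Classical minimum distance: pick structural parameters phi so that the
    mapped parameters g(phi) are as close as possible to the unrestricted
    estimates theta_hat in the metric of the weight matrix W."""
    def objective(phi):
        d = theta_hat - g(phi)
        return d @ W @ d
    return minimize(objective, phi0, method="Nelder-Mead")

# Hypothetical T-SAR-style mapping: the SVAR slope vector equals rho * w for a
# known row-standardized network row w, with rho the influence parameter
w = np.array([0.0, 0.6, 0.4])
theta_hat = np.array([0.05, 0.31, 0.18])        # illustrative SVAR estimates
res = min_distance(theta_hat, np.eye(3), lambda p: p[0] * w, np.array([0.5]))
```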

5.
In this paper we introduce a calibration procedure for validating agent-based models. Starting from the well-known financial model of Brock and Hommes (1998), we show how an appropriate calibration enables the model to describe price time series. We formulate the calibration problem as a nonlinear constrained optimization that can be solved numerically via a gradient-based method. The calibration results show that the simplest version of the Brock and Hommes model, with two trader types, fundamentalists and trend-followers, nicely replicates the price series of four different market indices: the S&P 500, the Euro Stoxx 50, the Nikkei 225 and the CSI 300. We show how the parameter values of the calibrated model are important in interpreting trader behavior in the different markets investigated. These parameters are then used for price forecasting. To further improve the forecasting, we modify our calibration approach by increasing the trader information set. Finally, we show how this new approach improves the model's ability to predict market prices.
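The sketch below illustrates the flavor of such a calibration: simulate a stylized two-type Brock and Hommes price path and choose parameters minimizing the squared distance to an observed series. It simplifies the fitness measure to past squared forecast errors and uses a derivative-free optimizer rather than the paper's constrained gradient-based method; all names and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_bh(g, v, beta, x0, x1, n, R=1.01):
    """Stylized two-type Brock-Hommes dynamics for the price deviation x_t
    from the fundamental: fundamentalists forecast v*x[t-1] (mean reversion,
    0 <= v < 1), trend-followers forecast g*x[t-1] (extrapolation, g > 1).
    Type fractions follow a logit rule on a simplified fitness measure,
    the negative squared forecast error from the previous period."""
    x = np.zeros(n)
    x[0], x[1] = x0, x1
    for t in range(2, n):
        f1, f2 = v * x[t-1], g * x[t-1]              # type forecasts
        u1 = -(x[t-1] - v * x[t-2]) ** 2             # past accuracy, type 1
        u2 = -(x[t-1] - g * x[t-2]) ** 2             # past accuracy, type 2
        n1 = 1.0 / (1.0 + np.exp(beta * (u2 - u1)))  # logit share of type 1
        x[t] = (n1 * f1 + (1.0 - n1) * f2) / R       # market-clearing price
    return x

def calibrate(series):
    """Fit (g, v, beta) by minimizing the mean squared deviation between the
    simulated and observed series: a derivative-free stand-in for the
    constrained gradient-based calibration described in the abstract."""
    def loss(p):
        sim = simulate_bh(p[0], p[1], p[2], series[0], series[1], len(series))
        return np.mean((sim - series) ** 2)
    return minimize(loss, np.array([1.2, 0.5, 1.0]), method="Nelder-Mead")
```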

6.
A popular macroeconomic forecasting strategy utilizes many models to hedge against instabilities of unknown timing; see (among others) Stock and Watson (2004), Clark and McCracken (2010), and Jore et al. (2010). Existing studies of this forecasting strategy exclude dynamic stochastic general equilibrium (DSGE) models, despite the widespread use of these models by monetary policymakers. In this paper, we use the linear opinion pool to combine inflation forecast densities from many vector autoregressions (VARs) and a policymaking DSGE model. The DSGE receives a substantial weight in the pool (at short horizons) provided the VAR components exclude structural breaks. In this case, the inflation forecast densities exhibit calibration failure. Allowing for structural breaks in the VARs reduces the weight on the DSGE considerably, but produces well-calibrated forecast densities for inflation.
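A linear opinion pool combines component forecast densities with nonnegative weights that sum to one, typically chosen to maximize the historical log predictive score. A minimal sketch with Gaussian component densities follows (hypothetical interface, not the authors' code).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def log_score(weights, means, sds, y):
    """Average log predictive score of a linear opinion pool with Gaussian
    component densities (rows: periods, columns: component models)."""
    dens = norm.pdf(y[:, None], loc=means, scale=sds)  # component densities
    pool = dens @ weights                              # pooled density per period
    return np.mean(np.log(pool + 1e-300))              # floor guards log(0)

def optimal_pool_weights(means, sds, y):
    """Nonnegative weights summing to one that maximize the log score of the
    pool; an illustrative version of the combination step."""
    k = means.shape[1]
    res = minimize(lambda w: -log_score(w, means, sds, y),
                   np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x
```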

7.
Forecasting researchers, with few exceptions, have ignored the current major forecasting controversy: global warming and the role of climate modelling in resolving this challenging topic. In this paper, we take a forecaster's perspective in reviewing established principles for validating the atmospheric-ocean general circulation models (AOGCMs) used in most climate forecasting, and in particular by the Intergovernmental Panel on Climate Change (IPCC). Such models should reproduce the behaviours characterising key model outputs, such as global and regional temperature changes. We develop various time series models and compare them with forecasts based on one well-established AOGCM from the UK Hadley Centre. Time series models perform strongly, and structural deficiencies in the AOGCM forecasts are identified using encompassing tests. Regional forecasts from various GCMs had even more deficiencies. We conclude that combining standard time series methods with the structure of AOGCMs may result in higher forecast accuracy. The methodology described here has implications for improving AOGCMs and for the effectiveness of environmental control policies which are focussed on carbon dioxide emissions alone. Critically, the forecast accuracy in decadal prediction has important consequences for environmental planning, so its improvement through this multiple modelling approach should be a priority.
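The encompassing tests mentioned above can be run as a forecast-encompassing regression of the outcome on the competing forecasts: if one forecast's coefficient is indistinguishable from zero, the other forecast encompasses it. A bare-bones OLS version is sketched below (hypothetical interface).

```python
import numpy as np

def encompassing_regression(y, f1, f2):
    """Forecast-encompassing regression y_t = a + b1*f1_t + b2*f2_t + e_t.
    If b2 is statistically zero, forecast f1 encompasses f2; a clearly
    nonzero b2 indicates f2 carries information that f1 misses. Returns OLS
    coefficients and conventional standard errors."""
    X = np.column_stack([np.ones(len(y)), f1, f2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se
```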

8.
We consider efficient methods for likelihood inference applied to structural models. In particular, we introduce a particle filter method which concentrates upon disturbances in the Markov state of the approximating solution to the structural model. A particular feature of such models is that the conditional distribution of interest for the disturbances is often multimodal. We provide a fast and effective method for approximating such distributions. We estimate a neoclassical growth model using this approach. An asset pricing model with persistent habits is also considered. The methodology we employ allows many fewer particles to be used than alternative procedures for a given precision.
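For reference, the textbook bootstrap particle filter that the paper's disturbance-based, multimodality-aware filter improves upon looks roughly as follows; the callbacks and example values are hypothetical.

```python
import numpy as np

def bootstrap_particle_filter(y, f_sim, g_logpdf, init_sim, n_particles=1000,
                              seed=0):
    """Textbook bootstrap particle filter: propagate particles through the
    state transition f_sim, weight them with the measurement log-density
    g_logpdf, resample, and accumulate the log-likelihood estimate."""
    rng = np.random.default_rng(seed)
    x = init_sim(rng, n_particles)
    loglik = 0.0
    for t in range(len(y)):
        x = f_sim(rng, x)                       # propagate particles
        logw = g_logpdf(y[t], x)                # importance weights (logs)
        m = logw.max()
        w = np.exp(logw - m)                    # stabilized weights
        loglik += m + np.log(w.mean())          # incremental likelihood
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                              # multinomial resampling
    return loglik

# Hypothetical linear-Gaussian example (could be checked against a Kalman filter)
ll = bootstrap_particle_filter(
    y=np.random.default_rng(1).standard_normal(50),
    f_sim=lambda rng, x: 0.8 * x + rng.standard_normal(x.shape[0]),
    g_logpdf=lambda yt, x: -0.5 * (np.log(2 * np.pi) + (yt - x) ** 2),
    init_sim=lambda rng, n: rng.standard_normal(n))
```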

9.
The empirical analysis of the economic interactions between factors of production, output and corresponding prices has received much attention over the last two decades. Most contributions in this area have agreed on the neoclassical principle of a representative optimizing firm and typically use theory-based structural equation models (SEM). A popular alternative to SEM is the vector autoregression (VAR) methodology. The most recent attempts to link the SEM approach with VAR analysis in the area of factor demands concentrate on single-equation models, whereas no effort has been devoted to compare these alternative approaches when a firm is assumed to face a multi-factor technology and to decide simultaneously the optimal quantity for each input. This paper bridges this gap. First, we illustrate how the SEM and the VAR approaches can both represent valid alternatives to model systems of dynamic factor demands. Second, we show how to apply both methodologies to estimate dynamic factor demands derived from a cost-minimizing capital-labour-energy-materials (KLEM) technology with adjustment costs (ADC) on the quasi-fixed capital factor. Third, we explain how to use both models to calculate some widely accepted indicators of the production structure of an economic sector, such as price and quantity elasticities, and alternative measures of ADC. In particular, we propose and discuss some theoretical and empirical justifications of the differences between observed elasticities, measures of ADC, and the assumption of exogeneity of output and/or input prices. Finally, we offer some suggestions for the applied researcher.

10.
Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete‐data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the “estimated weights” paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models.
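A common implementation of generalized raking tilts the design weights exponentially until the weighted auxiliary totals match known population totals, solving the calibration equations by Newton's method. A minimal sketch with hypothetical data and totals:

```python
import numpy as np

def raking_weights(X, d, totals, n_iter=50, tol=1e-10):
    """Generalized raking: tilt the design weights multiplicatively,
    w_i = d_i * exp(x_i' lam), until the weighted auxiliary totals X'w match
    the known population totals; lam is found by Newton's method."""
    lam = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = d * np.exp(X @ lam)
        grad = X.T @ w - totals                # calibration-equation residual
        if np.max(np.abs(grad)) < tol:
            break
        H = X.T @ (w[:, None] * X)             # Jacobian of the residual
        lam -= np.linalg.solve(H, grad)
    return d * np.exp(X @ lam)

# Hypothetical use: calibrate to a known population size and covariate total
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(50.0, 10.0, 100)])
w = raking_weights(X, d=np.full(100, 10.0),
                   totals=np.array([1000.0, 50500.0]))
```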

11.
In this paper I propose an alternative to calibration of linearized singular dynamic stochastic general equilibrium models. Given an a-theoretical econometric model as a representative of the data generating process, I construct an information measure which compares the conditional distribution of the econometric model variables with the corresponding singular conditional distribution of the theoretical model variables. The singularity problem is solved by using convolutions of both distributions with a non-singular distribution. This information measure is then maximized with respect to the deep parameters of the theoretical model, which links these parameters to the parameters of the econometric model and provides an alternative to calibration. The approach is illustrated by an application to a linearized version of the stochastic growth model of King, Plosser and Rebelo.
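The convolution device can be made concrete for Gaussian distributions: adding independent noise with non-singular covariance to both distributions shifts each covariance matrix by the noise covariance, after which a standard Kullback–Leibler divergence is well defined even when one original covariance is singular. A sketch, assuming Gaussian distributions (plausible in the linearized setting, but an assumption of this example, not a claim about the paper):

```python
import numpy as np

def kl_gaussian_convolved(mu0, S0, mu1, S1, noise_var=0.1):
    """Kullback-Leibler divergence KL(N(mu0, S0+cI) || N(mu1, S1+cI)) after
    convolving both Gaussians with independent N(0, cI) noise; the added
    noise covariance makes a singular S0 non-singular."""
    k = len(mu0)
    A = S0 + noise_var * np.eye(k)     # convolution adds the noise covariance
    B = S1 + noise_var * np.eye(k)
    Bi = np.linalg.inv(B)
    d = mu1 - mu0
    _, logdetA = np.linalg.slogdet(A)
    _, logdetB = np.linalg.slogdet(B)
    return 0.5 * (np.trace(Bi @ A) + d @ Bi @ d - k + logdetB - logdetA)
```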

12.
We present new Bayesian methodology for consumer sales forecasting. Focusing on the multi-step-ahead forecasting of daily sales of many supermarket items, we adapt dynamic count mixture models for forecasting individual customer transactions, and introduce novel dynamic binary cascade models for predicting counts of items per transaction. These transaction–sales models can incorporate time-varying trends, seasonality, price, promotion, random effects and other outlet-specific predictors for individual items. Sequential Bayesian analysis involves fast, parallel filtering on sets of decoupled items, and is adaptable across items that may exhibit widely-varying characteristics. A multi-scale approach enables information to be shared across items with related patterns over time in order to improve prediction, while maintaining the scalability to many items. A motivating case study in many-item, multi-period, multi-step-ahead supermarket sales forecasting provides examples that demonstrate an improved forecast accuracy on multiple metrics, and illustrates the benefits of full probabilistic models for forecast accuracy evaluation and comparison.

13.
The class of p2 models is suitable for modeling binary relation data in social network analysis. A p2 model is essentially a regression model for bivariate binary responses, featuring within‐dyad dependence and correlated crossed random effects to represent heterogeneity of actors. Despite some desirable properties, these models are used less frequently in empirical applications than other models for network data. One possible reason is the model's limited ability to account for (and explicitly model) structural dependence beyond the dyad, as can be done in exponential random graph models. Another reason, however, may lie in the computational difficulty of estimating such models by the methods proposed in the literature, such as joint maximization methods and Bayesian methods. The aim of this article is to investigate maximum likelihood estimation based on the Laplace approximation, which can be refined by importance sampling. Practical implementation of such methods can be performed in an efficient manner, and the article provides details on a software implementation using R. Numerical examples and simulation studies illustrate the methodology.
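The Laplace approximation replaces the intractable random-effects integral with a Gaussian approximation around its mode. A generic numerical sketch follows; it uses a finite-difference Hessian and is purely illustrative of the device, not of the article's p2-specific implementation.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_integral(neg_logf, b0, eps=1e-4):
    """Laplace approximation to log(integral of exp(-neg_logf(b)) db): locate
    the mode by BFGS, then use the local Gaussian curvature via a
    finite-difference Hessian at the mode."""
    res = minimize(neg_logf, b0, method="BFGS")
    b, k = res.x, len(b0)
    H = np.zeros((k, k))
    I = np.eye(k)
    for i in range(k):
        for j in range(k):
            H[i, j] = (neg_logf(b + eps * I[i] + eps * I[j])
                       - neg_logf(b + eps * I[i])
                       - neg_logf(b + eps * I[j]) + neg_logf(b)) / eps**2
    _, logdet = np.linalg.slogdet(H)
    return -res.fun + 0.5 * k * np.log(2.0 * np.pi) - 0.5 * logdet

# Sanity check: exact for a Gaussian integrand, log((2*pi)^(k/2)) for k = 2
print(laplace_log_integral(lambda b: 0.5 * b @ b, np.zeros(2)))  # ~1.8379
```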

14.
Many structural break and regime-switching models have been used with macroeconomic and financial data. In this paper, we develop an extremely flexible modeling approach which can accommodate virtually any of these specifications. We build on earlier work showing the relationship between flexible functional forms and random variation in parameters. Our contribution centers on priors on the time variation, developed by considering a hypothetical reordering of the data and the distance between neighboring (reordered) observations. The range of priors produced in this way can accommodate a wide variety of nonlinear time series models, including those with regime-switching and structural breaks. By allowing the amount of random variation in parameters to depend on the distance between (reordered) observations, the parameters can evolve in a wide variety of ways, allowing for everything from models exhibiting abrupt change (e.g. threshold autoregressive models or standard structural break models) to those which allow for a gradual evolution of parameters (e.g. smooth transition autoregressive models or time varying parameter models). Bayesian econometric methods for inference are developed for estimating the distance function and types of hypothetical reordering. Conditional on a hypothetical reordering and distance function, a simple reordering of the actual data allows us to estimate our models with standard state space methods by a simple adjustment to the measurement equation. We use artificial data to show the advantages of our approach, before providing two empirical illustrations involving the modeling of real GDP growth.

15.
Empirical work in macroeconometrics has been mostly restricted to using vector autoregressions (VARs), even though there are strong theoretical reasons to consider general vector autoregressive moving averages (VARMAs). A number of articles in the last two decades have conjectured that this is because estimation of VARMAs is perceived to be challenging and proposed various ways to simplify it. Nevertheless, VARMAs continue to be largely dominated by VARs, particularly in terms of developing useful extensions. We address these computational challenges with a Bayesian approach. Specifically, we develop a Gibbs sampler for the basic VARMA, and demonstrate how it can be extended to models with time‐varying vector moving average (VMA) coefficients and stochastic volatility. We illustrate the methodology through a macroeconomic forecasting exercise. We show that in a class of models with stochastic volatility, VARMAs produce better density forecasts than VARs, particularly for short forecast horizons.

16.
In this study, we addressed the problem of point and probabilistic forecasting by describing a blending methodology for machine learning models from the gradient boosted trees and neural networks families. These principles were successfully applied in the recent M5 Competition in both the Accuracy and Uncertainty tracks. The key points of our methodology are: (a) transforming the task into regression on sales for a single day; (b) information-rich feature engineering; (c) creating a diverse set of state-of-the-art machine learning models; and (d) carefully constructing validation sets for model tuning. We show that the diversity of the machine learning models and careful selection of validation examples are most important for the effectiveness of our approach. Forecasting data have an inherent hierarchical structure (12 levels), but none of our proposed solutions exploited the hierarchical scheme. Using the proposed methodology, we ranked within the gold medal range in the Accuracy track and within the prizes in the Uncertainty track. Inference code with pre-trained models is available on GitHub.
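A toy version of the blending step is sketched below: given validation predictions from two model families, a convex combination weight is chosen by grid search on validation error. The names and the RMSE criterion are illustrative; the competition pipeline is far more elaborate.

```python
import numpy as np

def blend_weight(pred_a, pred_b, y_val, step=0.01):
    """Choose the convex weight w for blending two validation prediction
    vectors (say, a gradient boosted trees model and a neural network) by
    minimizing validation RMSE over a grid."""
    best_w, best_err = 0.0, np.inf
    for w in np.arange(0.0, 1.0 + step, step):
        err = np.sqrt(np.mean((w * pred_a + (1 - w) * pred_b - y_val) ** 2))
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```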

17.
The economic theory of decision-making under uncertainty is used to produce three econometric models of dynamic discrete choice: (1) for a single spell of unemployment; (2) for an equilibrium two-state model of employment and non-employment; (3) for a general three-state model with a non-market sector. The paper provides a structural economic motivation for the continuous time Markov (or more generally ‘competing risks’) model widely used in longitudinal analysis in biostatistics and sociology, and it extends previous work on dynamic discrete choice to a continuous time setting. An important feature of identification analysis is separation of economic parameters that can only be identified by assuming arbitrary functional forms from economic parameters that can be identified by non-parametric procedures. The paper demonstrates that most econometric models for the analysis of truncated data are non-parametrically under-identified. It also demonstrates that structural estimators frequently violate standard regularity conditions. The standard asymptotic theory is modified to account for this essential feature of many structural models of labor force dynamics. Empirical estimates of an equilibrium two-state model of employment and non-employment are presented.
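In the continuous-time competing risks model, the exit time from a state is exponential with rate equal to the sum of the cause-specific hazards, and the destination is drawn with probability proportional to its hazard. A simulation sketch with constant, purely hypothetical hazards:

```python
import numpy as np

def simulate_competing_risks(hazards, n, seed=0):
    """Simulate exit times and destinations in a continuous-time Markov
    competing-risks model with constant cause-specific hazards: duration is
    exponential with rate sum(hazards); the destination state is drawn with
    probability proportional to its hazard."""
    rng = np.random.default_rng(seed)
    h = np.asarray(hazards, dtype=float)
    durations = rng.exponential(1.0 / h.sum(), size=n)
    causes = rng.choice(len(h), size=n, p=h / h.sum())
    return durations, causes

# Hypothetical two-exit example: leave unemployment to a job or out of the
# labor force with hazards 0.10 and 0.03 per month
d, c = simulate_competing_risks([0.10, 0.03], n=1000)
```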

18.
We consider the dynamic factor model and show how smoothness restrictions can be imposed on factor loadings by using cubic spline functions. We develop statistical procedures based on Wald, Lagrange multiplier and likelihood ratio tests for this purpose. The methodology is illustrated by analyzing a newly updated monthly time series panel of US term structure of interest rates. Dynamic factor models with and without smooth loadings are compared with dynamic models based on Nelson–Siegel and cubic spline yield curves. We conclude that smoothness restrictions on factor loadings are supported by the interest rate data and can lead to more accurate forecasts.

19.
This paper considers semiparametric identification of structural dynamic discrete choice models and models for dynamic treatment effects. Time to treatment and counterfactual outcomes associated with treatment times are jointly analyzed. We examine the implicit assumptions of the dynamic treatment model using the structural model as a benchmark. For the structural model we show the gains from using cross-equation restrictions connecting choices to associated measurements and outcomes. In the dynamic discrete choice model, we identify both subjective and objective outcomes, distinguishing ex post and ex ante outcomes. We show how to identify agent information sets.

20.
This paper contributes to the econometric literature on structural breaks by proposing a test for parameter stability in vector autoregressive (VAR) models at a particular frequency ω, where ω ∈ [0, π]. When a dynamic model is affected by a structural break, the new tests allow for detecting which frequencies of the data are responsible for parameter instability. If the model is locally stable at the frequencies of interest, the whole sample size can then be exploited despite the presence of a break. The methodology is applied to analyse the productivity slowdown in the US, and the outcome is that local stability concerns only the higher frequencies of data on consumption, investment and output.
