Similar articles
20 similar articles found (search time: 31 ms)
1.
We propose a method to compute equilibria in dynamic models with several continuous state variables and occasionally binding constraints. These constraints induce non-differentiabilities in policy functions. We develop an interpolation technique that addresses this problem directly: It locates the non-differentiabilities and adds interpolation nodes there. To handle this flexible grid, it uses Delaunay interpolation, a simplicial interpolation technique. Hence, we call this method Adaptive Simplicial Interpolation (ASI). We embed ASI into a time iteration algorithm to compute recursive equilibria in an infinite horizon endowment economy where heterogeneous agents trade in a bond and a stock subject to various trading constraints. We show that this method computes equilibria accurately and outperforms other grid schemes by far.
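A minimal SciPy sketch of the simplicial-interpolation idea: build a Delaunay mesh over an irregular node set with extra nodes placed on a hypothetical kink line, then interpolate linearly on the simplices. This is an illustration, not the authors' ASI code; the node layout, kink location and toy policy function are invented.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Irregular 2D node set, with extra nodes clustered on x = 0.5 - a
# hypothetical kink line where a constraint starts to bind.
rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 1.0, size=(200, 2))
kink_nodes = np.column_stack([np.full(21, 0.5), np.linspace(0.0, 1.0, 21)])
corners = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
pts = np.vstack([nodes, kink_nodes, corners])

def policy(x, y):
    # Toy non-differentiable "policy function" with a kink at x = 0.5
    return np.maximum(x - 0.5, 0.0) + 0.1 * y

tri = Delaunay(pts)                                   # simplicial mesh
interp = LinearNDInterpolator(tri, policy(pts[:, 0], pts[:, 1]))
print(interp([[0.4, 0.2], [0.6, 0.8]]))
```

Because the interpolant is piecewise linear on each simplex, it reproduces the data exactly at every node, and the nodes inserted on the kink line let it track the non-differentiability closely.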

2.
The study of the solutions of dynamic models with optimizing agents has often been limited by a lack of analytical techniques for explicitly finding the global solution paths. On the other hand, the application of numerical techniques such as dynamic programming to find the solution in interesting regions of the state space was restricted by the use of fixed grid size techniques. Following Grüne (Numer. Math. 75 (3) (1997) 319; University of Bayreuth, submitted, 2003), in this paper an adaptive grid scheme is used for finding the global solutions of discrete time Hamilton–Jacobi–Bellman equations. Local error estimates are established and an adapting iteration for the discretization of the state space is developed. The advantage of the adaptive grid scheme is demonstrated by computing the solutions of one- and two-dimensional economic models which exhibit steep curvature, complicated dynamics due to multiple equilibria, thresholds (Skiba sets) separating domains of attraction, and periodic solutions. We consider deterministic and stochastic model variants. The studied examples are from economic growth, investment theory, and environmental and resource economics.
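The core of any such scheme is a local error estimate that triggers refinement. A stripped-down sketch, assuming a simple midpoint interpolation-error estimate as a stand-in for the paper's a-posteriori estimates: nodes are inserted wherever the piecewise-linear interpolant misses the function by more than a tolerance, so the grid concentrates near steep curvature.

```python
import numpy as np

def adaptive_grid(f, a, b, tol=1e-3, max_nodes=5000):
    """Refine a 1D grid until the midpoint interpolation-error estimate
    falls below tol on every interval (illustrative stand-in for the
    local error estimates of the adaptive HJB scheme)."""
    xs = [a, b]
    while len(xs) < max_nodes:
        xs.sort()
        refined = False
        for i in range(len(xs) - 1):
            m = 0.5 * (xs[i] + xs[i + 1])
            lin = 0.5 * (f(xs[i]) + f(xs[i + 1]))   # linear interpolant at midpoint
            if abs(f(m) - lin) > tol:
                xs.append(m)
                refined = True
        if not refined:
            break
    return np.array(sorted(xs))

# Steep-curvature test function, reminiscent of a value function near a threshold
f = lambda x: np.sqrt(abs(x - 0.3))
grid = adaptive_grid(f, 0.0, 1.0)
print(len(grid), grid.min(), grid.max())
```

The returned grid is dense near the singular point x = 0.3 and coarse elsewhere, which is exactly the economy an adaptive scheme buys over a fixed grid.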

3.
The method of endogenous gridpoints (ENDG) significantly speeds up the solution of dynamic stochastic optimization problems with continuous state and control variables by avoiding repeated computations of expected outcomes while searching for optimal policy functions. I provide an interpolation technique for non-rectilinear grids that allows ENDG to be used in n-dimensional problems in an intuitive and computationally efficient way: the acceleration of ENDG with non-linear grid interpolation is nearly constant in the density of the grid. Further, ENDG has previously been demonstrated only by example and has never been formally characterized. Using a theoretical framework for dynamic stochastic optimization problems, I formalize the method of endogenous gridpoints and present conditions for the class of models for which it can be used.

4.
We compare global methods for solving models with jump discontinuities in the policy function. We find that differences between value function iteration (VFI) and other methods are economically significant, and that Euler equation errors fail to be a sufficient measure of accuracy in such models. VFI fails to accurately identify both the location and size of jump discontinuities, while the endogenous grid method (EGM) and the finite element method (FEM) are much better at approximating this class of models. We further show that combining VFI with a local interpolation step (VFI-INT) is sufficient to obtain accurate approximations. The combination of computational speed, relatively easy implementation and adaptability makes VFI-INT especially suitable for approximating models with jump discontinuities in policy functions: while EGM is the fastest method, it is relatively complex to implement; implementation of VFI-INT is relatively straightforward and it is much faster than FEM.

5.
We develop an envelope condition method (ECM) for dynamic programming problems, a tractable alternative to expensive conventional value function iteration (VFI). ECM has two novel features. First, to reduce the cost of iterating on the Bellman equation, ECM constructs policy functions using envelope conditions, which are simpler to analyze numerically than first-order conditions. Second, to increase the accuracy of solutions, ECM solves for the derivatives of the value function jointly with the value function itself. We complement ECM with other computational techniques that are suitable for high-dimensional problems, such as simulation-based grids, monomial integration rules and derivative-free solvers. The resulting value-iterative ECM method can accurately solve models with at least up to 20 state variables and can successfully compete in accuracy and speed with state-of-the-art Euler equation methods. We also use ECM to solve a challenging default risk model with a kink in the value and policy functions.

6.
The endogenous grid method (EGM) significantly speeds up the solution of stochastic dynamic programming problems by simplifying or completely eliminating root-finding. We propose a general and parsimonious EGM extended to handle (1) multiple continuous states and choices, (2) multiple occasionally binding constraints, and (3) non-convexities such as discrete choices. Our method enjoys the speed gains of the original one-dimensional EGM, while avoiding expensive interpolation on multi-dimensional irregular endogenous grids. We explicitly define a broad class of models for which our solution method is applicable, and illustrate its speed and accuracy using a consumption–saving model with both liquid assets and illiquid pension assets and a discrete retirement choice.

7.
This paper investigates the statistical properties of estimators of the parameters and unobserved series for state space models with integrated time series. In particular, we derive the full asymptotic results for maximum likelihood estimation using the Kalman filter for a prototypical class of such models: those with a single latent common stochastic trend. We establish the consistency and asymptotic mixed normality of the maximum likelihood estimator and show that the conventional method of inference is valid for this class of models. The models we explicitly consider comprise a special, yet useful, class that may be employed to extract the common stochastic trend from multiple integrated time series. Such models can be very useful for obtaining indices that represent fluctuations of various markets, or common latent factors that affect a set of economic and financial variables simultaneously. Moreover, our derivation of the asymptotics of this class makes it clear that the asymptotic Gaussianity and the validity of conventional inference for the maximum likelihood procedure extend to a larger class of more general state space models involving integrated time series. Finally, we demonstrate the utility of this class of models by extracting a common stochastic trend from three sets of time series involving short- and long-term interest rates, stock return volatility and trading volume, and Dow Jones stock prices.
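A minimal sketch of the filtering step for such a model: three observed series load on a single random-walk trend, and a scalar-state Kalman filter extracts it. Loadings and variances are treated as known here (in the paper they are estimated by maximum likelihood); all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, lam = 300, np.array([1.0, 0.8, 1.2])          # loadings on the common trend
sig_eps, sig_eta = 0.5, 1.0                      # observation and trend noise std
mu = np.cumsum(sig_eta * rng.standard_normal(T)) # latent common stochastic trend
Y = np.outer(mu, lam) + sig_eps * rng.standard_normal((T, 3))

a, P, loglik = 0.0, 1e7, 0.0                     # diffuse-ish initialization
H = sig_eps**2 * np.eye(3)
filt = np.zeros(T)
for t in range(T):
    P = P + sig_eta**2                           # predict: random-walk state
    v = Y[t] - lam * a                           # innovation
    F = P * np.outer(lam, lam) + H               # innovation covariance
    Fi = np.linalg.inv(F)
    K = P * lam @ Fi                             # Kalman gain
    a = a + K @ v                                # update state mean
    P = P * (1.0 - K @ lam)                      # update state variance
    loglik += -0.5 * (3 * np.log(2 * np.pi) + np.log(np.linalg.det(F)) + v @ Fi @ v)
    filt[t] = a
print(round(np.corrcoef(filt, mu)[0, 1], 4))
```

Maximizing the accumulated `loglik` over the loadings and variances is the estimation problem whose asymptotics the paper characterizes.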

8.
Current economic theory typically assumes that all the macroeconomic variables belonging to a given economy are driven by a small number of structural shocks. As recently argued, apart from negligible cases, the structural shocks can be recovered if the information set contains current and past values of a large, potentially infinite, set of macroeconomic variables. However, the usual practice of estimating small-size causal Vector AutoRegressions can be extremely misleading, as in many cases such models could fully recover the structural shocks only if future values of the few variables considered were observable. In other words, the structural shocks may be non-fundamental with respect to the small dimensional vector used in current macroeconomic practice. By reviewing a recent strand of econometric literature, we show that, as a solution, econometricians should enlarge the space of observations, and thus consider models able to handle very large panels of related time series. Among several alternatives, we review dynamic factor models together with their economic interpretation, and we show how non-fundamentalness is non-generic in this framework. Finally, using a factor model, we provide new empirical evidence on the effect of technology shocks on labour productivity and hours worked.
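The mechanics behind "enlarging the space of observations" can be sketched with principal components: with a large simulated panel driven by one common shock, the first principal component recovers the factor even though each individual series is noisy. Dimensions, persistence and loadings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, rho = 50, 200, 0.7
f = np.zeros(T)
for t in range(1, T):
    f[t] = rho * f[t - 1] + rng.standard_normal()    # common factor (AR(1))
L = rng.uniform(0.5, 1.5, N)                         # factor loadings
X = np.outer(f, L) + rng.standard_normal((T, N))     # panel: factor + idiosyncratic noise

Xs = (X - X.mean(0)) / X.std(0)                      # standardize each series
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
f_hat = U[:, 0] * s[0]                               # first principal component
print(round(abs(np.corrcoef(f_hat, f)[0, 1]), 3))
```

The absolute correlation is used because the sign of a principal component is not identified; as N grows, the estimated factor space converges to the true one, which is the sense in which non-fundamentalness becomes non-generic in large panels.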

9.
In this paper we consider the optimal quadratic control problem of Markov-switching linear rational expectation models. These models are general and flexible tools for modelling not only regime uncertainty but also model or parameter uncertainty. We show, first, how to find the solution of a Markov-switching linear rational expectation model. Based on this solution we then show how to apply dynamic programming to find the optimal time-consistent policy and the resulting Nash-Stackelberg equilibrium. Suitable modifications of the algorithm make it possible to deal with the (non-RE) case in which the policymaker and the private sector hold different beliefs or probabilities over regime change. We also show how the optimisation procedure can be employed to obtain the optimal policy under commitment. As an illustration we compute the optimal policy in a small open economy subject to stochastic structural breaks in some of its key parameters.

10.
In this paper we develop new Markov chain Monte Carlo schemes for the estimation of Bayesian models. One key feature of our method, which we call the tailored randomized block Metropolis–Hastings (TaRB-MH) method, is the random clustering of the parameters at every iteration into an arbitrary number of blocks. Each block is then sequentially updated through an M–H step. Another feature is that the proposal density for each block is tailored to the location and curvature of the target density based on the output of simulated annealing, following Chib and Ergashev (in press). We also provide an extended version of our method for sampling multi-modal distributions, in which at a pre-specified mode-jumping iteration a single-block proposal is generated from one of the modal regions using a mixture proposal density, and this proposal is then accepted according to an M–H probability of move. At the non-mode-jumping iterations, the draws are obtained by applying the TaRB-MH algorithm. We also discuss how the approaches of Chib (1995) and Chib and Jeliazkov (2001) can be adapted to these sampling schemes for estimating the marginal likelihood of the model. The methods are illustrated in several problems. In the DSGE model of Smets and Wouters (2007), for example, which involves a 36-dimensional posterior distribution, we show that the autocorrelations of the sampled draws from the TaRB-MH algorithm decay to zero within 30–40 lags for most parameters. In contrast, the sampled draws from the random-walk M–H (RW-MH) method, the algorithm that has been used to date in the context of DSGE models, exhibit significant autocorrelations even at lags 2500 and beyond. Additionally, the RW-MH method does not explore the same high-density regions of the posterior distribution as the TaRB-MH algorithm. Another example concerns the model of An and Schorfheide (2007), where the posterior distribution is multi-modal. While the RW-MH algorithm is unable to jump from the low modal region to the high modal region, and vice versa, we show that the extended TaRB-MH method explores the posterior distribution globally in an efficient manner.
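The randomized-block idea can be sketched on a toy 3-parameter "posterior" (a correlated Gaussian): at each iteration the parameters are randomly permuted and split into a random number of blocks, and each block is updated by an M–H step. The proposal below is a random walk scaled by the corresponding block of the target covariance, a crude stand-in for the simulated-annealing-based tailoring; target, scales and chain length are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
m = np.array([1.0, -1.0, 0.5])                        # "posterior" mean
S = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])                       # "posterior" covariance
Si = np.linalg.inv(S)
logpost = lambda x: -0.5 * (x - m) @ Si @ (x - m)

x, draws = np.zeros(3), []
for it in range(10000):
    perm = rng.permutation(3)                         # random clustering of parameters
    cuts = sorted(rng.choice([1, 2], size=rng.integers(0, 3), replace=False))
    blocks = np.split(perm, cuts)                     # 1 to 3 random blocks
    for blk in blocks:
        prop = x.copy()
        Cb = np.linalg.cholesky(S[np.ix_(blk, blk)])  # block-"tailored" scale
        prop[blk] = x[blk] + (2.4 / np.sqrt(len(blk))) * Cb @ rng.standard_normal(len(blk))
        if np.log(rng.uniform()) < logpost(prop) - logpost(x):
            x = prop                                  # M-H accept for this block
    draws.append(x)
draws = np.array(draws)
print(np.round(draws[1000:].mean(0), 2))
```

Re-randomizing the blocking at every iteration prevents any fixed unfavourable grouping of correlated parameters from slowing the chain, which is part of what drives the autocorrelation gains reported in the abstract.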

11.
This paper derives limit distributions of empirical likelihood estimators for models in which inequality moment conditions provide overidentifying information. We show that the use of this information leads to a reduction of the asymptotic mean-squared estimation error and propose asymptotically uniformly valid tests and confidence sets for the parameters of interest. While inequality moment conditions arise in many important economic models, we use a dynamic macroeconomic model as a data generating process and illustrate our methods with instrumental variable estimators of monetary policy rules. The results obtained in this paper extend to conventional GMM estimators.

12.
Special functions are intensively used in mathematical physics to solve differential systems. We argue that they should be most useful in economic dynamics as well, notably in the assessment of the transition dynamics of endogenous growth models. We illustrate our argument on the famous Lucas–Uzawa model, which we solve by means of Gaussian hypergeometric functions. We show how the use of Gaussian hypergeometric functions allows for an explicit representation of the equilibrium dynamics of all variables in levels. The parameters of the involved hypergeometric functions are identified using the Pontryagin conditions arising from the underlying optimization problems. In contrast to pre-existing approaches, our method is global and does not rely on dimension reduction.
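The Gauss hypergeometric function 2F1(a, b; c; z) at the heart of such closed forms is readily available numerically, e.g. in SciPy. A quick sanity check against the classical identity 2F1(1, 1; 2; z) = −ln(1 − z)/z (the evaluation points are arbitrary):

```python
import numpy as np
from scipy.special import hyp2f1

z = np.array([-0.9, -0.5, 0.2, 0.7])
lhs = hyp2f1(1.0, 1.0, 2.0, z)
rhs = -np.log(1.0 - z) / z
print(np.max(np.abs(lhs - rhs)))
```

In applications like the one described in the abstract, the arguments a, b, c are pinned down by model parameters via the Pontryagin conditions, and routines such as `hyp2f1` then evaluate the exact transition paths in levels.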

13.
We present discrete time survival models of borrower default for credit cards that include behavioural data about credit card holders and macroeconomic conditions across the credit card lifetime. We find that dynamic models which include these behavioural and macroeconomic variables provide statistically significant improvements in model fit, which translate into better forecasts of default at both account and portfolio levels when applied to an out-of-sample data set. By simulating extreme economic conditions, we show how these models can be used to stress test credit card portfolios.
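A discrete-time survival model of this kind is a binary regression on "person-period" data: each open account contributes one row per month until it defaults or is censored. A minimal sketch, assuming a single stand-in behavioural score and a time-constant logistic hazard (the paper's models add duration and macroeconomic effects); all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
b_true = np.array([-2.0, 0.8])              # true intercept and score coefficient
rows, events = [], []
for _ in range(2000):                       # 2000 simulated credit card accounts
    x = rng.standard_normal()               # stand-in behavioural score
    for t in range(12):                     # up to 12 monthly periods
        p = 1.0 / (1.0 + np.exp(-(b_true[0] + b_true[1] * x)))
        d = rng.uniform() < p               # default event this month?
        rows.append([1.0, x])
        events.append(float(d))
        if d:
            break                           # account exits the risk set
X, yv = np.array(rows), np.array(events)

b = np.zeros(2)
for _ in range(25):                         # Newton-Raphson for the logit MLE
    p = 1.0 / (1.0 + np.exp(-X @ b))
    grad = X.T @ (yv - p)
    hess = (X * (p * (1.0 - p))[:, None]).T @ X
    b = b + np.linalg.solve(hess, grad)
print(np.round(b, 2))
```

Because the discrete hazard likelihood factors into independent Bernoulli contributions, an off-the-shelf logistic fit on the person-period data is exactly the survival MLE, which is what makes this formulation convenient for adding time-varying behavioural and macroeconomic covariates.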

14.
Small dimension PDE for discrete Asian options
This paper presents an efficient method for pricing discrete Asian options in the presence of a volatility smile and non-proportional dividends. Using a homogeneity property, we show how to reduce an n-dimensional problem to a one- or two-dimensional one. We examine different numerical specifications of our dimension-reduced PDE using a Crank–Nicolson method (interpolation method, grid boundaries, time and space steps), as well as the extension to the case of non-proportional discrete dividends using a jump condition. We benchmark our results against quasi-Monte Carlo simulation and a multi-dimensional PDE.
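To illustrate the Crank–Nicolson machinery on a checkable case, here is a sketch for the plain Black–Scholes PDE for a European call, where the result can be compared with the closed form; the paper's dimension-reduced Asian-option PDE has the same tridiagonal structure with different coefficients. Grid sizes and parameters are illustrative.

```python
import numpy as np
from math import erf, exp, log, sqrt

S0, K, r, sig, T = 100.0, 100.0, 0.05, 0.2, 1.0
M, N, Smax = 300, 200, 300.0
S = np.linspace(0.0, Smax, M + 1)
dS, dt = Smax / M, T / N

i = np.arange(1, M)                        # interior nodes
a = 0.5 * sig**2 * S[i]**2 / dS**2         # diffusion coefficient
b = r * S[i] / (2.0 * dS)                  # drift coefficient
L = np.diag(-2.0 * a - r) + np.diag((a - b)[1:], -1) + np.diag((a + b)[:-1], 1)
A = np.eye(M - 1) - 0.5 * dt * L           # Crank-Nicolson: implicit half
B = np.eye(M - 1) + 0.5 * dt * L           # and explicit half
Ainv = np.linalg.inv(A)

V = np.maximum(S[i] - K, 0.0)              # payoff at maturity
for n in range(1, N + 1):                  # march in time-to-maturity tau
    rhs = B @ V
    up_bc = lambda tau: Smax - K * exp(-r * tau)   # boundary value at S = Smax
    rhs[-1] += 0.5 * dt * (a + b)[-1] * (up_bc((n - 1) * dt) + up_bc(n * dt))
    V = Ainv @ rhs                         # lower boundary V(0) = 0 adds nothing

price = np.interp(S0, S[i], V)
d1 = (log(S0 / K) + (r + 0.5 * sig**2) * T) / (sig * sqrt(T))
d2 = d1 - sig * sqrt(T)
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
price_bs = S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)
print(round(price, 4), round(price_bs, 4))
```

In production code the tridiagonal system would be solved with a banded solver rather than a dense inverse; the dense version keeps the sketch short.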

15.
Numerous methods have been proposed to update input–output (I–O) tables. They rely on the assumption that the economic structure will not change significantly during the interpolation period. However, this assumption may not always hold, particularly for countries experiencing rapid development. This study combines forecasting with a matrix transformation technique (MTT) to provide a new perspective on updating I–O tables. Under the assumption that changes in the trend of an economic structure are statistically significant, the method extrapolates I–O tables by combining time series models with an MTT and requires only the total value added in the target years. A simulation study and empirical analysis are conducted to compare the forecasting performance of the MTT with the Generalized RAS (GRAS) and Kuroda methods. The results show that the comprehensive performance of the MTT is better than that of the GRAS and Kuroda methods, as measured by the Standardized Total Percentage Error, Theil's U and Mean Absolute Percentage Error indices.
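For reference, the RAS benchmark the paper compares against is a biproportional scaling: a prior transactions matrix is alternately rescaled so its row and column sums match new targets. A minimal sketch with an invented 3×3 table (the target totals must share the same grand total):

```python
import numpy as np

A0 = np.array([[10.0, 4.0, 6.0],
               [ 3.0, 8.0, 5.0],
               [ 7.0, 2.0, 9.0]])         # prior I-O flows (illustrative)
u = np.array([24.0, 18.0, 20.0])          # target row sums
v = np.array([21.0, 16.0, 25.0])          # target column sums (same grand total)

A = A0.copy()
for _ in range(200):
    A *= (u / A.sum(1))[:, None]          # row scaling step ("R")
    A *= (v / A.sum(0))[None, :]          # column scaling step ("S")
print(np.round(A, 3))
```

For a strictly positive prior matrix this alternating scaling converges geometrically; GRAS generalizes it to tables with negative entries, and the paper's MTT replaces the fixed target totals with forecast ones.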

16.
We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations of the retarded type. We apply the Waveform Relaxation algorithm, i.e., we provide a guess of the policy function and solve the resulting system of (deterministic) ordinary differential equations by standard techniques. For parametric restrictions, analytical solutions to the stochastic growth model and a novel solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households.

17.
Talmud, Ilan; Kraus, Vered; Yonay, Yuval. Quality and Quantity (2003) 37(1): 21–41
This paper demonstrates how nested and non-nested analytical strategies provide different answers regarding the comparative utility of theoretical models. We demonstrate this incompatibility by testing the empirical efficacy of Goldthorpe's and Wright's class schemes in explaining earnings inequality in Israel. These models are non-nested because, while they partially overlap conceptually and empirically, neither can be written as a parametric restriction of the other. As they are non-nested, we cannot test each model against the other using the conventional sociological approach to hypothesis testing. For the sake of demonstration, however, we show results obtained from conventional Ordinary Least Squares regression models, with the Bayesian Information Criterion (BIC) serving as the criterion for a decision rule. Wright's model was found to be more successful in explaining earnings variation in Israeli society. Yet when we used two non-nested specification tests (the Cox–Pesaran test and the J test) to examine each model's unique contribution, neither model was able to reject the rival hypothesis. This revised version was published online in June 2006 with corrections to the Cover Date.

18.
In this paper, we develop solutions for linearized models with forward-looking expectations and structural changes under a variety of assumptions regarding agents' beliefs about those structural changes. For each solution, we show how its associated likelihood function can be constructed by using a 'backward-forward' algorithm. We illustrate the techniques with two examples. The first considers an inflationary program in which beliefs about the inflation target evolve differently from the inflation target itself, and the second applies the techniques to estimate a new Keynesian model through the Volcker disinflation. We compare our methodology with the alternative in which structural change is captured by switching between regimes via a Markov switching process. We show that our method can produce accurate results much faster than the Markov switching method, as well as being easily adapted to handle beliefs departing from reality. Copyright © 2016 John Wiley & Sons, Ltd.

19.
We introduce the papers appearing in the special issue of this journal associated with WEHIA 2015. The papers in this issue deal with two growing fields in the literature inspired by the complexity-based approach to economic analysis. The first group of contributions develops network models of financial systems and shows how these models can shed light on relevant issues that emerged in the aftermath of the last financial crisis. The second group of contributions deals with the issue of validation of agent-based models. Agent-based models have proven extremely useful for accounting for key features of economic dynamics that are usually neglected by more standard models. At the same time, agent-based models have been criticized for the lack of adequate validation against empirical data. The works in this issue propose useful techniques to validate agent-based models, thus contributing to the wider diffusion of these models in the economic discipline.

20.
This paper shows how to build algorithms that use graphics processing units (GPUs) installed in most modern computers to solve dynamic equilibrium models in economics. In particular, we rely on the compute unified device architecture (CUDA) of NVIDIA GPUs. We illustrate the power of the approach by solving a simple real business cycle model with value function iteration. We document improvements in speed of around 200 times and suggest that even further gains are likely.
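The GPU gain comes from the fact that the Bellman maximization is data-parallel across grid points: on a GPU, each grid point is handled by one thread. The same structure is visible in a vectorized NumPy sketch of VFI for the log-utility, full-depreciation growth model, whose closed-form policy k' = αβk^α serves as an accuracy check; this is an illustration of the parallel structure, not CUDA code, and the grid and parameters are invented.

```python
import numpy as np

alpha, beta = 0.3, 0.95
K = np.linspace(0.02, 0.5, 500)
C = K[:, None]**alpha - K[None, :]               # consumption for each (k, k') pair
U = np.where(C > 0, np.log(np.where(C > 0, C, 1.0)), -1e10)  # mask infeasible choices

V = np.zeros(500)
for _ in range(2000):
    V_new = (U + beta * V[None, :]).max(axis=1)  # one independent max per grid point
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
kp = K[(U + beta * V[None, :]).argmax(axis=1)]   # discrete policy function
print(np.max(np.abs(kp - alpha * beta * K**alpha)))
```

Each row of the maximization is independent of the others, so mapping rows to CUDA threads parallelizes the inner loop with no synchronization until the convergence check, which is the source of the speedups the paper reports.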


Copyright©北京勤云科技发展有限公司  京ICP备09084417号