Similar documents
20 similar documents retrieved.
1.
Vector autoregressive (VAR) models have become popular in the marketing literature for analyzing the behavior of competitive marketing systems. One drawback of these models is that the number of parameters can become very large, potentially leading to estimation problems. Pooling data for multiple cross-sectional units (stores) can partly alleviate these problems. An important issue in such models is how heterogeneity among cross-sectional units is accounted for. We investigate the performance of several pooling approaches that accommodate different levels of cross-sectional heterogeneity in a simulation study and in an empirical application. Our results show that the random coefficients modeling approach is a good overall choice when the estimated VAR model is used for out-of-sample forecasting only. When the estimated model is used to compute impulse response functions, we conclude that one should select a modeling approach that matches the level of heterogeneity in the data.
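As a rough illustration of the pooling idea described above, the sketch below fits a two-equation VAR(1) to simulated data for several cross-sectional units, once with all units pooled and once for a single unit. The unit count, the degree of heterogeneity, and the plain OLS estimator are my own simplifications, not the authors' model or data.

```python
# Minimal sketch (not the authors' code): fully pooled vs. unit-by-unit
# estimation of a two-equation VAR(1) across several cross-sectional units.
import numpy as np

rng = np.random.default_rng(0)
n_units, T, k = 10, 60, 2
A_common = np.array([[0.5, 0.1], [0.2, 0.4]])    # shared dynamics

def simulate_unit(A, T):
    y = np.zeros((T, k))
    for t in range(1, T):
        y[t] = A @ y[t - 1] + rng.normal(scale=0.5, size=k)
    return y

def ols_var1(ys):
    """Stack (y_t, y_{t-1}) pairs from one or more units and fit Y = X B by OLS."""
    X = np.vstack([y[:-1] for y in ys])
    Y = np.vstack([y[1:] for y in ys])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B.T                                    # rows correspond to equations

data = [simulate_unit(A_common + rng.normal(scale=0.05, size=(k, k)), T)
        for _ in range(n_units)]                  # mild cross-sectional heterogeneity

A_pooled = ols_var1(data)                         # one coefficient matrix for all units
A_unit0 = ols_var1(data[:1])                      # separate estimate for a single unit
print("pooled:\n", A_pooled.round(2), "\nunit 0 only:\n", A_unit0.round(2))
```

With mild heterogeneity the pooled estimate is much less noisy, while with strong heterogeneity per-unit estimates track the unit-level dynamics better, which mirrors the trade-off the abstract describes.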

2.
In this paper, we introduce a new flexible mixed model for multinomial discrete choice where the key individual- and alternative-specific parameters of interest are allowed to follow an assumption-free nonparametric density specification, while other alternative-specific coefficients are assumed to be drawn from a multivariate Normal distribution, which eliminates the independence of irrelevant alternatives assumption at the individual level. A hierarchical specification of our model allows us to break down a complex data structure into a set of submodels with the desired features that are naturally assembled in the original system. We estimate the model using a Bayesian Markov chain Monte Carlo technique with a multivariate Dirichlet process (DP) prior on the coefficients with nonparametrically estimated density. We employ a “latent class” sampling algorithm, which is applicable to a general class of models, including non-conjugate DP base priors. The model is applied to supermarket choices of a panel of Houston households whose shopping behavior was observed over a 24-month period in 2004–2005. We estimate the nonparametric density of two key variables of interest: the price of a basket of goods based on scanner data, and driving distance to the supermarket based on their respective locations. Our semi-parametric approach allows us to identify a complex multi-modal preference distribution, which distinguishes between inframarginal consumers and consumers who strongly value either lower prices or shopping convenience.

3.
Recently, there has been considerable work on stochastic time-varying coefficient models as vehicles for modelling structural change in the macroeconomy, with a focus on the estimation of the unobserved paths of random coefficient processes. The dominant estimation methods in this context are based on various filters, such as the Kalman filter, that are applicable when the models are cast in state space representations. This paper introduces a new class of autoregressive bounded processes that decompose a time series into a persistent random attractor, a time-varying autoregressive component, and martingale difference errors. The paper rigorously examines alternative kernel-based, nonparametric estimation approaches for such models and derives their basic properties. These estimators have long been studied in the context of deterministic structural change, but their use in the presence of stochastic time variation is novel. The proposed inference methods have desirable properties such as consistency and asymptotic normality, and allow a tractable studentization. In extensive Monte Carlo and empirical studies, we find that the methods exhibit very good small sample properties and can shed light on important empirical issues such as the evolution of inflation persistence and the purchasing power parity (PPP) hypothesis.
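To make the kernel-based estimation idea concrete, here is a minimal sketch, under assumptions of my own choosing rather than the paper's setup, that recovers a smoothly time-varying AR(1) coefficient with a Gaussian kernel in rescaled time. The paper's estimators, bandwidth theory, and studentization are considerably more general.

```python
# Minimal sketch: local-constant (Nadaraya-Watson-type) estimate of a
# time-varying AR(1) coefficient rho(t/T), using a Gaussian kernel in time.
import numpy as np

rng = np.random.default_rng(1)
T = 500
rho_true = lambda r: 0.3 + 0.5 * np.sin(np.pi * r)        # smooth path on [0, 1]
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true(t / T) * y[t - 1] + rng.normal()

def local_rho(r, h=0.1):
    """Kernel-weighted least squares of y_t on y_{t-1} around rescaled time r."""
    t = np.arange(1, T)
    w = np.exp(-0.5 * ((t / T - r) / h) ** 2)              # Gaussian kernel weights
    return np.sum(w * y[1:] * y[:-1]) / np.sum(w * y[:-1] ** 2)

for r in np.linspace(0.1, 0.9, 9):
    print(f"r={r:.1f}  true={rho_true(r):.2f}  est={local_rho(r):.2f}")
```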

4.
Detecting and modeling structural changes in time series models have attracted great attention. However, relatively little attention has been paid to testing for structural changes in panel data models despite their increasing importance in economics and finance. In this paper, we propose a new approach to testing structural changes in panel data models. Unlike the bulk of the literature on structural changes, which focuses on detection of abrupt structural changes, we consider smooth structural changes for which model parameters are unknown deterministic smooth functions of time except for a finite number of time points. We use a nonparametric local smoothing method to consistently estimate the smoothly changing parameters and develop two consistent tests for smooth structural changes in panel data models. The first test checks whether all model parameters are stable over time. The second test checks for potential time-varying interaction while allowing for a common trend. Both tests have an asymptotic N(0,1) distribution under the null hypothesis of parameter constancy and are consistent against a vast class of alternatives, including smooth structural changes as well as abrupt structural breaks with possibly unknown break points. Simulation studies show that the tests provide reliable inference in finite samples, and two empirical examples, a cross-country growth model and a capital structure model, are discussed.

5.
Near-term forecasts, also called nowcasts, are most challenging but also most important when the economy experiences an abrupt change. In this paper, we explore the performance of models with different information sets and data structures in order to best nowcast US initial unemployment claims in the spring of 2020, in the midst of the COVID-19 pandemic. We show that the best model, particularly near the structural break in claims, is a state-level panel model that includes dummy variables to capture the variation in timing of state-of-emergency declarations. Autoregressive models perform poorly at first but catch up relatively quickly. The state-level panel model, exploiting the variation in timing of state-of-emergency declarations, also performs better than models including Google Trends. Our results suggest that in times of structural change there is a bias–variance tradeoff. Early on, simple approaches that exploit relevant information in the cross-sectional dimension improve forecasts, but in later periods the efficiency of autoregressive models dominates.
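The sketch below illustrates, on entirely made-up data, the kind of state-level panel regression with staggered state-of-emergency dummies that the abstract describes. The variable names, effect sizes, and plain pooled OLS estimator are my own simplifications, not the authors' specification.

```python
# Minimal sketch (hypothetical data and variable names): a state-level panel
# regression of log claims on a lag and a state-of-emergency dummy whose
# timing varies by state, pooled across states by OLS.
import numpy as np

rng = np.random.default_rng(2)
n_states, T = 50, 20
soe_week = rng.integers(5, 10, size=n_states)         # staggered declaration weeks
log_claims = np.zeros((n_states, T))
for s in range(n_states):
    for t in range(1, T):
        jump = 1.5 if t >= soe_week[s] else 0.0        # level shift after declaration
        log_claims[s, t] = 0.6 * log_claims[s, t - 1] + jump + 0.1 * rng.normal()

# Stack the panel: regressors are an intercept, lagged claims, and the dummy.
rows = [(log_claims[s, t - 1], float(t >= soe_week[s]), log_claims[s, t])
        for s in range(n_states) for t in range(1, T)]
X = np.array([[1.0, r[0], r[1]] for r in rows])
y = np.array([r[2] for r in rows])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, AR(1), state-of-emergency dummy:", beta.round(2))
```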

6.
In two recent articles, Sims (1988) and Sims and Uhlig (1988/1991) question the value of much of the ongoing literature on unit roots and stochastic trends. They characterize the seeds of this literature as ‘sterile ideas’, the application of nonstationary limit theory as ‘wrongheaded and unenlightening’, and the use of classical methods of inference as ‘unreasonable’ and ‘logically unsound’. They advocate in place of classical methods an explicit Bayesian approach to inference that utilizes a flat prior on the autoregressive coefficient. DeJong and Whiteman adopt a related Bayesian approach in a group of papers (1989a,b,c) that seek to re-evaluate the empirical evidence from historical economic time series. Their results appear to be conclusive in turning around the earlier, influential conclusions of Nelson and Plosser (1982) that most aggregate economic time series have stochastic trends. So far these criticisms of unit root econometrics have gone unanswered; the assertions about the impropriety of classical methods and the superiority of flat prior Bayesian methods have been unchallenged; and the empirical re-evaluation of evidence in support of stochastic trends has been left without comment. This paper breaks that silence and offers a new perspective. We challenge the methods, the assertions, and the conclusions of these articles on the Bayesian analysis of unit roots. Our approach is also Bayesian, but we employ what are known in the statistical literature as objective ignorance priors in our analysis. These are developed in the paper explicitly to accommodate time series models in which no stationarity assumption is made. Ignorance priors are intended to represent a state of ignorance about the value of a parameter and in many models are very different from flat priors. We demonstrate that in time series models flat priors do not represent ignorance but are actually informative, precisely because they neglect generically available information about how autoregressive coefficients influence observed time series characteristics. Contrary to their apparent intent, flat priors unwittingly bias inferences towards stationary and i.i.d. alternatives; it is in models such as the linear regression model that flat priors do represent ignorance. This bias helps to explain the outcome of the simulation experiments in Sims and Uhlig and some of the empirical results of DeJong and Whiteman. Under both flat priors and ignorance priors this paper derives posterior distributions for the parameters in autoregressive models with a deterministic trend and an arbitrary number of lags. Marginal posterior distributions are obtained by using the Laplace approximation for multivariate integrals along the lines suggested by the author (Phillips, 1983) in some earlier work. The bias towards stationary models that arises from the use of flat priors is shown in our simulations to be substantial, and we conclude that it is unacceptably large in models with a fitted deterministic trend, for which the expected posterior probability of a stochastic trend is found to be negligible even though the true data generating mechanism has a unit root. Under ignorance priors, Bayesian inference is shown to accord more closely with the results of classical methods. An interesting outcome of our simulations and our empirical work is the bimodal Bayesian posterior, which demonstrates that Bayesian confidence sets can be disjoint, just like classical confidence intervals that are based on asymptotic theory.
The paper concludes with an empirical application of our Bayesian methodology to the Nelson-Plosser series. Seven of the 14 series show evidence of stochastic trends under ignorance priors, whereas under flat priors on the coefficients all but three of the series appear trend stationary. The latter result corresponds closely with the conclusion reached by DeJong and Whiteman (1989b) (based on truncated flat priors). We argue that the DeJong-Whiteman inferences are biased towards trend stationarity through the use of flat priors on the autoregressive coefficients, and that their inferences for some of the series (especially stock prices) are fragile (i.e., not robust) not only to the prior but also to the lag length chosen in the time series specification.
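As a toy illustration of the flat-prior argument summarized above (my own construction, not the paper's ignorance-prior analysis): in a Gaussian AR(1) with known error variance, a flat prior on the autoregressive coefficient yields a Normal posterior centred at the OLS estimate, so one can simulate from a true unit root and ask how much posterior mass such an analysis tends to place on stationarity.

```python
# Minimal sketch of the flat-prior mechanics: for y_t = rho*y_{t-1} + e_t with
# Gaussian errors and known variance, a flat prior on rho gives a Normal
# posterior centred at the OLS estimate with s.d. sigma / sqrt(sum y_{t-1}^2).
# Simulating from a true unit root reports the average posterior P(rho < 1).
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
T, n_rep, sigma = 100, 2000, 1.0
prob_stationary = []
for _ in range(n_rep):
    y = np.cumsum(rng.normal(scale=sigma, size=T))        # true rho = 1
    den = np.sum(y[:-1] ** 2)
    rho_ols = np.sum(y[1:] * y[:-1]) / den
    sd_post = sigma / np.sqrt(den)                         # flat-prior posterior s.d.
    z = (1.0 - rho_ols) / sd_post
    prob_stationary.append(0.5 * (1 + erf(z / sqrt(2))))   # P(rho < 1 | data)
print("average posterior P(rho < 1) when the truth is rho = 1:",
      round(float(np.mean(prob_stationary)), 3))
```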

7.
This paper contributes to the econometric literature on structural breaks by proposing a test for parameter stability in vector autoregressive (VAR) models at a particular frequency ω, where ω ∈ [0, π]. When a dynamic model is affected by a structural break, the new tests make it possible to detect which frequencies of the data are responsible for the parameter instability. If the model is locally stable at the frequencies of interest, the whole sample can then be exploited despite the presence of a break. The methodology is applied to analyse the productivity slowdown in the US, and the finding is that local stability holds only at the higher frequencies of the data on consumption, investment and output.

8.
US yield curve dynamics are subject to time variation, but there is ambiguity about its precise form. This paper develops a vector autoregressive (VAR) model with time-varying parameters and stochastic volatility, which treats the nature of the parameter dynamics as unknown. Coefficients can evolve according to a random walk, a Markov switching process, or observed predictors, or can depend on a mixture of these. To decide which form is supported by the data and to carry out model selection, we adopt Bayesian shrinkage priors. Our framework is applied to model the US yield curve. We show that the model forecasts well, and we focus on selected in-sample features to analyze the determinants of structural breaks in US yield curve dynamics.

9.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model with the same structure as autoregressive models but with the Gaussian error replaced by a Laplace error, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate the model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
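A stripped-down sketch of the Laplace working-likelihood idea mentioned above: maximizing a Laplace likelihood for an AR(p) is equivalent to least-absolute-deviations (median) regression on lagged values. The code below fits only that point estimate on simulated heavy-tailed data; the paper's full Bayesian treatment (MCMC and model averaging over the order) is not reproduced here.

```python
# Minimal sketch: an AR(2) fitted by least absolute deviations, i.e. the
# maximizer of a Laplace likelihood, as a point-estimate stand-in for the
# median autoregression idea. Data and starting values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, p = 300, 2
phi_true = np.array([0.5, -0.3])
y = np.zeros(T)
for t in range(p, T):
    y[t] = 0.2 + phi_true @ y[t - p:t][::-1] + rng.standard_t(df=3)   # heavy tails

X = np.column_stack([np.ones(T - p)] + [y[p - j - 1:T - j - 1] for j in range(p)])
target = y[p:]

def lad_loss(theta):
    # Laplace negative log-likelihood, up to scale: sum of absolute residuals.
    return np.sum(np.abs(target - X @ theta))

theta0 = np.linalg.lstsq(X, target, rcond=None)[0]    # OLS starting values
fit = minimize(lad_loss, theta0, method="Nelder-Mead")
print("intercept and AR coefficients (median AR):", fit.x.round(2))
```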

10.
This paper provides a feasible approach to estimation and forecasting of multiple structural breaks for vector autoregressions and other multivariate models. Owing to conjugate prior assumptions, we obtain a very efficient sampler for the regime allocation variable. A new hierarchical prior is introduced to allow for learning over different structural breaks. The model is extended to independent breaks in the regression coefficients and the volatility parameters. Two empirical applications show the improvements the model offers over benchmarks. In a macro application with seven variables, we empirically demonstrate the benefits of moving from a multivariate structural break model to a set of univariate structural break models in order to account for heterogeneous break patterns across data series.

11.
We consider classes of multivariate distributions which can model skewness and are closed under orthogonal transformations. We review two classes of such distributions proposed in the literature and focus our attention on a particular, yet quite flexible, subclass of one of these classes. Members of this subclass are defined by affine transformations of univariate (skewed) distributions that ensure the existence of a set of coordinate axes along which there is independence and the marginals are known analytically. The choice of an appropriate m-dimensional skewed distribution is then reduced to the simpler problem of choosing m univariate skewed distributions. We introduce a Bayesian model comparison setup for the selection of these univariate skewed distributions. The analysis does not rely on the existence of moments (allowing for any tail behaviour) and uses equivalent priors on the common characteristics of the different models. Finally, we apply this framework to multi-output stochastic frontiers using data from Dutch dairy farms.
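To illustrate the construction described above, the following sketch builds a three-dimensional skewed distribution by drawing independent univariate skewed variates along latent axes and applying an orthogonal rotation plus a location shift. The skew-normal family, the shape parameters, and the random rotation are illustrative choices of mine, not the classes studied in the paper.

```python
# Minimal sketch: independent univariate skewed marginals along latent axes,
# mapped through an orthogonal (affine) transformation to give a multivariate
# skewed distribution that is closed under rotations by construction.
import numpy as np
from scipy.stats import skewnorm, ortho_group

rng = np.random.default_rng(5)
m, n = 3, 10000
shapes = np.array([4.0, -2.0, 0.0])                  # one skewness parameter per axis
Z = np.column_stack([skewnorm.rvs(a, size=n, random_state=rng) for a in shapes])

A = ortho_group.rvs(m, random_state=rng)             # orthogonal mixing matrix
mu = np.array([1.0, 0.0, -1.0])
X = Z @ A.T + mu                                     # affine transform of independent skewed axes

back = (X - mu) @ A                                  # undo the rotation: recovers the latent axes
print("max |corr| between recovered axes:",
      np.abs(np.corrcoef(back.T) - np.eye(m)).max().round(3))
print("skewness of observed margins:",
      np.round(((X - X.mean(0)) ** 3).mean(0) / X.std(0) ** 3, 2))
```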

12.
The likelihood of the parameters in structural macroeconomic models typically has non-identification regions over which it is constant. When sufficiently diffuse priors are used, the posterior piles up in such non-identification regions. Use of informative priors can lead to the opposite problem, so both can generate spurious inference. We propose priors/posteriors on the structural parameters that are implied by priors/posteriors on the parameters of an embedding reduced-form model. An example of such a prior is the Jeffreys prior. We use it to conduct Bayesian limited-information inference on the new Keynesian Phillips curve with a VAR reduced form for US data.

13.
We present a new approach to trend/cycle decomposition of time series that follow regime-switching processes. The proposed approach, which we label the “regime-dependent steady-state” (RDSS) decomposition, is motivated as the appropriate generalization of the Beveridge and Nelson decomposition [Beveridge, S., Nelson, C.R., 1981. A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the business cycle. Journal of Monetary Economics 7, 151–174] to the setting where the reduced-form dynamics of a given series can be captured by a regime-switching forecasting model. For processes in which the underlying trend component follows a random walk with possibly regime-switching drift, the RDSS decomposition is optimal in a minimum mean-squared-error sense and is more broadly applicable than directly employing an Unobserved Components model.
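For context on what is being generalized, here is a minimal single-regime sketch of the Beveridge-Nelson decomposition: when the first difference of the series follows an AR(1), the permanent component has a simple closed form. The regime-switching extension that the abstract proposes is not implemented here, and the parameter values are illustrative.

```python
# Minimal sketch of the classical Beveridge-Nelson decomposition: with
# dy_t = mu + phi*(dy_{t-1} - mu) + e_t, the BN trend is
# tau_t = y_t + phi/(1-phi) * (dy_t - mu), i.e. the long-horizon forecast
# net of deterministic drift; the cycle is y_t - tau_t.
import numpy as np

rng = np.random.default_rng(6)
T, mu, phi = 400, 0.2, 0.6
dy = np.zeros(T)
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.normal(scale=0.5)
y = np.cumsum(dy)

bn_trend = y + (phi / (1 - phi)) * (dy - mu)     # permanent component
bn_cycle = y - bn_trend                          # transitory component
print("std of BN cycle:", bn_cycle.std().round(3))
```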

14.
It has been documented that the random walk outperforms most economic structural and time series models in out-of-sample forecasts of the conditional mean dynamics of exchange rates. In this paper, we study whether the random walk has similar dominance in out-of-sample forecasts of the conditional probability density of exchange rates, given that probability density forecasts are often needed in many applications in economics and finance. We first develop a nonparametric portmanteau test for optimal density forecasts of univariate time series models in an out-of-sample setting and provide simulation evidence on its finite sample performance. Then we conduct a comprehensive empirical analysis of the out-of-sample performance of a wide variety of nonlinear time series models in forecasting the intraday probability densities of two major exchange rates, Euro/Dollar and Yen/Dollar. It is found that some sophisticated time series models that capture time-varying higher order conditional moments, such as Markov regime-switching models, have better density forecasts for exchange rates than the random walk or a modified random walk with GARCH and Student-t innovations. This finding differs dramatically from that on mean forecasts and suggests that sophisticated time series models could be useful in out-of-sample applications involving the probability density.
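A generic way to see what a density forecast evaluation does is through probability integral transforms (PITs), which are i.i.d. Uniform(0,1) when the forecast density is correct. The sketch below is only this textbook illustration with a simple uniformity check, not the paper's portmanteau test, and the data are simulated rather than exchange rates.

```python
# Generic illustration: PIT-based check of density forecasts. A Gaussian
# forecast density is evaluated against heavy-tailed "returns" and compared
# with the correctly specified Student-t density via a KS uniformity test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
raw = stats.t.rvs(df=4, size=2000, random_state=rng)    # heavy-tailed data, t(4)
returns = raw / np.sqrt(2.0)                            # standardize: Var[t(4)] = 2

pit_normal = stats.norm.cdf(returns)                    # misspecified Gaussian forecast
pit_t = stats.t.cdf(returns * np.sqrt(2.0), df=4)       # correct t(4) forecast for these data

for name, pit in [("Normal forecast", pit_normal), ("Student-t forecast", pit_t)]:
    ks = stats.kstest(pit, "uniform")
    print(f"{name}: KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.3g}")
```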

15.
This paper describes a method for finding optimal transformations for analyzing time series by autoregressive models. 'Optimal' implies that the agreement between the autoregressive model and the transformed data is maximal. Such transformations help (1) to increase the model fit and (2) to analyze categorical time series. The method uses an alternating least squares algorithm that consists of two main steps: estimation and transformation. Nominal, ordinal and numerical data can be analyzed. Some alternative applications of the general idea are highlighted: intervention analysis, smoothing categorical time series, predictable components, spatial modeling and cross-sectional multivariate analysis. Limitations, modeling issues and possible extensions are briefly indicated.

16.
A methodology is presented for fitting distributed lag models with polynomial restrictions on the lag coefficients. The model incorporates autoregressive residuals. Orthogonal methods are employed so that the procedures are numerically sound. Furthermore, these methods make it possible to draw inferences about the three integer parameters in the model: (i) the length of the lag, (ii) the degree of the polynomial, and (iii) the order of the autoregression. The methodology is applied to the extended Almon data. This analysis suggests that estimation of polynomial distributed lags is highly sensitive to autoregressive disturbances, which underlines the importance of modeling the disturbances.
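The core of the polynomial (Almon) restriction is that the lag coefficients are a low-degree polynomial in the lag index, so the many lag regressors collapse to a few constructed regressors. The sketch below shows only that restriction on simulated data; the orthogonalization and autoregressive-error estimation discussed above are omitted, and all parameter values are illustrative.

```python
# Minimal sketch of the Almon restriction: beta_i = sum_j a_j * i^j, so the
# L+1 lag regressors are collapsed through a polynomial basis to q+1 columns.
import numpy as np

rng = np.random.default_rng(8)
T, L, q = 200, 8, 2
x = rng.normal(size=T + L)
i = np.arange(L + 1)
beta_true = 1.0 + 0.8 * i - 0.12 * i ** 2                  # hump-shaped lag weights
Xlags = np.column_stack([x[L - j: T + L - j] for j in i])  # x_t, x_{t-1}, ..., x_{t-L}
y = Xlags @ beta_true + rng.normal(size=T)

P = np.vander(i, q + 1, increasing=True)                   # polynomial basis, (L+1) x (q+1)
Z = Xlags @ P                                              # restricted regressors
a_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)              # polynomial coefficients
beta_hat = P @ a_hat                                       # implied lag weights
print("true vs estimated lag weights:\n", np.round(np.c_[beta_true, beta_hat], 2))
```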

17.
We provide a general methodology for forecasting in the presence of structural breaks induced by unpredictable changes to model parameters. Bayesian methods of learning and model comparison are used to derive a predictive density that takes into account the possibility that a break will occur before the next observation. Estimates for the posterior distribution of the most recent break are generated as a by-product of our procedure. We discuss the importance of using priors that accurately reflect the econometrician's opinions as to what constitutes a plausible forecast. Several applications to macroeconomic time-series data demonstrate the usefulness of our procedure.

18.
Asymptotic theory for nonparametric regression with spatial data
Nonparametric regression with spatial, or spatio-temporal, data is considered. The conditional mean of a dependent variable, given explanatory ones, is a nonparametric function, while the conditional covariance reflects spatial correlation. Conditional heteroscedasticity is also allowed, as well as non-identically distributed observations. Instead of mixing conditions, a (possibly non-stationary) linear process is assumed for the disturbances, allowing for long-range, as well as short-range, dependence, while decay in dependence in explanatory variables is described using a measure based on the departure of the joint density from the product of marginal densities. A basic triangular array setting is employed, with the aim of covering various patterns of spatial observation. Sufficient conditions are established for consistency and asymptotic normality of kernel regression estimates. When the cross-sectional dependence is sufficiently mild, the asymptotic variance in the central limit theorem is the same as when observations are independent; otherwise, the rate of convergence is slower. We discuss the application of our conditions to spatial autoregressive models, and to models defined on a regular lattice.
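The estimator studied in the abstract is of the standard kernel regression type; a generic Nadaraya-Watson sketch is shown below. The paper's contribution is the asymptotic theory under spatial dependence, which this simulation only hints at through a crude block-level common shock of my own devising.

```python
# Generic Nadaraya-Watson kernel regression sketch with mildly dependent
# errors: observations share a common shock within each "block" as a stand-in
# for spatial correlation. Bandwidth and data-generating choices are illustrative.
import numpy as np

rng = np.random.default_rng(9)
n = 1000
x = rng.uniform(-2, 2, size=n)
block = rng.integers(0, 20, size=n)                   # crude spatial grouping
common = rng.normal(scale=0.5, size=20)[block]        # block-level dependence in errors
y = np.sin(1.5 * x) + common + rng.normal(scale=0.3, size=n)

def nw_estimate(x0, h=0.2):
    """Gaussian-kernel weighted average of y around x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(-1.5, 1.5, 7)
print(np.round([(x0, nw_estimate(x0), np.sin(1.5 * x0)) for x0 in grid], 2))
```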

19.
This paper develops a new Bayesian approach to structural break modeling. The approach focuses on modeling in-sample structural breaks and on forecasting time series while allowing for out-of-sample breaks. The model has several desirable features. First, the number of regimes is not fixed but is treated as a random variable. Second, the model adopts a hierarchical prior for the regime coefficients, which allows the coefficients of one regime to carry information about the coefficients of other regimes. Third, the regime coefficients can be integrated out of the posterior density analytically; as a consequence, the posterior simulator is fast and reliable. An application to US real GDP quarterly growth rates links groups of regimes to specific historical periods and provides forecasts of future growth rates.

20.
Inferences about unobserved random variables, such as future observations, random effects and latent variables, are of interest. In this paper, to make probability statements about unobserved random variables without assuming priors on fixed parameters, we propose the use of the confidence distribution for fixed parameters. We focus on their interval estimators and related probability statements. In random-effect models, intervals can be formed either for future (yet-to-be-realised) random effects or for realised values of random effects. The consistency of intervals for these two cases requires different regularity conditions. Via numerical studies, their finite-sample properties are investigated.
