Similar articles
20 similar articles found (search time: 31 ms)
1.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
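The core idea, replacing the Gaussian error in an autoregressive model with a Laplace error so that estimation targets the conditional median, can be sketched numerically. The snippet below is a minimal illustration and not the authors' MCMC/BMA procedure: for a fixed order, maximizing a working Laplace likelihood in the AR coefficient is equivalent to minimizing the sum of absolute one-step residuals, done here by grid search on simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series with heavy-tailed (Laplace) noise.
n, phi_true = 500, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.laplace(scale=1.0)

# Maximizing a Laplace likelihood in phi is equivalent to minimizing
# the sum of absolute one-step residuals (median regression at lag 1).
def l1_loss(phi):
    return np.abs(y[1:] - phi * y[:-1]).sum()

grid = np.linspace(-0.99, 0.99, 397)
phi_hat = grid[np.argmin([l1_loss(p) for p in grid])]
print(round(float(phi_hat), 2))
```

The L1 fit recovers the AR coefficient while downweighting outliers, which is the robustness property the abstract emphasizes.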

2.
Stochastic frontier models with multiple time-varying individual effects
This paper proposes a flexible time-varying stochastic frontier model. Similarly to Lee and Schmidt [1993, In: Fried H, Lovell CAK, Schmidt S (eds) The measurement of productive efficiency: techniques and applications. Oxford University Press, Oxford], we assume that individual firms’ technical inefficiencies vary over time. However, the model, which we call the “multiple time-varying individual effects” model, is more general in that it allows multiple factors determining firm-specific time-varying technical inefficiencies. This allows the temporal pattern of inefficiency to vary over firms. The number of such factors can be consistently estimated. The model is applied to data on Indonesian rice farms, and the changes in the efficiency rankings of farms over time demonstrate the model’s flexibility.

3.
A. S. Young, Metrika (1987) 34(1):185-194
Summary: It has been asserted in the past that any Bayesian treatment of the model selection problem in regression using some form of continuous loss structure would always lead to using the largest possible model (Leamer 1979; Chow 1981). We show in this paper that, provided the distinction between the choice of a model and the estimation of its parameters is maintained, the Kullback-Leibler information measure can be used in a Bayesian context to derive a criterion which may lead to parsimony of parameters in regression analysis. The regression models are taken as restrictions of a general class of distributions which includes the true n-variate distribution of the variable y. Separate criteria for the cases of known and unknown variance of y are obtained. In the limiting situation when prior opinions about the parameters are weak, these criteria reduce to special cases of the generalized Cp and AIC criteria (Atkinson 1981). Relationship with other criteria is discussed.

4.
Model specification for state space models is a difficult task, as one has to decide which components to include in the model and whether these components are fixed or time-varying. To this end, a new model-space MCMC method is developed in this paper. It extends the Bayesian variable selection approach, usually applied to variable selection in regression models, to state space models. For non-Gaussian state space models, stochastic model search MCMC makes use of auxiliary mixture sampling. We focus on structural time series models including seasonal components, trend, or intervention. The method is applied to various well-known time series.

5.
Economic Systems (2022) 46(1):100944
It is not directly observable how effectively a society practices social distancing during the COVID-19 pandemic. This paper proposes a novel and robust methodology to identify latent social distancing at the country level. We extend the Susceptible-Exposed-Infectious-Recovered-Deceased (SEIRD) model with a time-varying, country-specific distancing term, and derive the Model-Inferred DIStancing index (MIDIS) for 120 countries using readily available epidemiological data. The index is not sensitive to measurement errors in epidemiological data or to the values assigned to model parameters. The evolution of MIDIS shows that countries exhibit diverse patterns of distancing during the first wave of the COVID-19 pandemic: a persistent increase, a trendless fluctuation, and an inverted U are among these patterns. We then implement regression analyses using MIDIS and obtain the following results. First, MIDIS is strongly correlated with available mobility statistics, at least for high-income countries. Second, MIDIS is also strongly associated with (i) the stringency of lockdown measures (governmental response), (ii) the cumulative number of deceased persons (behavioral response), and (iii) the time that has passed since the first confirmed case (temporal response). Third, there is statistically significant regional variation in MIDIS, and more developed societies achieve higher distancing levels. Finally, MIDIS is used to explain output losses experienced during the pandemic, and it is shown that there is a robust positive relationship between the two, with sizable economic effects.
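A minimal discrete-time SEIRD simulation illustrates the mechanics behind such an index: a distancing factor scales the transmission rate, and stronger distancing lowers cumulative deaths. All parameter values and the function names (`seird`, `distancing`) are illustrative assumptions, not the paper's calibration or its inference procedure for MIDIS.

```python
import numpy as np

# Discrete-time SEIRD with a time-varying distancing factor d_t in [0, 1]
# that scales the transmission rate; all parameter values are illustrative.
def seird(n_days, beta=0.5, sigma=0.2, gamma=0.1, mu=0.01, distancing=None):
    S, E, I, R, D = 1.0 - 1e-4, 0.0, 1e-4, 0.0, 0.0  # population fractions
    path = []
    for t in range(n_days):
        d = distancing(t) if distancing else 0.0
        new_exp = (1.0 - d) * beta * S * I   # distancing damps transmission
        new_inf = sigma * E
        new_rec = gamma * I
        new_dead = mu * I
        S -= new_exp
        E += new_exp - new_inf
        I += new_inf - new_rec - new_dead
        R += new_rec
        D += new_dead
        path.append((S, E, I, R, D))
    return np.array(path)

# Stronger distancing flattens the epidemic and lowers cumulative deaths.
no_dist = seird(300)
strong = seird(300, distancing=lambda t: 0.5)
print(no_dist[-1, 4] > strong[-1, 4])
```

In the paper's setting the direction is reversed: the distancing path is the unknown, inferred so that the model's epidemiological trajectories match the observed data.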

6.
Empirical evidence has shown that seasonal patterns of tourism demand and the effects of various influencing factors on this demand tend to change over time. To forecast future tourism demand accurately requires appropriate modelling of these changes. Based on the structural time series model (STSM) and the time-varying parameter (TVP) regression approach, this study develops the causal STSM further by introducing TVP estimation of the explanatory variable coefficients, and therefore combines the merits of the STSM and TVP models. This new model, the TVP-STSM, is employed for modelling and forecasting quarterly tourist arrivals to Hong Kong from four key source markets: China, South Korea, the UK and the USA. The empirical results show that the TVP-STSM outperforms all seven competitors, including the basic and causal STSMs and the TVP model for one- to four-quarter-ahead ex post forecasts and one-quarter-ahead ex ante forecasts.
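The TVP ingredient of such models can be illustrated with a scalar Kalman filter: a regression coefficient that follows a random walk is tracked through time. This is a hedged sketch with assumed variances (in practice they are estimated), not the paper's full structural time series specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a regression whose coefficient drifts as a random walk.
n = 400
x = rng.normal(size=n)
beta = np.cumsum(rng.normal(scale=0.05, size=n)) + 1.0
y = beta * x + rng.normal(scale=0.5, size=n)

# Scalar Kalman filter for y_t = beta_t * x_t + eps,  beta_t = beta_{t-1} + eta.
def kalman_tvp(y, x, q=0.05 ** 2, r=0.5 ** 2):
    b, P = 0.0, 10.0                 # diffuse-ish initial state
    est = np.empty(len(y))
    for t in range(len(y)):
        P += q                               # predict step
        K = P * x[t] / (x[t] ** 2 * P + r)   # Kalman gain
        b += K * (y[t] - x[t] * b)           # update with new observation
        P *= 1.0 - K * x[t]
        est[t] = b
    return est

est = kalman_tvp(y, x)
print(round(float(np.mean(np.abs(est[50:] - beta[50:]))), 3))
```

After a short burn-in the filtered path follows the drifting coefficient closely, which is the mechanism that lets TVP models adapt to changing demand elasticities.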

7.
This paper studies the estimation and testing of Euler equation models in the framework of the classical two-step minimum-distance method. The time-varying reduced-form model in the first step reflects the adaptation of private agents’ beliefs to the changing economic environment. The presumed ability of Euler conditions to deliver stable parameters indexing tastes and technology is interpreted as a time-invariant second-step model. This paper shows that, complementary to and independent of one another, both the standard specification test and the stability test are required for the evaluation of an Euler equation. As an empirical application, a widely used investment Euler equation is submitted to examination. The empirical outcomes appear to suggest that the standard investment model has not been a success for aggregate investment.

8.
In the last decade VAR models have become a widely used tool for forecasting macroeconomic time series. To improve the out-of-sample forecasting accuracy of these models, Bayesian random-walk prior restrictions are often imposed on VAR model parameters. This paper focuses on whether placing an alternative type of restriction on the parameters of unrestricted VAR models improves the out-of-sample forecasting performance of these models. The type of restriction analyzed here is based on the business cycle characteristics of U.S. macroeconomic data, and in particular, requires that the dynamic behavior of the restricted VAR model mimic the business cycle characteristics of historical data. The question posed in this paper is: would a VAR model, estimated subject to the restriction that the cyclical characteristics of simulated data from the model “match up” with the business cycle characteristics of U.S. data, generate more accurate out-of-sample forecasts than unrestricted or Bayesian VAR models?
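The random-walk prior restriction mentioned above can be sketched as ridge-style shrinkage of the VAR coefficient matrix toward the identity (each variable a random walk). The function name `bvar_rw_shrinkage` and the precision value `lam` are illustrative assumptions, a stylized stand-in for a full Minnesota-type Bayesian VAR.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate VAR(1) whose variables are highly persistent.
n = 200
A_true = np.array([[0.95, 0.05], [0.0, 0.9]])
Y = np.zeros((n, 2))
for t in range(1, n):
    Y[t] = Y[t - 1] @ A_true.T + rng.normal(scale=0.1, size=2)

X, y = Y[:-1], Y[1:]

# Posterior mean under an independent normal prior centred on the random
# walk (A = I) with precision lam: a stylised Minnesota-type shrinkage.
def bvar_rw_shrinkage(X, y, lam):
    k = X.shape[1]
    prior_mean = np.eye(k)
    B_hat = np.linalg.solve(X.T @ X + lam * np.eye(k),
                            X.T @ y + lam * prior_mean)
    return B_hat.T  # rows correspond to equations

A_ols = bvar_rw_shrinkage(X, y, 0.0)      # lam = 0 recovers OLS
A_shrunk = bvar_rw_shrinkage(X, y, 50.0)
# Shrinkage pulls the coefficient matrix toward the identity.
print(np.abs(A_shrunk - np.eye(2)).sum() < np.abs(A_ols - np.eye(2)).sum())
```

The abstract's alternative is different in spirit: instead of shrinking toward a random walk, it would constrain the model so that its simulated business cycle moments match those of the data.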

9.
We describe procedures for Bayesian estimation and testing in cross-sectional, panel data and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement; for example, estimation, testing and smoothing parameter selection can be carried out analytically in the cross-sectional smooth coefficient model.
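The device of treating points on the regression curve as parameters, with a prior on adjacent differences to induce smoothing, has a simple posterior-mode analogue: a ridge solution with a first-difference roughness penalty. The sketch below (with an assumed penalty weight `lam`) illustrates this idea on a simulated smooth coefficient model; it is not the authors' exact Bayesian algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Smooth coefficient model: y_i = g(z_i) * x_i + e_i, with g(.) unknown.
n = 300
z = np.sort(rng.uniform(0, 1, n))
x = rng.normal(size=n)
g_true = np.sin(2 * np.pi * z)
y = g_true * x + 0.3 * rng.normal(size=n)

# Treat g(z_1),...,g(z_n) as parameters; a normal prior on adjacent
# differences acts as a roughness penalty (posterior mode = ridge solution).
lam = 50.0
D = np.diff(np.eye(n), axis=0)           # first-difference matrix
Xd = np.diag(x)
g_hat = np.linalg.solve(Xd.T @ Xd + lam * D.T @ D, Xd.T @ y)
print(round(float(np.mean((g_hat - g_true) ** 2)), 3))
```

Even though there are as many curve parameters as observations, the difference penalty borrows strength across neighbouring points, so the estimated curve tracks the true sine-shaped coefficient.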

10.
Much work in econometrics and statistics has been concerned with comparing Bayesian and non-Bayesian estimation results, while much less has involved comparisons of Bayesian and non-Bayesian analyses of hypotheses. Some issues arising in this latter area that are mentioned and discussed in the paper are: (1) Is it meaningful to associate probabilities with hypotheses? (2) What concept of probability is to be employed in analyzing hypotheses? (3) Is a separate theory of testing needed? (4) Must a theory of testing be capable of treating both sharp and non-sharp hypotheses? (5) How is prior information incorporated in testing? (6) Does the use of power functions in practice necessitate the use of prior information? (7) How are significance levels determined when sample sizes are large, and what are the interpretations of P-values and tail areas? (8) How are conflicting results provided by asymptotically equivalent testing procedures to be reconciled? (9) What is the rationale for the ‘5% accept-reject syndrome’ that afflicts econometrics and applied statistics? (10) Does it make sense to test a null hypothesis with no alternative hypothesis present? And (11) how are the results of analyses of hypotheses to be combined with estimation and prediction procedures? Brief discussions of these issues with references to the literature are provided. Since there is much controversy concerning how hypotheses are actually analyzed in applied work, the results of a small survey relating to 22 articles employing empirical data published in leading economic and econometric journals in 1978 are presented. The major results of this survey indicate that there is widespread use of the 1% and 5% levels of significance in non-Bayesian testing, with no systematic relation between the choice of significance level and sample size. Also, power considerations are not generally discussed in empirical studies; in fact, there was a discussion of power in only one of the articles surveyed.
Further, there was very little formal or informal use of prior information in testing hypotheses, and practically no attention was given to the effects of tests or pre-tests on the properties of subsequent tests or estimation results. These results indicate that there is much room for improvement in applied analyses of hypotheses. Given the findings of the survey of applied studies, it is suggested that Bayesian procedures for analyzing hypotheses may be helpful in improving applied analyses. In this connection, the paper presents a review of some Bayesian procedures and results for analyzing sharp and non-sharp hypotheses with explicit use of prior information. In general, Bayesian procedures have good sampling properties and enable investigators to compute posterior probabilities and posterior odds ratios associated with alternative hypotheses quite readily. The relationships of several posterior odds ratios to usual non-Bayesian testing procedures are clearly demonstrated. Also, a relation between the P-value or tail area and a posterior odds ratio is described in detail in the important case of hypotheses about a mean of a normal distribution. Other examples covered in the paper include posterior odds ratios for the hypotheses that (1) βi>0 and βi<0, where βi is a regression coefficient, (2) data are drawn from either of two alternative distributions, (3) θ=0, θ>0 and θ<0, where θ is the mean of a normal distribution, (4) β=0 and β≠0, where β is a vector of regression coefficients, and (5) β2=0 vs. β2≠0, where β′=(β′1, β′2) is a vector of regression coefficients and β1's value is unrestricted. In several cases, tabulations of odds ratios are provided. Bayesian versions of the Chow test for equality of regression coefficients and of the Goldfeld-Quandt test for equality of disturbance variances are given.
Also, an application of Bayesian posterior odds ratios to a regression model selection problem utilizing the Hald data is reported. In summary, the results reported in the paper indicate that operational Bayesian procedures for analyzing many hypotheses encountered in model selection problems are available. These procedures yield posterior odds ratios and posterior probabilities for competing hypotheses. These posterior odds ratios represent the weight of the evidence supporting one model or hypothesis relative to another. Given a loss structure, as is well known, one can choose among hypotheses so as to minimize expected loss. Also, with posterior probabilities available and an estimation or prediction loss function, it is possible to choose a point estimate or prediction that minimizes expected loss by averaging over alternative hypotheses or models. Thus the Bayesian approach for analyzing competing models or hypotheses provides a unified framework that is extremely useful in solving a number of model selection problems.

11.
The Stein-rule estimator for regression problems has been studied by several authors, including Sclove (1968) and others listed in Vinod's (1978) survey. Ullah and Ullah (1978) provide expressions for the mean squared error (MSE) of a double k-class (KK) estimator with parameters k1 and k2. When k2=1 the Stein-rule estimator becomes a special case of KK, and an optimal choice of k1 is known. This paper explores the optimal theoretical choice of k1 and k2. We note that negative choices of k2 are permissible, and that there is a large range of choices for k1 and k2 where the MSE of the Stein-rule estimator can be reduced for regression problems based on multicollinear data. A simulation experiment is included.
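A hedged sketch of the shrinkage mechanics: writing the double k-class estimator in the form b(k1, k2) = [1 - k1 e'e / (y'y - k2 e'e)] b_OLS is consistent with the abstract, since with k2 = 1 and a regression through the origin y'y - e'e = b'X'Xb, which recovers the familiar Stein-rule shrinkage factor. This functional form is an assumption made for illustration; consult Ullah and Ullah (1978) for the exact definition.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ordinary least squares on a multicollinear design (no intercept).
n, k = 100, 4
z = rng.normal(size=(n, 1))
X = z + 0.05 * rng.normal(size=(n, k))   # highly correlated columns
beta = np.array([1.0, 0.5, -0.5, 0.2])
y = X @ beta + rng.normal(size=n)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b_ols

# Double k-class estimator in the assumed shrinkage form;
# k2 = 1 gives the Stein-rule special case, k1 = 0 gives OLS.
def double_k_class(b, e, y, k1, k2):
    shrink = 1.0 - k1 * (e @ e) / (y @ y - k2 * (e @ e))
    return shrink * b

b_stein = double_k_class(b_ols, e, y, k1=0.1, k2=1.0)
print(np.linalg.norm(b_stein) < np.linalg.norm(b_ols))
```

For positive k1 the factor is below one, so the estimator pulls the OLS coefficients toward zero, which is where the MSE gains on multicollinear data come from.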

12.
This paper analyzes the productivity of farms across 370 municipalities in the Center-West region of Brazil. A stochastic frontier model with a latent spatial structure is proposed to account for possible unknown geographical variation of the outputs. The paper compares versions of the model that include the latent spatial effect in the mean of output or as a variable that conditions the distribution of inefficiency, include or not observed municipal variables, and specify independent normal or conditional autoregressive priors for the spatial effects. The Bayesian paradigm is used to estimate the proposed models. As the resultant posterior distributions do not have a closed form, stochastic simulation techniques are used to obtain samples from them. Two model comparison criteria provide support for including the latent spatial effects, even after considering covariates at the municipal level. Models that ignore the latent spatial effects produce significantly different rankings of inefficiencies across agents.

13.
This paper introduces new and flexible classes of inefficiency distributions for stochastic frontier models. We consider both generalized gamma distributions and mixtures of generalized gamma distributions. These classes cover many interesting cases and accommodate both positively and negatively skewed composed error distributions. Bayesian methods allow for useful inference with carefully chosen prior distributions. We recommend a two-component mixture model where a sensible amount of structure is imposed through the prior to distinguish the components, which are given an economic interpretation. This setting allows for efficiencies to depend on firm characteristics, through the probability of belonging to either component. Issues of label-switching and separate identification of both the measurement and inefficiency errors are also examined. Inference methods through MCMC with partial centring are outlined and used to analyse both simulated and real data. An illustration using hospital cost data is discussed in some detail.

14.
We extend the recently introduced latent threshold dynamic models to include dependencies among the dynamic latent factors which underlie multivariate volatility. With an ability to induce time-varying sparsity in factor loadings, these models now also allow time-varying correlations among factors, which may be exploited in order to improve volatility forecasts. We couple multi-period, out-of-sample forecasting with portfolio analysis using standard and novel benchmark neutral portfolios. Detailed studies of stock index and FX time series include: multi-period, out-of-sample forecasting, statistical model comparisons, and portfolio performance testing using raw returns, risk-adjusted returns and portfolio volatility. We find uniform improvements on all measures relative to standard dynamic factor models. This is due to the parsimony of latent threshold models and their ability to exploit between-factor correlations so as to improve the characterization and prediction of volatility. These advances will be of interest to financial analysts, investors and practitioners, as well as to modeling researchers.
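The latent threshold mechanism itself is easy to sketch: a time-varying loading is set exactly to zero whenever its latent path falls below a threshold in absolute value, which is what induces time-varying sparsity. The path, scale, and threshold below are illustrative assumptions, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Latent threshold mechanism: a time-varying factor loading is zeroed out
# whenever its latent value falls below a threshold d in absolute value.
T, d = 500, 0.3
latent = np.cumsum(rng.normal(scale=0.05, size=T))   # latent random-walk path
loading = np.where(np.abs(latent) > d, latent, 0.0)
sparsity = float(np.mean(loading == 0.0))
print(round(sparsity, 2))
```

Because the loading switches off and on as the latent path crosses the threshold, the model's effective factor structure, and hence the implied correlations, changes over time.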

15.
Poly-t densities are defined by the property that their kernel is a product, or ratio of products, of Student-t kernels. These multivariate densities arise as Bayesian posterior densities for regression coefficients, under a surprising variety of specifications for the prior density and the data generating process. Although no analytical expression exists for the integrating constant and moments of these densities, these parameters are obtained through numerical integration in a number of dimensions given by the number of Student-t kernels in the numerator, minus one. The paper reviews how poly-t densities arise in regression analysis, and summarizes the results obtained for a number of models.
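A one-dimensional illustration of the numerical treatment: the kernel of a poly-t density (here a product of two Student-t kernels with different locations) has no closed-form integrating constant, so the constant and moments are obtained by numerical integration. Locations and degrees of freedom are arbitrary choices for the sketch.

```python
import numpy as np

# Unnormalised Student-t kernel with location mu and nu degrees of freedom.
def t_kernel(x, mu, nu):
    return (1.0 + (x - mu) ** 2 / nu) ** (-(nu + 1) / 2)

# A 1-D poly-t kernel: product of two Student-t kernels.
x = np.linspace(-30, 30, 200001)
dx = x[1] - x[0]
kern = t_kernel(x, mu=-1.0, nu=5) * t_kernel(x, mu=2.0, nu=7)

# No closed form for the integrating constant: integrate numerically.
const = np.sum(kern) * dx
dens = kern / const
mean = float(np.sum(x * dens) * dx)
print(round(mean, 2))
```

The posterior mass concentrates between the two kernel locations, and the mean is obtained from the same quadrature grid as the integrating constant.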

16.
This paper considers the issue of selecting the number of regressors and the number of structural breaks in multivariate regression models in the possible presence of multiple structural changes. We develop a modified Akaike information criterion (AIC), a modified Mallows’ Cp criterion and a modified Bayesian information criterion (BIC). The penalty terms in these criteria are shown to be different from the usual terms. We prove that the modified BIC consistently selects the regressors and the number of breaks whereas the modified AIC and the modified Cp criterion tend to overfit with positive probability. The finite sample performance of these criteria is investigated through Monte Carlo simulations and it turns out that our modification is successful in comparison to the classical model selection criteria and the sequential testing procedure robust to heteroskedasticity and autocorrelation.

17.
In this study we focus attention on model selection in the presence of panel data. Our approach is eclectic in that it combines both classical and Bayesian techniques. It is also novel in that we address not only model selection, but also model occurrence, i.e., the process by which ‘nature’ chooses a statistical framework in which to generate the data of interest. For a given data subset, there exist competing models each of which have an ex ante positive probability of being the correct model, but for any one generated sample, ex post exactly one such model is the basis for the observed data set. Attention focuses on how the underlying model occurrence probabilities of the competing models depend on characteristics of the environments in which the data subsets are generated. Classical, Bayesian, and mixed estimation approaches are developed. Bayesian approaches are shown to be especially attractive whenever the models are nested.

18.
Detecting and modeling structural changes in time series models have attracted great attention. However, relatively little attention has been paid to testing for structural changes in panel data models, despite their increasing importance in economics and finance. In this paper, we propose a new approach to testing structural changes in panel data models. Unlike the bulk of the literature on structural changes, which focuses on the detection of abrupt structural changes, we consider smooth structural changes for which model parameters are unknown deterministic smooth functions of time except for a finite number of time points. We use a nonparametric local smoothing method to consistently estimate the smooth changing parameters and develop two consistent tests for smooth structural changes in panel data models. The first test checks whether all model parameters are stable over time. The second test checks potential time-varying interaction while allowing for a common trend. Both tests have an asymptotic N(0,1) distribution under the null hypothesis of parameter constancy and are consistent against a vast class of smooth structural changes, as well as abrupt structural breaks with possibly unknown break points. Simulation studies show that the tests provide reliable inference in finite samples, and two empirical examples concerning a cross-country growth model and a capital structure model are discussed.
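The local smoothing step can be sketched in a single time series (ignoring the panel dimension): a kernel-weighted least squares regression estimates a smoothly time-varying coefficient at each point in time. The Gaussian kernel, bandwidth `h`, and simulated design are illustrative assumptions, not the paper's panel estimator or test statistics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate y_t = beta(t/T) * x_t + e_t with a smoothly varying coefficient.
T = 400
t = np.arange(T) / T
beta_true = 1.0 + 0.5 * np.sin(2 * np.pi * t)
x = rng.normal(size=T)
y = beta_true * x + 0.3 * rng.normal(size=T)

# Local constant (kernel-weighted least squares) estimate of beta at time t0.
def local_beta(t0, h=0.08):
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)        # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x)  # weighted LS slope

est = np.array([local_beta(u) for u in t])
print(round(float(np.mean(np.abs(est - beta_true))), 3))
```

A stability test in this spirit compares the local estimates against a constant-parameter fit; large smoothed deviations signal a structural change.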

19.
This paper outlines an approach to Bayesian semiparametric regression in multiple equation models which can be used to carry out inference in seemingly unrelated regressions or simultaneous equations models with nonparametric components. The approach treats the points on each nonparametric regression line as unknown parameters and uses a prior on the degree of smoothness of each line to ensure valid posterior inference despite the fact that the number of parameters is greater than the number of observations. We develop an empirical Bayesian approach that allows us to estimate the prior smoothing hyperparameters from the data. An advantage of our semiparametric model is that it is written as a seemingly unrelated regressions model with independent normal–Wishart prior. Since this model is a common one, textbook results for posterior inference, model comparison, prediction and posterior computation are immediately available. We use this model in an application involving a two‐equation structural model drawn from the labour and returns to schooling literatures. Copyright © 2005 John Wiley & Sons, Ltd.

20.
This paper considers the problem of defining a time-dependent nonparametric prior for use in Bayesian nonparametric modelling of time series. A recursive construction allows the definition of priors whose marginals have a general stick-breaking form. The processes with Poisson-Dirichlet and Dirichlet process marginals are investigated in some detail. We develop a general conditional Markov Chain Monte Carlo (MCMC) method for inference in the wide subclass of these models where the parameters of the marginal stick-breaking process are nondecreasing sequences. We derive a generalised Pólya urn scheme type representation of the Dirichlet process construction, which allows us to develop a marginal MCMC method for this case. We apply the proposed methods to financial data to develop a semi-parametric stochastic volatility model with a time-varying nonparametric returns distribution. Finally, we present two examples concerning the analysis of regional GDP and its growth.
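The stick-breaking form of the marginal priors can be sketched directly: weights are built by breaking a unit stick with Beta draws, here truncated at K atoms with Dirichlet process marginals (v_k ~ Beta(1, alpha)). This is the standard construction, not the paper's time-dependent recursive extension.

```python
import numpy as np

rng = np.random.default_rng(4)

# Truncated stick-breaking draw of Dirichlet process weights:
# v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j).
def stick_breaking(alpha, K, rng):
    v = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

w = stick_breaking(alpha=1.0, K=200, rng=rng)
print(round(float(w.sum()), 6))
```

With K large the truncated weights sum to essentially one; a Poisson-Dirichlet marginal would replace the Beta(1, alpha) draws with Beta(1 - a, alpha + k * a) draws.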


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号