Similar Articles
20 similar articles found.
1.
This paper proposes a new test for the null hypothesis of panel unit roots for micropanels with short time dimensions (T) and large cross-sections (N). There are several distinctive features of this test. First, the test is based on a panel AR(1) model allowing for cross-sectional dependency, which is introduced by a factor structure of the initial condition. Second, the test employs the panel AR(1) model with AR(1) coefficients that are heterogeneous for finite N. Third, the test can be used both for the alternative hypothesis of stationarity and for that of explosive roots. Fourth, the test does not use the AR(1) coefficient estimator. The effectiveness of the test rests on the fact that the initial condition has permanent effects on the trajectory of a time series in the presence of a unit root. To measure the effects of the initial condition, the present paper employs cross-sectional regressions using the first time-series observations as a regressor and the last as a dependent variable. If there is a unit root in every individual time series, the coefficient of the regressor is equal to one. The t-ratios for the coefficient are this paper's test statistics and have a standard normal distribution in the limit. The t-ratios are based on the OLS estimator and the instrumental variables estimator that uses reshuffled regressors as instruments. The test proposed in this paper makes it possible to test for a unit root even at T = 2 as long as N is large. Simulation results show that the test statistics have reasonable empirical size and power. The test is applied to college graduates' monthly real wage in South Korea. The number of time-series observations for this data is only two. The null hypothesis of a unit root is rejected against the alternative of stationarity.
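As a rough illustration of the mechanics (a minimal sketch on simulated data, not the authors' code), the regression below uses each unit's first observation as the regressor and its last as the dependent variable, then t-tests whether the slope equals one:

```python
# Minimal sketch of the cross-sectional unit-root regression described
# above, on simulated data. Under the unit-root null the slope on the
# initial observation is one and the t-ratio is approximately N(0, 1).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
N, T, rho = 2000, 2, 1.0            # rho = 1 imposes the unit-root null
y_first = rng.normal(0, 1, N)       # initial conditions across units
y_last = y_first.copy()
for _ in range(T - 1):              # panel AR(1) without drift
    y_last = rho * y_last + rng.normal(0, 1, N)

res = sm.OLS(y_last, sm.add_constant(y_first)).fit()
t_stat = (res.params[1] - 1.0) / res.bse[1]   # t-ratio for H0: slope = 1
print(f"slope = {res.params[1]:.3f}, t = {t_stat:.2f}")
```

Under the stationary alternative the slope falls below one, which is the source of the test's power.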

2.
Until recently, considerable effort has been devoted to the estimation of panel data regression models without adequate attention being paid to the drivers of interaction amongst cross-section and spatial units. We discuss some new methodologies in this emerging area and demonstrate their use in measurement and inferences on cross-section and spatial interactions. Specifically, we highlight the important distinction between spatial dependence driven by unobserved common factors and that based on a spatial weights matrix. We argue that purely factor-driven models of spatial dependence may be inadequate because of their connection with the exchangeability assumption. The three methods considered are appropriate for different asymptotic settings: estimation under structural constraints when N is fixed and T → ∞, whilst the methods based on GMM and common correlated effects are appropriate when T, N → ∞. Limitations and potential enhancements of the existing methods are discussed, and several directions for new research are highlighted.

3.
In this paper, we consider the case of a finite time dimension in panel stationarity tests with structural breaks. By fixing T, the finite sample properties of the tests for both micro (T small and N large) and macro (both T and N large) panel data are generally greatly improved. More importantly, the derivation of the tests for finite T and N → ∞, as opposed to joint asymptotics where N and T → ∞ simultaneously, avoids the imposition of the rate condition, making the tests valid for any (T, N) combination. Four models corresponding to the usual combinations of breaks are considered. The asymptotic distributions of the tests are derived under the null hypothesis and are shown to be normal. Their moments for T fixed are derived analytically employing Ghazal's corollary 1. The case with unknown breaks is also considered. The proposed tests generally have empirical sizes that are very close to the nominal size. The Monte Carlo simulations show that the power of the test statistics increases substantially with N and T.

4.
This paper investigates the performance of the tests proposed by Hadri and by Hadri and Larsson for testing for stationarity in heterogeneous panel data under model misspecification. The panel tests are based on the well known KPSS test (cf. Kwiatkowski et al.), which considers two models: stationarity around a deterministic level and stationarity around a deterministic trend. There is no study, as far as we know, on the statistical properties of the tests when the wrong model is used. We also consider the case of the simultaneous presence of the two types of models in a panel. We employ two asymptotics: joint asymptotics, where T, N → ∞ simultaneously, and T fixed with N allowed to grow indefinitely. We use Monte Carlo experiments to investigate the effects of misspecification for sample sizes usually used in practice. The results indicate that the assumption that T is fixed rather than asymptotic leads to tests with smaller size distortions than the tests derived under joint asymptotics, particularly for panels with relatively small T and large N (micro-panels). We also find that choosing a deterministic trend when a deterministic level is true does not significantly affect the properties of the test. But choosing a deterministic level when a deterministic trend is true leads to extreme over-rejections. Therefore, when unsure about which model has generated the data, it is suggested to use the model with a trend. We also propose a new statistic for testing for stationarity in mixed panel data where the mixture is known. The performance of this new test is very good for both cases of T asymptotic and T fixed. The statistic for T asymptotic is slightly undersized when T is very small (T ≤ 10).
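The level-versus-trend misspecification effect is easy to reproduce with the single-series KPSS test (a hedged sketch using statsmodels; the paper's panel statistics pool such individual statistics across units):

```python
# Sketch of the misspecification effect discussed above: applying the
# level-specification KPSS test to a trend-stationary series produces
# spurious rejections, while the trend specification behaves correctly.
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(1)
T = 200
y = 0.5 + 0.05 * np.arange(T) + rng.normal(0, 1, T)   # trend-stationary

stat_c, p_c, *_ = kpss(y, regression="c", nlags="auto")     # level model (wrong)
stat_ct, p_ct, *_ = kpss(y, regression="ct", nlags="auto")  # trend model (right)
print(f"level spec: stat = {stat_c:.3f}, p ~ {p_c:.3f} (spurious rejection)")
print(f"trend spec: stat = {stat_ct:.3f}, p ~ {p_ct:.3f}")
```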

5.
Summary. Let T denote a continuous time horizon and {G_t : t ∈ T} be a net (generalized sequence) of Bayesian games. We show that: (i) if {x_t : t ∈ T} is a net of Bayesian Nash equilibrium (BNE) strategies for G_t, we can extract a subsequence which converges to a limit full information BNE strategy for a one-shot limit full information Bayesian game; (ii) if {x_t : t ∈ T} is a net of approximate (ε-)BNE strategies for the game G_t, we can still extract a subsequence which converges to the one-shot limit full information BNE strategy; (iii) given a limit full information BNE strategy of a one-shot limit full information Bayesian game, we can find a net of ε-BNE strategies {x_t : t ∈ T} in {G_t : t ∈ T} which converges to the limit full information BNE strategy of the one-shot game. We wish to thank Larry Blume, Mark Feldman, Jim Jordan, Charlie Kahn, Stefan Krasa, Gregory Michalopoulos, Wayne Shafer, Bart Taub, and Anne Villamil for several useful discussions. The financial support of the University of Illinois at Urbana-Champaign Campus Research Board is gratefully acknowledged.

6.
The basic structural time series model has been designed for the modelling and forecasting of seasonal economic time series. In this article, we explore a generalization of the basic structural time series model in which the time-varying trigonometric terms associated with different seasonal frequencies have different variances for their disturbances. The contribution of the article is two-fold. The first aim is to investigate the dynamic properties of this frequency-specific Basic Structural Model (BSM). The second aim is to relate the model to a comparable generalized version of the Airline model developed at the US Census Bureau. By adopting a quadratic distance metric based on the restricted reduced form moving-average representation of the models, we conclude that the generalized models have properties that are close to each other compared to their default counterparts. In some settings, the distance between the models is almost zero so that the models can be regarded as observationally equivalent. An extensive empirical study on disaggregated monthly shipment and foreign trade series illustrates the improvements of the frequency-specific extension and investigates the relations between the two classes of models.
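A way to emulate a frequency-specific BSM with off-the-shelf tools (a sketch under the assumption that statsmodels' UnobservedComponents is acceptable; the paper's exact specification may differ) is to give each seasonal frequency 2πj/12 its own single-harmonic trigonometric component, so that each frequency gets its own disturbance variance:

```python
# Frequency-specific BSM sketch: one single-harmonic component per
# seasonal frequency j = 1..5 (the Nyquist term at period 2 is omitted
# here for simplicity), each with its own disturbance variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
t = np.arange(240)                     # 20 years of synthetic monthly data
y = 0.02 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)

freq_components = [{"period": 12 / j, "harmonics": 1} for j in range(1, 6)]
model = sm.tsa.UnobservedComponents(
    y, level="local linear trend", freq_seasonal=freq_components
)
res = model.fit(disp=False)
print(res.summary())    # one sigma2 per seasonal frequency
```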

7.
After the seminal work of Nickell (1981), a vast literature demonstrates the inconsistency of the 'conditional convergence' estimator in income-based dynamic panel models with fixed effects when the time horizon (T) is short but the sample of countries (N) is large. Less attention has been given to the economic root of the inconsistency of the fixed effects estimator when T is also large. Using a variant of the Ramsey growth model with long-run adjustment cost of capital, we demonstrate that the fixed effects estimator of such models can be inconsistent when T is large. This inconsistency arises because the long-run adjustment cost of capital gives rise to a negative moving average coefficient in the error term. Income convergence will thus be overestimated. We theoretically characterize the order of this inconsistency. Our Monte Carlo simulation demonstrates that the size of the bias is substantial and greater in economies with higher capital adjustment costs. We show that the use of instrumental variables that take into account the presence of the negative moving average term in the error overcomes this bias.
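A rough Monte Carlo sketch of the mechanism (all parameter values are our own illustrative assumptions): a negative MA(1) component in the error biases the within estimator of the AR coefficient downward, which overstates convergence.

```python
# Simulation sketch: dynamic panel with fixed effects and a negative
# MA(1) disturbance. The within (fixed-effects) estimator of rho is
# biased downward even for sizeable T, overstating income convergence.
import numpy as np

rng = np.random.default_rng(3)
N, T, rho, theta = 500, 30, 0.8, -0.3   # theta: negative MA(1) coefficient
alpha = rng.normal(0, 1, N)             # fixed effects
e = rng.normal(0, 1, (N, T + 1))
u = e[:, 1:] + theta * e[:, :-1]        # MA(1) error term

y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = alpha + rho * y[:, t - 1] + u[:, t]

ylag = y[:, :-1] - y[:, :-1].mean(axis=1, keepdims=True)  # within transform
ycur = y[:, 1:] - y[:, 1:].mean(axis=1, keepdims=True)
rho_fe = (ylag * ycur).sum() / (ylag ** 2).sum()
print(f"true rho = {rho}, within estimate = {rho_fe:.3f}")  # biased downward
```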

8.
Aims: To estimate a preference-based single index for the disease-specific instrument (AcroQoL) by mapping it onto the EQ-5D to assist in future economic evaluations.

Materials and methods: A sample of 245 acromegaly patients with AcroQoL and EQ-5D scores was obtained from three previously published European studies. The sample was split into two: one sub-sample to construct the model (algorithm construction sample, n = 184) and the other to confirm it (validation sample, n = 61). Various multiple regression models, including two-part, tobit, and generalized additive models, were tested and evaluated for predictive ability, consistency of estimated coefficients, normality of prediction errors, and simplicity.

Results: Across these studies, mean age was 50–60 years and the proportion of males was 36–59%. Overall, the percentage of patients with controlled disease was 37.4%. Mean (SD) scores for the AcroQoL Global Score and EQ-5D utility were 62.3 (18.5) and 0.71 (0.28), respectively. The best model for predicting EQ-5D was a generalized regression model that included the Physical Dimension summary score and categories from questions 9 and 14 as independent variables (Adj. R² = 0.56, with a mean absolute error of 0.0128 in the confirmatory sample). Observed and predicted utilities were strongly correlated (Spearman r = 0.73, p < .001), and a paired Student's t-test revealed non-significant differences between means (p > .05). Estimated utility scores showed an error of ≤10% in 45% of patients; however, the error increased in patients with an observed utility score under 0.2. The model's predictive ability was confirmed in the validation cohort.

Limitations and conclusions: A mapping algorithm was developed for mapping AcroQoL onto the EQ-5D, using patient-level data from three previously published studies, and including validation in the confirmatory sub-sample. The mean (SD) utility index in this study population was estimated as 0.71 (0.28). Additional research may be needed to test this mapping algorithm in other acromegaly populations.
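The workflow can be sketched as follows (synthetic data; the variable names and the simple linear form are our assumptions, not the published algorithm):

```python
# Mapping sketch: fit EQ-5D utility on an AcroQoL score in a
# construction sample (n = 184) and check mean absolute error in a
# held-out validation sample (n = 61). Data here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 245
physical = rng.uniform(0, 100, n)                         # AcroQoL physical score
utility = 0.2 + 0.007 * physical + rng.normal(0, 0.1, n)  # synthetic EQ-5D

X = sm.add_constant(physical)
fit_idx, val_idx = np.arange(184), np.arange(184, n)      # 184 / 61 split
res = sm.OLS(utility[fit_idx], X[fit_idx]).fit()
pred = res.predict(X[val_idx])
mae = np.abs(pred - utility[val_idx]).mean()
print(f"Adj. R2 = {res.rsquared_adj:.2f}, validation MAE = {mae:.4f}")
```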

9.
This paper considers methods for forecasting macroeconomic time series in a framework where the number of predictors, N, is too large to apply traditional regression models but not sufficiently large to resort to statistical inference based on double asymptotics. Our interest is motivated by a body of empirical research suggesting that popular data-rich prediction methods perform best when N ranges from 20 to 40. In order to accomplish our goal, we resort to partial least squares and principal component regression to consistently estimate a stable dynamic regression model with many predictors as only the number of observations, T, diverges. We show both by simulations and empirical applications that the considered methods, especially partial least squares, compare well to models that are widely used in macroeconomic forecasting.
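The two estimators are available off the shelf; below is a minimal sketch on synthetic data (scikit-learn is our choice here, not necessarily the authors'):

```python
# Partial least squares vs principal component regression with a
# moderate number of predictors (N ~ 30), as in the setting above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
T, N = 200, 30
X = rng.normal(size=(T, N))
y = X[:, :3].sum(axis=1) + rng.normal(size=T)   # three relevant predictors

pls = PLSRegression(n_components=3).fit(X[:150], y[:150])
pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X[:150], y[:150])

for name, m in [("PLS", pls), ("PCR", pcr)]:
    rmse = np.sqrt(np.mean((m.predict(X[150:]).ravel() - y[150:]) ** 2))
    print(f"{name} out-of-sample RMSE: {rmse:.3f}")
```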

10.
We develop a nonparametric test for consistency of player behavior with the quantal response equilibrium (QRE). The test exploits a characterization of the equilibrium choice probabilities in any structural QRE as the gradient of a convex function; thereby, QRE-consistent choices satisfy the cyclic monotonicity inequalities. Our testing procedure utilizes recent econometric results for moment inequality models. We assess our test using lab experimental data from a series of generalized matching pennies games. We reject the QRE hypothesis in the pooled data but cannot reject individual-level quantal response behavior for over half of the subjects.
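The inequalities at the heart of the test are easy to state in code (a sketch with hypothetical payoff vectors and observed choice frequencies; the paper's moment-inequality inference is more involved): for choice probabilities p(v) that are the gradient of a convex function, every cycle v_1, ..., v_K must satisfy sum_k p(v_k)·(v_{k+1} − v_k) ≤ 0.

```python
# Brute-force cyclic monotonicity check over all cycles of the observed
# payoff vectors. A positive cycle sum is evidence against QRE.
import itertools
import numpy as np

v = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])    # payoff vectors
p = np.array([[0.7, 0.3], [0.4, 0.6], [0.55, 0.45]])  # observed choice freqs

def violates_cyclic_monotonicity(v, p, tol=1e-9):
    idx = range(len(v))
    for r in range(2, len(v) + 1):
        for cyc in itertools.permutations(idx, r):
            s = sum(p[cyc[k]] @ (v[cyc[(k + 1) % r]] - v[cyc[k]])
                    for k in range(r))
            if s > tol:
                return True
    return False

print("violation:", violates_cyclic_monotonicity(v, p))
```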

11.
The aim of the paper is to evaluate the information provided by forecasting models that include explanatory variables besides the variables to be forecasted. It is argued that the content of a forecast combines historical information about the variable to be forecasted with theoretical considerations, normally manifested in a model. The historical information is captured by a time series model for the variable. To assess the theoretical information, a measure is proposed based on how much the forecasting model's fit to the actual values improves on that of the time series model. The R² measure, which is frequently used as a measure of the explanatory power of a forecasting model, is critically discussed.
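One hedged reading of such a measure (our illustrative formula, not necessarily the paper's exact definition) is the proportional reduction in squared forecast error relative to the pure time series benchmark:

```python
# Information measure sketch: > 0 means the explanatory-variable model
# fits the actual values better than the time series benchmark.
import numpy as np

def information_gain(y, yhat_model, yhat_ts):
    sse_model = np.sum((np.asarray(y) - np.asarray(yhat_model)) ** 2)
    sse_ts = np.sum((np.asarray(y) - np.asarray(yhat_ts)) ** 2)
    return 1.0 - sse_model / sse_ts

y = [1.0, 1.2, 0.9, 1.4]                           # actual values (placeholders)
print(information_gain(y, [1.05, 1.1, 1.0, 1.3],   # forecasting model
                          [1.00, 1.0, 1.2, 0.9]))  # time series model
```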

12.
We examine and compare a large number of generalized autoregressive conditional heteroskedasticity (GARCH) and stochastic volatility (SV) models, using series of Bitcoin and Litecoin price returns to assess how well the models fit the dynamics of these cryptocurrency return series. The models examined include the standard GARCH(1,1) and SV with an AR(1) log-volatility process, as well as more flexible models with jumps, volatility in mean, leverage effects, and t-distributed and moving average innovations. We report that the best model for Bitcoin is SV-t, while it is GARCH-t for Litecoin. Overall, the t-class of models performs better than other classes for both cryptocurrencies. For Bitcoin, the SV models consistently outperform the GARCH models, and the same holds true for Litecoin in most cases. Finally, the comparison of GARCH models with GARCH-GJR models reveals that the leverage effect is not significant for cryptocurrencies, suggesting that these do not behave like stock prices.
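The GARCH side of such a comparison is reproducible with the third-party arch package (a sketch on synthetic returns; the SV models require a separate MCMC implementation and are omitted here):

```python
# GARCH(1,1) with Student-t innovations, the specification the paper
# finds best for Litecoin. Returns below are synthetic placeholders.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
returns = rng.standard_t(df=5, size=1000) * 2.0   # synthetic % returns

am = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())   # the estimated nu captures the heavy tails
```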

13.
This paper demonstrates a parametric test of specification that can be used in validating econometric models employing pooled time series and cross-section data with fixed effects. This Lagrange multiplier test allows for the simultaneous testing of proper model functional form and the presence of nonspherical disturbances, using a combination of the Box-Cox transformation, the double-length regression of Davidson and MacKinnon, and the Bonferroni induced t-test. Testing procedures are demonstrated using a model of long distance telephone demand in the United States. The illustrative model used is representative of models filed as direct testimony by telephone companies in administrative law proceedings, which usually require rigorous model validation and defenses of model results in a formal hearing room setting. The tests presented in this paper are useful to a wide variety of researchers who use pooled econometric models with fixed effects in their work.

14.
This paper develops a Bayesian model comparison of two broad major classes of varying volatility models, the generalized autoregressive conditional heteroskedasticity and stochastic volatility models, on financial time series. The leverage effect, jumps and heavy-tailed errors are incorporated into the two models. For estimation, efficient Markov chain Monte Carlo methods are developed and the model comparisons are examined based on the marginal likelihood. The empirical analyses are illustrated using the daily return data of US stock indices, individual securities and exchange rates of UK sterling and Japanese yen against the US dollar. The estimation results indicate that the stochastic volatility model with leverage and Student-t errors yields the best performance among the competing models.

15.
This article uses a small set of variables – real GDP, the inflation rate and the short-term interest rate – and a rich set of models – atheoretical (time series) and theoretical (structural), linear and nonlinear, as well as classical and Bayesian models – to consider whether we could have predicted the recent downturn of the US real GDP. Comparing the performance of the models to the benchmark random-walk model by root mean-square errors, the two structural (theoretical) models, especially the nonlinear model, perform well on average across all forecast horizons in our ex post, out-of-sample forecasts, although at specific forecast horizons certain nonlinear atheoretical models perform the best. The nonlinear theoretical model also dominates in our ex ante, out-of-sample forecast of the Great Recession, suggesting that developing forward-looking, microfounded, nonlinear, dynamic stochastic general equilibrium models of the economy may prove crucial in forecasting turning points.
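The benchmark comparison reduces to a ratio of root mean-square errors (a minimal sketch with placeholder numbers):

```python
# Relative RMSE against the no-change (random walk) forecast;
# a value below 1 means the model beats the benchmark.
import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

actual = [2.1, 2.3, 1.9, 2.5]        # placeholder out-of-sample values
model_fc = [2.0, 2.2, 2.0, 2.4]      # candidate model forecasts
rw_fc = [2.1, 2.1, 2.1, 2.1]         # random walk: last observed value
print("relative RMSE:", rmse(actual, model_fc) / rmse(actual, rw_fc))
```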

16.
Seasonal fractional models are shown in this article to be credible alternative ways of modelling the seasonal component in macroeconomic time series. A testing procedure that allows one to test different orders of integration at zero and at each of the seasonal frequencies is described. This procedure is then applied to the Italian consumption and income series, the results being very sensitive to the way of modelling the I(0) disturbances.

17.
The explanation of state and local government expenditures has received considerable attention since Fabricant's study Trends in Government Activity Since 1900. These studies have been subject to at least two important shortcomings. One of their limitations stems from the estimation procedures used, while the other is the result of an incomplete model of the process underlying the determination of such expenditures. For the most part, past studies have used either cross-sectional data for a particular year or time series data for a single state. Consequently, the explanations resulting from these analyses either fail to capture the dynamic aspects of the problem in the first case, or remain localized to a particular state in the second. Since expenditure decisions are influenced both by historical events acting through time and by economic, political, and demographic factors working at a point in time, studies which fail to integrate both types of information into the estimation process are incomplete.

The purpose of this paper is to suggest a methodology for using both types of information. Accordingly, the resulting technique is a more efficient approach for estimating state and local government expenditure determinants. The technique is a generalized Aitken estimator for a system of seemingly unrelated regressions and was first introduced by ZELLNER (1962). The second problem with past research is the result of the inadequacy of our models for public goods and collective consumption: in general, the decision process underlying public provision of goods and services has not been subjected to comprehensive modeling. (Some work has begun in this area; see HAEFELE (1970, 1971, 1972) as well as the references he cites.) Empirical analyses of expenditure patterns have therefore been based on incompletely developed models. Our approach will be to suggest a model which is representative of the existing literature, sketch its theoretical foundation, and discuss the areas for future research. The present paper will not, however, attempt to develop a more complete model of the public decision process.

Section I of the paper briefly summarizes the primary research efforts in this area. It is followed by an explanation of the model and of the technique used for this study. Section IV presents the results for nine expenditure categories for state and local governments in the U.S. in 1957, 1962, and 1967. The last section summarizes the conclusions of the paper and discusses the scope for further research.
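Zellner's estimator is available in standard libraries; a minimal sketch (assuming the third-party linearmodels package; the expenditure categories and regressors are placeholders) looks like this:

```python
# Seemingly unrelated regressions (Zellner's generalized Aitken
# estimator) on two placeholder expenditure equations.
import numpy as np
import pandas as pd
from linearmodels.system import SUR

rng = np.random.default_rng(7)
n = 50
income = rng.normal(10, 2, n)
density = rng.normal(5, 1, n)

equations = {
    "education": {
        "dependent": 2 + 0.5 * income + rng.normal(0, 1, n),
        "exog": pd.DataFrame({"const": 1.0, "income": income}),
    },
    "highways": {
        "dependent": 1 + 0.3 * density + rng.normal(0, 1, n),
        "exog": pd.DataFrame({"const": 1.0, "density": density}),
    },
}
res = SUR(equations).fit()   # GLS across equations
print(res)
```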

18.
Abstract

Objective:

This study constructed the Economic and Health Outcomes Model for type 2 diabetes mellitus (ECHO-T2DM), a long-term stochastic microsimulation model, to predict the costs and health outcomes in patients with T2DM. Naturally, the usefulness of the model depends upon its predictive accuracy. The objective of this work is to present results of a formal validation exercise of ECHO-T2DM.

Methods:

The validity of ECHO-T2DM was assessed using criteria recommended by the International Society for Pharmacoeconomics and Outcomes Research/Society for Medical Decision Making (ISPOR/SMDM). Specifically, the results of a number of clinical trials were predicted and compared with observed study end-points using a scatterplot and regression approach. An F-test of the best-fitting regression was added to assess whether it differs statistically from the identity (45°) line defining perfect predictions. In addition to testing the full model using all of the validation study data, tests were also performed separately for microvascular, macrovascular, and survival outcomes. The validation tests were also performed separately by type of data (used vs not used to construct the model, economic simulations, and treatment effects).
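A sketch of the concordance check (simulated predicted/observed pairs; the actual trial end-points are in the paper): regress observed end-points on predictions and F-test the joint hypothesis that the intercept is 0 and the slope is 1.

```python
# ISPOR/SMDM-style validation regression with an F-test against the
# identity (45-degree) line of perfect predictions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
predicted = rng.uniform(0, 0.3, 40)              # simulated event rates
observed = predicted + rng.normal(0, 0.01, 40)   # near the identity line

res = sm.OLS(observed, sm.add_constant(predicted)).fit()
f_test = res.f_test("const = 0, x1 = 1")         # identity-line hypothesis
print(f"R2 = {res.rsquared:.3f}, F-test p-value = {float(f_test.pvalue):.3f}")
```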

Results:

The intercept and slope coefficients of the best-fitting regression line between the predicted outcomes and corresponding trial end-points in the main analysis were −0.0011 and 1.067, respectively, and the R² was 0.95. The null hypothesis of no difference between the fitted line and the identity line could not be rejected by a formal F-test (p = 0.16). The high R² confirms that the data points are closely (and linearly) associated with the fitted regression line. Additional analyses identified that disagreement was highest for macrovascular end-points, for which the intercept and slope coefficients were 0.0095 and 1.225, respectively. The R² was 0.95 and the estimated intercept and slope coefficients were 0.017 and 1.048, respectively, for mortality, and the F-test was narrowly rejected (p = 0.04). The sub-set of microvascular end-points showed some tendency to over-predict (the slope coefficient was 1.095), although concordance between predictions and observed values could not be rejected (p = 0.16).

Limitations:

Important study limitations include: (1) data availability limited the tests to end-of-study outcomes rather than time-varying outcomes during the studies analyzed; (2) complex inclusion and exclusion criteria in two studies were difficult to replicate; (3) some of the studies were older and reflect outdated treatment patterns; and (4) the authors were unable to identify published data on resource use and costs of T2DM suitable for testing the validity of the economic calculations.

Conclusions:

Using conventional methods, ECHO-T2DM simulated the treatment, progression, and patient outcomes observed in important clinical trials with an accuracy consistent with other well-accepted models. Macrovascular outcomes were over-predicted, which is common in health-economic models of diabetes (and may be related to a general over-prediction of event rates in the United Kingdom Prospective Diabetes Study [UKPDS] Outcomes Model). Work is underway in ECHO-T2DM to incorporate new risk equations to improve model prediction.

19.
By analysing three macroeconomic time series, namely retail sales, purchases of durables, and purchases of cars, we show the consequences of the presence of outliers in the data for the outcome of model-based seasonal adjustment. For all three series, we detect substantial negative effects on the resulting seasonally adjusted figures. In a recent paper, Thury and Wüger (1992) demonstrated that the presence of outliers in economic data has serious negative effects for time series modelling; poorly estimated ARIMA models with unsatisfactory forecasting performance are the consequence. Beyond that, we suspect that outliers may also cause problems for seasonal adjustment. Since seasonally adjusted data play a prominent role in applied economic research, it seems worthwhile to investigate this problem more deeply. Analysing the same three series as in the above-mentioned paper (retail sales, purchases of durables, and purchases of cars), which, as we know, are severely contaminated by outliers, we try to derive the consequences of the existence of outliers in the data for seasonal adjustment. Where monthly observations of the considered data exist, we also include calendar effects in the model-based seasonal adjustment procedure.
Summary: The existence of outliers in economic time series leads to poorly specified time series models with biased parameter estimates. If such models are used as the starting point for model-based seasonal adjustment, the results obtained are very unreliable and subject to strong random fluctuations.

20.
This note shows that the usual correction factor applied to the estimator of the residual variance in static models, to get unbiasedness, gives a bias of order O(T⁻²) in autoregressive models.
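A quick simulation sketch of the point (our own design; T and the AR coefficient are arbitrary choices):

```python
# Monte Carlo check: in an AR(1) model the degrees-of-freedom-corrected
# residual variance is close to, but not exactly, unbiased; the
# remaining bias is of order O(T^-2).
import numpy as np

rng = np.random.default_rng(9)
T, reps, rho = 25, 100_000, 0.5
e = rng.normal(size=(reps, T))
y = np.zeros((reps, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + e[:, t]

x, z = y[:, :-1], y[:, 1:]
b = (x * z).sum(axis=1) / (x * x).sum(axis=1)   # OLS AR(1) coefficient
resid = z - b[:, None] * x
s2 = (resid ** 2).sum(axis=1) / (T - 2)         # "corrected" variance
print(f"mean of s2 = {s2.mean():.4f} (true value 1.0)")
```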
