Similar documents
 20 similar documents retrieved (search time: 0 ms)
1.
We compare a number of methods that have been proposed in the literature for obtaining h-step ahead minimum mean square error forecasts for self-exciting threshold autoregressive (SETAR) models. These forecasts are compared to those from an AR model. The comparison of forecasting methods is made using Monte Carlo simulation. The Monte Carlo method of calculating SETAR forecasts is generally at least as good as the other methods we consider; an exception arises when the disturbances in the SETAR model come from a highly asymmetric distribution, in which case a bootstrap method is to be preferred. An empirical application calculates multi-period forecasts from a SETAR model of US gross national product using a number of the forecasting methods. We find that whether there are improvements in forecast performance relative to a linear AR model depends on the historical epoch we select, and on whether forecasts are evaluated conditional on the regime the process was in at the time the forecast was made.
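To make the simulation-based forecasting idea concrete, below is a minimal Python sketch (not taken from the paper; the SETAR coefficients, threshold and error variance are hypothetical) of Monte Carlo h-step forecasting for a two-regime SETAR model with one lag per regime. A bootstrap variant would replace the normal draws with resampled in-sample residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SETAR(2; 1, 1) parameters -- illustrative values only,
# not those estimated in the paper.
c_low, phi_low = 0.5, 0.6          # regime when y_{t-1} <= threshold
c_high, phi_high = 1.0, -0.4       # regime when y_{t-1} >  threshold
threshold, sigma = 0.0, 1.0

def mc_forecast(y_last, h, n_paths=10_000):
    """Monte Carlo h-step forecast: simulate many future paths through the
    two regimes and average them to approximate the minimum-MSE forecast."""
    paths = np.full(n_paths, float(y_last))
    for _ in range(h):
        eps = rng.normal(0.0, sigma, size=n_paths)
        low = paths <= threshold
        intercept = np.where(low, c_low, c_high)
        slope = np.where(low, phi_low, phi_high)
        paths = intercept + slope * paths + eps
    return paths.mean()

print(mc_forecast(y_last=0.3, h=4))
```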

2.
We contrast the forecasting performance of alternative panel estimators, divided into three main groups: homogeneous, heterogeneous and shrinkage/Bayesian. Via a series of Monte Carlo simulations, the comparison is performed using different levels of heterogeneity and cross-sectional dependence, alternative panel structures in terms of T and N, and different specifications of the dynamics of the error term. To assess the predictive performance, we use traditional measures of forecast accuracy (Theil's U statistic, RMSE and MAE), the Diebold–Mariano test, and Pesaran and Timmermann's statistic on the capability of forecasting turning points. The main finding of our analysis is that when the level of heterogeneity is high, shrinkage/Bayesian estimators are preferred, whilst when there is low or mild heterogeneity, homogeneous estimators have the best forecast accuracy.
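As a rough illustration of the accuracy measures mentioned above, the following sketch (assuming one-step forecasts, squared-error loss, and no small-sample or long-horizon correction) computes RMSE, MAE, a Theil's U-type ratio against a random-walk benchmark, and a basic Diebold-Mariano statistic:

```python
import numpy as np
from scipy import stats

def forecast_metrics(y, f):
    """RMSE, MAE and a Theil's U2-type ratio (model RMSE / random-walk RMSE)."""
    e = y - f
    rmse = np.sqrt(np.mean(e ** 2))
    mae = np.mean(np.abs(e))
    naive_rmse = np.sqrt(np.mean((y[1:] - y[:-1]) ** 2))
    return rmse, mae, rmse / naive_rmse

def diebold_mariano(y, f1, f2):
    """Diebold-Mariano test of equal predictive accuracy under squared-error
    loss (basic one-step version, no HAC or small-sample adjustment)."""
    d = (y - f1) ** 2 - (y - f2) ** 2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    return dm, 2 * (1 - stats.norm.cdf(abs(dm)))
```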

3.
Parametric stochastic frontier models yield firm-level conditional distributions of inefficiency that are truncated normal. Given these distributions, how should one assess and rank firm-level efficiency? This study compares the techniques of estimating (a) the conditional mean of inefficiency and (b) probabilities that firms are most or least efficient. Monte Carlo experiments suggest that the efficiency probabilities are easier to estimate (less noisy) in terms of mean absolute percent error when inefficiency has large variation across firms. Along the way we tackle some interesting problems associated with simulating and assessing estimator performance in the stochastic frontier model.
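For reference, a minimal sketch, assuming the normal/half-normal specification, of the standard Jondrow et al. (1982) conditional-mean estimator — the kind of firm-level conditional mean the study evaluates. The probability-based ranking would instead be computed by simulation from the truncated-normal conditional distributions.

```python
import numpy as np
from scipy.stats import norm

def jlms_conditional_mean(eps, sigma_u, sigma_v):
    """E[u_i | eps_i] for the normal / half-normal stochastic frontier
    (Jondrow et al., 1982), where eps_i = v_i - u_i."""
    s2 = sigma_u ** 2 + sigma_v ** 2
    mu_star = -eps * sigma_u ** 2 / s2
    sigma_star = sigma_u * sigma_v / np.sqrt(s2)
    z = mu_star / sigma_star
    return mu_star + sigma_star * norm.pdf(z) / norm.cdf(z)

# Example: composed-error residuals for three firms (illustrative values).
print(jlms_conditional_mean(np.array([-0.8, 0.1, 0.5]), sigma_u=0.4, sigma_v=0.3))
```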

4.
This paper compares a set of four cross-impact models: (1) additive, (2) likelihood multiplier, (3) R-space, and (4) a model constructed by the author. This is done by examining a forecasting problem encountered by an industrial firm: studying the market trend in order to decide whether to expand the production capacity of a ceramics plant. In spite of their different theoretical premises, the models yielded broadly similar results; only the R-space model produced results that differed from the others. The paper also suggests a method that should avoid some internal contradictions of the cross-impact models.

5.
Empirical researchers usually prefer statistical models that can be easily estimated with the help of commonly available software packages. Sequential binary models with or without normal random effects are an example of such models that can be adopted to estimate discrete duration models with unobserved heterogeneity. But this ease of estimation may come at a cost. In this paper we conduct a Monte Carlo simulation to evaluate the consequences of omitting or misspecifying the unobserved heterogeneity distribution in single-spell discrete duration models.
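A minimal sketch of what "easily estimated with commonly available software" can look like in practice: expand simulated single-spell durations to person-period form and fit a sequential binary (logit) model with statsmodels. The data-generating process and coefficient values are hypothetical, and the normal random effect for unobserved heterogeneity is deliberately omitted, mirroring the misspecification studied in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated single-spell discrete durations (hypothetical DGP).
n = 500
x = rng.normal(size=n)
hazard = 1.0 / (1.0 + np.exp(-(-1.5 + 0.8 * x)))   # per-period exit probability
duration = rng.geometric(hazard)
event = (duration <= 8).astype(int)                 # administrative censoring at 8 periods
duration = np.minimum(duration, 8)

# Expand to person-period format: one row per subject and period at risk,
# with a binary outcome equal to 1 only in the exit period.
rows = [
    {"t": t, "x": x[i], "y": int(t == duration[i] and event[i] == 1)}
    for i in range(n)
    for t in range(1, int(duration[i]) + 1)
]
pp = pd.DataFrame(rows)

# Sequential binary (logit) model with a flexible baseline hazard in t.
fit = smf.logit("y ~ C(t) + x", data=pp).fit(disp=0)
print(fit.params)
```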

6.
The relative performances of forecasting models change over time. This empirical observation raises two questions. First, is the relative performance itself predictable? Second, if so, can it be exploited in order to improve the forecast accuracy? We address these questions by evaluating the predictive abilities of a wide range of economic variables for two key US macroeconomic aggregates, namely industrial production and inflation, relative to simple benchmarks. We find that business cycle indicators, financial conditions, uncertainty and measures of past relative performance are generally useful for explaining the models' relative forecasting performances. In addition, we conduct a pseudo-real-time forecasting exercise, where we use the information about the conditional performance for model selection and model averaging. The newly proposed strategies deliver sizable improvements over competitive benchmark models and commonly used combination schemes. The gains are larger when model selection and averaging are based on both financial conditions and past performance measured at the forecast origin date.

7.
Six of the simpler ARMA-type models are examined with respect to properties of a variety of proposed estimators of unknown parameters. The findings suggest that if only one estimation method were available to a researcher, the choice should probably be maximum likelihood. Stationarity- and invertibility-restricted estimation would appear appropriate when parameters are thought to be within 5 percent of constraint boundaries.
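To illustrate what stationarity- and invertibility-restricted maximum likelihood estimation looks like with commonly available software, here is a hedged sketch using statsmodels' ARIMA; the simulated ARMA(1,1) coefficients are illustrative, with the MA parameter placed near the invertibility boundary.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Simulate an ARMA(1,1) with the MA root deliberately close to the
# invertibility boundary (illustrative values, not the paper's designs).
n = 300
eps = rng.standard_normal(n + 1)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + eps[t] + 0.95 * eps[t - 1]

# Maximum likelihood with the stationarity/invertibility constraints imposed
# (the statsmodels default) versus relaxed.
restricted = ARIMA(y, order=(1, 0, 1)).fit()
unrestricted = ARIMA(y, order=(1, 0, 1),
                     enforce_stationarity=False,
                     enforce_invertibility=False).fit()
print(restricted.params)
print(unrestricted.params)
```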

8.
The analysis of aggregate economic phenomena by VARs, as suggested by Sims, often results in a small sample relative to the number of estimated parameters. Since the model is identified by a dimensionality criterion, the small-sample properties of available criteria are important. This paper presents a study of the small-sample properties of six criteria using Monte Carlo methods. It is found that no criterion performs well, and that underfitting of models may be quite common.
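A minimal sketch of such a Monte Carlo exercise (the VAR(2) coefficients, sample size and number of replications are illustrative, not those of the paper): simulate a small-sample VAR and record how often each information criterion selects fewer lags than the true order.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# True bivariate VAR(2) -- illustrative coefficients only.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])

def simulate_var2(T, burn=100):
    y = np.zeros((T + burn, 2))
    for t in range(2, T + burn):
        y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(0.0, 1.0, 2)
    return y[burn:]

underfit = {"aic": 0, "bic": 0, "hqic": 0}
reps = 200
for _ in range(reps):
    y = simulate_var2(T=50)                       # deliberately small sample
    orders = VAR(y).select_order(maxlags=6).selected_orders
    for crit in underfit:
        underfit[crit] += orders[crit] < 2        # true lag order is 2

print({k: v / reps for k, v in underfit.items()})
```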

9.
10.
Electronic health records are being increasingly used in medical research to answer more relevant and detailed clinical questions; however, they pose new and significant methodological challenges. For instance, observation times are likely correlated with the underlying disease severity: patients with worse conditions utilise health care more and may have worse biomarker values recorded. Traditional methods for analysing longitudinal data assume independence between observation times and disease severity; yet, with health care data, such assumptions are unlikely to hold. Through Monte Carlo simulation, we compare different analytical approaches proposed to account for an informative visiting process to assess whether they lead to unbiased results. Furthermore, we formalise a joint model for the observation process and the longitudinal outcome within an extended joint modelling framework. We illustrate our results using data from a pragmatic trial on enhanced care for individuals with chronic kidney disease, and we introduce user-friendly software that can be used to fit the joint model for the observation process and a longitudinal outcome.

11.
Copulas provide an attractive approach to the construction of multivariate distributions with flexible marginal distributions and different forms of dependence. Of particular importance in many areas is the possibility of forecasting the tail-dependences explicitly. Most of the available approaches are only able to estimate tail-dependences and correlations via nuisance parameters, and cannot be used for either interpretation or forecasting. We propose a general Bayesian approach for modeling and forecasting tail-dependences and correlations as explicit functions of covariates, with the aim of improving the copula forecasting performance. The proposed covariate-dependent copula model also allows for Bayesian variable selection from among the covariates of the marginal models, as well as the copula density. The copulas that we study include the Joe-Clayton copula, the Clayton copula, the Gumbel copula and the Student's t-copula. Posterior inference is carried out using an efficient MCMC simulation method. Our approach is applied to both simulated data and the S&P 100 and S&P 600 stock indices. The forecasting performance of the proposed approach is compared with those of other modeling strategies based on log predictive scores. A value-at-risk evaluation is also performed for the model comparisons.
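For the copulas listed above, tail dependence is an explicit function of the copula parameter, which is what makes covariate-dependent tail-dependence forecasting possible. Below is a small sketch of the standard closed-form expressions for the Clayton and Gumbel copulas, together with a purely hypothetical covariate link; the paper's Bayesian MCMC machinery is not reproduced here.

```python
import numpy as np

def clayton_lower_tail_dependence(theta):
    """Lower tail dependence of a Clayton copula: lambda_L = 2**(-1/theta), theta > 0."""
    return 2.0 ** (-1.0 / theta)

def gumbel_upper_tail_dependence(theta):
    """Upper tail dependence of a Gumbel copula: lambda_U = 2 - 2**(1/theta), theta >= 1."""
    return 2.0 - 2.0 ** (1.0 / theta)

# In a covariate-dependent specification one could let theta be a positive
# function of covariates, e.g. theta_t = exp(x_t @ beta), so that the implied
# tail dependence becomes an explicit function of the covariates.
def clayton_tail_dependence_from_covariates(x, beta):
    theta = np.exp(x @ beta)          # hypothetical link, for illustration only
    return clayton_lower_tail_dependence(theta)

print(clayton_lower_tail_dependence(2.0), gumbel_upper_tail_dependence(1.5))
```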

12.
Richard Y. P. Joun, Socio, 1983, 17(5-6): 345-353
This paper presents a case study of two regional economic-demographic models: the Washington Projection and Simulation Model and the Hawaii Economic-Population Projection and Simulation Model. A discussion of model specification focuses attention on the interdependence of economic and demographic variables. Ex ante prediction tests demonstrate the models' forecasting capabilities. Simulations with the Washington model are conducted to show more clearly the interaction between economic and demographic activity in a region.

13.
14.
This Monte Carlo study examines the relative performance of sample selection and two-part models for data with a cluster at zero. The data are drawn from a bivariate normal distribution with a positive correlation. The alternative estimators are examined in terms of mean squared error, mean bias and pointwise bias. The sample selection estimators include LIML and FIML. The two-part estimators include a naive variant (the true specification, omitting the correlation coefficient) and a data-analytic (testimator) variant. In the absence of exclusion restrictions, the two-part models are no worse, and often appreciably better, than selection models in terms of mean behavior, but can behave poorly for extreme values of the independent variable. LIML had the worst performance of all four models. Empirically, selection effects are difficult to distinguish from a non-linear (e.g., quadratic) response. With exclusion restrictions, simple selection models were significantly better behaved than a naive two-part model over subranges of the data, but were negligibly better than the data-analytic version.
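A minimal sketch of the contrast being studied (coefficient values, sample size and error correlation are illustrative; the selection estimator shown is a simple two-step "heckit" stand-in rather than the LIML/FIML estimators of the paper): simulate bivariate-normal errors with a cluster at zero, then fit a naive two-part model and an inverse-Mills-ratio-corrected regression without any exclusion restriction.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Bivariate normal errors with positive correlation (illustrative values).
rho = 0.5
u, e = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T

d = (0.5 + 1.0 * x + u > 0).astype(int)            # zero vs positive outcome
y = np.where(d == 1, 1.0 + 0.8 * x + e, 0.0)       # observed outcome, cluster at zero

# Naive two-part model: probit for the participation decision,
# OLS on the positive observations (the correlation rho is ignored).
probit = sm.Probit(d, X).fit(disp=0)
two_part = sm.OLS(y[d == 1], X[d == 1]).fit()

# Heckman-style two-step correction: add the inverse Mills ratio.
# Note there is no exclusion restriction here -- the same x enters both equations.
xb = X @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)
heckit = sm.OLS(y[d == 1], np.column_stack([X[d == 1], imr[d == 1]])).fit()

print(two_part.params)
print(heckit.params)
```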

15.
This paper discusses a series of Monte Carlo experiments designed to evaluate the empirical properties of heterogeneous-agent macroeconomic models in the presence of sampling variability. Because of sampling variability, the calibration procedure leads to the welfare analysis being conducted with the wrong parameters. The ability of the calibrated model to correctly predict the long-run welfare changes induced by a set of policy experiments is assessed. The results show that, for policy reforms with sizable welfare effects (i.e., more than 0.2%), the model always predicts the right sign of the welfare effects. However, the welfare effects can be evaluated with the wrong sign when they are small and the sample size is fairly limited. Quantitatively, the maximum errors made in evaluating a policy change are very small for some reforms (on the order of 0.02 percentage points), but bigger for others (on the order of 0.6 percentage points). Finally, having access to better data, in terms of larger samples, does lead to substantial increases in the precision of the welfare-effect estimates, though the rate of convergence can be slow.

16.
Typical data that arise from surveys, experiments, and observational studies include continuous and discrete variables. In this article, we study the interdependence among a mixed (continuous, count, ordered categorical, and binary) set of variables via graphical models. We propose an ℓ1-penalized extended rank likelihood with an ascent Monte Carlo expectation maximization approach for copula Gaussian graphical models and establish near conditional independence relations and zero elements of a precision matrix. In particular, we focus on high-dimensional inference where the number of observations is of the same order as, or smaller than, the number of variables under consideration. To illustrate how to infer networks for mixed variables through conditional independence, we consider two datasets: one in the area of sports and the other concerning breast cancer.
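As a rough, non-Bayesian stand-in for the idea of recovering zero elements of a precision matrix from rank-based (copula) information, the sketch below transforms each margin to normal scores and fits an ℓ1-penalized Gaussian graphical model with scikit-learn. This is a simplification for illustration, not the paper's extended rank likelihood with Monte Carlo EM.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLassoCV

def normal_scores(X):
    """Map each column to normal scores via its empirical ranks
    (a simple nonparanormal-style transform for non-Gaussian margins)."""
    n = X.shape[0]
    U = np.apply_along_axis(lambda c: rankdata(c) / (n + 1), 0, X)
    return norm.ppf(U)

# Illustrative data; in the mixed-variable setting the ordinal and binary
# columns would be handled less naively than by plain ranks.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

model = GraphicalLassoCV().fit(normal_scores(X))
precision_zeros = np.isclose(model.precision_, 0.0)   # estimated conditional independences
print(precision_zeros.sum())
```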

17.
18.
It is known that the small-sample significance levels of Cox-type tests of non-nested regression models can be much greater than the nominal level. Adjustments designed to overcome this problem are discussed and two tests are proposed. Monte Carlo evidence is presented on the performance of the tests derived in this paper, the Davidson-MacKinnon J-test and the Fisher-McAleer test. The F-test applied to the comprehensive model is also included in the simulation experiments.
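For reference, a minimal sketch of the Davidson-MacKinnon J-test in its standard textbook form (the small-sample adjustments proposed in the paper are not implemented here), with a purely illustrative simulated example:

```python
import numpy as np
import statsmodels.api as sm

def j_test(y, X1, X2):
    """Davidson-MacKinnon J-test of model 1 against a non-nested model 2:
    add the fitted values of model 2 as a regressor in model 1 and test
    whether its coefficient differs from zero."""
    yhat2 = sm.OLS(y, X2).fit().fittedvalues
    aug = sm.OLS(y, np.column_stack([X1, yhat2])).fit()
    return aug.tvalues[-1], aug.pvalues[-1]

# Illustrative use with two competing, non-nested regressor sets.
rng = np.random.default_rng(0)
n = 100
z1 = rng.normal(size=n)
z2 = 0.6 * z1 + rng.normal(size=n)       # competing, correlated regressor
y = 1.0 + 0.7 * z1 + rng.normal(size=n)
X1, X2 = sm.add_constant(z1), sm.add_constant(z2)
print(j_test(y, X1, X2))                  # model 1 contains the truth here
```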

19.
Researchers commonly use co-occurrence counts to assess the similarity of objects. This paper illustrates how traditional association measures can lead to misguided significance tests of co-occurrence in settings where the usual multinomial sampling assumptions do not hold. I propose a Monte Carlo permutation test that preserves the original distributions of the co-occurrence data. I illustrate the test on a dataset of organizational categorization, in which I investigate the relations between organizational categories (such as “Argentine restaurants” and “Steakhouses”).
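A hedged sketch of one way to implement such a permutation test (the membership-matrix representation and the shuffle-one-column null are assumptions for illustration; the paper's exact permutation scheme may differ): shuffle one category's membership column so its marginal frequency is preserved, and compare the observed co-occurrence count with the resulting permutation distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(membership, cat_a, cat_b, n_perm=10_000):
    """Monte Carlo permutation test for the co-occurrence of two category labels.

    `membership` is a binary matrix (organizations x categories); the observed
    statistic is the number of organizations holding both labels. Shuffling one
    column preserves each category's marginal frequency, unlike a multinomial null."""
    a, b = membership[:, cat_a].copy(), membership[:, cat_b]
    observed = int(np.sum(a * b))
    null = np.empty(n_perm)
    for i in range(n_perm):
        rng.shuffle(a)                    # in-place shuffle: same marginals, random pairing
        null[i] = np.sum(a * b)
    pvalue = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, pvalue

# Tiny illustrative membership matrix (rows: organizations, columns: categories).
M = rng.integers(0, 2, size=(50, 4))
print(permutation_pvalue(M, cat_a=0, cat_b=1))
```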

20.
The Monte Carlo method of exploring the properties of econometric estimators and significance tests has yielded a considerable amount of information that has practical value in guiding choice of technique in applied research. This paper presents a bibliography of such Monte Carlo studies over the period 1948–1972. About 150 citations are listed alphabetically by author, and also under a detailed subject-matter classification scheme.

