Similar Articles
 20 similar articles found (search time: 261 ms)
1.
We develop a methodology of parametric modeling of time series dynamics when the underlying loss function is linear-exponential (Linex). We propose to directly model the dynamics of the conditional expectation that determines the optimal predictor. The procedure hinges on the exponential quasi-maximum likelihood interpretation of the Linex loss and nicely fits the multiplicative error modeling framework. Many conclusions relating to estimation, inference and forecasting follow from results already available in the econometric literature. The methodology is illustrated using data on United States GNP growth and Treasury bill returns.
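As a point of reference (not the authors' code), the sketch below illustrates the Linex loss and the textbook result that, under Gaussian errors, the Linex-optimal predictor shifts the conditional mean by a·σ²/2; the asymmetry parameter a and the simulation setup are our own assumptions.

```python
import numpy as np

def linex_loss(error, a=0.5):
    """Linex loss: exponential penalty on one side of zero, roughly linear on the other."""
    return np.exp(a * error) - a * error - 1.0

# Under Gaussian forecast errors with mean 0 and variance sigma^2, the predictor that
# minimizes expected Linex loss is the conditional mean shifted by a*sigma^2/2.
rng = np.random.default_rng(0)
a, sigma = 0.5, 1.0
y = rng.normal(0.0, sigma, 200_000)            # target draws around a conditional mean of 0

shifts = np.linspace(-1.0, 1.0, 401)           # candidate predictors (shifts of the mean)
risk = [linex_loss(y - c, a).mean() for c in shifts]
print(shifts[int(np.argmin(risk))], a * sigma**2 / 2)   # both approximately 0.25
```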

2.
Modelling futures term structures (price forward curves) is essential for commodity-related investments, portfolios, risk management, and capital budgeting decisions. This paper uses a novel strategy, wavelet thresholding, to de-noise futures price data prior to estimation in a state-space framework in order to improve model fit and prediction. Rather than de-noise the raw data, this method de-noises only wavelet coefficients linked to specific timescales, minimizing the amount of information that is accidentally removed. Our findings are that, for the first five futures maturities in our sample data, in-sample (tracking) and 5-day-ahead out-of-sample (forecasting) Root Mean Squared Errors (RMSEs) are smaller both (i) when we increase the number of factors from one to four, and (ii) when we de-noise the data using wavelet thresholding. The improvement due to wavelet thresholding is often greater than the improvement from adding one more factor to the model, which is important because going beyond four factors does not improve model fit. Wavelet-based de-noising thus has the potential to improve considerably the estimation of various economic time series models, helping practitioners and policymakers with better forecasting and risk management.
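A minimal sketch of wavelet-thresholding de-noising, assuming the PyWavelets package: it soft-thresholds detail coefficients with the standard universal threshold and exposes a hypothetical keep_levels argument to mimic de-noising only coefficients tied to selected timescales. The paper's exact scale selection and state-space estimation are not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(series, wavelet="db4", level=4, keep_levels=None):
    """Soft-threshold wavelet detail coefficients (universal threshold).

    keep_levels: indices of detail levels left untouched, so only coefficients
    tied to selected timescales are de-noised (hypothetical knob, for illustration).
    """
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients (MAD rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(series)))
    out = [coeffs[0]]                                    # keep the approximation coefficients
    for i, d in enumerate(coeffs[1:], start=1):
        if keep_levels and i in keep_levels:
            out.append(d)                                # leave this timescale as-is
        else:
            out.append(pywt.threshold(d, thresh, mode="soft"))
    return pywt.waverec(out, wavelet)[: len(series)]

# Example: de-noise a synthetic futures price path before model estimation.
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 512)) + rng.normal(0, 0.5, 512)
smoothed = wavelet_denoise(prices)
```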

3.
Block factor methods offer an attractive approach to forecasting with many predictors. They extract the information in the predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which has forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
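A sketch of the weight-updating recursion behind dynamic model averaging, assuming per-model one-step-ahead predictive densities have already been computed (e.g. from Kalman-filtered regressions on different blocks); the forgetting-factor form follows the standard Raftery-style approximation rather than the authors' exact implementation.

```python
import numpy as np

def dma_weights(pred_dens, alpha=0.99):
    """Dynamic model averaging weights from per-model predictive densities.

    pred_dens: (T, K) array, p_k(y_t | information up to t-1) for each model k.
    alpha:     forgetting factor; smaller values let weights move faster.
    Returns a (T, K) array of predictive weights pi_{t|t-1,k}.
    """
    T, K = pred_dens.shape
    post = np.full(K, 1.0 / K)          # pi_{0|0,k}: equal initial weights
    weights = np.zeros((T, K))
    for t in range(T):
        prior = post ** alpha
        prior /= prior.sum()            # pi_{t|t-1,k} via forgetting
        weights[t] = prior
        post = prior * pred_dens[t]
        post /= post.sum()              # pi_{t|t,k} after observing y_t
    return weights

# The DMA forecast at t is sum_k weights[t, k] * yhat[t, k];
# dynamic model selection (DMS) instead picks the single model with the largest weight.
```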

4.
In this paper, we study the dynamics between house prices and selected macroeconomic fundamentals in Greece. The empirical analysis applies the asymmetric ARDL cointegration methodology proposed by Shin, Yu and Greenwood-Nimmo (2011) over the period from January 1999 to May 2011. The evidence suggests that ignoring the intrinsic nonlinearities may lead to misleading inference. In particular, the results reveal significant differences in the response of house prices to positive and negative changes in the explanatory variables over both long- and short-run horizons. The obtained evidence of asymmetry could be of major importance for more efficient policymaking and forecasting in the Greek housing market.
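The core of the asymmetric ARDL idea is to split each explanatory variable into partial sums of its positive and negative changes, so that their long-run effects can differ. The sketch below shows only that decomposition and a static levels regression on simulated placeholder data (variable names and data are hypothetical, assuming pandas and statsmodels); the full error-correction estimation and bounds testing are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def partial_sums(x):
    """Decompose a series into cumulative positive and negative changes (NARDL-style)."""
    dx = np.diff(x, prepend=x[0])
    return np.cumsum(np.clip(dx, 0, None)), np.cumsum(np.clip(dx, None, 0))

# Hypothetical data: log house prices (hp) driven by a macro fundamental (inc).
rng = np.random.default_rng(2)
inc = np.cumsum(rng.normal(0.002, 0.01, 150))
hp = 0.8 * inc + np.cumsum(rng.normal(0, 0.005, 150))

inc_pos, inc_neg = partial_sums(inc)
X = sm.add_constant(pd.DataFrame({"inc_pos": inc_pos, "inc_neg": inc_neg}))
levels_fit = sm.OLS(hp, X).fit()    # asymmetric long-run relation (sketch only)
print(levels_fit.params)            # different coefficients suggest long-run asymmetry
```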

5.
Vacant technology forecasting (VTF) is a technology forecasting approach for identifying the future technological needs of a given industrial field. Knowing the future trend of a developing technology is important for the R&D planning of a company or a country. In this paper, we propose a new Bayesian model for patent clustering, a VTF methodology based on patent data analysis. Our method combines Bayesian learning with an ensemble method to construct the VTF model. To illustrate the practical use of the proposed methodology, we perform a case study of a given technology domain using patent documents retrieved from patent databases around the world.

6.
This paper argues for the development of more explicit forecasting methodologies that use the pragmatics of combining methods and the philosophical base of multiple perspectives. The increasingly common “wicked” problem of forecasting demand for discontinuous innovations (DI) at the concept testing stage of new product development is used to ground the discussion. We look to the interpretivist group-based inquiry methodologies in the management and information systems literature, and couple this with discussions with forecasting managers, to provide evidence to support the adoption of this approach. Relativism is briefly critiqued and the accuracy of the combining-methods forecasting literature is reviewed. It appears that the managers interviewed could benefit from an explicit understanding of the multiple perspective approach, as they already appeared to have appreciated the need for a broader-based approach than traditional forecasting techniques. It is therefore hoped that, as a result of this paper, more managers involved with the “wicked” problem of innovative product forecasting will recognise the need to adopt a more explicit multiple perspective inquiry methodology in their efforts to combine forecasting methods.

7.
Experimental studies of expectation formation are predominantly limited to the prediction of a single time series, despite the practical relevance of expectations in situations with multiple sources of information. In this paper, we report on an experiment in which subjects are given additional time series (indicators) as information for the judgemental forecast of a stationary time series. The quality and the number of these indicators are varied in three versions of a forecasting experiment. We explore the effects on forecasting accuracy and test the subjects' average forecasts for consistency with the rational expectations hypothesis. A simple heuristic is presented that explains the average forecasting behavior better than rational expectations when indicators are presented to the subjects. A simulation study demonstrates that this result is representative of the considered stationary stochastic processes.

8.
We forecast US inflation using a standard set of macroeconomic predictors and a dynamic model selection and averaging methodology that allows the forecasting model to change over time. Pseudo out-of-sample forecasts are generated from models identified from a multipath general-to-specific algorithm that is applied dynamically using rolling regressions. Our results indicate that the inflation forecasts that we obtain employing a short rolling window substantially outperform those from a well-established univariate benchmark, and contrary to previous evidence, are considerably robust to alternative forecast periods.
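A rough sketch, assuming pandas and statsmodels, of the kind of dynamic selection described here: a simplified single-path general-to-specific reduction (the paper uses a multipath algorithm) re-run on each short rolling window before producing a one-step-ahead pseudo out-of-sample forecast. Function names and the t-ratio cutoff are our own, and the predictors are assumed to be already lagged.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def gets_select(y, X, t_crit=1.96):
    """Single-path backward elimination: drop the least significant regressor
    until every remaining |t|-statistic exceeds t_crit."""
    cols = list(X.columns)
    while len(cols) > 1:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        tvals = fit.tvalues.drop("const").abs()
        if tvals.min() >= t_crit:
            break
        cols.remove(tvals.idxmin())
    return cols

def rolling_gets_forecast(y, X, window=36):
    """Re-select and re-estimate the model on each short rolling window,
    then forecast one step ahead (X is assumed to contain lagged predictors)."""
    preds = []
    for end in range(window, len(y)):
        ys, Xs = y.iloc[end - window:end], X.iloc[end - window:end]
        cols = gets_select(ys, Xs)
        fit = sm.OLS(ys, sm.add_constant(Xs[cols])).fit()
        x_next = sm.add_constant(X[cols], has_constant="add").iloc[[end]]
        preds.append(float(np.asarray(fit.predict(x_next))[0]))
    return pd.Series(preds, index=y.index[window:])
```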

9.
During the past decade, there have been some significant developments in technological forecasting methodology. This paper describes developments in environmental scanning, models, scenarios, Delphi, extrapolation, probabilistic forecasts, technology measurement and some chaos-like behavior in technological data. Some of these developments are refinements of earlier methodology, such as using computerized data mining (DM) for environmental scanning, which extends the power of earlier methods. Other methodology developments, such as the use of cellular automata and object-oriented simulation, represent new approaches to basic forecasting methods. Probabilistic forecasts were developed only within the past decade, but now appear ready for practical use. Other developments include the wide use of some methods, such as the massive national Delphi studies carried out in Japan, Korea, Germany and India. Other new developments include empirical tests of various trend extrapolation methods, to assist the forecaster in selecting the appropriate trend model for a specific case. Each of these developments is discussed in detail.  相似文献   

10.
This exploratory paper is among the first to examine the impact of stock exchange mergers on informational market efficiency. We focus on the merger of the Bolsa de Valores de Lisboa e Porto (Portuguese Stock Exchange) with Euronext in 2002, which created Euronext Lisbon. To investigate this question we perform numerous statistical tests: a serial correlation (ACF) test, a runs test, a unit root test (Kwiatkowski, Phillips, Schmidt, & Shin, 1992), a multiple variance ratio test (Chow & Denning, 1993) and ranks and signs tests (Wright, 2000). The results indicate that the Portuguese equity market is inefficient in the weak form during the pre-merger period, implying that investors had an opportunity to earn abnormal returns, though small in magnitude. The results, sensitive to the methodology used, give mixed evidence of an improvement in market efficiency during the post-merger period. Although the findings are mixed, most tests show a tendency towards improved efficiency.
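For intuition only (the paper applies the Chow and Denning multiple variance ratio test, among others), here is a simplified single Lo-MacKinlay variance ratio with its homoskedastic z-statistic, without finite-sample bias corrections; under weak-form efficiency the ratio should hover near one.

```python
import numpy as np

def variance_ratio(returns, q):
    """Simplified Lo-MacKinlay variance ratio and homoskedastic z-statistic (sketch)."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / n
    rq = np.convolve(r, np.ones(q), mode="valid")     # overlapping q-period returns
    varq = np.sum((rq - q * mu) ** 2) / (n * q)
    vr = varq / var1
    se = np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * n))
    return vr, (vr - 1) / se

rng = np.random.default_rng(3)
rw_returns = rng.normal(0, 0.01, 1000)                # a random-walk (efficient) benchmark
for q in (2, 4, 8, 16):
    vr, z = variance_ratio(rw_returns, q)
    print(q, round(vr, 3), round(z, 2))               # VR near 1 and |z| small under efficiency
```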

11.
To explain which methods might win forecasting competitions on economic time series, we consider forecasting in an evolving economy subject to structural breaks, using mis-specified, data-based models. ‘Causal’ models need not win when facing deterministic shifts, a primary factor underlying systematic forecast failure. We derive conditional forecast biases and unconditional (asymptotic) variances to show that when the forecast evaluation sample includes sub-periods following breaks, non-causal models will outperform at short horizons. This suggests using techniques which avoid systematic forecasting errors, including improved intercept corrections. An application to a small monetary model of the UK illustrates the theory.
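The "improved intercept corrections" mentioned here build on a familiar idea, sketched generically below (this is an illustration, not the authors' specific proposal): shift the model forecast by the most recent error, or an average of recent errors, so that a deterministic shift no longer produces systematically biased forecasts at short horizons.

```python
import numpy as np

def intercept_corrected_forecast(past_forecasts, past_actuals, raw_forecast):
    """Simplest intercept correction: add the latest observed error to the model forecast,
    'setting the model back on track' after a possible deterministic shift."""
    last_error = past_actuals[-1] - past_forecasts[-1]   # residual at the forecast origin
    return raw_forecast + last_error

def averaged_intercept_correction(past_forecasts, past_actuals, raw_forecast, k=4):
    """A smoother variant that averages the last k errors instead of using only the latest."""
    recent_errors = np.asarray(past_actuals[-k:]) - np.asarray(past_forecasts[-k:])
    return raw_forecast + recent_errors.mean()
```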

12.
Modis [Technol. Forecast. Soc. Change 34 (1988) 95] reports that a logistic growth (LG) model of the number of U.S. Nobel Prize recipients provides an excellent fit for the period 1901-1987. This model forecasts that approximately 235 Americans will receive a Nobel Prize by year-end 2002 and that a total of 283 Americans will eventually receive a Nobel Prize. We use recent data (1901-2002) on prize recipients to provide a revised test of this model. The results of extensive holdout forecasting and nonlinear least-squares fits to the data provide convincing evidence that the LG model systematically underpredicts the number of Nobel Prizes awarded to Americans. For instance, the cumulative number of American recipients as of year-end 2002 is 270, significantly larger than the LG forecast of 235. We argue that other approaches to forecasting the number of future Nobel awards should be considered.
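The logistic growth curve at issue can be fitted by nonlinear least squares in a few lines. The sketch below uses SciPy's curve_fit on synthetic placeholder counts (the real 1901-2002 cumulative series would be substituted); the estimated ceiling K corresponds to the "eventual total" whose underprediction the authors document.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative logistic growth curve: saturates at K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cumulative counts by year (replace with the actual 1901-2002 Nobel data).
years = np.arange(1901, 2003)
cum_counts = logistic(years, 280, 0.06, 1970) + np.random.default_rng(4).normal(0, 3, years.size)

params, _ = curve_fit(logistic, years, cum_counts, p0=(300, 0.05, 1960), maxfev=10_000)
K_hat, r_hat, t0_hat = params
print(f"estimated ceiling K = {K_hat:.0f}")   # holdout fits would test the stability of this ceiling
```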

13.
In stock market forecasting, high-order time-series models that use several previous periods of stock prices as forecast factors are better placed to provide a superior investment portfolio for investors than first-order time-series models that use only one previous period. However, high-order stock data are difficult to handle in the forecasting process, because it is hard to assign a proper weight to each past period of prices, to reduce the data dimension without losing price information, and to produce a comprehensive forecast from stock data with nonlinear relationships. In addition, past time-series models have two further drawbacks: (1) statistical methods such as the autoregressive (AR) and autoregressive moving average (ARMA) models require assumptions about the stock variables (Bollerslev, 1986; Engle, 1982); and (2) numeric time-series models have been proposed for stock market forecasting, but they cannot handle the nonlinear relationships within stock prices. To address these shortcomings, this paper proposes a new time-series model that employs the ordered weighted averaging (OWA) operator to fuse high-order data into aggregated single-attribute values, which are then fed into an adaptive network-based fuzzy inference system (ANFIS) procedure to forecast stock prices in the Taiwanese stock market. For verification, the paper uses a seven-year period of the TAIEX stock index, from 1997 to 2003, as the experimental dataset and the root mean square error (RMSE) as the evaluation criterion. The experimental results indicate that the proposed model is superior to the listed methods in terms of RMSE.
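A brief sketch of the ordered weighted averaging step: the OWA operator sorts the inputs and applies weights to the order statistics, collapsing a window of past prices into one aggregated attribute value that a fuzzy-inference model could then consume. The numbers and weights below are illustrative, not taken from the paper.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort the inputs, then apply positional weights."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # descending order statistics
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                      # normalize the weight vector
    return float(np.dot(w, v))

# Fuse a high-order window of past closing prices into a single scalar forecast factor.
last_five_closes = [101.2, 100.8, 102.5, 99.7, 101.9]    # hypothetical index values
weights = [0.35, 0.25, 0.20, 0.12, 0.08]                 # more emphasis on larger observations
fused = owa(last_five_closes, weights)
print(fused)   # this aggregated value would then feed an ANFIS-type fuzzy inference model
```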

14.
This article models industrial new orders across the European Union (EU) countries for various breakdowns. A common modelling framework exploits both soft data (business opinion surveys) and hard data (industrial turnover). The estimates show, for about 200 cases, that the model determinants significantly help in explaining the monthly growth rates of new orders. An alternative estimation method, different model specifications and out-of-sample and real-time forecasting all show that the model results are robust. We present real-time outcomes of a European Central Bank (ECB) indicator on industrial new orders at the aggregated euro area level. This indicator is largely based on national new orders data and on estimates yielded by the model for those countries that no longer report new orders at the national level. Finally, we demonstrate the leading content of the ECB indicator on euro area new orders for industrial production.

15.
Time series analysis for the Euro Area requires the availability of sufficiently long historical data series, but the appropriate construction methodology has received little attention. The benchmark dataset, developed by the European Central Bank for use in its Area Wide Model (AWM), is based on fixed-weight aggregation across countries with historically distinct monetary policies and financial markets of varying international importance. This paper proposes a new methodology for producing back-dated financial series for the Euro Area, based on the time-varying distance of periphery countries from core countries with respect to monetary integration. Historical decompositions of the residuals of vector autoregressive models of the Euro Area economy are then used to explore and compare the monetary policy implications of using the new methodology versus the AWM fixed-weight series.

16.
This paper presents a model to predict the French gross domestic product (GDP) quarterly growth rate. The model is designed to be used on a monthly basis by integrating monthly economic information through bridge models for both the supply and demand sides, thus allowing economic interpretations. For each GDP component, bridge equations are specified using a general-to-specific approach implemented in an automated way by Hoover and Perez and improved by Krolzig and Hendry. This approach allows explanatory variables to be selected from a large data set of hard and soft data. A rolling forecast study is carried out to assess the forecasting performance in predicting aggregated GDP, taking publication lags into account in order to run pseudo real-time forecasts. It turns out that the model outperforms benchmark models. The results show that changing the set of equations over the quarter is superior to keeping the same equations over time. In addition, GDP growth seems to be more precisely predicted from a supply-side approach than from a demand-side approach.

17.
Using the methodology developed in Stock and Watson (2002a), this paper proposes to exploit the information contained in the factor loadings to identify the countries that share common factors. The proposal is illustrated by analyzing the relation of a large sample of advanced countries with the international reference cycle from 1950 until 2006.
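A compact sketch of the principal-components step behind this kind of analysis, assuming a numeric (T x N) panel with countries in columns: the eigenvectors of the sample covariance matrix give the loadings, and countries with sizeable loadings of the same sign on a factor can be read as sharing it. This is the generic Stock and Watson style estimator, not the authors' exact specification.

```python
import numpy as np

def factor_loadings(panel, n_factors=2):
    """Principal-components factors and loadings from a (T x N) array of series."""
    X = (panel - panel.mean(axis=0)) / panel.std(axis=0)   # standardize each country series
    T = X.shape[0]
    eigval, eigvec = np.linalg.eigh(X.T @ X / T)           # columns of eigvec = loading vectors
    order = np.argsort(eigval)[::-1][:n_factors]
    loadings = eigvec[:, order]                             # N x n_factors loading matrix
    factors = X @ loadings                                  # T x n_factors common factors
    return factors, loadings

# Countries whose first-factor loadings are large and similar in sign share that common
# (reference-cycle) factor; near-zero loadings suggest the country does not.
```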

18.
We propose a new methodology for predicting electoral results that combines a fundamental model and national polls within an evidence synthesis framework. Although novel, the methodology builds upon basic statistical structures, largely modern analysis of variance type models, and it is carried out in open-source software. The methodology is motivated by the specific challenges of forecasting elections with the participation of new political parties, which is becoming increasingly common in the post-2008 European panorama. Our methodology is also particularly useful for the allocation of parliamentary seats, since the vast majority of available opinion polls predict at national level whereas seats are allocated at local level. We illustrate the advantages of our approach relative to recent competing approaches using the 2015 Spanish Congressional Election. In general, the predictions of our model outperform the alternative specifications, including hybrid models that combine fundamental and polls models. Our forecasts are, in relative terms, particularly accurate in predicting the seats obtained by each political party.
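The paper's contribution is the evidence-synthesis vote model itself; the sketch below only covers the final seat-allocation step, using the D'Hondt highest-averages rule applied in Spanish congressional districts, with hypothetical district-level vote counts (in practice these would come from projecting national poll shares onto each district).

```python
from collections import Counter

def dhondt(votes, seats):
    """Allocate seats in one district with the D'Hondt highest-averages rule.

    votes: dict mapping party -> predicted district-level vote count.
    """
    quotients = [(v / d, party) for party, v in votes.items() for d in range(1, seats + 1)]
    winners = [party for _, party in sorted(quotients, reverse=True)[:seats]]
    return Counter(winners)

# Hypothetical district with 8 seats and three parties.
print(dhondt({"A": 100_000, "B": 80_000, "C": 30_000}, seats=8))
# -> Counter({'A': 4, 'B': 3, 'C': 1}); small parties can miss out entirely at district level.
```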

19.
20.
In this article, we forecast employment growth for Germany with data for the period from November 2008 to November 2015. Hutter and Weber (2015) introduced an innovative unemployment indicator and evaluated the performance of several leading indicators, including the Ifo Employment Barometer (IEB), to predict unemployment changes. Since the IEB focuses on employment growth instead of unemployment developments, we mirror the study by Hutter and Weber (2015). It turns out that in our case, and in contrast to their article, the IEB outperforms their newly developed indicator. Additionally, consumers’ unemployment expectations and hard data such as new orders exhibit a high forecasting accuracy.
