Similar Articles
Found 20 similar articles.
1.
Using weekly data for stock and Forex market returns, a set of MS-GARCH models is estimated for a group of high-income (HI) countries and emerging market economies (EMEs) using algorithms proposed by Augustyniak (2014) and Ardia et al. (2018, 2019a,b), allowing for a variety of conditional variance and distribution specifications. The main results are: (i) the models selected using Ardia et al. (2018) have a better fit than those estimated by Augustyniak (2014), contain skewed distributions, and often require that the main coefficients be different in each regime; (ii) in Latam Forex markets, estimates of the heavy-tail parameter are smaller than in HI Forex and all stock markets; (iii) the persistence of the high-volatility regime is considerable and more evident in stock markets (especially in Latam EMEs); (iv) in (HI and Latam) stock markets, a single-regime GJR model (leverage effects) with skewed distributions is selected; but when using MS models, virtually no MS-GJR models are selected. However, this does not happen in Forex markets, where leverage effects are not found either in single-regime or MS-GARCH models.

2.
Information criteria (IC) are often used to decide between forecasting models. Commonly used criteria include Akaike's IC and Schwarz's Bayesian IC. They involve the sum of two terms: the model's log likelihood and a penalty for the number of model parameters. The likelihood is calculated with equal weight being given to all observations. We propose that greater weight should be put on more recent observations in order to reflect the model's accuracy on more recent data. This seems particularly pertinent when selecting among exponential smoothing methods, as they are based on an exponential weighting principle. In this paper, we use exponential weighting within the calculation of the log likelihood for the IC. Our empirical analysis uses supermarket sales and call centre arrivals data. The results show that basing model selection on the new exponentially weighted IC can outperform individual models and selection based on the standard IC.
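The exponentially weighted likelihood idea lends itself to a compact sketch. The following is a minimal illustration under a Gaussian error assumption, not the authors' implementation; the `decay` parameter and the rescaling of the weights to sum to the sample size are assumptions of this sketch:

```python
import numpy as np

def weighted_aic(residuals, n_params, decay=0.95):
    """Exponentially weighted Akaike IC (illustrative sketch): the most
    recent residual gets weight 1, older ones decay geometrically with age.
    `decay` is a hypothetical tuning parameter, not from the paper."""
    resid = np.asarray(residuals, dtype=float)
    n = len(resid)
    ages = np.arange(n - 1, -1, -1)        # 0 = most recent observation
    w = decay ** ages
    w = w * n / w.sum()                    # rescale so the weights sum to n
    sigma2 = np.average(resid ** 2, weights=w)
    loglik = -0.5 * np.sum(w * (np.log(2 * np.pi * sigma2) + resid ** 2 / sigma2))
    return -2.0 * loglik + 2.0 * n_params
```

Setting `decay=1.0` gives every observation equal weight, so the standard AIC is recovered as a special case.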

3.
This paper studies valuation changes of capital inflows in 19 emerging market economies (EMEs). In most of the EMEs, we find that there are significant valuation changes and a positive rate of return on external liabilities by foreigners. Furthermore, the nonlinear effects of exchange rate movements on valuation changes are investigated using panel smooth transition regression models. Empirical results show that the transition is centered at approximately −22.3% of exchange rate change, which implies that when the exchange rate appreciates more than this level, foreign investment value gains increase considerably.

4.
This paper uses Bayesian methods to estimate a real business cycle model that allows for interactions among fiscal policy instruments, the stochastic ‘fiscal limit’ and sovereign default. Using the particle filter to perform likelihood‐based inference, we estimate the full nonlinear model with post‐EMU data until 2010:Q4. We find that (i) the probability of default on Greek debt was in the range of 5–10% in 2010:Q4 and (ii) the 2011 surge in the Greek real interest rate is within model forecast bands. The results suggest that a nonlinear rational expectations environment can account for the Greek interest rate path.

5.
The general consensus in the volatility forecasting literature is that high-frequency volatility models outperform low-frequency volatility models. However, such a conclusion is reached when low-frequency volatility models are estimated from daily returns. Instead, we study this question considering daily, low-frequency volatility estimators based on open, high, low, and close daily prices. Our data sample consists of 18 stock market indices. We find that high-frequency volatility models tend to outperform low-frequency volatility models only for short-term forecasts. As the forecast horizon increases (up to one month), the difference in forecast accuracy becomes statistically indistinguishable for most market indices. To evaluate the practical implications of our results, we study a simple asset allocation problem. The results reveal that asset allocation based on high-frequency volatility model forecasts does not outperform asset allocation based on low-frequency volatility model forecasts.
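One representative open-high-low-close estimator of the kind this abstract refers to is Parkinson's range-based estimator. A minimal sketch (daily variance under a no-drift assumption; the abstract does not say which estimators the authors use):

```python
import numpy as np

def parkinson_vol(high, low):
    """Parkinson (1980) range-based variance estimator, averaged over days:
    sigma^2 = mean( (ln(H/L))^2 ) / (4 ln 2).
    Uses only daily high and low prices, i.e. low-frequency data."""
    h = np.asarray(high, dtype=float)
    l = np.asarray(low, dtype=float)
    return np.mean(np.log(h / l) ** 2) / (4.0 * np.log(2.0))
```

Because it exploits the intraday range rather than only close-to-close returns, it is considerably more efficient than the squared daily return while still needing no high-frequency data.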

6.
In many decision contexts, there is a need for benchmark equity valuations, based on simplified modeling and publicly available information. Prior research on U.S. data however shows that the accuracy of such valuation models can be low and sensitive to the choice of model specifications and value driver predictions. In this paper, we test the applicability and pricing accuracy of three fundamental valuation (dividend discount, residual income, and abnormal earnings growth) models, all based on forecasts of company dividends, earnings, and/or equity book values. Extending prior research, we apply these models to Scandinavian firms with accounting data from the period 2005–2014, explicitly testing two approaches for the prediction of the value drivers—exogenously forecasted numbers versus projected historical numbers. Given access to the forecasted value drivers, the dividend discount model comes out as the most accurate valuation model. In particular, this holds in a comparison between the most parsimonious model specifications. The residual income valuation model generates the best pricing accuracy given the prediction of value drivers based on historical financial numbers. Notably, we observe pricing errors that in general are lower than what has been reported in prior U.S.‐based research for the dividend discount and the residual income valuation models. The pricing accuracy of the abnormal earnings growth models is surprisingly weak in the Scandinavian setting. However, these models improve somewhat after a couple of complexity adjustments, in particular with value driver predictions based on the projected history setting.

7.
Retailers supply a wide range of stock keeping units (SKUs), which may differ for example in terms of demand quantity, demand frequency, demand regularity, and demand variation. Given this diversity in demand patterns, it is unlikely that any single model for demand forecasting can yield the highest forecasting accuracy across all SKUs. To save costs through improved forecasting, there is thus a need to match any given demand pattern to its most appropriate prediction model. To this end, we propose an automated model selection framework for retail demand forecasting. Specifically, we consider model selection as a classification problem, where classes correspond to the different models available for forecasting. We first build labeled training data based on the models’ performances in previous demand periods with similar demand characteristics. For future data, we then automatically select the most promising model via classification based on the labeled training data. The performance is measured by economic profitability, taking into account asymmetric shortage and inventory costs. In an exploratory case study using data from an e-grocery retailer, we compare our approach to established benchmarks. We find promising results, but also that no single approach clearly outperforms its competitors, underlining the need for case-specific solutions.
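Casting model selection as classification can be sketched in a few lines. This toy uses a 1-nearest-neighbour classifier and invented model labels, standing in for whatever classifier and forecasting-model pool the authors actually use:

```python
import numpy as np

def select_model(train_features, train_best_model, new_features):
    """Model selection as classification (1-NN sketch): each training row
    describes a demand pattern (e.g. mean, CV, intermittency), labelled
    with the forecasting model that performed best on it historically.
    A new pattern inherits the label of its closest training pattern."""
    X = np.asarray(train_features, dtype=float)
    labels = list(train_best_model)
    z = np.asarray(new_features, dtype=float)
    idx = np.argmin(np.linalg.norm(X - z, axis=1))
    return labels[idx]
```

In practice the labels would come from backtested profitability (with asymmetric shortage/inventory costs) rather than accuracy alone, as the abstract emphasises.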

8.
Estimation of technical efficiency is widely used in empirical research using both cross-sectional and panel data. Although several stochastic frontier models for panel data are available, only a few of them are normally applied in empirical research. In this article we chose a broad selection of such models based on different assumptions and specifications of heterogeneity, heteroskedasticity and technical inefficiency. We applied these models to a single dataset from Norwegian grain farmers for the period 2004–2008. We also introduced a new model that disentangles firm effects from persistent (time-invariant) and residual (time-varying) technical inefficiency. We found that efficiency results are quite sensitive to how inefficiency is modeled and interpreted. Consequently, we recommend that future empirical research should pay more attention to modeling and interpreting inefficiency as well as to the assumptions underlying each model when using panel data.

9.

The railway signalling system is a safety-critical system for ensuring railway safety, and its development cost is substantial. Applying generic signalling systems in different environments through the configuration of application data is therefore of great economic value. In this paper, a new method to configure the application data completely and accurately is illustrated; in particular, a technique called automatic generation technology is introduced to automatically configure the functional logic of safety-critical systems, i.e. the computer-based interlocking (CBI) system and the automatic train protection (ATP) system. All of the application data are collected from the workflow among various departments through the enterprise system (ES). Some application data are represented by models employing automatic generation technology, and the functional logic is then obtained through analysis using these models. A configuration platform based on the ES is developed, in which both the efficiency and the accuracy of the application data configuration are significantly improved. In addition, the platform largely eliminates human error.

10.
Electric load forecasting is a crucial part of business operations in the energy industry. Various load forecasting methods and techniques have been proposed and tested. With growing concerns about cybersecurity and malicious data manipulation, an emerging topic is the development of robust load forecasting models. In this paper, we propose a robust support vector regression (SVR) model to forecast electricity demand under data integrity attacks. We first introduce a weight function to calculate the relative importance of each observation in the load history. We then construct a weighted quadratic surface SVR model. Some theoretical properties of the proposed model are derived. Extensive computational experiments are based on the publicly available data from the Global Energy Forecasting Competition 2012 and ISO New England. To imitate data integrity attacks, we deliberately increase or decrease the historical load data. Finally, the computational results demonstrate that the proposed robust model achieves better accuracy than other recently proposed robust models in the load forecasting literature.
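The weighting idea, downweighting observations that look anomalous so that manipulated loads contribute less to the fit, can be sketched as follows. This is an illustrative weight function paired with a weighted least-squares stand-in, not the paper's weighted quadratic-surface SVR (which requires a quadratic-programming solver); the MAD-based weights and the cutoff `c` are assumptions of this sketch:

```python
import numpy as np

def robustness_weights(load, c=3.0):
    """Illustrative weight function: observations far from the median load,
    measured in robust (MAD) units, receive smaller weights, so attacked
    points have limited influence. `c` is a hypothetical cutoff scale."""
    x = np.asarray(load, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-12
    z = np.abs(x - med) / (1.4826 * mad)      # robust z-scores
    return 1.0 / (1.0 + (z / c) ** 2)

def weighted_linear_fit(X, y, w):
    """Weighted least squares via sqrt-weight rescaling; a simple stand-in
    for the weighted SVR fit. Returns [slope(s), intercept]."""
    sw = np.sqrt(np.asarray(w, dtype=float))[:, None]
    Xw = np.hstack([np.asarray(X, dtype=float), np.ones((len(y), 1))]) * sw
    yw = np.asarray(y, dtype=float) * sw.ravel()
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta
```

With one deliberately inflated load value, the attacked point's weight collapses and the fitted trend stays close to the clean one, which is the behaviour the robust model is designed to deliver.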

11.
Journal of Econometrics, 2002, 109(2): 341–363
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the “best” empirical model developed without common cycle restrictions need not nest the “best” model developed with those restrictions. This is due to possible differences in the lag-lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan–Quinn criterion performs best among model selection criteria in simultaneously selecting the lag-length and rank of vector autoregressions.

12.
This paper uses large Factor Models (FMs), which accommodate a large cross-section of macroeconomic time series, to forecast the per capita growth rate, inflation, and the nominal short-term interest rate for the South African economy. The FMs used in this study contain 267 quarterly series observed over the period 1980Q1–2006Q4. The results, based on the RMSEs of one- to four-quarter-ahead out-of-sample forecasts from 2001Q1 to 2006Q4, indicate that the FMs tend to outperform alternative models such as an unrestricted VAR, Bayesian VARs (BVARs) and a typical New Keynesian Dynamic Stochastic General Equilibrium (NKDSGE) model in forecasting the three variables under consideration, hence illustrating the blessing of dimensionality.

13.
Prediction markets have been an important source of information for decision makers due to their high ex post accuracy. Nevertheless, recent failures of prediction markets remind us of the importance of ex ante assessments of their prediction accuracy. This paper proposes a systematic procedure for decision makers to acquire prediction models which may be used to predict the correctness of winner-take-all markets. We commence with a set of classification models and generate combined models following various rules. We also create artificial records in the training datasets to overcome the imbalanced-data issue in classification problems. These models are then empirically trained and tested on a large dataset to see which may best be used to predict the failures of prediction markets. We find that no model universally outperforms the others in terms of different performance measures. Despite this, we identify a set of capable models for decision makers with different decision goals.

14.
This study is an attempt to construct and test a distress classification model for Korean companies. Utilizing a sample of 34 distressed firms from the 1990–1993 period and a matched (by industry and year) sample of non-failed firms, we observe the classification accuracy of two models. Both models utilize measures of firm size, asset turnover, solvency and leverage, with one model available for testing only on publicly traded companies and one model applicable to all public and private entities. We observe excellent classification accuracy based on data from the first two years prior to distress. And, although the accuracy drops off after t−2, the models still provide effective early warnings of distress in many cases. The results of this study are of particular relevance in the current financial market scenario of increased deregulation and greater decision-making autonomy for individual financial institutions. It is somewhat ironic for us to be proposing the use of a financial distress early-warning model given the current robust economic growth and low bankruptcy rate in Korea. But the financial problems in Japan are a sobering reminder that high growth can be followed by financial excesses, increased business failures and large loan losses.

15.
We propose an out-of-sample prediction approach that combines unrestricted mixed-data sampling with machine learning (mixed-frequency machine learning, MFML). We use the MFML approach to generate a sequence of nowcasts and backcasts of weekly unemployment insurance initial claims based on a rich trove of daily Google Trends search volume data for terms related to unemployment. The predictions are based on linear models estimated via the LASSO and elastic net, nonlinear models based on artificial neural networks, and ensembles of linear and nonlinear models. Nowcasts and backcasts of weekly initial claims based on models that incorporate the information in the daily Google Trends search volume data substantially outperform those based on models that ignore the information. Predictive accuracy increases as the nowcasts and backcasts include more recent daily Google Trends data. The relevance of daily Google Trends data for predicting weekly initial claims is strongly linked to the COVID-19 crisis.
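The penalised linear models mentioned above can be illustrated with a plain coordinate-descent LASSO. This is a self-contained sketch of the technique, not the authors' code; the column layout (daily predictors as regressors for a weekly target) and the penalty value are assumptions:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO for 0.5*||y - Xb||^2 + lam*||b||_1.
    Each pass soft-thresholds one coefficient against its partial residual.
    In an MFML-style setup, columns of X could be daily search-volume
    predictors and y the weekly series being nowcast."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta
```

The soft-thresholding step is what zeroes out irrelevant predictors, which is why the LASSO is attractive when the pool of daily search terms is large relative to the number of weekly observations.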

16.
Forecasting economic and financial variables with global VARs
This paper considers the problem of forecasting economic and financial variables across a large number of countries in the global economy. To this end a global vector autoregressive (GVAR) model, previously estimated by Dees, di Mauro, Pesaran, and Smith (2007) and Dees, Holly, Pesaran, and Smith (2007) over the period 1979Q1–2003Q4, is used to generate out-of-sample forecasts one and four quarters ahead for real output, inflation, real equity prices, exchange rates and interest rates over the period 2004Q1–2005Q4. Forecasts are obtained for 134 variables from 26 regions, which are made up of 33 countries and cover about 90% of world output. The forecasts are compared to typical benchmarks: univariate autoregressive and random walk models. Building on the forecast combination literature, the effects of model and estimation uncertainty on forecast outcomes are examined by pooling forecasts obtained from different GVAR models estimated over alternative sample periods. Given the size of the modelling problem, the heterogeneity of the economies considered (industrialised, emerging, and less developed countries), and the very real likelihood of possibly multiple structural breaks, averaging forecasts across both models and windows makes a significant difference. Indeed, the double-averaged GVAR forecasts perform better than the benchmark competitors, especially for output, inflation and real equity prices.

17.
The entropy valuation of options (Stutzer, 1996) provides a risk-neutral probability distribution (RND) as the pricing measure by minimizing the Kullback–Leibler (KL) divergence between the empirical probability distribution and its risk-neutral counterpart. This article establishes a unified entropic framework by developing a class of generalized entropy pricing models based upon the Cressie–Read (CR) family of divergences. The main contributions of this study are: (1) the unified framework can readily incorporate a set of informative risk-neutral moments (RNMs) of the underlying return extracted from the option market, which accurately capture the characteristics of the underlying distribution; (2) the classical KL-based entropy pricing model is extended to a unified entropic pricing framework built upon the family of CR divergences. For each of the proposed models under the unified framework, the optimal RND is derived by employing the dual method. Simulations show that, compared to the true price, each model of the proposed family can price options with high accuracy. Meanwhile, the pricing biases differ across the models, and we therefore conduct theoretical analysis and experimental investigations to explore their driving causes.
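The KL-based special case of such a framework reduces to exponentially tilting the empirical distribution until the martingale (risk-neutral) condition holds. A minimal sketch on discrete states, solving for the tilt parameter by bisection; the equal empirical weights, the bracketing interval, and the single moment constraint are assumptions of this illustration, not the article's dual method:

```python
import numpy as np

def entropic_rnd(gross_returns, gross_rf, probs=None, tol=1e-12):
    """Stutzer-style minimum-KL risk-neutral distribution (sketch):
    tilt q_i ∝ p_i * exp(lam * R_i) and choose lam so that the
    risk-neutral expected gross return equals the gross risk-free rate.
    E_q[R] is increasing in lam, so bisection finds the unique root."""
    R = np.asarray(gross_returns, dtype=float)
    p = (np.full(len(R), 1.0 / len(R)) if probs is None
         else np.asarray(probs, dtype=float))

    def q(lam):
        a = lam * R
        w = p * np.exp(a - a.max())        # shift for numerical stability
        return w / w.sum()

    def gap(lam):
        return q(lam) @ R - gross_rf

    lo, hi = -100.0, 100.0                 # assumed bracket for the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return q(0.5 * (lo + hi))
```

The returned distribution is then used as the pricing measure; richer members of the CR family replace the exponential tilt with other divergence-specific reweightings.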

18.
We investigate whether the choice of valuation model affects the forecast accuracy of the target prices that investment analysts issue in their equity research reports, controlling for factors that influence this choice. We examine 490 equity research reports from international investment houses for 94 UK-listed firms published over the period July 2002–June 2004. We use four measures of accuracy: (i) whether the target price is met during the 12-month forecast horizon (met_in); (ii) whether the target price is met on the last day of the 12-month forecast horizon (met_end); (iii) the absolute forecast error (abs_err); and (iv) the forecast error of target prices that are not met at the end of the 12-month forecast horizon (miss_err). Based on met_in and abs_err, price-to-earnings (PE) outperform discounted cash flow (DCF) models, while based on met_end and miss_err the difference in valuation model performance is insignificant. However, after controlling for variables that capture the difficulty of the valuation task, the performance of DCF models improves in all specifications and, based on miss_err, they outperform PE models. These findings are robust to standard controls for selection bias.

19.
We focus on the importance of the assumptions regarding how inefficiency should be incorporated into the specification of the data generating process in an examination of a sector's production or efficiency. Drawing on the literature on non-nested hypothesis testing, we find that the model selection approach of Vuong (1989) is a potentially useful tool for identifying the best specification before carrying out such studies. We include an empirical application using panel data on Spanish dairy farms, where we estimate cost frontiers under different specifications of how inefficiency enters the data generating process (in particular, efficiency is introduced as an input-oriented, output-oriented and hyperbolic parameter). Our results show that the different models yield very different pictures of the technology and the efficiency levels of the sector, illustrating the importance of choosing the correct model before carrying out production and efficiency analyses. The Vuong test shows that the input-oriented model is the best among the models we use, whereas the output-oriented model is the worst. This is consistent with the fact that the input- and output-oriented models provide the most and least credible estimates of scale economies, respectively, given the structure of the sector.
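The Vuong (1989) statistic itself is straightforward to compute from per-observation log-likelihoods of the two competing models; the sketch below omits the degrees-of-freedom correction that is sometimes applied when the models differ in parameter count:

```python
import numpy as np
from math import erf, sqrt

def vuong_test(loglik_a, loglik_b):
    """Vuong non-nested model selection test (basic form): z is the scaled
    mean of per-observation log-likelihood differences. A significantly
    positive z favours model A, a significantly negative z favours model B.
    Returns (z, two-sided p-value under the standard normal)."""
    m = np.asarray(loglik_a, dtype=float) - np.asarray(loglik_b, dtype=float)
    n = len(m)
    z = sqrt(n) * m.mean() / m.std(ddof=1)
    # two-sided p = 2 * (1 - Phi(|z|)), with Phi via the error function
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p
```

Because the statistic is symmetric, swapping the two models only flips the sign of z, which is what makes it usable for ranking rival frontier specifications as in the application above.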

20.
This paper generalizes existing econometric models for censored competing risks by introducing a new flexible specification based on a piecewise linear baseline hazard, time‐varying regressors, and unobserved individual heterogeneity distributed as an infinite mixture of generalized inverse Gaussian (GIG) densities, nesting the gamma kernel as a special case. A common correlated latent time effect induces dependence among risks. Our model is based on underlying latent exit decisions in continuous time while only a time interval containing the exit time is observed, as is common in economic data. We do not make the simplifying assumption of discretizing exit decisions—our competing risk model setup allows for latent exit times of different risk types to be realized within the same time period. In this setting, we derive a tractable likelihood based on scaled GIG Laplace transforms and their higher‐order derivatives. We apply our approach to analyzing the determinants of unemployment duration with exits to jobs in the same industry or a different industry among unemployment insurance recipients on nationally representative individual‐level survey data from the US Department of Labor. Our approach allows us to conduct a counterfactual policy experiment by changing the replacement rate: we find that the impact of its change on the probability of exit from unemployment is inelastic.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)