Similar Articles
20 similar articles found (search time: 109 ms)
1.
Tournament outcome uncertainty depends on the design of the tournament and on the relative strengths of the competitors – the competitive balance. A tournament design comprises the arrangement of the individual matches, which we call the tournament structure, the seeding policy and the progression rules. In this paper, we investigate the effect of seeding policy for various tournament structures, while taking account of competitive balance. Our methodology uses tournament outcome uncertainty to consider the effect of seeding policy and other design changes. Tournament outcome uncertainty is measured using the tournament outcome characteristic, the probability P_{q,R} that a team in the top 100q% of the pre-tournament rankings progresses beyond round R, for all q and R. We use Monte Carlo simulation to calculate the values of this metric. We find that, in general, seeding favours stronger competitors, but that the degree of favouritism varies with the type of seeding. Reseeding after each round favours the strong to the greatest extent. The ideas in the paper are illustrated using the soccer World Cup Finals tournament.
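An illustrative Monte Carlo sketch of the tournament outcome characteristic described above (not the paper's code): an eight-team seeded knockout bracket with hypothetical Bradley-Terry-style strengths, used to estimate the probability that a top-ranked team wins at least R rounds.

```python
# A minimal Monte Carlo sketch (hypothetical strengths and bracket, not the
# paper's World Cup setup). P_{q,R} is read here as the average probability
# that a team in the top 100q% of the pre-tournament ranking wins at least
# R rounds of a seeded knockout tournament.
import random

STRENGTHS = [8, 7, 6, 5, 4, 3, 2, 1]        # teams 0..7 in pre-tournament rank order
SEEDED_BRACKET = [0, 7, 3, 4, 1, 6, 2, 5]   # classic 1v8, 4v5, 2v7, 3v6 seeding

def win_prob(i, j):
    """Bradley-Terry-style probability that team i beats team j."""
    return STRENGTHS[i] / (STRENGTHS[i] + STRENGTHS[j])

def simulate(bracket):
    """Play one knockout tournament; return the number of rounds won by each team."""
    rounds_won = {t: 0 for t in bracket}
    alive = list(bracket)
    while len(alive) > 1:
        nxt = []
        for a, b in zip(alive[::2], alive[1::2]):
            winner = a if random.random() < win_prob(a, b) else b
            rounds_won[winner] += 1
            nxt.append(winner)
        alive = nxt
    return rounds_won

def outcome_characteristic(q, R, n_sims=20000):
    """Estimate P_{q,R} for the seeded bracket by Monte Carlo."""
    top = list(range(max(1, int(q * len(STRENGTHS)))))
    total = 0.0
    for _ in range(n_sims):
        rw = simulate(SEEDED_BRACKET)
        total += sum(rw[t] >= R for t in top) / len(top)
    return total / n_sims

print(outcome_characteristic(q=0.25, R=2))  # chance a top-2 seed wins at least two rounds
```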

2.
We use a dynamic modeling and selection approach for studying the informational content of various macroeconomic, monetary, and demographic fundamentals for forecasting house-price growth in the six largest countries of the European Monetary Union. The approach accounts for model uncertainty and model instability. We find superior performance compared to various alternative forecasting models. Plots of cumulative forecast errors visualize the superior performance of our approach, particularly after the recent financial crisis.

3.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
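To make the Laplace-error connection concrete, here is an illustrative sketch (not the paper's implementation, which uses full MCMC and Bayesian model averaging): the posterior mode of an AR(p) with Laplace errors and flat priors coincides with the median-regression fit, i.e. the coefficients minimising absolute residuals.

```python
# Minimal sketch of the idea behind BayesMAR: an AR(p) with Laplace errors has
# the same point estimate (posterior mode under flat priors) as median
# regression on lagged values. The paper estimates the full posterior by MCMC
# and averages over the order p; this only computes the L1 point fit and a
# one-step-ahead median forecast.
import numpy as np
from scipy.optimize import minimize

def fit_median_ar(y, p):
    """Fit y_t = b0 + b1*y_{t-1} + ... + bp*y_{t-p} + Laplace noise via L1 loss."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k:len(y) - k] for k in range(1, p + 1)])
    target = y[p:]
    loss = lambda b: np.abs(target - X @ b).sum()
    return minimize(loss, np.zeros(p + 1), method="Nelder-Mead").x

def forecast_one_step(y, coef):
    """Median one-step-ahead forecast from the fitted AR coefficients."""
    p = len(coef) - 1
    lags = np.r_[1.0, np.asarray(y, dtype=float)[::-1][:p]]  # intercept, y_T, y_{T-1}, ...
    return float(lags @ coef)

# toy usage on a simulated AR(1) series with Laplace noise
rng = np.random.default_rng(0)
y = [0.0]
for _ in range(300):
    y.append(0.5 * y[-1] + rng.laplace(scale=1.0))
coef = fit_median_ar(y, p=1)
print(coef, forecast_one_step(y, coef))
```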

4.
Forecasting the outcome of outbreaks as early and as accurately as possible is crucial for decision-making and policy implementation. A significant challenge faced by forecasters is that not all outbreaks and epidemics turn into pandemics, making the prediction of their severity difficult. At the same time, decisions about enforcing lockdowns and other mitigating interventions versus accepting their socioeconomic consequences are not only hard to make but also highly uncertain. The majority of modeling approaches to outbreaks, epidemics, and pandemics take an epidemiological approach that considers biological and disease processes. In this paper, we accept the limitations of forecasting the long-term trajectory of an outbreak and instead propose a statistical, time series approach to modelling and predicting the short-term behavior of COVID-19. Our model assumes a multiplicative trend, aiming to capture the continuation of the two variables we predict (global confirmed cases and deaths) as well as their uncertainty. We present the timeline of producing and evaluating 10-day-ahead forecasts over a period of four months. Our simple model offers competitive forecast accuracy and estimates of uncertainty that are useful and practically relevant.
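For illustration only (the paper's exact specification may differ), a generic damped multiplicative-trend exponential smoothing model applied to a hypothetical cumulative-cases series, with simulated future paths as a rough stand-in for forecast uncertainty.

```python
# Hedged sketch: a damped multiplicative-trend exponential smoothing model for
# short-horizon forecasts of a cumulative count series. The data and the model
# choice are illustrative assumptions, not the paper's specification.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# hypothetical cumulative confirmed cases with a decaying multiplicative growth rate
t = np.arange(120)
cum_cases = pd.Series(1000.0 * np.exp(5.0 * t / (t + 30.0)),
                      index=pd.date_range("2020-03-01", periods=120, freq="D"))

fit = ExponentialSmoothing(cum_cases, trend="mul", damped_trend=True).fit()
print(fit.forecast(10).round(0))            # 10-day-ahead point forecasts
sims = fit.simulate(10, repetitions=1000)   # simulated paths -> rough uncertainty bands
print(sims.quantile([0.05, 0.95], axis=1).round(0))
```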

5.
This paper examines the out-of-sample forecasting properties of six different economic uncertainty variables for the growth of the real M2 and real M4 Divisia money series for the U.S., using monthly data. The core contention is that information on economic uncertainty improves forecasting accuracy. We estimate vector autoregressive models using an iterated rolling-window forecasting scheme, in combination with modern regularisation techniques from the field of machine learning. Applying the Hansen-Lunde-Nason model confidence set approach under two different loss functions reveals strong evidence that uncertainty variables related to financial markets, the state of the macroeconomy or economic policy provide additional informational content when forecasting monetary dynamics. The use of regularisation techniques improves forecast accuracy substantially.
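A compact sketch of the general forecasting scheme (hypothetical data and tuning; the paper's regularisers and loss functions may differ): one equation of a VAR estimated with an L1 penalty on a rolling window, producing one-step-ahead forecasts of a money-growth series.

```python
# Rolling-window, equation-by-equation Lasso VAR forecasts. Variable names and
# data are hypothetical; this is only a sketch of the scheme described above.
import numpy as np
from sklearn.linear_model import Lasso

def lagged_design(Y, p):
    """Stack p lags of all columns of Y into a regression design matrix."""
    T = Y.shape[0]
    X = np.column_stack([Y[p - j - 1:T - j - 1] for j in range(p)])  # lag 1 cols, lag 2 cols, ...
    return X, Y[p:]

def rolling_lasso_var_forecast(Y, target_col, p=2, window=120, alpha=0.1):
    """One-step-ahead rolling-window Lasso forecasts for one VAR equation."""
    forecasts = []
    for end in range(window, Y.shape[0]):
        Yw = Y[end - window:end]
        X, Z = lagged_design(Yw, p)
        model = Lasso(alpha=alpha).fit(X, Z[:, target_col])
        x_new = Yw[-p:][::-1].ravel()          # most recent p lags, newest first
        forecasts.append(model.predict(x_new.reshape(1, -1))[0])
    return np.array(forecasts)

# toy usage: 3 hypothetical series (money growth plus two uncertainty measures)
rng = np.random.default_rng(1)
Y = rng.standard_normal((300, 3)).cumsum(axis=0) * 0.1
print(rolling_lasso_var_forecast(Y, target_col=0)[:5])
```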

6.
Macroeconomic data are subject to data revisions. Yet, the usual way of generating real-time density forecasts from Bayesian Vector Autoregressive (BVAR) models makes no allowance for data uncertainty from future data revisions. We develop methods of allowing for data uncertainty when forecasting with BVAR models with stochastic volatility. First, the BVAR forecasting model is estimated on real-time vintages. Second, the BVAR model is jointly estimated with a model of data revisions such that forecasts are conditioned on estimates of the 'true' values. We find that this second method generally improves upon conventional practice for density forecasting, especially for the United States.

7.
Drawing on recent empirical research, we study whether the international business cycle, as measured in terms of the output gaps of the G7 countries, has out-of-sample predictive power for gold-price fluctuations. To this end, we use a real-time forecasting approach that accounts for model uncertainty and model instability. We find some evidence that the international business cycle has predictive power for gold-price fluctuations. After accounting for transaction costs, a simple trading rule that builds on real-time out-of-sample forecasts does not lead to superior performance relative to a buy-and-hold strategy. We also suggest a behavioral-finance approach to study the quality of out-of-sample forecasts from the perspective of forecasters with potentially asymmetric loss functions.

8.
In this paper we take a novel approach to the theory of tournament rankings. We combine two different theories that are widely used to establish rankings of populations after a given tournament. First, we use the statistical approach of paired comparison analysis to define the performance of a player in a natural way. Then, we determine a ranking (and rating) of the players in the given tournament. Finally, we show, among other properties, that the new ranking method is the unique one satisfying a natural consistency requirement.
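The abstract does not spell out the paired-comparison model, so the sketch below uses the classic Bradley-Terry model, fitted by the standard minorization-maximization iteration, purely to illustrate how paired-comparison ratings induce a tournament ranking.

```python
# Bradley-Terry ratings from match results via the standard MM iteration
# (Zermelo/Hunter). Matches are invented; this stands in for the unspecified
# paired-comparison model of the paper.
import numpy as np

def bradley_terry(n_players, matches, iters=200):
    """matches: list of (winner, loser) index pairs. Returns ratings summing to 1."""
    wins = np.zeros(n_players)
    counts = np.zeros((n_players, n_players))   # games played between each pair
    for a, b in matches:
        wins[a] += 1
        counts[a, b] += 1
        counts[b, a] += 1
    r = np.ones(n_players)
    for _ in range(iters):
        denom = np.array([
            sum(counts[i, j] / (r[i] + r[j]) for j in range(n_players) if j != i)
            for i in range(n_players)
        ])
        r = wins / denom
        r /= r.sum()                             # normalise away the scale indeterminacy
    return r

# toy tournament among 4 players (every player has at least one win and one loss)
matches = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 2), (1, 3), (0, 1), (1, 0), (2, 1)]
ratings = bradley_terry(4, matches)
print(np.argsort(-ratings), ratings.round(3))    # ranking and ratings
```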

9.
Prediction markets have proved excellent tools for forecasting, outperforming experts and polls in many settings. But do larger markets, with wider participation, perform better than smaller markets? This paper analyses a series of repeated natural experiments in sports betting. The Queen's Club Tennis Championships are held every year, but every other year the Championships clash with a major soccer tournament. We find that tennis betting prices become significantly less informative when participation rates are adversely affected by the clashing soccer tournament. This suggests that measures which increase prediction market participation may lead to greater forecast accuracy.

10.
Performance measures of point forecasts are commonly expressed as skill scores, in which the performance gain from using one forecasting system over another is expressed as a proportion of the gain achieved by forecasting that outcome perfectly. Increasingly, it is common to express scores of probabilistic forecasts in this form; however, this paper presents three criticisms of this approach. Firstly, initial condition uncertainty (which is outside the forecaster's control) limits the capacity to improve a probabilistic forecast, and thus a 'perfect' score is often unattainable. Secondly, the skill score forms of the ignorance and Brier scores are biased. Finally, it is argued that the skill score form of scoring rules destroys the useful interpretation in terms of the relative skill levels of two forecasting systems. Indeed, it is often misleading, and useful information is lost when the skill score form is used in place of the original score.
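For reference, a tiny worked example (made-up numbers) of the two forms at issue: the Brier score of a probability forecast and its skill-score form relative to a climatological reference.

```python
# Brier score of a probabilistic forecast versus its skill-score form,
# BSS = 1 - BS / BS_ref, with climatology as the reference. Numbers are made up.
import numpy as np

def brier(p, y):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return np.mean((p - y) ** 2)

outcomes    = np.array([1, 0, 1, 1, 0, 0, 1, 0])
forecast_a  = np.array([0.8, 0.2, 0.7, 0.9, 0.3, 0.1, 0.6, 0.4])
climatology = np.full_like(forecast_a, outcomes.mean())   # reference forecast

bs_a, bs_ref = brier(forecast_a, outcomes), brier(climatology, outcomes)
print(f"BS={bs_a:.3f}, BS_ref={bs_ref:.3f}, BSS={1.0 - bs_a / bs_ref:.3f}")
# Note: BSS = 1 would require assigning probability 1 to every realised outcome,
# which irreducible initial-condition uncertainty generally rules out -- one of
# the criticisms raised in the paper.
```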

11.
In this paper, we present a new methodology for forecasting the results of mixed martial arts contests. Our approach utilises data scraped from freely available websites to estimate fighters' skills in various key aspects of the sport. With these skill estimates, we simulate the contest as an actual fight using Markov chains, rather than predicting a binary outcome. We compare the model's accuracy to that of the bookmakers using their historical odds and show that the model can be used as the basis of a successful betting strategy.
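An illustrative toy version of the simulation idea (invented states and transition probabilities; the paper estimates these from scraped fighter statistics): a fight modelled as a Markov chain over positions, run many times to obtain win probabilities rather than a single binary prediction.

```python
# Toy Markov-chain fight simulation. States and transition probabilities are
# invented for illustration only.
import random

# states: 'standing', 'ground', 'finish_A', 'finish_B'
# per-exchange transition probabilities, loosely encoding two fighters' skills
TRANSITIONS = {
    "standing": [("standing", 0.80), ("ground", 0.14), ("finish_A", 0.04), ("finish_B", 0.02)],
    "ground":   [("standing", 0.25), ("ground", 0.68), ("finish_A", 0.05), ("finish_B", 0.02)],
}

def simulate_fight(max_exchanges=150):
    """Run one fight; return 'A', 'B', or 'decision' if no finish occurs."""
    state = "standing"
    for _ in range(max_exchanges):
        r, cum = random.random(), 0.0
        for nxt, p in TRANSITIONS[state]:
            cum += p
            if r < cum:
                state = nxt
                break
        if state == "finish_A":
            return "A"
        if state == "finish_B":
            return "B"
    return "decision"   # could instead be scored from time spent in each position

results = [simulate_fight() for _ in range(10000)]
print({k: results.count(k) / len(results) for k in ("A", "B", "decision")})
```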

12.
The M5 competition uncertainty track aims at probabilistic forecasting of the sales of thousands of Walmart retail goods. We show that the M5 competition data exhibit strong overdispersion and sporadic demand, especially zero demand. We discuss modeling issues concerning adequate probabilistic forecasting of such count data processes. Unfortunately, the majority of popular prediction methods used in the M5 competition (e.g. lightgbm and xgboost GBMs) fail to address these data characteristics, due to the objective functions they consider. Distributional forecasting provides a suitable modeling approach to overcome those problems. The GAMLSS framework allows for flexible probabilistic forecasting using low-dimensional distributions. We illustrate how the GAMLSS approach can be applied to the M5 competition data by modeling the location and scale parameters of various distributions, e.g. the negative binomial distribution. Finally, we discuss software packages for distributional modeling and their drawbacks, such as the R package gamlss with its package extensions, and (deep) distributional forecasting libraries such as TensorFlow Probability.
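A small sketch of the distributional idea (hypothetical parameter values, and scipy's negative binomial rather than the R gamlss implementation): once a fitted model supplies a mean and an overdispersion for an SKU/day, any forecast quantile follows directly from the assumed count distribution.

```python
# Predictive quantiles from a negative binomial parameterised by mean mu and
# dispersion alpha (variance = mu + alpha * mu**2). The (mu, alpha) values are
# hypothetical stand-ins for a GAMLSS-type fit.
import numpy as np
from scipy.stats import nbinom

def nbinom_quantiles(mu, alpha, quantile_levels):
    """Quantiles of a negative binomial with mean mu and dispersion alpha."""
    n = 1.0 / alpha          # size parameter
    p = n / (n + mu)         # success probability in scipy's (n, p) parameterisation
    return nbinom.ppf(quantile_levels, n, p)

# the nine quantile levels used in the M5 uncertainty track
levels = np.array([0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995])
print(nbinom_quantiles(mu=1.2, alpha=0.5, quantile_levels=levels))
```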

13.
"This paper presents an approach of constructing confidence intervals by means of Monte Carlo simulation. This technique attempts to incorporate the uncertainty involved in projecting human populations by letting the fertility and net immigration rates vary as a random variable with a specific distribution. Since fertility and migration are by far the most volatile, and therefore, the most critical components to population forecasting, this technique has the potential of accounting for this uncertainty, if the subjective distributions are specified with enough care. Considering the results of the model for the U.S. in 2082, for example, it is shown that the population will number between 255 million and 355 million with a probability of 90 percent."  相似文献   

14.
Retailers supply a wide range of stock keeping units (SKUs), which may differ, for example, in terms of demand quantity, demand frequency, demand regularity, and demand variation. Given this diversity in demand patterns, it is unlikely that any single model for demand forecasting can yield the highest forecasting accuracy across all SKUs. To save costs through improved forecasting, there is thus a need to match any given demand pattern to its most appropriate prediction model. To this end, we propose an automated model selection framework for retail demand forecasting. Specifically, we treat model selection as a classification problem, where the classes correspond to the different models available for forecasting. We first build labeled training data based on the models' performance in previous demand periods with similar demand characteristics. For future data, we then automatically select the most promising model via classification based on the labeled training data. Performance is measured by economic profitability, taking into account asymmetric shortage and inventory costs. In an exploratory case study using data from an e-grocery retailer, we compare our approach to established benchmarks. We find promising results, but also that no single approach clearly outperforms its competitors, underlining the need for case-specific solutions.
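A compact sketch of the "model selection as classification" idea (hypothetical features, candidate models, and labels; the paper scores performance by asymmetric inventory and shortage costs rather than plain accuracy): label each training SKU with its historically best model, train a classifier, and let it pick the model for new SKUs.

```python
# Model selection as classification: demand features -> best-performing model.
# Features, labels, and candidate models below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_skus = 500
# hypothetical demand characteristics per SKU: mean demand, CV, share of zero days
features = np.column_stack([
    rng.gamma(2.0, 5.0, n_skus),       # average daily demand
    rng.uniform(0.1, 2.0, n_skus),     # coefficient of variation
    rng.uniform(0.0, 0.9, n_skus),     # fraction of zero-demand days
])

MODELS = ["ses", "croston", "arima"]   # candidate forecasting models
# label = index of the model that performed best on that SKU historically
# (generated here by a made-up rule: intermittent -> croston, smooth -> arima)
labels = np.where(features[:, 2] > 0.5, 1, np.where(features[:, 1] < 0.6, 2, 0))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)

new_sku = np.array([[12.0, 0.4, 0.05]])   # a smooth, frequently sold item
print("selected model:", MODELS[int(clf.predict(new_sku)[0])])
```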

15.
This paper provides empirical evidence on the dynamic effects of uncertainty on firm-level capital accumulation. A novelty in this paper is that the firm-level uncertainty indicator is motivated and derived from a theoretical model, the neoclassical investment model with time to build. This model also serves as the basis for the empirical work, where an error-correction approach is employed. I find a negative effect of uncertainty on capital accumulation, both in the short run and the long run. This outcome cannot be explained by the model alone. Instead, the results suggest that the predominant mechanism at work stems from irreversibility constraints.

16.
We analyze whether it is better to forecast air travel demand using aggregate data at (say) a national level, or to aggregate the forecasts derived for individual airports using airport-specific data. We compare the US Federal Aviation Administration's (FAA) practice of predicting the total number of passengers using macroeconomic variables with an equivalently specified AIM (aggregating individual markets) approach. The AIM approach outperforms the aggregate forecasting approach in terms of its out-of-sample air travel demand predictions for different forecast horizons. Variants of AIM, where we restrict the coefficient estimates of some explanatory variables to be the same across individual airports, generally dominate both the aggregate and AIM approaches. The superior out-of-sample performances of these so-called quasi-AIM approaches depend on the trade-off between heterogeneity and estimation uncertainty. We argue that the quasi-AIM approaches exploit the heterogeneity across individual airports efficiently, without suffering from as much estimation uncertainty as the AIM approach.

17.
Factor models have been applied extensively for forecasting when high-dimensional datasets are available. In this case, the number of variables can be very large. For instance, typical dynamic factor models in central banks handle over 100 variables. However, there is a growing body of literature indicating that more variables do not necessarily lead to estimated factors with lower uncertainty or better forecasting results. This paper investigates the usefulness of partial least squares techniques that take into account the variable to be forecast when reducing the dimension of the problem from a large number of variables to a smaller number of factors. We propose several dynamic sparse partial least squares approaches as a means of improving forecast efficiency by simultaneously taking into account the variable to be forecast while forming an informative subset of predictors, instead of using all the available ones to extract the factors. We use the well-known Stock and Watson database to check the forecasting performance of our approach. The proposed dynamic sparse models show good performance in improving efficiency compared to widely used factor methods in macroeconomic forecasting.
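To illustrate the target-aware dimension-reduction point (ordinary scikit-learn PLS as a stand-in; the paper's dynamic sparse PLS variants are more elaborate, and the data here are simulated rather than the Stock and Watson set): PCA compresses the predictors without looking at the target, whereas PLS builds factors that covary with it.

```python
# Contrast between target-agnostic factors (PCA regression) and target-aware
# factors (PLS) when many predictors are available but few are relevant.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
T, N = 200, 100
X = rng.standard_normal((T, N))
y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + rng.standard_normal(T)  # only 3 predictors matter

X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

pca = PCA(n_components=3).fit(X_tr)
pcr = LinearRegression().fit(pca.transform(X_tr), y_tr)   # factors ignore y
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)       # factors built to covary with y

mse = lambda yhat: float(np.mean((y_te - np.ravel(yhat)) ** 2))
print("PCR MSE:", mse(pcr.predict(pca.transform(X_te))),
      "PLS MSE:", mse(pls.predict(X_te)))
```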

18.
19.
Adaptive combining is generally a desirable approach for forecasting; however, it has rarely been explored for discrete response time series. In this paper, we propose an adaptively combined forecasting method for such discrete response data. We show theoretically that, under mild conditions, the proposed forecast achieves the desired adaptation with respect to the widely used squared risk and other important risk functions. Furthermore, we study the issue of adaptation for the proposed forecasting method in the presence of model screening, which is often useful in applications. Our simulation study and two real-world data examples show promise for the proposed approach.
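An illustrative sketch of forecast combining with adaptive weights for a count series (exponential weighting by cumulative squared error, in the spirit of AFTER-type schemes; the paper's combining rule and theory differ):

```python
# Adaptive combination of candidate forecasts: weights are updated each period
# from cumulative squared errors. Forecasters and data are toy examples.
import numpy as np

def adaptive_combine(y, forecasts, eta=0.5):
    """y: (T,) outcomes; forecasts: (T, K) candidate forecasts made before y_t."""
    T, K = forecasts.shape
    cum_loss = np.zeros(K)
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))   # shift for numerical stability
        w /= w.sum()
        combined[t] = w @ forecasts[t]
        cum_loss += (forecasts[t] - y[t]) ** 2
    return combined

# toy discrete series and two crude forecasters: last observation and expanding mean
rng = np.random.default_rng(11)
y = rng.poisson(3.0, 200).astype(float)
f1 = np.r_[y[0], y[:-1]]
f2 = np.r_[y[0], [y[:t].mean() for t in range(1, 200)]]
combined = adaptive_combine(y, np.column_stack([f1, f2]))
print(np.mean((combined - y) ** 2), np.mean((f1 - y) ** 2), np.mean((f2 - y) ** 2))
```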

20.
This paper constructs an aligned global economic policy uncertainty (GEPU) index based on a modified machine learning approach. We find that the aligned GEPU index is an informative predictor for forecasting crude oil market volatility both in- and out-of-sample. Compared to general GEPU indices without supervised learning, well-recognized economic variables, and other popular uncertainty indicators, the aligned GEPU index is rather powerful and can provide dominant or complementary information. A trading strategy based on the aligned GEPU index can also generate sizable economic gains. The statistical source of the aligned GEPU index's predictive power is that it learns both the magnitude and the sign of national EPU variables' predictive ability and thus yields reasonable and informative loadings. The economic driving force, on the other hand, probably stems from its ability to forecast shocks to oil-related fundamentals.
