Similar Literature
20 similar documents found (search time: 406 ms)
1.
Using two approaches to panel data (Granger causality analysis with semi-asymptotic tests, and a structural approach based on entropies measured on sequences of multiperiod ratings and returns), we specify the relationship between a fund’s performance and both its Morningstar and Europerformance ratings. We find evidence of forecasting ability for the Europerformance ratings of Luxembourg funds and for the Morningstar ratings of French funds; that is, funds fall into two groups according to their domicile and the rating that is most appropriate for them. The results have implications for the management of fund portfolios, and the structural approach, which is more robust on our data, should serve as a first step towards forecast models built on similar funds, a lower uncertainty or risk measure, and the appropriate rating.

2.
We propose a Bayesian estimation procedure for the generalized Bass model that is used in product diffusion models. Our method forecasts product sales early based on previous similar markets; that is, we obtain pre-launch forecasts by analogy. We compare our forecasting proposal to traditional estimation approaches, and alternative new product diffusion specifications. We perform several simulation exercises, and use our method to forecast the sales of room air conditioners, BlackBerry handheld devices, and compressed natural gas. The results show that our Bayesian proposal provides better predictive performances than competing alternatives when little or no historical data are available, which is when sales projections are the most useful.
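The idea of pre-launch forecasting by analogy can be sketched briefly. The following is a minimal illustration, not the authors' estimation procedure: priors for the Bass parameters (p, q, m) are centred on estimates from similar past markets, and a maximum a posteriori fit is computed from whatever early sales exist; all function and variable names and the prior values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def bass_sales(params, t):
    """Per-period sales implied by the Bass diffusion model."""
    p, q, m = params
    F = (1 - np.exp(-(p + q) * t)) / (1 + (q / p) * np.exp(-(p + q) * t))
    F_prev = np.concatenate(([0.0], F[:-1]))
    return m * (F - F_prev)

# Priors by analogy: means/sds taken from previously launched, similar products (illustrative)
prior_mean = np.array([0.03, 0.38, 1_000_000.0])   # p, q, market potential m
prior_sd   = np.array([0.01, 0.10,   200_000.0])

def neg_log_posterior(params, t, sales):
    if np.any(params <= 0):
        return np.inf
    resid = sales - bass_sales(params, t)
    log_lik   = -0.5 * np.sum(resid ** 2) / np.var(sales)      # Gaussian working likelihood
    log_prior = -0.5 * np.sum(((params - prior_mean) / prior_sd) ** 2)
    return -(log_lik + log_prior)

# With no (or very little) history, the posterior is dominated by the analogy prior.
t = np.arange(1, 4)                               # e.g. three early periods observed
early_sales = np.array([25_000., 31_000., 38_000.])
fit = minimize(neg_log_posterior, prior_mean, args=(t, early_sales), method="Nelder-Mead")
forecast = bass_sales(fit.x, np.arange(1, 21))    # 20-period sales forecast
```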

3.
We propose new models for analyzing pairwise comparison data, such as sports results. We focus on changes in players’ strengths and the prediction of future results. Our models are based on the Thurstone–Mosteller and Bradley–Terry models and allow their parameters to vary over time. We apply the models to data from the traditional Japanese sport of sumo. The proposed models outperform the standard Thurstone–Mosteller and Bradley–Terry models according to both the Akaike information criterion and the Brier score, and we compare them in detail by focusing on individual sumo wrestlers.
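As a rough illustration of the idea, a sketch only and not the paper's specification, a Bradley–Terry model with time-varying strengths can be fit by penalising period-to-period changes in each player's strength, in the spirit of a random-walk prior; the names, toy data, and penalty weight below are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# matches[t] is a list of (winner_index, loser_index) pairs for period t
def neg_penalized_loglik(theta_flat, matches, n_players, tau=1.0):
    """Bradley-Terry log-likelihood with a random-walk penalty on strength changes."""
    T = len(matches)
    theta = theta_flat.reshape(T, n_players)          # theta[t, i]: strength of player i in period t
    ll = 0.0
    for t, period in enumerate(matches):
        for w, l in period:
            ll += theta[t, w] - np.logaddexp(theta[t, w], theta[t, l])  # log P(w beats l)
    penalty = 0.5 / tau**2 * np.sum((theta[1:] - theta[:-1]) ** 2)      # smooth evolution over time
    return -(ll - penalty)

def fit_time_varying_bt(matches, n_players, tau=1.0):
    T = len(matches)
    x0 = np.zeros(T * n_players)
    res = minimize(neg_penalized_loglik, x0, args=(matches, n_players, tau), method="L-BFGS-B")
    return res.x.reshape(T, n_players)

# toy example: 3 players, 2 periods
matches = [[(0, 1), (0, 2), (1, 2)], [(2, 0), (2, 1), (0, 1)]]
strengths = fit_time_varying_bt(matches, n_players=3)
# strengths are identified only up to an additive constant, so centre them per period
strengths -= strengths.mean(axis=1, keepdims=True)
```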

4.
Retailers supply a wide range of stock keeping units (SKUs), which may differ, for example, in terms of demand quantity, demand frequency, demand regularity, and demand variation. Given this diversity in demand patterns, it is unlikely that any single model for demand forecasting can yield the highest forecasting accuracy across all SKUs. To save costs through improved forecasting, there is thus a need to match any given demand pattern to its most appropriate prediction model. To this end, we propose an automated model selection framework for retail demand forecasting. Specifically, we consider model selection as a classification problem, where classes correspond to the different models available for forecasting. We first build labeled training data based on the models’ performances in previous demand periods with similar demand characteristics. For future data, we then automatically select the most promising model via classification based on the labeled training data. Performance is measured by economic profitability, taking into account asymmetric shortage and inventory costs. In an exploratory case study using data from an e-grocery retailer, we compare our approach to established benchmarks. We find promising results, but also that no single approach clearly outperforms its competitors, underscoring the need for case-specific solutions.
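A minimal sketch of the "model selection as classification" idea, assuming scikit-learn; the feature names, candidate model labels, and toy data are illustrative rather than the paper's exact setup. Each SKU's demand characteristics are mapped to the label of whichever forecasting model performed best on comparable history, and a classifier then picks the model for new SKUs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative training data: one row of demand characteristics per SKU/period,
# labelled with the forecasting model that was most profitable on that history.
# Features here: mean demand, coefficient of variation, share of zero-demand days.
X_train = np.array([
    [120.0, 0.15, 0.00],
    [  3.0, 1.40, 0.60],
    [ 45.0, 0.55, 0.10],
    [  1.0, 2.10, 0.85],
])
y_train = np.array(["exp_smoothing", "croston", "arima", "croston"])  # best-model labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# For a new SKU, classify its demand pattern and dispatch to the chosen model.
new_sku_features = np.array([[60.0, 0.35, 0.05]])
chosen_model = clf.predict(new_sku_features)[0]
print(chosen_model)   # the actual forecast is then produced by that model
```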

5.
We develop a new method for deriving minimal state variable (MSV) equilibria of a general class of Markov switching rational expectations models, and a new algorithm for computing these equilibria. We compare our approach to previously known algorithms and demonstrate that ours is both efficient and more reliable, in the sense that it is able to find MSV equilibria that previously known algorithms cannot. Further, our algorithm can find all possible MSV equilibria in such models. This feature is essential if one is interested in using a likelihood-based approach to estimation.

6.
The analysis of longitudinally correlated binary data has attracted considerable attention of late. Since the estimation of parameters in models for such data is based on asymptotic theory, it is necessary to investigate the small-sample properties of estimators by simulation. In this paper, we review the mechanisms that have been proposed for generating longitudinally correlated binary data. We compare and contrast these models with regard to various features, including computational efficiency, flexibility and the range restrictions that they impose on the longitudinal association parameters. Some extensions to the data generation mechanism originally suggested by Kanter (1975) are proposed.
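As a brief illustration of one such data-generating mechanism (a generic first-order Markov-chain construction, not necessarily Kanter's 1975 scheme), the sketch below simulates longitudinal binary responses with a common marginal probability and a chosen serial correlation; the parameter values are arbitrary, and the range restriction on the correlation echoes the point made in the abstract.

```python
import numpy as np

def simulate_binary_panel(n_subjects, n_periods, p=0.3, rho=0.4, seed=0):
    """Simulate longitudinally correlated binary data via a stationary two-state
    Markov chain with marginal P(Y=1)=p and lag-1 correlation rho (rho >= 0 here;
    negative correlations impose further range restrictions on valid parameters)."""
    rng = np.random.default_rng(seed)
    p11 = p + rho * (1 - p)        # P(Y_t=1 | Y_{t-1}=1)
    p01 = p * (1 - rho)            # P(Y_t=1 | Y_{t-1}=0)
    Y = np.empty((n_subjects, n_periods), dtype=int)
    Y[:, 0] = rng.random(n_subjects) < p
    for t in range(1, n_periods):
        prob = np.where(Y[:, t - 1] == 1, p11, p01)
        Y[:, t] = rng.random(n_subjects) < prob
    return Y

Y = simulate_binary_panel(5000, 6)
print(Y.mean(), np.corrcoef(Y[:, 0], Y[:, 1])[0, 1])   # roughly 0.3 and 0.4
```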

7.
We employ a Bayesian approach to analyze data from financial market experiments. We estimate a structural model of sequential trading in which trading decisions are classified into five types: private-information based, noise, herd, contrarian and irresolute. Through Monte Carlo simulation, we estimate the posterior distributions of the structural parameters. This technique allows us to compare several non-nested models of trade arrival. We find that the model best fitting the data is one in which a proportion of trades stems from subjects who no longer rely only on their private information once the difference between the number of previous buy and sell decisions is at least two. In this model, the majority of trades stem from subjects following their private information. There is also a large proportion of noise trading activity, which is biased towards buying the asset. We observe little herding and contrarianism, as theory suggests. Finally, we observe a significant proportion of (irresolute) subjects who follow their own private information when it agrees with public information, but abstain from trading when it does not.

8.
This paper investigates different developments in non-expected utility theories. We focus on the agent’s attitude towards risk in the context of monetary gambles. Based on simulated data from the “Deal or No Deal” TV game show, we first compare the performance of the expected utility model with that of a loss-aversion model, and find that the loss-aversion model performs better. We then study attitudes towards risk according to two parameters: the relative risk aversion coefficient defined over the value function, and the probability weighting coefficient proposed by cumulative prospect theory. We find evidence that contestants weight probabilities, reflecting less risk aversion over large stakes. We also explore the performance of two rank-dependent utility models: the Quiggin (1982) and the power probability weighting models. We find that the probability weighting coefficient remains significant in both models. Finally, we integrate initial wealth into the contestants’ preference function and show that the initial wealth level affects the estimates of risk attitudes.
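For reference, the cumulative prospect theory ingredients mentioned above have standard parametric forms; the following is a minimal sketch using the Tversky–Kahneman functional forms with illustrative parameter values, not the estimates obtained in this paper.

```python
import numpy as np

def value_function(x, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value function: concave over gains, convex over losses, loss-averse (lam > 1)."""
    x = np.asarray(x, dtype=float)
    gains = np.clip(x, 0, None) ** alpha
    losses = -lam * np.clip(-x, 0, None) ** beta
    return np.where(x >= 0, gains, losses)

def probability_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small and underweights large probabilities."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Certainty equivalent of a 50/50 gamble between 0 and 100,000 under these parameters
w = probability_weight(0.5)
cpt_value = w * value_function(100_000.0)
certainty_equivalent = float(cpt_value) ** (1 / 0.88)   # invert the gains value function
print(round(certainty_equivalent))
```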

9.
The M4 competition identified innovative forecasting methods, advancing the theory and practice of forecasting. One of the most promising innovations of M4 was the utilization of cross-learning approaches that allow models to learn from multiple series how to accurately predict individual ones. In this paper, we investigate the potential of cross-learning by developing various neural network models that adopt such an approach, and we compare their accuracy to that of traditional models that are trained in a series-by-series fashion. Our empirical evaluation, which is based on the M4 monthly data, confirms that cross-learning is a promising alternative to traditional forecasting, at least when appropriate strategies for extracting information from large, diverse time series data sets are considered. Ways of combining traditional with cross-learning methods are also examined in order to initiate further research in the field.
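A minimal sketch of the cross-learning idea, illustrative only and not the network architectures evaluated in the paper: a single model is trained on fixed-length lag windows pooled across all series, instead of fitting one model per series; the toy data and window length are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, window=12):
    """Turn one series into (lag-window, next-value) training pairs."""
    X, y = [], []
    for t in range(window, len(series)):
        X.append(series[t - window:t])
        y.append(series[t])
    return np.array(X), np.array(y)

def fit_cross_learned_model(all_series, window=12):
    """Pool windows from every series so the model learns across them (cross-learning)."""
    Xs, ys = zip(*(make_windows(s, window) for s in all_series))
    X, y = np.vstack(Xs), np.concatenate(ys)
    return MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# toy data: many short monthly-like series
rng = np.random.default_rng(0)
series_list = [np.cumsum(rng.normal(size=60)) for _ in range(50)]
global_model = fit_cross_learned_model(series_list)

# one-step forecast for any individual series from its last 12 observations
next_value = global_model.predict(series_list[0][-12:].reshape(1, -1))
```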

10.
Recent literature on panel data emphasizes the importance of accounting for time-varying unobservable individual effects, which may stem from either omitted individual characteristics or macro-level shocks that affect each individual unit differently. In this paper, we propose a simple specification test of the null hypothesis that the individual effects are time-invariant against the alternative that they are time-varying. Our test is an application of the Hausman (1978) testing procedure and can be used for any generalized linear model for panel data that admits a sufficient statistic for the individual effect. This is a wide class of models which includes the Gaussian linear model and a variety of nonlinear models typically employed for discrete or categorical outcomes. The basic idea of the test is to compare two alternative estimators of the model parameters based on two different formulations of the conditional maximum likelihood method. Our approach does not require assumptions on the distribution of unobserved heterogeneity, nor does it require the latter to be independent of the regressors in the model. We investigate the finite sample properties of the test through a set of Monte Carlo experiments. Our results show that the test performs well, with small size distortions and good power properties. We use a health economics example based on data from the Health and Retirement Study to illustrate the proposed test.
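The generic Hausman-type comparison behind such a test can be written in a few lines. The sketch below uses hypothetical inputs rather than the paper's conditional-likelihood estimators: it computes the quadratic-form statistic and its chi-squared p-value for two estimators that are both consistent under the null but only one of which is efficient.

```python
import numpy as np
from scipy.stats import chi2

def hausman_statistic(b_restricted, V_restricted, b_robust, V_robust):
    """Hausman-type test: H = (b1 - b2)' (V2 - V1)^{-1} (b1 - b2) ~ chi2(k) under the null.
    b_restricted/V_restricted: efficient under the null; b_robust/V_robust: consistent under both."""
    diff = b_robust - b_restricted
    V_diff = V_robust - V_restricted
    H = float(diff @ np.linalg.pinv(V_diff) @ diff)   # pseudo-inverse guards against near-singularity
    df = diff.size
    return H, 1 - chi2.cdf(H, df)

# hypothetical estimates from two alternative formulations of the estimator
b1, V1 = np.array([0.50, -0.20]), np.diag([0.010, 0.012])
b2, V2 = np.array([0.55, -0.25]), np.diag([0.015, 0.020])
H, pval = hausman_statistic(b1, V1, b2, V2)
print(H, pval)   # reject time-invariant effects if pval is small
```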

11.
A new empirical reduced-form model for credit rating transitions is introduced. It is a parametric intensity-based duration model with multiple states, driven by exogenous covariates and latent dynamic factors. The model has a generalized semi-Markov structure designed to accommodate many of the stylized facts of credit rating migrations. Parameter estimation is based on Monte Carlo maximum likelihood methods, for which the details are discussed in this paper. A simulation experiment is carried out to show the effectiveness of the estimation procedure. An empirical application is presented for transitions in a 7-grade rating system. The model includes a common dynamic component that can be interpreted as the credit cycle. Asymmetric effects of this cycle across rating grades and additional semi-Markov dynamics are found to be statistically significant. Finally, we investigate whether the common factor model suffices to capture systematic risk in rating transition data by introducing multiple factors in the model.

12.
We propose an out-of-sample prediction approach that combines unrestricted mixed-data sampling with machine learning (mixed-frequency machine learning, MFML). We use the MFML approach to generate a sequence of nowcasts and backcasts of weekly unemployment insurance initial claims based on a rich trove of daily Google Trends search volume data for terms related to unemployment. The predictions are based on linear models estimated via the LASSO and elastic net, nonlinear models based on artificial neural networks, and ensembles of linear and nonlinear models. Nowcasts and backcasts of weekly initial claims based on models that incorporate the information in the daily Google Trends search volume data substantially outperform those based on models that ignore the information. Predictive accuracy increases as the nowcasts and backcasts include more recent daily Google Trends data. The relevance of daily Google Trends data for predicting weekly initial claims is strongly linked to the COVID-19 crisis.
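A condensed sketch of the linear part of such a mixed-frequency setup, assuming scikit-learn; the data layout, simulated series, and variable names are illustrative, not the authors' exact MFML pipeline. Each week's initial claims are regressed, via the elastic net, on that week's daily Google Trends readings kept as separate regressors (the unrestricted MIDAS-style treatment) rather than averaged to weekly frequency.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative shapes: 150 weeks of claims; for each week, 7 daily Trends readings
# for each of 10 unemployment-related search terms, stacked as 70 separate regressors.
rng = np.random.default_rng(0)
daily_trends = rng.normal(size=(150, 7 * 10))
weekly_claims = 200_000 + daily_trends[:, :5].sum(axis=1) * 10_000 + rng.normal(0, 5_000, 150)

train, test = slice(0, 140), slice(140, 150)
model = make_pipeline(
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5, max_iter=10_000),
)
model.fit(daily_trends[train], weekly_claims[train])
nowcasts = model.predict(daily_trends[test])   # nowcasts/backcasts for the held-out weeks
```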

13.
High-dimensional data cause various problems in the application of data envelopment analysis (DEA). In this study, we focus on a “big data” problem related to the considerably large dimensions of the input-output data. We compare, via Monte Carlo simulation, the four most widely used approaches to dimension reduction in DEA: principal component analysis (PCA-DEA), based on aggregating inputs and outputs; efficiency contribution measurement (ECM); the average efficiency measure (AEC); and regression-based detection (RB), based on variable selection. We compare the performance of these methods under different scenarios, using a new comparison benchmark for the simulation test, and we discuss the effect of initial variable selection in RB for the first time. Based on the results, we offer more reliable guidelines on how to choose an appropriate method.
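A compact sketch of the PCA-DEA idea, assuming scipy and scikit-learn; it is illustrative only, not the simulation design of the paper. The original inputs are replaced by their leading principal components (shifted to remain positive) before solving a standard input-oriented CCR envelopment LP for each decision-making unit.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency for each DMU. X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision vector z = [theta, lambda_1..lambda_n]; minimise theta
        c = np.concatenate(([1.0], np.zeros(n)))
        A_in = np.hstack((-X[o].reshape(m, 1), X.T))        # sum_j lam_j x_ij - theta x_io <= 0
        A_out = np.hstack((np.zeros((s, 1)), -Y.T))          # -sum_j lam_j y_rj <= -y_ro
        A_ub = np.vstack((A_in, A_out))
        b_ub = np.concatenate((np.zeros(m), -Y[o]))
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

rng = np.random.default_rng(0)
X_raw = rng.uniform(1, 10, size=(30, 12))    # 30 DMUs, 12 correlated inputs (toy data)
Y = rng.uniform(1, 10, size=(30, 2))

# PCA-DEA: aggregate the 12 inputs into 3 components, keep the data positive, then run DEA
Z = PCA(n_components=3).fit_transform(X_raw)
Z = Z - Z.min(axis=0) + 1.0
print(ccr_efficiency(Z, Y))
```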

14.
We propose new forecast combination schemes for predicting turning points of business cycles. The proposed combination schemes are based on the forecasting performances of a given set of models, with the aim of providing better turning point predictions. In particular, we consider predictions generated by autoregressive (AR) and Markov-switching AR models, which are commonly used for business cycle analysis. In order to account for parameter uncertainty, we consider a Bayesian approach for both estimation and prediction and compare, in terms of statistical accuracy, the individual models and the combined turning point predictions for the United States and the Euro area business cycles.

15.
The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance for US time series with the most promising existing alternatives, namely factor models, large-scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two-step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination, rather than separately, substantially improves forecast accuracy, both when the whole set of variables is to be forecast and for key variables such as industrial production growth, inflation, and the federal funds rate. The robustness of this finding is confirmed by a Monte Carlo experiment based on bootstrapped data. We also provide a consistency result for the reduced rank regression that is valid when the dimension of the system tends to infinity, which opens the way to using large-scale reduced rank models for empirical analysis.
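To fix ideas, classical reduced rank regression can be computed directly from the OLS fit. The following is a minimal sketch with an identity weighting matrix and simulated data, not the paper's forecasting exercise.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical RRR with identity weighting: project the OLS coefficient matrix onto
    the leading right singular vectors of the fitted values, so Y ~ X @ B_rr."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)      # (k, n) unrestricted coefficients
    fitted = X @ B_ols
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    V_r = Vt[:rank].T                                   # (n, rank) leading directions of fitted Y
    return B_ols @ V_r @ V_r.T                          # rank-restricted coefficient matrix

# toy system: 200 observations, 8 predictors, 20 target variables, true rank 2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
B_true = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 20))
Y = X @ B_true + rng.normal(scale=0.5, size=(200, 20))

B_rr = reduced_rank_regression(X, Y, rank=2)
forecast = X[-1:] @ B_rr      # one-step "forecast" of all 20 variables from the last row
```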

16.
Semiparametric quantile regression is employed to flexibly estimate sales response for frequently purchased consumer goods. Using retail store-level data, we compare the performance of models with and without monotonic smoothing for fit and prediction accuracy. We find that (a) flexible models with monotonicity constraints imposed on price effects dominate both in-sample and out-of-sample comparisons while being robust even at the boundaries of the price distribution when data is sparse; (b) quantile-based confidence intervals are much more accurate compared to least-squares-based intervals; (c) specifications reflecting that managers may not have exact knowledge about future competitive pricing perform extremely well.

17.
We introduce several new sports team rating models based on the gradient descent algorithm. More precisely, the models update ratings by taking a single step of this optimisation heuristic towards maximising the likelihood of the observed match results. The proposed framework is inspired by the prominent Elo rating system, and yields an iterative version of ordinal logistic regression as well as different variants of Poisson regression-based models. This construction makes the update equations easy to interpret, and adjusts ratings as soon as new match results are observed, so it naturally handles temporal changes in team strength. Moreover, a study of association football data indicates that the new models yield more accurate forecasts and are less computationally demanding than corresponding methods that jointly optimise the likelihood over the whole set of matches.
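The connection to Elo can be made concrete in a few lines: the classic Elo update is exactly one gradient step on the Bernoulli log-likelihood of a match outcome under a logistic win probability. A minimal sketch follows, with scale constants and team names chosen purely for illustration.

```python
def expected_score(r_home, r_away):
    """Logistic win probability implied by the rating difference (Elo's expected score)."""
    return 1.0 / (1.0 + 10 ** ((r_away - r_home) / 400.0))

def elo_update(r_home, r_away, home_won, k=20.0):
    """One match = one gradient step: the derivative of the log-likelihood w.r.t. the rating
    is (outcome - expected), so ratings move by k times that gradient, in opposite directions."""
    p = expected_score(r_home, r_away)
    grad = (1.0 if home_won else 0.0) - p
    return r_home + k * grad, r_away - k * grad

ratings = {"Arsenal": 1500.0, "Chelsea": 1500.0}
ratings["Arsenal"], ratings["Chelsea"] = elo_update(ratings["Arsenal"], ratings["Chelsea"], home_won=True)
print(ratings)   # the winner gains what the loser drops; surprising results move ratings more
```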

18.
We estimate and compare two models in which households periodically update their expectations. The first model assumes that households update their expectations towards survey measures. In the second model, households update their expectations towards rational expectations (RE). While the literature has used these specifications interchangeably, we argue that there are important differences. The two models imply different updating probabilities, and the data seem to prefer the second one. We then analyse the properties of both models in terms of mean expectations, median expectations, and a measure of disagreement among households. The model with periodic updates towards RE also seems to fit the data better along these dimensions.

19.
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper, we develop a new time-varying parameter model that permits cointegration. We use a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving the Fisher effect.

20.
Multinomial and ordered logit models are quantitative techniques used in a wide range of disciplines. When applying these techniques, practitioners usually select a single model using either information-based criteria or pretesting. In this paper, we consider the alternative strategy of combining models rather than selecting a single one. Our choice of weights for the candidate models is based on minimizing a plug-in estimator of the asymptotic squared error risk of the model average estimator. Theoretical justifications of this model averaging strategy are provided, and a Monte Carlo study shows that the forecasts produced by the proposed strategy are often more accurate than those produced by other common model selection and model averaging strategies, especially when the regressors are only mildly to moderately correlated and the true model contains few zero coefficients. An empirical example based on credit rating data is used to illustrate the proposed method. To reduce the computational burden, we also consider a model screening step that eliminates some of the very poor models before averaging.
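A stripped-down sketch of frequentist model averaging in this spirit: weights on the probability simplex are chosen to minimise an estimated squared-error criterion, here computed on held-out data rather than via the paper's plug-in asymptotic risk estimator; the candidate predictions and all names below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def average_predictions(pred_list, weights):
    """Combine class-probability forecasts from several candidate logit models."""
    return sum(w * P for w, P in zip(weights, pred_list))

def choose_weights(pred_list, y_onehot):
    """Pick simplex weights minimising squared error of the combined probabilities."""
    M = len(pred_list)

    def risk(w):
        return np.mean((average_predictions(pred_list, w) - y_onehot) ** 2)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(risk, np.full(M, 1.0 / M), bounds=[(0, 1)] * M, constraints=cons)
    return res.x

# toy example: three candidate models' predicted probabilities over 4 rating classes
rng = np.random.default_rng(0)
preds = [rng.dirichlet(np.ones(4), size=200) for _ in range(3)]
y_onehot = np.eye(4)[rng.integers(0, 4, size=200)]
weights = choose_weights(preds, y_onehot)
combined = average_predictions(preds, weights)   # model-averaged probability forecasts
```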
