Similar literature
20 similar documents found.
1.
Forecasting approaches that exploit analogies require the grouping of analogous time series as the first modeling step; however, there has been limited research regarding the suitability of different segmentation approaches. We argue that an appropriate analytical segmentation stage should integrate and trade off different available information sources. In particular, it should consider the actual time series patterns, in addition to the variables that characterize the drivers behind the patterns observed. The simultaneous consideration of both information sources, without prior assumptions regarding the relative importance of each, leads to a multicriteria formulation of the segmentation stage. Here, we demonstrate the impact of such an adjustment to segmentation on the final forecasting accuracy of the cross-sectional multi-state Kalman filter. In particular, we study the relative merits of single and multicriteria segmentation stages for a simulated data set with a range of noise levels. We find that a multicriteria approach consistently achieves a more reliable recovery of the original clusters, and this feeds forward to an improved forecasting accuracy across short forecasting horizons. We then use a US data set on income tax liabilities to verify that this result generalizes to a real-world setting.
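A minimal, hypothetical sketch of such a multicriteria segmentation step, trading off the series' own patterns against the driver variables: the weight alpha, the feature construction and the use of KMeans are illustrative assumptions, not the paper's procedure.

```python
# Hypothetical sketch: multicriteria grouping of analogous series, combining
# (i) the series' observed patterns and (ii) driver variables, before forecasting.
# alpha, the feature choices and KMeans are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_series, n_obs, n_drivers = 60, 24, 3
Y = rng.normal(size=(n_series, n_obs)).cumsum(axis=1)   # toy time series
X = rng.normal(size=(n_series, n_drivers))              # toy driver variables

# Standardise each information source separately so neither dominates a priori.
P = StandardScaler().fit_transform(Y)   # pattern block
D = StandardScaler().fit_transform(X)   # driver block

alpha = 0.5                              # trade-off between the two criteria
Z = np.hstack([np.sqrt(alpha) * P, np.sqrt(1 - alpha) * D])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))               # cluster sizes
```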

2.
Global methods that fit a single forecasting method to all time series in a set have recently shown surprising accuracy, even when forecasting large groups of heterogeneous time series. We provide the following contributions that help understand the potential and applicability of global methods and how they relate to traditional local methods that fit a separate forecasting method to each series:
  • Global and local methods can produce the same forecasts without any assumptions about similarity of the series in the set.
  • The complexity of local methods grows with the size of the set, while it remains constant for global methods. This result supports the recent evidence and provides principles for the design of new algorithms.
  • In an extensive empirical study, we show that purposely naïve algorithms derived from these principles show outstanding accuracy. In particular, global linear models provide competitive accuracy with far fewer parameters than the simplest of local methods (see the sketch below).
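A minimal sketch of the local-versus-global contrast, assuming simple autoregressive linear models; the lag order, the simulated series and scikit-learn's LinearRegression are illustrative choices, not the paper's algorithms.

```python
# Hypothetical sketch: local vs global autoregressive linear models.
# Each local model is fit to one series; the global model pools the lag
# matrices of all series and fits a single coefficient vector.
import numpy as np
from sklearn.linear_model import LinearRegression

def lag_matrix(y, p):
    """Return (X, target): p lagged values predicting the next value."""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    return X, y[p:]

rng = np.random.default_rng(1)
series = [rng.normal(size=n).cumsum() for n in rng.integers(60, 120, size=50)]
p = 4

# Local: one model (p coefficients + intercept) per series.
local_models = [LinearRegression().fit(*lag_matrix(y, p)) for y in series]

# Global: one model for the whole set -> complexity independent of set size.
Xg = np.vstack([lag_matrix(y, p)[0] for y in series])
yg = np.concatenate([lag_matrix(y, p)[1] for y in series])
global_model = LinearRegression().fit(Xg, yg)

print(len(local_models) * (p + 1), "local parameters vs", p + 1, "global")
```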

3.
We suggest using a factor-model-based backdating procedure to construct historical Euro-area macroeconomic time series data for the pre-Euro period. We argue that this is a useful alternative to standard contemporaneous aggregation methods. The article investigates, for a number of Euro-area variables, whether forecasts based on the factor-backdated data are more precise than those obtained with standard area-wide data. A recursive pseudo-out-of-sample forecasting experiment using quarterly data is conducted. Our results suggest that some key variables (e.g. real GDP, inflation and the long-term interest rate) can indeed be forecasted more precisely with the factor-backdated data.
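A toy sketch of the backdating idea: principal-component factors are extracted from indicators observed over the full history, the factor-to-target mapping is estimated on the period where the target exists, and the target is backcast over the earlier period. The factor count and the linear mapping are simplifying assumptions, not the paper's specification.

```python
# Hypothetical sketch of factor-based backdating: principal-component factors
# are estimated from indicators observed over the full history; the mapping
# from factors to the target aggregate is learned on the recent period and
# used to backcast the missing early period.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
T, T_missing, n_indicators = 160, 60, 12          # quarters; first 60 lack the target
common = rng.normal(size=(T, 2)).cumsum(axis=0)   # latent driving factors
indicators = common @ rng.normal(size=(2, n_indicators)) + rng.normal(0, 0.5, (T, n_indicators))
target = common @ np.array([1.0, -0.5]) + rng.normal(0, 0.3, T)

# Factors from the full-history indicator panel.
factors = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(indicators))

# Learn target ~ factors on the period where the target is observed ...
reg = LinearRegression().fit(factors[T_missing:], target[T_missing:])
# ... and backdate it over the earlier period.
backdated = reg.predict(factors[:T_missing])
print("backcast RMSE:", round(np.sqrt(np.mean((backdated - target[:T_missing]) ** 2)), 3))
```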

4.
While many methods have been proposed for detecting disease outbreaks from pre-diagnostic data, their performance is usually not well understood. We argue that most existing temporal detection methods for biosurveillance can be characterized as a forecasting component coupled with a monitoring/detection component. In this paper, we describe the effect of forecast accuracy on detection performance. Quantifying this effect allows one to measure the benefits of improved forecasting and to determine when it is worth improving a forecast method's precision at the cost of robustness or simplicity. We quantify the effect of forecast accuracy on detection metrics under different scenarios and investigate the effect when standard assumptions are violated. We illustrate our results by examining performance on authentic biosurveillance data.
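A generic sketch of the forecast-plus-monitoring decomposition described above, assuming a trailing moving-average forecaster and a residual z-score alarm rule; both components are illustrative stand-ins, not the methods evaluated in the paper.

```python
# Hypothetical sketch: temporal outbreak detection as forecast + monitoring.
# The moving-average forecaster and the z-score alarm threshold are
# illustrative stand-ins for the components discussed in the paper.
import numpy as np

def forecast_ma(counts, window=7):
    """One-step-ahead trailing moving-average forecast for each day."""
    f = np.full(len(counts), np.nan)
    for t in range(window, len(counts)):
        f[t] = counts[t - window:t].mean()
    return f

def detect(counts, forecasts, threshold=3.0):
    """Flag days whose forecast residual exceeds `threshold` residual SDs."""
    valid = ~np.isnan(forecasts)
    resid = np.zeros_like(counts, dtype=float)
    resid[valid] = counts[valid] - forecasts[valid]
    sd = resid[valid].std()
    return np.where(resid > threshold * sd)[0]

rng = np.random.default_rng(2)
baseline = rng.poisson(20, size=120).astype(float)
baseline[100:110] += np.linspace(5, 30, 10)     # injected outbreak
alarms = detect(baseline, forecast_ma(baseline))
print("alarm days:", alarms)
```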

5.
We participated in the M4 competition for time series forecasting and here describe our methods for forecasting daily time series. We used an ensemble of five statistical forecasting methods and a method that we refer to as the correlator. Our retrospective analysis using the ground truth values published by the M4 organisers after the competition demonstrates that the correlator was responsible for most of our gains over the naïve constant forecasting method. We identify data leakage as one reason for its success, due partly to test data selected from different time intervals, and partly to quality issues with the original time series. We suggest that future forecasting competitions should provide actual dates for the time series so that some of these leakages could be avoided by participants.
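The abstract does not specify the correlator itself; the sketch below is one plausible reading of the idea: match the target's recent window against the other series in the set and reuse the continuation of the best-matching window as the forecast. The window length, horizon and rescaling are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a "correlator"-style forecaster: reuse the
# continuation of the most similar window found elsewhere in the dataset.
# This is one plausible reading of the idea, not the authors' implementation.
import numpy as np

def correlator_forecast(target, pool, window=28, horizon=14):
    """Forecast `target` by matching its last `window` points against `pool`."""
    probe = target[-window:]
    best_corr, best_cont = -np.inf, None
    for series in pool:
        for start in range(0, len(series) - window - horizon):
            cand = series[start:start + window]
            if cand.std() == 0 or probe.std() == 0:
                continue
            corr = np.corrcoef(probe, cand)[0, 1]
            if corr > best_corr:
                best_corr = corr
                # rescale the continuation to the target's recent level
                cont = series[start + window:start + window + horizon]
                best_cont = cont - cand[-1] + probe[-1]
    return best_cont, best_corr

rng = np.random.default_rng(3)
pool = [np.sin(np.arange(300) / 7 + s) + rng.normal(0, 0.1, 300) for s in range(20)]
target = np.sin(np.arange(200) / 7 + 0.3) + rng.normal(0, 0.1, 200)
forecast, corr = correlator_forecast(target, pool)
print(round(corr, 3), forecast[:3])
```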

6.
The boundaryless career, which challenges the assumptions of the traditional hierarchical career, has proved to be a remarkably popular and influential concept. However, we argue that it remains theoretically and empirically undeveloped, which limits its explanatory potential. We draw on New Zealand empirical research highlighting the issue of who gets studied. Focusing on women's career experience, local ethnic groups and collective cultures, we argue that these experiences represent a challenge to boundaryless career theory. Some of the theoretical assumptions on which boundaryless careers have been built are also interrogated: freedom from boundaries, individual volition and minimal influences from societal structures. We conclude that the boundaryless career story/odyssey is in danger of becoming a narrow career theory applicable only to the minority if there is no engagement with theoretical and empirical critiques.

7.
Providing forecasts for ultra-long time series plays a vital role in various activities, such as investment decisions, industrial production arrangements, and farm management. This paper develops a novel distributed forecasting framework to tackle the challenges of forecasting ultra-long time series using the industry-standard MapReduce framework. The proposed model combination approach retains the local time dependency. It utilizes a straightforward splitting across samples to facilitate distributed forecasting by combining the local estimators of time series models delivered from worker nodes and minimizing a global loss function. Instead of unrealistically assuming that the data generating process (DGP) of an ultra-long time series stays invariant, we only make assumptions on the DGP of subseries spanning shorter time periods. We investigate the performance of the proposed approach with AutoRegressive Integrated Moving Average (ARIMA) models using a real data application as well as numerical simulations. Our approach improves forecasting accuracy and computational efficiency in point forecasts and prediction intervals, especially for longer forecast horizons, compared to directly fitting the whole data with ARIMA models. Moreover, we explore some potential factors that may affect the forecasting performance of our approach.
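A stripped-down sketch of the split-estimate-combine idea, assuming the same ARIMA specification on every subseries and a plain average of the local parameter estimates (the paper instead minimizes a global loss function); the order, chunking and use of statsmodels' ARIMA filter are illustrative choices.

```python
# Hypothetical sketch of split-estimate-combine forecasting for a long series.
# Each worker would fit the same ARIMA specification to its own subseries;
# here the local parameter estimates are simply averaged (the paper instead
# minimizes a global loss), and the combined parameters drive the forecast.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
y = np.zeros(6000)
for t in range(2, 6000):                      # toy AR(2) data
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal()

order, n_chunks, horizon = (2, 0, 1), 6, 24
chunks = np.array_split(y, n_chunks)

# "Map" step: local estimation on each subseries (parallelisable).
local_params = [ARIMA(c, order=order, trend="n").fit().params for c in chunks]

# "Reduce" step: combine local estimators into one global parameter vector.
combined = np.mean(local_params, axis=0)

# Forecast by filtering the full series with the combined parameters.
result = ARIMA(y, order=order, trend="n").filter(combined)
print(result.forecast(steps=horizon)[:5])
```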

8.
We propose an automated method for obtaining weighted forecast combinations using time series features. The proposed approach involves two phases. First, we use a collection of time series to train a meta-model for assigning weights to various possible forecasting methods with the goal of minimizing the average forecasting loss obtained from a weighted forecast combination. The inputs to the meta-model are features that are extracted from each series. Then, in the second phase, we forecast new series using a weighted forecast combination, where the weights are obtained from our previously trained meta-model. Our method outperforms a simple forecast combination, as well as all of the most popular individual methods in the time series forecasting literature. The approach achieved second position in the M4 competition.
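A toy sketch of the two-phase approach, with invented features, two trivial base forecasters and a random forest meta-model whose predicted errors are turned into softmax weights; this simplifies the paper's method considerably and every modelling choice here is an assumption.

```python
# Hypothetical sketch of feature-based weighted forecast combination:
# phase 1 trains a meta-model mapping series features to per-method errors,
# phase 2 turns predicted errors into combination weights for new series.
# The two base methods, the features and the regressor are toy assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def features(y):
    d = np.diff(y)
    return [y.std(), d.std(), np.corrcoef(y[:-1], y[1:])[0, 1]]

def base_forecasts(y, h):
    return np.array([np.repeat(y[-1], h),          # naive
                     np.repeat(y.mean(), h)])      # overall mean

rng = np.random.default_rng(5)
train = [rng.normal(size=80).cumsum() if i % 2 else rng.normal(size=80)
         for i in range(200)]
h = 8

# Phase 1: per-series features and the loss each base method incurs.
F = np.array([features(y[:-h]) for y in train])
E = np.array([[np.abs(f - y[-h:]).mean() for f in base_forecasts(y[:-h], h)]
              for y in train])
meta = RandomForestRegressor(n_estimators=200, random_state=0).fit(F, E)

# Phase 2: weight the base forecasts of a new series by predicted accuracy.
new = rng.normal(size=80).cumsum()
pred_err = meta.predict([features(new)])[0]
w = np.exp(-pred_err) / np.exp(-pred_err).sum()      # softmax on -error
combo = w @ base_forecasts(new, h)
print(np.round(w, 2), combo[:3])
```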

9.
Helpman, Melitz and Rubinstein [Quarterly Journal of Economics (2008) Vol. 123, pp. 441-487] (HMR) present a rich theoretical model to study the determinants of bilateral trade flows across countries. The model is then empirically implemented through a two-stage estimation procedure. We argue that this estimation procedure is only valid under the strong distributional assumptions maintained in the article. Statistical tests using the HMR sample, however, clearly reject such assumptions. Moreover, we perform numerical experiments which show that the HMR two-stage estimator is very sensitive to departures from the assumption of homoskedasticity. These findings cast doubts on any inference drawn from the empirical implementation of the HMR model.

10.
We introduce a new panel data estimation technique for production and cost functions, the recursive thick frontier approach (RTFA). RTFA has two advantages over existing econometric frontier methods. First, technical inefficiency is allowed to be dependent on the explanatory variables of the frontier model. Secondly, RTFA does not hinge on distributional assumptions on the inefficiency component of the error term. We show by means of simulation experiments that RTFA outperforms the popular stochastic frontier approach and the 'within' ordinary least squares estimator for realistic parameterizations of a productivity model. Although RTFA's formal statistical properties are unknown, we argue, based on these simulation experiments, that RTFA is a useful complement to existing methods.

11.
This paper examines the interactions between banking sector policies, financial development and economic growth in Nepal employing recently developed time series techniques. Policies such as interest rate controls, directed credit programmes, reserve and liquidity requirements are identified and measured. A summary measure of repressive policies is constructed by the method of principal components. This measure is found to have a statistically significant influence on financial deepening, independently of the real interest rate. We argue that our findings are consistent with the hypothesis of market failure. Exogeneity tests suggest that financial deepening and economic growth are jointly determined. Thus, policies which affect financial deepening may also have an influence on economic growth.
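A minimal sketch of building a summary index of several policy indicators from the first principal component; the indicator names and data are invented for illustration and do not reproduce the paper's measurements.

```python
# Hypothetical sketch: summarising several banking-policy indicators into one
# index with the first principal component. Indicator names and data are
# invented for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_years = 40
policies = np.column_stack([
    rng.uniform(0, 1, n_years),    # e.g. interest-rate-control intensity
    rng.uniform(0, 1, n_years),    # e.g. directed-credit share
    rng.uniform(0, 1, n_years),    # e.g. reserve/liquidity requirement ratio
])

Z = StandardScaler().fit_transform(policies)
pca = PCA(n_components=1).fit(Z)
repression_index = pca.transform(Z).ravel()
print("explained variance:", round(pca.explained_variance_ratio_[0], 2))
```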

12.
We seek a closed-form series approximation of European option prices under a variety of diffusion models. The proposed convergent series are derived using the Hermite polynomial approach. Departing from the usual option pricing routine in the literature, our model assumptions have no requirements for affine dynamics or explicit characteristic functions. Moreover, convergent expansions provide a distinct insight into how and on which order the model parameters affect option prices, in contrast with small-time asymptotic expansions in the literature. With closed-form expansions, we explicitly translate model features into option prices, such as mean-reverting drift and self-exciting or skewed jumps. Numerical examples illustrate the accuracy of this approach and its advantage over alternative expansion methods.
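The paper's convergent series are model-specific; as a generic illustration of Hermite-type expansions in option pricing, the sketch below prices a European call under a Gram-Charlier correction (probabilists' Hermite polynomials) of the Gaussian log-return density and integrates the discounted payoff numerically. The parameters, the expansion order and the integration grid are assumptions, and this is not the authors' expansion.

```python
# Hypothetical illustration of Hermite-type expansions in option pricing:
# a Gram-Charlier correction of the Gaussian log-return density, followed by
# numerical integration of the discounted call payoff. This is a generic
# textbook expansion, not the paper's model-specific convergent series.
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gram_charlier_density(z, skew, exkurt):
    """Standard normal density corrected with He3/He4 terms."""
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    corr = 1 + skew / 6 * hermeval(z, [0, 0, 0, 1]) \
             + exkurt / 24 * hermeval(z, [0, 0, 0, 0, 1])
    return phi * corr

def call_price(S0, K, r, sigma, T, skew=0.0, exkurt=0.0, n_grid=20001):
    z = np.linspace(-10, 10, n_grid)
    # log-return grid; the drift makes the uncorrected Gaussian case risk neutral
    x = (r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z
    payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
    dens = gram_charlier_density(z, skew, exkurt)
    return np.exp(-r * T) * np.sum(payoff * dens) * (z[1] - z[0])

# With zero skew/kurtosis corrections this should be close to Black-Scholes.
print(round(call_price(100, 100, 0.02, 0.2, 1.0), 4))
print(round(call_price(100, 100, 0.02, 0.2, 1.0, skew=-0.5, exkurt=0.8), 4))
```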

13.
We have been publishing real-time forecasts of confirmed cases and deaths from coronavirus disease 2019 (COVID-19) since mid-March 2020 (published at www.doornik.com/COVID-19). These forecasts are short-term statistical extrapolations of past and current data. They assume that the underlying trend is informative regarding short-term developments, but without requiring other assumptions about how the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus is spreading, or whether preventative policies are effective. Thus, they are complementary to the forecasts obtained from epidemiological models. The forecasts are based on extracting trends from windows of data using machine learning and then computing the forecasts by applying some constraints to the flexible extracted trend. These methods have been applied previously to various other time series data and they performed well. They have also proved effective in the COVID-19 setting, where they provided better forecasts than some epidemiological models in the earlier stages of the pandemic.
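A minimal sketch of the general idea of extracting a trend from a trailing window of data and extrapolating it under a constraint; the window length, damping factor and log-linear trend form are illustrative assumptions, not the published method.

```python
# Hypothetical sketch of short-term trend extrapolation from a data window:
# fit a local trend to recent log daily counts and project it forward with
# damping. Window length, damping factor and the log-linear trend form are
# illustrative assumptions, not the authors' published method.
import numpy as np

def windowed_trend_forecast(daily_counts, window=14, horizon=7, damp=0.9):
    recent = np.log(np.asarray(daily_counts[-window:], dtype=float) + 1.0)
    t = np.arange(window)
    slope, intercept = np.polyfit(t, recent, 1)     # local log-linear trend
    level = intercept + slope * (window - 1)        # fitted level today
    steps = np.arange(1, horizon + 1)
    damped = slope * np.cumsum(damp ** steps)       # constrained (damped) growth
    return np.exp(level + damped) - 1.0

counts = [120, 150, 180, 230, 260, 310, 380, 420, 500, 560,
          640, 700, 820, 900, 1010, 1100, 1230, 1350]
print(np.round(windowed_trend_forecast(counts), 1))
```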

14.
Small Area Estimation - New Developments and Directions
The purpose of this paper is to provide a critical review of the main advances in small area estimation (SAE) methods in recent years. We also discuss some of the earlier developments, which serve as a necessary background for the new studies. The review focuses on model dependent methods with special emphasis on point prediction of the target area quantities, and mean square error assessments. The new models considered are models used for discrete measurements, time series models and models that arise under informative sampling. The possible gains from modeling the correlations among small area random effects used to represent the unexplained variation of the small area target quantities are examined. For review and appraisal of the earlier methods used for SAE, see Ghosh & Rao (1994).
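As one concrete instance of the model-dependent point prediction the review covers, the sketch below computes the classic Fay-Herriot area-level estimator, which shrinks direct survey estimates toward a regression synthetic estimate; the crude moment estimator of the model variance and the toy data are simplifying assumptions.

```python
# Hypothetical sketch of a classic model-dependent SAE point predictor:
# the Fay-Herriot area-level EBLUP, shrinking direct survey estimates toward
# a regression synthetic estimate. The crude moment estimator of the model
# variance and the toy data are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(7)
m = 30                                                   # number of small areas
X = np.column_stack([np.ones(m), rng.normal(size=m)])    # area covariates
D = rng.uniform(0.5, 2.0, m)                             # known sampling variances
true_beta, A_true = np.array([1.0, 2.0]), 0.8
theta = X @ true_beta + rng.normal(0, np.sqrt(A_true), m)
y = theta + rng.normal(0, np.sqrt(D))                    # direct (survey) estimates

# Crude moment estimate of the model variance A.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols
A_hat = max(0.0, np.mean(resid ** 2 - D))

# GLS regression coefficients and shrinkage weights.
W = 1.0 / (A_hat + D)
beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
gamma = A_hat / (A_hat + D)
eblup = gamma * y + (1 - gamma) * (X @ beta)

print("mean abs error, direct:", round(np.mean(np.abs(y - theta)), 3))
print("mean abs error, EBLUP :", round(np.mean(np.abs(eblup - theta)), 3))
```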

15.
The concept of Granger-causality is formulated for a finite-dimensional multiple time series. Special attention is given to causality patterns in autoregressive series, and it is shown how these patterns can be tested under quite general assumptions using a χ2 statistic. The power of the test is discussed, and it is shown that the χ2 statistic results from a Lagrange multiplier test in the Gaussian case. The causality test is tried both on artificial data and some economic time series. Finally, we consider the problem of constrained estimation in models with a known causality structure.
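A short illustration of testing Granger non-causality in a bivariate autoregressive series with the chi-squared version of the test as implemented in statsmodels; the data-generating process and lag order are invented for the example.

```python
# Illustration of a Granger-causality test on a bivariate autoregressive
# series; the chi-squared version of the test is reported. Data and the lag
# order are invented for the example.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(8)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # x drives y

# Tests whether the second column Granger-causes the first column.
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=2, verbose=False)
chi2, pval, _ = res[2][0]["ssr_chi2test"]
print(f"chi2 = {chi2:.1f}, p-value = {pval:.4f}")
```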

16.
We prove that the undetermined Taylor series coefficients of local approximations to the policy function of arbitrary order in a wide class of discrete time dynamic stochastic general equilibrium (DSGE) models are solvable by standard DSGE perturbation methods under regularity and saddle point stability assumptions on first order approximations. Extending the approach to nonstationary models, we provide necessary and sufficient conditions for solvability, as well as an example in the neoclassical growth model where solvability fails. Finally, we eliminate the assumption of solvability needed for the local existence theorem of perturbation solutions, complete the proof that the policy function is invariant to first order changes in risk, and attribute the loss of numerical accuracy in progressively higher order terms to the compounding of errors from the first order transition matrix.

17.
Stochastic frontier models are often employed to estimate fishing vessel technical efficiency. Under certain assumptions, these models yield efficiency measures that are means of truncated normal distributions. We argue that these measures are flawed, and use the results of Horrace (2005) to estimate efficiency for 39 vessels in the Northeast Atlantic herring fleet, based on each vessel's probability of being efficient. We develop a subset selection technique to identify groups of efficient vessels at pre-specified probability levels. When homogeneous production is assumed, inferential inconsistencies exist between our methods and the methods of ranking the means of the technical inefficiency distributions for each vessel. When production is allowed to be heterogeneous, these inconsistencies are mitigated.

18.
Slowly moving fundamental time series can be mistaken for time trends. Use of such series can increase the credibility of medium-term and long-term forecasts. This paper introduces a new slowly moving fundamental time series, the age distribution of the US population, to explain trends in real US interest rates over the past 35 years. We argue that lifecycle consumption patterns at the individual level can influence aggregate saving and real interest rates. Empirical evidence is presented that supports the relationship between the age distribution and expected real interest rates. Simulations of future interest rates are developed.

19.
Choosing the right time to release a new movie may be the difference between success and failure. Prior research states that the "bigger" a blockbuster is, the more likely it is (and should be) released during a high-demand week. We present a theoretical framework which is consistent with this observation but adds a rather surprising theoretical prediction: among non-blockbuster (i.e., niche) movies, everything else constant, the greater a movie's appeal, the more likely it is released during a low-demand week. In other words, the relation between movie appeal and high-demand-week release is U-shaped: it decreases at low levels of overall appeal (niche movies) and increases at high levels of overall appeal (blockbusters). We provide intuition for this novel result and argue that it is robust to a number of changes in functional form assumptions. We then show that the theoretical results are consistent with the evidence from an extensive data set on international releases. Specifically, we run a series of movie-country-pair regressions with high-demand-week release as a dependent variable and exogenous shocks to the movie's appeal as an explanatory variable. As predicted by theory, the regression coefficients have opposite signs for the blockbuster and non-blockbuster cases.

20.
This article explores the epistemological roots and paradigmatic boundaries of research into employee trust, a growing field in human resource management. Drawing on Burrell and Morgan's well-known sociological paradigms and their epistemological foundations, we identify the dominant approaches to employee trust research to examine its strengths and limitations. Our review of the literature on employee trust revealed that the majority of the most cited papers were written from a psychological perspective, characterised by positivistic methodologies, variance theory explanations and quantitative data collection methods. We also found that most of the studies can be located in the functionalist paradigm, and while accepting that functionalism and psychological positivism have their merits, we argue that research in these traditions sometimes constrains our understanding of employees' trust in their organisations. We conclude that trust researchers would benefit from a better understanding of the ontological, epistemological and axiological assumptions underlying HRM research and should embrace greater epistemic reflexivity.

