Similar articles
20 similar articles found (search time: 0 ms)
1.
The relative performances of forecasting models change over time. This empirical observation raises two questions. First, is the relative performance itself predictable? Second, if so, can it be exploited in order to improve the forecast accuracy? We address these questions by evaluating the predictive abilities of a wide range of economic variables for two key US macroeconomic aggregates, namely industrial production and inflation, relative to simple benchmarks. We find that business cycle indicators, financial conditions, uncertainty and measures of past relative performances are generally useful for explaining the models’ relative forecasting performances. In addition, we conduct a pseudo-real-time forecasting exercise, where we use the information about the conditional performance for model selection and model averaging. The newly proposed strategies deliver sizable improvements over competitive benchmark models and commonly used combination schemes. The gains are larger when model selection and averaging are based on both financial conditions and past performances measured at the forecast origin date.
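The conditional model-selection idea described above can be sketched as follows: at each forecast origin, pick the model with the lower recent rolling squared error. This is a minimal synthetic illustration, not the authors' implementation; all series and the window length are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two synthetic forecasters whose relative accuracy changes over time:
# model 2 is better in the first half of the sample, model 1 thereafter.
T, window = 300, 12
y = rng.standard_normal(T)
f1 = y + rng.normal(0, 0.5, T)
f2 = y + rng.normal(0, 1.0, T)
f2[:150] = y[:150] + rng.normal(0, 0.2, 150)

err1, err2 = (f1 - y) ** 2, (f2 - y) ** 2

# At each origin t, select the model with the lower rolling past MSE.
sel_err = []
for t in range(window, T):
    best = f1 if err1[t - window:t].mean() < err2[t - window:t].mean() else f2
    sel_err.append((best[t] - y[t]) ** 2)

print(f"MSE selected: {np.mean(sel_err):.3f}, "
      f"model 1: {err1[window:].mean():.3f}, model 2: {err2[window:].mean():.3f}")
```

Because the selection rule tracks whichever model is currently more accurate, it should beat the unconditional use of the worse model.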

2.
This study examines the role of financial ratios in predicting companies’ default risk using the quantile hazard model (QHM) approach and compares its results to the discrete hazard model (DHM). We adopt the LASSO method to select essential predictors among the variables mentioned in the literature. We demonstrate the advantage of the proposed QHM by showing that the effects of financial ratios differ across quantile levels. While DHM confirms only the adverse effects of “stock return volatilities” and “total liabilities” and the positive effects of “stock price”, “stock excess return”, and “profitability” on businesses, at high quantile levels QHM additionally identifies “cash and short-term investment to total assets”, “market capitalization”, and “current liabilities ratio” as factors that influence default. More interestingly, “cash and short-term investment to total assets” and “market capitalization” switch signs at high quantile levels, showing that their influence differs across companies with different risk levels. We also find evidence that default probability differs across industrial sectors. Lastly, the proposed QHM empirically demonstrates improved out-of-sample forecasting performance.
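The LASSO screening step used above can be sketched with an L1-penalised logistic regression. This is an illustrative stand-in on synthetic data, not the study's actual specification; the number of ratios, the signal structure, and the penalty strength are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic firm sample: 8 candidate financial ratios, of which only the
# first two actually drive default (all names and sizes hypothetical).
n, p = 2000, 8
X = rng.standard_normal((n, p))
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# L1 (LASSO-type) penalised logistic regression as the screening step:
# coefficients of irrelevant ratios are shrunk to exactly zero.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)

selected = np.flatnonzero(lasso.coef_[0])
print("selected predictors:", selected)
```

The surviving columns would then enter the (quantile) hazard model as the essential predictors.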

3.
This study compared the relative influences of organizational socialization and demographic variables on job satisfaction and organizational commitment. Organizational variables were assessed by asking 193 Chinese employees in Hong Kong to evaluate socialization within their companies, namely: (1) training received; (2) understanding of the organization; (3) co-worker support; and (4) future prospects within their companies. Dependent variables were standard measures of (affective, continuance and normative) commitment and of satisfaction (with co-workers, pay, promotion, supervisors and the work). Results revealed higher correlations between the socialization measures and job satisfaction and commitment than between the demographic measures and the dependent variables. Although a few demographic measures had some predictive power, the regression analyses confirmed that the socialization variables were consistently stronger predictors of both satisfaction and commitment. Strategic implications for human resource management are discussed.

4.
The maxims of normative ethics are often in conflict. Thus business practitioners facing ethical questions often find themselves operating in the area of relative ethics. There are arguably two dimensions to this area. One lies on a spectrum from weak companies in highly competitive industries to strong companies in protected industries. The second dimension places the would-be ethical manager in awkward situations imposed from either the hierarchy or from corrupt markets. This article develops an analysis model to portray this relative ethics dilemma. Then, an argument is made that the more the individual manager practices good ethics, the higher the level of ethics the individual is able to maintain. This article proposes an adaptation of the 1950s feedback model of group dynamics known as the “Johari Window” to show this improvement in ethical behavior.

5.
In the context of predicting the term structure of interest rates, we explore the marginal predictive content of real-time macroeconomic diffusion indexes extracted from a “data rich” real-time data set, when used in dynamic Nelson–Siegel (NS) models of the variety discussed in Svensson (NBER technical report, 1994; NSS) and Diebold and Li (Journal of Econometrics, 2006, 130, 337–364; DNS). Our diffusion indexes are constructed using principal component analysis with both targeted and untargeted predictors, with targeting done using the lasso and elastic net. Our findings can be summarized as follows. First, the marginal predictive content of real-time diffusion indexes is significant for the preponderance of the individual models that we examine. The exception to this finding is the post “Great Recession” period. Second, forecast combinations that include only yield variables result in our most accurate predictions, for most sample periods and maturities. In this case, diffusion indexes do not have marginal predictive content for yields and do not seem to reflect unspanned risks. This points to the continuing usefulness of DNS and NSS models that are purely yield driven. Finally, we find that the use of fully revised macroeconomic data may have an important confounding effect upon results obtained when forecasting yields, as prior research has indicated that diffusion indexes are often useful for predicting yields when constructed using fully revised data, regardless of whether forecast combination is used, or not. Nevertheless, our findings also underscore the potential importance of using machine learning, data reduction, and shrinkage methods in contexts such as term structure modeling.  相似文献   
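The targeted diffusion-index construction (lasso targeting followed by principal components) can be sketched on a synthetic factor panel. This is a minimal illustration under assumed data, not the paper's real-time data set; the panel dimensions and penalty are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical "data rich" macro panel: 60 series driven by one latent factor.
T, N = 200, 60
f = rng.standard_normal(T)
loadings = rng.standard_normal(N)
X = np.outer(f, loadings) + 0.5 * rng.standard_normal((T, N))
y = f + 0.3 * rng.standard_normal(T)          # the variable to be forecast

# Targeting step: keep only the predictors the lasso deems relevant for y.
sel = np.flatnonzero(Lasso(alpha=0.05).fit(X, y).coef_)

# Diffusion index: first principal component of the targeted panel.
index = PCA(n_components=1).fit_transform(X[:, sel]).ravel()

# The extracted index should track the latent factor closely (up to sign).
corr = abs(np.corrcoef(index, f)[0, 1])
print(f"|corr(index, factor)| = {corr:.2f}")
```

The resulting index would then enter the dynamic Nelson-Siegel forecasting equations as an additional predictor.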

6.
In this paper, we evaluate the ability of Markov-switching multifractal (MSM), implied, GARCH, and historical volatilities to predict realized volatility for both the S&P 100 index and equity options. Some important findings are as follows. First, we find that MSM and GARCH volatilities predict realized volatility better than implied and historical volatilities for both the index and equity options. Second, equity option volatility is more difficult to forecast than index option volatility. Third, both index and equity option volatilities can be forecast more accurately during non-crisis periods than during global financial crisis periods. Fourth, equity option volatility exhibits distinct patterns conditional on various equity and option characteristics, and its predictability by MSM and implied volatilities depends on these characteristics. Finally, we find that MSM volatility outperforms implied volatility in predicting equity option volatility conditional on various equity and option characteristics.

7.
In this paper we propose a composite indicator for real-time recession forecasting based on alternative dynamic probit models. For this purpose, we use a large set of monthly macroeconomic and financial leading indicators from the German and US economies. Alternative dynamic probit regressions are specified through automated general-to-specific and specific-to-general lag selection procedures on the basis of slightly different initial sets. The resulting recession probability forecasts are then combined in order to decrease the volatility of the forecast errors and increase their forecasting accuracy. This procedure features not only good in-sample forecast statistics, but also good out-of-sample performance, as illustrated by a real-time evaluation exercise.

8.
Event history calendars (EHC) have proven to be a powerful tool for collecting retrospective autobiographical life course data. One problem is that they are only standardized to a limited extent, which restricts their applicability in large-scale surveys. However, in such surveys a modularized retrospective CATI design can be combined with an EHC that is integrated directly into the interview and used as a data revision module, allowing insights from cognitive psychology to be applied. The data revision module stimulates the respondent’s memory retrieval by detecting both temporal inconsistencies, such as gaps, and overlapping or parallel events. This approach was implemented in the IAB-ALWA study (Working and Learning in a Changing World), a large-scale representative telephone survey involving 10,000 respondents. By comparing the uncorrected data with the final data after revision, we can investigate to what extent the application of this data revision module improves data quality or, more precisely, the time consistency and dating accuracy of individual reports.
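The consistency check at the heart of such a data revision module, detecting gaps and overlapping episodes in a reported life course, can be sketched in a few lines. The episode representation below (start month, end month) is a hypothetical simplification of what a CATI instrument would store.

```python
# Scan a respondent's episode list for gaps and overlaps that should
# trigger a follow-up probe during the interview.
def find_inconsistencies(episodes):
    """episodes: list of (start, end) in months, with end >= start."""
    issues = []
    eps = sorted(episodes)
    for (s1, e1), (s2, e2) in zip(eps, eps[1:]):
        if s2 > e1 + 1:                                  # uncovered months
            issues.append(("gap", e1 + 1, s2 - 1))
        elif s2 <= e1:                                   # parallel episodes
            issues.append(("overlap", s2, min(e1, e2)))
    return issues

# Employment history with a one-year gap and a parallel spell.
history = [(1, 24), (37, 60), (50, 72)]
print(find_inconsistencies(history))   # → [('gap', 25, 36), ('overlap', 50, 60)]
```

In a real instrument each flagged span would be read back to the respondent as a targeted memory probe.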

9.
Oscar Fisch, Socio, 1985, 19(3): 159-165
The disjointedness of the planning sequence of trip generation and trip distribution is the main subject of this paper. We approach this disjointedness problem by analyzing the central properties of the independently discovered balancing methods of trip-distribution models in relation to two critical issues. First, in the current planning sequence, it is usual to start by forecasting how many trips will begin (production) and end (attraction) in each zone. This forecasting is done by estimating the production of trips independently from the attraction of trips and vice versa, and then forcing some mechanical balance of total trips being generated in the urban system. Second, in the same planning sequence, the output of this trip-generation process is the input to the next one (trip-distribution process): forecasting the matrix that describes the number of trips between each pair of zones. This forecasting is done in general by updating an equivalent obsolete matrix that was obtained from an origin-and-destination survey. The updating is generally accomplished by adjusting the outdated matrix with the so-called balancing factors. It is the purpose of this paper to support the balancing-factors approach in forecasting trip-distribution matrices with a methodological interpretation and to explain behaviorally the balancing factors; and in the process, to show the spatial interaction between trip production and attraction and the emerging need for simultaneous specification and estimation of the whole trip-generation process.

10.
This paper presents a dynamic portfolio credit model following the regulatory framework, using macroeconomic and latent risk factors to predict the aggregate loan portfolio loss in a banking system. The latent risk factors have three levels: global across the entire banking system, parent-sectoral for the intermediate loan sectors and sector-specific for the individual loan sectors. The aggregate credit loss distribution of the banking system over a risk horizon is generated by Monte Carlo simulation, and a quantile estimator is used to produce the aggregate risk measure and economic capital. The risk contributions of the individual sectors and risk factors are measured by combining the Hoeffding decomposition with the Euler capital allocation rule. For the U.S. banking system, we find that the real GDP growth rate, the global and sector-wide frailty risk factors and their spillovers significantly affect loan defaults, and the impacts of the frailty factors are not only economy-wide but also sector-specific. We also find that the frailty risk factors make more significant risk contributions to the aggregate portfolio risk than the macroeconomic factors, while the macroeconomic factors help to improve the accuracy and efficiency of the credit risk forecasts.
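The simulation-and-quantile step, generating a loss distribution from a common risk factor and reading off a tail quantile for economic capital, can be sketched with a one-factor version. All parameters below are illustrative, and the single global factor stands in for the paper's multi-level factor structure.

```python
import numpy as np

rng = np.random.default_rng(2)

# One-factor Monte Carlo sketch: a systematic factor plus idiosyncratic
# noise drives defaults across a homogeneous loan portfolio.
n_sims, n_loans = 10_000, 400
pd_, rho, lgd = 0.02, 0.2, 0.45       # default prob., correlation, loss given default
threshold = -2.054                     # approx. standard normal 2% quantile

z = rng.standard_normal(n_sims)                       # systematic factor draws
eps = rng.standard_normal((n_sims, n_loans))          # idiosyncratic shocks
assets = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
losses = lgd * (assets < threshold).mean(axis=1)      # portfolio loss rate per path

# Economic capital: tail quantile of the simulated losses minus expected loss.
var_999 = np.quantile(losses, 0.999)
ecap = var_999 - losses.mean()
print(f"99.9% VaR = {var_999:.4f}, economic capital = {ecap:.4f}")
```

The paper's multi-level frailty factors would replace the single `z`, and the Hoeffding/Euler decomposition would then attribute `ecap` to sectors and factors.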

11.
This paper proposes a hybrid ensemble forecasting methodology that integrates empirical mode decomposition (EMD), long short-term memory (LSTM) and extreme learning machine (ELM) for monthly biofuel (a typical agriculture-related energy) production, based on the decomposition-reconstruction-ensemble principle. The proposed methodology involves four main steps: data decomposition via EMD, component reconstruction via a fine-to-coarse (FTC) method, individual prediction via LSTM and ELM algorithms, and ensemble prediction via a simple addition (ADD) method. For illustration and verification, monthly biofuel production data for the USA are used as the sample data, and the empirical results indicate that the proposed hybrid ensemble forecasting model statistically outperforms all considered benchmark models in terms of forecasting accuracy. This suggests that the proposed EMD-LSTM-ELM methodology, based on the decomposition-reconstruction-ensemble principle, is a competitive model for predicting biofuel production.

12.
Economic Systems, 2022, 46(2): 100979
This paper examines banking crises in a large sample of countries over a forty-year period. A multinomial modeling approach is applied to panel data in order to track and capture end-to-end cyclical crisis formations, which enhances the binary focus of previous research studies. Several macroeconomic and banking sector variables are shown to be emblematic of leading indicators across the idiosyncratic stages of a banking crisis. Gross domestic product is an early warning signal across all phases, and a concomitant deterioration in consumption spending and fixed capital formation, preceded by a credit boom, signal a banking crisis to come. Currency depreciation exemplifies ensuing financial distress, reinforced by developmental constructs and regional integration. Lower real interest rates, increasing imports, and rising deposits are frequently harbingers of a recovery. Period effects underscore the dynamic evolution of common contemporaneous precursors over time. Premised on pursuing cyclical movements through multiple outcomes, our findings on forecasting performance suggest enhanced predictive power. Several multinomial logistic models generate higher predictive accuracy in contrast to probit models. Compared to machine learning methods (which encompass artificial neural networks, gradient boost, k-nearest neighbors, and random forests methods), a multinomial logistic approach outperforms during pre-crisis periods and when crisis severity is modeled, whereas gradient boost has the highest predictive accuracy across numerous versions of the multinomial model. As investors and policy makers continue to confront banking crises, leading to high economic and social costs, enhanced multinomial modeling methods make a valuable contribution to improved forecasting performance.
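The multinomial step, modeling several crisis phases instead of a binary crisis/no-crisis outcome, can be sketched with a multinomial logistic regression on toy data. The two indicators and the three regimes below are hypothetical stand-ins for the paper's variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Toy panel: two indicators (labeled "credit growth" and "GDP growth" for
# illustration) and three regimes: 0 = tranquil, 1 = pre-crisis, 2 = crisis.
n = 3000
credit, gdp = rng.standard_normal(n), rng.standard_normal(n)
score = 1.5 * credit - 1.5 * gdp
state = np.digitize(score + 0.5 * rng.standard_normal(n), [-1.0, 1.5])

X = np.column_stack([credit, gdp])
# With three classes, the lbfgs solver fits a multinomial logit.
model = LogisticRegression(max_iter=1000).fit(X, state)

# A credit boom with collapsing growth should raise the crisis probability.
probs = model.predict_proba([[2.0, -2.0]])[0]
print("P(tranquil, pre-crisis, crisis) =", probs.round(2))
```

A probit or binary logit would collapse the middle regime; the multinomial version keeps the pre-crisis phase as a separately predicted outcome.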

13.
This study concerns list augmentation in direct marketing. List augmentation is a special case of missing data imputation. We review previous work on the mixed outcome factor model and apply it for the purpose of list augmentation. The model deals with both discrete and continuous variables and allows us to augment the data for all subjects in a company's transaction database with soft data collected in a survey among a sample of those subjects. We propose a bootstrap-based imputation approach, which is appealing to use in combination with the factor model, since it allows one to include estimation uncertainty in the imputation procedure in a simple, yet adequate manner. We provide an empirical case study applying the approach to the transaction database of a bank.

14.
We use U.S. export and import price indexes to construct a relative purchasing power parity-based model of the nominal U.S. Dollar Index. The model is successful in predicting the future direction of change in the U.S. Dollar Index over a six-month period up to 68% of the time. Finally, the model, in combination with a simple linear, recursive technique, outperforms the random walk by a statistically significant margin in predicting the value of the U.S. Dollar Index at horizons of less than four months for the period from 1996 to 2005. The paper provides important implications for investors who are interested in the direction of change in the Dollar’s value, forecasting the level of the U.S. Dollar Index, as well as the extent of over- and undervaluation of the U.S. Dollar, in general.

15.
Longitudinal data sets with the structure T (time points) × N (subjects) are often incomplete because of data missing for certain subjects at certain time points. The EM algorithm is applied in conjunction with the Kalman smoother for computing maximum likelihood estimates of longitudinal LISREL models from varying missing data patterns. The iterative procedure uses the LISREL program in the M-step and the Kalman smoother in the E-step. The application of the method is illustrated by simulating missing data on a data set from educational research.

16.
We estimate a Markov-switching dynamic factor model with three states based on six leading business cycle indicators for Germany, preselected from a broader set using the elastic net soft-thresholding rule. The three states represent expansions, normal recessions and severe recessions. We show that a two-state model is not sensitive enough to detect relatively mild recessions reliably when the Great Recession of 2008/2009 is included in the sample. Adding a third state helps to distinguish normal and severe recessions clearly, so that the model identifies all business cycle turning points in our sample reliably. In a real-time exercise, the model detects recessions in a timely manner. Combining the estimated factor and the recession probabilities with a simple GDP forecasting model yields an accurate nowcast for the steepest decline in GDP in 2009Q1, and a correct prediction of the timing of the Great Recession and its recovery one quarter in advance.
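The filtering step of such a three-state model, updating the probabilities of expansion, normal recession, and severe recession as new factor observations arrive, can be sketched with a Hamilton-style filter. The means, variance, and transition matrix below are illustrative placeholders, not the paper's estimates.

```python
import numpy as np
from scipy.stats import norm

# Three-state Markov-switching filter with known, illustrative parameters:
# state 0 = expansion, 1 = normal recession, 2 = severe recession.
mu = np.array([0.5, -0.5, -2.0])      # state-dependent mean of the factor
sigma = 0.5
P = np.array([[0.95, 0.04, 0.01],     # transition matrix (rows sum to 1)
              [0.10, 0.85, 0.05],
              [0.05, 0.15, 0.80]])

def hamilton_filter(y):
    prob = np.full(3, 1 / 3)                   # initial state distribution
    path = []
    for obs in y:
        prior = P.T @ prob                     # predict the next state
        lik = norm.pdf(obs, mu, sigma)         # state-conditional likelihoods
        prob = prior * lik
        prob /= prob.sum()                     # Bayesian update
        path.append(prob.copy())
    return np.array(path)

# A stylized run: expansion readings, then a deep slump.
y = np.array([0.6, 0.4, 0.5, -1.8, -2.2, -1.9])
probs = hamilton_filter(y)
print("final state probabilities:", probs[-1].round(3))
```

In the full model the parameters are estimated jointly with the factor; here they are fixed so the recursion itself is visible.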

17.
This article develops a new portfolio selection method using Bayesian theory. The proposed method accounts for the uncertainties in estimation parameters and the model specification itself, both of which are ignored by the standard mean-variance method. The critical issue in constructing an appropriate predictive distribution for asset returns is evaluating the goodness of individual factors and models. This problem is investigated from a statistical point of view; we propose using the Bayesian predictive information criterion. Two Bayesian methods and the standard mean-variance method are compared through Monte Carlo simulations and in a real financial data set. The Bayesian methods perform very well compared to the standard mean-variance method.
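The standard mean-variance baseline the article compares against reduces to one linear solve: with risk aversion gamma, the optimal weights are w = (1/gamma) Sigma^{-1} mu. The inputs below are illustrative numbers; the point of the Bayesian extension is precisely that these plug-in estimates ignore estimation and model uncertainty.

```python
import numpy as np

# Plug-in mean-variance weights: w = (1/gamma) * Sigma^{-1} * mu.
mu = np.array([0.05, 0.07, 0.03])               # illustrative expected returns
Sigma = np.array([[0.04, 0.01, 0.00],           # illustrative covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.01]])
gamma = 5.0

w = np.linalg.solve(Sigma, mu) / gamma          # unconstrained optimum
w_norm = w / w.sum()                            # fully-invested version
print("weights:", w_norm.round(3))
```

A Bayesian variant would replace `mu` and `Sigma` with moments of a predictive return distribution that integrates over parameter and model uncertainty.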

18.
Although Nationalism, Ethnocentrism, and Individualism in Flanders have been the subject of several studies before, a longitudinal analysis has not been performed on all three concepts simultaneously nor have their relationships and the direction of their relationships been studied in continuous time. In this study we performed a continuous-time state-space analysis on panel data collected from 1274 subjects, in the years 1991, 1995 and 1999. The LISREL program is used for estimating the approximate discrete model (ADM), and for comparison, also the exact discrete model (EDM) is estimated by means of the Mx program. Details of continuous time modeling, especially the EDM and ADM, are dealt with. Individualism and Ethnocentrism turn out to be connected in a moderately strong feedback relationship with the effect from Individualism towards Ethnocentrism somewhat stronger than that in the opposite direction. Both Individualism and Ethnocentrism have small effects on Nationalism. The autoregression functions, cross-lagged effect functions, and mean predictions are shown.

19.
Criminal incident prediction using a point-pattern-based density model
Law enforcement agencies need crime forecasts to support their tactical operations; namely, predicted crime locations for next week based on data from the previous week. Current practice simply assumes that spatial clusters of crimes or “hot spots” observed in the previous week will persist to the next week. This paper introduces a multivariate prediction model for hot spots that relates the features in an area to the predicted occurrence of crimes through the preference structure of criminals. We use a point-pattern-based transition density model for space–time event prediction that relies on criminal preference discovery as observed in the features chosen for past crimes. The resultant model outperforms the current practices, as demonstrated statistically by an application to breaking and entering incidents in Richmond, VA.
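The hot-spot persistence baseline that the paper improves upon can be sketched with a kernel density estimate over last week's incident coordinates. The coordinates below are simulated; the paper's transition density model additionally conditions on area features, which this sketch omits.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Last week's incidents: one spatial cluster plus diffuse background noise.
hot = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(80, 2))
bg = rng.uniform(0, 5, size=(20, 2))
incidents = np.vstack([hot, bg])

# Density surface over the study area from the observed point pattern.
kde = gaussian_kde(incidents.T)

# Next week's patrol priority: density at the cluster vs a remote cell.
d_cluster = kde([[2.0], [2.0]])[0]
d_remote = kde([[4.5], [0.5]])[0]
print(f"density at cluster: {d_cluster:.3f}, remote: {d_remote:.3f}")
```

Ranking grid cells by this density reproduces the "hot spots persist" assumption; the paper's model instead predicts where the density will move.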

20.
This paper constructs hybrid forecasts that combine forecasts from vector autoregressive (VAR) model(s) with both short- and long-term expectations from surveys. Specifically, we use the relative entropy to tilt one-step-ahead and long-horizon VAR forecasts to match the nowcasts and long-horizon forecasts from the Survey of Professional Forecasters. We consider a variety of VAR models, ranging from simple fixed-parameter to time-varying parameters. The results across models indicate meaningful gains in multi-horizon forecast accuracy relative to model forecasts that do not incorporate long-term survey conditions. Accuracy improvements are achieved for a range of variables, including those that are not tilted directly but are affected through spillover effects from tilted variables. The accuracy gains for hybrid inflation forecasts from simple VARs are substantial, statistically significant, and competitive to time-varying VARs, univariate benchmarks, and survey forecasts. We view our proposal as an indirect approach to accommodating structural change and moving end points.
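The relative-entropy tilting step, reweighting model forecast draws so their mean matches a survey figure while staying as close as possible to the original distribution in the Kullback-Leibler sense, can be sketched for a single moment condition. The draws and the survey target below are simulated placeholders, not the paper's VAR output or SPF data.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)

# Model-based forecast draws (e.g. simulated inflation paths) and a
# survey nowcast the tilted distribution must match in expectation.
draws = rng.normal(2.0, 1.0, size=20_000)
survey_mean = 2.5

def tilted_mean(lam):
    # Exponential tilting: weights proportional to exp(lam * draw).
    w = np.exp(lam * draws)
    w /= w.sum()
    return w @ draws

# Solve for the tilting parameter that hits the survey mean exactly.
lam = brentq(lambda l: tilted_mean(l) - survey_mean, -5.0, 5.0)
w = np.exp(lam * draws); w /= w.sum()
print(f"tilted mean = {w @ draws:.3f} (lambda = {lam:.3f})")
```

With several conditions (nowcast plus long-horizon forecasts), `lam` becomes a vector and the same idea is solved as a small convex optimization.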


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号