Similar Documents
20 similar documents found (search time: 31 ms)
1.
We create a hedonic price model for house prices for six geographical submarkets in the Netherlands. Our model is based on a recent data-mining technique called boosting. Boosting is an ensemble technique that combines multiple models, in our case decision trees, into a combined prediction. Boosting makes it possible to capture complex nonlinear relationships and interaction effects between input variables. We report mean relative errors and mean absolute errors for all regions and compare our models with a standard linear regression approach. Our model improves prediction performance by up to 39% compared with linear regression and by up to 20% compared with a log-linear regression model. Next, we interpret the boosted models: we determine the most influential characteristics and graphically depict the relationship between the most important input variables and the house price. We find the size of the house to be the most important input for all but one region, and find some interesting nonlinear relationships between inputs and price. Finally, we construct hedonic price indices and compare these with the mean and median index, finding that these indices differ notably in the urban regions of Amsterdam and Rotterdam. Copyright © 2008 John Wiley & Sons, Ltd.
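The boosting idea in this abstract can be sketched in a few lines: repeatedly fit a simple base learner to the residuals of the current ensemble and add it with a small learning rate. This is a minimal illustration using one-split decision stumps and invented size/price numbers; the authors' model uses full decision trees on Dutch housing data.

```python
# Minimal gradient-boosting sketch with decision stumps fitted to residuals.
# The size/price numbers are hypothetical, not the paper's data.

def fit_stump(x, r):
    """Find the single split of x that best fits residuals r (least squares)."""
    best = None
    for split in sorted(set(x))[:-1]:
        left = [ri for xi, ri in zip(x, r) if xi <= split]
        right = [ri for xi, ri in zip(x, r) if xi > split]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((ri - lm) ** 2 for ri in left)
               + sum((ri - rm) ** 2 for ri in right))
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, split, lm, rm = best
    return lambda xi: lm if xi <= split else rm

def boost(x, y, rounds=100, lr=0.1):
    """Build an ensemble predictor by gradient boosting on squared loss."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, resid)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + lr * sum(s(xi) for s in stumps)

size = [50, 60, 70, 80, 90, 100, 110, 120]        # m^2 (hypothetical)
price = [150, 170, 200, 230, 250, 290, 310, 340]  # k-euro (hypothetical)
model = boost(size, price)
```

With each round the ensemble's training error shrinks, and the fitted function can pick up the nonlinear size-price relationships the abstract describes without any functional-form assumption.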

2.
The repeat sales model is commonly used to construct reliable house price indices in absence of individual characteristics of the real estate. Several adaptations of the original model by Bailey et al. (J Am Stat Assoc 58:933–942, 1963) are proposed in literature. They all have in common using a dummy variable approach for measuring price indices. In order to reduce the impact of transaction price noise on the estimates of price indices, Goetzmann (J Real Estate Finance Econ 5:5–53, 1992) used a random walk with drift process for the log price levels instead of the dummy variable approach. The model that is proposed in this article can be interpreted as a generalization of the Goetzmann methodology. We replace the random walk with drift model by a structural time series model, in particular by a local linear trend model in which both the level and the drift parameter can vary over time. An additional variable—the reciprocal of the time between sales—is included in the repeat sales model to deal with the effect of the time between sales on the estimated returns. This approach is robust and can be applied in thin markets where relatively few selling prices are available. Contrary to the dummy variable approach, the structural time series model enables prediction of the price level based on preceding and subsequent information, implying that even for particular time periods where no observations are available an estimate of the price level can be provided. Conditional on the variance parameters, an estimate of the price level can be obtained by applying regression in the general linear model with a prior for the price level, generated by the local linear trend model. The variance parameters can be estimated by maximum likelihood. The model is applied to several subsets of selling prices in the Netherlands. Results are compared to standard repeat sales models, including the Goetzmann model.
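The local linear trend model mentioned above can be sketched as a two-state Kalman filter: the state holds a price level and a drift, and both are allowed to evolve over time. This is only an illustration of the state-space mechanics on invented data; the article estimates the variance parameters by maximum likelihood, whereas here they are simply assumed.

```python
# Kalman filter for a local linear trend model (sketch, assumed variances).
# State = (level, drift); level_t = level_{t-1} + drift_{t-1} + eta,
# drift_t = drift_{t-1} + zeta; we observe the level plus noise.

def llt_filter(y, var_obs=1.0, var_level=0.1, var_drift=0.01):
    level, drift = y[0], 0.0
    P = [[1e6, 0.0], [0.0, 1e6]]  # near-diffuse initial state covariance
    levels = []
    for obs in y:
        # predict step
        level = level + drift
        P = [[P[0][0] + 2 * P[0][1] + P[1][1] + var_level,
              P[0][1] + P[1][1]],
             [P[0][1] + P[1][1],
              P[1][1] + var_drift]]
        # update step with the observed level
        S = P[0][0] + var_obs
        k0, k1 = P[0][0] / S, P[1][0] / S
        resid = obs - level
        level += k0 * resid
        drift += k1 * resid
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        levels.append(level)
    return levels

log_prices = [0.1 * t for t in range(30)]  # hypothetical log index, steady drift
filtered = llt_filter(log_prices)
```

On this clean series the filter recovers both the level and the constant drift; with a time-varying drift the same recursion tracks it, which is the generalization of Goetzmann's fixed-drift random walk the abstract describes.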

3.
An Empirical Examination of the Debt Maturity Structure of Chinese Listed Companies
Using descriptive statistics, pooled OLS, and parametric and non-parametric tests, this paper systematically analyzes the debt maturity structure of Chinese listed companies. We find that the debt maturity of Chinese listed companies is relatively short, that regulated industries carry relatively longer debt maturities than other industries, and that debt maturity differs significantly across industries and regions, with most of these differences attributable to industry and regional effects rather than year effects. Overall, industry classification explains 9.53% of the variation in debt maturity, and regional dummy variables explain 4.15%.

4.
Optimal Hedging of Prediction Errors Using Prediction Errors
Wind power has attracted much attention recently for various reasons, and the production of electricity from wind energy has been increasing rapidly for a few decades. One of the most difficult practical issues with wind power is that the power output depends heavily on wind conditions, so future output may be volatile or uncertain. Prediction of future power output is therefore considered important, and is key for electric power generating industries to make the wind power electricity market work properly. However, the use of predictions may cause other problems due to “prediction errors.” In this work, we propose a new type of weather derivative based on the prediction errors for wind speeds, and estimate its hedge effect on wind power energy businesses. First, we investigate the correlation of prediction errors between the power output and the wind speed at a Japanese wind farm, a collection of wind turbines that generate electricity in the same location. We then develop a methodology that optimally constructs a wind derivative based on the prediction errors using nonparametric regressions. A simultaneous optimization technique for the loss and payoff functions of wind derivatives is demonstrated on the empirical data.
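The abstract does not name its nonparametric estimator, but the relationship between wind-speed and power-output prediction errors could be smoothed with something like a Nadaraya-Watson kernel regression. A sketch on invented error pairs:

```python
# Gaussian-kernel Nadaraya-Watson regression (sketch; the paper's exact
# nonparametric estimator is not specified in this abstract).
from math import exp

def nw_regress(x0, xs, ys, h=1.0):
    """Kernel-weighted estimate of E[y | x = x0] with bandwidth h."""
    w = [exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# hypothetical paired forecast errors at a wind farm
speed_err = [-2.0, -1.0, 0.0, 1.0, 2.0]   # wind-speed forecast errors (m/s)
power_err = [-3.9, -2.1, 0.1, 2.0, 4.1]   # power-output forecast errors (MWh)
```

A payoff function for the derivative could then be read off this fitted curve: pay the generator roughly what the smoothed power-output error predicts for the realized wind-speed error.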

5.
This paper provides a method for testing for regime differences when regimes are long-lasting. Standard testing procedures are generally inappropriate because regime persistence causes a spurious regression problem – a problem that has led to incorrect inference in a broad range of studies involving regimes representing political, business, and seasonal cycles. The paper outlines analytically how standard estimators can be adjusted for regime dummy variable persistence. While the adjustments are helpful asymptotically, spurious regression remains a problem in small samples and must be addressed using simulation or bootstrap procedures. We provide a simulation procedure for testing hypotheses in situations where an independent variable in a time-series regression is a persistent regime dummy variable. We also develop a procedure for testing hypotheses in situations where the dependent variable has similar properties.

6.
Methods developed for making time-varying forecasts in economic and financial analysis include (a) equal-weighted moving-window (or rolling) regression, (b) time-weighted (e.g. exponentially weighted) regression, (c) the Kalman filter (KF) and (d) adaptive Kalman filters. This paper develops a new method based on variational approximation of sequential Bayesian inference (VASB). The concepts of sequential Bayesian analysis and of the variational approximation of an intractable posterior are simple and straightforward, and our VASB algorithm is not complicated and is easy to code. For a regression on multiple time series, the regression coefficients, standard errors, prediction and residual error are time-varying and are estimated jointly at every time step. For a single time series (e.g. price returns of an asset), the mean and variance are time-varying and are predicted jointly at every time step. The VASB algorithm outperforms rolling and time-weighted statistics or regressions and the KF in terms of higher predictive power and stronger robustness. Derivations of the VASB algorithm are presented in the appendices.
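As a point of reference for the baselines listed in (b), a time-weighted mean/variance estimate can be written as a one-line recursion. This sketches the exponentially weighted benchmark the paper compares against, not the VASB algorithm itself; the decay factor is an assumption.

```python
def ew_update(mean, var, x, lam=0.94):
    """One step of an exponentially weighted mean/variance recursion
    (method (b) in the abstract's list; lam=0.94 is an assumed decay)."""
    new_mean = lam * mean + (1 - lam) * x
    new_var = lam * var + (1 - lam) * (x - mean) ** 2
    return new_mean, new_var

mean, var = 0.0, 1.0
for _ in range(300):                  # feed a constant "return" of 2.0
    mean, var = ew_update(mean, var, 2.0)
```

The recursion forgets old data geometrically, so both estimates adapt over time; VASB instead updates a full approximate posterior at each step, which also yields standard errors jointly.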

7.
In this paper we derive a rule that identifies when exact tests may be used in the context of the multivariate regression model. Our derivation extends distribution theory reported in Rao (1973) and leads to the specification of exact tests for several event study hypothesis forms of interest to accounting and finance researchers. For tests where the event parameter is constrained to be equal across firms, we show that an infinite set of exact tests is available, of which the well known portfolio t-test is a special case. We conduct simulations using data from the CRSP Daily Returns file, and find that several test statistics, including exactly distributed statistics derived using the multivariate regression model, significantly over-reject the hypotheses examined.
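The portfolio t-test mentioned above is simply a one-sample t-statistic on the cross-section of event-day abnormal returns. A sketch with hypothetical numbers:

```python
# Portfolio t-test: t-statistic for the mean event-day abnormal return
# across firms (the abnormal returns below are hypothetical).
from math import sqrt

def portfolio_t(event_day_ars):
    n = len(event_day_ars)
    m = sum(event_day_ars) / n
    s = sqrt(sum((a - m) ** 2 for a in event_day_ars) / (n - 1))
    return m / (s / sqrt(n))

ars = [0.01, 0.02, 0.03, 0.04, 0.05]  # hypothetical abnormal returns
```

Under the null of no event effect (and i.i.d. normal abnormal returns) this statistic has an exact t distribution with n-1 degrees of freedom, which is why it falls within the paper's family of exact tests.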

8.
This article examines the power of tests of given size to detect and distinguish between wealth (i.e., mean) and information (i.e., variance) effects in event studies. We find that an Estimated Generalized Least Squares (EGLS) mean-effects test is consistently more powerful than the test based upon the average standardized residual and is as powerful as a nonparametric rank test. Unlike the test based upon the average standardized residual and the rank test, the EGLS test is well specified even when the event affects the variances of the prediction errors. We also find that conventional parametric tests to detect changes in the variance of the event-day average abnormal return are misspecified when the null of no change is true. We analyze the reasons this occurs and suggest a rank procedure that produces tests of the correct size under the null. Our evidence suggests that the critical factors allowing researchers to distinguish between wealth and information effects are an estimation procedure incorporating the heteroskedasticity inherent in market model prediction errors and an explicit test for event-day variance changes.

9.
In the last three decades, a variety of stochastic reserving models have been proposed in the general insurance literature mainly using (or reproducing) the well-known Chain-Ladder claims-reserving estimates. In practice, when the data do not satisfy the Chain-Ladder assumptions, high prediction errors might occur. Thus, in this article, a combined methodology is proposed based on the stochastic vector projection method; it uses the regression-through-the-origin approach of Murphy, but with heteroscedastic errors different from those used by Mack. Furthermore, the Mack distribution-free model appears to have higher prediction errors when compared with the proposed one, particularly for data sets with increasing (regular) trends. Finally, three empirical examples with irregular and regular data sets illustrate the theoretical findings, and the concepts of best estimate and risk margin are reported.

10.
There is an abundant literature on the design of intelligent systems to forecast stock market indices. In general, the existing stock market price forecasting approaches can achieve good results. The goal of our study is to develop an effective intelligent predictive system to improve the forecasting accuracy. Therefore, our proposed predictive system integrates adaptive filtering, artificial neural networks (ANNs), and evolutionary optimization. Specifically, it is based on the empirical mode decomposition (EMD), which is a useful adaptive signal‐processing technique, and ANNs, which are powerful adaptive intelligent systems suitable for noisy data learning and prediction, such as stock market intra‐day data. Our system hybridizes intrinsic mode functions (IMFs) obtained from EMD and ANNs optimized by genetic algorithms (GAs) for the analysis and forecasting of S&P500 intra‐day price data. For comparison purposes, the performance of the EMD‐GA‐ANN presented is compared with that of a GA‐ANN trained with a wavelet transform's (WT's) resulting approximation and details coefficients, and a GA‐general regression neural network (GRNN) trained with price historical data. The mean absolute deviation, mean absolute error, and root‐mean‐squared errors show evidence of the superiority of EMD‐GA‐ANN over WT‐GA‐ANN and GA‐GRNN. In addition, it outperformed existing predictive systems tested on the same data set. Furthermore, our hybrid predictive system is relatively easy to implement and not highly time‐consuming to run. We also found that the Daubechies wavelet achieved notably higher prediction accuracy than the Haar wavelet, and that prediction errors decrease with the level of decomposition.

11.
Using financial variables of manufacturing companies listed on the Shanghai and Shenzhen A-share markets, we construct a credit-risk evaluation system. After reducing its dimensionality with factor analysis, we explore the measurement of default probability using data-mining techniques and statistical methods. The model has two stages: in the clustering stage, a weighted fuzzy C-means (WFCM) algorithm groups the sample into homogeneous clusters, making samples within each cluster more representative; in the default-measurement stage, logistic regression is applied separately to each group. The empirical results show that introducing the WFCM algorithm into the logistic model significantly improves the accuracy of out-of-sample default probability estimates: for the full sample and for ST companies, default prediction accuracy improves by 10.7% and 20%, respectively, over the plain logistic model, and ROC tests likewise indicate that the WFCM-Logistic model is more widely applicable.

12.
We estimate tracking errors from 26 exchange-traded funds (ETFs) utilizing three different methods and test their relative performance using Jensen's model. We find that tracking errors are significantly different from zero and display persistence. Based on Jensen's alpha, risk-adjusted returns are significantly inferior to benchmark returns for all ETFs with two exceptions at conventional significance levels, revealing that a passive investment strategy does not outperform market returns. We then examine the degree to which frequently used factors such as expense ratio, dividends, exchange rate and spreads of trading prices may be underlying sources of tracking errors causing this underperformance. We find that the change in the exchange rate is a significant source of tracking errors. Our serial correlation test, runs test and panel regression analysis reveal that Asian markets display relatively greater persistence and therefore are less efficient in disseminating information and noisier in filtering the information contained in returns.
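The abstract does not spell out its three tracking-error methods, but three standard definitions in this literature are the mean absolute return difference, the standard deviation of return differences, and the standard error of a return regression. A sketch on hypothetical daily returns:

```python
# Three common tracking-error definitions (sketch; the paper's exact
# three methods are not specified in this abstract).
from math import sqrt

def te_mean_abs(etf, bench):
    """Mean absolute difference between ETF and benchmark returns."""
    return sum(abs(e - b) for e, b in zip(etf, bench)) / len(etf)

def te_std_diff(etf, bench):
    """Standard deviation of the return differences."""
    d = [e - b for e, b in zip(etf, bench)]
    m = sum(d) / len(d)
    return sqrt(sum((x - m) ** 2 for x in d) / (len(d) - 1))

def te_regression(etf, bench):
    """Standard error of residuals from regressing ETF on benchmark returns."""
    n = len(etf)
    mb, me = sum(bench) / n, sum(etf) / n
    beta = (sum((b - mb) * (e - me) for b, e in zip(bench, etf))
            / sum((b - mb) ** 2 for b in bench))
    alpha = me - beta * mb
    resid = [e - (alpha + beta * b) for e, b in zip(etf, bench)]
    return sqrt(sum(r ** 2 for r in resid) / (n - 2))

bench = [0.010, -0.005, 0.007, 0.002, -0.003, 0.008]  # hypothetical returns
etf = [0.009, -0.004, 0.008, 0.001, -0.002, 0.007]
```

All three are zero for a perfectly tracking fund and grow with the divergence between fund and index, which is why significantly nonzero values signal underperformance of the passive strategy.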

13.
The paper considers a test for structural breaks based on quantile regressions instead of OLS estimates. Besides granting robustness, this allows us to verify the impact of a break in more than one point of the conditional distribution. The quantile regression test is then repeatedly implemented as a diagnostic tool to uncover partial or spurious breaks. The test is also implemented to measure the contribution of each explanatory variable to the instability of the regression coefficients, thus identifying which of the possible sources of breaks linked to the nature of the explanatory variables is most influential. A real data example of exchange rates shows the presence of a time-driven break, but only at the lower quartile, while the analysis of the explanatory variable excludes its involvement in the break. Since the asymptotic distribution of the OLS test for structural change depends on i.i.d. normal errors and on the exogeneity of the explanatory variables, a Monte Carlo study analyses the behavior of OLS and quantile regression tests for structural changes with lagged endogenous variables, non-normal errors, spurious or partial breaks, and misspecification.
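What makes a test quantile-based is the check (pinball) loss it minimizes instead of squared error. The core property is easy to see in the constant-only case: the minimizer of the check loss over a sample is the corresponding sample quantile. A brute-force sketch on invented data:

```python
# The check (pinball) loss underlying quantile regression (sketch).
# The "returns" series is hypothetical.

def pinball(u, tau):
    """Asymmetric absolute loss: weight tau above the fit, 1 - tau below."""
    return tau * u if u >= 0 else (tau - 1) * u

def quantile_fit(y, tau):
    """Brute-force constant fit; the minimizer is a sample tau-quantile."""
    return min(y, key=lambda c: sum(pinball(yi - c, tau) for yi in y))

returns = list(range(1, 11))  # hypothetical data
```

A full quantile regression replaces the constant with a linear predictor and minimizes the same loss, so a break can be tested separately at each tau, as in the paper's lower-quartile exchange-rate example.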

14.
The accurate prediction of long-term care insurance (LTCI) mortality, lapse, and claim rates is essential when making informed pricing and risk management decisions. Unfortunately, academic literature on the subject is sparse and industry practice is limited by software and time constraints. In this article, we review current LTCI industry modeling methodology, which is typically Poisson regression with covariate banding/modification and stepwise variable selection. We test the claim that covariate banding improves predictive accuracy, examine the potential pitfalls of stepwise selection, and contend that the assumptions required for Poisson regression are not appropriate for LTCI data. We propose several alternative models specifically tailored toward count responses with an excess of zeros and overdispersion. Using data from a large LTCI provider, we evaluate the predictive capacity of random forests and generalized linear and additive models with zero-inflated Poisson, negative binomial, and Tweedie errors. These alternatives are compared to previously developed Poisson regression models.

Our study confirms that variable modification is unnecessary at best and automatic stepwise model selection is dangerous. After demonstrating severe overprediction of LTCI mortality and lapse rates under the Poisson assumption, we show that a Tweedie GLM enables much more accurate predictions. Our Tweedie regression models improve average predictive accuracy (measured by several prediction error statistics) over Poisson regression models by as much as four times for mortality rates and 17 times for lapse rates.
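The failure of the Poisson assumption on data with excess zeros can be made concrete with a Pearson dispersion check: under Poisson, the variance equals the mean, so the dispersion statistic should be near 1. A sketch with invented claim counts (not the paper's LTCI data):

```python
# Overdispersion check motivating the move away from Poisson (sketch).

def pearson_dispersion(counts):
    """Pearson dispersion for an intercept-only Poisson fit;
    values far above 1 indicate overdispersion."""
    n = len(counts)
    mu = sum(counts) / n
    return sum((c - mu) ** 2 / mu for c in counts) / (n - 1)

claims = [0, 0, 0, 0, 0, 0, 1, 0, 0, 12]  # hypothetical: excess zeros + one large count
```

On a zero-heavy sample like this the statistic is an order of magnitude above 1, which is exactly the situation where zero-inflated, negative binomial, or Tweedie error structures fit better than plain Poisson.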


15.
Improving social welfare and stimulating consumption are two important issues in promoting economic growth. Based on the panel data of the China Health and Retirement Longitudinal Study 2011 and 2013, this article precisely calculates the Social Security Wealth of employees and residents. A fixed-effect model and quantile regression are employed, and the interaction of year and age dummies is used as an instrumental variable. The results indicate that Social Security Wealth can promote total household consumption as well as improve the household consumption structure. However, the impact differs between the employee and resident groups.

16.
This paper investigates the performance of Artificial Neural Networks for the classification and subsequent prediction of business entities into failed and non-failed classes. Two techniques, back-propagation and Optimal Estimation Theory (OET), are used to train the neural networks to predict bankruptcy filings. The data are drawn from Compustat data tapes representing a cross-section of industries. The results obtained with the neural networks are compared with other well-known bankruptcy prediction techniques such as discriminant analysis, probit and logit, as well as against benchmarks provided by directly applying the bankruptcy prediction models developed by Altman (1968) and Ohlson (1980) to our data set. We control the degree of ‘disproportionate sampling’ by creating ‘training’ and ‘testing’ populations with proportions of bankrupt firms ranging from 1% to 50%. For each population, we apply each technique 50 times to determine stable accuracy rates in terms of Type I, Type II and Total Error. We show that the performance of various classification techniques, in terms of their classification errors, depends on the proportions of bankrupt firms in the training and testing data sets, the variables used in the models, and assumptions about the relative costs of Type I and Type II errors. The neural network solutions do not achieve the ‘magical’ results that literature in this field often promises, although there are notable ‘pockets’ of superior performance by the neural networks, depending on particular combinations of proportions of bankrupt firms in training and testing data sets and assumptions about the relative costs of Type I and Type II errors. However, since we tested only one architecture for the neural network, it will be necessary to investigate potential improvements in neural network performance through systematic changes in neural network architecture.

17.
To test the effectiveness of China's monetary policy with respect to the stock market, this paper builds on classical theory and applies cointegration tests, Granger causality tests, and a VECM to empirically analyze the relationship between monetary policy and stock-market returns. The results show that money supply growth is positively correlated with stock-market returns, but the long-run effect is not significant, while interest-rate adjustments have a significant short-run effect on stock returns and are stable in the long run, consistent with standard financial theory. A dummy-variable regression model is then used to analyze how changes in the monetary-policy environment affect the level of returns. Based on China's circumstances, we assess the outcome of this transmission effect and offer suggestions on how to address the problems in the transmission of monetary policy to the stock market.

18.
This paper examines the issue of the prediction of future spot rates by applying the seemingly unrelated regression technique to four major currencies using data from January 1974 to September 1982. The empirical evidence indicates that current spot rates provide a better prediction of future spot rates than do current forward rates. In further rolling subsample studies, the estimated coefficients for current forward rates (or spot rates) are found to be sensitive to the new information. An important implication of this paper is that since the estimated coefficients vary over time, the underlying pattern of the generated coefficients should be extrapolated and incorporated into the exchange rate predictions.

19.
An integral part of econometric practice is to test the adequacy of model specifications. If a model is adequately specified, it should not leave interesting features of the data-generating process in the errors. Despite the common tradition, the importance of diagnostic checking as a safeguard against mis-specification has only recently been recognized by neural network (NN) practitioners, possibly because this type of semi-parametric methodology was not originally designed for economic and financial applications. The purpose of this paper is to compare a number of analytical statistical testing procedures suitable to diagnostic checking on a neural network regression model. We present the standard Lagrange multiplier (LM) testing framework designed under the assumption of identically distributed disturbances and also examine two modifications that are robust to heteroskedasticity in errors. One modification also gives the researcher an opportunity to incorporate information concerning the volatility structure of the data-generating process in the testing procedure. By means of a Monte Carlo simulation, we investigate the performance of these tests under GARCH-type heteroskedasticity in errors and various distributional assumptions. The results show that although the primary concern of the researcher may be to design a regression model that accurately captures relations in the mean of the conditional distribution, developing a good approximation of the underlying volatility structure generally increases the efficiency of tests in detecting non-adequacy of a NN model.

20.
This study investigates the stock-market reaction to layoff announcements where more than 1000 workers are affected. We employ a dummy variable regression (DVR) version of the market model and compare the results obtained using ordinary least squares (OLS) versus exponential GARCH (EGARCH), and a value-weighted (VW) versus equally weighted (EW) market index. We find that the stock market responds negatively to layoffs attributed to low demand. We also find that, contrary to prior research, the market reacts positively to restructuring-related layoffs on the announcement date. This pattern of market reaction is observed regardless of the market index used or the parameter estimation methods employed, although the empirical results indicate that using the EGARCH/VW market index tends to generate fewer statistically significant test results and smaller cumulative abnormal returns (ARs) in absolute size. Taken together, our study provides additional support for the claim that studies of stock-market reaction to corporate events must account for the time variation in return volatility. Ignoring this could result in erroneous inferences.
