Similar Literature
 20 similar documents found (search time: 46 ms)
1.
This paper develops an instrumental variables framework to form a better proxy for earnings forecast errors. The key aspect of the approach is to extract information from alternative proxies for the same underlying variable, namely a portion of realized earnings signals unexpected by the market. We use signs of various proxies for earnings forecast errors obtained from different time-series forecasting models as multiple instruments. The results show that the instrumental variables approach is effective for reducing measurement errors inherent in various proxies for earnings forecast errors. It produces not only a smaller magnitude but also a narrower dispersion of earnings forecast errors. The paper provides evidence that the instrumental variables approach performs better for small-firm samples than for large-firm samples. Finally, we observe that analysts' forecast errors seasoned with the signs of various time-series forecast errors (as well as the signs of their own forecast errors) outperform those without seasoning. This indicates that analysts' forecast errors can still be improved by employing the instrumental variables technique.
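The core of the instrumental-variables idea above can be sketched in a few lines: regress a market response on an error-ridden proxy, instrumenting with the sign of an alternative proxy whose measurement error is independent. Everything below (sample size, noise levels, the single-regressor setup) is an illustrative assumption, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Unobserved true earnings surprise and two noisy proxies from
# different (hypothetical) time-series forecasting models.
true_surprise = rng.normal(size=n)
proxy_a = true_surprise + rng.normal(scale=0.8, size=n)
proxy_b = true_surprise + rng.normal(scale=0.8, size=n)

# Market response loads on the true surprise with coefficient 1.
returns = 1.0 * true_surprise + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    design = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

# OLS on a single noisy proxy is attenuated toward zero ...
b_ols = ols_slope(proxy_a, returns)

# ... while 2SLS with the sign of the alternative proxy as instrument
# removes the attenuation: the instrument is correlated with the true
# surprise but not with proxy_a's measurement error.
z = np.column_stack([np.ones(n), np.sign(proxy_b)])
first_stage = z @ np.linalg.lstsq(z, proxy_a, rcond=None)[0]
b_iv = ols_slope(first_stage, returns)
```

With these settings the OLS slope sits well below the true value of 1, while the IV slope recovers it up to sampling noise.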

2.
This article shows how to evaluate the performance of managed portfolios using stochastic discount factors (SDFs) from continuous-time term structure models. These models imply empirical factors that include time averages of the underlying state variables. The approach addresses a performance measurement bias, described by Goetzmann, Ingersoll, and Ivkovic (2000) and Ferson and Khang (2002), arising because fund managers may trade within the return measurement interval or hold positions in replicable options. The empirical factors contribute explanatory power in factor model regressions and reduce model pricing errors. We illustrate the approach on US government bond funds during 1986–2000.

3.
This article demonstrates that the portfolio approach could suffer a serious problem when the sorting variables contain not only true values but also measurement errors. The grouped measurement errors will be embedded into the data used to test financial models and further bias the testing results. To correct for this measurement‐error problem, I develop a random sampling approach to form portfolios. Results from this new methodology are unbiased and robust. By applying this methodology to investigate beta shifts, I show that the previous results about beta shifts are driven by measurement errors. The actual beta shift pattern is more complicated than that predicted by previous studies. The risk shift hypothesis is unlikely to explain the mean‐reversion puzzle for stock returns. JEL classification: G11, C43.
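The grouped-measurement-error problem described above is easy to demonstrate numerically: sorting on a noisy variable concentrates the noise in the extreme portfolios, while random assignment leaves it balanced. The betas and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_beta = rng.normal(1.0, 0.3, size=n)
measured = true_beta + rng.normal(0.0, 0.3, size=n)  # noisy estimates

# Sorting on the measured variable selects on the error: the top
# decile's measured mean overstates its true mean.
order = np.argsort(measured)
top = order[-n // 10:]
sort_gap = measured[top].mean() - true_beta[top].mean()

# Random sampling into a same-sized group keeps errors balanced,
# so the gap is near zero.
random_group = rng.choice(n, size=n // 10, replace=False)
rand_gap = measured[random_group].mean() - true_beta[random_group].mean()
```

Here `sort_gap` is large and positive (roughly half of the top decile's apparent spread is pure noise, since noise and signal have equal variance), while `rand_gap` is statistically indistinguishable from zero.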

4.
Choosing Performance Measurement Standards for Enterprises: Puzzles, Countermeasures, and Implications
Starting from the evolution of performance measurement standard selection, this article categorizes and briefly reviews the many puzzles involved in choosing such standards. It argues that these puzzles arise because performance itself comprises three layers, true performance, measurable performance, and measured performance; the measurement process always introduces cognitive and technical errors, and an ideal standard that is technically independent, complete, predictive, and adaptive is hard to find. Drawing on organizational learning theory, the article proposes a multi-loop method for selecting performance measurement standards that is systematic, dynamic, cognitive, and approximate. Finally, it draws implications for the design of performance measurement standard selection systems in Chinese enterprises.

5.
Measurement Error and Nonlinearity in the Earnings-Returns Relation
There is a long history of research which examines the relation between unexpected earnings and unexpected returns on common stock. Early literature used simple linear regression models to describe this relation. Recently, a number of authors have proposed nonlinear models. These authors find that the earnings-returns relation is approximately linear for small changes but is 'S'-shaped globally. However, unexpected earnings are generated by the sum of a measurement error and a true earnings innovation, so the apparent nonlinearity could be an artifact of nonlinearity in the measurement errors. Using a research design that minimizes the presence of measurement errors, we provide evidence consistent with the hypothesis that measurement errors contribute to the nonlinearities in the earnings-returns relation. While we are not suggesting that the earnings-returns relation is linear, our evidence suggests that there is no advantage to using a nonlinear model for large firms that are widely followed by analysts.

6.
We show that the conclusions to be drawn concerning the informational efficiency of illiquid options markets depend critically on whether one carefully recognises and appropriately deals with the econometrics of the errors‐in‐variables problem. This paper examines the information content of options on the Danish KFX share index. We consider the relation between the volatility implied in an option's price and the subsequently realised index return volatility. Since these options are traded infrequently and in low volumes, the errors‐in‐variables problem is potentially large. We address the problem directly using instrumental variables techniques. We find that when measurement errors are controlled for, call option prices even in this very illiquid market contain information about future realised volatility over and above the information contained in historical volatility.

7.
Bradshaw and Sloan (2002, Journal of Accounting Research, 40, 41–66) document a significant increase in the difference between the earnings response coefficients (ERCs) for GAAP and Street (I/B/E/S) earnings over the 1990s, suggesting that the market has become increasingly reliant or fixated on Street earnings. In this study we investigate whether, alternatively, an “errors in variables” problem caused by a mismatch between the definitions of realized and expected earnings drives the ERC divergence. Our findings suggest that results from conventional analyses of GAAP and Street ERCs, including the ERC divergence pattern, are significantly contaminated by measurement errors in earnings surprises.

8.
This paper investigates the performance of Artificial Neural Networks for the classification and subsequent prediction of business entities into failed and non-failed classes. Two techniques, back-propagation and Optimal Estimation Theory (OET), are used to train the neural networks to predict bankruptcy filings. The data are drawn from Compustat data tapes representing a cross-section of industries. The results obtained with the neural networks are compared with other well-known bankruptcy prediction techniques such as discriminant analysis, probit and logit, as well as against benchmarks provided by directly applying the bankruptcy prediction models developed by Altman (1968) and Ohlson (1980) to our data set. We control the degree of ‘disproportionate sampling’ by creating ‘training’ and ‘testing’ populations with proportions of bankrupt firms ranging from 1% to 50%. For each population, we apply each technique 50 times to determine stable accuracy rates in terms of Type I, Type II and Total Error. We show that the performance of various classification techniques, in terms of their classification errors, depends on the proportions of bankrupt firms in the training and testing data sets, the variables used in the models, and assumptions about the relative costs of Type I and Type II errors. The neural network solutions do not achieve the ‘magical’ results that literature in this field often promises, although there are notable ‘pockets’ of superior performance by the neural networks, depending on particular combinations of proportions of bankrupt firms in training and testing data sets and assumptions about the relative costs of Type I and Type II errors. However, since we tested only one architecture for the neural network, it will be necessary to investigate potential improvements in neural network performance through systematic changes in neural network architecture.
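The back-propagation setup and the Type I/Type II error accounting described above can be sketched on synthetic data. The "ratios", class means, and training proportions below are invented stand-ins, not the paper's Compustat variables or architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic "financial ratios" per firm; failed firms (y = 1) have
# lower profitability/liquidity on average (illustrative numbers only).
def make_sample(n, frac_bankrupt):
    n_b = int(n * frac_bankrupt)
    x_healthy = rng.normal([0.1, 1.5], 0.4, size=(n - n_b, 2))
    x_failed = rng.normal([-0.2, 0.8], 0.4, size=(n_b, 2))
    x = np.vstack([x_healthy, x_failed])
    y = np.r_[np.zeros(n - n_b), np.ones(n_b)]
    return x, y

x_train, y_train = make_sample(400, 0.5)  # balanced training population
x_test, y_test = make_sample(400, 0.1)    # 10% bankrupt test population

# One-hidden-layer network trained with plain back-propagation on the
# cross-entropy loss.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
w2 = rng.normal(0, 0.5, 4); b2 = 0.0
for _ in range(2000):
    h = sigmoid(x_train @ w1 + b1)
    p = sigmoid(h @ w2 + b2)
    g = (p - y_train) / len(y_train)      # d(loss)/d(output logit)
    gh = np.outer(g, w2) * h * (1 - h)    # back-propagate to hidden layer
    w2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    w1 -= 0.5 * (x_train.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

pred = sigmoid(sigmoid(x_test @ w1 + b1) @ w2 + b2) > 0.5
type1 = float(np.mean(~pred[y_test == 1]))     # bankrupt called healthy
type2 = float(np.mean(pred[y_test == 0]))      # healthy called bankrupt
accuracy = float(np.mean(pred == y_test))
```

Note the mismatch the abstract emphasizes: the network is trained on a 50% bankrupt population but tested on a 10% one, so the same classifier yields very different Type I and Type II trade-offs across populations.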

9.
Errors in Estimating Accruals: Implications for Empirical Research
This paper examines the impact of measuring accruals as the change in successive balance sheet accounts, as opposed to measuring accruals directly from the statement of cash flows. Our primary finding is that studies using a balance sheet approach to test for earnings management are potentially contaminated by measurement error in accruals estimates. In particular, if the partitioning variable used to indicate the presence of earnings management is correlated with the occurrence of mergers and acquisitions or discontinued operations, tests are biased and researchers are likely to erroneously conclude that earnings management exists when there is none. Additional results show that the errors in balance sheet accruals estimation can confound returns regressions where discretionary and non-discretionary accruals are used as explanatory variables. Moreover, we demonstrate that tests of market mispricing of accruals will be understated due to erroneous classification of "extreme" accruals firms.
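The balance-sheet contamination above is mechanical and easy to show with a toy example. The field names echo common Compustat mnemonics, but the numbers are invented; the formula is the standard balance-sheet accruals definition, not necessarily the paper's exact specification.

```python
# Balance-sheet accruals: change in non-cash current assets, minus
# change in non-debt current liabilities, minus depreciation.
def bs_accruals(cur, prev):
    d = lambda k: cur[k] - prev[k]
    return (d("act") - d("che")) - (d("lct") - d("dlc")) - cur["dp"]

prev = {"act": 100.0, "che": 20.0, "lct": 60.0, "dlc": 10.0, "dp": 5.0}

# Organic year: working capital drifts slightly.
organic = {"act": 108.0, "che": 21.0, "lct": 63.0, "dlc": 10.0, "dp": 5.0}

# Same operating year, but a mid-year acquisition adds 30 of receivables
# and 12 of payables to the balance sheet with no earnings impact.
acquired = {"act": 138.0, "che": 21.0, "lct": 75.0, "dlc": 10.0, "dp": 5.0}

organic_accruals = bs_accruals(organic, prev)    # -1.0
acquired_accruals = bs_accruals(acquired, prev)  # 17.0: spurious "accruals"
```

A cash-flow-statement measure would report the same accruals in both cases; the balance-sheet measure attributes the acquired working capital to earnings management.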

10.
In this study, the performance of cross-sectional stochastic dominance (SD), first proposed by Falk and Levy (FL) (1989), is compared with three traditional event study methodologies: the Mean Adjusted model, the Market Adjusted model, and the Market and Risk Adjusted Returns model. The comparison technique we use is a simulations approach similar to that of Brown and Warner (BW) (1980). BW show that the Mean Adjusted and Market Adjusted Returns models perform as well as the more sophisticated Market and Risk Adjusted Returns model. FL, however, provide a very compelling argument against the three traditional event study methodologies. The problem, they note, is not the theoretical need for risk adjustment; it is the definition and measurement of risk. FL assert that the observed abnormal returns (or lack thereof) may be due to omitted variables, a market proxy effect, or other specification errors in implementing the traditional event study methodologies. The present research finds that SD analysis without the bootstrap method for statistical testing is not very useful at any level of abnormal return. However, when the bootstrap method of statistical testing is employed, SD is found to perform as well as, and sometimes better than, the three traditional models in detecting simulated abnormal performance at all test levels. The results are consistent with FL's assertion that the improved performance may result from the SD methodology being free from the specification errors inherent in the three traditional event study models.
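The bootstrap testing step that drives the result above follows a standard recipe: recenter the sample under the null of no abnormal performance, resample, and compare. The sample size, injected abnormal return, and volatility below are illustrative, not the study's simulation design.

```python
import random
import statistics

random.seed(3)

# Hypothetical event-window abnormal returns for 100 firms, with a +2%
# simulated abnormal performance injected.
abnormal = [0.02 + random.gauss(0, 0.03) for _ in range(100)]
observed = statistics.mean(abnormal)

# Bootstrap under the null: recenter the sample at zero, resample with
# replacement, and count how often a resampled mean reaches `observed`.
centered = [r - observed for r in abnormal]
n_boot = 2000
count = 0
for _ in range(n_boot):
    resample = [random.choice(centered) for _ in centered]
    if statistics.mean(resample) >= observed:
        count += 1
p_value = count / n_boot
```

Because the resampling distribution is built from the data themselves, no parametric risk model (and hence no risk-model specification error) enters the test.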

11.
This article employs second-generation random coefficient (RC) modeling to investigate the time-varying behavior and the predictability of the money demand function in Taiwan over the period from 1982Q1 to 2006Q4. The RC procedure deals with some of the limitations of previous studies, such as unknown functional forms, omitted variables, measurement errors, additive error terms, and the correlations between explanatory variables and their coefficients. Our main findings are as follows. First, the empirical results indicate that the values of the elasticities in the RC estimation are significantly different from those in other studies, because of the use of coefficient drivers. Second, by observing the time-varying behavior of the coefficients, we find some specific points in our time profile of coefficients; that is, we can make an association with real events occurring in Taiwan, such as the financial liberalization after 1989 and the Asian financial crisis of 1997–1998. Finally, we compare the predicted values via the time intervals and different specifications and find that we should adapt different specifications of the RC model to estimate each interval.

12.
This paper revisits Fama and French [Fama, Eugene F., French, Kenneth R., (1993) Common risk factors in the returns on stock and bonds. Journal of Financial Economics 33 (1), 3–56] and Carhart [Carhart, Mark M., 1997. On persistence in mutual fund performance. Journal of Finance 52 (1), 57–82] multifactor model taking into account the possibility of errors-in-variables. In their well known paper, Fama and French [Fama, Eugene F., French, Kenneth R., 1997. Industry costs of equity. Journal of Financial Economic 43 (2), 153–193] concluded that estimates of the cost of equity for the three-factor model of FF (1993) were imprecise. We argue that this imprecision is even more severe because of the pervasive effects of measurement errors. We propose Dagenais and Dagenais [Dagenais, Marcel G., Dagenais, Denyse L., 1997. Higher moment estimators for linear regression models with errors in the variables. Journal of Econometrics 76 (1–2), 193–221] higher moments estimator as a solution. Our results show that estimates of the cost of equity obtained with the Dagenais and Dagenais estimator differ sharply from popular OLS estimates and shed new light on performance attribution and abnormal performance (α). Adapting the Generalized Treynor Ratio, recently developed by Hübner [Hübner, Georges, 2005. The generalized treynor ratio. Review of Finance 9 (3), 415–435], we show that the performance of managed portfolios with multi-index models should be revisited in the presence of errors-in-variables.

13.
14.
This paper provides the explicit solution to the three-factor diffusion model recently proposed by the Danish Society of Actuaries to the Danish industry of life insurance and pensions. The solution is obtained by use of the known general solution to multidimensional linear stochastic differential equation systems. With offset in the explicit solution, we establish the conditional distribution of the future state variables which allows for exact simulation. Using exact simulation, we illustrate how simulation of the system can be improved compared to a standard Euler scheme. In order to analyze the effect of choosing the exact simulation scheme over the traditional Euler approximation scheme frequently applied by practitioners, we carry out a simulation study. We show that due to its recursive nature, the Euler scheme becomes computationally expensive as it requires a small step size in order to minimize discretization errors. Using our exact simulation scheme, one is able to cut these computational costs significantly and obtain even better forecasts. As probability density tail behavior is key to expected investment portfolio performance, we further conduct a risk analysis in which we compare well-known risk measures under both schemes. Finally, we conduct a sensitivity analysis and find that the relative performance of the two schemes depends on the chosen model parameter estimates.
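The exact-versus-Euler comparison above can be illustrated on a single linear factor, since a linear SDE has a Gaussian transition law that can be sampled in one step. The Vasicek-type parameters below are invented, not the Society's three-factor calibration.

```python
import math
import random

random.seed(4)

# One Ornstein-Uhlenbeck factor, dX = kappa*(theta - X) dt + sigma dW,
# standing in for one state variable of a multi-factor model.
kappa, theta, sigma = 2.0, 0.03, 0.02
T, n_paths = 5.0, 4000

def euler_terminal(dt):
    """Simulate X_T with an Euler scheme of step size dt."""
    steps = int(T / dt)
    out = []
    for _ in range(n_paths):
        x = theta
        for _ in range(steps):
            x += kappa * (theta - x) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        out.append(x)
    return out

def exact_terminal():
    """Draw X_T in one step from its known Gaussian transition law.

    The conditional mean stays at theta because the path starts there.
    """
    var = sigma ** 2 * (1 - math.exp(-2 * kappa * T)) / (2 * kappa)
    return [random.gauss(theta, math.sqrt(var)) for _ in range(n_paths)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

true_var = sigma ** 2 * (1 - math.exp(-2 * kappa * T)) / (2 * kappa)
euler_bias = abs(variance(euler_terminal(0.5)) - true_var)  # coarse step
exact_err = abs(variance(exact_terminal()) - true_var)      # one draw/path
```

With the coarse step the Euler terminal variance is roughly double the true value, while the exact scheme is unbiased at one draw per path, which is the computational saving the abstract describes.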

15.
Many evaluation techniques typically measure performance as deviations of average returns on actively managed funds from those predicted by some asset pricing model. Empirical evidence, however, has so far suggested that all asset pricing models lack empirical support, implying that the models contain mis-specification errors to various degrees. Evaluating mutual fund performance relative to any of these models thus becomes problematic. In this paper, we propose an approach to performance measurement that emphasizes minimizing explicitly the pricing error associated with an asset pricing function which is employed to compute performance measures. This approach is henceforth called the minimum specification-error (MSE) method. We also discuss the statistical properties for implementing MSE performance measures. To demonstrate the significance of the pricing error confounded in evaluation measurement, we contrast our methodology with the Grinblatt and Titman (1989) period weighting approach and with the empirical implementation of Chen and Knez (1996). We find that the greater the pricing error of passive assets, the larger the performance measures. Given the average pricing error generated from a collection of 163 diverse passive portfolios used in this analysis, the performance values assigned to a large number of the funds become statistically and economically insignificant.

16.
This paper assesses the measurement errors inherent in segment reporting. Measurement errors are gauged by comparing the correlation of segment results with their industry to the corresponding correlation for single line-of-business firms operating in the same industry. The findings show that the measurement errors in segment information, particularly earnings, are larger than those in the financial information reported by single line-of-business firms. The cross-sectional variation in the measurement errors can be traced to cost/revenue allocations, management intervention in segment reporting, and the operational structure of multi-segment firms. Market tests indicate that the information content of segment information is inversely related to the estimated measurement errors. This revised version was published online in November 2006 with corrections to the Cover Date.

17.
This study examines how comprehensive performance measurement systems (PMS) affect managerial performance. It is proposed that the effect of comprehensive PMS on managerial performance is indirect through the mediating variables of role clarity and psychological empowerment. Data collected from a survey of 83 strategic business unit managers are used to test the model. Results from a structural model tested using Partial Least Squares regression indicate that comprehensive PMS is indirectly related to managerial performance through the intervening variables of role clarity and psychological empowerment. This result highlights the role of cognitive and motivational mechanisms in explaining the effect of management accounting systems on managerial performance. In particular, the results indicate that comprehensive PMS influences managers’ cognition and motivation, which, in turn, influence managerial performance.

18.
This paper uses the pricing models of Cox, Ingersoll and Ross (1981), Richard and Sundaresan (1981), and French (1982) to examine the relation between futures and forward prices for copper and silver. There are significant differences between these prices. The average differences are generally consistent with the predictions of the futures and forward price models. However, these models are not helpful in describing intra-sample variations in the futures-forward price differences. This failure is apparently caused by measurement errors in both the price differences and in the explanatory variables.

19.
Conditioning Variables and the Cross Section of Stock Returns
Previous studies identify predetermined variables that predict stock and bond returns through time. This paper shows that loadings on the same variables provide significant cross-sectional explanatory power for stock portfolio returns. The loadings are significant given the three factors advocated by Fama and French (1993) and the four factors of Elton, Gruber, and Blake (1995). The explanatory power of the loadings on lagged variables is robust to various portfolio grouping procedures and other considerations. The results carry implications for risk analysis, performance measurement, cost-of-capital calculations, and other applications.

20.
This paper tests the Expectations Hypothesis (EH) of the term structure of interest rates using new data for Germany. The German term structure appears to forecast future short-term interest rates surprisingly well, compared with previous studies with US data, while it has lower predictive power for long-term interest rates. However, the direction suggested by the coefficient estimates is consistent with that implied by the EH, that is, when the term spread widens, long rates increase. The use of instrumental variables to deal with possible measurement errors in the data significantly improves regressions for the long rates. Moreover, re-estimation with proxy variables to account for the possibility of time-varying term premia confirms that the evolution of both short and long rates corresponds to the predictions of the EH and that most of the information is in the term spread. These results are important as they suggest that monetary policy in Germany could be guided by the slope of the term structure.
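The mechanics of the EH regression and the instrumental-variables repair above can be sketched with a simulated two-period term structure: under the EH the short-rate change equals twice the term spread plus news, so the regression slope should be 2, and measurement error in the observed spread attenuates OLS. All parameters below are illustrative, not estimates from the German data.

```python
import random

random.seed(5)

# AR(1) short rate; the two-period long rate averages the current and
# expected future short rate, so spread = (E[r_{t+1}] - r_t) / 2 and
# r_{t+1} - r_t = 2 * spread + news.
mu, phi, n = 0.04, 0.9, 10000
r = [mu]
for _ in range(n):
    r.append(mu + phi * (r[-1] - mu) + random.gauss(0, 0.002))

spread = [(phi - 1) * (r[t] - mu) / 2 for t in range(n)]   # R_t - r_t
change = [r[t + 1] - r[t] for t in range(n)]

# Observed spreads carry measurement error, which attenuates OLS away
# from the EH-implied slope of 2.
noisy = [s + random.gauss(0, 0.0002) for s in spread]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

b_ols = cov(noisy, change) / cov(noisy, noisy)

# Instrumenting with the lagged observed spread removes the attenuation,
# since successive measurement errors are uncorrelated.
z, x, y = noisy[:-1], noisy[1:], change[1:]
b_iv = cov(z, y) / cov(z, x)
```

The OLS slope lands well below 2, while the IV slope recovers the EH prediction up to sampling noise, mirroring the improvement the abstract reports for the long-rate regressions.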


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号