Similar Literature
20 similar records found.
1.
The findings of this study indicate that although the overall announcement effect of equity offerings on the issuing firms' common stock prices is negative, 23 percent of the sample firms have a positive two-day announcement-period cumulative abnormal return (CAR). Further tests show that firms in the positive-CAR group have significantly higher research and development (R&D) spending and profitability than firms in the negative-CAR group, while the average leverage ratio of firms in the positive-CAR group is significantly lower. Results of the regression analysis suggest a positive relation between the price effect and both R&D and the profitability ratio. It is also found that only in the sample of large R&D firms is the magnitude of R&D significantly associated with the magnitude of the announcement effect.

2.
The relative costs of misclassifying institutions by their financial health are an issue that concerns researchers. In this paper, a model and decision rule are developed that improve the probability of identifying those Savings and Loans that are predicted not to fail but are actually failing. For obvious reasons, stakeholders in those institutions are very much interested in avoiding this type I error. The study also provides evidence that the examination of Z-scores can be useful in identifying other financial institutions that may experience financial failure.
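The cost-weighted decision rule can be illustrated with a small sketch: classify an institution as failing when its score falls below a cutoff, and pick the cutoff that minimizes an expected cost in which a type I error (calling a failing institution healthy) is penalized more heavily than a type II error. All scores and cost weights below are hypothetical, not the paper's data.

```python
# Sketch: choosing a classification cutoff under asymmetric error costs.
# Scores and costs are illustrative, not estimates from the study.

def expected_cost(cutoff, failing_scores, healthy_scores, c1=10.0, c2=1.0):
    """Expected misclassification cost for a 'fail if score < cutoff' rule.

    c1: cost of a type I error (a failing firm scores at or above the cutoff).
    c2: cost of a type II error (a healthy firm scores below the cutoff).
    """
    type1 = sum(s >= cutoff for s in failing_scores) / len(failing_scores)
    type2 = sum(s < cutoff for s in healthy_scores) / len(healthy_scores)
    return c1 * type1 + c2 * type2

failing = [0.8, 1.1, 1.5, 1.9, 2.2]   # hypothetical scores of failed S&Ls
healthy = [1.7, 2.4, 2.9, 3.1, 3.8]   # hypothetical scores of survivors

# Grid-search the cutoff; a higher type I cost pushes the cutoff upward,
# accepting more false alarms in exchange for fewer missed failures.
best = min((expected_cost(c / 10, failing, healthy), c / 10)
           for c in range(5, 45))
print(best)
```

With these numbers the rule settles on a cutoff high enough that no failing firm is missed, at the price of flagging one healthy firm.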

3.
While it is widely recognized that bank regulators chose the policy of forbearance because of an undercapitalized bank insurance fund, a key policy question centers on what the effects of forbearance were. This study uses a survival model to track capital-deficient banks from December 1986 to December 1989, a period in which regulators explicitly used forbearance in managing troubled banks. Despite the contention that forbearance returns banks to safe-and-sound practices, this study concludes that forbearance did not enable troubled banks to return to viability. Furthermore, the evidence from the survival analysis provides negligible support, if any, for an interrelationship between forbearance and the too-big-to-fail doctrine.

4.
In this study, an interesting aspect of the secondary market's pricing of the riskiness of insured municipal bonds is examined. Do secondary market investors still consider the underlying intrinsic credit quality of the issuer in pricing insured bonds? Is the pricing of this “issuer effect” treated equally for both revenue and general obligation bonds? We find that, while the market does not differentiate between equally insured revenue and general obligation bonds, the intrinsic credit quality of the issuer is reflected in the bond issue's quality spread. The results suggest that investors are concerned with the default risk protection of insured municipal bonds regardless of whether they are revenue or general obligation bonds. The market does not perceive insurers to be perfect substitutes for issuers in a risk transfer. Further, the results suggest there is justification for obtaining an agency rating according to issuer credit quality even for insured bonds.

5.
Standard bankruptcy prediction methods lead to models weighted by the types of failure firms included in the estimation sample. Such weighted models may lead to severe classification errors when they are applied to types of failing (and non-failing) firms that are in the minority in the estimation sample (the frequency effect). The purpose of this study is to present a bankruptcy prediction method based on identifying two different failure types, the solidity and the liquidity bankruptcy firm, to avoid the frequency effect. Each type is described by its own theoretical gambler's-ruin model, which yields an approximation of failure probability separately for each type. These models are applied to data on randomly selected Finnish bankrupt and non-bankrupt firms. A logistic regression model based on a set of financial variables is used as a benchmark. Empirical results show that the resulting heavily solidity-weighted logistic model may lead to severe errors in classifying non-bankrupt firms. The present approach avoids such errors by separately evaluating the probability of solidity and of liquidity bankruptcy; the firm is not classified as bankrupt as long as neither probability exceeds the critical value. As a result, the present method slightly outperforms the logistic model in overall classification accuracy.
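The gambler's-ruin idea behind such failure-probability approximations can be sketched as follows: a firm with a buffer of k units gains one unit with probability p and loses one with probability q = 1 − p each period, and the classical ruin probability is (q/p)^k when p > q and 1 otherwise. The buffers and drift probabilities below are illustrative, not the paper's estimates.

```python
# Classical gambler's-ruin approximation of failure probability.
# Parameters are illustrative, not estimated from Finnish firm data.

def ruin_probability(k, p):
    """P(buffer ever hits 0 | start at k units, gain prob p per period)."""
    q = 1.0 - p
    if p <= q:              # non-positive drift: ruin is certain
        return 1.0
    return (q / p) ** k     # positive drift: ruin prob decays in the buffer

# A 'solidity' type (thin equity buffer) and a 'liquidity' type (thin cash
# buffer) can each be scored with their own buffer size and drift.
print(ruin_probability(5, 0.55))   # modest buffer, slightly positive drift
print(ruin_probability(3, 0.40))   # negative drift: ruin is certain
```

A firm would then be classified as non-bankrupt only if both type-specific probabilities stay below the critical value, mirroring the decision rule described above.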

6.
Bowen and Pelzman (1984) used a constant market share (CMS) model to analyze the performance of the U.S. in manufactured exports over the years 1962 to 1977. Their findings indicated continued erosion of U.S. competitiveness in the world market. During the eighties, U.S. performance in exporting manufactured goods was dismal and raised concern about the nation's ability to compete in the world market. In this paper, we present estimates of the CMS components of U.S. export growth for the years 1979 to 1987. We find that the decline in export competitiveness reported by Bowen and Pelzman (1984) continued and worsened during the eighties. Bowen and Pelzman (1984) reported the world-trade effect to be the major positive source of growth and the competitive effect to be the major retarding factor. We find that, in addition to the poor competitive effect during the eighties, the United States traded in sluggish export markets and within an environment of volatile growth of exports and demand in the industrialized world.

7.
This paper contributes to the literature by documenting the improved performance of bankruptcy prediction models after including corporate governance variables. The empirical results demonstrate better predictive power for financial bankruptcy than previous bankruptcy prediction models, particularly in the post-SOX period. Our theoretical argument emphasizes the urgent need for such improvements to the bankruptcy prediction model following the introduction of the SOX Act, with the empirical results providing intuitive economic meaning for all relevant market participants. Policymakers may consider enacting laws to include designs for corporate governance monitoring mechanisms, entrepreneurs may use this model to improve their own governance structures and compensation mechanisms to avoid financial bankruptcy, and investors may refer to it to ensure that ‘losers’ are excluded from their investment portfolios.

8.
A recent article by Krause (Qual Quant, doi:10.1007/s11135-012-9712-5, 2012) maintains that: (1) it is untenable to characterize the error term in multiple regression as simply an extraneous random influence on the outcome variable, because any amount of error implies the possibility of one or more omitted, relevant explanatory variables; and (2) the only way to guarantee the prevention of omitted-variable bias, and thereby justify causal interpretations of estimated coefficients, is to construct fully specified models that completely eliminate the error term. The present commentary argues that such an extreme position is impractical and unnecessary, given the availability of specialized techniques for dealing with the primary statistical consequence of omitted variables, namely endogeneity: the existence of correlations between included explanatory variables and the error term. In particular, the current article discusses the method of instrumental variables estimation, which can resolve the endogeneity problem in causal models where one or more relevant explanatory variables are excluded, thus allowing for accurate estimation of effects. An overview of recent methodological resources and software for conducting instrumental variables estimation is provided, with the aim of helping to place this crucial technique squarely in the statistical toolkit of applied researchers.
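The endogeneity problem and the instrumental variables remedy can be shown in a few lines on simulated data (all numbers illustrative, unrelated to the exchange above): the regressor x shares the omitted influence u with the outcome y, so ordinary least squares is biased, while an instrument z that moves x but not u recovers the true slope.

```python
import random

# Simulated endogeneity: x is correlated with the error u; z is a valid
# instrument (relevant for x, excluded from y's error).
random.seed(0)
n = 20000
z_s, x_s, y_s = [], [], []
for _ in range(n):
    z = random.gauss(0, 1)
    u = random.gauss(0, 1)                      # omitted influence on y
    x = 0.8 * z + 0.5 * u + random.gauss(0, 1)  # endogenous regressor
    y = 1.0 + 2.0 * x + u                       # true slope on x is 2
    z_s.append(z); x_s.append(x); y_s.append(y)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

beta_ols = cov(x_s, y_s) / cov(x_s, x_s)  # biased upward: cov(x, u) > 0
beta_iv = cov(z_s, y_s) / cov(z_s, x_s)   # IV slope: close to the true 2
print(beta_ols, beta_iv)
```

The IV slope is the ratio of the instrument's covariance with the outcome to its covariance with the regressor, which is exactly the just-identified two-stage least squares estimator.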

9.
An easy method to construct efficient blocked mixture experiments in the presence of fixed and/or random blocks is presented. The method can be used when qualitative variables are involved in a mixture experiment as well. The resulting designs are -optimal in the class of minimum support designs. It is illustrated that the minimum support designs are more efficient than orthogonally blocked mixture experiments presented in the literature and only slightly less efficient than -optimal designs.

10.
A bandit problem consisting of a sequence of n choices (n→∞) from infinitely many Bernoulli arms is considered. The parameters of the Bernoulli arms are independent and identically distributed random variables from a common distribution F on the interval [0,1], where F is continuous with F(0)=0 and F(1)=1. The goal is to investigate the asymptotic expected failure rates of k-failure strategies, and to obtain a lower bound for the expected failure proportion over all strategies presented in Berry et al. (1997). We show that the asymptotic expected failure rates of k-failure strategies when 0&lt;b≤1, as well as a lower bound, can be evaluated if the limit of the ratio (F(1)−F(t))/(1−t)^b exists as t→1 for some b&gt;0.

11.
Accounting information should help investors and creditors evaluate the amounts, timing, and uncertainty of firms' future cash receipts and disbursements. The Financial Accounting Standards Board (FASB) contends that accrual-based historical earnings are superior to cash flows in predicting future cash flows. However, Bowen, Burgstahler, and Daley (1986) showed that traditional measures of cash flows (net income (NI) plus depreciation and working capital from operations) appear to be better predictors of future cash flows than accrual accounting earnings. Since then, many researchers have articulated the importance of accounting data, especially cash flows and NI, in the predictive and forecasting processes. In this study, we empirically re-examine the ability of cash flows from operating activities (CFO) and accrual-based NI to predict firms' bankruptcy. In the past, the results of this type of research were mixed. Unlike previous research, we focus on the timing of predictive ability, i.e., which indicator, cash flows or NI, is faster in predicting a firm's bankruptcy. We also investigate the timing of auditors' issuance of a going-concern opinion. The preliminary results show that accrual-based NI is more accurate and faster than either CFO or the audit opinion in predicting firms' failures. On average, NI signals a firm's bankruptcy 2.41 years before the bankruptcy filing, while CFO signals it 1.48 years before filing. Auditors issued a going-concern opinion, another signal of failure, to only 16 of 41 bankrupt firms one year before bankruptcy, and no auditor issued a going-concern opinion two years before bankruptcy.

12.
Let \(X_{1},\ldots ,X_{n}\) be independent and identically distributed random variables with continuous distribution function, and denote by \(X_{(1)}\le \cdots \le X_{(n)}\) the corresponding order statistics. In the present paper, the concept of -neighbourhood runs, which is an extension of the usual run concept to the continuous case, is developed for the sequence of ordered random variables \(X_{(1)},\ldots ,X_{(n)}\).

13.
段隐华  王刚 《企业经济》2012,(1):185-188
Bankruptcy and liquidation are two interrelated stages in the operation of a distressed firm. Assuming that the firm's instantaneous revenue flow follows a geometric Brownian motion, this paper first builds a real-options model of the bankruptcy and liquidation decisions of a distressed firm under debt, then uses the engineering mathematics software Matlab 7.0 to run numerical simulations of the decision model and draws the corresponding conclusions. The study shows that when a distressed firm invests and operates with debt and is run by its creditors, the creditors will liquidate early; and that when the uncertainty of the instantaneous cash flow increases, a distressed firm that has invested with debt will delay liquidation and bankruptcy.
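The basic ingredient of such a model can be sketched in a few lines, here in Python rather than the paper's Matlab 7.0: a revenue flow following geometric Brownian motion, liquidated the first time it falls below a threshold. The drift, volatility, threshold, and horizon below are hypothetical, and this is only the simulation layer, not the option-valuation model itself.

```python
import math
import random

# Simulate dX = mu*X dt + sigma*X dW (geometric Brownian motion) and
# return the first time X drops below a liquidation barrier.
# All parameter values are illustrative.
def first_passage_time(x0, mu, sigma, barrier, T=10.0, dt=1 / 252, seed=0):
    """First time X < barrier, or None if the barrier is not hit before T."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < T:
        z = rng.gauss(0.0, 1.0)
        # Exact GBM step over dt using the log-normal transition density.
        x *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        t += dt
        if x < barrier:
            return t
    return None

print(first_passage_time(1.0, 0.02, 0.30, 0.5, seed=1))
```

On any single path, higher volatility makes an early barrier crossing more likely; in the option model, however, higher volatility also raises the value of waiting, which is how the delayed-liquidation result above arises.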

14.
Let \(X_{1},\ldots , X_{n}\) be lifetimes of components with independent non-negative generalized Birnbaum–Saunders random variables with shape parameters \(\alpha _{i}\) and scale parameters \(\beta _{i},~ i=1,\ldots ,n\), and \(I_{p_{1}},\ldots , I_{p_{n}}\) be independent Bernoulli random variables, independent of \(X_{i}\)’s, with \(E(I_{p_{i}})=p_{i},~i=1,\ldots ,n\). These are associated with random shocks on \(X_{i}\)’s. Then, \(Y_{i}=I_{p_{i}}X_{i}, ~i=1,\ldots ,n,\) correspond to the lifetimes when the random shock does not impact the components and zero when it does. In this paper, we discuss stochastic comparisons of the smallest order statistic arising from such random variables \(Y_{i},~i=1,\ldots ,n\). When the matrix of parameters \((h({\varvec{p}}), {\varvec{\beta }}^{\frac{1}{\nu }})\) or \((h({\varvec{p}}), {\varvec{\frac{1}{\alpha }}})\) changes to another matrix of parameters in a certain mathematical sense, we study the usual stochastic order of the smallest order statistic in such a setup. Finally, we apply the established results to two special cases: classical Birnbaum–Saunders and logistic Birnbaum–Saunders distributions.

15.
This paper assesses the classification performance of the Z‐Score model in predicting bankruptcy and other types of firm distress, with the goal of examining the model's usefulness for all parties, especially banks that operate internationally and need to assess the failure risk of firms. We analyze the performance of the Z‐Score model for firms from 31 European and three non‐European countries using different modifications of the original model. This study is the first to offer such a comprehensive international analysis. Except for the United States and China, the firms in the sample are primarily private, and include non‐financial companies across all industrial sectors. We use the original Z′′‐Score model developed by Altman, Corporate Financial Distress: A Complete Guide to Predicting, Avoiding, and Dealing with Bankruptcy (1983) for private and public manufacturing and non‐manufacturing firms. While there is some evidence that Z‐Score models of bankruptcy prediction have been outperformed by competing market‐based or hazard models, in other studies, Z‐Score models perform very well. Without a comprehensive international comparison, however, the results of competing models are difficult to generalize. This study offers evidence that the general Z‐Score model works reasonably well for most countries (the prediction accuracy is approximately 0.75) and classification accuracy can be improved further (above 0.90) by using country‐specific estimation that incorporates additional variables.
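For orientation, the four-ratio Z″-Score referenced above is commonly cited as Z″ = 6.56·X1 + 3.26·X2 + 6.72·X3 + 1.05·X4, with X1 = working capital / total assets, X2 = retained earnings / total assets, X3 = EBIT / total assets, and X4 = book equity / total liabilities. The sketch below uses that commonly cited form with hypothetical balance-sheet figures; readers should verify the coefficients against Altman (1983) before relying on them.

```python
# Z''-Score sketch (commonly cited 1983 four-variable form for private and
# non-manufacturing firms). Inputs are hypothetical, not from the study.

def z_double_prime(working_capital, retained_earnings, ebit,
                   book_equity, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = book_equity / total_liabilities
    return 6.56 * x1 + 3.26 * x2 + 6.72 * x3 + 1.05 * x4

# Hypothetical firm: the lower the score, the closer to the distress zone.
score = z_double_prime(working_capital=200, retained_earnings=300, ebit=150,
                       book_equity=500, total_assets=1000,
                       total_liabilities=500)
print(round(score, 3))
```

Country-specific re-estimation, as the paper describes, amounts to refitting these coefficients (and possibly adding variables) on local data rather than reusing the original weights.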

16.
Aiting Shen  Andrei Volodin 《Metrika》2017,80(6-8):605-625
In this paper, a Marcinkiewicz–Zygmund type moment inequality for extended negatively dependent (END, in short) random variables is established. Under some suitable uniform-integrability conditions, the \(L_r\) convergence, the weak law of large numbers, and the strong law of large numbers for usual normed sums and weighted sums of arrays of rowwise END random variables are investigated by using this moment inequality. In addition, some applications of the \(L_r\) convergence and the weak and strong laws of large numbers to nonparametric regression models based on END errors are provided. The results obtained in the paper generalize or improve corresponding ones for negatively associated random variables and negatively orthant dependent random variables.
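For orientation, the classical Marcinkiewicz–Zygmund inequality for independent, mean-zero random variables has the following shape; the paper's contribution is a version of this bound for END sequences (with the constant also depending on the dominating coefficient of the END family), so the display below is the standard independent-case statement, not the paper's theorem:

```latex
\[
  A_p\,\mathbb{E}\!\left(\sum_{i=1}^{n} X_i^{2}\right)^{\!p/2}
  \;\le\;
  \mathbb{E}\left|\sum_{i=1}^{n} X_i\right|^{p}
  \;\le\;
  B_p\,\mathbb{E}\!\left(\sum_{i=1}^{n} X_i^{2}\right)^{\!p/2},
  \qquad p \ge 1,\ \mathbb{E}X_i = 0,
\]
```

where \(A_p\) and \(B_p\) depend only on \(p\). It is the upper bound that drives the \(L_r\) convergence and law-of-large-numbers arguments described above.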

17.
This paper considers maximum likelihood (ML) estimation of the four-random-components stochastic frontier (SF) model of Kumbhakar et al. (J Prod Anal, doi:10.1007/s11123-012-0303-1, 2012) (KLH). We derive the log-likelihood function of the model using results from the closed-skew normal distribution. Our Monte Carlo analysis shows that MLE is more efficient and less biased than the multi-step KLH estimator. Moreover, we obtain closed-form expressions for the posterior expected values of the random effects, which are used to estimate short-run and long-run (in)efficiency as well as random firm effects. The model is general enough to nest most currently used panel SF models; hence, its appropriateness can be tested. This is exemplified by analyzing empirical results from three different applications.

18.
A note on consistency and unbiasedness of point estimation with fuzzy data
Based on the strong law of large numbers (SLLN) for fuzzy random variables in the uniform metric d, some asymptotic properties of point estimation with fuzzy random samples are investigated. The results of this paper establish a corresponding version of consistency and unbiasedness for point estimation with n-dimensional fuzzy samples, considering a kind of fuzzy statistic.

19.
A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using “big data” (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, & Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether “big data” are useful for modelling low-frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garotte. Our approach is to carry out a forecasting “horse race” using prediction models that are constructed based on a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables that are relevant to monetary policy assessment. In many instances, we find that several of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called “combination” models). We find that forecast combination methods are mean square forecast error (MSFE) “best” for only three variables out of 11 for a forecast horizon of h=1, and for four variables when h=3 or 12. In addition, non-PCA type factor estimation methods yield MSFE-best predictions for nine variables out of 11 for h=1, although PCA dominates at longer horizons.
Interestingly, we also find evidence of the usefulness of combination models for approximately half of our variables when h&gt;1. Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing “big data” for macroeconometric forecasting.
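The factor-extraction step at the heart of these exercises can be sketched in a toy setting: estimate the first principal component of a small simulated panel by power iteration and check that it tracks the latent factor. Everything below (panel size, loadings, noise level) is illustrative and unrelated to the paper's data or estimators.

```python
import random

# Toy diffusion-index sketch: one latent factor drives a 10-series panel.
random.seed(0)
T, N = 300, 10
factor = [random.gauss(0, 1) for _ in range(T)]
loadings = [random.uniform(0.5, 1.5) for _ in range(N)]
panel = [[loadings[i] * factor[t] + 0.5 * random.gauss(0, 1)
          for i in range(N)] for t in range(T)]

# Demean each series, then form the N x N sample covariance matrix.
means = [sum(row[i] for row in panel) / T for i in range(N)]
X = [[row[i] - means[i] for i in range(N)] for row in panel]
S = [[sum(X[t][i] * X[t][j] for t in range(T)) / T for j in range(N)]
     for i in range(N)]

# Power iteration for the leading eigenvector (first principal component).
v = [1.0] * N
for _ in range(200):
    w = [sum(S[i][j] * v[j] for j in range(N)) for i in range(N)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# Estimated factor: project each period's observations onto the eigenvector.
f_hat = [sum(X[t][i] * v[i] for i in range(N)) for t in range(T)]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

print(abs(corr(f_hat, factor)))  # close to 1: one factor summarizes the panel
```

In the forecasting setting, the estimated factor (or several of them) would then enter a predictive regression alongside AR terms, which is the "combination" structure the paper evaluates at scale.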

20.
R. Grübel 《Metrika》1985,32(1):327-337
Summary. This paper deals with the distribution of the first partial sum of a sequence of independent and identically distributed random variables that is divisible by some integer k &gt; 1. We give a formula for its characteristic function and obtain limit results as k → ∞. A new characterization of the geometric distribution is also given. The results are applicable to models such as periodically observed self-renewing aggregates.

