Found 20 similar documents (search time: 15 ms)
1.
Diagnostics for normal errors in regression typically use ordinary residuals, even though these residuals do not satisfy the assumptions that would justify their use. Case studies here show that such misuse may be critical. A remedy invokes recovered errors that have the required properties, exploiting the fact that such errors are closer to normality than the disturbances in the observations themselves. Simulation studies show consistent improvement over the usual methods in small samples. In addition, the effects of various model violations on normal diagnostics are examined. Received: January 1999
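A minimal sketch (our illustration, not the paper's method) of why ordinary residuals fail the i.i.d.-normal assumption even when the true errors satisfy it: the residual vector is (I − H)ε, so the residuals are correlated and have unequal variances (1 − h_ii)σ².

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# Hat matrix H = X (X'X)^{-1} X'. Ordinary residuals are (I - H) @ errors,
# so even i.i.d. normal errors yield correlated residuals whose
# variances (1 - h_ii) * sigma^2 differ across cases.
H = X @ np.linalg.solve(X.T @ X, X.T)
print("residual variances / sigma^2:", np.round(1 - np.diag(H), 3))
```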
2.
In the paper we study regressional versions of Lukacs' characterization of the gamma law. We consider constancy of regression instead of Lukacs' independence condition in three new schemes. Until now, the literature has considered constancy of the regressions of U = X/(X + Y) given V = X + Y for independent X and Y. Here we are concerned with constancy of regressions for X and Y, while independence of U and V is assumed instead.
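For reference, the classical Lukacs (1955) characterization that these regressional versions modify: if X and Y are independent, positive, non-degenerate random variables, then

$$U = \frac{X}{X+Y} \quad\text{and}\quad V = X+Y \quad\text{are independent}$$

if and only if X and Y both follow gamma distributions with a common scale parameter.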
3.
Lynn Roy LaMotte, Metrika, 1999, 50(2): 109-119
Deleted-case diagnostic statistics in regression analysis are based on changes in estimates due to deleting one or more cases. Bounds on these statistics, suggested in the literature for identifying influential cases, are widely used. In a linear regression model for Y in terms of X and Z, the model is “collapsible” with respect to Z if the Y−X relation is unchanged by deleting Z from the model. Deleted-case diagnostic statistics can be viewed as test statistics for collapsibility hypotheses in the mean shift outlier model. It follows that, for any given case, all deleted-case statistics test the same hypothesis, hence all have the same p-value, while the bounds correspond to different levels of significance among the several statistics. Furthermore, the bound for any particular deleted-case statistic gives widely varying levels of significance over the cases in the data set. Received: April 1999
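As a sketch of the unifying observation (our code, under the usual normal-errors assumption): in the mean-shift outlier model y = Xβ + γ·1{case i} + ε, the test of γ = 0 is the externally studentized residual, so any deleted-case statistic that is a monotone function of it yields the same p-value.

```python
import numpy as np
from scipy import stats

def mean_shift_pvalue(X, y, i):
    """p-value for H0: no mean shift at case i, via the externally
    studentized residual, which is t-distributed with n - p - 1 df."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)
    e = y - H @ y                                  # ordinary residuals
    s2 = e @ e / (n - p)
    # leave-one-out error variance estimate s_(i)^2
    s2_i = ((n - p) * s2 - e[i] ** 2 / (1 - H[i, i])) / (n - p - 1)
    t_i = e[i] / np.sqrt(s2_i * (1 - H[i, i]))
    return 2 * stats.t.sf(abs(t_i), df=n - p - 1)
```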
4.
Sándor Csörgő, Statistica Neerlandica, 2002, 56(2): 132-142
The phenomenon of smoothing dichotomy in random-design nonparametric regression is presented in nontechnical terms, drawing on two recent papers written jointly with Jan Mielniczuk. The subject is the asymptotic distribution of kernel estimators when the errors exhibit long-range dependence, being instantaneous functions either of Gaussian sequences or of infinite-order moving averages; which limit arises depends on the amount of smoothing.
5.
Two general classes of search designs for factor screening experiments with factors at three levels
In this paper we present general classes of factor screening designs for identifying a few important factors from a list of m (≥ 3) factors, each at three levels. A design is a subset of the 3^m possible runs. The problem of finding designs with a small number of runs is considered here. A main effect plan requires at least (2m + 1) runs for estimating the general mean and the linear and quadratic effects of the m factors. An orthogonal main effect plan requires, in addition, that the number of runs be a multiple of 9. For example, when m = 5, a main effect plan requires at least 11 runs and an orthogonal main effect plan requires 18 runs. The two general factor screening designs presented here are nonorthogonal designs with (2m − 1) runs. These designs, called search designs, permit us to search for and identify at most two important factors out of the m factors under the search linear model introduced in Srivastava (1975). For example, when m = 5, the two new plans given in this paper have 9 runs, a significant improvement over an orthogonal main effect plan with 18 runs and an improvement over a main effect plan with at least 11 runs. We compare these designs, for 4 ≤ m ≤ 10, using arithmetic and geometric means of the determinants, traces, and maximum characteristic roots of certain matrices. The two designs D1 and D2 are identical for m = 3, and this design is optimal in the class of all search designs under the six criteria discussed above. Designs D1 and D2 are also identical for m = 4 under some row and column permutations; consequently, D1 and D2 are equally good for searching for and identifying one important factor out of m factors when m = 4. The design D1 is marginally better than D2 for searching for and identifying one important factor when m = 5, …, 10. The design D1 is marginally better than D2 for searching for and identifying two important factors when m = 5, 7, 9, while D2 is somewhat better than D1 for m = 6, 8. For m = 10, D1 is marginally better than D2 with respect to the geometric mean, and D2 is marginally better than D1 with respect to the arithmetic mean, of the maximum characteristic roots.
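A minimal sketch of the six comparison criteria (our code; the design-specific matrices themselves are not reproduced in the abstract):

```python
import numpy as np

def criteria(M):
    """Determinant, trace, and maximum characteristic root of a
    symmetric positive definite information-type matrix M."""
    eigs = np.linalg.eigvalsh(M)
    return float(np.prod(eigs)), float(np.sum(eigs)), float(eigs.max())

def six_criteria(matrices):
    """Arithmetic and geometric means of each criterion over the
    collection of matrices associated with a design (all criteria
    are positive for positive definite matrices)."""
    vals = np.array([criteria(M) for M in matrices])      # shape (k, 3)
    return vals.mean(axis=0), np.exp(np.log(vals).mean(axis=0))
```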
6.
In this paper we consider 100% inspection of a product for which several characteristics have to satisfy given specification limits. A 100% inspection procedure may be necessary to bring the percentage of nonconforming items down to a level acceptable to the consumer. If one could observe the actual values of the characteristics, it would be possible to bring this percentage down to zero. Quite often, however, this is not possible, because measurement error occurs in measuring the characteristics. It is therefore common practice to inspect each characteristic by comparing its measurement to a test limit which is slightly stricter than the corresponding specification limit. An item is then accepted if, for each characteristic, the measurement conforms to the corresponding test limit. However, instead of inspecting an individual characteristic using only its own measurement, it is (much) more efficient to use the measurements of the other characteristics as well, especially if some of the characteristics are highly correlated. In this paper it is shown how the measurements of all the characteristics can be used to test whether an item is conforming.
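A minimal sketch of the idea, assuming jointly Gaussian characteristics and measurement errors (our illustrative model, not necessarily the paper's): accept an item when the conditional probability that all true characteristics lie within spec, given all measurements, is high enough.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_conforming(m, lo, hi, Sigma_x, Sigma_e, draws=10_000):
    """Estimate P(all true characteristics within [lo, hi] | measurements m)
    for true values x ~ N(0, Sigma_x) and measurements m = x + e,
    e ~ N(0, Sigma_e), by sampling the Gaussian conditional of x given m."""
    K = Sigma_x @ np.linalg.inv(Sigma_x + Sigma_e)   # regression of x on m
    mu_post = K @ m
    Sigma_post = Sigma_x - K @ Sigma_x
    xs = rng.multivariate_normal(mu_post, Sigma_post, size=draws)
    return np.mean(np.all((xs >= lo) & (xs <= hi), axis=1))
```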
7.
A number of studies have sought to determine whether economic forecasts have predictive value. These analyses used a single statistical methodology based on the independence of the actual and predicted changes. This paper asks whether the observed results are robust when alternative statistical methodologies are used to analyze the question. Procedures suggested by Cumby and Modest, as well as rationality tests, were applied to two data sets. The conclusions sometimes differ depending on the procedures used. The results yield a guideline for the diagnostics that should be employed in testing the value of economic forecasts.
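A sketch of a Cumby–Modest-style timing test (our simplified version, not necessarily the specification used in the paper): regress the realized change on an indicator that an increase was predicted; a zero slope means the forecasts have no predictive value.

```python
import numpy as np
import statsmodels.api as sm

def cumby_modest(actual, predicted):
    """Slope and p-value from regressing realized changes on an
    indicator that the forecast predicted an increase."""
    up = (np.asarray(predicted) > 0).astype(float)
    fit = sm.OLS(np.asarray(actual), sm.add_constant(up)).fit()
    return fit.params[1], fit.pvalues[1]
```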
8.
In this paper a new approach is presented for testing statistical hypotheses when the hypotheses are fuzzy rather than crisp. In order to establish optimality criteria, we first give new definitions of the probabilities of type I and type II errors. Then, on the basis of these new error probabilities, we state and prove the Neyman-Pearson lemma for testing fuzzy hypotheses, and we give a few examples. Received February 1998
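One natural way to define such error probabilities, shown purely as a hedged illustration (the paper's own definitions may differ), is to weight the classical errors by the membership functions $m_{H_0}$ and $m_{H_1}$ of the fuzzy hypotheses:

$$\alpha(\varphi)=\frac{\int E_\theta[\varphi]\,m_{H_0}(\theta)\,d\theta}{\int m_{H_0}(\theta)\,d\theta},\qquad \beta(\varphi)=\frac{\int E_\theta[1-\varphi]\,m_{H_1}(\theta)\,d\theta}{\int m_{H_1}(\theta)\,d\theta},$$

where $\varphi$ is the test function.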
9.
We present a simple agent-based model of a financial system composed of leveraged investors, such as banks, that invest in stocks and manage their risk using a Value-at-Risk constraint based on historical observations of asset prices. The Value-at-Risk constraint implies that when perceived risk is low, leverage is high, and vice versa, a phenomenon that has been dubbed pro-cyclical leverage. We show that this leads to endogenous irregular oscillations, in which gradual increases in stock prices and leverage are followed by drastic market collapses, i.e., a leverage cycle. This phenomenon is studied using simplified models that give a deeper understanding of the dynamics and the nature of the feedback loops and instabilities underlying the leverage cycle. We introduce a flexible leverage regulation policy in which it is possible to tune continuously from pro-cyclical to countercyclical leverage. When the policy is sufficiently countercyclical and bank risk is sufficiently low, the endogenous oscillation disappears and prices go to a fixed point. While there is always a leverage ceiling above which the dynamics are unstable, countercyclical leverage policies can be used to raise the ceiling. We also study the impact on leverage cycles of direct, temporal control of the bank's riskiness via the bank's required Value-at-Risk quantile. Under such a rule the regulator relaxes the Value-at-Risk quantile following a negative stock price shock and tightens it following a positive shock. While such a policy rule can reduce the amplitude of leverage cycles, its effectiveness is highly dependent on the choice of parameters. Finally, we investigate fixed limits on leverage and show how they can control the leverage cycle.
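A toy sketch of the pro-cyclical mechanism (our own minimal version, not the paper's calibrated model): a VaR constraint sets target leverage inversely proportional to recently estimated volatility, and changes in leverage feed back into the price.

```python
import numpy as np

rng = np.random.default_rng(2)

T, window, alpha, cap = 500, 50, 0.1, 20.0
prices = [1.0 + 0.01 * rng.standard_normal() for _ in range(window + 1)]
leverage = []
for t in range(T):
    rets = np.diff(np.log(prices[-(window + 1):]))
    sigma = rets.std() + 1e-6              # perceived risk from history
    lev = min(alpha / sigma, cap)          # VaR-implied leverage, capped
    # rising leverage means buying pressure, which pushes the price up;
    # falling leverage forces sales, which pushes it down
    impact = 0.05 * (lev - leverage[-1]) if leverage else 0.0
    leverage.append(lev)
    prices.append(prices[-1] * np.exp(impact + 0.01 * rng.standard_normal()))
```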
10.
Let X_1, X_2, …, X_n be a random sample from a continuous distribution with corresponding order statistics X_{1:n} ≤ X_{2:n} ≤ … ≤ X_{n:n}. All the distributions for which E(X_{k+r:n} | X_{k:n}) = a·X_{k:n} + b are identified, which solves the problem stated in Ferguson (1967). Received February 1998
11.
12.
Financial leverage is a means by which an enterprise adjusts its return on equity; the effects it produces differ greatly with the degree and method of its use. When choosing a financing-structure strategy, an enterprise must weigh the benefits of financial leverage against financial risk, so that investors can obtain the greatest possible return at the lowest possible risk.
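The mechanism can be made explicit with the standard leverage identity (our notation, added for illustration):

$$r_E = r_A + (r_A - r_D)\,\frac{D}{E},$$

where $r_E$ is the return on equity, $r_A$ the return on assets, $r_D$ the cost of debt, and $D/E$ the debt-to-equity ratio; leverage magnifies equity returns when $r_A > r_D$ and magnifies losses when $r_A < r_D$.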
13.
14.
Hypothesis testing is one component of statistical inference, and statistical inference plays a very important role in sports statistics. Two types of error arise in hypothesis testing. In practice, we often attend only to controlling the Type I error and frequently ignore the Type II error. In fact, controlling the Type II error is also essential. This paper examines the causes of the two types of error and how the Type II error can be controlled, and offers some methods for doing so.
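In standard notation, for a test of $H_0$ against $H_1$:

$$\alpha = \Pr(\text{reject } H_0 \mid H_0 \text{ true}), \qquad \beta = \Pr(\text{fail to reject } H_0 \mid H_1 \text{ true}),$$

and the power of the test is $1-\beta$; controlling the Type II error amounts to choosing the significance level and, above all, the sample size so that the power is acceptably high.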
15.
16.
We study a “direct test” of Chu and White (1992) proposed for detecting changes in the trend of a linear regression model. The power of this test depends strongly on a suitable estimate of the variance of the error variables involved. We discuss various types of variance estimators and derive their asymptotic properties under the null hypothesis of “no change” as well as under the alternative of “a change in linear trend”. A small simulation study illustrates the estimators' finite-sample behaviour.
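One standard member of the family of variance estimators discussed, shown as a sketch (the paper's own estimators may differ): a first-difference-based estimator, which remains consistent under a smooth or piecewise-linear trend because differencing removes most of the trend.

```python
import numpy as np

def diff_variance(y):
    """First-difference-based error variance estimate for y_t = trend_t + e_t:
    since E[(y_{t+1} - y_t)^2] ≈ 2·Var(e) when the trend varies slowly,
    average the squared differences and divide by two."""
    d = np.diff(np.asarray(y, dtype=float))
    return (d @ d) / (2 * len(d))
```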
17.
With the continuing expansion of education, local small and medium-sized colleges and universities have risen rapidly, becoming an important force in regional economic construction and social development. Emphasizing distinctive features, building strong disciplines, innovating educational concepts, and reforming management mechanisms are the right choices for small and medium-sized institutions seeking to escape misconceptions in their approach to running a school.
18.
The Use of Financial Leverage in Real Estate Investment and Risk-Prevention Strategies (Cited: 2; self-citations: 0; citations by others: 2)
Used sensibly, financial leverage can magnify the return on invested capital, and real estate investment is no exception. However, while full use of financial leverage brings investors high returns, it also brings high financial risk. It is therefore necessary to study how to obtain favorable financial leverage in real estate investment and how to reduce the associated financial risk, so as to improve real estate investors' investment efficiency and reduce their investment risk.
19.
The Dynamics of Capital Structure in Transition Economies (Cited: 2; self-citations: 0; citations by others: 2)
Eugene Nivorozhkin, Economics of Planning, 2004, 37(1): 25-45
This paper uses a dynamic unrestricted capital structure model to examine the determinants of private companies' target financial leverage and the speed of adjustment to it in two transition economies, the Czech Republic and Bulgaria. We explicitly model the adjustment of companies' leverage to a target leverage, and this target leverage is itself explained by a set of factors. The panel data methodology combines cross-section and time-series information. The results indicate that the Bulgarian corporate credit markets were less supply-constrained than those of the Czech Republic during the period under investigation. Bulgarian companies adjusted much faster to the target leverage than Czech firms. The speed of adjustment was positively related to the distance between target and observed ratio for Bulgarian companies, while the relationship was neutral for Czech companies. The conservative policies of Czech banks and their exposure control were likely responsible for the slower adjustment among the larger companies, while the opposite was true for Bulgarian banks and companies.
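A standard partial-adjustment formulation of the kind described (our notation, as a sketch):

$$L_{i,t} - L_{i,t-1} = \delta_{i,t}\,\bigl(L^{*}_{i,t} - L_{i,t-1}\bigr), \qquad L^{*}_{i,t} = \beta' x_{i,t} + u_{i,t},$$

where $L_{i,t}$ is observed leverage, $L^{*}_{i,t}$ the target leverage explained by firm characteristics $x_{i,t}$, and $\delta_{i,t}$ the speed of adjustment ($\delta = 1$ means full adjustment within one period).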
20.
Comparisons between different randomized response strategies have already been performed by several workers, but all have concentrated solely on comparing the variances of the appropriate estimators. Very little attention has been paid by these workers to the degree of privacy protection offered to the interviewees. In the present paper, an attempt is made in this direction, and some important randomized response strategies are compared with Warner's model, taking into account the aspect of privacy protection.
Received February 2000
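For reference, Warner's (1965) benchmark model: each respondent answers the sensitive question with probability $p$ and its complement with probability $1-p$, so that with $\pi$ the population proportion bearing the sensitive attribute and $\hat{\lambda}$ the observed proportion of “yes” answers among $n$ respondents,

$$\lambda = p\pi + (1-p)(1-\pi), \qquad \hat{\pi} = \frac{\hat{\lambda} - (1-p)}{2p-1} \;\; (p \neq \tfrac12), \qquad \operatorname{Var}(\hat{\pi}) = \frac{\pi(1-\pi)}{n} + \frac{p(1-p)}{n(2p-1)^2}.$$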