Similar Literature
20 similar articles found.
1.
Without acknowledging the paradigm difference between testing theory and predicting events, researchers in the field of management and organization continue to use the DEL-technique as a promising technique to evaluate theory based on cross-classification data analysis. We address the purpose and interpretation of the DEL-measure within the theory-testing and events-predicting paradigms. We argue that DEL, a proportionate reduction in error measure, is not to be interpreted in terms of the proportionate error reduction of knowing a prediction rule over not knowing it. In addition, a significant DEL-value is not to be interpreted as a dependence measure supporting acceptance of a hypothesis as the only and best relationship between two categorical variables, just as a non-significant DEL-value cannot be interpreted as a measure of independence. Furthermore, an alternative proportionate reduction in error measure generates unequivocally interpretable results compared to the DEL-technique. Ton Steerneman passed away on September 28, 2005.
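The DEL-measure discussed above belongs to the family of proportionate-reduction-in-error (PRE) statistics. As a hedged illustration of how a PRE statistic is computed from a cross-classification table, the sketch below uses Goodman–Kruskal's lambda, a well-known PRE measure, not DEL itself; the 2x2 table is hypothetical:

```python
import numpy as np

def goodman_kruskal_lambda(table):
    """Goodman-Kruskal lambda: the proportionate reduction in error when
    predicting the column variable of a contingency table with, versus
    without, knowledge of the row variable.

    lambda = (errors without predictor - errors with predictor) / errors without
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    e_without = n - table.sum(axis=0).max()   # always guess the modal column
    e_with = n - table.max(axis=1).sum()      # guess the modal column per row
    return (e_without - e_with) / e_without

# Hypothetical 2x2 cross-classification: rows = predictor, columns = outcome
table = [[30, 10],
         [5, 55]]
print(goodman_kruskal_lambda(table))
```

A value of 0 means the predictor does not reduce classification errors at all; 1 means it eliminates them, which is exactly the "knowing a prediction rule over not knowing it" reading that the abstract argues does not carry over to DEL.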

2.
The empirical literature that tests for purchasing power parity (PPP) by focusing on the stationarity of real exchange rates has so far provided, at best, mixed results. The behaviour of the yen real exchange rate has most stubbornly challenged the PPP hypothesis and deepened this puzzle. This paper contributes to this discussion by providing new evidence on the stationarity of bilateral yen real exchange rates. We employ a non-linear version of the augmented Dickey–Fuller test, based on an exponentially smooth-transition autoregressive model (ESTAR) that enhances the power of the tests against mean-reverting non-linear alternative hypotheses. Our results suggest that the bilateral yen real exchange rates against the other G7 and Asian currencies were mean reverting during the post-Bretton Woods era. Thus, the real yen behaviour may not be so different after all but simply perceived to be so because of the use of a restrictive alternative hypothesis in previous tests.
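A common auxiliary regression in this ESTAR unit root literature (the Kapetanios–Shin–Snell style test) checks the t-statistic on delta in dy_t = delta * y_{t-1}^3 + u_t, with a large negative value favouring nonlinear mean reversion. The sketch below is a simplified version of that idea on simulated data, not the paper's exact test (no lag augmentation, demeaned data only):

```python
import numpy as np

def kss_stat(y):
    """t-statistic on delta in the auxiliary regression
        dy_t = delta * y_{t-1}^3 + u_t
    against an ESTAR-type mean-reverting alternative (simplified sketch:
    demeaned data, no lag augmentation)."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    dy = np.diff(y)
    x = y[:-1] ** 3
    delta = (x @ dy) / (x @ x)          # OLS slope, no intercept
    resid = dy - delta * x
    s2 = (resid @ resid) / (len(dy) - 1)
    se = np.sqrt(s2 / (x @ x))
    return delta / se

rng = np.random.default_rng(0)
stationary = rng.standard_normal(500)   # strongly mean reverting
print(kss_stat(stationary))             # large negative value
```

On a random walk the statistic is much closer to zero, so the test separates the two cases; the null is rejected for large negative values against nonstandard critical values.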

3.
This paper considers tests of seasonal integration and cointegration for multivariate unobserved component models. First, the locally best invariant (LBI) test of the null hypothesis of a deterministic seasonal pattern against the alternative of seasonal integration is derived for a model with Gaussian i.i.d. disturbances and deterministic trend. Then the null hypothesis of seasonal cointegration is considered and a test for common nonstationary components at the seasonal frequencies is proposed. The tests are subsequently generalized to account for stochastic trends, weakly dependent errors and unattended unit roots. Asymptotic representations and critical values of the tests are provided, while the finite sample performance is evaluated by Monte Carlo simulation experiments. Finally, the tests are applied to the series of industrial production of the four largest countries of the European Monetary Union. It is found that Germany does not appear to cointegrate with the other countries at most seasonal frequencies, while there seems to exist a common nonstationary seasonal component between France, Italy and Spain. Copyright © 2006 John Wiley & Sons, Ltd.

4.
Credit ratings are ordinal predictions of the default risk of an obligor. The most commonly used measure for evaluating their predictive accuracy is the Accuracy Ratio, or equivalently, the area under the ROC curve. The disadvantages of these measures are that they treat default as a binary variable, thus neglecting the timing of default events, and they fail to use all of the information available from censored observations. We present an alternative measure which is related to the Accuracy Ratio but does not suffer from these drawbacks. As a second contribution, we study statistical inference for the Accuracy Ratio and the proposed measure in the case of multiple cohorts of obligors with overlapping lifetimes. We derive methods which use more sample information and lead to tests which are more powerful than alternatives which filter just the independent part of the dataset. All procedures are illustrated in the empirical section using a dataset of S&P Credit Ratings.
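The standard measure the abstract starts from is easy to make concrete. The sketch below computes the area under the ROC curve via its Mann–Whitney formulation and the Accuracy Ratio as 2*AUC - 1; the scores and default flags are hypothetical, and the paper's censoring-aware alternative is not reproduced here:

```python
import numpy as np

def auc_from_scores(scores, defaulted):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen defaulter has a worse (higher) risk score than a
    randomly chosen survivor, ties counting one half."""
    scores = np.asarray(scores, dtype=float)
    defaulted = np.asarray(defaulted, dtype=bool)
    d = scores[defaulted][:, None]      # defaulter scores (column)
    s = scores[~defaulted][None, :]     # survivor scores (row)
    wins = (d > s).sum() + 0.5 * (d == s).sum()
    return wins / (defaulted.sum() * (~defaulted).sum())

def accuracy_ratio(scores, defaulted):
    """Accuracy Ratio (Gini coefficient of the CAP curve) = 2*AUC - 1."""
    return 2.0 * auc_from_scores(scores, defaulted) - 1.0

# Hypothetical data: higher score = riskier; 1 = defaulted within horizon
scores = [3, 1, 2, 4]
defaulted = [0, 0, 1, 1]
print(accuracy_ratio(scores, defaulted))
```

An Accuracy Ratio of 1 corresponds to perfect ordinal discrimination and 0 to a random ordering, which is the binary-default view whose limitations the paper addresses.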

5.
In applied research in econometrics, a general model determined from the current knowledge of economic theory often establishes a ‘natural’ method of embedding a number of otherwise non-nested hypotheses. Under these circumstances, significance tests of various hypotheses can be carried out within the classical framework, and tests of non-nested or separate families of hypotheses do not require the development of new statistical methods; the application of some suitable variant of the likelihood ratio testing procedure will be quite appropriate.

There are, however, many occasions in applied econometrics where the hypotheses under consideration are intended to provide genuine rival explanations of the same given phenomenon and the state of economic theory is not such as to furnish us with a general model that contains both of the rival hypotheses in a ‘natural’ and theoretically consistent manner. A number of investigators have advocated that even when a ‘natural’ comprehensive model containing both of the hypotheses under consideration cannot be obtained from theoretical considerations, it is still appropriate to base significance tests of non-nested hypotheses upon a combined model ‘artificially’ constructed from the rival alternatives. Moreover, in a recent paper on the application of Lagrange Multiplier (LM) tests to model specification, T.S. Breusch and A.R. Pagan (1980) have claimed that Cox's test statistic is connected to an LM or ‘score’ statistic derived from the application of the LM method to an exponentially combined model earlier employed by A.C. Atkinson (1970).

Although the use of ‘artificially’ constructed comprehensive models for testing separate families of hypotheses is analytically tempting, it is nevertheless subject to two major difficulties. Firstly, in many cases of interest in econometrics, the structural parameters under the combined hypothesis are not identified. Secondly, the log likelihood function of the artificially constructed model has singularities under both the null and alternative hypotheses.

The paper first examines the derivation of LM statistics in the case of non-nested hypotheses and shows that Atkinson's general test statistic, or Breusch and Pagan's result, can be regarded as an LM test if the parameters of the alternative hypothesis are known. The paper also shows that unless all the parameters of the combined models are identified, no meaningful test of the separate families of hypotheses by the artificial embedding procedure is possible, and in the identified case an expression for the LM statistic which avoids the problem of the singularity of the information matrix under the null and the alternative hypotheses is obtained.

The paper concludes that none of the artificial embedding procedures is satisfactory for testing non-nested models, and they should be abandoned. It emphasizes, however, that despite these difficulties associated with the use of artificial embedding procedures, Cox's original statistic (which is not derived as an LM statistic and does not depend on any arbitrary synthetic combination of hypotheses) can still be employed as a useful procedure for testing the rival hypotheses often encountered in applied econometrics.

6.
We compare some nonparametric tests for the (k + 1)-sample problem with additive effects under the constraint that in every sample the treatment effect is not less than that in the first sample, i.e. some control. The behavior of the Pitman efficiency of the respective tests (essentially tests of Kruskal–Wallis, Wilcoxon, Fligner–Wolfe, Steel, and Nemenyi type) is discussed, which turns out to depend on the level and power of the tests as well as on the directions from which the alternative tends to the hypothesis. It will be shown that none of the tests under consideration is uniformly superior to the others.
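Of the test families compared above, the Kruskal–Wallis statistic is the simplest to write down; the sketch below computes it from scratch on hypothetical normal samples (no tie correction, and without the ordered-alternative restriction that the Fligner–Wolfe and Steel tests impose):

```python
import numpy as np

def kruskal_wallis_h(*samples):
    """Kruskal-Wallis H statistic (no tie correction; adequate for
    continuous data). Under the null of identical distributions, H is
    approximately chi-square with k-1 degrees of freedom."""
    all_vals = np.concatenate(samples)
    n = len(all_vals)
    ranks = np.empty(n)
    ranks[np.argsort(all_vals)] = np.arange(1, n + 1)   # rank 1 = smallest
    h = 0.0
    start = 0
    for s in samples:
        r = ranks[start:start + len(s)]
        h += len(s) * (r.mean() - (n + 1) / 2.0) ** 2
        start += len(s)
    return 12.0 * h / (n * (n + 1))

# Hypothetical control plus two upward-shifted treatment samples
rng = np.random.default_rng(42)
control = rng.normal(0.0, 1.0, 30)
treat1 = rng.normal(0.8, 1.0, 30)
treat2 = rng.normal(1.2, 1.0, 30)
print(kruskal_wallis_h(control, treat1, treat2))
```

Against ordered alternatives of the kind described in the abstract, tests that exploit the ordering (Fligner–Wolfe etc.) can beat this omnibus statistic, which is exactly the efficiency comparison the paper carries out.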

7.
This paper considers the problem of constructing confidence sets for the date of a single break in a linear time series regression. We establish analytically and by small sample simulation that the current standard method in econometrics for constructing such confidence intervals has a coverage rate far below nominal levels when breaks are of moderate magnitude. Given that breaks of moderate magnitude are a theoretically and empirically relevant phenomenon, we proceed to develop an appropriate alternative. We suggest constructing confidence sets by inverting a sequence of tests. Each of the tests maintains a specific break date under the null hypothesis, and rejects when a break occurs elsewhere. By inverting a certain variant of a locally best invariant test, we ensure that the asymptotic critical value does not depend on the maintained break date. A valid confidence set can hence be obtained by assessing which of the sequence of test statistics exceeds a single number.
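The test-inversion logic in the last two sentences is generic and can be sketched directly: keep every maintained break date whose statistic stays below the single (date-invariant) critical value. The statistic below is a toy stand-in, not the paper's locally best invariant test:

```python
def confidence_set_by_inversion(dates, stat_fn, critical_value):
    """Generic test inversion: the confidence set collects every
    maintained break date whose test statistic does not exceed the
    date-invariant critical value."""
    return [d for d in dates if stat_fn(d) <= critical_value]

# Toy statistic, small near the "true" break at t = 50 and growing with
# distance from it -- a hypothetical stand-in for the LBI-type statistic.
toy_stat = lambda d: ((d - 50) / 10.0) ** 2

cs = confidence_set_by_inversion(range(30, 71), toy_stat, critical_value=1.0)
print(cs[0], cs[-1])   # endpoints of the confidence set
```

Because the critical value does not depend on the maintained date, one comparison per candidate date suffices, which is the computational point the abstract emphasizes.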

8.
In this study, Tinbergen's hierarchy hypothesis is extended to include Dökmeci's optimization of the hierarchical production model. The optimal location of hierarchically coordinated plants is determined on a non-homogeneous plane by taking into consideration price-elastic demand, production cost and transportation cost. The objective is to determine the maximum-profit location while satisfying the income constraint of the region. A stepwise heuristic approach is used for the solution. In the region, the markets are divided into optimum subsets according to a chosen number of plants in each level. Market demand is calculated with respect to a selected uniform price. The optimum location of plants is calculated iteratively by the use of Dökmeci's model in each level of the hierarchy. Then, the same procedure is repeated for different numbers of plants in each level of the hierarchy by taking into consideration the interdependence among the levels. The alternative which produces the maximum profits within the limits of regional income is determined as the best system.

9.
Perron [Perron, P., 1989. The great crash, the oil price shock and the unit root hypothesis. Econometrica 57, 1361–1401] introduced a variety of unit root tests that are valid when a break in the trend function of a time series is present. The motivation was to devise testing procedures that were invariant to the magnitude of the shift in level and/or slope. In particular, if a change is present it is allowed under both the null and alternative hypotheses. This analysis was carried out under the assumption of a known break date. The subsequent literature aimed to devise testing procedures valid in the case of an unknown break date. However, in doing so, most of the literature and, in particular, the commonly used test of Zivot and Andrews [Zivot, E., Andrews, D.W.K., 1992. Further evidence on the great crash, the oil price shock and the unit root hypothesis. Journal of Business and Economic Statistics 10, 251–270], assumed that if a break occurs, it does so only under the alternative hypothesis of stationarity. This is undesirable since (a) it imposes an asymmetric treatment when allowing for a break, so that the test may reject when the noise is integrated but the trend is changing; (b) if a break is present, this information is not exploited to improve the power of the test. In this paper, we propose a testing procedure that addresses both issues. It allows a break under both the null and alternative hypotheses and, when a break is present, the limit distribution of the test is the same as in the case of a known break date, thereby allowing increased power while maintaining the correct size. Simulation experiments confirm that our procedure offers an improvement over commonly used methods in small samples.

10.
Summary: We consider a new principle for constructing tests of one simple hypothesis against another simple hypothesis. In order to avoid a certain poverty of the Neyman-Pearson approach, we allow a test more than two possible outcomes, which are ordered according to the strength of support an observation gives to one hypothesis against the other. As a measure of support we take not the ratio of the likelihoods but the ratio of the two probabilities of some appropriate region in the sample space. Therefore a frequency interpretation of our tests is possible.

11.
Recent research has found that trend-break unit root tests derived from univariate linear models do not support the hypothesis of long-run purchasing power parity (PPP) for US dollar real exchange rates. In this paper univariate smooth transition models are utilized to develop unit root tests that allow under the alternative hypothesis for stationarity around a gradually changing deterministic trend function. These tests reveal statistically significant evidence against the null hypothesis of a unit root for the real exchange rates of a number of countries against the US dollar. However, restrictions consistent with long-run PPP are rejected for some of the countries for which a rejection of the unit root hypothesis is obtained. Copyright © 2005 John Wiley & Sons, Ltd.

12.
Regression tests of the expectations theory of the term structure typically reject the null hypothesis of orthogonality between implied forecast errors and the yield spreads. In the statistical literature on the term structure, these rejections are sometimes attributed to time-varying liquidity premia, and Engle et al. (1987) suggest that the ARCH-M model of time-variation in the liquidity premium may be sufficient to account for rejections of the expectations theory. We use non-parametric (kernel) regression to explore the regression test results on a number of data sets, and find some evidence of a persistent deviation from orthogonality for large absolute values of the spread. Incorporating ARCH-in-mean into models of the term premium indicates that this specification does explain significant time variation in liquidity premia, but the effect does not appear to be sufficient to account for all of the deviations from orthogonality of forecast errors and spreads.
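The kernel-regression tool the abstract relies on is the Nadaraya–Watson estimator of a conditional mean. The sketch below fits it with a Gaussian kernel to simulated data whose "forecast error" deviates from zero only for large absolute spreads, mimicking the pattern described; all data and bandwidth choices are hypothetical:

```python
import numpy as np

def nw_kernel_regression(x, y, grid, bandwidth):
    """Nadaraya-Watson estimate of E[y|x] on a grid, Gaussian kernel:
    m(g) = sum_i K((g - x_i)/h) * y_i / sum_i K((g - x_i)/h)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    grid = np.asarray(grid, dtype=float)
    u = (grid[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(3)
spread = rng.uniform(-2.0, 2.0, 400)
# Simulated forecast error: zero for |spread| < 1.5, nonzero beyond it
error = (0.5 * np.sign(spread) * np.maximum(np.abs(spread) - 1.5, 0.0)
         + 0.1 * rng.standard_normal(400))
fit = nw_kernel_regression(spread, error, [0.0, 1.9], bandwidth=0.2)
print(fit)   # near zero at spread = 0, positive at spread = 1.9
```

The fitted curve being flat at zero for moderate spreads but drifting away in the tails is exactly the kind of "deviation from orthogonality for large absolute values of the spread" a parametric linear regression would average away.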

13.
An essential element of any realistic investment portfolio selection is the consideration of transaction costs. Our purpose in this paper is to determine the maximum return, and the corresponding number of securities to buy that yields this return, when practical constraints related to budget, buy-in thresholds, and transaction costs are taken into consideration. Dealing with the portfolio selection and optimization problem from the point of view of individual investors, we arrive at an analytic result, leading to a new and simple alternative to heuristic algorithms. Moreover, this result can be considered as another approach to integer optimization.
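The problem the abstract describes can be made concrete with a tiny brute-force version: choose integer holdings to maximize expected gain net of a fixed per-security transaction cost, subject to a budget that includes those costs. This exhaustive search is a hypothetical stand-in for the paper's analytic solution, with made-up prices and returns:

```python
import numpy as np
from itertools import product

def best_integer_portfolio(prices, exp_returns, fixed_cost, budget, max_units=10):
    """Exhaustive search over integer holdings: maximize expected gain
    net of a fixed per-security transaction cost, subject to a budget
    on the cost-inclusive outlay."""
    prices = np.asarray(prices, dtype=float)
    exp_returns = np.asarray(exp_returns, dtype=float)
    best_units, best_gain = None, -np.inf
    for units in product(range(max_units + 1), repeat=len(prices)):
        u = np.array(units)
        k = int((u > 0).sum())                 # securities actually traded
        outlay = u @ prices + fixed_cost * k
        if outlay > budget:
            continue                           # budget constraint violated
        gain = u @ (prices * exp_returns) - fixed_cost * k
        if gain > best_gain:
            best_units, best_gain = units, gain
    return best_units, best_gain

# Hypothetical instance: three securities, unit fixed cost, budget of 100
units, gain = best_integer_portfolio(
    prices=[10.0, 20.0, 30.0],
    exp_returns=[0.05, 0.10, 0.02],
    fixed_cost=1.0,
    budget=100.0,
)
print(units, gain)
```

Note how the fixed cost acts as a buy-in threshold: a security is only worth touching when its total expected gain exceeds the cost of trading it at all, which is the integer-programming flavour the abstract alludes to.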

14.
For univariate time series we suggest a new variant of efficient score tests against fractional alternatives. This test has three important merits. First, by means of simulations we observe that it is superior in terms of size and power in some situations of practical interest. Second, it is easily understood and implemented as a slight modification of the Dickey–Fuller test, although our score test has a limiting normal distribution. Third and most important, our test generalizes to multivariate cointegration tests just as the Dickey–Fuller test does. Thus it allows one to determine the cointegration rank of fractionally integrated time series. It does so by solving a generalized eigenvalue problem of the type proposed by Johansen (J. Econ. Dyn. Control 12 (1988) 231). However, the limiting distribution of the corresponding trace statistic is χ2, where the degrees of freedom depend only on the cointegration rank under the null hypothesis. The usefulness of the asymptotic theory for finite samples is established in a Monte Carlo experiment.

15.
In this paper, we develop two cointegration tests for two varying coefficient cointegration regression models, respectively. Our test statistics are residual based. We derive the asymptotic distributions of the test statistics under the null hypothesis of cointegration and show that they are consistent against the alternative hypotheses. We also propose a wild bootstrap procedure, combined with the continuous moving block bootstrap method proposed in Paparoditis and Politis (2001) and Phillips (2010), to rectify severe distortions found in simulations when the sample size is small. We apply the proposed test statistic to examine the purchasing power parity (PPP) hypothesis between the US and Canada. In contrast to the existing results from linear cointegration tests, our varying coefficient cointegration test does not reject that PPP holds between the US and Canada.
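The moving block bootstrap ingredient mentioned above resamples blocks of consecutive observations so that short-run dependence survives the resampling. The sketch below draws one moving-block resample; it illustrates only this building block, not the paper's full wild-bootstrap procedure:

```python
import numpy as np

def moving_block_bootstrap(x, block_length, rng):
    """One moving-block bootstrap resample: draw blocks of consecutive
    observations, with replacement, from all overlapping blocks of the
    series, concatenate them, and trim to the original length."""
    x = np.asarray(x)
    n = len(x)
    n_blocks = int(np.ceil(n / block_length))
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    resample = np.concatenate([x[s:s + block_length] for s in starts])
    return resample[:n]

rng = np.random.default_rng(7)
series = np.arange(20.0)   # ordered values make the block structure visible
boot = moving_block_bootstrap(series, block_length=5, rng=rng)
print(boot)
```

Within each block the original ordering is preserved (consecutive values stay consecutive), which is what lets bootstrap replicates mimic the serial dependence of the data.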

16.
This paper proposes a vector equilibrium correction model of stock returns that exploits the information in the futures market, while allowing for both regime-switching behaviour and international spillovers across stock market indices. Using data for three major stock market indices since 1989, we find that: (i) in sample, our model outperforms several alternative models on the basis of standard statistical criteria; (ii) in out-of-sample forecasting, our model does not produce significant gains in terms of point forecasts relative to more parsimonious alternative specifications, but it does so both in terms of market timing ability and in density forecasting performance. The economic value of the density forecasts is illustrated with an application to a simple risk management exercise. Copyright © 2005 John Wiley & Sons, Ltd.

17.
18.
A Comparative Study of Unit Root Tests with Panel Data and a New Simple Test
The panel data unit root test suggested by Levin and Lin (LL) has been widely used in several applications, notably in papers on tests of the purchasing power parity hypothesis. This test is based on a very restrictive hypothesis which is rarely ever of interest in practice. The Im–Pesaran–Shin (IPS) test relaxes the restrictive assumption of the LL test. This paper argues that although the IPS test has been offered as a generalization of the LL test, it is best viewed as a test for summarizing the evidence from a number of independent tests of the same hypothesis. This problem has a long statistical history going back to R. A. Fisher. This paper suggests the Fisher test as a panel data unit root test, compares it with the LL and IPS tests, and the Bonferroni bounds test which is valid for correlated tests. Overall, the evidence points to the Fisher test with bootstrap-based critical values as the preferred choice. We also suggest the use of the Fisher test for testing stationarity as the null and also in testing for cointegration in panel data.
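Fisher's combination idea is short enough to sketch directly: pool the p-values of the N individual unit root tests via P = -2 * sum(ln p_i), which is chi-square with 2N degrees of freedom under the joint null (independent tests). The p-values below are hypothetical, and the chi-square tail is evaluated with the closed form available for even degrees of freedom:

```python
import numpy as np
from math import exp, factorial

def fisher_panel_stat(pvalues):
    """Fisher's combined statistic P = -2 * sum(log p_i); chi-square
    with 2N degrees of freedom under the joint null of a unit root in
    every series (independent individual tests)."""
    p = np.asarray(pvalues, dtype=float)
    return -2.0 * np.log(p).sum()

def chi2_sf_even_df(x, df):
    """Survival function of a chi-square with an even number of degrees
    of freedom df = 2m, via the closed-form Poisson sum."""
    m = df // 2
    return exp(-x / 2.0) * sum((x / 2.0) ** j / factorial(j) for j in range(m))

# Hypothetical p-values from N = 4 country-level unit root tests
pvals = [0.04, 0.10, 0.30, 0.02]
stat = fisher_panel_stat(pvals)
pooled_p = chi2_sf_even_df(stat, df=2 * len(pvals))
print(stat, pooled_p)
```

Here no single country need reject decisively for the pooled evidence to do so, which is the "summarizing the evidence from a number of independent tests" reading the paper gives to panel unit root testing.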

19.
Permutation tests for serial independence using three different statistics based on empirical distributions are proposed. These tests are shown to be consistent under the alternative of m-dependence and are all simple to perform in practice. A small simulation study demonstrates that the proposed tests have good power in small samples. The tests are then applied to Canadian gross domestic product (GDP) data, corroborating the random-walk hypothesis of GDP growth.
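The permutation logic is easy to illustrate with a simpler statistic than the empirical-distribution ones in the paper: under serial independence the observations are exchangeable, so the null distribution of any dependence statistic can be obtained by shuffling the series. The sketch below uses the absolute lag-1 autocorrelation on a simulated AR(1) series (all choices hypothetical):

```python
import numpy as np

def perm_test_lag1(x, n_perm=999, seed=0):
    """Permutation test of serial independence using the absolute lag-1
    sample autocorrelation as the statistic. Returns (statistic, p-value
    with the +1 correction)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)

    def stat(z):
        z = z - z.mean()
        return abs((z[:-1] @ z[1:]) / (z @ z))

    observed = stat(x)
    count = sum(stat(rng.permutation(x)) >= observed for _ in range(n_perm))
    return observed, (count + 1) / (n_perm + 1)

# Simulated AR(1) series with coefficient 0.6: serially dependent
rng = np.random.default_rng(1)
ar1 = np.empty(200)
ar1[0] = rng.standard_normal()
for t in range(1, 200):
    ar1[t] = 0.6 * ar1[t - 1] + rng.standard_normal()

obs, p = perm_test_lag1(ar1)
print(obs, p)
```

No asymptotic distribution is needed: the permutation distribution is exact under the null, which is why such tests are "simple to perform in practice" as the abstract notes.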

20.
In this paper a nonparametric variance ratio testing approach is proposed for determining the cointegration rank in fractionally integrated systems. The test statistic is easily calculated without prior knowledge of the integration order of the data, the strength of the cointegrating relations, or the cointegration vector(s). The latter property makes it easier to implement than regression-based approaches, especially when examining relationships between several variables with possibly multiple cointegrating vectors. Since the test is nonparametric, it does not require the specification of a particular model and is invariant to short-run dynamics. Nor does it require the choice of any smoothing parameters that change the test statistic without being reflected in the asymptotic distribution. Furthermore, a consistent estimator of the cointegration space can be obtained from the procedure. The asymptotic distribution theory for the proposed test is non-standard but easily tabulated or simulated. Monte Carlo simulations demonstrate excellent finite sample properties, even rivaling those of well-specified parametric tests. The proposed methodology is applied to the term structure of interest rates, where, contrary to both fractional- and integer-based parametric approaches, evidence in favor of the expectations hypothesis is found using the nonparametric approach.
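The univariate building block behind variance ratio testing is the Lo–MacKinlay-style ratio VR(q): the variance of q-period increments divided by q times the variance of 1-period increments, near 1 for a random walk and below 1 under mean reversion. The paper's statistic is a multivariate generalization of this idea; the sketch below shows only the familiar univariate version on simulated data:

```python
import numpy as np

def variance_ratio(x, q):
    """VR(q) = Var(x_{t+q} - x_t) / (q * Var(x_{t+1} - x_t)).
    Approximately 1 for a random walk; below 1 when increments are
    negatively autocorrelated (mean reversion)."""
    x = np.asarray(x, dtype=float)
    d1 = np.diff(x)          # 1-period increments
    dq = x[q:] - x[:-q]      # overlapping q-period increments
    return dq.var() / (q * d1.var())

rng = np.random.default_rng(5)
rw = np.cumsum(rng.standard_normal(5000))    # random walk: VR near 1
noise = rng.standard_normal(5000)            # stationary level: VR near 1/q
print(variance_ratio(rw, 5), variance_ratio(noise, 5))
```

For a stationary series the q-period increments grow no faster than the 1-period ones, pulling the ratio toward 1/q; ratios of such quantities across linear combinations of the system are what let the rank be read off nonparametrically.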


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)