Similar Literature
20 similar documents found.
1.
A nonparametric method for comparing multiple forecast models is developed and implemented. The hypothesis of Optimal Predictive Ability generalizes the Superior Predictive Ability hypothesis from a single given loss function to an entire class of loss functions. Distinction is drawn between General Loss functions, Convex Loss functions, and Symmetric Convex Loss functions. The research hypothesis is formulated in terms of moment inequality conditions, and the empirical moment conditions are reduced to an exact and finite system of linear inequalities based on piecewise-linear loss functions. The hypothesis can be tested in a statistically consistent way using a blockwise Empirical Likelihood Ratio test statistic. A computationally feasible test procedure computes the test statistic using Convex Optimization methods, and estimates conservative, data-dependent critical values using a majorizing chi-square limit distribution and a moment selection method. An empirical application to inflation forecasting reveals that, when convexity and symmetry are assumed, the vast majority of thousands of forecast models are redundant, leaving predominantly Phillips Curve-type models.
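The piecewise-linear construction lends itself to a compact numerical illustration. The sketch below is not the paper's procedure: the data are synthetic, and `linlin_loss` and the asymmetry grid are my own illustrative choices. It compares two forecasts' average lin-lin losses across a grid of asymmetry parameters, the kind of loss-differential moments that can be stacked into a system of linear inequalities:

```python
import numpy as np

def linlin_loss(e, tau):
    """Piecewise-linear (lin-lin) loss with asymmetry parameter tau in (0, 1)."""
    e = np.asarray(e, dtype=float)
    return np.where(e >= 0, tau * e, (tau - 1.0) * e)

def loss_differentials(y, f_bench, f_comp, taus):
    """Mean lin-lin loss difference (benchmark minus competitor) at each tau;
    negative entries mean the benchmark has lower average loss at that tau."""
    e_b, e_c = y - f_bench, y - f_comp
    return np.array([linlin_loss(e_b, t).mean() - linlin_loss(e_c, t).mean()
                     for t in taus])

rng = np.random.default_rng(0)
y = rng.normal(size=500)
f_good = y + rng.normal(scale=0.5, size=500)  # accurate forecast
f_bad = y + rng.normal(scale=2.0, size=500)   # noisy forecast
d = loss_differentials(y, f_good, f_bad, taus=np.linspace(0.1, 0.9, 9))
```

A benchmark whose differentials are negative at every tau satisfies this whole family of moment inequalities at once, which is the sense in which one model can dominate another over a class of losses rather than a single one.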

2.
In nonparametric instrumental variable estimation, the function being estimated is the solution to an integral equation. A solution may not exist if, for example, the instrument is not valid. This paper discusses the problem of testing the null hypothesis that a solution exists against the alternative that there is no solution. We give necessary and sufficient conditions for existence of a solution and show that uniformly consistent testing of an unrestricted null hypothesis is not possible. Uniformly consistent testing is possible, however, if the null hypothesis is restricted by assuming that any solution to the integral equation is smooth. Many functions of interest in applied econometrics, including demand functions and Engel curves, are expected to be smooth. The paper presents a statistic for testing the null hypothesis that a smooth solution exists. The test is consistent uniformly over a large class of probability distributions of the observable random variables for which the integral equation has no smooth solution. The finite-sample performance of the test is illustrated through Monte Carlo experiments.

3.
Caves, Christensen and Diewert [1982a] (CCD) showed that the Törnqvist productivity index is superlative in a considerably more general sense than had previously been believed. We examine the allocative and technical efficiency hypotheses on which their finding rests. We show that the allocative efficiency hypothesis can be modified, making the Törnqvist index superlative in an even wider sense than CCD showed, since it is consistent with types of allocative efficiency beyond the standard cost-minimization and revenue-maximization hypotheses that CCD considered. We also show that if the technical efficiency hypothesis is relaxed, the CCD result may no longer hold, and the distance functions that underlie the Malmquist productivity indexes, and hence the Törnqvist productivity index, must be calculated. We then show how to calculate these distance functions, and we argue that there are real advantages to doing so. The refereeing process of this paper was handled by N.R. Adam.

4.
Journal of Econometrics, 2003, 114(1), 165–196
This paper revisits the problem of estimating the regression error variance in a linear multiple regression model after preliminary hypothesis tests for either linear restrictions on the coefficients or homogeneity of variances. There is an extensive literature on these problems, particularly on the sampling properties of the pre-test estimators under various loss functions used as the basis for risk analysis. In this paper, a unified framework for analysing the risk properties of these estimators is developed under a general class of loss structures that incorporates virtually all first-order differentiable losses. Particular consideration is given to the choice of critical values for the pre-tests. Analytical results indicate that an α-level substantially higher than those normally used may be appropriate for optimal risk properties under a wide range of loss functions. The paper also generalizes some known analytical results in the pre-test literature and proves other results previously shown only numerically.
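As a rough illustration of the object under study (not the paper's risk analysis), the sketch below builds a pre-test variance estimator that keeps the restricted estimator unless a preliminary F-test rejects. The critical value is supplied by the caller, so choosing a higher pre-test α simply means passing a lower `f_crit`; all names and the data-generating process are hypothetical:

```python
import numpy as np

def ols_ssr(X, y):
    """OLS sum of squared residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

def pretest_sigma2(y, X, restrict_cols, f_crit):
    """Pre-test estimator of the error variance: use the restricted estimator
    (listed coefficients set to zero) unless the F statistic exceeds f_crit."""
    n, p = X.shape
    keep = [j for j in range(p) if j not in restrict_cols]
    ssr_u, ssr_r = ols_ssr(X, y), ols_ssr(X[:, keep], y)
    q = len(restrict_cols)
    F = ((ssr_r - ssr_u) / q) / (ssr_u / (n - p))
    return ssr_u / (n - p) if F > f_crit else ssr_r / (n - p + q)

rng = np.random.default_rng(5)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=n)  # third coefficient truly zero
s2_5pct = pretest_sigma2(y, X, restrict_cols=[2], f_crit=3.94)   # roughly the 5% F(1, 97) level
s2_loose = pretest_sigma2(y, X, restrict_cols=[2], f_crit=1.00)  # much larger pre-test alpha
```

Comparing the risk of such estimators across many `f_crit` choices and loss functions is exactly the exercise in which an α-level well above 5% can turn out to be optimal.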

5.
Summary: If one wants to test the hypothesis as to whether a set of observations comes from a completely specified continuous distribution or not, one can use the Kuiper test. But if one or more parameters have to be estimated, the standard tables for the Kuiper test are no longer valid. This paper presents a table to use with the Kuiper statistic for testing whether a sample comes from a normal distribution when the mean and variance are to be estimated from the sample. The critical points are obtained by Monte Carlo calculation; the power of the test is estimated by simulation; and the estimated powers for several alternative distributions are compared with the estimated powers of the Kolmogorov-Smirnov test.
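A minimal sketch of the procedure described, assuming the standard Kuiper statistic V = D+ + D- computed against a normal distribution whose mean and variance are estimated from the sample, with Monte Carlo critical values; function names are my own:

```python
import numpy as np
from math import erf, sqrt

def kuiper_statistic(x):
    """Kuiper V = D+ + D- against N(mu, sigma^2) with both parameters
    estimated from the sample (the case where standard tables fail)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    mu, sigma = x.mean(), x.std(ddof=1)
    # Fitted normal CDF at each order statistic, via the error function.
    u = np.array([0.5 * (1.0 + erf((xi - mu) / (sigma * sqrt(2.0)))) for xi in x])
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - u)
    d_minus = np.max(u - (i - 1) / n)
    return d_plus + d_minus

def mc_critical_value(n, alpha=0.05, reps=2000, seed=0):
    """Monte Carlo critical value under the normal null with estimated parameters."""
    rng = np.random.default_rng(seed)
    stats = [kuiper_statistic(rng.normal(size=n)) for _ in range(reps)]
    return float(np.quantile(stats, 1.0 - alpha))

rng = np.random.default_rng(1)
cv = mc_critical_value(100)
v_exp = kuiper_statistic(rng.exponential(size=100))  # a skewed alternative
```

Tabulating `mc_critical_value` over a grid of sample sizes reproduces, in spirit, the kind of table the paper provides.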

6.
This paper employs response surface regressions based on simulation experiments to calculate asymptotic distribution functions for the Johansen-type likelihood ratio tests for cointegration. These are carried out in the context of the models recently proposed by Pesaran, Shin, and Smith (1997) that allow for the possibility of exogenous variables integrated of order one. The paper calculates critical values that are considerably more accurate than those previously available. The principal contributions of the paper are a set of data files that contain estimated asymptotic quantiles obtained from response surface estimation and a computer program for utilizing them. This program, which is freely available via the Internet, can be used to calculate both asymptotic critical values and P-values. Copyright © 1999 John Wiley & Sons, Ltd.

7.
This paper employs response surface regressions based on simulation experiments to calculate distribution functions for some well-known unit root and cointegration test statistics. The principal contributions of the paper are a set of data files that contain estimated response surface coefficients and a computer program for utilizing them. This program, which is freely available via the Internet, can easily be used to calculate both asymptotic and finite-sample critical values and P-values for any of the tests. Graphs of some of the tabulated distribution functions are provided. An empirical example deals with interest rates and inflation rates in Canada.
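The response-surface idea can be sketched as a low-order polynomial in 1/T: simulate the test statistic at many sample sizes, regress the estimated quantiles on powers of 1/T, then evaluate the fitted surface at any T. The coefficients below are illustrative placeholders in the spirit of published Dickey-Fuller surfaces, not values from the paper's data files:

```python
def response_surface_cv(T, beta_inf, beta1, beta2=0.0):
    """Finite-sample critical value from a response surface in powers of 1/T.
    beta_inf is the asymptotic critical value; beta1, beta2 capture
    finite-sample distortion."""
    return beta_inf + beta1 / T + beta2 / T ** 2

# Placeholder coefficients around a -2.86 asymptotic 5% critical value
# (illustrative only; estimated coefficients live in the paper's data files).
cv_50 = response_surface_cv(50, beta_inf=-2.86, beta1=-2.74, beta2=-8.36)
cv_inf = response_surface_cv(10 ** 9, beta_inf=-2.86, beta1=-2.74, beta2=-8.36)
```

As T grows the correction terms vanish and the surface collapses to the asymptotic critical value, which is why one set of coefficients serves both the finite-sample and asymptotic cases.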

8.
Forecast evaluations aim to choose an accurate forecast for decision making by using loss functions. However, different loss functions often rank forecasts differently, which complicates comparison. In this paper, we develop statistical tests for comparing the performance of forecasts of expectiles and quantiles of a random variable under consistent loss functions. The test statistics are constructed with the extremal consistent loss functions of Ehm et al. (2016). The null hypothesis of the tests is that a benchmark forecast performs at least as well as a competing one under all extremal consistent loss functions. It can be shown that if this null holds, the benchmark also performs at least as well as the competitor under all consistent loss functions; thus, under the null, the finding that the competitor does not outperform the benchmark is unaltered by the choice of consistent loss function. We establish asymptotic properties of the proposed test statistics and propose using the re-centered bootstrap to construct their empirical distributions. Through simulations, we show that the proposed test statistics perform reasonably well. We then apply the proposed method to evaluate several forecasting methods.
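A hedged sketch of the extremal (elementary) consistent scores for quantiles, in the form I recall from Ehm et al. (2016): S_theta(x, y) = (1{y < x} - alpha)(1{theta < x} - 1{theta < y}). Averaging the score over a grid of thresholds theta traces the Murphy diagram on which the dominance comparison rests; the data and forecasts below are synthetic, and the formula should be checked against the paper before serious use:

```python
import numpy as np

def elementary_quantile_score(x, y, theta, alpha):
    """Elementary consistent score for the alpha-quantile at threshold theta
    (form as recalled from Ehm et al. 2016): lower is better."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return ((y < x).astype(float) - alpha) * \
           ((theta < x).astype(float) - (theta < y).astype(float))

def murphy_curve(x, y, thetas, alpha):
    """Mean elementary score at each threshold: one point per theta."""
    return np.array([elementary_quantile_score(x, y, t, alpha).mean()
                     for t in thetas])

rng = np.random.default_rng(2)
y = rng.normal(size=2000)
good = np.zeros(2000)        # the true median forecast (alpha = 0.5)
bad = np.full(2000, 1.0)     # a biased forecast
thetas = np.linspace(-2.0, 2.0, 21)
c_good = murphy_curve(good, y, thetas, 0.5)
c_bad = murphy_curve(bad, y, thetas, 0.5)
```

If one forecast's curve lies weakly below another's at every theta, it is at least as good under every extremal score, and hence, by the mixture representation, under every consistent loss; that pointwise comparison is what the paper's null hypothesis formalizes.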

9.
This paper is concerned with developing uniform confidence bands for functions estimated nonparametrically with instrumental variables. We show that a sieve nonparametric instrumental variables estimator is pointwise asymptotically normally distributed. The asymptotic normality result holds in both mildly and severely ill-posed cases. We present methods to obtain a uniform confidence band and show that the bootstrap can be used to obtain the required critical values. Monte Carlo experiments illustrate the finite-sample performance of the uniform confidence band.

10.
This paper uses cross-section data from individual establishments to estimate translog production functions directly, i.e., without using side conditions, for 44 four-digit ISIC Chilean manufacturing industries. The main results are: (1) The null hypothesis that the production function is Cobb-Douglas cannot be rejected for 39 of the 44 four-digit ISIC industries. (2) The null hypothesis of constant returns to scale cannot be rejected for 35 of the 44 industries; the remaining 9 sectors show evidence of increasing returns to scale.
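The Cobb-Douglas null is a zero restriction on the translog's second-order terms, so it can be checked with a standard F-test of the nested regressions. The sketch below uses synthetic data generated under the null with two inputs, not the paper's specification or data:

```python
import numpy as np

def ols_ssr(X, y):
    """OLS sum of squared residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

rng = np.random.default_rng(3)
n = 200
k, l = rng.uniform(1.0, 3.0, size=(2, n))                # log capital, log labour
y = 0.5 + 0.4 * k + 0.6 * l + 0.1 * rng.normal(size=n)   # Cobb-Douglas truth

X_cd = np.column_stack([np.ones(n), k, l])                        # restricted
X_tl = np.column_stack([X_cd, 0.5 * k ** 2, 0.5 * l ** 2, k * l])  # translog
ssr_cd, ssr_tl = ols_ssr(X_cd, y), ols_ssr(X_tl, y)
q, df = 3, n - X_tl.shape[1]
F = ((ssr_cd - ssr_tl) / q) / (ssr_tl / df)  # F(3, n-6) under the Cobb-Douglas null
```

Failing to reject here (F below roughly 2.65 at the 5% level) is the single-industry analogue of the paper's finding for 39 of its 44 industries.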

11.
In this paper, we examine empirically the effect that certificate-of-need regulation by state health planning organizations has had on the speed of diffusion of a relatively new medical technology—haemodialysis. Specifically, we test the hypothesis that a requirement that investments be subject to certificate-of-need review has significantly slowed the rate of adoption of this particular treatment modality. In subjecting this hypothesis to empirical verification, we estimate a random coefficient model. This approach allows us to make more efficient use of the available data than the traditional two-stage approach to modelling diffusion processes wherein separate logistic functions are first estimated over the time series observations followed by hypothesis tests conducted over the cross-sectional observations. We find evidence that certificate-of-need regulation slows the spread of haemodialysis technology.

12.
We propose a new diagnostic tool for time series called the quantilogram. The tool can be used formally, and we provide the inference methods to do so under general conditions; it can also be used as a simple graphical device. We apply our method to measure directional predictability and to test the hypothesis that a given time series has no directional predictability. The test is based on comparing the correlogram of quantile hits to a pointwise confidence interval, or on comparing the cumulated squared autocorrelations with the corresponding critical value. We provide the distribution theory needed to conduct inference, propose some model-free upper-bound critical values, and apply our methods to S&P 500 stock index return data. The empirical results suggest some directional predictability in returns. The evidence is strongest in mid-range quantiles, such as 5–10%, and for daily data. The evidence for predictability at the median is of comparable strength to the evidence around the mean, and is strongest at the daily frequency.
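A minimal version of the quantilogram idea (my own implementation, not the authors' code): sample autocorrelations of the quantile-hit series 1{y_t < q_alpha} - alpha, computed at an estimated quantile:

```python
import numpy as np

def quantilogram(y, alpha, max_lag):
    """Autocorrelations of the quantile hits psi_t = 1{y_t < q_alpha} - alpha."""
    y = np.asarray(y, dtype=float)
    q = np.quantile(y, alpha)
    psi = (y < q).astype(float) - alpha
    psi = psi - psi.mean()
    denom = psi @ psi
    return np.array([psi[k:] @ psi[:-k] / denom for k in range(1, max_lag + 1)])

rng = np.random.default_rng(4)
iid = rng.normal(size=5000)
rho_iid = quantilogram(iid, alpha=0.1, max_lag=5)   # should hover near zero

# A persistent series shows positive low-quantile autocorrelation.
ar = np.zeros(5000)
for t in range(1, 5000):
    ar[t] = 0.8 * ar[t - 1] + rng.normal()
rho_ar = quantilogram(ar, alpha=0.1, max_lag=5)
```

Plotting these autocorrelations against lag, with pointwise confidence bands, gives the graphical device described in the abstract; summing their squares gives the cumulated statistic used for the formal test.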

13.
A complication in the decision making of local governments is that a community's residents may move before the future benefits associated with current tax and expenditure policies are fully realized. It has been argued that the capitalization of these benefits into current property values can induce local governments to behave efficiently. This paper drops two assumptions behind the capitalization hypothesis: perfect labor mobility and the absence of uncertainty about the future benefits. It is shown that these benefits may be overcapitalized or undercapitalized, depending on a critical aspect of the uncertainty. Consequently, there is underinvestment in some projects, but overinvestment in others.

14.
Durbin's distribution-free rank test can be used to test the null hypothesis that there are no differences among the treatments in a Balanced Incomplete Block Design. Until now only the chi-square and F approximations for the test statistic were known. In this paper the exact distribution is given for 15 designs and compared with the chi-square and F approximations.
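For reference, the Durbin statistic itself can be computed as below for a small BIBD with t = 4 treatments, b = 4 blocks, k = 3 units per block, and r = 3 replications. The formula is the centered form of the one I recall from Conover's Practical Nonparametric Statistics (no ties assumed), so treat it as an assumption to verify against a textbook:

```python
import numpy as np

def durbin_statistic(blocks, values, t):
    """Durbin statistic for a BIBD, approximately chi-square(t-1) under H0.
    blocks: list of lists of treatment labels 0..t-1 (k per block).
    values: list of lists of responses, aligned with blocks. No ties assumed."""
    k = len(blocks[0])
    r = sum(len(b) for b in blocks) // t        # replications per treatment
    R = np.zeros(t)                             # rank sums per treatment
    for trt, val in zip(blocks, values):
        ranks = np.argsort(np.argsort(val)) + 1  # within-block ranks 1..k
        for j, rank in zip(trt, ranks):
            R[j] += rank
    return 12 * (t - 1) / (r * t * (k - 1) * (k + 1)) * \
        np.sum((R - r * (k + 1) / 2) ** 2)

# BIBD(t=4, b=4, k=3, r=3): every pair of treatments meets in two blocks.
blocks = [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]]
values = [[1.0, 2.0, 3.0]] * 4   # responses monotone in treatment order
D = durbin_statistic(blocks, values, t=4)
```

With these strongly ordered responses the rank sums are (3, 5, 7, 9) and D works out to 7.5, just below the chi-square(3) 5% critical value of 7.815, illustrating why the exact small-design distribution matters near the boundary.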

15.
Banerjee, Dolado and Mestre (J. Time Ser. Anal. 19 (1998) 267–283) introduce an error-correction test for the null hypothesis of no cointegration. The present paper supplements their work. They provide critical values for regressions with and without detrending. Here it is shown that the latter are not appropriate if the series display linear trends. This does not mean that detrending is required. Correct percentiles are suggested for the case that the series follow linear time trends but tests are based on regressions without detrending. They are readily available from the literature.

16.
This paper solves a general continuous-time single-agent consumption and portfolio decision problem with subsistence consumption in closed form. The analysis allows for general continuously differentiable concave utility functions. The model takes into consideration that consumption must be no smaller than a given subsistence rate and that bankruptcy can occur. Thus the paper generalizes the results of Karatzas, Lehoczky, Sethi, and Shreve (1986).

17.
Xu Zheng, Metrika, 2012, 75(4), 455–469
This paper proposes a new goodness-of-fit test for parametric conditional probability distributions using the nonparametric smoothing methodology. An asymptotic normal distribution is established for the test statistic under the null hypothesis of correct specification of the parametric distribution. The test is shown to have power against local alternatives converging to the null at certain rates. The test can be applied to testing for possible misspecifications in a wide variety of parametric models. A bootstrap procedure is provided for obtaining more accurate critical values for the test. Monte Carlo simulations show that the test has good power against some common alternatives.

18.
Wangli Xu, Xu Guo, Metrika, 2013, 76(4), 459–482
In this paper, we propose a test of the parametric form of the coefficient functions in the varying coefficient model with missing responses. Two groups of completed data sets are constructed by using imputation and inverse probability weighting methods, respectively. By noting that the coefficient part can be a regression function for a specific model, we construct two empirical process-based tests. The asymptotic distributions of the proposed tests under the null and local alternative hypotheses are investigated. A simulation study is carried out to show the power performance of the tests. We illustrate the proposed approaches with an environmental data set.

19.
The financial sector is a critical component of any economic system, as it delivers key qualitative asset transformation services in terms of liquidity, maturity and volume. Although these functions could in principle be carried out separately by specialized actors, in the end it is their systemic co-evolution that determines how the aggregate economy performs and withstands disruptions. In this paper we argue that a functional perspective on financial intermediation can be usefully employed to investigate the functioning of financial networks. We do this in two steps. First, we use previously unreleased data to show that focusing on the economic functions performed over time by the different institutions exchanging funds in an interbank market can be informative, even if the underlying topological structure of their relations remains constant. Second, a set of alternative artificial histories are generated and stress-tested by using real data as a calibration base, with the aim of performing counterfactual welfare comparisons among different topological structures.

20.
This paper establishes existence of subgame perfect equilibrium in pure strategies for a general class of sequential multilateral bargaining games, without assuming a stationary setting. The only required hypothesis is that utility functions are continuous on the space of economic outcomes. In particular, no assumption on the space of feasible payoffs is needed. The result covers arbitrary and even time-varying bargaining protocols (acceptance rules), externalities, and other-regarding preferences. As a side result, we clarify the meaning of assumptions on “continuity at infinity.”
