Similar Documents (20 results found)
1.
Tests based on higher-order or m-step spacings have been considered in the literature for the goodness-of-fit problem. This paper studies the asymptotic distribution theory for such tests based on non-overlapping m-step spacings when m, the length of the step, also increases to infinity with the sample size n. By utilizing the asymptotic distributions under a sequence of close alternatives and studying their relative efficiencies, we try to answer a central question about the choice of m in relation to n. Efficiency comparisons are made with tests based on overlapping m-step spacings, as well as with the corresponding chi-square tests.
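For context on the raw ingredients of such tests, the following is a minimal Python sketch, assuming the null hypothesis of uniformity on (0, 1), that forms non-overlapping m-step spacings and a Greenwood-type sum-of-squares statistic. The particular statistic and the fixed choice of m are illustrative assumptions, not the paper's prescription for how m should grow with n.

```python
import numpy as np

def nonoverlapping_m_spacings(u, m):
    """Non-overlapping m-step spacings of a sample treated as U(0,1) under H0.

    The sorted sample is augmented with the endpoints 0 and 1, and the gaps
    between every m-th point are returned, so the spacings sum to 1.
    """
    x = np.sort(np.concatenate(([0.0], np.asarray(u, dtype=float), [1.0])))
    idx = np.arange(0, len(x), m)
    if idx[-1] != len(x) - 1:          # keep the right endpoint
        idx = np.append(idx, len(x) - 1)
    return np.diff(x[idx])

def sum_of_squared_m_spacings(u, m):
    """A Greenwood-type statistic: the sum of squared non-overlapping m-spacings."""
    d = nonoverlapping_m_spacings(u, m)
    return float(np.sum(d ** 2))

rng = np.random.default_rng(0)
u = rng.uniform(size=1000)             # H0: uniform sample
print(sum_of_squared_m_spacings(u, m=5))
```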

2.
Tests for the goodness-of-fit problem based on sample spacings, i.e., observed distances between successive order statistics, have been used in the literature. We propose a new test based on the number of “small” and “large” spacings. The asymptotic theory under close alternative sequences is also given, thus enabling one to calculate the asymptotic relative efficiencies of such tests. A comparison of the new test and other spacings tests is given.

3.
This paper develops an asymptotic theory for test statistics in linear panel models that are robust to heteroskedasticity, autocorrelation and/or spatial correlation. Two classes of standard errors are analyzed. Both are based on nonparametric heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators. The first class is based on averages of HAC estimators across individuals in the cross-section, i.e. “averages of HACs”. This class includes the well-known cluster standard errors analyzed by Arellano (1987) as a special case. The second class is based on the HAC of cross-section averages and was proposed by Driscoll and Kraay (1998). The “HAC of averages” standard errors are robust to heteroskedasticity, serial correlation and spatial correlation, but weak dependence in the time dimension is required. The “averages of HACs” standard errors are robust to heteroskedasticity and serial correlation, including the nonstationary case, but they are not valid in the presence of spatial correlation. The main contribution of the paper is to develop a fixed-b asymptotic theory for statistics based on both classes of standard errors in models with individual and possibly time fixed-effects dummy variables. The asymptotic analysis is carried out for large time sample sizes, for both fixed and large cross-section sample sizes. Extensive simulations show that the fixed-b approximation is usually much better than the traditional normal or chi-square approximation, especially for the Driscoll-Kraay standard errors. The use of fixed-b critical values will lead to more reliable inference in practice, especially for tests of joint hypotheses.
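To make the "HAC of cross-section averages" construction concrete, here is a hedged Python sketch of Driscoll-Kraay-type standard errors for pooled OLS with a Bartlett kernel. It omits fixed-effects dummies, small-sample corrections and, in particular, the fixed-b critical values that are the paper's contribution; the function name, bandwidth choice and scaling are illustrative assumptions.

```python
import numpy as np

def driscoll_kraay_se(y, X, time, bandwidth):
    """Sketch of 'HAC of cross-section averages' standard errors for pooled OLS.

    y: (NT,) outcomes, X: (NT, k) regressors, time: (NT,) period labels.
    A Bartlett kernel with truncation lag `bandwidth` is applied to the time
    series of cross-sectional score sums.
    """
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    time = np.asarray(time)

    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta

    periods = np.unique(time)
    k = X.shape[1]
    h = np.zeros((len(periods), k))          # cross-sectional sums of x_it * e_it
    for j, t in enumerate(periods):
        rows = (time == t)
        h[j] = X[rows].T @ resid[rows]

    T = len(periods)
    S = h.T @ h / T                          # lag-0 term
    for lag in range(1, bandwidth + 1):      # Bartlett-weighted autocovariances
        w = 1.0 - lag / (bandwidth + 1.0)
        gamma = h[lag:].T @ h[:-lag] / T
        S += w * (gamma + gamma.T)

    XtX_inv = np.linalg.inv(X.T @ X)
    V = XtX_inv @ (T * S) @ XtX_inv          # sandwich covariance of beta
    return beta, np.sqrt(np.diag(V))
```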

4.
When estimating hedonic models of housing prices, the use of time-series cross-section repeat sales data can provide improvements in estimator efficiency and correct for unobserved characteristics. However, in cases where serial correlation is present, the irregular timing of sales should also be considered. In this paper we develop a model that uses information on the timing of events to account for their sporadic occurrence. The model presumes that the serial correlation process can be decomposed into a time-independent (event-wise) component and a time-dependent (time-wise) component. Empirical tests cannot reject the presence of sporadic correlation patterns, while simulations show that the failure to account for sporadic correlation leads to significant losses in efficiency, and that the losses from ignoring sporadic correlation when it exists are larger than the losses when sporadic correlation is falsely assumed. Copyright © 1999 John Wiley & Sons, Ltd.

5.
We investigate estimation and inference in difference-in-differences econometric models used in the analysis of treatment effects. When the innovations in such models display serial correlation, commonly used ordinary least squares (OLS) procedures are inefficient and may lead to tests with incorrect size. Implementation of feasible generalized least squares (FGLS) procedures is often hindered by too few observations in the cross-section to allow for unrestricted estimation of the weight matrix without leading to tests with size distortions similar to those of conventional OLS-based procedures. We analyze the small-sample properties of FGLS-based tests with a formal higher-order Edgeworth expansion that allows us to construct a size-corrected version of the test. We also address the question of optimal temporal aggregation as a method to reduce the dimension of the weight matrix. We apply our procedure to data on regulation of mobile telephone service prices. We find that a size-corrected FGLS-based test outperforms tests based on OLS.
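For intuition about what an FGLS correction for serial correlation involves, the sketch below fits pooled OLS, estimates a single AR(1) coefficient from within-unit residuals, and quasi-differences the data before re-estimating. This is a deliberately simplified Cochrane-Orcutt-style device, not the size-corrected, Edgeworth-based FGLS test analyzed in the paper; the pooled AR(1) assumption and the dropped first observation per unit are illustrative choices.

```python
import numpy as np

def ar1_fgls(y, X, unit):
    """Minimal FGLS sketch for a panel with AR(1) errors within each unit.

    Rows are assumed time-ordered within each unit; a single AR(1)
    coefficient is pooled across units.
    """
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    unit = np.asarray(unit)

    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta_ols

    # Pooled AR(1) coefficient from lagged residuals within units.
    num = den = 0.0
    for g in np.unique(unit):
        eg = e[unit == g]
        num += eg[1:] @ eg[:-1]
        den += eg[:-1] @ eg[:-1]
    rho = num / den

    # Quasi-difference y and X within each unit, dropping the first row per unit.
    ys, Xs = [], []
    for g in np.unique(unit):
        rows = unit == g
        yg, Xg = y[rows], X[rows]
        ys.append(yg[1:] - rho * yg[:-1])
        Xs.append(Xg[1:] - rho * Xg[:-1])
    beta_fgls = np.linalg.lstsq(np.vstack(Xs), np.concatenate(ys), rcond=None)[0]
    return beta_fgls, rho
```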

6.
Zinoviy Landsman & Meir Rom, Metrika, 1995, 42(1): 421-439
Various distances between distributions and between densities are considered. The corresponding goodness-of-fit tests derived from them are examined for their ability to detect multimodal alternatives. It is found that many well-known techniques fail to detect such alternatives, while others do better in terms of power. These are mainly the tests derived from the variational metric, which are based on spacings and gaps.

7.
A wide selection of classical and recent tests for exponentiality are discussed and compared. The classical procedures include the Kolmogorov-Smirnov and Cramér-von Mises statistics, a statistic based on spacings, and a method involving the score function. Among the most recent approaches emphasized are methods based on the empirical Laplace transform and the empirical characteristic function, a method based on entropy, as well as tests of the Kolmogorov-Smirnov and Cramér-von Mises type that utilize a characterization of exponentiality via the mean residual life function. We also propose a new goodness-of-fit test utilizing a novel characterization of the exponential distribution through its characteristic function. The finite-sample performance of the tests is investigated in an extensive simulation study.
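As one concrete classical baseline from this toolbox, here is a hedged Python sketch of a Lilliefors-style Kolmogorov-Smirnov test of exponentiality, with the rate estimated by the reciprocal sample mean and a parametric-bootstrap p-value. It is a generic illustration, not the new characteristic-function-based test proposed in the paper.

```python
import numpy as np

def ks_exponentiality_test(x, n_boot=999, seed=0):
    """KS test of exponentiality with estimated scale; bootstrap p-value.

    Because the exponential family is a scale family, the null distribution
    of the statistic does not depend on the true rate, so bootstrapping from
    a unit-scale exponential is valid.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)

    def ks_stat(sample):
        s = np.sort(sample)
        cdf = 1.0 - np.exp(-s / s.mean())          # fitted exponential CDF
        grid = np.arange(1, n + 1) / n
        return np.max(np.maximum(grid - cdf, cdf - (grid - 1.0 / n)))

    d_obs = ks_stat(x)
    d_boot = np.array([ks_stat(rng.exponential(scale=1.0, size=n))
                       for _ in range(n_boot)])
    p_value = (1 + np.sum(d_boot >= d_obs)) / (n_boot + 1)
    return d_obs, p_value

rng = np.random.default_rng(42)
print(ks_exponentiality_test(rng.exponential(scale=2.0, size=200)))
```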

8.
This paper investigates diversification opportunities for investors among different alternative energy markets. We sample six alternative energy markets, namely World, Developed, Emerging, EU, BRIC and G7, with daily data ranging from January 2006 to December 2017. For estimation, we use the wavelet multiple correlation and wavelet multiple cross-correlation proposed by Polanco-Martínez and Fernández-Macho (2014) to find pairwise correlations at different investment horizons. Our work contributes by measuring the level of integration among alternative regional energy markets across different investment horizons. Our results highlight that the World, Developed, Emerging and EU markets offer maximum diversification when combined with either the Emerging or the BRIC energy market in a portfolio. Furthermore, diversification benefits are more prominent at intra-week to monthly investment horizons for all portfolio combinations. We also rank different pairs of alternative markets based on their level of integration, which carries important implications for portfolio diversification and risk management.
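The paper relies on the wavelet multiple correlation and cross-correlation of Polanco-Martínez and Fernández-Macho; as a simplified stand-in, the Python sketch below (assuming the PyWavelets package is available) computes pairwise correlations of DWT detail coefficients scale by scale, which conveys how co-movement can differ across investment horizons. The wavelet, the decomposition level and the use of a plain DWT rather than the MODWT-based multiple correlation are all illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_scale_correlations(x, y, wavelet="db4", level=4):
    """Pairwise correlation of DWT detail coefficients, scale by scale."""
    cx = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=level)
    cy = pywt.wavedec(np.asarray(y, dtype=float), wavelet, level=level)
    out = {}
    for j in range(1, level + 1):
        dx, dy = cx[-j], cy[-j]               # detail coefficients at scale j
        out[f"scale_{j}"] = float(np.corrcoef(dx, dy)[0, 1])
    return out

rng = np.random.default_rng(3)
r1 = rng.standard_normal(1024)
r2 = 0.5 * r1 + rng.standard_normal(1024)     # two correlated "return" series
print(wavelet_scale_correlations(r1, r2))
```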

9.
Eunju Hwang & Dong Wan Shin, Metrika, 2017, 80(6-8): 767-787
Stationary bootstrapping is applied to a CUSUM test for common mean-break detection in cross-sectionally correlated panel data. The asymptotic null distribution of the bootstrapped test is derived and is the same as that of the original CUSUM test, which depends on the cross-sectional correlation parameter. A bootstrap test using the CUSUM statistic with bootstrap critical values is proposed and its asymptotic validity is proved. A finite-sample Monte Carlo simulation shows that the proposed test has reasonable size while other existing tests suffer severe size distortion under cross-sectional correlation. The simulation also shows good power of the proposed test against non-cancelling mean changes, and that the theoretically justified stationary-bootstrap CUSUM test has size and power comparable to other, theoretically unjustified, moving-block or tapered-block bootstrap CUSUM tests.
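To show the mechanics of the procedure, here is a minimal Python sketch of a univariate mean-break CUSUM statistic together with a stationary-bootstrap resampler using geometric block lengths. It is only the building block: the paper's test applies to panels with cross-sectional correlation and uses a proper long-run variance estimate, whereas the variance placeholder and the block-restart probability below are illustrative assumptions.

```python
import numpy as np

def cusum_stat(x):
    """CUSUM statistic for a break in the mean of a univariate series.
    Uses the sample variance as a naive stand-in for the long-run variance."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.cumsum(x - x.mean())
    return np.max(np.abs(s)) / np.sqrt(n * np.var(x))

def stationary_bootstrap(x, p, rng):
    """One stationary-bootstrap resample: blocks restart with probability p."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    idx = np.empty(n, dtype=int)
    idx[0] = rng.integers(n)
    for t in range(1, n):
        idx[t] = rng.integers(n) if rng.random() < p else (idx[t - 1] + 1) % n
    return x[idx]

def bootstrap_cusum_pvalue(x, p=0.1, n_boot=499, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = cusum_stat(x)
    x_null = np.asarray(x, dtype=float) - np.mean(x)   # impose the no-break null
    t_boot = np.array([cusum_stat(stationary_bootstrap(x_null, p, rng))
                       for _ in range(n_boot)])
    return t_obs, (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)
```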

10.
Correlation stress testing refers to the adjustment of the correlation matrix to evaluate the potential impact of changes in correlations under financial crises. There are two categories: sensitivity tests and scenario tests. For a scenario test, the correlation matrix is adjusted to mimic the situation under an underlying stress event. It is only natural that when some correlations are altered, the other correlations (peripheral correlations) should vary as well. However, most existing methods ignore this potential change in peripheral correlations. In this paper, we propose a Bayesian correlation adjustment method that gives a new correlation matrix for a scenario test based on the original correlation matrix and views on correlations, such that peripheral correlations are altered according to the dependence structure of empirical correlations. The posterior simulation algorithm is also extended so that two correlations can be updated in one Gibbs sampler step, which greatly enhances the rate of convergence. The proposed method is applied to an international stock portfolio dataset.

11.
Luis G. Vargas, Socio, 1986, 20(6): 387-391
The criticisms of Utility Theory focus on either its axioms or the construction of utility functions. Here we present a method which avoids the problems of uniqueness encountered in the construction of utility functions when using either the certainty equivalence method or the probability equivalence method. The method is based on the construction of ratio-scale value functions from reciprocal pairwise comparisons and Saaty's Eigenvector Method. We show that, under the assumption of cardinal consistency, utility functions are a particular case of these ratio scales. Reciprocal pairwise comparisons allow decision makers to relax the transitivity assumption and help to derive a unique scaling of preferences.
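Saaty's Eigenvector Method itself is easy to state: the ratio-scale priorities are given by the principal eigenvector of the positive reciprocal comparison matrix. The short Python sketch below computes that vector and Saaty's consistency index for a made-up 3x3 matrix; the matrix entries are purely illustrative.

```python
import numpy as np

def eigenvector_weights(A):
    """Principal right eigenvector of a positive reciprocal comparison matrix,
    normalized to sum to one, plus the corresponding (Perron) eigenvalue."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # Perron eigenvalue is the largest
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

# Hypothetical reciprocal comparisons among three alternatives.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
weights, lam_max = eigenvector_weights(A)
ci = (lam_max - 3) / (3 - 1)                    # Saaty's consistency index
print(weights, ci)
```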

12.
This article studies inference in a multivariate trend model when the volatility process is nonstationary. Within a quite general framework we analyze four classes of tests based on least-squares estimation, one of which is robust to both weak serial correlation and nonstationary volatility. The existing multivariate trend tests, which either use non-robust standard errors or rely on non-standard distribution theory, are generally non-pivotal, involving the unknown time-varying volatility function in the limit. Two-step residual-based i.i.d. bootstrap and wild bootstrap procedures are proposed for the robust tests and are shown to be asymptotically valid. Simulations demonstrate the effects of nonstationary volatility on the trend tests and the good finite-sample behavior of the robust tests.
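For a flavor of the bootstrap step, the hedged univariate Python sketch below computes an OLS trend t-statistic and approximates its null distribution by a wild bootstrap with Rademacher multipliers applied to the restricted residuals, so heteroskedasticity of unknown form is preserved. It ignores serial correlation and the multivariate structure the paper actually treats; the multiplier distribution and the number of replications are illustrative choices.

```python
import numpy as np

def trend_t_stat(y):
    """OLS t-statistic for the slope in y_t = a + b*t + u_t (plain OLS s.e.)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n), np.arange(1, n + 1)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    s2 = u @ u / (n - 2)
    var_b = s2 * np.linalg.inv(X.T @ X)[1, 1]
    return beta[1] / np.sqrt(var_b)

def wild_bootstrap_trend_test(y, n_boot=499, seed=0):
    """Wild-bootstrap p-value for H0: no trend, using Rademacher multipliers
    on the demeaned series (the residuals under the null)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    t_obs = trend_t_stat(y)
    u0 = y - y.mean()
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=len(y))
        t_boot[b] = trend_t_stat(y.mean() + u0 * eta)
    return t_obs, (1 + np.sum(np.abs(t_boot) >= abs(t_obs))) / (n_boot + 1)
```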

13.
This article proposes a class of joint and marginal spectral diagnostic tests for parametric conditional means and variances of linear and nonlinear time series models. The use of joint and marginal tests is motivated by the fact that marginal tests for the conditional variance may lead to misleading conclusions when the conditional mean is misspecified. The new tests are based on a generalized spectral approach and do not need to choose a lag order depending on the sample size or to smooth the data. Moreover, the proposed tests are robust to higher-order dependence of unknown form, in particular to conditional skewness and kurtosis. It turns out that the asymptotic null distributions of the new tests depend on the data generating process; hence, we implement the tests with the aid of a wild bootstrap procedure. A simulation study compares the finite-sample performance of the proposed and competing tests, and shows that our tests can play a valuable role in time series modeling. Finally, an application to the S&P 500 highlights the merits of our approach.

14.
Statistics based on uniform spacings are often used in goodness-of-fit problems. In this paper special attention is paid to the distribution of Greenwood's statistic. Although its asymptotic distribution is normal, the normal approximation is extremely bad, even for large sample sizes. It is shown that the Edgeworth expansion yields a considerably better approximation to the distribution of this statistic. Furthermore, an overview is given of the higher-order asymptotics for sums of functions of uniform spacings, of which Greenwood's statistic is a special case.
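Greenwood's statistic is simply the sum of squared uniform spacings, and its standard first-order normal approximation treats sqrt(n)(n*G_n - 2)/2 as approximately standard normal, which is exactly the approximation reported to be poor. The Python sketch below computes both quantities; the Edgeworth correction itself is not reproduced here.

```python
import numpy as np
from math import sqrt

def greenwood_statistic(u):
    """Greenwood's statistic: the sum of squared uniform spacings.
    The n sample points on (0,1) define n+1 spacings including the endpoints."""
    u = np.sort(np.asarray(u, dtype=float))
    d = np.diff(np.concatenate(([0.0], u, [1.0])))
    return float(np.sum(d ** 2))

def greenwood_z(u):
    """First-order normal approximation sqrt(n)*(n*G_n - 2)/2, nominally N(0,1).
    As the abstract stresses, this approximation can be poor even for large n."""
    n = len(u)
    return sqrt(n) * (n * greenwood_statistic(u) - 2.0) / 2.0

rng = np.random.default_rng(7)
print(greenwood_z(rng.uniform(size=200)))
```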

15.
Diagnostic tests play a major role in reducing the prevalence of non-communicable diseases (NCDs). The present study examines the relationships between the utilization of diagnostic tests and socioeconomic, insurance, lifestyle, and health factors among the elderly in Malaysia. Analyses based on the National Health and Morbidity Survey 2011 (NHMS 2011) suggest that high income and having private insurance are associated with a higher likelihood of utilizing diagnostic tests, whereas low education, being employed and smoking are associated with a lower propensity to utilize them. These results provide public health administrators with useful information for policy development. In particular, the proposed policies include providing the poor with basic diagnostic tests at a nominal price, introducing various health education programmes to the public, creating health awareness campaigns to encourage elders who do not own private insurance to utilize diagnostic tests, and making basic diagnostic tests compulsory for all elders holding government insurance.

16.
The receiver operating characteristic (ROC) curve has had wide application in choosing acceptable diagnostic criteria in clinical medicine. Several problems are involved in such intuitive estimation of the ROC, the most important of which is the very nature of the subjective approach. To overcome these problems we used a method named characteristic intervals, which allows the quantitative estimation of the informational capacity of given tests based on real-world data, and the quantification of absolute and relative specific intervals of diagnostic tests based on statistical techniques, including Bayes' theorem. Clinical illustrations include in vivo and in vitro nuclear medicine tests for screening thyroid gland disease, and nuclear cardiology. Results reveal that the mathematical partitioning procedure provides a more statistically precise diagnostic aid than do current intuitive approaches.
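As a reminder of the Bayesian updating that underlies such partitioning methods, the Python sketch below applies Bayes' theorem to a binary diagnostic test, turning prevalence, sensitivity and specificity into post-test probabilities. The thyroid-screening numbers are invented for illustration and are not taken from the study.

```python
def post_test_probability(prevalence, sensitivity, specificity, positive=True):
    """Bayes' theorem for a binary diagnostic test: P(disease | test result)."""
    if positive:
        p_result_d = sensitivity            # P(positive | disease)
        p_result_nd = 1.0 - specificity     # P(positive | no disease)
    else:
        p_result_d = 1.0 - sensitivity      # P(negative | disease)
        p_result_nd = specificity           # P(negative | no disease)
    num = p_result_d * prevalence
    return num / (num + p_result_nd * (1.0 - prevalence))

# Hypothetical screening test: 10% prevalence, 90% sensitivity, 85% specificity.
print(post_test_probability(0.10, 0.90, 0.85, positive=True))   # about 0.40
print(post_test_probability(0.10, 0.90, 0.85, positive=False))  # about 0.013
```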

17.
The time-series distributed lag techniques of econometrics can be usefully applied to cross-sectional, spatial and cross-section time-series situations. The application is perfectly natural in cross-section time-series models when regression coefficients evolve systematically as the cross-section grouping variable changes. The evolution of such coefficients lends itself to polynomial approximation or more general smoothing restrictions. These ideas are not new, with Gersovitz and McKinnon (1978) and Trivedi and Lee (1981) providing two of the earliest applications of cross-equation smoothing techniques; however, their applications concerned coefficient variation due to seasonal changes, which may account for the non-diffusion of these techniques. The approach here is illustrated in the context of age-specific household formation equations based on census data, using Almon polynomials when the regression coefficients vary systematically by age group. A second application, using spatial data, explains the incidence of crime by region, using polynomial and geometric smoothing to model distance-declining regional effects.
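The Almon-style restriction described here amounts to letting the coefficient on a regressor be a low-order polynomial in the grouping variable (the age-group index), so that only the polynomial coefficients need to be estimated. The Python sketch below implements that restriction for a single regressor; the polynomial degree, the single-regressor setup and the synthetic data are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

def polynomial_smoothed_fit(y, x, group, degree=2):
    """Restrict the coefficient on x to be a polynomial in the group index:
    y = intercept + sum_p a_p * (group**p) * x + error,
    so beta(group) = sum_p a_p * group**p (an Almon-type smoothing restriction)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    g = np.asarray(group, dtype=float)

    Z = np.column_stack([np.ones_like(x)] + [x * g ** p for p in range(degree + 1)])
    coef = np.linalg.lstsq(Z, y, rcond=None)[0]
    a = coef[1:]                                    # polynomial coefficients a_0..a_degree
    betas = {float(gg): float(np.polyval(a[::-1], gg)) for gg in np.unique(g)}
    return coef, betas

rng = np.random.default_rng(5)
g = np.repeat(np.arange(1, 6), 200)                 # five age groups
x = rng.standard_normal(g.size)
y = 1.0 + (0.2 + 0.1 * g) * x + rng.standard_normal(g.size)
print(polynomial_smoothed_fit(y, x, g, degree=2)[1])  # beta(g) near 0.3, 0.4, ..., 0.7
```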

18.
Authors dealing with combined cross-section/time-series data usually assume that complete time series exist for all units under observation. In the context of micro data, however, this may be a very restrictive assumption. The paper is concerned with problems of model specification and estimation when the data at hand are incomplete time series from a sample of micro units. Particular attention is paid to a situation where the sample of micro units ‘rotates’ over time. The main results are compared with those derived by Nerlove and others for the standard specification with complete cross-section/time-series data. Some illustrative examples based on data from Norwegian household budget surveys are also given.

19.
20.
We study the role of institutional quality in the fiscal transparency of a country. We show that such a link does exist even when controlling for endogeneity. Our findings are robust to changes in specification and to a host of transparency sub-measures. An advantage of our study is that we use new data on fiscal transparency for a cross-section of 82 countries, drawn from in-depth reports that follow a standardized methodology and protocol. Furthermore, the fiscal measures were obtained with the collaboration of government authorities, which makes them particularly reliable.

