Similar Documents
20 similar documents retrieved (search time: 281 ms).
1.
We propose two new semiparametric specification tests which test whether a vector of conditional moment conditions is satisfied for any vector of parameter values θ0. Unlike most existing tests, our tests are asymptotically valid under weak and/or partial identification and can accommodate discontinuities in the conditional moment functions. Our tests are moreover consistent provided that identification is not too weak. We do not require the availability of a consistent first step estimator. Like Robinson [Robinson, Peter M., 1987. Asymptotically efficient estimation in the presence of heteroskedasticity of unknown form. Econometrica 55, 875–891] and many others in similar problems subsequently, we use k-nearest neighbor (knn) weights instead of kernel weights. The advantage of using knn weights is that local power is invariant to transformations of the instruments and that under strong point identification computation of the test statistic yields an efficient estimator of θ0 as a byproduct.
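
As a rough sketch of the knn idea (not the authors' actual statistic), the snippet below averages a moment function over the k nearest neighbors of each instrument value; the toy model, the instrument, the choice k = 10, and the final quadratic-form statistic are all illustrative assumptions.

```python
import numpy as np

def knn_moment_average(z, m, k):
    """Average the moment values m[j] over the k nearest neighbors of each
    instrument value z[i] (Euclidean distance, own observation excluded)."""
    z = z.reshape(len(z), -1)
    out = np.empty(len(z))
    for i in range(len(z)):
        d = np.linalg.norm(z - z[i], axis=1)
        nbrs = np.argsort(d)
        nbrs = nbrs[nbrs != i][:k]
        out[i] = m[nbrs].mean()
    return out

rng = np.random.default_rng(0)
n, theta = 500, 1.0                    # sample size, candidate parameter value
z = rng.normal(size=n)                 # instrument
y = theta * z + rng.normal(size=n)     # toy model: y = theta*z + u
m = y - theta * z                      # E[m | z] = 0 under the null at theta
mhat = knn_moment_average(z, m, k=10)
print(n * np.mean(mhat**2))            # illustrative quadratic-form statistic
```

Note that in this one-dimensional example the neighbor sets, and hence the weights, are unchanged under strictly increasing transformations of z, which is what drives the invariance of local power mentioned in the abstract.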

2.
In many manufacturing and service industries, the quality department of the organization works continuously to ensure that the mean or location of the process is close to the target value. In order to understand the process, it is necessary to provide numerical statements of the processes that are being investigated. That is why the researcher needs to check the validity of the hypotheses that are concerned with some physical phenomena. It is usually assumed that the collected data behave well. However, sometimes the data may contain outliers. The presence of one or more outliers might seriously distort the statistical inference. Since the sample mean is very sensitive to outliers, this research will use the smooth adaptive (SA) estimator to estimate the population mean. The SA estimator will be used to construct testing procedures, called smooth adaptive test (SA test), for testing various null hypotheses. A Monte Carlo study is used to simulate the values of the probability of a Type I error and the power of the SA test. This is accomplished by constructing confidence intervals of the process mean by using the SA estimator and bootstrap methods. The SA test will be compared with other tests such as the normal test, t test and a nonparametric statistical method, namely, the Wilcoxon signed-rank test. Also, the cases with and without outliers will be considered. For the right-skewed distributions, the SA test is the best choice. When the population is a right-skewed distribution with one outlier, the SA test controls the probability of a Type I error better than other tests and is recommended.
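
A minimal sketch of the ingredients, with a 10% trimmed mean standing in for the SA estimator (whose exact form the abstract does not spell out); the exponential population, the injected outlier, and the hypothesized mean mu0 are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu0 = 1.0                                    # hypothesized process mean
x = rng.exponential(scale=1.0, size=30)      # right-skewed sample, true mean 1
x[0] = 8.0                                   # inject a single outlier

t_p = stats.ttest_1samp(x, mu0).pvalue       # classical t test
w_p = stats.wilcoxon(x - mu0).pvalue         # Wilcoxon signed-rank test

# Percentile bootstrap CI for a robust location estimate (the trimmed mean
# here is only a stand-in for the SA estimator)
boot = np.array([stats.trim_mean(rng.choice(x, size=len(x), replace=True), 0.1)
                 for _ in range(2000)])
print(t_p, w_p, np.percentile(boot, [2.5, 97.5]))
```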

3.
Theoretical models of multi-unit, uniform-price auctions assume that the price is given by the highest losing bid. In practice, however, the price is usually given by the lowest winning bid. We derive the equilibrium bidding function of the lowest-winning-bid auction when there are k objects for sale and n bidders with unit demand, and prove that it converges to the bidding function of the highest-losing-bid auction if and only if the number of losers n − k gets large. When the number of losers grows large, the bidding functions converge at a linear rate and the prices in the two auctions converge in probability to the expected value of an object to the marginal winner.
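
The convergence claim can be checked numerically under the simplifying assumption of truthful bidding (so the equilibrium bid shading of the lowest-winning-bid format is ignored): with k fixed, the k-th and (k+1)-th highest of n uniform valuations drift together as the number of losers n − k grows.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 5                                    # objects for sale
for n in (10, 50, 200, 1000):            # bidders, hence n - k losers
    v = np.sort(rng.uniform(size=(5000, n)), axis=1)
    lowest_winning = v[:, n - k]         # k-th highest valuation
    highest_losing = v[:, n - k - 1]     # (k+1)-th highest valuation
    print(n, np.abs(lowest_winning - highest_losing).mean())
```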

4.
Q-learning agents in a Cournot oligopoly model
Q-learning is a reinforcement learning model from the field of artificial intelligence. We study the use of Q-learning for modeling the learning behavior of firms in repeated Cournot oligopoly games. Based on computer simulations, we show that Q-learning firms generally learn to collude with each other, although full collusion usually does not emerge. We also present some analytical results. These results provide insight into the underlying mechanism that causes collusive behavior to emerge. Q-learning is one of the few learning models available that can explain the emergence of collusive behavior in settings in which there is no punishment mechanism and no possibility for explicit communication between firms.
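
A stateless (single-state) sketch of two Q-learning quantity setters in a linear Cournot market; the demand and cost parameters, the quantity grid, and the learning constants are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, c = 40.0, 1.0, 4.0                 # inverse demand P = a - b*Q, unit cost c
actions = np.arange(0, 20)               # quantity grid for each firm
alpha, gamma, eps = 0.1, 0.9, 0.1        # learning rate, discount, exploration
Q = np.zeros((2, len(actions)))          # one row of Q-values per firm (stateless)

for _ in range(50_000):
    qty = np.array([rng.integers(len(actions)) if rng.random() < eps
                    else int(np.argmax(Q[i])) for i in range(2)])
    price = max(a - b * actions[qty].sum(), 0.0)
    for i in range(2):                   # profit is the reward; standard Q-update
        r = (price - c) * actions[qty[i]]
        Q[i, qty[i]] += alpha * (r + gamma * Q[i].max() - Q[i, qty[i]])

print("learned quantities:", actions[np.argmax(Q, axis=1)])
# Benchmarks: Cournot-Nash is q = (a - c) / (3b) = 12 each;
# the joint-profit maximum is 9 each.
```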

5.
We consider the problem of testing the null hypothesis of no change against the alternative of multiple change points in a series of independent observations when the changes are in the same direction. We extend the tests of Terpstra (1952), Jonckheere (1954) and Puri (1965) to the change point setup. We obtain the asymptotic null distribution of the proposed tests. We also give approximations for their limiting critical values and tables of their finite sample Monte Carlo critical values. The results of Monte Carlo power studies conducted to compare the proposed tests with some competitors are reported. This research was supported by research grant SS045 of Kuwait University. Acknowledgments. We wish to thank the two referees for their comments and suggestions, which have significantly improved the presentation. We are particularly thankful to one of the referees for suggesting the test statistics Tn1*(k) and Tn2*(k).
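
The classical Jonckheere-Terpstra building block is simple to compute; the sketch below applies it to segments defined by candidate change points. The paper's statistics Tn1*(k) and Tn2*(k) are constructed differently; this shows only the pairwise-comparison core, and the data and change points are assumed.

```python
import numpy as np

def jt_statistic(samples):
    """Jonckheere-Terpstra statistic for ordered groups: sum over group
    pairs (i < j) of the count of observation pairs with x_i < x_j."""
    stat = 0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            stat += (samples[i][:, None] < samples[j][None, :]).sum()
    return stat

rng = np.random.default_rng(4)
# Two upward mean shifts, at t = 40 and t = 70
x = np.concatenate([rng.normal(0, 1, 40), rng.normal(1, 1, 30),
                    rng.normal(2, 1, 30)])
# Segments between candidate change points are treated as ordered groups
print(jt_statistic([x[:40], x[40:70], x[70:]]))
```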

6.
Subsampling and the m out of n bootstrap have been suggested in the literature as methods for carrying out inference based on post-model selection estimators and shrinkage estimators. In this paper we consider a subsampling confidence interval (CI) that is based on an estimator that can be viewed either as a post-model selection estimator that employs a consistent model selection procedure or as a super-efficient estimator. We show that the subsampling CI (of nominal level 1 − α for any α ∈ (0, 1)) has asymptotic confidence size (defined to be the limit of finite-sample size) equal to zero in a very simple regular model. The same result holds for the m out of n bootstrap provided m²/n → 0 and the observations are i.i.d. Similar zero-asymptotic-confidence-size results hold in more complicated models that are covered by the general results given in the paper and for super-efficient and shrinkage estimators that are not post-model selection estimators. Based on these results, subsampling and the m out of n bootstrap are not recommended for obtaining inference based on post-consistent model selection or shrinkage estimators.
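
The coverage failure is easy to reproduce with a Hodges-type super-efficient estimator and a true mean drifting at the n^(-1/2) scale; the sample sizes, subsample size, and thresholds below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def hodges(x):
    """Super-efficient (Hodges-type) estimator of a normal mean."""
    m = x.mean()
    return m if abs(m) > len(x) ** -0.25 else 0.0

n, b, reps = 1000, 50, 500                  # sample size, subsample size, MC reps
theta = 1.0 / np.sqrt(n)                    # local (drifting) true mean
cover = 0
for _ in range(reps):
    x = rng.normal(theta, 1.0, n)
    est = hodges(x)
    # subsampling distribution of sqrt(b) * (hodges(subsample) - est)
    roots = np.array([np.sqrt(b) * (hodges(rng.choice(x, b, replace=False)) - est)
                      for _ in range(200)])
    lo, hi = np.percentile(roots, [2.5, 97.5])
    if est - hi / np.sqrt(n) <= theta <= est - lo / np.sqrt(n):
        cover += 1
print("empirical coverage:", cover / reps)   # far below the nominal 95%
```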

7.
This paper develops an asymptotic theory for test statistics in linear panel models that are robust to heteroskedasticity, autocorrelation and/or spatial correlation. Two classes of standard errors are analyzed. Both are based on nonparametric heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators. The first class is based on averages of HAC estimators across individuals in the cross-section, i.e. “averages of HACs”. This class includes the well known cluster standard errors analyzed by Arellano (1987) as a special case. The second class is based on the HAC of cross-section averages and was proposed by Driscoll and Kraay (1998). The “HAC of averages” standard errors are robust to heteroskedasticity, serial correlation and spatial correlation, but weak dependence in the time dimension is required. The “averages of HACs” standard errors are robust to heteroskedasticity and serial correlation, including the nonstationary case, but they are not valid in the presence of spatial correlation. The main contribution of the paper is to develop a fixed-b asymptotic theory for statistics based on both classes of standard errors in models with individual and possibly time fixed-effects dummy variables. The asymptotics is carried out for large time sample sizes for both fixed and large cross-section sample sizes. Extensive simulations show that the fixed-b approximation is usually much better than the traditional normal or chi-square approximation, especially for the Driscoll-Kraay standard errors. The use of fixed-b critical values will lead to more reliable inference in practice, especially for tests of joint hypotheses.
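
A bare-bones numpy version of the "HAC of averages" (Driscoll-Kraay) standard errors with a Bartlett kernel; the bandwidth, the panel DGP, and the omission of small-sample corrections and of the fixed-b critical values discussed in the paper are all simplifications.

```python
import numpy as np

def driscoll_kraay_se(X, u, n, T, bw):
    """'HAC of averages': Bartlett-kernel HAC applied to the cross-section
    sums h_t = sum_i x_{it} u_{it} of the score contributions."""
    h = (X * u[:, None]).reshape(n, T, -1).sum(axis=0)   # (T, k)
    S = h.T @ h / T
    for l in range(1, bw + 1):
        w = 1.0 - l / (bw + 1.0)
        G = h[l:].T @ h[:-l] / T
        S += w * (G + G.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    V = T * XtX_inv @ S @ XtX_inv
    return np.sqrt(np.diag(V))

rng = np.random.default_rng(6)
n, T = 50, 100
f = rng.normal(size=T)                        # common shock: spatial correlation
x = rng.normal(size=(n, T)) + f               # regressor loads on the shock
u = rng.normal(size=(n, T)) + f               # so do the errors
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n * T), x.ravel()])
beta = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
resid = y.ravel() - X @ beta
print(beta, driscoll_kraay_se(X, resid, n, T, bw=5))
```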

8.
Monte Carlo evidence has made it clear that asymptotic tests based on generalized method of moments (GMM) estimation have disappointing size. The problem is exacerbated when the moment conditions are serially correlated. Several block bootstrap techniques have been proposed to correct the problem, including Hall and Horowitz (1996) and Inoue and Shintani (2006). We propose an empirical likelihood block bootstrap procedure to improve inference where models are characterized by nonlinear moment conditions that are serially correlated of possibly infinite order. Combining the ideas of Kitamura (1997) and Brown and Newey (2002), the parameters of a model are initially estimated by GMM; these estimates are then used to compute the empirical likelihood probability weights of the blocks of moment conditions. The probability weights serve as the multinomial distribution used in resampling. The first-order asymptotic validity of the proposed procedure is proven, and a series of Monte Carlo experiments show it may improve test sizes over conventional block bootstrapping.
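
A simplified scalar version of the reweighting step: blocks of a serially correlated series receive empirical likelihood weights that impose a (here hypothetical) null mean theta0, and whole blocks are then resampled with those probabilities. The AR(1) data, block length, and theta0 are assumptions, and the sketch presumes the block moments take both signs so the root bracket is valid.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(7)
x = np.zeros(400)
for t in range(1, 400):                       # serially correlated data: AR(1)
    x[t] = 0.5 * x[t - 1] + rng.normal()

L, theta0 = 10, 0.3                           # block length; mean to impose
blocks = x.reshape(-1, L)                     # non-overlapping blocks
m = blocks.mean(axis=1) - theta0              # block-level moment averages

# EL weights p_s = 1/(S*(1 + lam*m_s)), with lam solving
# sum_s m_s / (1 + lam*m_s) = 0 on the interval keeping all weights positive
eps = 1e-6
lam = optimize.brentq(lambda l: np.sum(m / (1 + l * m)),
                      -1.0 / m.max() + eps, -1.0 / m.min() - eps)
p = 1.0 / (len(m) * (1 + lam * m))            # weighted block means hit theta0

idx = rng.choice(len(m), size=len(m), p=p)    # multinomial resampling of blocks
x_star = blocks[idx].ravel()                  # one bootstrap series
print(lam, x_star.mean())
```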

9.
The problem of testing non-nested regression models that include lagged values of the dependent variable as regressors is discussed. It is argued that it is essential to test for error autocorrelation if ordinary least squares and the associated J and F tests are to be used. A heteroskedasticity-robust joint test against a combination of the artificial alternatives used for autocorrelation and non-nested hypothesis tests is proposed. Monte Carlo results indicate that implementing this joint test using a wild bootstrap method leads to a well-behaved procedure and gives better control of finite sample significance levels than asymptotic critical values.
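
Not the paper's joint test, but the wild bootstrap mechanics it relies on, shown for a single heteroskedasticity-robust t-statistic with Rademacher multipliers; the regression model and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + np.abs(x) * rng.normal(size=n)   # heteroskedastic errors

def hc_tstat(y, X, j):
    """Heteroskedasticity-robust (HC0) t-statistic for coefficient j."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    u = y - X @ b
    V = XtX_inv @ (X.T * u**2) @ X @ XtX_inv
    return b[j] / np.sqrt(V[j, j])

X = np.column_stack([np.ones(n), x])
t_obs = hc_tstat(y, X, 1)

# Wild bootstrap under H0: beta_1 = 0, with Rademacher multipliers
X0 = X[:, :1]
b0 = np.linalg.lstsq(X0, y, rcond=None)[0]
u0 = y - X0 @ b0
t_star = np.array([hc_tstat(X0 @ b0 + u0 * rng.choice([-1.0, 1.0], n), X, 1)
                   for _ in range(999)])
print(t_obs, np.mean(np.abs(t_star) >= np.abs(t_obs)))   # bootstrap p-value
```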

10.
This paper is concerned with the Bayesian analysis of stochastic volatility (SV) models with leverage. Specifically, the paper shows how the often used Kim et al. [1998. Stochastic volatility: likelihood inference and comparison with ARCH models. Review of Economic Studies 65, 361–393] method that was developed for SV models without leverage can be extended to models with leverage. The approach relies on the novel idea of approximating the joint distribution of the outcome and volatility innovations by a suitably constructed ten-component mixture of bivariate normal distributions. The resulting posterior distribution is summarized by MCMC methods and the small approximation error in working with the mixture approximation is corrected by a reweighting procedure. The overall procedure is fast and highly efficient. We illustrate the ideas on daily returns of the Tokyo Stock Price Index. Finally, extensions of the method are described for superposition models (where the log-volatility is made up of a linear combination of heterogeneous and independent autoregressions) and heavy-tailed error distributions (Student's t and log-normal).
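
Simulating the SV-with-leverage data generating process is straightforward (the mixture-approximation MCMC itself is beyond a short sketch); a negative correlation rho between return and volatility innovations is the leverage channel, and all parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 1000
mu, phi, sigma_eta, rho = -1.0, 0.97, 0.15, -0.4   # rho < 0: leverage effect

h = np.empty(T); h[0] = mu                          # log-volatility
y = np.empty(T)                                     # returns
for t in range(T):
    eps = rng.normal()                              # return innovation
    # volatility innovation correlated with eps (the leverage channel)
    eta = rho * eps + np.sqrt(1 - rho**2) * rng.normal()
    y[t] = np.exp(h[t] / 2) * eps
    if t + 1 < T:
        h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * eta

# Negative correlation between today's return and tomorrow's volatility change
print(np.corrcoef(y[:-1], np.diff(np.exp(h / 2)))[0, 1])
```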

11.
In this paper we compare alternative asymptotic approximations to the power of the likelihood ratio test used in covariance structure analysis for testing the fit of a model. Alternative expressions for the noncentrality parameter (ncp) lead to different approximations to the power function. It appears that for alternative covariance matrices close to the null hypothesis the alternative ncps lead to similar values, while for alternative covariance matrices far from H0 the different expressions for the ncp can differ substantially. Monte Carlo evidence shows that the ncp proposed in Satorra and Saris (1985) gives the most accurate power approximations.
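
Whichever expression is used for the ncp, the power approximation itself is a noncentral chi-square tail probability. A minimal sketch, with the degrees of freedom and candidate ncp values assumed:

```python
from scipy import stats

df = 10                                   # degrees of freedom of the LR test
crit = stats.chi2.ppf(0.95, df)           # 5% critical value under H0
for ncp in (1.0, 5.0, 10.0, 20.0):        # candidate noncentrality parameters
    power = 1 - stats.ncx2.cdf(crit, df, ncp)
    print(ncp, round(power, 3))
```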

12.
In this paper, a bootstrap algorithm for a reduced rank vector autoregressive (VAR) model, which also includes stationary regressors, is analyzed. It is shown that the bootstrap distribution for estimating the rank converges to the distribution derived from the usual asymptotic framework. Because the asymptotic distribution will typically depend on unknown parameters, bootstrap distributions are of considerable interest in this context. The results of an application and some Monte Carlo experiments are also presented.

13.
In this paper, we consider bootstrapping cointegrating regressions. It is shown that the method of bootstrap, if properly implemented, generally yields consistent estimators and test statistics for cointegrating regressions. For the cointegrating regression models driven by general linear processes, we employ the sieve bootstrap based on the approximated finite-order vector autoregressions for the regression errors and the first differences of the regressors. In particular, we establish the bootstrap consistency for the OLS method. The bootstrap method can thus be used to correct for the finite sample bias of the OLS estimator and to approximate the asymptotic critical values of the OLS-based test statistics in general cointegrating regressions. The bootstrap OLS procedure, however, is not efficient. For efficient estimation and hypothesis testing, we consider the procedure proposed by Saikkonen [1991. Asymptotically efficient estimation of cointegration regressions. Econometric Theory 7, 1–21] and Stock and Watson [1993. A simple estimator of cointegrating vectors in higher order integrated systems. Econometrica 61, 783–820] relying on the regression augmented with the leads and lags of differenced regressors. The bootstrap versions of their procedures are shown to be consistent, and can be used to carry out asymptotically valid inference. A Monte Carlo study is conducted to investigate the finite sample performances of the proposed bootstrap methods.
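
A compressed sketch of the OLS sieve bootstrap, simplified to a scalar AR(p) fitted to the regression residuals with the regressor path held fixed (the paper's version fits a joint VAR to the errors and the differenced regressors); the lag order and the DGP are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
T, p = 200, 4
x = np.cumsum(rng.normal(size=T))                  # I(1) regressor
u = np.zeros(T)
for t in range(1, T):                              # AR(1) cointegration error
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

ols = lambda y, X: np.linalg.lstsq(X, y, rcond=None)[0]
X = np.column_stack([np.ones(T), x])
b = ols(y, X)
uhat = y - X @ b

# Sieve step: fit a finite-order AR(p) to the residuals, keep its innovations
Z = np.column_stack([uhat[p - j:T - j] for j in range(1, p + 1)])
a = ols(uhat[p:], Z)
e = uhat[p:] - Z @ a
e -= e.mean()

beta_star = []
for _ in range(500):
    es = rng.choice(e, size=T + 50, replace=True)  # resample innovations
    us = np.zeros(T + 50)
    for t in range(p, T + 50):                     # rebuild errors from the sieve
        us[t] = a @ us[t - p:t][::-1] + es[t]
    beta_star.append(ols(X @ b + us[-T:], X)[1])   # bootstrap regression, same x
print(b[1], np.percentile(beta_star, [2.5, 97.5]))
```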

14.
This paper develops an estimation and testing framework for a stationary large panel model with observable regressors and unobservable common factors. We allow for slope heterogeneity and for correlation between the common factors and the regressors. We propose a two stage estimation procedure for the unobservable common factors and their loadings, based on the Common Correlated Effects estimator and the Principal Component estimator. We also develop two tests for the null of no factor structure: one for the null that the loadings are cross-sectionally homogeneous, and one for the null that the common factors are homogeneous over time. Our tests are based on the extremes of the estimated loadings and common factors. The test statistics have an asymptotic Gumbel distribution under the null, and have power against alternatives where only one loading or common factor differs from the others. Monte Carlo evidence shows that the tests have the correct size and good power.

15.
16.
The problem of comparing the precisions of two instruments using repeated measurements can be cast as an extension of the Pitman-Morgan problem of testing equality of variances of a bivariate normal distribution. Hawkins (1981) decomposes the hypothesis of equal variances in this model into two subhypotheses for which simple tests exist. For the overall hypothesis he proposes to combine the tests of the subhypotheses using Fisher's method and empirically compares the component tests and their combination with the likelihood ratio test. In this paper an attempt is made to resolve some discrepancies and puzzling conclusions in Hawkins's study and to propose simple modifications.
The new tests are compared to the tests discussed by Hawkins and to each other both in terms of the finite sample power (estimated by Monte Carlo simulation) and theoretically in terms of asymptotic relative efficiencies.
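
For reference, the Pitman-Morgan reduction itself is one line: var(X) = var(Y) in a bivariate normal if and only if corr(X + Y, X − Y) = 0, which a paired-data t-statistic can test. The measurement-error DGP below is assumed.

```python
import numpy as np
from scipy import stats

def pitman_morgan(x, y):
    """Test H0: var(X) = var(Y) for paired bivariate-normal data via the
    correlation between sums and differences (zero iff equal variances)."""
    n = len(x)
    r = np.corrcoef(x + y, x - y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r**2))
    return t, 2 * stats.t.sf(abs(t), n - 2)

rng = np.random.default_rng(11)
true_vals = rng.normal(size=200)               # quantity measured twice
x = true_vals + rng.normal(0, 0.5, 200)        # instrument 1
y = true_vals + rng.normal(0, 0.8, 200)        # instrument 2, less precise
print(pitman_morgan(x, y))
```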

17.
Process capability analysis is an effective means of measuring process performance and potential capability. In the service industries, process capability indices (PCIs) are utilized to assess whether business quality meets the required level. Hence, the performance index C_L is used as a means of measuring business performance, where L is the lower specification limit. Using a data transformation, this study constructs a uniformly minimum variance unbiased estimator (UMVUE) of C_L based on a right type II censored sample from the Pareto distribution. The UMVUE of C_L is then utilized to develop a novel hypothesis testing procedure when L is known. Finally, we give a practical example and a Monte Carlo simulation to assess the behavior of this test statistic for testing the null hypothesis at a given significance level. Managers can then employ the new testing procedure to determine whether business performance adheres to the required level.
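
A sketch of the data-transformation step for the censored Pareto sample (the UMVUE of C_L itself is derived in the paper): with known scale sigma, log(X/sigma) is exponential, so the standard censored-exponential shape estimators apply. The sample size, censoring point, and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(12)
n, r = 50, 35                                # sample size, observed order stats
alpha, sigma = 2.5, 1.0                      # Pareto shape; known scale
# Classical Pareto draws; keep the r smallest (right type II censoring)
x = np.sort(sigma * (rng.pareto(alpha, n) + 1))[:r]

# Transformation: log(X/sigma) ~ Exponential(alpha), so use the censored
# exponential "total time on test"
y = np.log(x / sigma)
T = y.sum() + (n - r) * y[-1]
print(r / T, (r - 1) / T)                    # MLE and unbiased shape estimates
```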

18.
In this paper we consider the issue of unit root testing in cross-sectionally dependent panels. We consider panels that may be characterized by various forms of cross-sectional dependence including (but not exclusive to) the popular common factor framework. We consider block bootstrap versions of the group-mean (Im et al., 2003) and the pooled (Levin et al., 2002) unit root coefficient DF tests for panel data, originally proposed for a setting of no cross-sectional dependence beyond a common time effect. The tests, suited for testing for unit roots in the observed data, can be easily implemented as no specification or estimation of the dependence structure is required. Asymptotic properties of the tests are derived for T going to infinity and N finite. Asymptotic validity of the bootstrap tests is established in very general settings, including the presence of common factors and cointegration across units. Properties under the alternative hypothesis are also considered. In a Monte Carlo simulation, the bootstrap tests are found to have rejection frequencies that are much closer to nominal size than the rejection frequencies for the corresponding asymptotic tests. The power properties of the bootstrap tests appear to be similar to those of the asymptotic tests.
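
The key implementation idea, resampling time blocks of first differences while keeping each cross-section intact so that the dependence across units is preserved, can be sketched for a pooled DF coefficient test; deterministic terms and the exact studentization of the original tests are omitted, and the factor DGP is assumed.

```python
import numpy as np

rng = np.random.default_rng(13)
N, T, L = 10, 100, 8                       # units, periods, block length

f = np.cumsum(rng.normal(size=T))          # common factor: cross-unit dependence
y = np.cumsum(rng.normal(size=(N, T)), axis=1) + f   # unit-root panel

def pooled_df_coef(y):
    """Pooled DF coefficient: stacked regression of diff(y) on lagged y."""
    dy, ylag = np.diff(y, axis=1).ravel(), y[:, :-1].ravel()
    return (ylag @ dy) / (ylag @ ylag)

stat = pooled_df_coef(y)

# Resample time blocks of the differences, whole cross-section at a time
dy = np.diff(y, axis=1)
stats_star = []
for _ in range(499):
    starts = rng.integers(0, T - 1 - L, size=(T - 1) // L + 1)
    idx = np.concatenate([np.arange(s, s + L) for s in starts])[:T - 1]
    y_star = np.concatenate([np.zeros((N, 1)),
                             np.cumsum(dy[:, idx], axis=1)], axis=1)
    stats_star.append(pooled_df_coef(y_star))
print(stat, np.percentile(stats_star, 5))  # one-sided 5% bootstrap critical value
```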

19.
The t regression models provide a useful extension of the normal regression models for datasets involving errors with longer-than-normal tails. Homogeneity of variances (if they exist) is a standard assumption in t regression models. However, this assumption is not necessarily appropriate. This paper is devoted to tests for heteroscedasticity in general t linear regression models. The asymptotic properties of the score tests, including their asymptotic chi-square distribution and approximate powers under local alternatives, are studied. Based on the modified profile likelihood (Cox and Reid in J R Stat Soc Ser B 49(1):1–39, 1987), an adjusted score test for heteroscedasticity is developed. The properties of the score test and its adjustment are investigated through Monte Carlo simulations. The test methods are illustrated with land rent data (Weisberg in Applied linear regression. Wiley, New York, 1985). This work was supported by NSFC grant 10671032, China, and a grant (HKBU2030/07P) from the Research Grants Council of Hong Kong, China.
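
As a simplified Gaussian analogue of the score test (the paper's version is built on the t likelihood and a modified-profile-likelihood adjustment), Koenker's studentized nR² statistic can be computed as below; the heteroskedastic DGP with t errors is assumed.

```python
import numpy as np
from scipy import stats

def score_test_het(y, X, Z):
    """Koenker's studentized score (Breusch-Pagan) test: regress the squared
    OLS residuals on the variance covariates Z; n*R^2 is asympt. chi-square."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    u2 = (y - X @ b) ** 2
    g = np.linalg.lstsq(Z, u2, rcond=None)[0]
    r2 = 1 - np.sum((u2 - Z @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    lm = len(y) * r2
    return lm, stats.chi2.sf(lm, Z.shape[1] - 1)

rng = np.random.default_rng(14)
n = 200
x = rng.uniform(1, 3, n)
y = 1 + 2 * x + x * rng.standard_t(df=6, size=n)   # variance grows with x
X = np.column_stack([np.ones(n), x])
print(score_test_het(y, X, X))
```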

20.
We develop a sequential procedure to test the adequacy of jump-diffusion models for return distributions. We rely on intraday data and nonparametric volatility measures, along with a new jump detection technique and appropriate conditional moment tests, for assessing the import of jumps and leverage effects. A novel robust-to-jumps approach is utilized to alleviate microstructure frictions for realized volatility estimation. Size and power of the procedure are explored through Monte Carlo methods. Our empirical findings support the jump-diffusive representation for S&P500 futures returns but reveal it is critical to account for leverage effects and jumps to maintain the underlying semi-martingale assumption.
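
A Huang-Tauchen-style ratio test built from realized variance and jump-robust bipower variation conveys the flavor of the jump-detection step (the paper's own sequential procedure and its microstructure corrections differ); the simulated trading day and the jump size are assumptions.

```python
import numpy as np
from scipy import special, stats

rng = np.random.default_rng(15)
M = 390                                        # one-minute returns in a day
r = 0.01 / np.sqrt(M) * rng.normal(size=M)     # diffusive part, 1% daily vol
r[200] += 0.02                                 # inject a single intraday jump

rv = np.sum(r**2)                              # realized variance (with jumps)
bpv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation

# Tripower quarticity and the ratio jump statistic
mu43 = 2**(2/3) * special.gamma(7/6) / special.gamma(1/2)
tq = M * mu43**-3 * np.sum((np.abs(r[2:]) * np.abs(r[1:-1])
                            * np.abs(r[:-2]))**(4/3))
theta = np.pi**2 / 4 + np.pi - 5               # asymptotic variance constant
z = (rv - bpv) / rv / np.sqrt(theta / M * max(1.0, tq / bpv**2))
print(z, stats.norm.sf(z))                     # large z rejects "no jump"
```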
