Similar Literature
20 similar documents were found.
1.
Microeconomic data often have within‐cluster dependence, which affects standard error estimation and inference. When the number of clusters is small, asymptotic tests can be severely oversized. In the instrumental variables (IV) model, the potential presence of weak instruments further complicates hypothesis testing. We use wild bootstrap methods to improve inference in two empirical applications with these characteristics. Building from estimating equations and residual bootstraps, we identify variants robust to the presence of weak instruments and a small number of clusters. They reduce absolute size bias significantly and demonstrate that the wild bootstrap should join the standard toolkit in IV and cluster‐dependent models.
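A minimal sketch of the wild cluster bootstrap idea discussed above, shown for an OLS t-statistic with few clusters (the IV case adds a first stage); the simulated data, the number of clusters, and the Rademacher weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative clustered data with few clusters; true slope is zero
G, n_g = 8, 30
cluster = np.repeat(np.arange(G), n_g)
x = rng.normal(size=G * n_g) + rng.normal(size=G)[cluster]
y = 1.0 + 0.0 * x + rng.normal(size=G)[cluster] + rng.normal(size=G * n_g)

def ols_t(y, x, cluster, beta0=0.0):
    """t-statistic for H0: slope = beta0 with a cluster-robust (CR0) variance."""
    X = np.column_stack([np.ones_like(x), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    meat = np.zeros((2, 2))
    for g in np.unique(cluster):
        sg = X[cluster == g].T @ u[cluster == g]
        meat += np.outer(sg, sg)
    V = XtX_inv @ meat @ XtX_inv
    return (beta[1] - beta0) / np.sqrt(V[1, 1])

t_obs = ols_t(y, x, cluster)

# Wild cluster bootstrap imposing the null (slope = 0): cluster-level
# Rademacher weights are applied to the restricted residuals.
X0 = np.ones((len(y), 1))
b_r = np.linalg.lstsq(X0, y, rcond=None)[0]
u_r = y - X0 @ b_r
t_boot = []
for _ in range(999):
    w = rng.choice([-1.0, 1.0], size=G)[cluster]   # one weight per cluster
    y_star = X0 @ b_r + w * u_r
    t_boot.append(ols_t(y_star, x, cluster))
p_value = np.mean(np.abs(t_boot) >= np.abs(t_obs))
print(f"t = {t_obs:.2f}, wild cluster bootstrap p-value = {p_value:.3f}")
```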

2.
Robust methods for instrumental variable inference have received considerable attention recently. Their analysis has raised a variety of problematic issues such as size/power trade‐offs resulting from weak or many instruments. We show that information reduction methods provide a useful and practical solution to this and related problems. Formally, we propose factor‐based modifications to three popular weak‐instrument‐robust statistics, and illustrate their validity asymptotically and in finite samples. Results are derived using asymptotic settings that are commonly used in both the factor and weak‐instrument literature. For the Anderson–Rubin statistic, we also provide analytical finite‐sample results that do not require any underlying factor structure. An illustrative Monte Carlo study reveals the following. Factor‐based tests control size regardless of instrument and factor quality. All factor‐based tests are systematically more powerful than their standard counterparts. With informative instruments and in contrast to standard tests: (i) the power of factor‐based tests is not affected by k, even when k is large; and (ii) a weak factor structure does not cost power. An empirical study on a New Keynesian macroeconomic model suggests that our factor‐based methods can bridge a number of gaps between structural and statistical modeling. Copyright © 2015 John Wiley & Sons, Ltd.
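A rough sketch of the information-reduction idea: replace a large instrument set by its first few principal components before forming an Anderson–Rubin-type statistic. The simulated data, the number of factors, and the F reference distributions are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k, r = 200, 40, 2                       # many instruments k, few factors r

# Illustrative DGP: the instruments share r common factors that drive x
F = rng.normal(size=(n, r))
Z = F @ rng.normal(size=(r, k)) + rng.normal(size=(n, k))
x = F @ np.array([1.0, -0.5]) + rng.normal(size=n)
beta_true = 0.5
y = beta_true * x + rng.normal(size=n)

def ar_stat(y, x, Z, beta0):
    """Anderson–Rubin-type statistic: project y - x*beta0 on the instruments."""
    e = y - x * beta0
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    num = e @ P @ e / Z.shape[1]
    den = e @ (e - P @ e) / (len(e) - Z.shape[1])
    return num / den

# Factor reduction: first r principal components of the standardized instruments
Zs = (Z - Z.mean(0)) / Z.std(0)
_, _, Vt = np.linalg.svd(Zs, full_matrices=False)
F_hat = Zs @ Vt[:r].T

ar_full = ar_stat(y, x, Z, beta0=beta_true)        # all k instruments
ar_factor = ar_stat(y, x, F_hat, beta0=beta_true)  # r estimated factors
print("AR (all instruments): p =", 1 - stats.f.cdf(ar_full, k, n - k))
print("AR (factors)        : p =", 1 - stats.f.cdf(ar_factor, r, n - r))
```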

3.
Social and economic scientists are tempted to use emerging data sources like big data to compile information about finite populations as an alternative to traditional survey samples. These data sources generally cover an unknown part of the population of interest. Simply assuming that analyses made on these data are applicable to larger populations is wrong. The mere volume of data provides no guarantee of valid inference. Tackling this problem with methods originally developed for probability sampling is possible but shown here to be limited. A wider range of model‐based predictive inference methods proposed in the literature is reviewed and evaluated in a simulation study using real‐world data on annual mileages by vehicles. We propose to extend this predictive inference framework with machine learning methods for inference from samples that are generated through mechanisms other than random sampling from a target population. Describing economies and societies using sensor data, internet search data, social media and voluntary opt‐in panels is cost‐effective and timely compared with traditional surveys but requires an extended inference framework, as proposed in this article.
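A minimal sketch of the model-based predictive approach: fit a learner on the covered (non-probability) part of a population register and predict the study variable for the uncovered part. The vehicle-style variables, the selective-coverage mechanism, and the random forest are illustrative assumptions, not the article's simulation design.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Illustrative population register: auxiliary variables known for every unit
N = 20_000
X_pop = np.column_stack([
    rng.integers(1, 25, size=N),          # e.g. vehicle age (years)
    rng.normal(100, 30, size=N),          # e.g. engine power
])
y_pop = 15_000 - 400 * X_pop[:, 0] + 20 * X_pop[:, 1] + rng.normal(0, 1_500, N)

# Non-probability coverage: newer vehicles are over-represented in the source
p_obs = 1 / (1 + np.exp(0.25 * (X_pop[:, 0] - 8)))
observed = rng.random(N) < p_obs

# Naive estimate ignores the selective coverage; the predictive estimate
# models the study variable and predicts it for the uncovered units
naive_mean = y_pop[observed].mean()
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_pop[observed], y_pop[observed])
y_hat = y_pop.copy()
y_hat[~observed] = model.predict(X_pop[~observed])
predictive_mean = y_hat.mean()

print(f"true mean       {y_pop.mean():9.1f}")
print(f"naive mean      {naive_mean:9.1f}   (biased by coverage)")
print(f"predictive mean {predictive_mean:9.1f}")
```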

4.
Long‐horizon predictive regressions in finance pose formidable econometric problems when estimated using available sample sizes. Hodrick in 1992 proposed a remedy that is based on running a reverse regression of short‐horizon returns on the long‐run mean of the predictor. Unfortunately, this only allows the null of no predictability to be tested, and assumes stationary regressors. In this paper, we revisit long‐horizon forecasting from reverse regressions, and argue that reverse regression methods avoid serious size distortions in long‐horizon predictive regressions, even when there is some predictability and/or near unit roots. Meanwhile, the reverse regression methodology has the practical advantage of being easily applicable when there are many predictors. We apply these methods to forecasting excess bond returns using the term structure of forward rates, and find that there is indeed some return forecastability. However, confidence intervals for the coefficients of the predictive regressions are about twice as wide as those obtained with the conventional approach to inference. We also include an application to forecasting excess stock returns. Copyright © 2011 John Wiley & Sons, Ltd.
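A stylized sketch of the reverse-regression idea: instead of regressing overlapping k-period cumulative returns on the predictor, regress the one-period return on the k-period backward sum of the predictor, which avoids overlapping errors under the null. The simulated persistent predictor, horizon, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 600, 12                        # sample size, forecast horizon

# Illustrative data: persistent predictor, unpredictable one-period returns (null)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.97 * x[t - 1] + rng.normal()
r = rng.normal(size=T)

# Standard long-horizon regression: overlapping k-period returns on x_t
r_long = np.convolve(r, np.ones(k), mode="valid")[1:]   # r_{t+1}+...+r_{t+k}
x_long = x[: T - k]
b_long = np.polyfit(x_long, r_long, 1)[0]

# Reverse (Hodrick-type) regression: one-period return on the k-period
# backward sum of the predictor; errors do not overlap under the null.
x_sum = np.convolve(x, np.ones(k), mode="valid")        # x_t + ... + x_{t-k+1}
r_fwd = r[k:]                                           # r_{t+1}
X = np.column_stack([np.ones(len(r_fwd)), x_sum[:-1]])
b_rev, *_ = np.linalg.lstsq(X, r_fwd, rcond=None)
print("long-horizon slope:", b_long, " reverse-regression slope:", b_rev[1])
```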

5.
We investigate the issue of model uncertainty in cross‐country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is spread widely among many models, suggesting the superiority of BMA over choosing any single model. Out‐of‐sample predictive results support this claim. In contrast to Levine and Renelt (1992), our results broadly support the more ‘optimistic’ conclusion of Sala‐i‐Martin (1997b), namely that some variables are important regressors for explaining cross‐country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference. Copyright © 2001 John Wiley & Sons, Ltd.
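A compact sketch of the model-averaging logic: enumerate candidate regressor subsets, weight each model by an approximate posterior model probability, and average coefficients. The BIC approximation to marginal likelihoods, the simulated data, and the small regressor pool are illustrative assumptions; the paper's own priors and model space differ.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, K = 80, 6                                     # countries, candidate regressors
X = rng.normal(size=(n, K))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=n)   # only x1 and x3 matter

def bic_fit(y, Xm):
    """BIC of an OLS fit with intercept; a rough marginal-likelihood proxy."""
    Z = np.column_stack([np.ones(len(y)), Xm]) if Xm.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + Z.shape[1] * np.log(len(y)), beta

models, bics, betas = [], [], []
for r in range(K + 1):
    for subset in itertools.combinations(range(K), r):
        b, beta = bic_fit(y, X[:, list(subset)])
        models.append(subset)
        bics.append(b)
        betas.append(dict(zip(subset, beta[1:])))

# Posterior model probabilities proportional to exp(-BIC/2)
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()

# Posterior inclusion probability and model-averaged coefficient per regressor
for j in range(K):
    incl = sum(wi for wi, m in zip(w, models) if j in m)
    coef = sum(wi * betas[i].get(j, 0.0) for i, wi in enumerate(w))
    print(f"x{j + 1}: P(include) = {incl:.2f}, BMA coefficient = {coef:.2f}")
```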

6.
A randomized two‐stage adaptive design is proposed and studied for the allocation of patients to treatments and their comparison in a phase III clinical trial with survival time as the treatment response. We allow for several covariates in both the design and the analysis. Several exact and limiting properties of the design and the follow‐up inference are studied, both numerically and theoretically. The applicability of the proposed methodology is illustrated using real data.

7.
Two measures of an error‐ridden variable make it possible to solve the classical errors‐in‐variables problem by using one measure as an instrument for the other. It is well known that a second IV estimate can be obtained by reversing the roles of the two measures. We explore the optimal linear combination of these two estimates. In a Monte Carlo study, we show that the gain in precision is significant. The proposed estimator also compares well with full‐information maximum likelihood under normality. We illustrate the method by estimating the capital elasticity in the Norwegian ICT industry.
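A toy sketch of the two-measurement idea: use each error-ridden measure as an instrument for the other and combine the two IV slopes. The simulated data are illustrative, and the inverse-variance weighting shown here (which ignores the covariance between the two estimates) is only a simple stand-in for the optimal linear combination derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2_000
xi = rng.normal(size=n)                    # true (latent) regressor
y = 1.0 + 0.7 * xi + rng.normal(scale=0.5, size=n)
x1 = xi + rng.normal(scale=0.8, size=n)    # first noisy measurement
x2 = xi + rng.normal(scale=0.8, size=n)    # second noisy measurement

def iv_slope(y, x, z):
    """Simple IV slope using z as an instrument for x (demeaned variables)."""
    xd, zd, yd = x - x.mean(), z - z.mean(), y - y.mean()
    b = (zd @ yd) / (zd @ xd)
    e = yd - b * xd
    var = np.sum((zd * e) ** 2) / (zd @ xd) ** 2   # rough robust variance
    return b, var

b12, v12 = iv_slope(y, x1, x2)             # x2 instruments x1
b21, v21 = iv_slope(y, x2, x1)             # x1 instruments x2
w = (1 / v12) / (1 / v12 + 1 / v21)        # simple inverse-variance weights
b_comb = w * b12 + (1 - w) * b21

b_ols = np.polyfit(x1, y, 1)[0]            # attenuated by measurement error
print(f"OLS {b_ols:.3f}  IV {b12:.3f} / {b21:.3f}  combined {b_comb:.3f}")
```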

8.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be ‘close’ to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models. The results indicate the possible usefulness of the neural network approach.
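A stripped-down illustration of the general recipe: obtain preliminary draws from a non-elliptical target, fit a flexible candidate density to them, and use that density for importance sampling. A Gaussian mixture stands in here for the neural-network density of the paper, and the banana-shaped target is purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

def log_target(theta):
    """Illustrative non-elliptical ('banana'-shaped) 2D log density (unnormalized)."""
    x, y = theta[:, 0], theta[:, 1]
    return -0.5 * (x ** 2 / 4.0 + (y + 0.5 * x ** 2 - 2.0) ** 2)

# Step 1: cheap preliminary draws via sampling-importance-resampling from a wide normal
prop = rng.normal(scale=4.0, size=(100_000, 2))
lw = log_target(prop) - (-0.5 * np.sum(prop ** 2, axis=1) / 16.0)
w0 = np.exp(lw - lw.max()); w0 /= w0.sum()
prelim = prop[rng.choice(len(prop), size=5_000, p=w0, replace=True)]

# Step 2: fit a flexible candidate density to the preliminary draws
# (a Gaussian mixture here; a neural-network density in the paper)
gm = GaussianMixture(n_components=5, random_state=0).fit(prelim)

# Step 3: importance sampling with the fitted candidate density
draws, _ = gm.sample(50_000)
log_w = log_target(draws) - gm.score_samples(draws)
w = np.exp(log_w - log_w.max()); w /= w.sum()
post_mean = w @ draws
ess = 1.0 / np.sum(w ** 2)
print("posterior mean estimate:", post_mean, " effective sample size:", ess)
```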

9.
Covariate information is often available in randomised clinical trials for each subject prior to treatment assignment and is commonly used to adjust for baseline characteristics predictive of the outcome, in order to increase precision and improve power in the detection of a treatment effect. Motivated by a nonparametric covariance analysis, we study a projection approach to making objective covariate adjustment in randomised clinical trials on the basis of two unbiased estimating functions that decouple the outcome and covariate data. The proposed projection approach extends a weighted least‐squares procedure by projecting one of the estimating functions onto the linear subspace spanned by the other estimating function, which is E‐ancillary for the average treatment effect. Compared with the weighted least‐squares method, the projection method allows for objective inference on the average treatment effect by exploiting the treatment‐specific covariate–outcome associations. The resulting projection‐based estimator of the average treatment effect is asymptotically efficient when the treatment‐specific working regression models are correctly specified and is asymptotically more efficient than other existing competitors when the treatment‐specific working regression models are misspecified. The proposed projection method is illustrated by an analysis of data from an HIV clinical trial. In a simulation study, we show that the proposed projection method compares favourably with its competitors in finite samples.

10.
We present a sequential approach to estimating a dynamic Hausman–Taylor model. We first estimate the coefficients of the time‐varying regressors and subsequently regress the first‐stage residuals on the time‐invariant regressors. In comparison to estimating all coefficients simultaneously, this two‐stage procedure is more robust against model misspecification, allows for a flexible choice of the first‐stage estimator, and enables simple testing of the overidentifying restrictions. For correct inference, we derive analytical standard error adjustments. We evaluate the finite‐sample properties with Monte Carlo simulations and apply the approach to a dynamic gravity equation for US outward foreign direct investment.
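A bare-bones sketch of the two-stage sequence described above in a static panel with an exogenous time-invariant regressor: a within (fixed-effects) first stage for the time-varying coefficient, then a regression of unit-mean residuals on the time-invariant regressor. The data, the choice of first-stage estimator, and the plain OLS second stage are illustrative assumptions; the paper handles dynamics, endogenous time-invariant regressors via instruments, and derives standard-error adjustments.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 300, 6                                   # panel: N units, T periods
alpha = rng.normal(size=N)                      # unit effects
z = rng.normal(size=N)                          # time-invariant regressor (exogenous here)
x = rng.normal(size=(N, T)) + alpha[:, None]    # time-varying regressor
y = 2.0 * x + 1.5 * z[:, None] + alpha[:, None] + rng.normal(size=(N, T))

# Stage 1: within (fixed-effects) estimator for the time-varying coefficient
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta1 = np.sum(xd * yd) / np.sum(xd * xd)

# Stage 2: regress unit means of the first-stage residuals on the
# time-invariant regressor (plain OLS here; the paper instruments
# endogenous z and adjusts the standard errors analytically)
resid_mean = (y - beta1 * x).mean(axis=1)
Z = np.column_stack([np.ones(N), z])
gamma, *_ = np.linalg.lstsq(Z, resid_mean, rcond=None)
print(f"stage 1: beta_x = {beta1:.3f}   stage 2: coefficient on z = {gamma[1]:.3f}")
```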

11.
12.
We propose a non‐parametric test to compare two correlated diagnostic tests for a three‐category classification problem. Our development was motivated by a proteomic study where the objectives are to detect glycan biomarkers for liver cancer and to compare the discrimination ability of various markers. Three distinct disease categories need to be identified from this analysis. We therefore chose to use three‐dimensional receiver operating characteristic (ROC) surfaces and volumes under the ROC surfaces to describe the overall accuracy for different biomarkers. Each marker in this study might include a cluster of similar individual markers and thus was considered as a hierarchically structured sample. Our proposed statistical test incorporated the within‐marker correlation as well as the between‐marker correlation. We derived asymptotic distributions for three‐dimensional ROC surfaces and subsequently implemented bootstrap methods to facilitate the inferences. Simulation and real‐data analysis were included to illustrate our methods. Our distribution‐free test may be simplified for paired and independent two‐sample comparisons as well. Previously, only parametric tests were known for clustered and correlated three‐category ROC analyses.
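A small sketch of the accuracy measure used above: the volume under the three-dimensional ROC surface (VUS) is the probability that marker values from the three ordered categories are correctly ordered, estimated over all triples, with a simple bootstrap for interval estimation. The simulated marker values and the subject-level resampling are illustrative assumptions; the paper's procedure additionally respects the within-marker cluster correlation.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative marker values for three ordered disease categories
x1 = rng.normal(0.0, 1.0, size=60)    # e.g. healthy
x2 = rng.normal(0.8, 1.0, size=50)    # e.g. early disease
x3 = rng.normal(1.6, 1.0, size=40)    # e.g. advanced disease

def vus(a, b, c):
    """Volume under the ROC surface: P(A < B < C), estimated over all triples."""
    A = a[:, None, None]
    B = b[None, :, None]
    C = c[None, None, :]
    return np.mean((A < B) & (B < C))

est = vus(x1, x2, x3)

# Nonparametric bootstrap resampling subjects within each category
boot = []
for _ in range(1000):
    boot.append(vus(rng.choice(x1, len(x1)), rng.choice(x2, len(x2)),
                    rng.choice(x3, len(x3))))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"VUS = {est:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```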

13.
This note provides a warning against careless use of the generalized method of moments (GMM) with time series data. We show that if time series follow non‐causal autoregressive processes, their lags are not valid instruments, and the GMM estimator is inconsistent. Moreover, endogeneity of the instruments may not be revealed by the J‐test of overidentifying restrictions, which may itself be inconsistent and, in general, has low finite‐sample power. Our explicit results pertain to a simple linear regression, but they can easily be generalized. Our empirical results indicate that non‐causality is quite common among economic variables, making these problems highly relevant.

14.
This paper introduces the notion of common non‐causal features and proposes tools to detect them in multivariate time series models. We argue that the existence of co‐movements might not be detected using the conventional stationary vector autoregressive (VAR) model, as the common dynamics are present in the non‐causal (i.e. forward‐looking) component of the series. We show that the presence of a reduced‐rank structure makes it possible to identify purely causal and non‐causal VAR processes of order P>1 even in the Gaussian likelihood framework. Hence, the usual test statistics and canonical correlation analysis can be applied, where either lags or leads are used as instruments to determine whether the common features are present in the backward‐ or forward‐looking dynamics of the series. The proposed definitions of co‐movements are also valid for the mixed causal–non‐causal VAR, with the exception that a non‐Gaussian maximum likelihood estimator is necessary; this means, however, that one loses the benefits of the simple tools proposed. An empirical analysis of Brent and West Texas Intermediate oil prices illustrates the findings. No short‐run co‐movements are found in a conventional causal VAR, but they are detected when considering a purely non‐causal VAR.

15.
We present finite sample evidence on different IV estimators available for linear models under weak instruments; explore the application of the bootstrap as a bias reduction technique to attenuate their finite sample bias; and employ three empirical applications to illustrate and provide insights into the relative performance of the estimators in practice. Our evidence indicates that the random‐effects quasi‐maximum likelihood estimator outperforms alternative estimators in terms of median point estimates and coverage rates, followed by the bootstrap bias‐corrected version of LIML and LIML. However, our results also confirm the difficulty of obtaining reliable point estimates in models with weak identification and moderate‐size samples. Copyright © 2007 John Wiley & Sons, Ltd.
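A minimal sketch of the bootstrap-as-bias-reduction idea for an IV estimator under weak instruments; 2SLS stands in here for the LIML estimator considered above, and the simulated design, number of instruments, and first-stage strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 250, 5
beta = 0.0
pi = np.full(k, 0.08)                          # weak first stage
Z = rng.normal(size=(n, k))
e = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=n)  # endogeneity
x = Z @ pi + e[:, 1]
y = beta * x + e[:, 0]

def tsls(y, x, Z):
    """Two-stage least squares slope (single endogenous regressor, no intercept)."""
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return (x_hat @ y) / (x_hat @ x)

b_hat = tsls(y, x, Z)

# Nonparametric bootstrap bias correction: bias estimate = mean(b*) - b_hat
b_star = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)
    b_star.append(tsls(y[idx], x[idx], Z[idx]))
b_bc = 2 * b_hat - np.mean(b_star)             # bias-corrected estimate
print(f"2SLS = {b_hat:.3f}, bootstrap bias-corrected = {b_bc:.3f} (true beta = {beta})")
```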

16.
A test statistic is developed for making inference about a block‐diagonal structure of the covariance matrix when the dimensionality p exceeds n, where n = N − 1 and N denotes the sample size. The suggested procedure extends the complete independence results. Because the classical hypothesis testing methods based on the likelihood ratio degenerate when p > n, the main idea is to turn instead to a distance function between the null and alternative hypotheses. The test statistic is then constructed using a consistent estimator of this function, where consistency is considered in an asymptotic framework that allows p to grow together with n. The suggested statistic is also shown to be asymptotically normal under the null hypothesis. Some auxiliary results on the moments of products of multivariate normal random vectors and higher‐order moments of the Wishart matrices, which are important for our evaluation of the test statistic, are derived. We perform empirical power analysis for a number of alternative covariance structures.
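A generic illustration of the distance-function idea, not the paper's statistic: measure how far the sample covariance lies from the hypothesized block-diagonal structure (here a Frobenius norm of the off-block entries) and calibrate it with a parametric bootstrap under the null instead of the paper's asymptotic normal approximation. The data, block layout, and bootstrap calibration are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
n, p = 30, 60                                    # p exceeds n, as in the setting above
blocks = [range(0, 20), range(20, 40), range(40, 60)]   # hypothesized blocks

X = rng.normal(size=(n, p))                      # data generated under the null here

def off_block_distance(X, blocks):
    """Squared Frobenius norm of the sample covariance outside the blocks."""
    S = np.cov(X, rowvar=False)
    mask = np.zeros(S.shape, dtype=bool)
    for b in blocks:
        mask[np.ix_(list(b), list(b))] = True
    return np.sum(S[~mask] ** 2)

stat = off_block_distance(X, blocks)

# Parametric bootstrap under H0: simulate from the block-diagonal restricted estimate
S = np.cov(X, rowvar=False)
S0 = np.zeros_like(S)
for b in blocks:
    idx = np.ix_(list(b), list(b))
    S0[idx] = S[idx]
S0 += 1e-8 * np.eye(p)                           # numerical safeguard
null_stats = [off_block_distance(rng.multivariate_normal(np.zeros(p), S0, size=n), blocks)
              for _ in range(500)]
p_value = np.mean(np.array(null_stats) >= stat)
print(f"distance statistic = {stat:.2f}, bootstrap p-value = {p_value:.3f}")
```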

17.
This paper introduces nonparametric econometric methods that characterize general power law distributions under basic stability conditions. These methods extend the literature on power laws in the social sciences in several directions. First, we show that any stationary distribution in a random growth setting is shaped entirely by two factors: the idiosyncratic volatilities and reversion rates (a measure of cross‐sectional mean reversion) for different ranks in the distribution. This result is valid regardless of how growth rates and volatilities vary across different economic agents, and hence applies to Gibrat's law and its extensions. Second, we present techniques to estimate these two factors using panel data. Third, we describe how our results imply predictability as higher‐ranked processes must on average grow more slowly than lower‐ranked processes. We employ our empirical methods using data on commodity prices and show that our techniques accurately describe the empirical distribution of relative commodity prices. We also show that rank‐based out‐of‐sample forecasts of future commodity prices outperform random‐walk forecasts at a 1‐month horizon.

18.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small‐ and medium‐sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite‐sample distribution of the long‐run variance estimator used in the KPSS test, while the second source of the size distortions is the serial correlation not captured by the long‐run variance estimator because of a too narrow choice of truncation lag parameter. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite‐sample distribution of the KPSS test statistic, conditional on the time‐series dimension and the truncation lag parameter, is used. Hence, finite‐sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is a lower raw power against a non‐stationary alternative hypothesis.
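A short sketch in the spirit of the finite-sample critical values discussed above: compute the KPSS level-stationarity statistic with a Bartlett-kernel long-run variance and simulate its null distribution conditional on a given sample size and truncation lag. The i.i.d. null data, the chosen T and lag, and the number of replications are illustrative assumptions, not the article's tabulation.

```python
import numpy as np

rng = np.random.default_rng(11)

def kpss_stat(y, lags):
    """KPSS level-stationarity statistic with a Bartlett-kernel long-run variance."""
    e = y - y.mean()
    S = np.cumsum(e)
    T = len(y)
    lrv = e @ e / T
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)
        lrv += 2.0 * w * (e[j:] @ e[:-j]) / T
    return np.sum(S ** 2) / (T ** 2 * lrv)

# Finite-sample critical value conditional on (T, lags): simulate the statistic
# under an i.i.d. null and take the upper quantile.
T, lags, reps = 50, 4, 20_000
sims = np.array([kpss_stat(rng.normal(size=T), lags) for _ in range(reps)])
cv_finite = np.quantile(sims, 0.95)
print(f"T={T}, lags={lags}: finite-sample 5% critical value = {cv_finite:.3f} "
      f"(the asymptotic value is about 0.463)")
```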

19.
This paper considers a panel data stochastic frontier model that disentangles unobserved firm effects (firm heterogeneity) from persistent (time‐invariant/long‐term) and transient (time‐varying/short‐term) technical inefficiency. The model gives us a four‐way error component model, viz., persistent and time‐varying inefficiency, random firm effects and noise. We use Bayesian methods of inference to provide robust and efficient methods of estimating inefficiency components in this four‐way error component model. Monte Carlo results are provided to validate its performance. We also present results from an empirical application that uses a large panel of US commercial banks. Copyright © 2012 John Wiley & Sons, Ltd.

20.
Service workers are expected to maintain high‐quality service delivery despite customer mistreatment—the poor‐quality treatment of service workers by customers—which can be demeaning and threatening to self‐esteem. Although service work is increasingly delivered by middle‐aged and older workers, very little is known about how employees across the age range navigate abuse from customers on the job. Does advancing age help or hinder service performance in reaction to customer mistreatment? Drawing on strength and vulnerability integration theory, we proposed that age paradoxically both helps and hinders performance after customer mistreatment, albeit at different stages. We tested our proposed model in a two‐sample field investigation of service workers and their supervisors using a time‐lagged, dyadic design. Results showed that age heightens the experience of self‐esteem threat but, nevertheless, dampens reactions to self‐esteem threat, leading to divergent effects on performance at different stages. Implications for age and service work, as well as aging and the sense of self, are discussed.

