Similar Articles
20 similar articles found (search time: 15 ms)
2.
Structural vector-autoregressive models are potentially very useful tools for guiding both macro- and microeconomic policy. In this study, we present a recently developed method for estimating such models, which uses non-normality to recover the causal structure underlying the observations. We show how the method can be applied to both microeconomic data (to study the processes of firm growth and firm performance) and macroeconomic data (to analyse the effects of monetary policy).
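The identification idea can be illustrated with a small simulation. This is a hedged sketch, not the study's estimator: the impact matrix `B` below is hypothetical, and a fitted VAR's OLS residuals would play the role of the reduced-form errors `u`. When the independent structural shocks are non-Gaussian, independent component analysis (ICA) recovers the mixing (impact) matrix up to permutation and scaling, which is what makes the causal structure identifiable.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
T = 5000

# Independent, non-Gaussian structural shocks (Laplace)
e = rng.laplace(size=(T, 2))

# Hypothetical contemporaneous impact matrix (illustration only)
B = np.array([[1.0, 0.0],
              [0.8, 1.0]])
u = e @ B.T                       # reduced-form errors: u_t = B e_t

# ICA un-mixes the residuals; identification relies on the fact
# that at most one structural shock may be Gaussian
ica = FastICA(whiten="unit-variance", random_state=0)
s = ica.fit_transform(u)          # estimated shocks (up to permutation/scale)
B_hat = ica.mixing_               # estimated impact matrix, same ambiguity
```

The recovered components in `s` are mutually uncorrelated by construction; matching `B_hat` to `B` still requires resolving the column permutation and sign/scale conventions.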

4.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R×C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R×C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches that trade off computational intensity for statistical efficiency.
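The first-moment idea behind the frequentist approach can be sketched in its simplest (Goodman-regression) form. Everything below is simulated and hypothetical: a 2-column table stands in for the full R×C model, and the unit-level noise replaces the paper's binomial sampling structure. The key moment condition is E[T_i] = Σ_r X_ir β_r, so the cell probabilities can be recovered by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, R = 200, 2

# Row fractions per aggregation unit (e.g. group shares in each precinct)
X = rng.dirichlet(np.ones(R), size=n_units)

# Hypothetical P(column outcome | row r); the estimand
beta_true = np.array([0.7, 0.2])

# Observed column fraction per unit, first moment plus noise
T = X @ beta_true + rng.normal(0.0, 0.02, n_units)

# First-moment estimator: solve the linear least-squares problem
beta_hat, *_ = np.linalg.lstsq(X, T, rcond=None)
```

The paper's estimator is a constrained nonlinear refinement of this moment idea; the sketch only shows why first moments alone already identify β without evaluating the full likelihood.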

5.
In the areas of missing data and causal inference, there is great interest in doubly robust (DR) estimators that involve both an outcome regression (RG) model and a propensity score (PS) model. These DR estimators are consistent and asymptotically normal if either model is correctly specified. Despite their theoretical appeal, the practical utility of DR estimators has been disputed (e.g. Kang and Schafer, Statistical Science 2007; 22: 523–539). One of the major concerns is the possibility of erratic estimates resulting from near-zero denominators due to extreme values of the estimated PS. In contrast, the usual RG estimator based on the RG model alone is efficient when the RG model is correct and generally more stable than the DR estimators, although it can be biased when the RG model is incorrect. In light of the unique advantages of the RG and DR estimators, we propose a class of hybrid estimators that attempt to strike a reasonable balance between the RG and DR estimators. These hybrid estimators are motivated by heuristic arguments that coarsened PS estimates are less likely to take extreme values and less sensitive to misspecification of the PS model than the original model-based PS estimates. The proposed estimators are compared with existing estimators in simulation studies and illustrated with real data from a large observational study on obstetric labour progression and birth outcomes.
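The RG and DR (augmented inverse-propensity-weighted) estimators being balanced here can be written down in a few lines. A minimal sketch, assuming both working models are correct and, for brevity, taking the propensity as known rather than estimated; the hybrid coarsened-PS estimators of the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
ps = 1.0 / (1.0 + np.exp(-x))         # P(observed | x), treated as known here
r = rng.binomial(1, ps)               # response indicator
y = 2.0 + 3.0 * x + rng.normal(size=n)  # outcome, seen only where r == 1

# Outcome regression (RG) model fitted on the observed subsample
slope, intercept = np.polyfit(x[r == 1], y[r == 1], 1)
m = slope * x + intercept             # \hat m(x) predicted for everyone

# RG estimator: average the regression predictions
mu_rg = np.mean(m)

# DR / AIPW estimator: regression prediction plus an inverse-propensity
# weighted correction on the observed cases; small ps values here are
# exactly the near-zero denominators the paper worries about
mu_dr = np.mean(m + r * (y - m) / ps)
```

With both models correct the two estimates agree closely; the DR form retains consistency if either `m` or `ps` (but not both) were misspecified.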

7.
Copulas are distributions with uniform marginals. Non-parametric copula estimates may violate the uniformity condition in finite samples. We look at whether it is possible to obtain valid piecewise linear copula densities by triangulation. The copula property imposes strict constraints on design points, making an equi-spaced grid a natural starting point. However, the mixed-integer nature of the problem makes a pure triangulation approach impractical on fine grids. As an alternative, we study ways of approximating copula densities with triangular functions that guarantee the estimator is a valid copula density. The family of resulting estimators can be viewed as a non-parametric MLE of B-spline coefficients on possibly non-equally spaced grids under simple linear constraints. As such, it can be easily solved using standard convex optimization tools and allows for a degree of localization. A simulation study demonstrates attractive small-sample performance of the estimator and compares it with some of the leading alternatives. We demonstrate the empirical relevance of our approach using three applications. In the first application, we investigate how the body mass index of children depends on that of parents. In the second application, we construct a bivariate copula underlying the Gibson paradox from macroeconomics. In the third application, we show the benefit of using our approach in testing the null of independence against the alternative of an arbitrary dependence pattern.
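The uniform-marginal constraint, and why an equi-spaced grid is a natural starting point, can be seen in a short sketch. This is not the paper's B-spline MLE — just a piecewise-constant density built from rank pseudo-observations, which on an equi-spaced grid has exactly uniform bin marginals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 1000

# Dependent pair drawn through a Gaussian copula (rho = 0.6)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)

# Pseudo-observations (rank transform): the usual input to a
# non-parametric copula density estimator
v = stats.rankdata(z, axis=0) / (n + 1)

# Piecewise-constant density on a k x k equi-spaced grid
k = 5
h, _, _ = np.histogram2d(v[:, 0], v[:, 1], bins=k, range=[[0, 1], [0, 1]])
c = h / h.sum() * k * k          # density values; integrates to 1

# Validity check: averaging the density over either coordinate
# should give the uniform marginal density, identically 1
row_marginals = c.mean(axis=1)
col_marginals = c.mean(axis=0)
```

On a non-equally spaced grid, or with piecewise *linear* (triangular) elements, uniformity no longer holds automatically; that is the constrained-optimization problem the paper solves.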


9.
We revisit the methodology and historical development of subsampling, and then explore in detail its use in hypothesis testing, an area which has received surprisingly modest attention. In particular, the general set-up of a possibly high-dimensional parameter with data from K populations is explored. The role of centring the subsampling distribution is highlighted, and it is shown that hypothesis testing with a data-centred subsampling distribution is more powerful. In addition we demonstrate subsampling's ability to handle a non-standard Behrens–Fisher problem, i.e., a comparison of the means of two or more populations which may possess not only different and possibly infinite variances, but may also possess different distributions. However, our formulation is general, permitting even functional data and/or statistics. Finally, we provide theory for K-sample U-statistics that helps establish the asymptotic validity of subsampling confidence intervals and tests in this very general setting.
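The data-centred subsampling test can be sketched for the simplest one-sample case. A hedged illustration only (single population, mean parameter, heavy-tailed data): the statistic is recomputed on many size-b subsamples and centred at the full-sample estimate, and the resulting distribution serves as the reference for the p-value.

```python
import numpy as np

rng = np.random.default_rng(4)
n, b = 500, 50                       # sample size and subsample size, b << n
x = rng.standard_t(df=3, size=n)     # heavy tails; H0: mean = 0 holds here

theta_hat = x.mean()
stat = np.sqrt(n) * theta_hat        # root for testing H0: mean = 0

# Centred subsampling distribution: each subsample statistic is
# centred at the full-sample estimate, not at the null value
reps = 2000
sub = np.empty(reps)
for i in range(reps):
    idx = rng.choice(n, size=b, replace=False)
    sub[i] = np.sqrt(b) * (x[idx].mean() - theta_hat)

p_value = np.mean(np.abs(sub) >= np.abs(stat))
```

Centring at `theta_hat` is the point the paper highlights: it keeps the reference distribution valid under the alternative as well, which is what yields the power gain over centring at the hypothesized value.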

12.
We develop a panel count model with a latent spatio-temporal heterogeneous state process for monthly severe crimes at the census-tract level in Pittsburgh, Pennsylvania. Our dataset combines Uniform Crime Reporting data with socio-economic data. The likelihood is estimated by efficient importance sampling techniques for high-dimensional spatial models. Estimation results confirm the broken-windows hypothesis whereby less severe crimes are leading indicators for severe crimes. In addition to ML parameter estimates, we compute several other statistics of interest for law enforcement such as spatio-temporal elasticities of severe crimes with respect to less severe crimes, out-of-sample forecasts, predictive distributions and validation test statistics.

13.
This paper applies a large number of models to three previously analyzed data sets, and compares the point estimates and confidence intervals for technical efficiency levels. Classical procedures include multiple comparisons with the best, based on the fixed effects estimates; a univariate version, marginal comparisons with the best; bootstrapping of the fixed effects estimates; and maximum likelihood given a distributional assumption. Bayesian procedures include a Bayesian version of the fixed effects model, and various Bayesian models with informative priors for efficiencies. We find that fixed effects models generally perform poorly; there is a large payoff to distributional assumptions for efficiencies. We do not find much difference between Bayesian and classical procedures, in the sense that the classical MLE based on a distributional assumption for efficiencies gives results that are rather similar to a Bayesian analysis with the corresponding prior.
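The fixed-effects efficiency estimates that anchor these comparisons can be sketched briefly. A hedged, simulated illustration (Schmidt–Sickles-style convention: efficiency measured relative to the best firm's estimated effect), not the paper's actual data or model set:

```python
import numpy as np

rng = np.random.default_rng(5)
n_firms, T = 10, 20

alpha = -rng.exponential(0.3, n_firms)          # firm effects; 0 = fully efficient
x = rng.normal(size=(n_firms, T))
y = alpha[:, None] + 0.5 * x + rng.normal(0.0, 0.1, (n_firms, T))

# Within (fixed effects) estimation: demean within firm, pool, OLS slope
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta = (xd * yd).sum() / (xd ** 2).sum()

# Recover firm effects from the residual firm means
a_hat = (y - beta * x).mean(axis=1)

# Technical efficiency relative to the best firm in the sample
eff = np.exp(a_hat - a_hat.max())
```

Because `eff` depends on the noisy maximum `a_hat.max()`, these point estimates are exactly the quantities for which the paper finds wide intervals and a large payoff to distributional assumptions.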

17.
Estimation with longitudinal Y having nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y,X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity based on estimating equations constructed using an instrument Z, which is part of X related to Y but unrelated to the dropout propensity conditioned on Y and other covariates. Population means and other parameters in the nonparametric distribution of Y can be estimated based on inverse propensity weighting with estimated propensity. To improve efficiency, we derive a model-assisted regression estimator making use of information provided by the covariates and previously observed Y-values in the longitudinal setting. The model-assisted regression estimator is protected from model misspecification and is asymptotically normal and more efficient when the working models are correct and some other conditions are satisfied. The finite-sample performance of the estimators is studied through simulation, and an application to the HIV-CD4 data set is also presented as illustration.
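The inverse-propensity-weighting step can be illustrated with a toy version of the nonignorable setting. A hedged sketch only: the dropout probability depends on Y itself, and for brevity the true propensity is plugged in where the paper would first estimate its parameters by GMM using the instrument Z.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)
y = 1.0 + x + rng.normal(size=n)     # E[Y] = 1 is the target

# Nonignorable dropout: response probability decreases in Y itself,
# so large Y values are systematically under-observed
pi = 1.0 / (1.0 + np.exp(-(1.5 - 0.5 * y)))
r = rng.binomial(1, pi)

# Naive complete-case mean: biased downward under this mechanism
mu_naive = y[r == 1].mean()

# IPW mean: reweight observed cases by 1/pi; here pi is the true
# propensity, standing in for the GMM-estimated one
mu_ipw = np.mean(r * y / pi)
```

The model-assisted regression estimator of the paper then augments this IPW estimate with covariate and past-Y information to reduce its variance.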

20.
Samples with overlapping observations are used for the study of uncovered interest rate parity, the predictability of long-run stock returns and the credibility of exchange rate target zones. This paper quantifies the biases in parameter estimation and size distortions of hypothesis tests of overlapping linear and polynomial autoregressions, which have been used in target-zone applications. We show that both estimation bias and size distortions of hypothesis tests are generally larger if the amount of overlap is larger, the sample size is smaller, and the autoregressive root of the data-generating process is closer to unity. In particular, the estimates are biased in a way that makes it more likely that the predictions of the Bertola–Svensson model will be supported. Size distortions of various tests also turn out to be substantial even when using a heteroskedasticity and autocorrelation-consistent covariance matrix.
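The near-unity part of this pattern is easy to reproduce. A hedged Monte Carlo sketch for a plain AR(1) — the overlapping-observation and polynomial structure of the paper is omitted — showing that the small-sample downward bias of the OLS autoregressive estimate grows as the root approaches unity:

```python
import numpy as np

rng = np.random.default_rng(7)

def ar1_slope(z):
    """OLS slope of z_t on z_{t-1}, both demeaned."""
    z0 = z[:-1] - z[:-1].mean()
    z1 = z[1:] - z[1:].mean()
    return (z0 * z1).sum() / (z0 ** 2).sum()

def mean_bias(rho, T=60, reps=2000):
    """Monte Carlo mean of (rho_hat - rho) for an AR(1) of length T."""
    est = np.empty(reps)
    for i in range(reps):
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = rho * x[t - 1] + rng.normal()
        est[i] = ar1_slope(x)
    return est.mean() - rho

bias_mid = mean_bias(0.5)
bias_near_unity = mean_bias(0.95)    # larger downward bias near a unit root
```

Both biases are negative and the near-unit-root case is worse, matching the direction of the distortions the paper documents; adding overlap compounds the problem further.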
