Similar Articles
1.
In this paper we show how it is possible to develop a Bayesian framework for analyzing structural models for treatment response data without the joint distribution of the potential outcomes. That this is possible has not been noticed in the literature. We also discuss the computation of the model marginal likelihood and present recipes for finding relevant treatment effects, averaged over both parameters and covariates. As compared to an approach in which the counterfactuals are part of the prior-posterior analysis (as in the work to date), the approach we suggest is simpler in terms of the required prior inputs, computational burden and extensibility to more complex settings.

2.
In the classical theory of randomized trials (RTs), the treatment effect is defined as a mean difference of potential outcomes. To achieve nominal coverage probability for confidence intervals (CIs) on treatment effects in RTs, certain assumptions are necessary. Specifically, one must either make assumptions about the joint distribution of potential outcomes or enroll subjects in the trial by random sampling of the target population on which the treatment effect is defined. In practice, no such sampling usually takes place and assumptions about the joint distribution of potential outcomes cannot be verified based on observed data. Furthermore, the most common of these assumptions, such as treatment-unit additivity (TUA) or independence, are biologically implausible in most RTs involving human subjects. Hence, it is not usually possible to construct CIs on treatment effects with nominal coverage probability. However, for any joint distribution of potential outcomes, the standard estimator of the variance of the difference of two independent sample means produces CIs with asymptotic coverage at least at the nominal level. This interpretation of CIs as conservative bounds may not always hold in conventional regression models applied to RT data.
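As a rough illustration of the coverage claim (not code from the paper), the sketch below simulates a completely randomized trial from an arbitrary, made-up joint distribution of potential outcomes and checks that intervals built from the standard two-sample variance estimator cover the average treatment effect at least 95% of the time.

```python
# Illustrative simulation, not from the paper: under complete randomization the standard
# two-sample variance estimator yields confidence intervals whose coverage for the
# average treatment effect is at least nominal, whatever the joint distribution of
# potential outcomes. All numbers below are made up.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                   # trial size
y0 = rng.normal(0.0, 1.0, N)              # potential outcomes under control
y1 = y0 + rng.normal(1.0, 0.5, N)          # potential outcomes under treatment
true_ate = np.mean(y1 - y0)                # finite-sample average treatment effect

cover, reps = 0, 5000
for _ in range(reps):
    treat = rng.permutation(N) < N // 2            # complete randomization, equal arms
    d1, d0 = y1[treat], y0[~treat]
    est = d1.mean() - d0.mean()
    # standard (Neyman) variance estimator for a difference of two sample means
    se = np.sqrt(d1.var(ddof=1) / len(d1) + d0.var(ddof=1) / len(d0))
    cover += abs(est - true_ate) <= 1.96 * se

print(f"true ATE = {true_ate:.3f}, empirical coverage = {cover / reps:.3f} (at least 0.95 expected)")
```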

3.
We consider panel parametric, semiparametric and nonparametric methods of constructing counterfactuals. We show through extensive simulations that no method is able to dominate other methods in all circumstances, since the true data-generating process is typically unknown. We therefore also suggest a model-averaging method as a robust way to generate counterfactuals. As an illustration of the sensitivity of counterfactual construction, we reexamine, by different methods, the impact of California's Tobacco Control Program on per capita cigarette consumption and of election day registration (EDR) laws on voter turnout.
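A minimal sketch of the counterfactual-construction idea with made-up panel data and a deliberately simple set of candidate models (it is not the paper's parametric, semiparametric or nonparametric estimators): regress the treated unit on control units over the pre-treatment period, predict the post-treatment path, and average the predictions across candidate control sets.

```python
# Build a counterfactual for one treated unit from control units, then average the
# post-treatment predictions of several candidate regressions (equal weights).
# The data and the candidate sets of controls are invented for illustration.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
T0, T1, n_ctrl = 40, 10, 6                 # pre/post periods, number of control units
factor = rng.normal(size=T0 + T1)          # common factor driving all units
ctrl = 0.8 * factor[:, None] + rng.normal(scale=0.5, size=(T0 + T1, n_ctrl))
treated = 1.2 * factor + rng.normal(scale=0.5, size=T0 + T1)
treated[T0:] += 2.0                        # treatment effect in the post period

def predict(cols):
    X = np.column_stack([np.ones(T0 + T1), ctrl[:, cols]])
    beta, *_ = np.linalg.lstsq(X[:T0], treated[:T0], rcond=None)   # fit on the pre-period
    return X[T0:] @ beta                   # counterfactual for the post period

# candidate models: all subsets of three controls; equal-weight model averaging
models = [predict(c) for c in combinations(range(n_ctrl), 3)]
counterfactual = np.mean(models, axis=0)
print("estimated post-treatment effects:", np.round(treated[T0:] - counterfactual, 2))
```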

4.
In this paper we develop likelihood-based methods for statistical inference in a joint system of equations for the choice of length of schooling and earnings. The model for schooling choice is assumed to be an ordered probit model, whereas the earnings equation contains variables that are flexible transformations of schooling and experience, with corresponding coefficients that are allowed to be heterogeneous across individuals. Under the assumption that the distribution of the random terms of the model can be expressed as a finite mixture of multinormal distributions, we show that the joint probability distribution for schooling and earnings can be expressed in closed form. In an application of our method to Norwegian data, we find that the mixed Gaussian model offers a substantial improvement in fit to the (heavy-tailed) empirical distribution of log-earnings compared to a multinormal benchmark model.
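The closed-form result can be illustrated for a single Gaussian component (the paper works with a finite mixture of such components and heterogeneous coefficients); the function below, with made-up parameter values, evaluates the joint density of observed log-earnings and an ordered schooling category when the two error terms are bivariate normal.

```python
# Simplified sketch for one Gaussian component: ordered-probit schooling S with latent
# error u (sd normalised to 1), log-earnings y with error e, (u, e) bivariate normal.
# All variable names and values are made up for illustration.
import numpy as np
from scipy.stats import norm

def joint_density(y, j, x_beta, z_gamma, cut, sigma_e, rho):
    """p(y, S = j): marginal normal density of y times the conditional
    ordered-probit probability of category j (u given e is normal)."""
    cuts = np.concatenate(([-np.inf], cut, [np.inf]))    # cut points alpha_0 .. alpha_J
    f_y = norm.pdf(y, loc=x_beta, scale=sigma_e)
    mu_u = rho * (y - x_beta) / sigma_e                  # E[u | e]
    s_u = np.sqrt(1.0 - rho ** 2)
    p_j = (norm.cdf((cuts[j + 1] - z_gamma - mu_u) / s_u)
           - norm.cdf((cuts[j] - z_gamma - mu_u) / s_u))
    return f_y * p_j

# example evaluation with invented parameter values
print(joint_density(y=10.2, j=1, x_beta=10.0, z_gamma=0.3,
                    cut=np.array([-0.5, 0.7]), sigma_e=0.6, rho=0.4))
```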

5.
This paper considers a panel data model with time-varying individual effects. The data are assumed to contain a large number of cross-sectional units repeatedly observed over a fixed number of time periods. The model has a feature of the fixed-effects model in that the effects are assumed to be correlated with the regressors. The unobservable individual effects are assumed to have a factor structure. For consistent estimation of the model, it is important to estimate the true number of individual effects. We propose a generalized method of moments procedure by which both the number of individual effects and the regression coefficients can be consistently estimated. Some important identification issues are also discussed. Our simulation results indicate that the proposed methods produce reliable estimates.
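The sketch below is not the authors' GMM procedure; it only illustrates, on simulated data, why the number of individual effects is detectable with fixed T and large N: the T x T covariance of the composite errors has R eigenvalues separated from the noise floor, so a simple eigenvalue-ratio rule recovers R.

```python
# Eigenvalue diagnostic for the number of individual effects (a stand-in for the GMM
# selection in the paper). With u_it = lambda_i'f_t + eps_it, the covariance of the
# length-T error vectors is F Sigma_lambda F' + sigma^2 I. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
N, T, R = 2000, 8, 2                          # many units, few periods, two effects
F = 2.0 * rng.normal(size=(T, R))             # time-varying factors f_t
lam = rng.normal(size=(N, R))                 # individual effects (factor loadings)
u = lam @ F.T + rng.normal(size=(N, T))       # composite errors lambda_i'f_t + eps_it

S = np.cov(u, rowvar=False)                   # T x T covariance of the composite errors
eig = np.sort(np.linalg.eigvalsh(S))[::-1]
ratios = eig[:-1] / eig[1:]                   # largest drop marks the number of effects
print("estimated number of individual effects:", int(np.argmax(ratios)) + 1)
```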

6.
Children in households reporting the receipt of free or reduced-price school meals through the National School Lunch Program (NSLP) are more likely to have negative health outcomes than observationally similar nonparticipants. Assessing causal effects of the program is made difficult, however, by missing counterfactuals and systematic underreporting of program participation. Combining survey data with auxiliary administrative information on the size of the NSLP caseload, we extend nonparametric partial identification methods that account for endogenous selection and nonrandom classification error in a single framework. Similar to a regression discontinuity design, we introduce a new way to conceptualize the monotone instrumental variable (MIV) assumption using eligibility criteria as monotone instruments. Under relatively weak assumptions, we find evidence that the receipt of free and reduced-price lunches improves the health outcomes of children.
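A stylized sketch of the bounding logic on simulated data (it also ignores the misreporting part of the paper): worst-case bounds on the mean potential outcome under participation are computed within each level of an eligibility-based instrument and then tightened with the MIV assumption.

```python
# Worst-case and monotone-instrumental-variable (MIV) bounds on E[Y(1)] for a binary
# outcome with endogenous participation. Outcome, participation and eligibility score
# are simulated; classification error is ignored in this sketch.
import numpy as np

rng = np.random.default_rng(3)
n = 20000
v = rng.integers(0, 4, n)                     # eligibility-based monotone instrument, 4 levels
d = rng.random(n) < 0.3 + 0.1 * v             # observed participation
y = (rng.random(n) < 0.4 + 0.1 * v + 0.1 * d).astype(float)   # binary health outcome

levels = np.arange(4)
p_v = np.array([(v == k).mean() for k in levels])
lb, ub = np.empty(4), np.empty(4)
for k in levels:                              # worst-case (no-assumption) bounds per level
    sel = v == k
    p_d = d[sel].mean()
    m1 = y[sel & d].mean()                    # E[Y | D=1, V=k]
    lb[k] = m1 * p_d                          # missing Y(1) set to 0
    ub[k] = m1 * p_d + (1 - p_d)              # missing Y(1) set to 1

# MIV step: E[Y(1) | V=v] assumed weakly increasing in v
lb_miv = np.array([lb[:k + 1].max() for k in levels])
ub_miv = np.array([ub[k:].min() for k in levels])
print("worst-case bounds on E[Y(1)]:", (p_v @ lb).round(3), (p_v @ ub).round(3))
print("MIV bounds on E[Y(1)]      :", (p_v @ lb_miv).round(3), (p_v @ ub_miv).round(3))
```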

7.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood or several marginal likelihoods for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset variable models need to be computed. The proposed estimates use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require knowledge of the specific structure or form of the MCMC sampling algorithm used to generate the sample. The theoretical properties of the proposed method are examined in detail. The applicability and usefulness of the proposed method are demonstrated via ordinal data probit regression models. A real dataset involving ordinal outcomes is used to further illustrate the proposed methodology.
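For orientation only (this is not the article's new estimators), the sketch below uses the basic marginal likelihood identity log m(y) = log p(y|t) + log p(t) - log p(t|y), evaluated from a single set of posterior draws, in a conjugate normal model where the exact answer is available for comparison.

```python
# Generic illustration of the quantity at stake: the log marginal likelihood can be
# recovered from one MCMC output if the posterior ordinate can be estimated.
# Conjugate normal model with known variance; all numbers are made up.
import numpy as np
from scipy.stats import norm, multivariate_normal, gaussian_kde

rng = np.random.default_rng(4)
n, mu0, tau0 = 50, 0.0, 2.0
y = rng.normal(1.0, 1.0, size=n)                        # data, error sd = 1 known

# exact posterior N(mu_n, tau_n^2) and exact log marginal likelihood
tau_n2 = 1.0 / (1.0 / tau0**2 + n)
mu_n = tau_n2 * (mu0 / tau0**2 + y.sum())
exact = multivariate_normal.logpdf(y, mean=np.full(n, mu0),
                                   cov=np.eye(n) + tau0**2 * np.ones((n, n)))

# draws standing in for a single MCMC output from the posterior
draws = rng.normal(mu_n, np.sqrt(tau_n2), size=5000)
t_star = draws.mean()
log_post_ord = np.log(gaussian_kde(draws)(t_star)[0])    # estimated posterior ordinate
estimate = (norm.logpdf(y, loc=t_star, scale=1.0).sum()
            + norm.logpdf(t_star, mu0, tau0) - log_post_ord)

print(f"exact log marginal likelihood   : {exact:.3f}")
print(f"single-chain estimate (identity): {estimate:.3f}")
```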

8.
This paper is an exposition of the model and techniques of factor analysis, a method of studying the covariance matrix of several properties on the basis of a sample covariance matrix of independent observations on n individuals. The indeterminacy of the basis of the so-called factor space and several possibilities of interpretation are discussed. The scale-invariant maximum likelihood estimation of the parameters of the assumed normal distribution, which also provides a test on the dimension of the factor space, is compared with the customary but unjustified attack on the estimation problem by means of component analysis or modifications of it. The prohibitively slow convergence of the iterative procedures recommended until now can be removed by steepest ascent methods together with Aitken's acceleration method. An estimate of the original observations under the assumed model, to be compared with the data, is given.

9.
We develop a Bayesian semi-parametric approach to the instrumental variable problem. We assume linear structural and reduced form equations, but model the error distributions non-parametrically. A Dirichlet process prior is used for the joint distribution of structural and instrumental variable equations errors. Our implementation of the Dirichlet process prior uses a normal distribution as a base model. It can therefore be interpreted as modeling the unknown joint distribution with a mixture of normal distributions with a variable number of mixture components. We demonstrate that this procedure is both feasible and sensible using actual and simulated data. Sampling experiments compare inferences from the non-parametric Bayesian procedure with those based on procedures from the recent literature on weak instrument asymptotics. When errors are non-normal, our procedure is more efficient than standard Bayesian or classical methods.
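A small sketch of the error model's interpretation (not the authors' sampler): one truncated stick-breaking draw from a Dirichlet process mixture with a normal base measure produces bivariate errors from a normal mixture with a random number of occupied components; all settings below are made up.

```python
# Truncated stick-breaking draw from a Dirichlet process mixture of normals, used here
# only to illustrate "a mixture of normals with a variable number of components".
import numpy as np

rng = np.random.default_rng(5)
alpha, K, n = 1.0, 30, 1000                     # DP concentration, truncation, sample size

v = rng.beta(1.0, alpha, size=K)                # stick-breaking fractions
w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))    # component weights
mu = rng.normal(0.0, 1.5, size=(K, 2))          # component means from the base measure
scale = rng.gamma(2.0, 0.5, size=K)             # component spreads (simplified base measure)

z = rng.choice(K, size=n, p=w / w.sum())        # component labels
errors = mu[z] + scale[z, None] * rng.normal(size=(n, 2))    # (structural, IV-equation) errors

print("occupied mixture components:", len(np.unique(z)))
print("error correlation:", np.corrcoef(errors.T)[0, 1].round(3))
```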

10.
From the literature on nonparametric rank tests, the limiting distributions of Wilcoxon's test for symmetry and of Friedman's test for a treatment effect are known for observations that are classified in blocks. It is assumed that there is no interaction between blocks and treatments. In the case of fixed blocks this assumption is quite reasonable; in the case of random blocks it is not, as the presence of a random interaction does not make testing for a treatment effect superfluous. For classified, categorical data in random blocks, the limiting distribution of Wilcoxon's rank test is derived in this paper for a model which includes a random interaction between blocks and treatments.
An illustration is given by some data from a judgement comparison experiment for the image quality of Video Long Play discs.
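For orientation, the classical Friedman test for a treatment effect with blocked data is shown below on invented rating data using SciPy; the paper's contribution is the limiting distribution under a random block-by-treatment interaction, which this standard test does not incorporate.

```python
# Classical Friedman rank test on blocked data (no interaction assumed), via SciPy.
# Scores are invented ratings: rows = blocks (judges), columns = treatments (discs).
import numpy as np
from scipy.stats import friedmanchisquare

scores = np.array([[6, 8, 7],
                   [5, 9, 6],
                   [7, 8, 8],
                   [4, 7, 5],
                   [6, 9, 7]])

stat, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p-value = {p:.4f}")
```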

11.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
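A minimal sketch of the FMMN idea on simulated data, using scikit-learn's GaussianMixture as a stand-in for the paper's Bayesian MCMC estimation: fit a mixture of bivariate normals to the joint distribution of (x, y) and read the implied regression of y on x off the fitted components.

```python
# Fit a finite mixture of bivariate normals to the joint (x, y), then compute the
# conditional mean E[y | x] analytically from the fitted components.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, 2000)
y = np.sin(2 * x) + 0.3 * rng.normal(size=x.size)        # nonlinear, non-Gaussian joint
gm = GaussianMixture(n_components=8, random_state=0).fit(np.column_stack([x, y]))

def conditional_mean(x0):
    """E[y | x = x0] implied by the fitted mixture (weights re-normalised at x0)."""
    w, m, S = gm.weights_, gm.means_, gm.covariances_
    dens_x = w * norm.pdf(x0, loc=m[:, 0], scale=np.sqrt(S[:, 0, 0]))
    cond_mean = m[:, 1] + S[:, 0, 1] / S[:, 0, 0] * (x0 - m[:, 0])
    return np.sum(dens_x * cond_mean) / np.sum(dens_x)

for x0 in (-1.5, 0.0, 1.5):
    print(f"x = {x0:+.1f}: mixture E[y|x] = {conditional_mean(x0):+.3f}, "
          f"data-generating value = {np.sin(2 * x0):+.3f}")
```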

12.
Multivariate Clustered Data Analysis in Developmental Toxicity Studies
In this paper we review statistical methods for analyzing developmental toxicity data. Such data raise a number of challenges. Models that try to accommodate the complex data-generating mechanism of a developmental toxicity study should take into account the litter effect and describe the number of viable fetuses, malformation indicators, weight and clustering as a function of exposure. Further, the size of the litter may be related to outcomes among live fetuses. Scientific interest may be in inference about the dose effect, in the implications of model misspecification, in the assessment of model fit, and in the calculation of derived quantities such as safe limits. We describe the relative merits of conditional, marginal and random-effects models for multivariate clustered binary data and present joint models for both continuous and discrete data.
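A small sketch of the litter-effect point with invented litter counts: a beta-binomial model (an exchangeable clustered-binary model, far simpler than the joint models reviewed in the paper) accommodates the within-litter correlation that a plain binomial ignores.

```python
# Beta-binomial versus plain binomial fit to clustered malformation counts.
# The (litter size, number malformed) pairs for one dose group are invented.
import numpy as np
from scipy.stats import betabinom, binom
from scipy.optimize import minimize

n = np.array([12, 10, 11, 13, 9, 12, 10, 11])   # litter sizes
k = np.array([0, 0, 7, 1, 0, 8, 1, 0])          # malformed pups per litter

def negloglik(params):
    a, b = np.exp(params)                        # keep beta parameters positive
    return -betabinom.logpmf(k, n, a, b).sum()

fit = minimize(negloglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a, b = np.exp(fit.x)
p = k.sum() / n.sum()
print(f"binomial logLik      = {binom.logpmf(k, n, p).sum():.2f}")
print(f"beta-binomial logLik = {-fit.fun:.2f}  (intra-litter correlation = {1/(a+b+1):.2f})")
```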

13.
Previous studies have discouraged the use of the Cox proportional hazards (PH) model for traditional mediation analysis as it might provide biased results. Accelerated failure time (AFT) models have been proposed as an alternative to Cox PH models. In addition, the use of the potential outcomes framework has been proposed for mediation models with time-to-event outcomes. The aim of this paper is to investigate the performance of traditional mediation analysis and potential outcomes mediation analysis based on both the Cox PH and the AFT model. This is done by means of a Monte Carlo simulation study and an illustration of the methods using an empirical data set. Both the product-of-coefficients method of traditional mediation analysis and the potential outcomes framework yield unbiased estimates with respect to their own underlying indirect effect value for simple mediation models with a time-to-event outcome, whether estimated with the Cox PH or the AFT model.
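A bare-bones sketch of the product-of-coefficients estimate with an AFT-type outcome model, on simulated data with no censoring (so a log-normal AFT regression reduces to least squares on log T); it is not the paper's simulation design.

```python
# Traditional (product-of-coefficients) mediation with an AFT-style outcome:
# a = effect of exposure X on mediator M, b = effect of M on log survival time given X,
# indirect effect = a * b. Everything is simulated; censoring is omitted.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.binomial(1, 0.5, n).astype(float)          # exposure
m = 0.5 * x + rng.normal(size=n)                   # mediator
logt = 1.0 + 0.3 * x + 0.6 * m + rng.normal(scale=0.8, size=n)   # log survival time

def ols(X, y):
    return np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)[0]

a = ols(x, m)[1]                                    # X -> M path
coefs = ols(np.column_stack([x, m]), logt)          # [intercept, direct X, b for M]
b = coefs[2]
print(f"a = {a:.3f}, b = {b:.3f}, indirect effect a*b = {a*b:.3f}  (true 0.5*0.6 = 0.30)")
print(f"direct effect of X on log T = {coefs[1]:.3f}  (true 0.30)")
```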

14.
Noting that existing approaches to estimating high-frequency trading risk can deviate from the actual risk, this paper proposes a CVaR model based on the dependence structure between trend duration and price change. The method first defines the trend duration and the magnitude of the price change within a trend, and derives the marginal distributions of both. Copula theory is then used to construct the joint and conditional distributions of trend duration and price change, from which CVaR is computed. Finally, the model is tested empirically on high-frequency trading data for CSI 300 stock index futures. The results show that downtrends last longer than uptrends and the corresponding declines are larger than the rises, so the upside and downside risks of the index futures are asymmetric.
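A schematic sketch of the construction just described, with a Gaussian copula standing in for whichever copula family is actually selected and simulated data in place of CSI 300 ticks: link the marginal of downtrend duration to the marginal of the price decline, then compute the CVaR of the decline conditional on a long duration.

```python
# Copula-based conditional VaR/CVaR: empirical marginals + Gaussian copula fitted to
# the normal scores of the ranks; all data are simulated for illustration.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(8)
n = 5000
dur = rng.gamma(2.0, 3.0, n)                            # downtrend durations
drop = 0.1 * dur + rng.exponential(0.5, n)              # price decline during the trend

# Gaussian copula: correlation of the normal scores of the two ranks
u1 = rankdata(dur) / (n + 1)
u2 = rankdata(drop) / (n + 1)
rho = np.corrcoef(norm.ppf(u1), norm.ppf(u2))[0, 1]

# conditional distribution of the decline given a long duration (90th percentile)
z1 = norm.ppf(0.90)
z2 = rng.normal(rho * z1, np.sqrt(1 - rho**2), 100000)  # conditional normal score
sim_drop = np.quantile(drop, norm.cdf(z2))              # back through the empirical marginal

var95 = np.quantile(sim_drop, 0.95)
cvar95 = sim_drop[sim_drop >= var95].mean()
print(f"copula correlation = {rho:.2f}, conditional 95% VaR = {var95:.2f}, CVaR = {cvar95:.2f}")
```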

15.
In the empirical analysis of unemployment durations or job durations, it is generally assumed that the stochastic processes underlying labour market behaviour and the behaviour concerning participation in a panel survey are independent. As a consequence, spells that are incomplete due to attrition can be treated as spells that are subjected to independent right censoring. However, if the assumption of independence is violated, i.e. if for example the probability of dropping out of the panel is related to the rate at which a job is found, then attrition may have to be modelled and estimated jointly with the unemployment duration distribution to avoid biased estimates of the rate at which individuals become employed. A way to model the joint dependence is by means of stochastically related unobserved determinants. We discuss some properties of these kinds of models and state conditions needed to estimate such models in the case of stock sampled duration data.
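A toy simulation of the warning above (all rates invented): when a shared unobserved factor raises both the job-finding rate and the panel drop-out rate, a Kaplan-Meier estimate that treats attrition as independent right censoring understates how quickly people leave unemployment.

```python
# Dependent attrition through shared frailty: compare the true probability of exiting
# unemployment by a given time with a Kaplan-Meier estimate that assumes independence.
import numpy as np

rng = np.random.default_rng(9)
n = 200000
frailty = rng.gamma(shape=1.0, scale=1.0, size=n)           # unobserved determinant
t_exit = rng.exponential(1.0 / (0.10 * frailty))             # true unemployment durations
t_drop = rng.exponential(1.0 / (0.15 * frailty))             # attrition times (dependent!)
obs = np.minimum(t_exit, t_drop)
event = (t_exit <= t_drop).astype(float)

def km_survival(times, events, t_eval):
    """Kaplan-Meier survival at t_eval, assuming independent censoring."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    at_risk = np.arange(len(t), 0, -1)
    surv = np.cumprod(1.0 - e / at_risk)
    idx = np.searchsorted(t, t_eval, side="right") - 1
    return 1.0 if idx < 0 else surv[idx]

t0 = 6.0
print(f"true P(exit by {t0})         = {(t_exit <= t0).mean():.3f}")
print(f"Kaplan-Meier P(exit by {t0}) = {1.0 - km_survival(obs, event, t0):.3f}  (too low)")
```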

16.
In this paper we examine the productive performance of a group of three East European carriers and compare it with that of thirteen of their West European competitors during the period 1977–1990. We first model the multiple output/multiple input technology with a stochastic distance frontier using recently developed semiparametric efficient methods. The endogeneity of multiple outputs is addressed in part by introducing multivariate kernel estimators for the joint distribution of the multiple outputs and potentially correlated firm random effects. We augment estimates from our semiparametric stochastic distance function with nonparametric distance function methods, using linear programming techniques, as well as with extended decomposition methods based on the Malmquist index number. Both semi- and nonparametric methods indicate significant slack in resource utilization in the East European carriers relative to their Western counterparts, and limited convergence in efficiency or technical change between them. The implications are rather stark for the long-run viability of the East European carriers in our sample.
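A compact sketch of the linear-programming distance function mentioned above: an output-oriented DEA efficiency score under constant returns to scale, computed for each carrier against the whole sample; the input/output figures are invented and the paper's semiparametric stochastic frontier is a separate exercise.

```python
# Output-oriented DEA (constant returns to scale) via linear programming:
# maximise phi such that phi*y_0 lies inside the output set spanned by the sample.
import numpy as np
from scipy.optimize import linprog

# rows: carriers; inputs (e.g. labour, fuel) and outputs (e.g. passenger-km, freight-km)
X = np.array([[5.0, 3.0], [6.0, 2.5], [4.0, 4.0], [7.0, 5.0], [5.5, 3.5]])   # inputs
Y = np.array([[10.0, 2.0], [9.0, 3.0], [8.0, 2.5], [14.0, 4.5], [7.0, 1.5]]) # outputs

def output_efficiency(j0):
    n_units, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.zeros(1 + n_units); c[0] = -1.0                  # maximise phi (minimise -phi)
    # phi*y_r0 - sum_j lam_j*y_rj <= 0   and   sum_j lam_j*x_ij <= x_i0
    A = np.zeros((s + m, 1 + n_units)); b = np.zeros(s + m)
    A[:s, 0] = Y[j0]; A[:s, 1:] = -Y.T
    A[s:, 1:] = X.T;  b[s:] = X[j0]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (1 + n_units), method="highs")
    return res.x[0]                                          # phi >= 1; output distance = 1/phi

for j in range(len(X)):
    print(f"carrier {j}: output expansion factor phi = {output_efficiency(j):.3f}")
```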

17.
Monitoring small area contrasts in life expectancy is important for health policy purposes but subject to difficulties under conventional life table analysis. Additionally, the implicit model underlying conventional life table analysis involves a highly parametrized fixed effect approach. An alternative strategy proposed here involves an explicit model based on random effects for both small areas and age groups. The area effects are assumed to be spatially correlated, reflecting unknown mortality risk factors that are themselves typically spatially correlated. Often mortality observations are disaggregated by demographic category as well as by age and area, e.g. by gender or ethnic group, and multivariate area and age random effects will be used to pool over such groups. A case study considers variations in life expectancy in 1,118 small areas (known as wards) in Eastern England over the five-year period 1999–2003. The case study deaths data are classified by gender, age, and area, and a bivariate model for area and age effects is therefore applied. The interrelationship between the random area effects and two major influences on small area life expectancy is demonstrated in the study, these being area socio-economic status (or deprivation) and the location of nursing and residential homes for the frail elderly.
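For reference, the conventional life-table calculation that the random-effects strategy is meant to improve on: age-specific death rates (invented here) are converted into an abridged life table and a life expectancy at birth.

```python
# Abridged life table: rates m_x -> death probabilities q_x -> survivors l_x ->
# person-years L_x -> life expectancy e_x. All rates below are made up.
import numpy as np

age = np.array([0, 1, 5, 15, 25, 35, 45, 55, 65, 75, 85], dtype=float)  # group lower bounds
nx = np.append(np.diff(age), np.nan)                   # interval widths; last group open-ended
mx = np.array([0.005, 0.0003, 0.0002, 0.0006, 0.0008, 0.0015,
               0.004, 0.010, 0.025, 0.065, 0.16])      # age-specific death rates
ax = np.where(age == 0, 0.1, 0.5) * nx                 # years lived in interval by those dying

qx = nx * mx / (1 + (nx - ax) * mx)                    # probability of dying in the interval
qx[-1] = 1.0                                           # everyone dies in the last interval
lx = 1e5 * np.cumprod(np.concatenate(([1.0], 1 - qx[:-1])))   # survivors entering each group
dx = lx * qx
Lx = nx * (lx - dx) + ax * dx                          # person-years lived in each interval
Lx[-1] = lx[-1] / mx[-1]                               # open-ended last interval
ex = Lx[::-1].cumsum()[::-1] / lx                      # remaining life expectancy at each age
print(f"life expectancy at birth: {ex[0]:.1f} years")
```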

18.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true” joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov-type testing, and other work on the evaluation of DSGEs, aimed at comparing the second-order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given, in which it is shown that alternative versions of a standard DSGE model in which calibrated parameters are allowed to vary slightly perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned non-plausible variances and/or distributional assumptions.
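A stripped-down sketch of the accuracy measure described above, with simulated AR(1) series standing in for historical data and DSGE output: compare the empirical CDF of the "historical" series with those implied by two candidate simulators via an integrated squared distributional error (the paper additionally supplies bootstrap critical values for this comparison).

```python
# Distributional squared error between the empirical CDF of a "historical" series and
# the empirical CDFs of two candidate simulators. Series are toy AR(1) processes.
import numpy as np

rng = np.random.default_rng(10)

def ar1(phi, sigma, t):
    e = rng.normal(0, sigma, t)
    y = np.zeros(t)
    for i in range(1, t):
        y[i] = phi * y[i - 1] + e[i]
    return y

historical = ar1(0.9, 1.0, 500)                     # stand-in for the observed series
grid = np.linspace(historical.min(), historical.max(), 200)
F_hist = np.array([(historical <= u).mean() for u in grid])

def sq_error(model_series):
    F_sim = np.array([(model_series <= u).mean() for u in grid])
    return np.mean((F_sim - F_hist) ** 2)           # distributional squared error

benchmark = ar1(0.9, 1.0, 5000)                     # candidate close to the truth
alternative = ar1(0.5, 1.0, 5000)                   # candidate with a mis-set parameter
print(f"benchmark   squared error: {sq_error(benchmark):.4f}")
print(f"alternative squared error: {sq_error(alternative):.4f}  (larger = less accurate)")
```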

19.
Chin-Tsang Chiang, Metrika, 2011, 73(2): 151-170
In this article, a more flexible and easily explained joint latent model with time-varying coefficients is used to characterize time-dependent responses and a failure time. The dependence within the time-dependent responses and between the time-dependent responses and the failure time, and the heterogeneity in both processes, are established through partially non-parametric latent variables. Based on longitudinal and survival time data, an estimation procedure is proposed for the parameter functions of the joint latent model. In our estimation, the approximated likelihood is constructed by substituting basis function expansions for the parameter functions. The expectation-maximization (EM) algorithm is then implemented to obtain the maximizer of the approximated likelihood function and, hence, the estimated parameter functions. The validity of the considered joint latent model enables us to derive the asymptotic properties of the estimated functions. Moreover, the corresponding finite sample properties and the usefulness of our methods are demonstrated through a Monte Carlo simulation and the AIDS Clinical Trials Group (ACTG) 175 data. A possible extension of our joint latent model and some additional topics of interest are also discussed herein.
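A bare-bones sketch of the basis-expansion step: a time-varying coefficient is replaced by a small polynomial basis, turning the varying-coefficient fit into least squares; the paper embeds this inside an approximated likelihood with latent variables and EM, and all data below are made up.

```python
# Varying-coefficient fit by basis expansion: beta(t) = sum_k theta_k b_k(t),
# estimated by least squares on simulated data with a simple polynomial basis.
import numpy as np

rng = np.random.default_rng(11)
n = 3000
t = rng.uniform(0, 1, n)                         # observation times
x = rng.normal(size=n)                           # time-dependent covariate
beta_true = 1.0 + np.sin(np.pi * t)              # true time-varying coefficient
y = beta_true * x + rng.normal(scale=0.3, size=n)

basis = np.column_stack([np.ones(n), t, t**2, t**3])     # cubic polynomial basis in t
design = basis * x[:, None]                              # columns x * b_k(t)
theta, *_ = np.linalg.lstsq(design, y, rcond=None)

grid = np.linspace(0, 1, 5)
B = np.column_stack([np.ones_like(grid), grid, grid**2, grid**3])
print("t:              ", np.round(grid, 2))
print("estimated beta: ", np.round(B @ theta, 2))
print("true beta:      ", np.round(1.0 + np.sin(np.pi * grid), 2))
```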
