Similar Articles
20 similar articles found.
1.
This paper applies the minimax regret criterion to choice between two treatments conditional on observation of a finite sample. The analysis is based on exact small sample regret and does not use asymptotic approximations or finite-sample bounds. Core results are: (i) Minimax regret treatment rules are well approximated by empirical success rules in many cases, but differ from them significantly, both in terms of how the rules look and in terms of maximal regret incurred, for small sample sizes and certain sample designs. (ii) Absent prior cross-covariate restrictions on treatment outcomes, they prescribe inference that is completely separate across covariates, leading to no-data rules as the support of a covariate grows. I conclude by offering an assessment of these results.
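As a concrete illustration of the comparison described above, the following sketch computes the exact expected regret of the empirical success rule for two treatments with Bernoulli outcomes and n subjects per arm, and its worst case over a grid of states of nature. This is our own illustration under simplified assumptions (the grid, tie-breaking rule, and function names are not from the paper), not the paper's exact derivation.

```python
# Illustrative sketch: exact expected regret of the empirical success rule with two
# treatments, Bernoulli outcomes, and n observations per arm.
import numpy as np
from scipy.stats import binom

def expected_regret(p0, p1, n):
    """Exact expected regret of the empirical success rule at outcome probabilities (p0, p1)."""
    best = max(p0, p1)
    regret = 0.0
    for s0 in range(n + 1):
        for s1 in range(n + 1):
            prob = binom.pmf(s0, n, p0) * binom.pmf(s1, n, p1)
            if s1 > s0:
                chosen = p1
            elif s1 < s0:
                chosen = p0
            else:                       # tie: randomize between the two treatments
                chosen = 0.5 * (p0 + p1)
            regret += prob * (best - chosen)
    return regret

def maximal_regret(n, grid=np.linspace(0.0, 1.0, 51)):
    """Worst-case expected regret over a grid of states of nature (p0, p1)."""
    return max(expected_regret(p0, p1, n) for p0 in grid for p1 in grid)

if __name__ == "__main__":
    for n in (1, 2, 5):                 # subjects per treatment arm
        print(n, round(maximal_regret(n), 4))
```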

2.
This paper continues the investigation of minimax regret treatment choice initiated by Manski (2004). Consider a decision maker who must assign treatment to future subjects after observing outcomes experienced in a sample. A certain scoring rule is known to achieve minimax regret in simple versions of this decision problem. I investigate its sensitivity to perturbations of the decision environment in realistic directions, as follows. (i) Treatment outcomes may be influenced by a covariate whose effect on outcome distributions is bounded (in one of numerous probability metrics). This is interesting because introduction of a covariate with unrestricted effects leads to a pathological result. (ii) The experiment may have limited validity because of selective noncompliance or because the sampling universe is a potentially selective subset of the treatment population. Thus, even large samples may generate misleading signals. These problems are formalized via a “bounds” approach that turns the problem into one of partial identification. In both scenarios, small but positive perturbations leave the minimax regret decision rule unchanged. Thus, minimax regret analysis is not knife-edge-dependent on ignoring certain aspects of realistic decision problems. Indeed, it recommends entirely disregarding covariates whose effect is believed to be positive but small, as well as small enough amounts of missing data or selective attrition. All findings are finite sample results derived by game theoretic analysis.

3.
I use the minimax-regret criterion to study choice between two treatments when some outcomes in the study population are unobservable and the distribution of missing data is unknown. I first assume that observable features of the study population are known and derive the treatment rule that minimizes maximum regret over all possible distributions of missing data. When no treatment is dominant, this rule allocates positive fractions of persons to both treatments. I then assume that the data are a random sample of the study population and show that in some instances, treatment rules that estimate certain point-identified population means by sample averages are finite-sample minimax regret.
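To make the role of the unknown missing-data distribution concrete, here is a minimal sketch of worst-case bounds on a treatment's mean outcome when outcomes are bounded in [0, 1] and a known fraction is unobserved. This is our own illustration of the bounding logic, not the paper's treatment rule, and the numbers in the example are hypothetical.

```python
# Illustrative sketch: worst-case (Manski-type) bounds on a mean outcome when some
# outcomes are missing and outcomes are known to lie in [y_min, y_max].
def worst_case_bounds(mean_observed, p_observed, y_min=0.0, y_max=1.0):
    """Bounds on the population mean outcome when a fraction (1 - p_observed) is missing."""
    lower = mean_observed * p_observed + y_min * (1 - p_observed)
    upper = mean_observed * p_observed + y_max * (1 - p_observed)
    return lower, upper

# Hypothetical example: treatment A has observed mean 0.6 with 80% response; treatment B
# has observed mean 0.5 with 95% response. Neither dominates, since the bounds overlap.
print(worst_case_bounds(0.6, 0.80))   # (0.48, 0.68)
print(worst_case_bounds(0.5, 0.95))   # (0.475, 0.525)
```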

4.
We propose a finite sample approach to some of the most common limited dependent variables models. The method rests on the maximized Monte Carlo (MMC) test technique proposed by Dufour [1998. Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and nonstandard asymptotics. Journal of Econometrics, this issue]. We provide a general way of implementing tests and confidence regions. We show that the decision rule associated with an MMC test may be written as a Mixed Integer Programming problem. The branch-and-bound algorithm yields a global maximum in finite time. An appropriate choice of the statistic yields a consistent test, while fulfilling the level constraint for any sample size. The technique is illustrated with numerical data for the logit model.
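For orientation, the sketch below shows the basic Monte Carlo test p-value on which the MMC technique builds: the rank of the observed statistic among statistics simulated under the null. The maximization over nuisance parameters (e.g. via branch-and-bound) that defines the MMC test is deliberately omitted, and the toy statistic in the usage example is our own assumption.

```python
# Illustrative sketch of a Monte Carlo test p-value in the spirit of Dufour's technique.
# The maximized MC test would additionally maximize this p-value over nuisance parameters.
import numpy as np

def mc_test_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    """Rank-based Monte Carlo p-value from statistics simulated under the null hypothesis."""
    rng = np.random.default_rng(rng)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    ge = np.sum(sims >= stat_obs)          # simulated statistics at least as large as observed
    return (ge + 1) / (n_rep + 1)

if __name__ == "__main__":
    simulate = lambda r: r.normal() ** 2   # toy null distribution of the statistic
    print(mc_test_pvalue(3.5, simulate, n_rep=99, rng=1))
```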

5.
Fixed effects estimators of nonlinear panel models can be severely biased due to the incidental parameters problem. In this paper, I characterize the leading term of a large-T expansion of the bias of the MLE and estimators of average marginal effects in parametric fixed effects panel binary choice models. For probit index coefficients, the former term is proportional to the true value of the coefficients being estimated. This result allows me to derive a lower bound for the bias of the MLE. I then show that the resulting fixed effects estimates of ratios of coefficients and average marginal effects exhibit no bias in the absence of heterogeneity and negligible bias for a wide variety of distributions of regressors and individual effects in the presence of heterogeneity. I subsequently propose new bias-corrected estimators of index coefficients and marginal effects with improved finite sample properties for linear and nonlinear models with predetermined regressors.

6.
This paper develops methodology for nonparametric estimation of a measure of the overlap of two distributions based on kernel estimation techniques. This quantity has been proposed as a measure of economic polarization between two groups (Anderson, 2004; Anderson et al., 2010). In ecology it has been used to measure the overlap of species. We give the asymptotic distribution theory of our estimator, which in some cases of practical relevance is nonstandard due to a boundary value problem. We also propose a method for conducting inference based on estimation of unknown quantities in the limiting distribution and show that our method yields consistent inference in all cases we consider. We investigate the finite sample properties of our methods by simulation. We give an application to the study of polarization within China in recent years using household survey data from two provinces taken in 1987 and 2001. We find a substantial increase in polarization between 1987 and 2001 according to monetary outcomes but less change in terms of living space.
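A minimal plug-in version of such an overlap estimator, assuming Gaussian kernels and a common evaluation grid (our simplification, not necessarily the paper's implementation), could look as follows.

```python
# Illustrative sketch: the overlap coefficient  integral of min{f(x), g(x)} dx  estimated by
# plugging in Gaussian kernel density estimates for the two groups.
import numpy as np
from scipy.stats import gaussian_kde

def overlap_measure(x, y, grid_size=512):
    """Kernel plug-in estimate of the overlap of the distributions of x and y."""
    f_hat, g_hat = gaussian_kde(x), gaussian_kde(y)
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    return np.trapz(np.minimum(f_hat(grid), g_hat(grid)), grid)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 500)   # e.g. outcomes for group 1
    y = rng.normal(1.0, 1.0, 500)   # e.g. outcomes for group 2
    print(round(overlap_measure(x, y), 3))
```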

7.
A regression discontinuity (RD) research design is appropriate for program evaluation problems in which treatment status (or the probability of treatment) depends on whether an observed covariate exceeds a fixed threshold. In many applications the treatment-determining covariate is discrete. This makes it impossible to compare outcomes for observations “just above” and “just below” the treatment threshold, and requires the researcher to choose a functional form for the relationship between the treatment variable and the outcomes of interest. We propose a simple econometric procedure to account for uncertainty in the choice of functional form for RD designs with discrete support. In particular, we model deviations of the true regression function from a given approximating function (the specification errors) as random. Conventional standard errors ignore the group structure induced by specification errors and tend to overstate the precision of the estimated program impacts. The proposed inference procedure, which allows for specification error, also has a natural interpretation within a Bayesian framework.
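One common practical implementation of this idea is to cluster standard errors on the distinct values of the discrete running variable. The sketch below (with simulated data and our own specification) shows that clustered-variance shortcut rather than the authors' full random specification-error procedure.

```python
# Illustrative sketch: parametric RD fit with a discrete running variable, clustering the
# standard errors on the support points of the running variable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
run = rng.integers(-10, 11, n)                 # discrete running variable (e.g. age in years)
treat = (run >= 0).astype(float)               # treatment switches on at the threshold
y = 1.0 + 0.05 * run + 0.5 * treat + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "run": run, "treat": treat})
model = smf.ols("y ~ treat + run + treat:run", data=df)
# Cluster on the distinct values of the running variable
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["run"]})
print(res.params["treat"], res.bse["treat"])
```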

8.
Hypothesis tests on cointegrating vectors based on the asymptotic distributions of the test statistics are known to suffer from severe small-sample size distortion. In this paper an alternative bootstrap procedure is proposed and evaluated through a Monte Carlo experiment, finding that the Type I errors are close to the nominal significance levels but power might not be entirely adequate. It is then shown that a combined test based on the outcomes of both the asymptotic and the bootstrap tests has both correct size and low Type II error, thereby improving on the currently available procedures.

9.
We propose methods for constructing confidence sets for the timing of a break in level and/or trend that have asymptotically correct coverage for both I(0) and I(1) processes. These are based on inverting a sequence of tests for the break location, evaluated across all possible break dates. We separately derive locally best invariant tests for the I(0) and I(1) cases; under their respective assumptions, the resulting confidence sets provide correct asymptotic coverage regardless of the magnitude of the break. We suggest use of a pre-test procedure to select between the I(0)- and I(1)-based confidence sets, and Monte Carlo evidence demonstrates that our recommended procedure achieves good finite sample properties in terms of coverage and length across both I(0) and I(1) environments. An application to US macroeconomic data is provided that further demonstrates the value of these procedures.
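The test-inversion logic can be sketched as follows. The statistic and critical value below are placeholders of our own: the paper's locally best invariant statistics for the I(0) and I(1) cases are not reproduced here, and the toy likelihood-ratio-style statistic in the usage example is only meant to show the mechanics of keeping non-rejected break dates.

```python
# Illustrative sketch: confidence set for a break date by inverting a test of
# "the break occurs at date tau" across all candidate dates.
import numpy as np

def confidence_set_for_break(y, test_stat, crit_value, trim=0.05):
    """Keep every candidate break date whose null 'the break is at tau' is not rejected."""
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)
    return [tau for tau in range(lo, hi) if test_stat(y, tau) <= crit_value]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 60)])  # level break at t = 60

    def ssr(series, tau):   # sum of squared residuals with separate means before/after tau
        return ((series[:tau] - series[:tau].mean()) ** 2).sum() + \
               ((series[tau:] - series[tau:].mean()) ** 2).sum()

    best_ssr = min(ssr(y, t) for t in range(6, 114))
    # Toy statistic comparing each candidate date with the best-fitting break date
    toy_stat = lambda series, tau: (ssr(series, tau) - best_ssr) / series.var(ddof=1)
    print(confidence_set_for_break(y, toy_stat, crit_value=3.84))
```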

10.
We investigate the finite sample properties of a large number of estimators for the average treatment effect on the treated that are suitable when adjustment for observed covariates is required, such as inverse probability weighting, kernel matching and other variants of matching, as well as different parametric models. The simulation design used is based on real data usually employed for the evaluation of labour market programmes in Germany. We vary several dimensions of the design that are of practical importance, such as sample size, the type of outcome variable, and aspects of the selection process. We find that trimming individual observations with too much weight, as well as the choice of tuning parameters, is important for all estimators. A conclusion from our simulations is that a particular radius matching estimator combined with regression performs best overall, in particular when robustness to misspecifications of the propensity score and different types of outcome variables is considered an important property.
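As one example from the class of estimators studied, here is a hedged sketch of an inverse probability weighting estimator of the ATT with a simple weight-trimming rule. The trimming rule, propensity-score model, and simulated data are our own choices for illustration, not the paper's preferred radius-matching-with-regression estimator.

```python
# Illustrative sketch: IPW estimator of the ATT with a logistic propensity score and
# trimming of extreme control-group weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_att(y, d, X, trim_quantile=0.99):
    """IPW estimate of the ATT, trimming control observations with extreme weights."""
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    w = np.where(d == 1, 1.0, ps / (1 - ps))   # weights that reweight controls to the treated
    cap = np.quantile(w[d == 0], trim_quantile)
    keep = (d == 1) | (w <= cap)
    y, d, w = y[keep], d[keep], w[keep]
    return y[d == 1].mean() - np.average(y[d == 0], weights=w[d == 0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 2))
    d = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
    y = X[:, 0] + 0.5 * d + rng.normal(size=n)     # true treatment effect = 0.5
    print(round(ipw_att(y, d, X), 3))
```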

11.
This paper shows how valid inferences can be made when an instrumental variable does not perfectly satisfy the orthogonality condition. When there is a mild violation of the orthogonality condition, the Anderson and Rubin (1949) test is oversized. In order to correct this problem, the fractionally resampled Anderson-Rubin test is derived by modifying Wu’s (1990) resampling technique. We select half of the sample when resampling and obtain valid but conservative critical values. Simulations show that our technique performs well even with moderate to large violations of exogeneity when a finite-sample correction for the block size choice is applied.

12.
In empirical Bayes decision making, the Bayes empirical Bayes approach is discussed by Gilliland and Boyer (1979). In the finite state component case, the Bayes empirical Bayes procedures are shown to have optimal properties in a fairly general setting and are believed to have a small-sample advantage over the classical rules. The flexibility to make desirable adjustments to these decision procedures through the choice of prior enables one to set a proper strategy when dealing with actual problems.
Applying Bayes empirical Bayes procedures, however, creates some interesting theoretical and computational problems, as the procedures are fairly complicated in structure. This paper gives a brief introduction to the Bayes empirical Bayes approach, and, to illustrate it, explicit results are given for testing H0: N(-1,1) against H1: N(1,1).
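The component testing problem mentioned at the end can be made concrete with a simple Bayes rule for a single observation, sketched below under an assumed prior probability on H1. The empirical Bayes layer of the paper, in which the prior itself is estimated from past data, is not reproduced here.

```python
# Illustrative sketch: Bayes rule for testing H0: X ~ N(-1, 1) against H1: X ~ N(1, 1)
# given a prior probability on H1 (here a placeholder value of 0.5).
import math

def posterior_h1(x, prior_h1=0.5):
    """Posterior probability of H1 for one observation x under equal unit variances."""
    def normal_pdf(z, mean):
        return math.exp(-0.5 * (z - mean) ** 2) / math.sqrt(2 * math.pi)
    num = prior_h1 * normal_pdf(x, 1.0)
    den = num + (1 - prior_h1) * normal_pdf(x, -1.0)
    return num / den

def bayes_decision(x, prior_h1=0.5):
    """Accept H1 whenever its posterior probability exceeds 1/2."""
    return "H1" if posterior_h1(x, prior_h1) > 0.5 else "H0"

if __name__ == "__main__":
    for x in (-1.5, 0.0, 0.3, 2.0):
        print(x, round(posterior_h1(x), 3), bayes_decision(x))
```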

13.
In a sample selection or treatment effects model, common unobservables may affect both the outcome and the probability of selection in unknown ways. This paper shows that the distribution function of potential outcomes, conditional on covariates, can be identified given an observed variable V that affects the treatment or selection probability in certain ways and is conditionally independent of the error terms in a model of potential outcomes. Selection model estimators based on this identification are provided, which take the form of simple weighted averages, GMM, or two-stage least squares. These estimators permit endogenous and mismeasured regressors. Empirical applications are provided to estimation of a firm investment model and a model of schooling effects on wages.

14.
Parametric mixture models are commonly used in applied work, especially empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the question of inference on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when sample size is large. It is well known that likelihood inference in mixture models is complicated due to (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where there is lack of point identification. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.
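A generic parametric bootstrap for a likelihood-ratio statistic in a toy mixture with known component densities can be sketched as follows. The specific mixture, null value, and number of bootstrap replications are illustrative assumptions of ours, not the paper's design; boundary and weak-identification issues are exactly why the asymptotic chi-square calibration can fail and a bootstrap calibration is attractive.

```python
# Illustrative sketch: parametric bootstrap p-value for a likelihood-ratio statistic in the
# toy mixture p*N(2,1) + (1-p)*N(0,1), testing H0: p = p0.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def neg_loglik(p, x):
    dens = p * norm.pdf(x, 2.0, 1.0) + (1 - p) * norm.pdf(x, 0.0, 1.0)
    return -np.log(dens).sum()

def lr_stat(x, p0):
    p_hat = minimize_scalar(neg_loglik, bounds=(0.0, 1.0), args=(x,), method="bounded").x
    return 2 * (neg_loglik(p0, x) - neg_loglik(p_hat, x))

def bootstrap_pvalue(x, p0, n_boot=199, rng=None):
    """Simulate from the null mixture, recompute the LR statistic, and compare ranks."""
    rng = np.random.default_rng(rng)
    obs, n = lr_stat(x, p0), len(x)
    sims = []
    for _ in range(n_boot):
        comp = rng.random(n) < p0
        xb = np.where(comp, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))
        sims.append(lr_stat(xb, p0))
    return (np.sum(np.array(sims) >= obs) + 1) / (n_boot + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 200)          # data generated under p = 0 (boundary null)
    print(bootstrap_pvalue(x, p0=0.0, rng=1))
```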

15.
This paper addresses the issue of optimal inference for parameters that are partially identified in models with moment inequalities. There currently exists a variety of inferential methods for use in this setting. However, the question of choosing optimally among contending procedures is unresolved. In this paper, I first consider a canonical large deviations criterion for optimality and show that inference based on the empirical likelihood ratio statistic is optimal. Second, I introduce a new empirical likelihood bootstrap that provides a valid resampling method for moment inequality models and overcomes the implementation challenges that arise as a result of non-pivotal limit distributions. Lastly, I analyze the finite sample properties of the proposed framework using Monte Carlo simulations. The simulation results are encouraging.

16.
In this paper we provide a joint treatment of two major problems that surround testing for a unit root in practice: uncertainty as to whether or not a linear deterministic trend is present in the data, and uncertainty as to whether the initial condition of the process is (asymptotically) negligible or not. We suggest decision rules based on the union of rejections of four standard unit root tests (OLS and quasi-differenced demeaned and detrended ADF unit root tests), along with information regarding the magnitude of the trend and initial condition, to allow simultaneously for both trend and initial condition uncertainty.
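A schematic version of a union-of-rejections decision rule is sketched below: reject the unit-root null if any of the constituent left-tailed tests rejects, with critical values scaled by a common conservativeness constant. The statistics, critical values, and the scaling constant `scale` are placeholders of ours rather than the calibrated values derived in the paper.

```python
# Illustrative sketch of a union-of-rejections decision rule for ADF-type (left-tailed) tests.
def union_of_rejections(stats, crit_values, scale=1.1):
    """Reject if any test statistic falls below its (scaled) critical value.
    `scale` > 1 makes each constituent test more conservative; its value is a placeholder."""
    return any(stats[k] < scale * crit_values[k] for k in stats)

# Hypothetical statistics and roughly indicative 5% critical values for four ADF-type tests
stats = {"OLS_demeaned": -3.1, "OLS_detrended": -2.9,
         "QD_demeaned": -2.4, "QD_detrended": -2.6}
crit = {"OLS_demeaned": -2.86, "OLS_detrended": -3.41,
        "QD_demeaned": -1.94, "QD_detrended": -2.85}
print(union_of_rejections(stats, crit))
```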

17.
This paper considers semiparametric identification of structural dynamic discrete choice models and models for dynamic treatment effects. Time to treatment and counterfactual outcomes associated with treatment times are jointly analyzed. We examine the implicit assumptions of the dynamic treatment model using the structural model as a benchmark. For the structural model we show the gains from using cross-equation restrictions connecting choices to associated measurements and outcomes. In the dynamic discrete choice model, we identify both subjective and objective outcomes, distinguishing ex post and ex ante outcomes. We show how to identify agent information sets.

18.
In this paper I use the National Supported Work (NSW) data to examine the finite-sample performance of the Oaxaca–Blinder unexplained component as an estimator of the population average treatment effect on the treated (PATT). Specifically, I follow the sample and variable selections of Dehejia and Wahba (1999), and conclude that Oaxaca–Blinder performs better than any of the estimators in this influential paper, provided that overlap is imposed. As a robustness check, I consider alternative sample (Smith and Todd, 2005) and variable (Abadie and Imbens, 2011) selections, and present a simulation study that is also based on the NSW data.
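A minimal implementation of the Oaxaca–Blinder unexplained component used as an ATT estimator might look as follows: fit a linear outcome model on the comparison group, predict counterfactual outcomes for the treated, and average the difference. The overlap-imposing step the paper emphasizes is omitted, and the simulated data are our own.

```python
# Illustrative sketch: Oaxaca-Blinder "unexplained component" as an ATT estimator.
import numpy as np

def oaxaca_blinder_att(y, d, X):
    """ATT estimate from a control-group regression and treated-group counterfactuals."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta_c, *_ = np.linalg.lstsq(X1[d == 0], y[d == 0], rcond=None)  # control-group coefficients
    counterfactual = X1[d == 1] @ beta_c
    return y[d == 1].mean() - counterfactual.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 4000
    X = rng.normal(size=(n, 3))
    d = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
    y = X @ np.array([1.0, 0.5, -0.5]) + 0.8 * d + rng.normal(size=n)  # true ATT = 0.8
    print(round(oaxaca_blinder_att(y, d, X), 3))
```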

19.
Most route choice models assume that people are completely rational. Recently, regret theory has attracted researchers’ attention because of its power to describe real travel behavior. This paper proposes a multiclass stochastic user equilibrium assignment model based on regret theory. Users are differentiated by their degree of regret aversion. The route travel disutility for users of each class is defined as a linear combination of the travel time and anticipated regret. The proposed model is formulated as a variational inequality problem and solved using the self-regulated averaging method. The numerical results show that users’ regret aversion indeed influences their route choice behavior and that users with high regret aversion are more inclined to change routes as the degree of traffic congestion varies.
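The disutility specification can be illustrated schematically. In the sketch below, anticipated regret is measured against the fastest available route and weighted by a class-specific aversion parameter theta; this functional form is a simplification of ours rather than the paper's exact model, and the equilibrium computation via self-regulated averaging is not shown.

```python
# Illustrative sketch: regret-augmented route disutility for one user class.
import numpy as np

def route_disutility(travel_times, theta):
    """Travel time plus theta times anticipated regret relative to the fastest route."""
    t = np.asarray(travel_times, dtype=float)
    regret = t - t.min()              # how much worse each route is than the best one
    return t + theta * regret

times = [22.0, 25.0, 30.0]            # minutes on three alternative routes (hypothetical)
for theta in (0.0, 0.5, 2.0):         # low to high regret aversion
    print(theta, route_disutility(times, theta))
```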

20.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small- and medium-sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite-sample distribution of the long-run variance estimator used in the KPSS test, while the second source of the size distortions is the serial correlation not captured by the long-run variance estimator because the truncation lag parameter is chosen too small. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite-sample distribution of the KPSS test statistic, conditional on the time-series dimension and the truncation lag parameter, is used. Hence, finite-sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is lower raw power against a non-stationary alternative hypothesis.
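For reference, a plain implementation of the level-stationarity KPSS statistic with a Bartlett-kernel long-run variance estimator is sketched below; the truncation lag `lags` is exactly the tuning parameter whose choice the article ties to the size distortions. This is a standard textbook version, not necessarily the article's code.

```python
# Illustrative sketch: KPSS statistic for level stationarity with a Bartlett-kernel
# long-run variance estimator and truncation lag `lags`.
import numpy as np

def kpss_stat(y, lags):
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()                            # residuals from a level-only regression
    s = np.cumsum(e)                            # partial sums of the residuals
    lrv = (e @ e) / T                           # long-run variance: variance term ...
    for j in range(1, lags + 1):                # ... plus Bartlett-weighted autocovariances
        gamma_j = (e[j:] @ e[:-j]) / T
        lrv += 2 * (1 - j / (lags + 1)) * gamma_j
    return (s @ s) / (T ** 2 * lrv)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = 0.5 + rng.normal(size=200)              # stationary series
    print(round(kpss_stat(y, lags=4), 3))       # compare with the 5% asymptotic cv of about 0.463
```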
