Similar documents
20 similar documents found (search time: 475 ms)
1.
This paper continues the investigation of minimax regret treatment choice initiated by Manski (2004). Consider a decision maker who must assign treatment to future subjects after observing outcomes experienced in a sample. A certain scoring rule is known to achieve minimax regret in simple versions of this decision problem. I investigate its sensitivity to perturbations of the decision environment in realistic directions, namely: (i) Treatment outcomes may be influenced by a covariate whose effect on outcome distributions is bounded (in one of numerous probability metrics). This is interesting because introducing a covariate with unrestricted effects leads to a pathological result. (ii) The experiment may have limited validity because of selective noncompliance or because the sampling universe is a potentially selective subset of the treatment population. Thus, even large samples may generate misleading signals. These problems are formalized via a "bounds" approach that turns the problem into one of partial identification. In both scenarios, small but positive perturbations leave the minimax regret decision rule unchanged. Thus, minimax regret analysis does not depend knife-edge on ignoring certain aspects of realistic decision problems. Indeed, it recommends entirely disregarding covariates whose effect is believed to be positive but small, as well as small enough amounts of missing data or selective attrition. All findings are finite sample results derived by game theoretic analysis.

2.
This paper studies the problem of treatment choice between a status quo treatment with a known outcome distribution and an innovation whose outcomes are observed only in a finite sample. I evaluate statistical decision rules, which are functions that map sample outcomes into the planner's treatment choice for the population, based on regret, the expected welfare loss due to assigning inferior treatments. I extend previous work by Manski (2004), which applied the minimax regret criterion to treatment choice problems, by considering decision criteria that treat Type I regret (due to mistakenly choosing an inferior new treatment) and Type II regret (due to mistakenly rejecting a superior innovation) asymmetrically, and I derive exact finite sample solutions to these problems for experiments with normal, Bernoulli and bounded outcome distributions. The paper also evaluates, in terms of regret, the properties of treatment choice and sample size selection based on classical hypothesis tests and power calculations.

3.
This paper applies the minimax regret criterion to choice between two treatments conditional on observation of a finite sample. The analysis is based on exact small sample regret and does not use asymptotic approximations or finite-sample bounds. Core results are: (i) Minimax regret treatment rules are well approximated by empirical success rules in many cases, but for small sample sizes and certain sample designs they differ from them significantly, both in how the rules look and in the maximal regret incurred. (ii) Absent prior cross-covariate restrictions on treatment outcomes, they prescribe inference that is completely separate across covariates, leading to no-data rules as the support of a covariate grows. I conclude by offering an assessment of these results.
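For concreteness, an empirical success rule of the kind referred to above can be sketched as follows (a minimal illustration with invented labels, not the minimax regret rule itself, which the paper shows can differ from it in small samples):

```python
import random

def empirical_success_rule(outcomes_a, outcomes_b, rng=random):
    """Assign the population to the treatment with the higher sample-mean
    outcome; break an exact tie with a fair coin flip."""
    mean_a = sum(outcomes_a) / len(outcomes_a)
    mean_b = sum(outcomes_b) / len(outcomes_b)
    if mean_a != mean_b:
        return "A" if mean_a > mean_b else "B"
    return rng.choice(["A", "B"])
```

With Bernoulli outcomes this reduces to comparing empirical success frequencies, e.g. `empirical_success_rule([1, 1, 0], [0, 0, 1])` picks `"A"`.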

4.
Most route choice models assume that people are completely rational. Recently, regret theory has attracted researchers' attention because of its power to capture real travel behavior. This paper proposes a multiclass stochastic user equilibrium assignment model based on regret theory. Users are differentiated by their degree of regret aversion. The route travel disutility for users of each class is defined as a linear combination of travel time and anticipated regret. The proposed model is formulated as a variational inequality problem and solved with the self-regulated averaging method. The numerical results show that users' regret aversion indeed influences their route choice behavior, and that users with high regret aversion are more inclined to change routes as the degree of traffic congestion varies.

5.
A play-the-winner-type urn design with reduced variability   Cited: 1 (self-citations: 1; citations by others: 0)
We propose a new adaptive allocation rule, the drop-the-loser, that randomizes subjects in the course of a trial comparing treatments with dichotomous outcomes. The rule tends to assign more patients to better treatments, with the same limiting proportion as the randomized play-the-winner rule, but with a significantly less variable allocation proportion. This decrease in variability translates into a gain in statistical power. For some values of the success probabilities, the drop-the-loser rule has a double advantage over conventional equal allocation: it has better power and assigns more subjects to the better treatment.
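The urn dynamics of the rule can be sketched in a short simulation (the labels, helper name, and success probabilities below are hypothetical, and this is an illustration of the mechanism rather than the authors' implementation): a drawn treatment ball is returned after a success and dropped after a failure, while an immigration ball replenishes the urn with one ball of each treatment type.

```python
import random

def drop_the_loser(p_success, n_patients, seed=0):
    """Simulate a drop-the-loser urn for two treatments with dichotomous
    outcomes.  p_success maps a treatment label to its true success
    probability (known only to the simulator)."""
    rng = random.Random(seed)
    urn = ["A", "B", "IMM"]          # two treatment balls plus an immigration ball
    assigned = {"A": 0, "B": 0}
    treated = 0
    while treated < n_patients:
        ball = rng.choice(urn)
        if ball == "IMM":
            # Immigration ball: return it and add one ball of each treatment.
            urn.extend(["A", "B"])
            continue
        assigned[ball] += 1
        treated += 1
        if rng.random() >= p_success[ball]:
            urn.remove(ball)         # failure: drop the loser from the urn
        # on success the ball is simply returned
    return assigned
```

With, say, success probabilities 0.9 for A and 0.1 for B, the simulation sends most patients to A, consistent with the limiting allocation the abstract attributes to the randomized play-the-winner rule.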

6.
One of the most difficult problems confronting investigators who analyze survey data is how to treat missing data. Many statistical procedures cannot be applied directly when values are missing. This paper considers the problem of estimating the population mean using auxiliary information when some observations in the sample are missing and the population mean of the auxiliary variable is not available. We use tools of classical statistical estimation theory to find a suitable estimator and study its model and design properties. We also report the results of a broad-based simulation study of the estimator's efficiency, which reveals very promising results.

7.
I study inverse probability weighted M-estimation under a general missing data scheme. Examples include M-estimation with missing data due to a censored survival time, propensity score estimation of the average treatment effect in the linear exponential family, and variable probability sampling with observed retention frequencies. I extend an important result known to hold in special cases: estimating the selection probabilities is generally more efficient than using the known selection probabilities in estimation. For the treatment effect case, the setup allows a general characterization of a "double robustness" result due to Scharfstein et al. [1999. Rejoinder. Journal of the American Statistical Association 94, 1135–1146].
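The basic weighting idea can be sketched for the simplest case, a population mean with known selection probabilities (the paper's setting is general M-estimation, and its point is precisely that estimated probabilities are typically more efficient than the known ones used in this sketch):

```python
def ipw_mean(values, observed, probs):
    """Inverse probability weighted (Horvitz-Thompson style) estimate of a
    population mean: each observed value is weighted by the inverse of its
    selection probability, and the weighted sum is divided by the full
    sample size n (missing units contribute zero)."""
    n = len(values)
    return sum(v / p for v, o, p in zip(values, observed, probs) if o) / n

estimate = ipw_mean([1, 2, 3, 4],
                    [True, True, False, True],
                    [1.0, 0.5, 0.5, 1.0])   # -> (1 + 4 + 4) / 4 = 2.25
```

The unit observed with probability 0.5 counts double, compensating on average for the units that the same mechanism drops.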

8.
In cross-national longitudinal studies it is often impossible to administer the same measurement instruments at the same occasions to all sample units in all participating countries. This quickly results in large quantities of missing data, due to (a) missing measurement instruments in some countries, (b) missing assessment waves within or across countries, and (c) missing data for individual sample units. As compared to cross-sectional studies, the problem of missing values is further aggravated by the fact that missing values are always associated with different time intervals between repeated observations. In the past, this has often been dealt with by the use of phantom variables, but this approach is limited to simple designs with few missing value patterns. In the present paper we propose a new way to think of, and deal with, missing values in longitudinal studies. Instead of conceiving of a longitudinal study as a study with \(T\) discrete time points of which some are missing, we propose to conceive of it as a way to measure an underlying process that develops continuously over time but is only observed at some selected discrete time points. This transforms the problem of missing values into a problem of unequal time intervals. After a quick introduction to the basic idea of continuous time modeling, we demonstrate how this approach provides a straightforward solution to missing measurement instruments in some countries, missing assessment waves within or across countries, and missing data for individual sample units.

9.
Assuming that two-step monotone missing data is drawn from a multivariate normal population, this paper derives the Bartlett-type correction to the likelihood ratio test for missing completely at random (MCAR), which plays an important role in the statistical analysis of incomplete datasets. The advantages of our approach are confirmed in Monte Carlo simulations. Our correction drastically improved the accuracy of the type I error in Little's (1988, Journal of the American Statistical Association, 83, 1198–1202) test for MCAR and performed well even for moderate sample sizes.

10.
Maximum likelihood estimation with missing data in microeconometric analysis   Cited: 3 (self-citations: 0; citations by others: 3)
Missing data are frequently encountered in microeconometric analysis. The traditional remedies either delete the observations with missing values on the variables of interest or replace the missing values with the variable's mean, which often produces biased samples. Maximum likelihood estimation can handle and estimate missing data effectively. This paper first introduces the maximum likelihood approach to missing data, then applies it to the missing data in an actual survey dataset and compares and evaluates the results against those of the traditional methods.

11.
Using longitudinal data on individuals from the European Community Household Panel (ECHP) for eleven countries during 1995-2001, I investigate temporary job contract duration and job search effort. The countries are Austria, Belgium, Denmark, Finland, France, Greece, Ireland, Italy, the Netherlands, Portugal and Spain. I construct a search model for workers in temporary jobs which predicts that shorter duration raises search intensity. Calibration of the model to the ECHP data implies that at least 75% of the increase in search intensity over the life of a 2+ year temporary contract occurs in the last six months of the contract. I then estimate regression models for search effort that control for human capital, pay, local unemployment, and individual and time fixed effects. I find that workers on temporary jobs indeed search harder than those on permanent jobs. Moreover, search intensity increases as temporary job duration falls, and roughly 84% of this increase occurs on average in the shortest duration jobs. These results are robust to disaggregation by gender and by country. These empirical results are noteworthy, since it is not necessary to assume myopia or hyperbolic discounting in order to explain them, although the data clearly also do not rule out such explanations.

12.
Consumers' purchase behaviour is observed in a panel over the course of a month. The quantity of interest is the penetration of a product, which has to be estimated from incomplete data: for some or all respondents, some weeks are missing. To this end, the purchasing process is modelled with a variety of stochastic processes. The performance of several existing models is compared for penetrations in the complete population, and also for Bayesian estimates in subpopulations.

13.
This study considers how changes in wealth affect insurance demand when individuals suffer disutility from regret. Anticipated regret stems from a comparison between the ex-post maximum and actual wealth. We consider a situation wherein individuals maximize their expected utility incorporating anticipated regret. The wealth effect on insurance demand can be decomposed into a risk effect and a regret effect, which are determined by the properties of the utility function and the regret function. We show that insurance can be a normal good when individuals place weight on anticipated regret, even though the utility function exhibits decreasing absolute risk aversion. This result indicates that regret theory is a possible explanation for the wealth effect puzzle: empirically, insurance is a normal good, although under expected utility theory it should be inferior.

14.
Juries charged with evaluating economic policy alternatives are the focus of this study. The recruitment and management of juries is a principal-agent problem involving the design of incentive mechanisms for participation and truthful revelation of values. This paper considers a simple general equilibrium economy in which juries of consumers are used to estimate the value of public projects and determine their provision. The impact of participation fees on jury selection and representativeness, and on statistical mitigation of response errors, is analyzed. Manski set identification is used to bound selection bias and determine participation fee treatments that minimize welfare regret from imperfect jury findings.

15.
Linkage errors can occur when probability-based methods are used to link records from two distinct data sets corresponding to the same target population. Current approaches to modifying standard methods of regression analysis to allow for these errors only deal with the case of two linked data sets and assume that the linkage process is complete, that is, all records on the two data sets are linked. This study extends these ideas to accommodate the situation when more than two data sets are probabilistically linked and the linkage is incomplete.

16.
In this paper, we propose an estimator for the population mean when some observations on the study and auxiliary variables are missing from the sample. The proposed estimator is valid for any unequal probability sampling design, and is based upon the pseudo empirical likelihood method. The proposed estimator is compared with other estimators in a simulation study.

17.
Principal decision-makers are sometimes obliged to rely on multiple sources of information when judging the desirability of actions in response to the decisions they face, and may hire specialized agents to inform those decisions. Principals have the authority either to allow information-sharing among agents or to prevent it. I assume that communication facilitates the emergence of complementarities among agents, but may also promote collusion. I study the optimal design of contracts, focusing on how to sequence the communication of expertise. I show that, from a principal's point of view, when the advantages of allowing communication dominate, communication is more effective before effort choices are made than after.

18.
The paper studies the assignment of property rights. By assignment I mean a social mechanism that transfers a valuable resource from an "unowned" state to an "owned" state (for example, a first-possession rule). I argue that any assignment mechanism faces an implementation constraint, with one exception: assignment by conflict. I characterize this constraint and show that under some conditions population growth facilitates rule-based assignments because appropriation by conflict becomes more costly; in other cases, however, this effect is reversed. The model may give some insight into the emergence and disappearance of the open-field system in medieval Europe, both of which, paradoxically, have been attributed to population growth.

19.
Conclusions about the development of delinquent behaviour over the life-course can only be drawn from longitudinal data, typically gained by repeatedly interviewing the same respondents. Missing data are a problem for such analyses, as we show with data from a four-wave panel of adolescents. In this article two alternative techniques for coping with missing data are used: full information maximum likelihood estimation and multiple imputation. Both methods allow one to use all available data (including adolescents with missing information on some variables) to estimate the development of delinquency. We demonstrate that self-reported delinquency is systematically underestimated under listwise deletion (LD) of missing data. Further, LD leads to false conclusions about gender- and school-specific differences in the age–crime relationship. In the final discussion some hints are given on further methods for dealing with bias in panel data affected by the missingness process.
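The downward bias from listwise deletion that the authors report can be illustrated with a toy example (the numbers are invented for illustration, not taken from the panel): if delinquent adolescents are the ones who drop out of later waves, the complete-case mean understates delinquency.

```python
def complete_case_mean(values, observed):
    """Mean after listwise deletion: only fully observed units are kept."""
    kept = [v for v, o in zip(values, observed) if o]
    return sum(kept) / len(kept)

# 1 = a self-reported delinquent act, 0 = none; suppose two of the three
# delinquent respondents are missing in later waves.
values = [0, 0, 0, 1, 1, 1]                           # true mean 0.5
observed = [True, True, True, True, False, False]
print(complete_case_mean(values, observed))           # 0.25, biased downward
```

Full information maximum likelihood and multiple imputation avoid this by also using the partially observed units instead of discarding them.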

20.
Imputation procedures such as fully efficient fractional imputation (FEFI) or multiple imputation (MI) create multiple versions of the missing observations, thereby reflecting uncertainty about their true values. Multiple imputation generates a finite set of imputations through a posterior predictive distribution; fractional imputation assigns weights to the observed data. The focus of this article is the development of FEFI for partially classified two-way contingency tables. Point estimators and variances of FEFI estimators of population proportions are derived. Simulation results, when data are missing completely at random or missing at random, show that FEFI is comparable in performance to maximum likelihood estimation and multiple imputation, and superior to simple stochastic imputation and complete case analysis. The methods are illustrated with four data sets.
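A minimal sketch of the fractional-allocation step (with invented counts; the full FEFI methodology, including its weighting and variance estimation, goes well beyond this): units classified on the row variable only are split into fractional pseudo-observations across the cells of their row, in proportion to the conditional distribution estimated from the fully classified units.

```python
def fefi_counts(full, row_only):
    """full[i][j]: fully classified counts of a two-way table;
    row_only[i]: counts observed on the row variable only.
    Returns the completed table with fractional allocations added."""
    out = [list(row) for row in full]
    for i, extra in enumerate(row_only):
        row_total = sum(full[i])
        for j, cell in enumerate(full[i]):
            # split the partially classified units across the row's cells
            out[i][j] += extra * cell / row_total
    return out

completed = fefi_counts([[40, 10], [20, 30]], [10, 20])
# e.g. cell (0, 0) receives 10 * 40/50 = 8 extra fractional units -> 48.0
```

Population proportions are then estimated from the completed table, whose total equals the number of fully plus partially classified units.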


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号