Similar Literature
20 similar documents found.
1.
This paper studies the problem of treatment choice between a status quo treatment with a known outcome distribution and an innovation whose outcomes are observed only in a finite sample. I evaluate statistical decision rules, which are functions that map sample outcomes into the planner's treatment choice for the population, based on regret, the expected welfare loss from assigning inferior treatments. I extend the line of work initiated by Manski (2004), which applied the minimax regret criterion to treatment choice problems, by considering decision criteria that treat Type I regret (from mistakenly choosing an inferior new treatment) and Type II regret (from mistakenly rejecting a superior innovation) asymmetrically, and I derive exact finite-sample solutions to these problems for experiments with normal, Bernoulli, and bounded outcome distributions. The paper also evaluates, in terms of regret, the properties of treatment choice and sample size selection based on classical hypothesis tests and power calculations.
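The Type I/Type II regret decomposition lends itself to a small numerical sketch. The illustration below assumes a Bernoulli innovation compared against a status quo with known mean m0 and a simple count-threshold decision rule; the rule, constants, and function names are ours, not the paper's exact setup.

```python
import numpy as np
from scipy.stats import binom

# Illustrative finite-sample regret of a threshold rule: adopt the
# innovation iff the number of successes in n trials exceeds `threshold`.
# m0 is the known mean outcome of the status quo; p is the innovation's
# unknown success probability.
def regret_components(p, m0, n, threshold):
    prob_adopt = 1.0 - binom.cdf(threshold, n, p)
    if p < m0:   # inferior innovation: Type I regret from mistaken adoption
        return (m0 - p) * prob_adopt, 0.0
    else:        # superior innovation: Type II regret from mistaken rejection
        return 0.0, (p - m0) * (1.0 - prob_adopt)

# Maximum Type I and Type II regret over a grid of p, for n = 20, m0 = 0.5.
grid = np.linspace(0.0, 1.0, 201)
r1 = max(regret_components(p, 0.5, 20, 10)[0] for p in grid)
r2 = max(regret_components(p, 0.5, 20, 10)[1] for p in grid)
print(f"max Type I regret: {r1:.4f}, max Type II regret: {r2:.4f}")
```

Raising the threshold shrinks the maximum Type I regret at the cost of Type II regret, which is exactly the asymmetry such decision criteria are designed to control.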

2.
This paper continues the investigation of minimax regret treatment choice initiated by Manski (2004). Consider a decision maker who must assign treatment to future subjects after observing outcomes experienced in a sample. A certain scoring rule is known to achieve minimax regret in simple versions of this decision problem. I investigate its sensitivity to perturbations of the decision environment in two realistic directions. (i) Treatment outcomes may be influenced by a covariate whose effect on outcome distributions is bounded (in one of numerous probability metrics); this is interesting because introducing a covariate with unrestricted effects leads to a pathological result. (ii) The experiment may have limited validity because of selective noncompliance or because the sampling universe is a potentially selective subset of the treatment population, so that even large samples may generate misleading signals. These problems are formalized via a “bounds” approach that turns the problem into one of partial identification. In both scenarios, small but positive perturbations leave the minimax regret decision rule unchanged. Thus, minimax regret analysis is not knife-edge-dependent on ignoring certain aspects of realistic decision problems. Indeed, it recommends entirely disregarding covariates whose effect is believed to be positive but small, as well as small enough amounts of missing data or selective attrition. All findings are finite-sample results derived by game-theoretic analysis.

3.
I use the minimax-regret criterion to study choice between two treatments when some outcomes in the study population are unobservable and the distribution of missing data is unknown. I first assume that observable features of the study population are known and derive the treatment rule that minimizes maximum regret over all possible distributions of missing data. When no treatment is dominant, this rule allocates positive fractions of persons to both treatments. I then assume that the data are a random sample of the study population and show that in some instances, treatment rules that estimate certain point-identified population means by sample averages achieve finite-sample minimax regret.

4.
Least-squares forecast averaging
This paper proposes forecast combination based on the method of Mallows Model Averaging (MMA). The method selects forecast weights by minimizing a Mallows criterion, which is an asymptotically unbiased estimate of both the in-sample mean-squared error (MSE) and the out-of-sample one-step-ahead mean-squared forecast error (MSFE). Furthermore, the MMA weights are asymptotically mean-square optimal in the absence of time-series dependence. We show how to compute MMA weights in forecasting settings and investigate the performance of the method in simple but illustrative simulation environments. We find that the MMA forecasts have low MSFE and much lower maximum regret than other feasible forecasting methods, including equal weighting, BIC selection, weighted BIC, AIC selection, weighted AIC, Bates–Granger combination, predictive least squares, and Granger–Ramanathan combination.
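The weight-selection step can be sketched compactly. Below is a minimal sketch, assuming `resids` holds the in-sample residual vectors of the M candidate models as columns and `k` their parameter counts; the names and the choice of error-variance estimate are our illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def mma_weights(resids, k, sigma2):
    """Mallows Model Averaging weights on the unit simplex.

    resids : (n, M) in-sample residuals of the M candidate models
    k      : (M,) parameter counts of the candidate models
    sigma2 : estimate of the error variance (e.g. from the largest model)
    """
    resids, k = np.asarray(resids, float), np.asarray(k, float)
    M = resids.shape[1]

    def mallows(w):
        e = resids @ w                         # residuals of the average
        return e @ e + 2.0 * sigma2 * (k @ w)  # Mallows criterion C_n(w)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(mallows, np.full(M, 1.0 / M),
                   bounds=[(0.0, 1.0)] * M, constraints=cons)
    return res.x
```

Because the criterion is quadratic in w, the simplex-constrained minimization is a small quadratic program; the general-purpose solver above is used only to keep the sketch short.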

5.
Despite their great popularity, all the conventional Divisia productivity indexes ignore undesirable outputs. The purpose of this study is to fill this gap by proposing a primal Divisia-type productivity index that is valid in the presence of undesirable outputs. The new productivity index is derived by totally differentiating the directional output distance function with respect to a time trend, and is referred to as the Divisia–Luenberger productivity index. We also empirically compare the Divisia–Luenberger productivity index with a representative of the conventional Divisia productivity indexes, the radial-output-distance-function-based productivity index of Feng and Serletis (2010), using aggregate data on 15 OECD countries over the period 1981–2000. Our empirical results show that failure to take undesirable outputs into account not only leads to misleading rankings of countries in terms of both productivity growth and technological change, but also results in wrong conclusions concerning efficiency change.
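In schematic form, the construction reads as follows; the notation is ours, and the sign convention and decomposition details are simplified relative to the paper.

```latex
% Schematic Divisia-Luenberger index: the (negative of the) total time
% derivative of the directional output distance function \vec{D}_o along
% the observed path of inputs x, good outputs y, and bad outputs b,
% for a fixed direction vector g.
\mathrm{DL}(t) \;=\; -\,\frac{d}{dt}\,\vec{D}_o\bigl(x(t),\,y(t),\,b(t);\,g\bigr)
```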

6.
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
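For reference, the two scoring rules named above can be written in a few lines. A minimal sketch for probability forecasts of a binary outcome follows; the function names are ours.

```python
import numpy as np

def quadratic_score(p, y):
    """Mean quadratic (Brier) score: lower is better."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return np.mean((p - y) ** 2)

def log_score(p, y):
    """Mean logarithmic score: higher is better."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

Encompassing tests of the kind described above ask whether a combination of two forecasts scores significantly better under the chosen rule than one of the forecasts alone.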

7.
This paper examines organizational flexibility in Korea by concentrating on the rules and procedures governing managerial and employee behaviour (safeguard and control rules, respectively), and their association with size, ownership, strategy and performance. The data reported were collected from forty-five organizations in South Korea. The results show that flexibility, in the sense of few control rules (rules on employees), is associated with innovation strategies, while lack of flexibility (many control rules) is associated with cost-reduction strategies. The results also show that in Korea, a 'fit' between this aspect of organizational flexibility and strategy had a positive impact on organizational performance. Rules on managerial behaviour (safeguard rules) were strongly related to family or individual ownership in the Korean context. The implications of 'congruence' between organizational strategy and the presence or absence of flexibility for employee behaviour are discussed.

8.
Improving hospital supply chain performance has become increasingly important as healthcare organizations strive to improve operational efficiency and reduce cost. In this study, we propose a research model, based on a relational view, delineating the factors that influence hospital supply chain performance: trust, knowledge exchange, IT integration between the hospital and its suppliers, and hospital–supplier integration. Testing the research model on data from a sample of 117 supply chain executives at U.S. hospitals shows positive direct effects: (1) from trust and from IT integration to knowledge exchange, respectively; (2) from knowledge exchange and from IT integration to hospital–supplier integration, respectively; and (3) from hospital–supplier integration to hospital supply chain performance. The results also show the following indirect effects: (1) the influences of knowledge exchange and IT integration on hospital supply chain performance are partially and fully mediated by hospital–supplier integration, respectively; and (2) the influences of trust and IT integration on hospital–supplier integration are fully and partially mediated by knowledge exchange, respectively. In addition, the results show the following moderating effects: (1) hospital system membership moderates the relationships between IT integration and knowledge exchange and between trust and knowledge exchange; (2) hospital environmental uncertainty moderates the relationship between trust and knowledge exchange; and (3) trust moderates the relationship between knowledge exchange and hospital–supplier integration. Implications of the study findings and directions for future research are discussed.

9.
Most route choice models assume that people are completely rational. Recently, regret theory has attracted researchers' attention because of its power to describe real travel behavior. This paper proposes a multiclass stochastic user equilibrium assignment model based on regret theory, in which users are differentiated by their degree of regret aversion. The route travel disutility for users of each class is defined as a linear combination of the travel time and the anticipated regret. The proposed model is formulated as a variational inequality problem and solved with the self-regulated averaging method. The numerical results show that users' regret aversion indeed influences their route choice behavior and that users with high regret aversion are more inclined to change routes as the degree of traffic congestion varies.
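The class-specific disutility just described admits a one-line sketch. The specification below, where the anticipated regret on a route is its travel-time gap to the best alternative and theta is the class's regret-aversion weight, is our illustrative reading, not necessarily the paper's exact functional form.

```python
# Route disutility = travel time + theta * anticipated regret, where the
# anticipated regret of a route is its gap to the fastest alternative.
def route_disutility(travel_times, theta):
    best = min(travel_times)
    return [t + theta * (t - best) for t in travel_times]

print(route_disutility([10.0, 12.0, 15.0], theta=0.5))  # [10.0, 13.0, 17.5]
```

A higher theta stretches the disutility differences between routes, which is why strongly regret-averse classes react more sharply to congestion in the equilibrium model.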

10.
We provide analytical formulae for the asymptotic bias (ABIAS) and mean-squared error (AMSE) of the IV estimator, and obtain approximations thereof based on an asymptotic scheme which essentially requires the expectation of the first-stage F-statistic to converge to a finite (possibly small) positive limit as the number of instruments approaches infinity. Our analytical formulae can be viewed as generalizing the bias and MSE results of [Richardson and Wu 1971. A note on the comparison of ordinary and two-stage least squares estimators. Econometrica 39, 973–982] to the case with nonnormal errors and stochastic instruments. Our approximations are shown to compare favorably with approximations due to [Morimune 1983. Approximate distributions of k-class estimators when the degree of overidentifiability is large compared with the sample size. Econometrica 51, 821–841] and [Donald and Newey 2001. Choosing the number of instruments. Econometrica 69, 1161–1191], particularly when the instruments are weak. We also construct consistent estimators for the ABIAS and AMSE, and we use these to construct a number of bias-corrected OLS and IV estimators, the properties of which are examined both analytically and via a series of Monte Carlo experiments.
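The weak-instrument bias that motivates the paper is easy to reproduce numerically. Below is a small Monte Carlo sketch of a just-identified model with a weak first stage; the design and all constants are our illustration, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, pi, n, reps = 1.0, 0.1, 200, 2000   # small pi => weak instrument
ols, iv = [], []
for _ in range(reps):
    z = rng.standard_normal(n)                       # instrument
    u, v = rng.multivariate_normal(                  # correlated errors
        [0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], n).T   # => x is endogenous
    x = pi * z + v
    y = beta * x + u
    ols.append((x @ y) / (x @ x))                    # OLS slope
    iv.append((z @ y) / (z @ x))                     # just-identified IV
print(f"OLS median bias: {np.median(ols) - beta:+.3f}, "
      f"IV median bias: {np.median(iv) - beta:+.3f}")
```

With a weak first stage the IV estimator is pulled toward OLS (medians are reported because the just-identified IV estimator has heavy tails), which is the regime in which the paper's ABIAS and AMSE approximations are meant to perform well.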

11.
Summary: A lot is accepted if the number of defective units in a sample of size n does not exceed the acceptance number c. The usefulness of the sampling plan (n, c) is described by the regret function. This regret function R(p), which depends on the proportion p of defective units in the lot, is the expectation of the avoidable costs. There always exists an optimum sampling plan which minimizes the maximum of R(p). The dependence of the maxima of R(p) on n and c is studied, and some theorems are given which are useful for calculating the minimax solution, that is, the optimum sampling plan.
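A minimax plan of this kind can be located numerically. The sketch below assumes a simple piecewise avoidable-cost model (cost c1 of rejecting a good lot with p <= p0, cost c2 of accepting a bad lot with p > p0); the cost model, constants, and grids are illustrative only.

```python
import numpy as np
from scipy.stats import binom

def max_regret(n, c, p0=0.02, c1=1.0, c2=5.0):
    """Maximum of the regret function R(p) for the sampling plan (n, c)."""
    p = np.linspace(0.0, 0.10, 101)
    accept = binom.cdf(c, n, p)              # probability of acceptance
    regret = np.where(p <= p0,
                      c1 * (1.0 - accept),   # good lot wrongly rejected
                      c2 * accept)           # bad lot wrongly accepted
    return regret.max()

# Grid search for the plan minimizing the maximum regret.
plans = [(n, c) for n in range(20, 201, 20) for c in range(0, 6)]
print(min(plans, key=lambda nc: max_regret(*nc)))
```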

12.
This study considers how changes in wealth affect insurance demand when individuals suffer disutility from regret. Anticipated regret stems from a comparison between the ex-post maximum wealth and actual wealth. We consider a situation wherein individuals maximize their expected utility incorporating anticipated regret. The wealth effect on insurance demand can be decomposed into a risk effect and a regret effect, which are determined by the properties of the utility function and the regret function. We show that insurance can be a normal good when individuals place weight on anticipated regret, even though the utility function exhibits decreasing absolute risk aversion. This result indicates that regret theory is a possible explanation for the wealth effect puzzle: empirically, insurance appears to be a normal good, although expected utility theory predicts it should be inferior.
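The objective described above is usually written in the following regret-theoretic form; the notation is ours and is meant only to fix ideas.

```latex
% Expected utility with anticipated regret: u is the utility function,
% g an increasing regret function, k >= 0 the regret weight, pi_s the
% probability of state s, and w_s^{max} the ex-post maximum wealth in s.
V \;=\; \sum_{s} \pi_s \Bigl[ u(w_s) \;-\; k\, g\bigl( u(w_s^{\max}) - u(w_s) \bigr) \Bigr]
```

The risk effect works through u and the regret effect through g, which is why the wealth response of insurance demand depends on the shape of both functions.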

13.
U. Mäder, Metrika, 1986, 33(1): 143–163
Summary: A sample inspection plan is said to be optimal in the sense of the minimax regret principle if it minimizes the difference between the expected total costs and the unavoidable costs. The results of this article can be used to calculate such sample inspection plans for quantitative quality control with one-sided tolerance limits and known or unknown variance of the test variate. As an example of practical importance, the case of a normal variate with unknown variance is considered. Formulae are given to estimate the error that arises if the assumed distribution of the test variate differs from the actual distribution.

14.
Usual inference methods for stable distributions are typically based on limit distributions, but asymptotic approximations can easily be unreliable in such cases, since standard regularity conditions may not apply or may hold only weakly. This paper proposes finite-sample tests and confidence sets for the tail thickness and asymmetry parameters (α and β) of stable distributions. The confidence sets are built by inverting exact goodness-of-fit tests for hypotheses which assign specific values to these parameters. We propose extensions of the Kolmogorov–Smirnov, Shapiro–Wilk and Filliben criteria, as well as the quantile-based statistics proposed by McCulloch (1986), in order to better capture tail behavior. The suggested criteria compare empirical goodness-of-fit or quantile-based measures with their hypothesized values. Since the distributions involved are quite complex and non-standard, the relevant hypothetical measures are approximated by simulation, and p-values are obtained using Monte Carlo (MC) test techniques. The properties of the proposed procedures are investigated by simulation. In contrast with conventional wisdom, we find reliable results with sample sizes as small as 25. The proposed methodology is applied to daily electricity price data in the US over the period 2001–2006. The results show clearly that heavy kurtosis and asymmetry are prevalent in these series.
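The Monte Carlo test idea at the heart of the procedure fits in a few lines. In this sketch, `simulate_stable` and `gof_stat` are placeholders for a sampler under the hypothesized (α, β) and for one of the goodness-of-fit criteria above; the names and interface are ours.

```python
import numpy as np

def mc_pvalue(observed_stat, simulate_stable, gof_stat, n, reps=999, seed=0):
    """Monte Carlo p-value: rank the observed statistic among statistics
    simulated under the hypothesized stable law."""
    rng = np.random.default_rng(seed)
    sims = np.array([gof_stat(simulate_stable(n, rng)) for _ in range(reps)])
    # Proportion of simulated statistics at least as extreme; the add-one
    # rule makes the test exact when alpha * (reps + 1) is an integer.
    return (1 + np.sum(sims >= observed_stat)) / (reps + 1)
```

Inverting the test, i.e. collecting all (α, β) pairs whose p-value exceeds the chosen level, yields the finite-sample confidence sets described above.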

15.
Sample autocorrelation coefficients are widely used to test the randomness of a time series. Despite its unsatisfactory performance, the asymptotic normal distribution is often used to approximate the distribution of the sample autocorrelation coefficients, mainly because of the lack of an efficient approach to obtaining their exact distribution. In this paper, we provide an efficient algorithm for evaluating the exact distribution of the sample autocorrelation coefficients. Under the multivariate elliptical distribution assumption, the exact distribution, as well as the exact moments and joint moments, of the sample autocorrelation coefficients are presented. In addition, the exact mean and variance of various autocorrelation-based tests are provided. The actual size properties of the Box–Pierce and Ljung–Box tests are investigated and shown to be poor when the number of lags is moderately large relative to the sample size. Using the exact mean and variance of the Box–Pierce test statistic, we propose an adjusted Box–Pierce test with far better size properties than the traditional Box–Pierce and Ljung–Box tests.
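The moment-matching adjustment can be sketched as follows, assuming the exact null mean `mu` and variance `var` of the Box–Pierce statistic at m lags are supplied (the paper derives them; here they are simply inputs), so that Q is rescaled to match the mean and variance of its chi-squared(m) reference distribution. The function name and interface are ours.

```python
import numpy as np
from scipy.stats import chi2

def adjusted_box_pierce(x, m, mu, var):
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    denom = xc @ xc
    acf = np.array([xc[k:] @ xc[:-k] / denom for k in range(1, m + 1)])
    q = n * np.sum(acf ** 2)                        # Box-Pierce statistic
    # Match the exact null moments (mu, var) to those of chi2(m): (m, 2m).
    q_adj = (q - mu) / np.sqrt(var) * np.sqrt(2.0 * m) + m
    return q_adj, chi2.sf(q_adj, df=m)              # statistic, p-value
```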

16.
This work proves the existence of an equilibrium for an infinite-horizon economy where trade takes place sequentially over time. There are two types of agents: the first correctly anticipates all future contingent endogenous variables with complete information, as in Radner [Radner, R. (1972). Existence of equilibrium of plans, prices and price expectations in a sequence of markets. Econometrica, 289–303]; the second has exogenous expectations about the future environment, as in Grandmont [Grandmont, J. M. (1977). Temporary general equilibrium theory. Econometrica, 535–572], and information based on current and past aggregate variables, including those which are private knowledge. Agents with exogenous expectations may have inconsistent optimal plans but have predictive beliefs in the sense of Blackwell and Dubins [Blackwell, D., Dubins, L. (1962). Merging of opinions with increasing information. The Annals of Mathematical Statistics, 882–886], with probability transition rules based on all observed variables. We provide examples of this framework applied to models of differential information and to environments exhibiting market selection and convergence of an equilibrium. The existence result can be used to conclude that, by adding a continuity assumption on the probability transition rules, we obtain the existence of an equilibrium for some models of differential information and incomplete markets.

17.
This paper introduces a rank-based test for the instrumental variables regression model that dominates the Anderson–Rubin test in terms of finite sample size and asymptotic power in certain circumstances. The test has correct size for any distribution of the errors with weak or strong instruments. The test has noticeably higher power than the Anderson–Rubin test when the error distribution has thick tails and comparable power otherwise. Like the Anderson–Rubin test, the rank tests considered here perform best, relative to other available tests, in exactly identified models.

18.
19.
Using a long sample of commodity spot price indexes over the period 1947–2010, we examine the out-of-sample predictability of commodity prices by means of macroeconomic and financial variables. Commodity currencies are found to have some predictive power at short (monthly and quarterly) forecast horizons, while growth in industrial production and the investment–capital ratio have some predictive power at longer (yearly) horizons. Commodity price predictability is strongest when based on multivariate approaches that account for parameter estimation error. Commodity price predictability varies substantially across economic states, being strongest during economic recessions.
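Predictability claims of this kind are typically judged by the out-of-sample R-squared against a benchmark such as the historical mean. A minimal sketch, with names of our choosing:

```python
import numpy as np

def oos_r2(y, model_forecasts, benchmark_forecasts):
    """Out-of-sample R^2: 1 - MSE(model) / MSE(benchmark); positive values
    indicate the model beats the benchmark (e.g. the historical mean)."""
    y = np.asarray(y, float)
    e_model = y - np.asarray(model_forecasts, float)
    e_bench = y - np.asarray(benchmark_forecasts, float)
    return 1.0 - (e_model @ e_model) / (e_bench @ e_bench)
```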

20.
Quality function deployment (QFD) is a proven tool for process and product development: it translates the voice of the customer (VoC) into engineering characteristics (ECs) and prioritizes the ECs in terms of the customer's requirements. Traditionally, QFD rates the design requirements (DRs) with respect to customer needs and aggregates the ratings to obtain relative importance scores of the DRs. An increasing number of studies stress the need to incorporate additional factors, such as cost and environmental impact, when calculating the relative importance of DRs. However, there is a paucity of methodologies for deriving the relative importance of DRs when several additional factors are considered. Ramanathan and Yunfeng [43] proved that the relative importance values computed by data envelopment analysis (DEA) coincide with traditional QFD calculations when only the ratings of DRs with respect to customer needs are considered, and also when a single additional factor, namely cost, is considered. Kamvysi et al. [27] discussed the combination of QFD with analytic hierarchy process–analytic network process (AHP–ANP) and DEAHP–DEANP methodologies to prioritize selection criteria in a service context. The objective of this paper is to propose a QFD–imprecise enhanced Russell graph measure (QFD–IERGM) for incorporating criteria such as cost of services and ease of implementation into QFD. The proposed model is applied in an Iranian hospital.
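For contrast with the DEA-based variants discussed above, the traditional QFD aggregation is a single weighted matrix product. The numbers below are an invented toy example on the common 1-3-9 relationship scale.

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])     # customer-need importance weights
ratings = np.array([[9, 3, 0],          # relationship matrix: rows are
                    [1, 9, 3],          # customer needs, columns are DRs,
                    [0, 3, 9]])         # entries on the 1-3-9 scale
scores = weights @ ratings              # aggregate rating of each DR
print(scores / scores.sum())            # relative importance of the DRs
```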
