Similar Literature
20 similar documents found
1.
This paper applies the minimax regret criterion to choice between two treatments conditional on observation of a finite sample. The analysis is based on exact small sample regret and does not use asymptotic approximations or finite-sample bounds. Core results are: (i) Minimax regret treatment rules are well approximated by empirical success rules in many cases, but differ from them significantly, both in how the rules look and in the maximal regret incurred, for small sample sizes and certain sample designs. (ii) Absent prior cross-covariate restrictions on treatment outcomes, they prescribe inference that is completely separate across covariates, leading to no-data rules as the support of a covariate grows. I conclude by offering an assessment of these results.
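The exact-small-sample approach can be illustrated with a toy computation. The sketch below is ours, not the paper's: it computes the exact expected regret of the empirical success rule for two binary-outcome treatments observed on n subjects each, then maximizes over a grid of outcome probabilities. The grid resolution, n = 5, and coin-flip tie-breaking are assumptions made here.

```python
from math import comb

def empirical_success_regret(n, p1, p2):
    # Exact expected regret of the empirical-success rule for two
    # binary-outcome treatments, each observed on n subjects: assign
    # the treatment with the higher success count, flip a fair coin
    # on ties.
    gap = abs(p1 - p2)
    if gap == 0.0:
        return 0.0
    best = max(p1, p2)
    regret = 0.0
    for s1 in range(n + 1):
        for s2 in range(n + 1):
            prob = (comb(n, s1) * p1**s1 * (1 - p1)**(n - s1)
                    * comb(n, s2) * p2**s2 * (1 - p2)**(n - s2))
            if s1 == s2:
                p_wrong = 0.5          # coin flip: wrong half the time
            elif (p1 if s1 > s2 else p2) < best:
                p_wrong = 1.0          # picked the inferior treatment
            else:
                p_wrong = 0.0
            regret += prob * p_wrong
    return regret * gap

# maximal regret of the rule over a grid of outcome distributions
grid = [i / 20 for i in range(21)]
max_regret = max(empirical_success_regret(5, p1, p2)
                 for p1 in grid for p2 in grid)
```

Refining the grid tightens the approximation to maximal regret; the paper instead characterizes minimax regret rules exactly.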

2.
This paper continues the investigation of minimax regret treatment choice initiated by Manski (2004). Consider a decision maker who must assign treatment to future subjects after observing outcomes experienced in a sample. A certain scoring rule is known to achieve minimax regret in simple versions of this decision problem. I investigate its sensitivity to perturbations of the decision environment in realistic directions, as follows. (i) Treatment outcomes may be influenced by a covariate whose effect on outcome distributions is bounded (in one of numerous probability metrics). This is interesting because the introduction of a covariate with unrestricted effects leads to a pathological result. (ii) The experiment may have limited validity because of selective noncompliance or because the sampling universe is a potentially selective subset of the treatment population. Thus, even large samples may generate misleading signals. These problems are formalized via a “bounds” approach that turns the problem into one of partial identification. In both scenarios, small but positive perturbations leave the minimax regret decision rule unchanged. Thus, minimax regret analysis is not knife-edge dependent on ignoring certain aspects of realistic decision problems. Indeed, it recommends entirely disregarding covariates whose effect is believed to be positive but small, as well as small enough amounts of missing data or selective attrition. All findings are finite sample results derived by game-theoretic analysis.

3.
Using Monte Carlo simulations we study the small sample performance of the traditional TSLS, the LIML and four new jackknife IV estimators when the instruments are weak. We find that the new estimators and LIML have a smaller bias but a larger variance than the TSLS. In terms of root mean square error, neither LIML nor the new estimators perform uniformly better than the TSLS. The main conclusion from the simulations and an empirical application to labour supply functions is that, in a situation with many weak instruments, there still does not exist an easy way to obtain reliable estimates in small samples. Better instruments and/or larger samples are the only way to increase the precision of the estimates. Since the properties of the estimators are specific to each data-generating process and sample size, it would be wise in empirical work to complement the estimates with a Monte Carlo study of the estimators' properties for the relevant sample size and data-generating process believed to be applicable. Copyright © 1999 John Wiley & Sons, Ltd.
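The weak-instrument problem the abstract describes is easy to reproduce by simulation. A minimal sketch for TSLS only (LIML and the jackknife IV estimators are not implemented here); the design parameters pi, rho, and the sample size are our choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def tsls_median(n, pi, beta=1.0, rho=0.8, reps=2000):
    # Median TSLS estimate in y = beta*x + u, x = pi*z + v with
    # corr(u, v) = rho and a single instrument z; small pi means a
    # weak instrument. The median is reported because the
    # just-identified IV estimator has no finite moments.
    est = np.empty(reps)
    for r in range(reps):
        z = rng.standard_normal(n)
        u = rng.standard_normal(n)
        v = rho * u + np.sqrt(1 - rho**2) * rng.standard_normal(n)
        x = pi * z + v
        y = beta * x + u
        est[r] = (z @ y) / (z @ x)   # just-identified IV = TSLS here
    return float(np.median(est))

strong = tsls_median(200, pi=1.0)    # strong instrument
weak = tsls_median(200, pi=0.02)     # weak instrument
```

With pi = 0.02 the instrument is nearly irrelevant and the median estimate is pulled away from the true beta = 1 toward the endogeneity-driven value, illustrating why no estimator rescues a design with very weak instruments.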

4.
We propose a unit root test for panels with cross-sectional dependency. We allow a general dependency structure among the innovations that generate the data for each of the cross-sectional units. Each unit may have a different sample size, so unbalanced panels are also permitted in our framework. Yet the test is asymptotically normal and does not require any tabulation of critical values. Our test is based on nonlinear IV estimation of the usual augmented Dickey–Fuller type regression for each cross-sectional unit, using as instruments nonlinear transformations of the lagged levels. The actual test statistic is simply defined as a standardized sum of individual IV t-ratios. We show in the paper that such a standardized sum of individual IV t-ratios has a limit normal distribution as long as the panels have large individual time series observations and are asymptotically balanced in a very weak sense. The number of cross-sectional units may be arbitrarily small or large. In particular, the usual sequential asymptotics, upon which most of the available asymptotic theories for panel unit root models heavily rely, are not required. Finite sample performance of our test is examined via a set of simulations and compared with that of other commonly used panel unit root tests. Our test generally performs better than the existing tests in terms of both finite-sample size and power. We apply our nonlinear IV method to test the purchasing power parity hypothesis in panels.
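A minimal sketch of how such a statistic is assembled, in the spirit of nonlinear IV unit root testing. This is our simplification: the particular instrument-generating function F(y) = y·exp(−c|y|/scale), the constant c = 3, and the simulated designs are assumptions made here, not taken from the paper.

```python
import numpy as np

def iv_t_ratio(y, c=3.0):
    # IV t-ratio from the Dickey-Fuller regression
    # dy_t = rho * y_{t-1} + e_t, instrumenting y_{t-1} with the
    # integrable transformation F(y) = y * exp(-c * |y| / scale).
    dy, ylag = np.diff(y), y[:-1]
    w = ylag * np.exp(-c * np.abs(ylag) / np.std(ylag))
    rho = (w @ dy) / (w @ ylag)
    s2 = ((dy - rho * ylag) ** 2).sum() / (len(dy) - 1)
    return (w @ dy) / np.sqrt(s2 * (w @ w))

def panel_stat(panel):
    # Standardized sum of individual IV t-ratios; units may have
    # different sample sizes, so unbalanced panels are fine.
    t = np.array([iv_t_ratio(y) for y in panel])
    return float(t.sum() / np.sqrt(len(t)))

rng = np.random.default_rng(0)

def simulate(phi, T):
    e = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + e[t]
    return y

stationary = [simulate(0.5, 150 + 10 * i) for i in range(10)]  # unbalanced
walks = [np.cumsum(rng.standard_normal(150)) for _ in range(10)]
stat_stationary = panel_stat(stationary)
stat_walks = panel_stat(walks)
```

Under stationarity each unit's t-ratio is strongly negative and the standardized sum rejects the unit root null; under random walks the statistic stays moderate.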

5.
The generalized maximum likelihood estimator (GMLE) is derived and some of its variants are compared with the partial Abdushukurov–Cheng–Lin (PACL) and Kaplan–Meier (KM) estimators under the proportional hazards model with partially informative censoring. A comparison of small sample properties is conducted based on a simulation study. The results show that the GMLEs perform competitively with the PACL estimator. Acknowledgements. The authors are very much thankful to the referee for perceptive and illuminating comments; substantial credit goes to the referee for the overall improvement of the paper.

6.
Explicit asymptotic bias formulae are given for dynamic panel regression estimators as the cross section sample size N→∞. The results extend earlier work by Nickell [1981. Biases in dynamic models with fixed effects. Econometrica 49, 1417–1426] and later authors in several directions that are relevant for practical work, including models with unit roots, deterministic trends, predetermined and exogenous regressors, and errors that may be cross sectionally dependent. The asymptotic bias is found to be so large when incidental linear trends are fitted and the time series sample size is small that it changes the sign of the autoregressive coefficient. Another finding of interest is that, when there is cross section error dependence, the probability limit of the dynamic panel regression estimator is a random variable rather than a constant, which helps to explain the substantial variability observed in dynamic panel estimates when there is cross section dependence, even in situations where N is very large. Some proposals for bias correction are suggested and finite sample performance is analyzed in simulations.
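The Nickell bias this paper generalizes is easy to reproduce by simulation. A minimal sketch, assuming a simple stationary AR(1) panel with fixed effects; the parameter values and replication counts are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def fe_estimate(N, T, rho=0.5, reps=200):
    # Mean within-group (fixed effects) estimate of rho in the panel
    # AR(1) model y_it = a_i + rho * y_i,t-1 + e_it.
    est = np.empty(reps)
    for r in range(reps):
        num = den = 0.0
        for i in range(N):
            a = rng.standard_normal()          # fixed effect
            e = rng.standard_normal(T)
            y = np.zeros(T + 1)
            for t in range(T):
                y[t + 1] = a + rho * y[t] + e[t]
            ylag = y[:-1] - y[:-1].mean()      # within transformation
            ynow = y[1:] - y[1:].mean()
            num += ylag @ ynow
            den += ylag @ ylag
        est[r] = num / den
    return float(est.mean())

short_t = fe_estimate(100, T=5)               # severe downward bias
long_t = fe_estimate(100, T=50, reps=50)      # bias shrinks roughly like 1/T
```

Even with N = 100 cross-sectional units, the within estimator is badly biased downward when T is small, which is the phenomenon the paper's formulae quantify and extend.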

7.
This paper considers a Gaussian first-order autoregressive process with unknown intercept where the initial value of the variable is a known constant. Monte Carlo simulations are used to investigate the sampling distribution of the t statistic for the autoregressive parameter when its value is in the neighborhood of unity. A small sigma asymptotic result is exploited in the construction of exact non-similar tests. The powers of non-similar tests of the random walk and other hypotheses are estimated for sample sizes typical in economic applications.
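The Monte Carlo design the abstract describes can be sketched directly: simulate the AR(1) process with a known zero initial value, fit by OLS with intercept, and collect the t statistic for the unit root hypothesis. The sample size, parameter values, and replication count below are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def t_stat(T, alpha):
    # t statistic for H0: alpha = 1 in y_t = mu + alpha*y_{t-1} + e_t,
    # estimated by OLS with intercept; y_0 = 0 is a known constant
    # and the true mu is 0.
    e = rng.standard_normal(T)
    y = np.zeros(T + 1)
    for t in range(T):
        y[t + 1] = alpha * y[t] + e[t]
    X = np.column_stack([np.ones(T), y[:-1]])
    b, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ b
    s2 = resid @ resid / (T - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return (b[1] - 1.0) / np.sqrt(cov[1, 1])

null_ts = [t_stat(50, alpha=1.0) for _ in range(1000)]   # unit root
alt_ts = [t_stat(50, alpha=0.8) for _ in range(1000)]    # stationary
```

Under the random walk null the t statistic is not centered at zero (its distribution is the nonstandard Dickey-Fuller one), which is exactly why simulation or special tabulation is needed near unity.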

8.
M. J. Ahsan & S. U. Khan, Metrika 29(1): 71–78 (1982)
The problem of allocating sample sizes to the strata in multivariate stratified surveys, where an overhead cost is associated with each stratum in addition to the cost of enumerating the selected individuals, is formulated as a non-linear programming problem. The variances of the posterior distributions of the means of the various characters are constrained and the total cost is minimized. The main problem is broken into subproblems, for each of which the objective function turns out to be convex. When the number of subproblems is large, an approach is indicated for obtaining an approximate solution by solving only a small number of subproblems.
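For a single character, with the overhead costs ignored, the cost-minimizing allocation subject to a variance ceiling has a closed form. The sketch below is a classical (non-Bayesian) simplification of the paper's problem, with illustrative weights, stratum standard deviations, and per-unit costs chosen by us:

```python
import math

def min_cost_allocation(W, S, c, V):
    # Minimize total cost sum(c_h * n_h) subject to the variance
    # ceiling sum(W_h^2 * S_h^2 / n_h) <= V. The Lagrangian first-order
    # conditions give n_h proportional to W_h * S_h / sqrt(c_h),
    # scaled so the constraint binds exactly.
    K = sum(w * s * math.sqrt(ch) for w, s, ch in zip(W, S, c)) / V
    return [w * s / math.sqrt(ch) * K for w, s, ch in zip(W, S, c)]

W, S, c = [0.5, 0.3, 0.2], [4.0, 2.0, 1.0], [1.0, 2.0, 4.0]
n = min_cost_allocation(W, S, c, V=0.05)
achieved = sum(w * w * s * s / nh for w, s, nh in zip(W, S, n))
```

The multivariate problem with overheads has one such convex subproblem per character, which is the structure the paper exploits.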

9.
This paper investigates small-sample biases in synthetic cohort models (repeated cross-sectional data grouped at the cohort and year level) in the context of a female labor supply model. I use the Current Population Survey to compare estimates when group sizes are extremely large to those that arise from randomly drawing subsamples of observations from the large groups. I augment this approach with Monte Carlo analysis so as to precisely quantify biases and coverage rates. In this particular application, thousands of observations per group are required before small-sample issues can be ignored in estimation, and sampling error leads to large downward biases in the estimated income elasticity. Copyright © 2007 John Wiley & Sons, Ltd.

10.
The paper considers n-dimensional VAR models for variables exhibiting cointegration and common cyclical features. Two specific reduced-rank vector error correction models are discussed. In one, named the “strong form” and denoted SF, the collection of all coefficient matrices of the VECM has rank less than n; in the other, named the “weak form” and denoted WF, the collection of all coefficient matrices except the matrix of coefficients on the error correction terms has rank less than n. The paper explores the theoretical connections between these two forms, suggests asymptotic tests for each form, and examines the small sample properties of these tests by Monte Carlo simulations.

11.
ML estimation of regression parameters with incomplete covariate information usually requires a distributional assumption regarding the covariates concerned, which is a potential source of misspecification. Semiparametric procedures avoid such assumptions at the expense of efficiency. In this paper a simulation study with small sample sizes is carried out to assess the performance of the ML estimator under misspecification and to compare it with the semiparametric procedures when the former is based on a correct assumption. The results show that there is only a small gain from correct parametric assumptions, which does not justify the possibly large bias when the assumptions are not met. Additionally, a simple modification of the complete-case estimator appears to be nearly semiparametric efficient.

12.
This paper proposes new ℓ1-penalized quantile regression estimators for panel data, which explicitly allow for individual heterogeneity associated with covariates. Existing fixed-effects estimators can potentially suffer from three limitations which are overcome by the proposed approach: (i) incidental parameters bias in nonlinear models with large N and small T; (ii) lack of efficiency; and (iii) inability to estimate the effects of time-invariant regressors. We conduct Monte Carlo simulations to assess the small-sample performance of the new estimators and provide comparisons of new and existing penalized estimators in terms of quadratic loss. We apply the technique to an empirical example of the estimation of consumer preferences for nutrients from a demand model using a large transaction-level dataset of household food purchases. We show that preferences for nutrients vary across the conditional distribution of expenditure and across genders, and we emphasize the importance of fully capturing consumer heterogeneity in demand modeling. Copyright © 2016 John Wiley & Sons, Ltd.

13.
Ten empirical models of travel behavior are used to measure the variability of structural equation model goodness-of-fit as a function of sample size, multivariate kurtosis, and estimation technique. The estimation techniques are maximum likelihood, asymptotic distribution free, bootstrapping, and the Mplus approach. The results highlight the divergence of these techniques when sample sizes are small and/or multivariate kurtosis is high. Recommendations include using multiple estimation techniques and, when sample sizes are large, sampling the data and reestimating the models, both to test the robustness of the specifications and to quantify, to some extent, the large sample bias inherent in the χ2 test statistic.

14.
This paper develops a very simple test for the null hypothesis of no cointegration in panel data. The test is general enough to allow for heteroskedastic and serially correlated errors, unit-specific time trends, cross-sectional dependence and unknown structural breaks in both the intercept and slope of the cointegrated regression, which may be located at different dates for different units. The limiting distribution of the test is derived, and is found to be normal and free of nuisance parameters under the null. A small simulation study is also conducted to investigate the small-sample properties of the test. In our empirical application, we provide new evidence concerning the purchasing power parity hypothesis.

15.
A Monte Carlo study of the small sample properties of various estimators of the linear regression model with first-order autocorrelated errors is presented. When the independent variables are trended, estimators using T transformed observations (Prais–Winsten) are much more efficient than those using T−1 (Cochrane–Orcutt). The best of the feasible estimators is iterated Prais–Winsten using a sum-of-squared-errors-minimizing estimate of the autocorrelation coefficient ρ. None of the feasible estimators performs well in hypothesis testing; all seriously underestimate standard errors, making estimated coefficients appear much more significant than they actually are.
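The two transformations differ only in how they treat the first observation. A sketch of the quasi-differencing step, assuming the autocorrelation coefficient ρ is given (in practice it is estimated and the step is iterated):

```python
import numpy as np

def quasi_difference(y, X, rho, keep_first=True):
    # GLS transformation for AR(1) errors: y*_t = y_t - rho*y_{t-1}.
    # keep_first=True (Prais-Winsten) rescales and keeps observation 1,
    # giving T transformed observations; keep_first=False
    # (Cochrane-Orcutt) drops it, giving T-1.
    ystar = y[1:] - rho * y[:-1]
    Xstar = X[1:] - rho * X[:-1]
    if keep_first:
        w = np.sqrt(1.0 - rho**2)      # weight restoring observation 1
        ystar = np.concatenate(([w * y[0]], ystar))
        Xstar = np.vstack([w * X[:1], Xstar])
    return ystar, Xstar

T = 30
t = np.arange(T, dtype=float)
X = np.column_stack([np.ones(T), t])   # trended regressor
y = 2.0 + 0.5 * t                      # noiseless series, for shape checks
pw_y, pw_X = quasi_difference(y, X, rho=0.7, keep_first=True)
co_y, co_X = quasi_difference(y, X, rho=0.7, keep_first=False)
```

With trended regressors the dropped first observation carries a disproportionate share of the regressor variation, which is why Prais-Winsten dominates Cochrane-Orcutt in the study's designs.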

16.
The problem considered here is that of finding suitable conditions for dynamic economic systems that exclude the existence of observationally equivalent structures. Here observational equivalence refers to equality of distributions, or of first and second moments, of a small finite sample from the observable process. It is shown that, under these conditions, we may act as if the lagged endogenous variables are nonrandom exogenous variables when global identifiability is investigated.

17.
The performance on small and medium-size samples of several techniques for solving the classification problem in discriminant analysis is investigated. The techniques considered are two widely used parametric statistical techniques (Fisher's linear discriminant function and Smith's quadratic function) and a class of recently proposed nonparametric estimation techniques based on mathematical programming (linear and mixed-integer programming). A simulation study is performed, analyzing the relative performance of the above techniques in the two-group case for various small sample sizes, moderate group overlap, and six different data conditions. Training samples as well as validation samples are used to assess the classificatory performance of the techniques. The degree of group overlap and the sample sizes selected for analysis in this paper are of interest in practice because they closely reflect the conditions of many real data sets. The results of the experiment show that Smith's nonlinear quadratic function tends to be superior on the training and validation samples when the variance–covariance matrices across groups are heterogeneous, while the mixed-integer technique performs best on the training samples when the variance–covariance matrices are equal, and on validation samples with equal variances and discrete uniform independent variables. The mixed-integer technique and the quadratic discriminant function are also found to be more sensitive than the other techniques to sample size, giving disproportionately inaccurate results on small samples.
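Smith's quadratic rule scores each group by its own estimated Gaussian log-density. A minimal sketch with simulated heterogeneous-covariance groups; the group means, covariances, and sample sizes below are our choices, not the paper's experimental design:

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_rule(groups, x):
    # Smith's quadratic discriminant: score each group by its own
    # estimated Gaussian log-density (own mean and covariance) and
    # assign x to the highest-scoring group.
    best, best_score = 0, -np.inf
    for g, Xg in enumerate(groups):
        m = Xg.mean(axis=0)
        S = np.cov(Xg, rowvar=False)
        d = x - m
        score = -0.5 * (np.log(np.linalg.det(S)) + d @ np.linalg.solve(S, d))
        if score > best_score:
            best, best_score = g, score
    return best

# two small training samples with heterogeneous covariances
cov0 = [[1.0, 0.0], [0.0, 1.0]]
cov1 = [[2.0, 0.5], [0.5, 2.0]]
train = [rng.multivariate_normal([0, 0], cov0, size=25),
         rng.multivariate_normal([3, 3], cov1, size=25)]
test0 = rng.multivariate_normal([0, 0], cov0, size=100)
test1 = rng.multivariate_normal([3, 3], cov1, size=100)
acc = (sum(quadratic_rule(train, x) == 0 for x in test0)
       + sum(quadratic_rule(train, x) == 1 for x in test1)) / 200
```

The rule's reliance on per-group covariance estimates is also what makes it fragile in small samples, consistent with the sensitivity the study reports.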

18.
This paper examines the small-sample performance of several information-based criteria that can be employed to facilitate data-dependent endogeneity correction in the estimation of cointegrated panel regressions. The Monte Carlo evidence suggests that the criteria generally perform well but that there are differences of practical importance. In particular, the evidence suggests that, although the estimators of the cointegration vectors generally perform well, the criterion with the best small-sample performance also leads to the best performing estimator.

19.
The technical (plant) and legal (company) units normally used in official statistics do not take into consideration the phenomenon of business groups, i.e. sets of companies controlled by the same entrepreneur. The main aims of this paper are to assess the presence of such groups in the Italian small-firm manufacturing sector and to examine the causes of their formation. Two data sets are used: the first is a representative sample of Italian manufacturing firms, while the second is a small sample of groups localized in the Region of the Marches. They show that groups are widely present among small and medium-sized enterprises (SMEs). Starting from the premise that the group is the result of the expansion of activities controlled by the same entrepreneur, this paper reports a first attempt to discriminate among three alternative propositions regarding the causes of such growth and the reasons for the adoption of the group form: (1) the firm's growth policy; (2) entrepreneurial dynamics; and (3) the capital accumulation process on the part of the entrepreneur or his/her family. The empirical analysis on the whole favours the first hypothesis.

20.
Small sample properties of asymptotic and bootstrap prediction regions for VAR models are evaluated and compared. Monte Carlo simulations reveal that the bootstrap prediction region based on the percentile-t method outperforms its asymptotic and other bootstrap alternatives in small samples. It provides the most accurate assessment of future uncertainty under both normal and non-normal innovations. The use of an asymptotic prediction region may result in serious underestimation of future uncertainty when the sample size is small. When the model is near non-stationary, the bootstrap region based on the percentile-t method is recommended, although extreme care should be taken when it is used for medium- to long-term forecasting.
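A simplified sketch of a percentile-t bootstrap prediction interval for the univariate AR(1) case (the paper treats VAR models; the residual resampling scheme, the studentization by the residual scale, and all parameter choices below are our simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_fit(y):
    # OLS fit of a zero-mean AR(1); residuals returned centered
    ylag, ynow = y[:-1], y[1:]
    phi = (ylag @ ynow) / (ylag @ ylag)
    resid = ynow - phi * ylag
    return phi, resid - resid.mean()

def percentile_t_interval(y, level=0.90, B=499):
    # Percentile-t bootstrap interval for the one-step-ahead forecast:
    # the forecast error is studentized by the residual scale and the
    # studentized statistic is bootstrapped from resampled paths.
    phi, resid = ar1_fit(y)
    s = resid.std(ddof=1)
    T = len(y)
    tstats = np.empty(B)
    for b in range(B):
        e = rng.choice(resid, size=T + 1)
        yb = np.empty(T)
        yb[0] = y[0]
        for t in range(1, T):
            yb[t] = phi * yb[t - 1] + e[t]
        phib, residb = ar1_fit(yb)
        # studentized forecast error, conditioning on the observed y_T
        future = phi * y[-1] + e[T]
        tstats[b] = (future - phib * y[-1]) / residb.std(ddof=1)
    q_lo, q_hi = np.quantile(tstats, [(1 - level) / 2, (1 + level) / 2])
    forecast = phi * y[-1]
    return forecast + q_lo * s, forecast + q_hi * s

e = rng.standard_normal(120)
y = np.empty(120)
y[0] = e[0]
for t in range(1, 120):
    y[t] = 0.6 * y[t - 1] + e[t]
low, high = percentile_t_interval(y)
```

Because the studentized statistic's bootstrap distribution absorbs both innovation noise and estimation error in φ, the interval widens appropriately in small samples, which is the property driving the percentile-t method's advantage in the paper's simulations.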
