Similar articles
20 similar articles retrieved (search time: 62 ms)
1.
De Vos (1991) claims to have discovered a new example from agricultural field experimentation which shows that a simple robust spatial model may lead to inference and systematic experimental design that far outperform inference from randomized experiments. In this reply it is shown that: (1) the example is not new; (2) the efficiency gains are exaggerated because the comparison uses an inefficient randomization method; (3) the paper is over-optimistic about the robustness of model-based methods and casts unjustified doubt on the validity of randomization methods; and (4) the choice between randomization methods and model-based methods depends on the relative importance attached to efficiency and validity.

2.
Randomization in the Design of Experiments
A general review is given of the role of randomization in experimental design. Three objectives are distinguished, the avoidance of bias, the establishment of a secure base for the estimation of error in traditional designs, and the provision of formally exact tests of significance and confidence limits. The approximate randomization theory associated with analysis of covariance is outlined and conditionality considerations are used to explain the limited role of randomization in experiments with very small numbers of experimental units. The relation between the so-called design-based and model-based analyses is discussed. Corresponding results in sampling theory are mentioned briefly.
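The "formally exact tests of significance" mentioned here can be illustrated with a small Fisher randomization test. The outcome data below are invented for illustration; the logic is just to enumerate every equally likely treatment assignment and rank the observed difference in means among them.

```python
import itertools

import numpy as np

# Invented data: outcomes for 3 treated and 3 control units
y = np.array([7.1, 6.4, 6.8, 5.2, 5.9, 5.5])
treated = np.array([1, 1, 1, 0, 0, 0], dtype=bool)

def diff_means(assign):
    return y[assign].mean() - y[~assign].mean()

obs = diff_means(treated)

# Enumerate all C(6,3) = 20 equally likely treatment assignments;
# under the sharp null, each yields an equally likely test statistic
count = total = 0
for idx in itertools.combinations(range(6), 3):
    a = np.zeros(6, dtype=bool)
    a[list(idx)] = True
    total += 1
    if diff_means(a) >= obs - 1e-12:  # tolerance for float ties
        count += 1

p_value = count / total  # exact one-sided randomization p-value
```

With only 20 possible assignments, the smallest attainable p-value is 1/20 = 0.05, which is the conditionality point the review makes about experiments with very few units.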

3.
This paper deals with the estimation of the mean of a spatial population. Under a design‐based approach to inference, an estimator assisted by a penalized spline regression model is proposed and studied. Proof that the estimator is design‐consistent and has a normal limiting distribution is provided. A simulation study is carried out to investigate the performance of the new estimator and its variance estimator, in terms of relative bias, efficiency, and confidence interval coverage rate. The results show that gains in efficiency over standard estimators in classical sampling theory may be impressive.

4.
"A flexible methodology for explaining interregional migration in terms of relevant socioeconomic variables is set forth in this paper. Concerned with setting observation against theory, it makes use of (1) the nested logit model as a theoretical substratum, and (2) the maximum quasi-likelihood method as a method for parameter estimation and statistical inference. Application to interprovincial migration data over a 22-year period (1961-1962 to 1982-1983) shows that, in Canada, migration does not appear to serve well as an equalizer of economic opportunities."

5.
M. Stone, Metrika, 1973, 20(1): 170-176
Summary Many experimental situations with controllable, independent design variables have an associated null hypothesis H0 of zero dependence of observations on the design variables. The necessity of randomization of the design variables for asymptotic discriminability between H0 and its complement is considered in terms of likelihood ratios. Applications are made to stochastic processes and comparative experiments.

6.
We describe exact inference based on group-invariance assumptions that specify various forms of symmetry in the distribution of a disturbance vector in a general nonlinear model. It is shown that such mild assumptions can be equivalently formulated in terms of exact confidence sets for the parameters of the functional form. When applied to the linear model, this exact inference provides a unified approach to a variety of parametric and distribution-free tests. In particular, we consider exact instrumental variable inference, based on symmetry assumptions. The unboundedness of exact confidence sets is related to the power to reject a hypothesis of underidentification. In a multivariate instrumental variables context, generalizations of Anderson–Rubin confidence sets are considered.

7.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization in which the characteristic polynomial of the AR model is factorized in the complex domain, and where inference questions of interest and their solution are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots, and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is being carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.
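The latent root factorization mentioned above can be sketched numerically: the reciprocal roots of the AR characteristic polynomial give the modulus (persistence) and angle (implied cycle length) of each component. The AR(2) coefficients below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Invented AR(2): y_t = 1.5 y_{t-1} - 0.75 y_{t-2} + e_t
phi = np.array([1.5, -0.75])

# The reciprocal roots of the characteristic polynomial 1 - phi_1 z - phi_2 z^2
# solve r^2 - phi_1 r - phi_2 = 0
recip_roots = np.roots(np.concatenate(([1.0], -phi)))

r = recip_roots[0]
modulus = np.abs(r)                       # < 1: stationary, damped component
period = 2 * np.pi / np.abs(np.angle(r))  # implied cycle length in time steps
```

Here the complex-conjugate pair has modulus about 0.87 (stationary) and implies a cycle of 12 time steps; a modulus of exactly 1 would signal a sustained periodic component, as the abstract notes.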

8.
The paper takes up inference in the stochastic frontier model with gamma distributed inefficiency terms, without restricting the gamma distribution to known integer values of its shape parameter (the Erlang form). The paper shows that Gibbs sampling with data augmentation can be used in a computationally efficient way to explore the posterior distribution of the model and conduct inference regarding parameters as well as functions of interest related to technical inefficiency.

9.
Early survey statisticians faced a puzzling choice between randomized sampling and purposive selection but, by the early 1950s, Neyman's design-based or randomization approach had become generally accepted as standard. It remained virtually unchallenged until the early 1970s, when Royall and his co-authors produced an alternative approach based on statistical modelling. This revived the old idea of purposive selection, under the new name of “balanced sampling”. Suppose that the sampling strategy to be used for a particular survey is required to involve both a stratified sampling design and the classical ratio estimator, but that, within each stratum, a choice is allowed between simple random sampling and simple balanced sampling; then which should the survey statistician choose? The balanced sampling strategy appears preferable in terms of robustness and efficiency, but the randomized design has certain countervailing advantages. These include the simplicity of the selection process and an established public acceptance that randomization is “fair”. It transpires that nearly all the advantages of both schemes can be secured if simple random samples are selected within each stratum and a generalized regression estimator is used instead of the classical ratio estimator.
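The contrast between the classical ratio estimator and a generalized regression (GREG) estimator within one stratum can be sketched as follows. The population and sample here are simulated, with an intercept deliberately included so that the ratio model (y proportional to x) is misspecified while the regression model is not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stratum: auxiliary x known for all N units, y observed in sample only
N = 1000
x = rng.uniform(10, 50, size=N)
y = 2.0 + 0.8 * x + rng.normal(0.0, 2.0, size=N)  # intercept breaks the ratio model
X_bar = x.mean()  # known population mean of the auxiliary variable

s = rng.choice(N, size=100, replace=False)  # simple random sample
xs, ys = x[s], y[s]

# Classical ratio estimator of the population mean of y
ratio_est = ys.mean() / xs.mean() * X_bar

# GREG estimator: sample mean plus an OLS-based calibration adjustment
b1 = np.polyfit(xs, ys, 1)[0]
greg_est = ys.mean() + b1 * (X_bar - xs.mean())
```

Both estimators remain design-consistent under simple random sampling; the GREG estimator's advantage, which the abstract exploits, is that its efficiency does not hinge on the ratio model being correct.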

10.
The present work develops a basic classification scheme for distortion within the framework of classical statistical inference. In particular, it emphasizes the still outstanding and consequential distinction between data contamination and model deviation. It explores when the two types of distortion can have different implications for the performance of statistical inference procedures and how such differences can be detected. A critical review of some important approaches in the robustness and diagnostics literature finally indicates which of them are aimed at data contamination and which at model deviation (independently of what was originally claimed). The paper raises awareness of the above problem through a constructive discussion; it is not meant to introduce new methodology.

11.
This paper studies the semiparametric binary response model with interval data investigated by Manski and Tamer (2002). In this partially identified model, we propose a new estimator based on Manski and Tamer's modified maximum score (MMS) method by introducing density weights into the objective function, which allows us to develop asymptotic properties of the proposed set estimator for inference. We show that the density-weighted MMS estimator converges at a nearly cube-root-n rate. We propose an asymptotically valid inference procedure for the identified region based on subsampling. Monte Carlo experiments provide support for our inference procedure.

12.
The conditional bias was proposed by Moreno Rebollo et al. (1999) as an influence diagnostic in survey sampling when inference is based on the randomization distribution generated by a random sampling design. The conditional bias is a population parameter, so from an applied point of view it must be estimated. In this paper, we propose an estimator of the conditional bias and study conditions that guarantee its unbiasedness. The results are applied to simple random sampling and to probability-proportional-to-aggregate-size sampling when the ratio estimator is used.

13.
This paper views empirical research as a search for illustrations of interesting possibilities which have occurred, and the exploration of the variety of such possibilities in a sample or universe. This leads to a definition of illustrative inference (in contrast to statistical inference), which, we argue, is of considerable importance in many fields of inquiry – ranging from market research and qualitative research in social science, to cosmology. Sometimes, it may be helpful to model illustrative inference quantitatively, so that the size of a sample can be linked to its power (for illustrating possibilities): we outline one model based on probability theory, and another based on a resampling technique.
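One minimal probability model of this "power to illustrate" (a sketch, not necessarily the model the paper outlines): under independent draws, a type with population prevalence p appears at least once in a sample of size n with probability 1 - (1 - p)^n.

```python
import math

def illustration_power(p, n):
    """Probability that a random sample of size n contains at least one case
    of a type with population prevalence p (independent draws assumed)."""
    return 1 - (1 - p) ** n

# Smallest n for which a 5%-prevalence type is illustrated with prob >= 0.95
n_needed = math.ceil(math.log(0.05) / math.log(0.95))
```

This links sample size to illustrative power directly: for a 5%-prevalence type, 59 cases suffice for a 95% chance of seeing at least one example.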

14.
This paper proposes exact distribution-free permutation tests for the specification of a non-linear regression model against one or more possibly non-nested alternatives. The new tests may be validly applied to a wide class of models, including models with endogenous regressors and lag structures. These tests build on the well-known J test developed by Davidson and MacKinnon [1981. Several tests for model specification in the presence of alternative hypotheses. Econometrica 49, 781–793] and their exactness holds under broader assumptions than those underlying the conventional J test. The J-type test statistics are used with a randomization or Monte Carlo resampling technique which yields an exact and computationally inexpensive inference procedure. A simulation experiment confirms the theoretical results and also shows the performance of the new procedure under violations of the maintained assumptions. The test procedure developed is illustrated by an application to inflation dynamics.
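The Monte Carlo resampling device used here can be sketched generically: rank the observed statistic among B draws simulated under the null and compute p = (1 + #{sims >= observed}) / (B + 1), which is exact for continuously distributed statistics. The toy statistic and observed value below are invented, not the paper's J statistic.

```python
import numpy as np

def mc_p_value(stat_obs, simulate_stat, B, rng):
    """Monte Carlo test p-value: rank the observed statistic among B draws
    simulated under the null (exact for continuously distributed statistics)."""
    sims = np.array([simulate_stat(rng) for _ in range(B)])
    return (1 + np.sum(sims >= stat_obs)) / (B + 1)

rng = np.random.default_rng(0)

# Toy null: the statistic is |mean| of 20 standard normal errors;
# the invented "observed" value 0.9 lies far in the null tail
p = mc_p_value(0.9, lambda r: abs(r.normal(size=20).mean()), B=199, rng=rng)
```

The same machinery is computationally inexpensive because each null draw only requires simulating errors and recomputing the statistic, not re-estimating a model under the alternative.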

15.
In this paper, we propose a fixed design wild bootstrap procedure to test parameter restrictions in vector autoregressive models that is robust to conditionally heteroskedastic error terms. The wild bootstrap does not require any parametric specification of the volatility process and takes contemporaneous error correlation implicitly into account. Via a Monte Carlo investigation, empirical size and power properties of the method are illustrated for the case of white noise under the null hypothesis. We compare the bootstrap approach with standard ordinary least squares (OLS)-based, weighted least squares (WLS) and quasi-maximum likelihood (QML) approaches. In terms of empirical size, the proposed method outperforms competing approaches and achieves size-adjusted power close to WLS or QML inference. A White correction of standard OLS inference is satisfactory only in large samples. We investigate the case of Granger causality in a bivariate system of inflation expectations in France and the United Kingdom. Our evidence suggests that the former are Granger causal for the latter while for the reverse relation Granger non-causality cannot be rejected.
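The core of a fixed-design wild bootstrap can be sketched in a univariate AR(1), which is a deliberate simplification of the paper's VAR setting; the data-generating parameters below are invented. The key moves are holding the regressors fixed and re-signing each residual with a Rademacher weight, so the conditional variance pattern of the errors is carried into every bootstrap sample.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented AR(1) data with conditionally heteroskedastic errors
n = 500
e = rng.normal(size=n) * np.sqrt(0.2 + 0.7 * rng.uniform(size=n))
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + e[t]

# OLS fit of the AR(1) coefficient and its residuals
Y, X = y[1:], y[:-1]
phi_hat = (X @ Y) / (X @ X)
resid = Y - phi_hat * X

# Fixed-design wild bootstrap: regressors held fixed, residuals re-signed
# with Rademacher weights so heteroskedasticity is preserved draw by draw
B = 499
phi_star = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=resid.size)
    Y_star = phi_hat * X + resid * w
    phi_star[b] = (X @ Y_star) / (X @ X)

se_boot = phi_star.std(ddof=1)  # bootstrap standard error of phi_hat
```

No model for the volatility process is ever specified, which is exactly the robustness property the abstract emphasizes.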

16.
Through Monte Carlo experiments the effects of a feedback mechanism on the accuracy in finite samples of ordinary and bootstrap inference procedures are examined in stable first- and second-order autoregressive distributed-lag models with non-stationary weakly exogenous regressors. The Monte Carlo is designed to mimic situations that are relevant when a weakly exogenous policy variable affects (and is affected by) the outcome of agents’ behaviour. In the parameterizations we consider, it is found that small-sample problems undermine ordinary first-order asymptotic inference procedures irrespective of the presence and importance of a feedback mechanism. We examine several residual-based bootstrap procedures, each of them designed to reduce one or several specific types of bootstrap approximation error. Surprisingly, the bootstrap procedure which only incorporates the conditional model overcomes the small sample problems reasonably well. Often (but not always) better results are obtained if the bootstrap also resamples the marginal model for the policymakers’ behaviour.

17.
A report is given of a discussion by De Leeuw, Molenaar, and audience on the role of models and generalization in statistical inference.

18.
Parametric mixture models are commonly used in applied work, especially empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the inference question on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when sample size is large. It is well known that likelihood inference in mixture models is complicated due to (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where there is lack of point identification. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.
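A parametric bootstrap for a boundary problem of this kind can be sketched in a stripped-down mixture with known component densities; the components, grid-based maximization, and data below are all invented for illustration and much simpler than the paper's setting. Because the mixing probability sits on the boundary under H0: p = 0, the chi-squared approximation to the profiled LR is unreliable, so the null distribution is simulated instead.

```python
import numpy as np

rng = np.random.default_rng(7)

def loglik(p, x):
    # Two-component normal mixture with known components N(0,1) and N(2,1);
    # normalizing constants cancel in the likelihood ratio
    f0 = np.exp(-0.5 * x ** 2)
    f1 = np.exp(-0.5 * (x - 2.0) ** 2)
    return np.sum(np.log((1 - p) * f0 + p * f1))

def lr_stat(x, grid=np.linspace(0.0, 1.0, 201)):
    # Profiled LR statistic for H0: p = 0 (true value on the boundary)
    lls = np.array([loglik(p, x) for p in grid])
    return 2.0 * (lls.max() - loglik(0.0, x))

x_obs = rng.normal(size=100)  # invented data, generated under the null
t_obs = lr_stat(x_obs)

# Parametric bootstrap: re-simulate under the fitted null and compare
B = 199
t_star = np.array([lr_stat(rng.normal(size=100)) for _ in range(B)])
p_value = (1 + np.sum(t_star >= t_obs)) / (B + 1)
```

The bootstrap replicates inherit the boundary behaviour of the statistic directly, which is why this route can sidestep the discontinuous pointwise asymptotics the abstract describes.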

19.
Discussion     
I thoroughly enjoyed reading the article by Bhadra et al. (2020) and convey my congratulations to the authors for providing a comprehensive and coherent review of horseshoe-based regularization approaches for machine learning models. I am thankful to the editors for providing this opportunity to write a discussion on this useful article, which I expect will turn out to be a good guide in the future for statisticians and practitioners alike. It is quite amazing to see the rapid progress and the magnitude of work advancing the horseshoe regularization approach since the seminal paper by Carvalho et al. (2010). The current review article is a testimony to this. While I have been primarily working with continuous spike and slab priors for high-dimensional Bayesian modeling, I have been following the literature on horseshoe regularization with a keen interest. For my comments on this article, I will focus on some comparisons between these two approaches, particularly in terms of model building, methodology, and some computational considerations. I would like to first provide some comments on performing valid inference based on the horseshoe prior framework.

20.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case that the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.
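The intercept-only exponential case admits a short analytical sketch. Assuming (for illustration, with invented parameter values, and treating the exponential rate as known) y_i = alpha - u_i with u_i ~ Exp(lam), the MLE of alpha is max(y_i), and alpha - max(y_i) = min_i u_i is itself exponential with rate n*lam, which yields an exact finite-sample confidence interval.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented intercept-only frontier: y_i = alpha - u_i, u_i ~ Exp(rate lam),
# with lam treated as known for this sketch
alpha, lam, n = 5.0, 2.0, 50
u = rng.exponential(scale=1.0 / lam, size=n)
y = alpha - u

alpha_hat = y.max()  # MLE of the frontier intercept

# alpha - alpha_hat = min_i u_i ~ Exp(rate n * lam), so an exact 95% CI is
ci = (alpha_hat, alpha_hat - np.log(0.05) / (n * lam))
```

The interval is one-sided by construction (alpha can never lie below max(y_i)), and its exactness holds at any sample size, which is the paper's central point for this case.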

