Similar Articles
20 similar articles found.
1.
Supersaturated designs are an important class of factorial designs in which the number of factors is larger than the number of runs. These designs provide an economical way to perform and analyze industrial experiments. In this paper, we consider generalized Legendre pairs and their corresponding matrices to construct E(s²)-optimal two-level supersaturated designs suitable for screening experiments. We also provide general theorems that yield several infinite families of E(s²)-optimal two-level supersaturated designs of various sizes.
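As an illustration of the E(s²) criterion these constructions optimize, here is a minimal sketch (the criterion itself, not the paper's Legendre-pair construction): for an n × m two-level design X with entries ±1, s_ij = x_i'x_j and E(s²) is the average of s_ij² over all column pairs.

```python
import numpy as np

def e_s2(X):
    """E(s^2) criterion for an n x m two-level design with entries +/-1:
    the average of s_ij^2 over all pairs of columns, s_ij = x_i' x_j.
    Smaller values mean the columns are closer to orthogonality."""
    S = X.T @ X                               # column inner products
    iu = np.triu_indices(X.shape[1], k=1)     # pairs i < j
    return np.mean(S[iu] ** 2)

rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(8, 12))   # 8 runs, 12 factors: supersaturated
print(e_s2(X))
```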

2.
In this paper I present a general method for constructing confidence intervals for predictions from the generalized linear model in sociological research. I demonstrate that the method used for constructing confidence intervals for predictions in classical linear models is a special case of the method for generalized linear models. I examine four such models (the binary logit, the binary probit, the ordinal logit, and the Poisson regression model) to construct confidence intervals for predicted values in the form of a probability, odds, Z score, or event count. The estimated confidence interval for an event prediction, when applied judiciously, can give the researcher useful information and an estimated measure of precision for the prediction, so that interpreting estimates from the generalized linear model becomes easier.
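A minimal sketch of the idea for the binary logit case (simulated data and a hypothetical prediction point x0, not the article's empirical example): form the interval on the linear-predictor scale via the delta method, then map its endpoints through the inverse link to get an interval for the predicted probability.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

x0 = np.array([1.0, 0.3])                  # prediction point (with intercept)
eta = x0 @ fit.params                      # linear predictor
se = np.sqrt(x0 @ fit.cov_params() @ x0)   # standard error on the eta scale
lo, hi = eta - 1.96 * se, eta + 1.96 * se
expit = lambda t: 1 / (1 + np.exp(-t))
print(expit(eta), (expit(lo), expit(hi)))  # prediction and its 95% CI
```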

3.
Supersaturated designs are a form of fractional factorial design in which the number of columns is greater than the number of experimental runs. Construction methods for supersaturated designs have focused mainly on the two-level case. Much practical experience, however, indicates that two levels may sometimes be inadequate. This paper proposes a construction method for mixed-level supersaturated designs consisting of two-level and three-level columns. The χ² statistic is used as a measure of dependency between design columns. The dependency properties of the newly constructed designs are derived and discussed. It is shown that these new designs have low dependencies and thus can be useful in practice.
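A sketch of the pairwise dependency measure (assuming balanced columns, as in the usual χ² measure for supersaturated designs): for a column with a levels and a column with b levels in an n-run design, compare the observed counts of each level pair with the balanced expectation n/(ab); a value of 0 means the pair is orthogonal.

```python
import numpy as np

def chi2_dependency(u, v):
    """Pairwise chi^2 dependency between two balanced design columns;
    0 means the pair of columns is orthogonal."""
    lu, lv = np.unique(u), np.unique(v)
    n = len(u)
    expected = n / (len(lu) * len(lv))        # balanced-column expectation
    stat = 0.0
    for a in lu:
        for b in lv:
            n_ab = np.sum((u == a) & (v == b))
            stat += (n_ab - expected) ** 2 / expected
    return stat

# a two-level column paired with a three-level column in a 12-run design
u = np.tile([0, 1], 6)
v = np.tile([0, 1, 2], 4)
print(chi2_dependency(u, v))   # 0.0: this pair is orthogonal
```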

4.
This paper analyzes the higher-order asymptotic properties of generalized method of moments (GMM) estimators for linear time series models using many lags as instruments. A data-dependent moment selection method based on minimizing the approximate mean squared error is developed. In addition, a new version of the GMM estimator based on kernel-weighted moment conditions is proposed. It is shown that kernel-weighted GMM estimators can reduce the asymptotic bias compared to standard GMM estimators. Kernel weighting also helps to simplify the problem of selecting the optimal number of instruments. A feasible procedure similar to optimal bandwidth selection is proposed for the kernel-weighted GMM estimator.
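A stylized sketch of the kernel-weighting idea (our own simplified setup, not the paper's estimator): estimate θ in y_t = θx_t + u_t by linear GMM with L lagged values of x as instruments, downweighting distant lags with a Bartlett-type kernel k(j/L) = 1 − j/L applied to the instrument columns.

```python
import numpy as np

rng = np.random.default_rng(2)
T, L = 400, 20
e = rng.normal(size=T + L + 1)
v = rng.normal(size=T + L + 1)
x = np.zeros(T + L + 1)
for t in range(1, T + L + 1):
    x[t] = 0.9 * x[t - 1] + e[t]      # persistent regressor
u = 0.8 * e + v                        # u_t correlated with x_t: OLS fails
y = 0.7 * x + u

idx = np.arange(L + 1, T + L + 1)
Y, X = y[idx], x[idx]
Z = np.column_stack([x[idx - j] for j in range(1, L + 1)])  # lags 1..L
w = 1 - np.arange(1, L + 1) / L                             # Bartlett weights
Zk = Z * w                                                  # kernel-weighted instruments

def gmm_iv(X, Z, Y):
    """One-step linear GMM with weight matrix (Z'Z)^(-1), i.e. 2SLS."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    return (X @ P @ Y) / (X @ P @ X)

print(gmm_iv(X, Z, Y), gmm_iv(X, Zk, Y))  # unweighted vs kernel-weighted
```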

5.
We study the generalized bootstrap technique under general sampling designs. We focus mainly on bootstrap variance estimation but we also investigate the empirical properties of bootstrap confidence intervals obtained using the percentile method. Generalized bootstrap consists of randomly generating bootstrap weights so that the first two (or more) design moments of the sampling error are tracked by the corresponding bootstrap moments. Most bootstrap methods in the literature can be viewed as special cases. We discuss issues such as the choice of the distribution used to generate bootstrap weights, the choice of the number of bootstrap replicates, and the potential occurrence of negative bootstrap weights. We first describe the generalized bootstrap for the linear Horvitz‐Thompson estimator and then consider non‐linear estimators such as those defined through estimating equations. We also develop two ways of bootstrapping the generalized regression estimator of a population total. We study in greater depth the case of Poisson sampling, which is often used to select samples in Price Index surveys conducted by national statistical agencies around the world. For Poisson sampling, we consider a pseudo‐population approach and show that the resulting bootstrap weights capture the first three design moments of the sampling error. A simulation study and an example with real survey data are used to illustrate the theory.
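A small sketch for Poisson sampling and the Horvitz-Thompson total, matching only the first two design moments (the paper's pseudo-population weights also capture the third): draw bootstrap adjustments a_i with mean 0 and variance 1 − π_i and set the bootstrap weight to (1 + a_i)/π_i, so the bootstrap variance of the weighted total reproduces the HT variance estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000
y_pop = rng.gamma(2.0, 5.0, size=N)
pi = np.clip(0.02 + 0.3 * y_pop / y_pop.max(), 0.02, 0.9)  # inclusion probs

s = rng.random(N) < pi            # Poisson sample: independent selections
y, p = y_pop[s], pi[s]
ht = np.sum(y / p)                # Horvitz-Thompson total

B = 2000
a = rng.normal(0.0, np.sqrt(1 - p), size=(B, len(y)))  # E[a]=0, Var[a]=1-pi
boot_totals = ((1 + a) / p * y).sum(axis=1)
print(boot_totals.var(ddof=1))              # bootstrap variance
print(np.sum((1 - p) * y**2 / p**2))        # HT variance estimator: close
```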

6.
This article studies a simple, coherent approach for identifying and estimating error‐correcting vector autoregressive moving average (EC‐VARMA) models. Canonical correlation analysis is implemented for both determining the cointegrating rank, using a strongly consistent method, and identifying the short‐run VARMA dynamics, using the scalar component methodology. Finite‐sample performance is evaluated via Monte Carlo simulations and the approach is applied to modelling and forecasting US interest rates. The results reveal that EC‐VARMA models generate significantly more accurate out‐of‐sample forecasts than vector error correction models (VECMs), especially for short horizons.
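For intuition, a sketch of the rank-determination step using Johansen's canonical-correlation-based trace test as a stand-in (via statsmodels; the paper's strongly consistent criterion differs): three simulated series, two of which share a common stochastic trend, so the cointegrating rank is 1.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(4)
T = 400
common = np.cumsum(rng.normal(size=T))          # one shared stochastic trend
y = np.column_stack([common + rng.normal(size=T),
                     common + rng.normal(size=T),
                     np.cumsum(rng.normal(size=T))])

res = coint_johansen(y, det_order=0, k_ar_diff=1)
print(res.lr1)   # trace statistics for r = 0, 1, 2
print(res.cvt)   # 90/95/99% critical values: expect rejection only at r = 0
```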

7.
A broad class of generalized linear mixed models, e.g. variance components models for binary data, percentages, or count data, is introduced by incorporating additional random effects into the linear predictor of a generalized linear model structure. Parameters are estimated by a combination of quasi-likelihood and iterated MINQUE (minimum norm quadratic unbiased estimation), the latter being numerically equivalent to REML (restricted, or residual, maximum likelihood). First, conditional upon the additional random effects, observations on a working variable and weights are derived by quasi-likelihood, using iteratively re-weighted least squares. Second, a linear mixed model is fitted to the working variable, employing the weights for the residual error terms, by iterated MINQUE. The latter may be regarded as a least squares procedure applied to squared and product terms of error contrasts derived from the working variable. No full distributional assumptions are needed for estimation. The model may be fitted with standard software for weighted regression and REML.
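A sketch of the first step for binary data (one quasi-likelihood update producing the working variable and weights; the mixed-model step via iterated MINQUE/REML is omitted): with the logit link, z = η + (y − μ)/(μ(1 − μ)) and the iterative weight is w = μ(1 − μ).

```python
import numpy as np

def working_response_logit(y, eta):
    """One quasi-likelihood (IRLS) step for binary data with logit link:
    returns the working variable z and weights w to which a linear mixed
    model would then be fitted by REML / iterated MINQUE."""
    mu = 1 / (1 + np.exp(-eta))
    z = eta + (y - mu) / (mu * (1 - mu))   # working variable
    w = mu * (1 - mu)                      # iterative weights
    return z, w

rng = np.random.default_rng(5)
y = rng.binomial(1, 0.4, size=10)
eta0 = np.zeros(10)                        # starting linear predictor
z, w = working_response_logit(y, eta0)
print(z, w)
```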

8.
This paper develops an approach to selecting models that make the best use of limited micro-level data sets to estimate production function parameters. Since production is often at the core of agricultural and environmental policy analysis, we evaluate the models using criteria that reflect the objectives of policy analysis. We argue that policy production models should optimize the precision of policy response predictions but also incorporate sufficient heterogeneity to allow policy makers to consider the distributional consequences of policies. Hence we develop a series of quantitative metrics of both precision and heterogeneity to compare model performance. Our approach consists of two steps. We first combine the method of generalized maximum entropy with data envelopment analysis and simultaneously estimate the production frontier and technical inefficiency parameters. With a set of household-level data, we estimate production models at three different levels. The province-level model restricts the production technology parameters to be the same for all households. The county-level models allow production technology parameters to vary by county but restrict them to be equal across communities within the same county. The community-level models allow production technology parameters to vary by community. In the second step, we use the disaggregated information gain, the percentage absolute prediction error, and Theil's U statistic to evaluate these models.
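The second-step evaluation metrics are straightforward to compute; a sketch (using the U1 form of Theil's statistic, one of two common conventions, and a mean percentage-absolute-error form, both assumptions on our part):

```python
import numpy as np

def theil_u1(actual, pred):
    """Theil's U1 inequality coefficient (0 = perfect fit, 1 = worst)."""
    rmse = np.sqrt(np.mean((actual - pred) ** 2))
    return rmse / (np.sqrt(np.mean(actual**2)) + np.sqrt(np.mean(pred**2)))

def pct_abs_error(actual, pred):
    """Mean percentage absolute prediction error."""
    return np.mean(np.abs((pred - actual) / actual)) * 100

actual = np.array([10.0, 12.0, 9.0, 15.0])
pred = np.array([11.0, 11.5, 9.5, 14.0])
print(theil_u1(actual, pred), pct_abs_error(actual, pred))
```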

9.
An estimation procedure is presented for a class of threshold models for ordinal data. These models may include both fixed and random effects with associated variance components on an underlying scale. The residual error distribution on the underlying scale can be given greater flexibility by introducing additional shape parameters, e.g. a kurtosis parameter or parameters that model heterogeneous residual variances as a function of factors and covariates. The estimation procedure is an extension of an iterative re-weighted restricted maximum likelihood procedure originally developed for generalized linear mixed models. The procedure is illustrated with a practical problem involving damage to potato tubers and with data from animal breeding and medical research from the literature.
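A sketch of the fixed-effects-only special case, an ordered probit, i.e. a threshold model on an underlying normal scale (via statsmodels' OrderedModel; the article's random effects and extra shape parameters are beyond this):

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(6)
n = 300
x = rng.normal(size=n)
latent = 0.8 * x + rng.normal(size=n)            # underlying normal scale
y = np.digitize(latent, bins=[-0.5, 0.7])        # thresholds -> 3 ordered classes
y = pd.Series(pd.Categorical(y, categories=[0, 1, 2], ordered=True))

fit = OrderedModel(y, x[:, None], distr="probit").fit(method="bfgs", disp=False)
print(fit.params)   # slope followed by the threshold parameters
```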

10.
In-depth data analysis plus statistical modeling can produce inferential causal models. Their creation thus combines aspects of analysis by close inspection, that is, reason analysis and cross-tabular analysis, with statistical analysis procedures, especially those that are special cases of the generalized linear model (McCullagh and Nelder, 1989; Agresti, 1996; Lindsey, 1997). This paper explores some of the roots of this combined method and suggests some new directions. An exercise clarifies some limitations of classic reason analysis by showing how the cross tabulation of variables with controls for test factors may produce better inferences. Then, given the cross tabulation of several variables, by explicating Coleman effect parameters, logistic regressions, and Poisson log-linear models, it shows how generalized linear models provide appropriate measures of effects and tests of statistical significance. Finally, to address a weakness of reason analysis, a case-control design is proposed and an example is developed.
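A sketch of the log-linear step on a toy 2×2 cross tabulation (made-up counts): in the saturated Poisson log-linear model, the interaction coefficient equals the log odds ratio, giving an effect measure and significance test directly from the table.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# flattened 2x2 cross tabulation: exposure x outcome with cell counts
tab = pd.DataFrame({"exposed": [0, 0, 1, 1],
                    "outcome": [0, 1, 0, 1],
                    "count":   [57, 13, 41, 29]})

fit = smf.glm("count ~ exposed * outcome", data=tab,
              family=sm.families.Poisson()).fit()
print(fit.params["exposed:outcome"])   # interaction = log odds ratio
print(np.log(57 * 29 / (13 * 41)))     # check: identical in the saturated model
```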

11.
In this paper, we present general classes of factor screening designs for identifying a few important factors from a list of m (≥ 3) factors, each at three levels. A design is a subset of the 3^m possible runs. The problem of finding designs with a small number of runs is considered here. A main effect plan requires at least (2m + 1) runs for estimating the general mean and the linear and quadratic effects of the m factors. An orthogonal main effect plan requires, in addition, that the number of runs be a multiple of 9. For example, when m = 5, a main effect plan requires at least 11 runs and an orthogonal main effect plan requires 18 runs. The two general factor screening designs presented here are nonorthogonal designs with (2m − 1) runs. These designs, called search designs, permit us to search for and identify at most two important factors out of the m factors under the search linear model introduced in Srivastava (1975). For example, when m = 5, the two new plans given in this paper have 9 runs, a significant improvement over an orthogonal main effect plan with 18 runs and an improvement over a main effect plan with at least 11 runs. We compare these designs, for 4 ≤ m ≤ 10, using arithmetic and geometric means of the determinants, traces, and maximum characteristic roots of certain matrices. Designs D1 and D2 are identical for m = 3, and this design is optimal in the class of all search designs under the six criteria discussed above. Designs D1 and D2 are also identical for m = 4 under some row and column permutations; consequently, D1 and D2 are equally good for searching for and identifying one important factor when m = 4. Design D1 is marginally better than design D2 for searching for and identifying one important factor when m = 5, …, 10, and marginally better for identifying two important factors when m = 5, 7, 9. Design D2 is somewhat better than D1 for m = 6, 8. For m = 10, D1 is marginally better than D2 with respect to the geometric mean, and D2 is marginally better than D1 with respect to the arithmetic mean, of the maximum characteristic roots.
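A sketch of the comparison criteria named above, computed for a toy design (determinant, trace, and maximum characteristic root of the information matrix X'X; the paper averages such quantities over the relevant search submodels, which we do not reproduce):

```python
import numpy as np

def design_criteria(X):
    """Determinant, trace, and largest eigenvalue of the information
    matrix X'X of a design/model matrix X."""
    M = X.T @ X
    return np.linalg.det(M), np.trace(M), np.linalg.eigvalsh(M).max()

# toy full 3^2 design for m = 2 three-level factors (levels coded -1, 0, 1),
# with columns for the mean, linear, and centered quadratic effects
X = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
X = np.column_stack([np.ones(9), X, X**2 - 2 / 3])
print(design_criteria(X))
```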

12.
Suppose that the econometrician is interested in comparing two misspecified moment restriction models, where the comparison is performed in terms of some chosen measure of fit. This paper is concerned with describing an optimal test of the Vuong (1989) and Rivers and Vuong (2002) type null hypothesis that the two models are equivalent under the given measure of fit (the ranking may vary for different measures). We adopt the generalized Neyman–Pearson optimality criterion, which focuses on the decay rates of the type I and II error probabilities under fixed non-local alternatives, and derive an optimal but practically infeasible test. Then, as an illustration, by considering the model comparison hypothesis defined by the weighted Euclidean norm of moment restrictions, we propose a feasible approximation to the optimal test statistic and study its asymptotic properties. Local power properties, one-sided tests, and comparison under the generalized empirical likelihood-based measure of fit are also investigated. A simulation study illustrates that our approximate test is more powerful than the Rivers–Vuong test.
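For reference, a sketch of the classical Vuong (1989) statistic for two non-nested likelihood-based models (the baseline this paper improves on; its optimal test for moment restriction models is different): with per-observation log-likelihood differences d_i, the statistic is √n · mean(d)/sd(d), asymptotically N(0, 1) under equivalence.

```python
import numpy as np

def vuong_statistic(loglik_a, loglik_b):
    """Classical Vuong test statistic from per-observation log-likelihoods.

    H0: the two non-nested models are equally close in KL divergence.
    Large positive values favor model A, large negative favor model B.
    """
    d = np.asarray(loglik_a) - np.asarray(loglik_b)
    return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)

rng = np.random.default_rng(7)
la = rng.normal(-1.00, 0.5, size=500)   # stand-in per-obs log-likelihoods
lb = rng.normal(-1.05, 0.5, size=500)
print(vuong_statistic(la, lb))          # compare with N(0,1) quantiles
```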

13.
Huang Jin, Qin Jiangtao. 《物流科技》 (Logistics Sci-Tech), 2008, 31(9): 132-134.
Given the interdependence and mutual constraints that objectively exist among the many factors involved in the comprehensive evaluation of distributors, this paper introduces the entropy-weight theory from information theory into the evaluation system and proposes a distributor selection method based on information entropy. Building on expert surveys, the method accounts for interactions among indicators and ties the weights closely to the intrinsic characteristics of the evaluation data, avoiding the influence of subjective differences in judgment on the evaluation results and thereby improving the accuracy and scientific soundness of the decision. Using the entropy weight method to assign weights to multiple evaluation indicators gives the weight allocation a theoretical basis. Finally, a worked example shows that the method is intuitive and feasible.
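A sketch of the entropy weight method described above (assuming benefit-type indicators and column-sum normalization, the usual conventions): normalize the decision matrix to proportions p_ij, compute each indicator's entropy e_j = −(1/ln n) Σ_i p_ij ln p_ij, and set w_j = (1 − e_j)/Σ_k(1 − e_k), so indicators whose values differ more across distributors receive more weight.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for an n x m decision matrix of benefit-type
    indicators (rows: candidate distributors, columns: indicators)."""
    n, m = X.shape
    P = X / X.sum(axis=0)                     # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)        # entropy of each indicator
    d = 1 - e                                 # degree of diversification
    return d / d.sum()

# 4 distributors scored on 3 indicators (made-up scores)
X = np.array([[0.8, 0.6, 0.9],
              [0.7, 0.9, 0.4],
              [0.9, 0.5, 0.6],
              [0.6, 0.8, 0.7]])
w = entropy_weights(X)
print(w, X @ w)   # weights and weighted composite scores per distributor
```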

14.
The entropy valuation of options (Stutzer, 1996) provides a risk-neutral probability distribution (RND) as the pricing measure by minimizing the Kullback–Leibler (KL) divergence between the empirical probability distribution and its risk-neutral counterpart. This article establishes a unified entropic framework by developing a class of generalized entropy pricing models based on the Cressie–Read (CR) family of divergences. The main contributions of this study are: (1) the unified framework can readily incorporate a set of informative risk-neutral moments (RNMs) of the underlying return extracted from the option market, which accurately captures the characteristics of the underlying distribution; (2) the classical KL-based entropy pricing model is extended to a unified entropic pricing framework built on a family of CR divergences. For each of the proposed models under the unified framework, the optimal RND is derived using the dual method. Simulations show that, compared to the true price, each model in the proposed family prices options with high accuracy. Meanwhile, the pricing biases differ among the models, and we therefore conduct theoretical analysis and experimental investigations to explore the driving causes.
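A sketch of the baseline KL (Stutzer-type) case via exponential tilting (toy lognormal return sample and made-up S0, K, r, T; not the paper's CR-family models): tilt the empirical return distribution to q_i ∝ exp(γR_i), choose γ so the tilted mean gross return equals the risk-free gross return (the martingale constraint), then price by discounted expectation under q.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(8)
S0, K, r, T = 100.0, 100.0, 0.02, 0.25
R = np.exp(rng.normal(0.01, 0.1, size=5000))   # simulated gross returns S_T/S_0

def tilted(gamma):
    q = np.exp(gamma * R)
    return q / q.sum()

# martingale constraint: E_q[R] = exp(rT)
gamma = brentq(lambda g: tilted(g) @ R - np.exp(r * T), -50, 50)
q = tilted(gamma)

payoff = np.maximum(S0 * R - K, 0.0)           # European call payoff
price = np.exp(-r * T) * q @ payoff
print(gamma, price)
```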

15.
In recent years, there has been increased interest in the penalized likelihood methodology, which can be used efficiently for shrinkage and selection purposes. This strategy can also result in unbiased, sparse, and continuous estimators. However, the performance of the penalized likelihood approach depends on the proper choice of the regularization parameter, so it is important to select it appropriately. To this end, the generalized cross-validation method is commonly used. In this article, we first propose new estimates of the norm of the error in the generalized linear models framework, through the use of Kantorovich inequalities. These estimates are then used to derive a tuning parameter selector for penalized generalized linear models. The proposed method, unlike the standard methods, does not depend on resampling and therefore yields a considerable gain in computational time while producing improved results. A thorough simulation study is conducted to support the theoretical findings, and a comparison of the penalized methods with the L1, hard thresholding, and smoothly clipped absolute deviation penalty functions is performed for penalized logistic regression and penalized Poisson regression. A real data example is analyzed, and a discussion follows.
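As a reference point for the commonly used selector mentioned above, a sketch of standard generalized cross-validation for a ridge-type penalty in the Gaussian linear model (a stand-in; the article's Kantorovich-based selector and GLM setting are not reproduced here): GCV(λ) = n·RSS(λ)/(n − tr(H_λ))², minimized over a grid.

```python
import numpy as np

def gcv_ridge(X, y, lambdas):
    """Generalized cross-validation for ridge regression; returns the
    grid value minimizing n * RSS / (n - tr(H))^2."""
    n, p = X.shape
    scores = []
    for lam in lambdas:
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
        resid = y - H @ y
        scores.append(n * (resid @ resid) / (n - np.trace(H)) ** 2)
    return lambdas[int(np.argmin(scores))], scores

rng = np.random.default_rng(9)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.r_[np.ones(3), np.zeros(p - 3)]      # sparse truth
y = X @ beta + rng.normal(size=n)
lam, _ = gcv_ridge(X, y, np.logspace(-2, 3, 30))
print(lam)
```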

16.
Existing methods for feature screening focus mainly on the mean function of regression models. The variance function, however, plays an important role in statistical theory and applications. We therefore investigate feature screening for mean and variance functions within a multiple-index framework for high-dimensional regression models. Some information about the predictors may be known in advance from previous investigations and experience; for example, a certain set of predictors may be known to be related to the response. Based on this conditional information, together with empirical likelihood, we propose conditional feature screening procedures. Our methods can consistently estimate the sets of active predictors in the mean and variance functions. Interestingly, the proposed screening procedures avoid estimating the unknown link functions in the mean and variance functions and, moreover, work well under high correlation among the predictors without an iterative algorithm. Our proposal is therefore computationally simple. Furthermore, as a conditional method, it is robust to the choice of the conditioning set. The theoretical results show that the proposed procedures have sure screening properties. The attractive finite-sample performance of our method is illustrated in simulations and a real data application.
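For intuition, a sketch of plain marginal (SIS-type) screening for the mean function only, as a simple stand-in (the paper's conditional, empirical-likelihood-based procedure for mean and variance functions is considerably more involved): rank predictors by absolute marginal correlation with the response and keep the top d.

```python
import numpy as np

def sis_screen(X, y, d):
    """Marginal correlation screening: indices of the d predictors most
    correlated (in absolute value) with the response."""
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    corr = np.abs(Xc.T @ yc) / len(y)
    return np.argsort(corr)[::-1][:d]

rng = np.random.default_rng(10)
n, p = 200, 1000                       # high dimensional: p >> n
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)
print(sis_screen(X, y, d=10))          # should include indices 0 and 3
```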

17.
We develop analytical results on the second-order bias and mean squared error of estimators in time-series models. These results provide a unified approach to deriving the properties of a large class of estimators in linear and nonlinear time-series models; they are valid for both normal and nonnormal samples of observations and allow for stochastic regressors. The estimators covered include generalized method of moments, maximum likelihood, least squares, and other extremum estimators. Our general results are applied to four time-series models. We investigate the effects of nonnormality on the second-order bias results for two of these models, while for all four models the second-order bias and mean squared error results are given under normality. Numerical results for some of these models are also presented.

18.
We compare four estimation methods for the coefficients of a linear structural equation with instrumental variables. As the classical methods we consider the limited information maximum likelihood (LIML) estimator and the two-stage least squares (TSLS) estimator; as the semi-parametric methods we consider the maximum empirical likelihood (MEL) estimator and the generalized method of moments (GMM) (or estimating equation) estimator. Tables and figures of the distribution functions of the four estimators are given for enough parameter values to cover most linear models of interest, including some heteroscedastic and nonlinear cases. We find that the LIML estimator performs well in terms of bounded loss functions and probabilities when the number of instruments is large, that is, in micro-econometric models with "many instruments" in the terminology of the recent econometric literature.
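A toy simulation in the spirit of this comparison (TSLS versus OLS only; LIML, MEL, and GMM are omitted), illustrating the well-known bias of TSLS toward OLS when the number of instruments is large relative to the sample size:

```python
import numpy as np

rng = np.random.default_rng(11)
n, K, beta = 200, 30, 1.0                  # "many instruments": K = 30
Z = rng.normal(size=(n, K))
pi = np.full(K, 0.15)                      # modest instrument strength
e = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=n)
x = Z @ pi + e[:, 0]                       # endogenous regressor
y = beta * x + e[:, 1]

b_ols = (x @ y) / (x @ x)                  # inconsistent under endogeneity
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection onto instruments
b_tsls = (x @ P @ y) / (x @ P @ x)
print(b_ols, b_tsls)   # TSLS is pulled toward OLS as K grows, but less biased
```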

19.
This paper studies the Type I error rate obtained when using the Breslow-Day (BD) test to detect nonuniform differential item functioning (NUDIF) in a short test when the average ability of one group is significantly higher than that of the other. Its performance is compared with logistic regression (LR) and the standard Mantel-Haenszel (MH) procedure. Responses to a 20-item test were simulated without differential item functioning (DIF) according to the three-parameter logistic model. The manipulated factors were sample size and item parameters. The design yielded 40 conditions, each replicated 50 times, and the false positive rate at a 5% significance level obtained with the three methods was recorded for each condition. In most cases, BD performed better than LR and MH in terms of proneness to Type I error. With the BD test, the Type I error rate was similar to the nominal one when the goodness-of-fit check against the binomial distribution (the number of false positives among the fifty replications of a Bernoulli variable with parameter 0.05) excluded the item with the highest discrimination and difficulty parameters in the case of equally sized groups.
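A sketch of the BD step on ability-stratified 2×2 tables (via statsmodels' StratifiedTable, whose test_equal_odds is the Breslow-Day homogeneity test; the full 3PL simulation design is not reproduced): with no DIF, the odds ratio is common across strata and BD should rarely reject.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

rng = np.random.default_rng(12)
tables = []
for p in (0.3, 0.45, 0.6, 0.7, 0.8):      # correct-response rate per stratum
    ref = rng.binomial(80, p)             # reference group, 80 examinees
    foc = rng.binomial(60, p)             # focal group, 60 examinees
    # rows: group, cols: correct / incorrect; common odds ratio = 1 (no DIF)
    tables.append([[ref, 80 - ref], [foc, 60 - foc]])

st = StratifiedTable(tables)
bd = st.test_equal_odds()                 # Breslow-Day homogeneity test
mh = st.test_null_odds()                  # Mantel-Haenszel common-OR test
print(bd.statistic, bd.pvalue, mh.statistic, mh.pvalue)
```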

20.
Weak and strong mean square error tests of restrictions presented in Wallace (1972) are generalized to apply to singular linear models. The singularity necessitates a slight change in the strong m.s.e. criterion and the requirement that the restrictions be estimable; otherwise the tests are applied in a fashion analogous to the non-singular case. Use of these tests implies that the solution for the linear model parameter vector is contingent on a test result. The risk behavior of these contingent solutions is discussed.
