Similar Documents
20 similar documents found
1.
This article provides an overview of the recent literature on the design of blocked and split-plot experiments with quantitative experimental variables. A detailed literature study introduces the ongoing debate between an optimal design approach to constructing blocked and split-plot designs and approaches in which the equivalence of ordinary least squares and generalized least squares estimates is sought. Examples are given in which the competing design strategies lead to totally different designs, as well as examples in which the optimal experimental designs are orthogonally blocked or equivalent-estimation split-plot designs.

2.
In this paper we present a new stochastic characterization of the Loewner optimality design criterion. The result is obtained by proving a generalization of the well-known corollary of Anderson's theorem. Certain connections between Loewner optimality and the stochastic distance optimality design criterion are shown. We also present applications and generalizations of the main result. Received: 9 August 2000

3.
Within the framework of the classical linear regression model, integral optimal design criteria of a stochastic nature are considered and their properties are established. Their limiting behaviour generalizes that of the stochastic distance optimality criterion. A line-fit model is taken as an example. Acknowledgement. I would like to thank the referees for their constructive comments, which improved the paper.

4.
In the simple errors-in-variables model, the least squares estimator of the slope coefficient is known to be biased towards zero, both for finite sample size and asymptotically. In this paper we suggest a new corrected least squares estimator, in which the bias correction is based on approximating the finite-sample bias by a lower bound. This estimator is computationally very simple. It is compared with previously proposed corrected least squares estimators, where the correction aims at removing the asymptotic bias or the exact finite-sample bias. For each type of corrected least squares estimator we consider the theoretical form, which depends on an unknown parameter, as well as various feasible forms. An analytical comparison of the theoretical estimators is complemented by a Monte Carlo study evaluating the performance of the feasible estimators. The new estimator proposed in this paper proves to be superior with respect to the mean squared error.
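The attenuation this abstract refers to is easy to reproduce numerically. The following Python sketch is an illustration only: it simulates the simple errors-in-variables model and applies the classical correction that removes the asymptotic bias under the assumption that the measurement-error variance is known; it is not the lower-bound-based estimator proposed in the paper, and all parameter values are made up for the example.

import numpy as np

# Illustration only: y = beta*xi + e, but x = xi + u is observed instead of xi.
# Compare the naive OLS slope with the classical correction for the asymptotic
# attenuation bias, assuming the measurement-error variance sigma_u^2 is known.
rng = np.random.default_rng(0)
n, beta, sigma_u = 200, 1.0, 0.8

xi = rng.normal(0.0, 1.0, n)            # true (unobserved) regressor
x = xi + rng.normal(0.0, sigma_u, n)    # observed regressor with error
y = beta * xi + rng.normal(0.0, 0.5, n)

b_ols = np.sum(x * y) / np.sum(x * x)          # biased towards zero
reliability = 1.0 - sigma_u ** 2 / np.var(x)   # estimate of var(xi)/var(x)
b_corrected = b_ols / reliability              # removes the asymptotic bias

print(f"OLS slope: {b_ols:.3f}  corrected: {b_corrected:.3f}  true: {beta}")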

5.
F. Brodeau 《Metrika》1999,49(2):85-105
This paper is devoted to the study of the least squares estimator of f for the classical, fixed-design, nonlinear model X(t_i) = f(t_i) + ε(t_i), i = 1, 2, …, n, where the ε(t_i), i = 1, …, n, are independent second-order random variables. The estimation of f is based upon a given parametric form. In Brodeau (1993) this subject was studied in the homoscedastic case. This time we assume that the ε(t_i) have non-constant and unknown variances σ²(t_i). Our main goal is to develop two statistical tests, one for testing that f belongs to a given class of functions possibly discontinuous in their first derivative, and another for comparing two such classes. The fundamental tool is an approximation of the elements of these classes by more regular functions, which leads to asymptotic properties of estimators based on the least squares estimator of the unknown parameters. We point out that Neubauer and Zwanzig (1995) obtained interesting results on related problems using the same approximation technique. Received: February 1996

6.
J. Agulló 《Metrika》2002,55(1-2):3-16
We propose an exchange algorithm (EA) for computing the least quartile difference estimate in a multiple linear regression model. Empirical results suggest that the EA is faster and more accurate than the usual p-subset algorithm.

7.
Boutahar  Mohamed  Deniau  Claude 《Metrika》1996,43(1):57-67
Here we study the least squares estimates in some regression models. We assume that the evolution of the parameter is either linearly explosive (i.e. polynomial) or stable (i.e. sinusoidal). We prove strong consistency and establish the rate of convergence.

8.
This paper applies partial least squares (PLS) regression, taking Thailand's pineapple trade as an example. Explanatory variables are screened with the variable importance in projection criterion, components are extracted by cross-validation, and a PLS regression model is then built. The influence of each indicator on Thailand's pineapple export trade is analysed in depth. The study shows that Thai pineapple exports are closely related to raw-material prices and factory processing speed, and that the PLS regression fits the data better than ordinary least squares regression.
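For readers unfamiliar with PLS regression, the following Python sketch shows the kind of fit this abstract describes, using scikit-learn. The synthetic data, the number of predictors and the cross-validated choice of components are illustrative assumptions, not the paper's dataset or results.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hypothetical predictors standing in for trade indicators such as
# raw-material prices and processing speed; not the paper's data.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 6))
y = X @ np.array([1.5, 0.0, 0.8, 0.0, 0.0, 0.3]) + rng.normal(size=60)

# Pick the number of PLS components by cross-validation (R^2 score).
cv_scores = [cross_val_score(PLSRegression(n_components=k), X, y, cv=5).mean()
             for k in range(1, 6)]
best_k = int(np.argmax(cv_scores)) + 1

pls = PLSRegression(n_components=best_k).fit(X, y)
print("components:", best_k, " in-sample R^2:", round(pls.score(X, y), 3))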

9.
On the selection of forecasting models
It is standard in applied work to select forecasting models by ranking candidate models by their prediction mean squared error (PMSE) in simulated out-of-sample (SOOS) forecasts. Alternatively, forecast models may be selected using information criteria (IC). We compare the asymptotic and finite-sample properties of these methods in terms of their ability to minimize the true out-of-sample PMSE, allowing for possible misspecification of the forecast models under consideration. We show that under suitable conditions the IC method will be consistent for the best approximating model among the candidate models. In contrast, under standard assumptions the SOOS method, whether based on recursive or rolling regressions, will select overparameterized models with positive probability, resulting in excessive finite-sample PMSEs.
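The two selection rules being compared can be made concrete with a small simulation. The numpy sketch below is an illustration under assumptions of my own (AR(1) data, AR(p) candidate models, an expanding-window split after 200 observations), not a reproduction of the paper's setup: it ranks the candidates both by AIC and by recursive one-step SOOS prediction MSE.

import numpy as np

rng = np.random.default_rng(2)
n = 300
y = np.zeros(n)
for t in range(1, n):                       # true process: AR(1)
    y[t] = 0.6 * y[t - 1] + rng.normal()

def ar_design(y, p, t_end):
    # regressors y[t-1], ..., y[t-p] for targets y[p], ..., y[t_end-1]
    X = np.column_stack([y[p - j - 1:t_end - j - 1] for j in range(p)])
    return X, y[p:t_end]

for p in (1, 2, 3, 4):
    # AIC (up to constants) from a full-sample fit
    X, z = ar_design(y, p, n)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    rss = np.sum((z - X @ beta) ** 2)
    aic = len(z) * np.log(rss / len(z)) + 2 * p

    # recursive (expanding-window) one-step SOOS forecasts
    errs = []
    for t in range(200, n):
        Xt, zt = ar_design(y, p, t)
        b, *_ = np.linalg.lstsq(Xt, zt, rcond=None)
        errs.append(y[t] - y[t - p:t][::-1] @ b)
    print(f"p={p}  AIC={aic:7.1f}  SOOS PMSE={np.mean(np.square(errs)):.3f}")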

10.
The approximate theory of optimal linear regression design leads to specific convex extremum problems for numerical solution. A conceptual algorithm is stated, whose concrete versions lead us from steepest-descent-type algorithms to improved gradient methods, and finally to second-order methods with excellent convergence behaviour. Applications are given to symmetric multiple polynomial models of degree three or less, where invariance structures are utilized. A final section is devoted to the construction of efficient exact designs of size N from the optimal approximate designs. For the multifactor cubic model and some of the most popular optimality criteria (D-, A-, and I-criteria), fairly efficient exact designs are obtained, even for small sample size N. AMS Subject Classification: 62K05. Abbreviated Title: Algorithms for Optimal Design. Invited paper presented at the International Conference on Mathematical Statistics, ProbaStat '94, Smolenice, Slovakia.
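As a point of comparison with the gradient and second-order methods mentioned in this abstract, the classical multiplicative (first-order) algorithm for a D-optimal approximate design can be stated in a few lines. The Python sketch below applies it to a single-factor cubic model on a grid over [-1, 1]; the grid, iteration count and weight threshold are illustrative choices, and this is not the algorithm developed in the paper.

import numpy as np

# Multiplicative algorithm for a D-optimal approximate design on a grid,
# model f(x) = (1, x, x^2, x^3), i.e. p = 4 parameters.
grid = np.linspace(-1.0, 1.0, 201)
F = np.vander(grid, 4, increasing=True)      # rows f(x_i)
w = np.full(len(grid), 1.0 / len(grid))      # uniform starting design

for _ in range(2000):
    M = F.T @ (w[:, None] * F)                             # information matrix M(w)
    d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)   # variance function d(x_i, w)
    w *= d / 4.0                                           # update keeps weights summing to 1

keep = w > 0.01
print("support points:", np.round(grid[keep], 3))
print("weights:", np.round(w[keep], 3))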

11.
吕敏红  张惠玲 《价值工程》2012,31(20):301-302
In recent years, semiparametric models have become a powerful tool for regression problems and a focus of current research in regression analysis, attracting the attention of many scholars. This article studies a semiparametric regression model with AR(p) errors: the correlation in the errors is first removed, the model is then transformed into a classical semiparametric regression model, and the model parameters are estimated by the penalized least squares method.

12.
It is shown how to implement an EM algorithm for maximum likelihood estimation of hierarchical nonlinear models for data sets consisting of more than two levels of nesting. This upward–downward algorithm makes use of the conditional independence assumptions implied by the hierarchical model. It can not only be used for the estimation of models with a parametric specification of the random effects, but also to extend the two-level nonparametric approach – sometimes referred to as latent class regression – to three or more levels. The proposed approach is illustrated with an empirical application.

13.
The paper demonstrates how the E-stability principle introduced by Evans and Honkapohja [2001. Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, NJ] can be applied to models with heterogeneous and private information in order to assess the stability of rational expectations equilibria under learning. The paper extends already known stability results for the Grossman and Stiglitz [1980. On the impossibility of informationally efficient markets. American Economic Review 70, 393–408] model to a more general case with many differentially informed agents and to the case where information is endogenously acquired by optimizing agents. In both cases it turns out that the rational expectations equilibrium of the model is inherently E-stable and thus locally stable under recursive least squares learning.

14.
Iterated weighted least squares (IWLS) is investigated for estimating the regression coefficients in a linear model with symmetrically distributed errors. The variances of the errors are not specified; it is not assumed that they are unknown functions of the explanatory variables nor that they are given in some parametric way.
IWLS is carried out in a random number of steps, of which the first is OLS. In each step the error variance at time t is estimated with a weighted sum of m squared residuals in the neighbourhood of t, and the coefficients are estimated using WLS. Furthermore, an estimate of the covariance matrix is obtained. If this estimate is minimal in some sense, the iteration process is stopped.
Asymptotic properties of IWLS are derived for increasing sample size n. Some particular cases show that the asymptotic efficiency can be increased by allowing more than two steps. Asymptotic efficiency with respect to WLS with the true error variances can even be obtained if m is not fixed but tends to infinity with n and if the heteroskedasticity is smooth.
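A stripped-down two-step version of such a scheme can be written in a few lines of Python. The sketch below is an illustration under simplifying assumptions (a single regressor, a fixed window of m squared residuals, no data-driven stopping rule) and is not the estimator analysed in the paper.

import numpy as np

rng = np.random.default_rng(3)
n, m = 400, 25
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
sigma = 0.2 + 1.5 * x                        # smooth heteroskedasticity
y = X @ np.array([1.0, 2.0]) + sigma * rng.normal(size=n)

# Step 1: OLS
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_ols

# Step 2: local variance estimates (moving average of m squared residuals)
kernel = np.ones(m) / m
var_hat = np.convolve(resid ** 2, kernel, mode="same")

# Step 3: WLS with weights 1 / estimated variance
W = 1.0 / var_hat
XtWX = X.T @ (W[:, None] * X)
b_wls = np.linalg.solve(XtWX, X.T @ (W * y))
print("OLS:", np.round(b_ols, 3), " two-step IWLS:", np.round(b_wls, 3))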

15.
A fundamental statistical problem is to indicate which of two hypotheses is better supported by the data. Statistics designed for this purpose are called weights of evidence. In this paper we study the problem of robust weights of evidence, optimal in their performance while robust in the infinitesimal sense of the influence function.

16.
The focus of this paper is to characterize regulatory mechanisms for natural monopolies to provide for optimal technical progress when information is asymmetric. We model a Bayesian-Nash game where the monopolist has private knowledge of the cost-reducing effects of R&D investment to generate process innovations. In the first case, a price-regulated, profit-maximizing firm whose R&D level is unobservable sets its R&D level efficiently to maximize profits at the output level chosen by the firm. However, the level of technical progress achieved by the firm in this case is too high from the regulator's point of view since, in the second-best regulated solution of interest, the regulator has to provide for the R&D expenditures, assumed sunk, as well as for information rents transferred to the firm. In a second case, it can be shown that if the regulator can observe and set limits on the firm's investment in R&D, social welfare is improved, even though the regulated investment level is no longer efficient at the output level chosen by the firm. The reason for the welfare improvement is that losses in consumer surplus due to a decrease in output and an increase in the price are offset by a decrease in information rents and R&D costs transferred, causing the social costs of public funds to fall. Received: 31 July 1994 / Accepted: 15 January 1999

17.
A highly accurate demand forecast is fundamental to the success of every revenue management model. As is often required in both practice and theory, we aim to forecast the accumulated booking curve, as well as the number of reservations expected for each day in the booking horizon. To reduce the dimensionality of this problem, we apply singular value decomposition to the historical booking profiles. The forecast of the remaining part of the booking horizon is dynamically adjusted to the earlier observations using the penalized least squares and historical proportion methods. Our proposed updating procedure considers the correlation and dynamics of bookings both within the booking horizon and between successive product instances. The approach is tested on real hotel reservation data and shows a significant improvement in forecast accuracy.
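The dimensionality-reduction step mentioned in this abstract can be illustrated with a truncated SVD of a matrix of historical booking curves. The Python sketch below uses synthetic curves and an assumed choice of three retained components; it shows only the compression and reconstruction idea, not the paper's penalized least squares updating procedure.

import numpy as np

rng = np.random.default_rng(4)
days, horizon = 120, 30                      # 120 past instances, 30-day horizon
t = np.linspace(0.0, 1.0, horizon)
base = 100.0 * t ** 1.5                      # typical cumulative booking curve
B = base * (0.7 + 0.6 * rng.random((days, 1))) + rng.normal(0, 3, (days, horizon))

U, s, Vt = np.linalg.svd(B - B.mean(axis=0), full_matrices=False)
k = 3                                        # retained components (assumed)
scores = U[:, :k] * s[:k]                    # low-dimensional representation
B_hat = B.mean(axis=0) + scores @ Vt[:k]     # reconstructed booking curves

rel_err = np.linalg.norm(B - B_hat) / np.linalg.norm(B)
print(f"relative reconstruction error with k={k}: {rel_err:.3f}")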

18.
Time series data are often subject to statistical adjustments needed to increase accuracy, replace missing values and/or facilitate data analysis. The most common adjustments made to original observations are signal extraction (e.g. smoothing), benchmarking, interpolation and extrapolation. In this article, we present a general dynamic stochastic regression model from which most of these adjustments can be performed, and prove that the resulting generalized least squares estimator is the minimum-variance linear unbiased estimator. We extend current methods to include those cases where the signal follows a mixed model (deterministic and stochastic components) and the errors are autocorrelated and heteroscedastic.
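The generalized least squares estimator referred to here has the closed form b = (X' Ω⁻¹ X)⁻¹ X' Ω⁻¹ y, where Ω is the error covariance matrix. The Python sketch below evaluates it for an assumed AR(1), heteroscedastic Ω; the covariance structure is an illustrative choice, not the paper's general dynamic stochastic regression model.

import numpy as np

rng = np.random.default_rng(5)
n = 100
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
beta = np.array([5.0, 0.3])

# Assumed error covariance: AR(1) correlation with increasing scales.
rho = 0.7
sd = 0.5 + 0.02 * np.arange(n)
corr = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Omega = np.outer(sd, sd) * corr
y = X @ beta + rng.multivariate_normal(np.zeros(n), Omega)

Oinv = np.linalg.inv(Omega)
b_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)   # GLS estimator
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)             # OLS for comparison
print("OLS:", np.round(b_ols, 3), " GLS:", np.round(b_gls, 3))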

19.
In this paper, we present general classes of factor screening designs for identifying a few important factors from a list of m (≥ 3) factors, each at three levels. A design is a subset of the 3^m possible runs. The problem of finding designs with a small number of runs is considered here. A main effect plan requires at least (2m + 1) runs for estimating the general mean and the linear and quadratic effects of the m factors. An orthogonal main effect plan requires, in addition, that the number of runs be a multiple of 9. For example, when m = 5, a main effect plan requires at least 11 runs and an orthogonal main effect plan requires 18 runs. The two general factor screening designs presented here are nonorthogonal designs with (2m − 1) runs. These designs, called search designs, permit us to search for and identify at most two important factors out of the m factors under the search linear model introduced in Srivastava (1975). For example, when m = 5, the two new plans given in this paper have 9 runs, which is a significant improvement over an orthogonal main effect plan with 18 runs in terms of the number of runs, and an improvement over a main effect plan with at least 11 runs. We compare these designs, for 4 ≤ m ≤ 10, using arithmetic and geometric means of the determinants, traces, and maximum characteristic roots of certain matrices. The two designs D1 and D2 are identical for m = 3, and this design is optimal in the class of all search designs under the six criteria discussed above. Designs D1 and D2 are also identical for m = 4 under some row and column permutations. Consequently, D1 and D2 are equally good for searching for and identifying one important factor out of m factors when m = 4. The design D1 is marginally better than the design D2 for searching for and identifying one important factor out of m factors when m = 5, …, 10. The design D1 is marginally better than D2 for searching for and identifying two important factors out of m factors when m = 5, 7, 9. The design D2 is somewhat better than the design D1 for m = 6, 8. For m = 10, D1 is marginally better than D2 w.r.t. the geometric mean and D2 is marginally better than D1 w.r.t. the arithmetic mean of the maximum characteristic roots.

20.
We assess the marginal predictive content of a large international dataset for forecasting GDP in New Zealand, an archetypal small open economy. We apply “data-rich” factor and shrinkage methods to efficiently handle hundreds of predictor series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. We find that exploiting a large international dataset can improve forecasts relative to data-rich approaches based on a large national dataset only, and also relative to more traditional approaches based on small datasets. This is in spite of New Zealand’s business and consumer confidence and expectations data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best in handling the international data.
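Several of the data-rich methods listed in this abstract have standard scikit-learn implementations. The Python sketch below compares three of them (principal components regression, ridge, and partial least squares) by cross-validated MSE on synthetic factor-structured data; the data, the number of factors and components, and the ridge penalty are assumptions made for illustration, not the paper's New Zealand dataset or results.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
T, N = 120, 200                              # 120 periods, 200 predictors
F = rng.normal(size=(T, 3))                  # latent common factors
X = F @ rng.normal(size=(3, N)) + rng.normal(size=(T, N))
y = F @ np.array([1.0, -0.5, 0.3]) + 0.5 * rng.normal(size=T)

models = {
    "PCA + OLS": make_pipeline(PCA(n_components=3), LinearRegression()),
    "Ridge": Ridge(alpha=50.0),
    "PLS": PLSRegression(n_components=3),
}
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name:10s} CV MSE: {mse:.3f}")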
