Similar Articles
1.
Postulating a linear regression of a variable of interest on an auxiliary variable, with values of the latter known for all units of a survey population, we consider appropriate ways of choosing a sample and estimating the regression parameters. Recalling Thomsen’s (1978) results on the non-existence of ‘design-cum-model’ based minimum variance unbiased estimators of regression coefficients, we apply Brewer’s (1979) ‘asymptotic’ analysis to derive ‘asymptotic-design-cum-model’ based optimal estimators, assuming large population and sample sizes. A variance estimation procedure is also proposed.

2.
Liu, Shuangzhe; Neudecker, Heinz. Metrika 1997, 45(1):53–66
Extending Scheffé’s simplex-centroid design for experiments with mixtures, we introduce a weighted simplex-centroid design for a class of mixture models. Becker’s homogeneous functions of degree one belong to this class. By applying optimal design theory, we obtain A-, D- and I-optimal allocations of observations for Becker’s models.
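The weighted design itself is not reproduced in the abstract, but the classical Scheffé simplex-centroid design it extends is easy to enumerate; a minimal Python sketch (the function name is mine, and this is the plain, unweighted design, not the paper's weighted variant):

```python
from itertools import combinations

def simplex_centroid(q):
    """Classical Scheffé simplex-centroid design for q mixture components.

    For every non-empty subset S of the q components, the design contains
    one blend with equal proportions 1/|S| on the components in S and 0
    elsewhere -- 2**q - 1 design points in total.
    """
    points = []
    for r in range(1, q + 1):
        for subset in combinations(range(q), r):
            point = [0.0] * q
            for i in subset:
                point[i] = 1.0 / r
            points.append(tuple(point))
    return points

design = simplex_centroid(3)
# 7 blends: 3 pure components, 3 binary midpoints, 1 overall centroid
```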

3.
We analyse the state of the art in the field of life cycle portfolio choice, a recent strand of the literature on intertemporal portfolio selection. Life cycle models are designed to identify optimal savings and portfolio policies over the lifetime of investors. They can help to improve pension schemes by showing how these could be specifically tailored to the individual employee’s circumstances to overcome the ‘one-size-fits-all’ philosophy still prevailing in parts of the mandatory retirement savings system. To facilitate comparison, we first describe set-up, solution method and characteristic results for a basic model and then derive a general framework to classify existing contributions. We highlight the models’ strengths and weaknesses and assess their ability to resolve existing portfolio puzzles. Lessons from the literature are summarized and promising areas for further research identified. JEL classifications: G11, D14, D91, H55

4.
In this paper, we employed SAS PROC NLMIXED (the nonlinear mixed model procedure) to analyze three example data sets having inflated zeros. The examples include data with and without covariates. The covariates utilized in this article have binary outcomes to simplify our analysis; of course, the analysis can readily be extended to situations with several covariates having multiple levels. Models fitted include the Poisson (P), the negative binomial (NB), the generalized Poisson (GP), and their zero-inflated variants, namely the ZIP, the ZINB and the ZIGP models respectively. Parameter estimates as well as the appropriate goodness-of-fit statistic (the deviance D) are computed, and in some cases the Pearson X2 statistic, which is based on the variance of the relevant model distribution, is also computed. Also obtained are the expected frequencies for the models, and GOF tests are conducted based on the rule established by Lawal (Appl Stat 29:292–298, 1980). Our results extend, and are very consistent with, previous analyses of the data sets chosen for this article. We also present a hierarchical figure relating all the models employed in this paper. While the results obtained are not entirely new, the analyses give researchers in the field a much-needed means of implementing these models in SAS without having to resort to S-PLUS, R or Stata.
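As a point of reference for the simplest of these models, the zero-inflated Poisson mixes a point mass at zero with an ordinary Poisson count; a minimal sketch of its pmf and log-likelihood (not the SAS PROC NLMIXED implementation the abstract uses; function names are mine):

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a structural
    zero; otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    if y == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson

def zip_loglik(counts, lam, pi):
    """Log-likelihood of observed counts under the ZIP(lam, pi) model."""
    return sum(math.log(zip_pmf(y, lam, pi)) for y in counts)
```

Maximizing `zip_loglik` over `lam` and `pi` (e.g. with a general-purpose optimizer) reproduces what NLMIXED does when given this likelihood.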

5.
In this paper, we study the consumption, labor supply, and portfolio decisions of an infinitely lived individual who receives a certain wage rate and income from investment in a risky asset and a risk-free bond. Uncertainty about labor income arises endogenously, because labor supply evolves randomly over time in response to changes in financial wealth. We derive closed-form solutions for the optimal consumption, labor supply and investment strategy. We find that deferring the retirement age stimulates optimal consumption over time and discourages optimal labor supply during the working life. We also find explicitly that the optimal portfolio allocation becomes more ‘conservative’ as the individual approaches his prescribed retirement age. The effects of risk-aversion coefficients on optimal decisions are examined.
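The paper’s closed-form rules are not reproduced in the abstract; as a baseline, the classical Merton constant risky share for CRRA utility, which such labor-augmented rules generalize, can be computed as follows (a textbook formula, not the authors’ solution):

```python
def merton_risky_share(mu, r, sigma, gamma):
    """Merton (1969) constant share of wealth in the risky asset for an
    investor with CRRA relative risk aversion gamma: the excess return
    divided by gamma times the return variance."""
    return (mu - r) / (gamma * sigma ** 2)

share = merton_risky_share(mu=0.08, r=0.02, sigma=0.20, gamma=3.0)
# 0.06 / (3 * 0.04) = 0.5, i.e. half of wealth in the risky asset
```

Raising `gamma` lowers the share, the same direction as the ‘conservative’ shift near retirement that the abstract describes.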

6.
We consider the ability to detect interaction structure from data in a regression context. We derive an asymptotic power function for a likelihood-based test for interaction in a regression model, with possibly misspecified alternative distribution. This allows a general investigation of different types of interactions which are poorly or well detected via data. Principally we contrast pairwise-interaction models with ‘diffuse interaction models’ as introduced in Gustafson et al. (Stat Med 24:2089–2104, 2005).

7.
The paper is a preliminary research report and presents a method for generating new records using an evolutionary algorithm (close to, but different from, a genetic algorithm). This method, called the Pseudo-Inverse Function (P-I Function for short), was designed and implemented at the Semeion Research Centre (Rome). The P-I Function generates new (virtual) data from a small set of observed data. It can be of aid when budget constraints limit the number of interviewees, when a population shows some sociologically interesting trait but its small size can seriously affect the reliability of estimates, or in secondary analysis on small samples. The applicative ground is a research design with one or more dependent variables and a set of independent variables. The estimation of new cases proceeds by maximizing a fitness function and yields as many ‘virtual’ cases as needed, which reproduce the statistical traits of the original population. The algorithm used by the P-I Function is the Genetic Doping Algorithm (GenD), designed and implemented by the Semeion Research Centre; among its features is an innovative crossover procedure, which tends to select individuals with average fitness values rather than those showing the best values at each ‘generation’. A particularly thorough research design has been adopted: (1) the observed sample is split in half to obtain a training and a testing set, which are analysed by means of a back-propagation neural network; (2) testing is performed to find out how good the parameter estimates are; (3) a 10% sample is randomly extracted from the training set and used as a reduced training set; (4) on this narrow basis, GenD calculates the pseudo-inverse of the estimated parameter matrix; (5) the ‘virtual’ data are tested against the testing data set (which has never been used for training).
The algorithm has been tested on particularly difficult ground, since the data set used as a basis for generating ‘virtual’ cases counts only 44 respondents, randomly sampled from a broader data set taken from the General Social Survey 2002. The major result is that networks trained on the ‘virtual’ resample show a model fit as good as that of the observed data, though ‘virtual’ and observed data differ on some features. It can be seen that GenD ‘refills’ the joint distribution of the independent variables, conditioned by the dependent one. This paper is the result of close collaboration among all the authors. Cinzia Meraviglia wrote §1, 3, 4, 6, 7 and 8; Giulia Massini wrote §5; Daria Croce performed some elaborations with neural networks and linear regression; Massimo Buscema wrote §2.

8.
A. I. Khuri. Metrika 1990, 37(1):217–231
Summary. Box and Draper (1965) introduced a criterion for the estimation of parameters from a multiresponse model. This criterion can lead to misleading results in the presence of linear relationships among the responses. Box et al. (1973) proposed a procedure for detecting the existence of such relationships when the multiresponse data are subject to round-off errors. The procedure is applicable only when the round-off errors are identically distributed as uniform random variates. In many situations, however, the round-off errors can vary considerably. For example, the responses may have distinct physical meanings, and hence distinct scales of measurement with possibly widely different orders of magnitude. This article extends Box et al.’s procedure to cover these situations. Two examples are given to illustrate the implementation of the proposed methodology.

9.
Do Sun Bai; Min Koo Lee. Metrika 1996, 44(1):53–69
Economic designs of single and double screening procedures for improving outgoing product quality based on two screening variables are presented for the case of two-sided specification limits. In the single screening procedure, the two screening variables are observed simultaneously. In the double screening procedure, one variable is used first to make one of three decisions (accept, reject, or undecided) and, after the first screening, the second variable is employed to screen the undecided items. It is assumed that the performance variable and the two screening variables are jointly normally distributed, and that deviation of the performance variable from the ‘ideal’ value causes dissatisfaction to consumers. Two quality cost functions, constant and quadratic, are considered. Cost models are constructed which involve the screening inspection cost and the costs of accepted and rejected items. Methods of finding the optimal cutoff values are presented and a numerical example is given.

10.
This paper discusses ways forward in terms of making efficiency measurement in the area of health care more useful. Options are discussed in terms of the potential introduction of guidelines for undertaking studies in this area, in order to make them more useful to policy makers and those involved in service delivery. The process of introducing such guidelines is discussed using the example of the development of guidelines in the economic evaluation of health technologies. This presents two alternative ways forward: ‘revolution’, the establishment of a panel to draw up initial guidelines, or ‘evolution’, the more gradual development of such guidelines over time. The third alternative of ‘status quo’, representing the current state of play, is seen as the base-case scenario. It is concluded that although we are quite a way on in terms of techniques and publications, perhaps revolution, followed by evolution, is the way forward.

11.
Warner’s (J Am Stat Assoc 60:63–69, 1965) pioneering ‘randomized response’ (RR) technique (RRT), useful in unbiasedly estimating the proportion of people bearing a sensitive characteristic, is based exclusively on simple random sampling (SRS) with replacement (WR). Numerous developments that follow from it also use SRSWR and employ in estimation the mean of the RRs yielded by each person every time he/she is drawn into the sample. We examine how the accuracy in estimation alters when instead we use the mean of the RRs from each distinct person sampled, and alternatively when Horvitz and Thompson’s (HT, J Am Stat Assoc 47:663–685, 1952) method is employed for an SRSWR eliciting only a single RR from each person. Arnab’s (1999) approach of using repeated RRs from each distinct person sampled is not taken up here, to avoid algebraic complexity. Pathak’s (Sankhyā 24:287–302, 1962) results for direct response (DR) from SRSWR are modified with the use of such RRs. Through simulations we present the relative levels of accuracy in estimation of the three alternative methods.
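For reference, Warner’s original SRSWR estimator, which the developments above build on, is easy to state in code; a minimal sketch (the function name is mine; the variance estimator shown is the standard unbiased one under SRSWR):

```python
def warner_estimate(n_yes, n, p):
    """Warner (1965) randomized-response estimator.

    Each respondent uses a randomizing device that, with probability p,
    poses the question "Do you bear the sensitive trait A?" and, with
    probability 1 - p, poses its complement; only the yes/no answer is
    reported. With lam = observed 'yes' proportion, the unbiased estimator
    of the sensitive proportion pi is (lam - (1 - p)) / (2p - 1), p != 1/2.
    """
    lam = n_yes / n
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    # Unbiased estimator of Var(pi_hat) = Var(lam) / (2p - 1)**2:
    var_hat = lam * (1 - lam) / ((n - 1) * (2 * p - 1) ** 2)
    return pi_hat, var_hat

pi_hat, var_hat = warner_estimate(n_yes=60, n=100, p=0.7)
# lam = 0.6, so pi_hat = (0.6 - 0.3) / 0.4 = 0.75
```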

12.
In hazard models, it is assumed that all heterogeneity is captured by a set of theoretically relevant covariates. In many applications, however, there are ample reasons for unobserved heterogeneity due to omitted or unmeasured factors. If there is unmeasured frailty, the hazard will be a function not only of the covariates but also of the unmeasured frailty. This paper discusses the implications of unobserved heterogeneity for parameter estimates, with an application to the analysis of the effect of infant death on subsequent birth timing in Ghana and Kenya using DHS data. Using lognormal accelerated failure time models with and without frailty, we found that standard models that do not control for unobserved heterogeneity produced biased estimates by overstating the degree of positive dependence and underestimating the degree of negative dependence. The implications of the findings are discussed.

13.
Summary. A natural conjugate prior distribution for the parameters involved in the noncentral chi-square leads to many known distributions. The applications of the distributions thus obtained are briefly pointed out in evaluating the ‘kill’ probability in the analysis of weapon systems effectiveness. The ‘kill’ probabilities, or the expected coverage, are obtained with a gamma prior distribution and compared with those obtained by McNolty. This paper was read at a symposium on Mathematical Sciences held under the auspices of Delhi University, Delhi, in January 1966.

14.
Abstract

This paper develops a unified framework for fixed effects (FE) and random effects (RE) estimation of higher-order spatial autoregressive panel data models with spatial autoregressive disturbances and heteroscedasticity of unknown form in the idiosyncratic error component. We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define both an RE and an FE spatial generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators and derive their joint asymptotic distribution, which is robust to heteroscedasticity of unknown form in the idiosyncratic error component. Finally, we derive a robust Hausman test of the spatial random against the spatial FE model.

15.
The Optimality of Single-group Designs for Certain Mixed Models
Thomas Schmelter. Metrika 2007, 65(2):183–193
In this paper, optimal designs for the estimation of the fixed effects (population parameters) in a certain class of mixed models are investigated. Two classes of designs are compared: the class of single-group designs, where all individuals are observed under the same approximate design, and the class of more-group designs with the same mean number of observations per individual as before, where each individual can be observed under a different approximate design. It is shown that any design that is Φ-optimal in the class of single-group designs is also Φ-optimal in the larger class of more-group designs. The optimality criteria considered only have to satisfy mild assumptions, which is the case, for example, for the D-criterion and all linear criteria.
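The D-criterion mentioned above scores a design by the determinant of its information matrix; a minimal sketch for a simple linear model, outside the mixed-model setting of the paper (function name and example designs are mine):

```python
import numpy as np

def d_criterion(design_points):
    """D-criterion value det(X'X / n) for the simple linear model
    E[y] = b0 + b1*x observed at the given one-dimensional design points.
    Larger is better: the D-optimal design maximizes this determinant."""
    x = np.asarray(design_points, dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    return float(np.linalg.det(X.T @ X / len(x)))

# On [-1, 1], splitting observations equally between the two endpoints
# is D-optimal for a straight line; spreading mass to the center loses:
d_endpoints = d_criterion([-1, -1, 1, 1])   # det(I) = 1.0
d_spread = d_criterion([-1, 0, 0, 1])       # det(diag(1, 0.5)) = 0.5
```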

16.
Min-Hsiao Tsai. Metrika 2009, 70(3):355–367
Consider the problem of discriminating between two rival response surface models and estimating parameters in the identified model. To construct designs serving both model discrimination and parameter estimation, the Mγ-optimality criterion, which puts weight γ (0 ≤ γ ≤ 1) on model discrimination and 1 − γ on parameter estimation, is adopted. The corresponding Mγ-optimal product design is explicitly derived in terms of canonical moments. By applying the maximin principle to the Mγ-efficiency of any Mγ′-optimal product design, a criterion-robust optimal product design is proposed.

17.
Reversed hazard rates are found to be very useful in survival analysis and reliability, especially in the study of parallel systems and in the analysis of left-censored lifetime data. In this paper, we derive a class of bivariate distributions having marginal proportional reversed hazard rates. We then introduce a class of proportional reversed hazard rate frailty models and propose a multivariate correlated gamma frailty model. Bivariate reversed hazard rates and an association measure are discussed in terms of the frailty parameters.

18.
Estimation of the scale matrix of a multivariate t-model under entropy loss
This paper deals with the estimation of the scale matrix of a multivariate t-model with unknown location vector and scale matrix, to improve upon the usual estimators based on the sample sum-of-products matrix. The well-known results on the estimation of the scale matrix of the multivariate normal model under the entropy loss function are generalized to the multivariate t-model. The paper is based on the first author’s unpublished Ph.D. dissertation, ‘Estimation of the Scale Matrix of a Multivariate T-model’, University of Western Ontario, Canada. Present address: School of Mathematics and Statistics, The University of Sydney, NSW 2006, Australia.

19.
Chen, Shun-Hsing. Quality and Quantity 2012, 46(4):1279–1296
This study builds on the SERVQUAL model and the Plan-Do-Check-Action (P-D-C-A) cycle of TQM to establish a higher education quality management system. The system’s ‘Plan’ and ‘Do’ dimensions comprise ten execution factors, and the two dimensions complement each other; the ‘Check’ dimension has four factors and the ‘Action’ dimension has three. Each execution factor of the quality management system can reduce the occurrence of the five gaps in the SERVQUAL model and help education providers deliver better service quality. Education providers strengthen an education system through careful planning and the implementation of quality auditing and continuous improvement. The intended result of this study is a more explicit framework for the higher education industry and a proper service for students.

20.
We use daily price data from the Egyptian stock market and a ‘loser’ portfolio of 20 IPOs from the late 1990s that experienced dramatic one-day price falls in the period 2004 to 2007 to estimate a two-way fixed effects model of CARs (cumulative abnormal returns). Observable covariates are company size and turnover growth; the unobservables are company and period fixed effects. Our results provide evidence of significant price reversal over the first 40 post-event days. Firm size is negatively correlated with post-event CARs, consistent with the argument that small firms have a stronger tendency to price-reverse due to greater informational opacity. But permanent, unobservable company-specific factors account for a much larger percentage of the post-event variation in stock prices and indicate an underlying heterogeneity in investor responses to initial price falls not uncovered before in the literature. Strong negative company effects following a price fall are found to presage reinforcing ‘long-term’ price falls, and strong positive company effects to presage countervailing ‘long-term’ price reversals. At the extremes these company effects are sufficiently large to suggest that a trading strategy based on them would be profitable.
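The dependent variable here, the cumulative abnormal return, is simply a running sum of realized minus expected returns over the post-event window; a minimal sketch (how expected returns are modeled, e.g. via the two-way fixed effects specification, is the paper’s contribution and is not reproduced; function name is mine):

```python
def cumulative_abnormal_returns(returns, expected_returns):
    """Event-study CARs: the abnormal return on each post-event day is the
    realized return minus the expected return, and CAR_t is the running
    sum of abnormal returns from the event day through day t."""
    cars, total = [], 0.0
    for realized, expected in zip(returns, expected_returns):
        total += realized - expected
        cars.append(total)
    return cars

cars = cumulative_abnormal_returns(
    returns=[0.02, -0.01, 0.03],
    expected_returns=[0.01, 0.01, 0.01],
)
# abnormal returns 0.01, -0.02, 0.02 accumulate to 0.01, -0.01, 0.01
```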
