Similar Documents
 20 similar documents found
1.
Survey statisticians use either approximate or optimisation-based methods to stratify finite populations. Examples of the former are the cumrootf (Dalenius & Hodges, 1957) and geometric (Gunning & Horgan, 2004) methods, while examples of the latter are the Sethi (1963) and Kozak (2004) algorithms. The approximate procedures result in inflexible stratum boundaries; this lack of flexibility makes the boundaries non-optimal. Optimisation-based methods, on the other hand, provide stratum boundaries that can simultaneously account for (i) a chosen allocation scheme, (ii) the overall sample size or the required reliability of the estimator of a studied parameter and (iii) the presence or absence of a take-all stratum. Given these additional conditions, optimisation-based methods yield optimal boundaries. Their only disadvantage is their complexity; in the second decade of the 21st century, however, this complexity no longer poses a practical problem. We illustrate how the two groups of methods differ by comparing their efficiency on two artificial populations and one real population. Our final point is that statistical offices should prefer optimisation-based over approximate stratification methods; such a decision will help them either save considerable public money or, if funds are already allocated to a survey, obtain more precise estimates of national statistics.
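
A minimal Python sketch of the cumrootf (cumulative root frequency) rule mentioned above may help fix ideas: bin the stratification variable, accumulate the square roots of the bin frequencies, and cut the cumulative scale into equal parts. The function name, the bin count and the toy lognormal population are illustrative assumptions, not material from the paper.

```python
import numpy as np

def cum_root_f_boundaries(x, n_strata, n_bins=100):
    """Dalenius-Hodges cum-sqrt(f) rule: histogram the stratification
    variable, accumulate sqrt(frequency), and place boundaries where
    the cumulative scale is divided into equal intervals."""
    counts, edges = np.histogram(x, bins=n_bins)
    csf = np.cumsum(np.sqrt(counts))                    # cumulative sqrt(f)
    targets = csf[-1] * np.arange(1, n_strata) / n_strata
    idx = np.searchsorted(csf, targets)                 # first bin reaching each target
    return edges[idx + 1]                               # upper edge of that bin

rng = np.random.default_rng(1)
pop = rng.lognormal(mean=10, sigma=1, size=10_000)      # a skewed toy population
print(cum_root_f_boundaries(pop, n_strata=4))           # three interior boundaries
```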

2.
A General Algorithm for Univariate Stratification
This paper presents a general algorithm for constructing strata in a population using X, a univariate stratification variable known for all the units in the population. Stratum h consists of all the units with an X value in the interval [b_{h-1}, b_h). The stratum boundaries {b_h} are obtained by minimizing the anticipated sample size for estimating the population total of a survey variable Y with a given level of precision. The stratification criterion allows the presence of a take-none and of a take-all stratum. The sample is allocated to the strata using a general rule that features proportional allocation, Neyman allocation, and power allocation as special cases. The optimization can take into account a stratum-specific anticipated non-response and a model for the relationship between the stratification variable X and the survey variable Y. A loglinear model with stratum-specific mortality for Y given X is presented in detail. Two numerical algorithms for determining the optimal stratum boundaries, attributable to Sethi and Kozak, are compared in a numerical study. Several examples illustrate the stratified designs that can be constructed with the proposed methodology. All the calculations presented in this paper were carried out with stratification, an R package that will be available on CRAN (Comprehensive R Archive Network).
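
The general allocation rule itself is not reproduced in the abstract; the sketch below, an assumed parametrisation of my own, shows how a single exponent can nest proportional allocation (q = 0) and Neyman allocation (q = 1) as special cases.

```python
import numpy as np

def allocate(N_h, S_h, n, q=1.0):
    """Allocate a total sample size n to strata of sizes N_h with
    standard deviations S_h; the exponent q interpolates between
    proportional (q = 0) and Neyman (q = 1) allocation."""
    w = N_h * S_h**q
    return n * w / w.sum()

N_h = np.array([5000.0, 3000.0, 1000.0])
S_h = np.array([1.0, 4.0, 12.0])
print(allocate(N_h, S_h, n=500, q=1.0))   # Neyman: more sample to variable strata
```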

3.
Optimal allocation of the sample size to strata under box constraints
In stratified random sampling without replacement, boundary conditions, such as the requirement that the sample size within each stratum not exceed the corresponding population size, have to be considered. Stenger and Gabler (Metrika, 61:137–156, 2005) have given a solution that satisfies upper bounds on the sampling fractions within the strata. In modern applications, however, one may also wish to guarantee minimal sampling fractions within strata in order to allow reasonable separate estimation. In this paper, an optimal allocation in the Neyman-Tschuprov sense is developed which satisfies upper and lower bounds on the sample sizes within strata. Further, a stable algorithm is given which ensures optimality. The resulting sample allocation enables users to bound design weights in stratified random sampling while preserving optimality of the allocation.
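
One simple way to honour such box constraints, sketched below under assumed names, is to compute the Neyman allocation, fix every stratum that violates a bound at that bound, and re-allocate the remaining sample among the free strata until no violations remain; the paper's stable algorithm may differ in detail.

```python
import numpy as np

def neyman_boxed(N_h, S_h, n, m_h, M_h):
    """Neyman allocation of total size n subject to per-stratum lower
    bounds m_h and upper bounds M_h, via iterative fixing at bounds."""
    n_h = np.zeros(len(N_h))
    free = np.ones(len(N_h), dtype=bool)
    while free.any():
        w = N_h[free] * S_h[free]
        alloc = (n - n_h[~free].sum()) * w / w.sum()    # Neyman over free strata
        lo, hi = alloc < m_h[free], alloc > M_h[free]
        if not (lo.any() or hi.any()):
            n_h[free] = alloc
            break
        idx = np.where(free)[0]
        n_h[idx[lo]], n_h[idx[hi]] = m_h[idx[lo]], M_h[idx[hi]]
        free[idx[lo]] = free[idx[hi]] = False            # fix violators at bounds
    return n_h

N_h = np.array([5000.0, 3000.0, 1000.0])
S_h = np.array([1.0, 4.0, 12.0])
print(neyman_boxed(N_h, S_h, n=500,
                   m_h=np.array([50.0, 50.0, 50.0]),
                   M_h=np.array([400.0, 400.0, 200.0])))
```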

4.
Summary  This paper reviews and summarizes recent contributions on some important aspects of stratified sample design, namely the determination of optimum stratification points and the choice of the number of strata. After a brief discussion of the fundamental theoretical contribution by Dalenius, a number of approximate methods or rules, proposed because of the heavy computational work involved in solving the theoretical equations for the optimum stratification points, are critically examined. Two empirical studies carried out to evaluate the performance of some of these approximate techniques are reported. Some suggestions are made on the directions in which further research is needed.

5.
We consider several approximations to n-copulas: the checkmin, checkerboard, Bernstein, and shuffle of min approximations. The checkerboard, Bernstein, and shuffle of min approximations have been studied in the n = 2 case. We investigate these constructions in arbitrary finite dimensions and consider some of the ways in which they converge or fail to converge to the original copula.
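
As a concrete instance of the simplest of these constructions, the sketch below builds the checkerboard approximation of a d-dimensional copula from pseudo-observations: a density that is constant on each cell of a uniform m^d grid. Function and variable names are illustrative.

```python
import numpy as np

def checkerboard_density(u, m):
    """Checkerboard approximation from pseudo-observations u
    (an n x d array with entries in [0, 1)): piecewise-constant
    density on a uniform m^d grid."""
    n, d = u.shape
    cells = np.minimum((u * m).astype(int), m - 1)   # cell index along each axis
    hist = np.zeros((m,) * d)
    np.add.at(hist, tuple(cells.T), 1.0)             # count points per cell
    return hist / n * m**d     # cell probability divided by cell volume 1/m^d

rng = np.random.default_rng(0)
u = rng.random((50_000, 3))                  # sample from the independence copula
dens = checkerboard_density(u, m=4)
print(dens.min(), dens.max())                # both near 1 for independence
```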

6.
State space models play an important role in macroeconometric analysis, and the Bayesian approach has been shown to have many advantages. This paper outlines recent developments in state space modelling applied to macroeconomics using Bayesian methods. We outline the directions of recent research, specifically the problems being addressed and the solutions proposed. After presenting a general form for the linear Gaussian model, we discuss the interpretations and virtues of alternative estimation routines and their outputs. This discussion includes the Kalman filter and smoother, and precision-based algorithms. As the advantages of using large models have become better understood, a focus has developed on dimension reduction and on computational advances to cope with high-dimensional parameter spaces. We give an overview of a number of recent advances in these directions. Many models suggested by economic theory are either non-linear or non-Gaussian, or both. We discuss work on the particle filtering approach to such models, as well as other techniques that use various approximations, either to the state and measurement equations or to the full posterior for the states, to obtain draws.
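
For readers new to the area, here is a textbook sketch of the Kalman filter for the linear Gaussian model, in the notation y_t = Z a_t + eps_t (eps_t ~ N(0, H)) and a_{t+1} = T a_t + eta_t (eta_t ~ N(0, Q)); this is background material, not code from the surveyed papers.

```python
import numpy as np

def kalman_filter(y, Z, H, T, Q, a0, P0):
    """Kalman filter returning the filtered state means E[a_t | y_1..y_t]."""
    a, P = a0, P0
    filtered = []
    for yt in y:
        v = yt - Z @ a                      # one-step prediction error
        F = Z @ P @ Z.T + H                 # prediction error variance
        K = P @ Z.T @ np.linalg.inv(F)      # Kalman gain
        a, P = a + K @ v, P - K @ Z @ P     # filtering update
        filtered.append(a)
        a, P = T @ a, T @ P @ T.T + Q       # time update (prediction)
    return np.array(filtered)

# local level model: Z = T = 1 in one dimension
rng = np.random.default_rng(4)
y = np.cumsum(rng.standard_normal(100)).reshape(-1, 1)
f = kalman_filter(y, Z=np.eye(1), H=np.eye(1), T=np.eye(1),
                  Q=0.1 * np.eye(1), a0=np.zeros(1), P0=np.eye(1))
```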

7.
Excess volatility and regression tests have resulted in apparent rejections of the present-value relation when ex-post price approximations are employed. These approximations are based upon a sample terminal condition for prices, are not ergodic time series, and do not result in statistics with readily calculable standard errors. Kleidon (1986a) has demonstrated that ex-post price approximations can subtly affect the reliability of certain volatility tests. We use a bootstrapped cointegration model to demonstrate some of these same effects in Mankiw, Romer and Shapiro's (1985) volatility statistics. The volatility statistics rarely have positive expected value in finite samples and still do not reject the present-value relation. Approximations based upon a 'rolling' terminal condition result in volatility statistics that have calculable large-sample standard errors, but even these standard errors greatly overstate the accuracy of volatility statistics in small samples. Regression tests of the present-value relation are also affected by the price approximations.

8.
When two surveys carried out separately in the same population have common variables, it may be desirable to adjust each survey's weights so that they give equal estimates for the common variables. This problem has been studied extensively and is often referred to as alignment or numerical consistency. We develop a design-based empirical likelihood approach for alignment and for the estimation of complex parameters defined by estimating equations. We focus on the general case in which a single set of adjusted weights, applicable to both common and non-common variables, is produced for each survey. The main contribution of the paper is to show that the empirical log-likelihood ratio statistic is pivotal in the presence of alignment constraints. This pivotal statistic can be used to test hypotheses and derive confidence regions. Hence, the empirical likelihood approach proposed for alignment possesses the self-normalisation property under a design-based approach. The proposed approach accommodates large sampling fractions, stratification and population-level auxiliary information. It is particularly well suited for inference about small domains when data are skewed. It includes implicit adjustments when the samples differ considerably in size. The confidence regions are constructed without the need for variance estimates, joint-inclusion probabilities, linearisation or re-sampling.
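
To make the empirical likelihood machinery concrete, the sketch below computes EL weights that force one sample to reproduce given targets for the common variables, which is the essence of alignment. It is a deliberately simplified single-sample version: design weights are ignored, the Newton iteration has no safeguards, and all names are mine.

```python
import numpy as np

def el_weights(x, t, iters=50):
    """Empirical likelihood weights p maximizing sum(log p_i) subject to
    sum_i p_i = 1 and sum_i p_i x_i = t. Standard Lagrangian solution:
    p_i = 1 / (n * (1 + lam' (x_i - t))), with lam found by Newton."""
    u = x - t
    lam = np.zeros(x.shape[1])
    for _ in range(iters):
        d = 1.0 + u @ lam
        grad = (u / d[:, None]).sum(axis=0)       # sum_i u_i / d_i
        hess = -(u.T / d**2) @ u                  # -sum_i u_i u_i' / d_i^2
        lam -= np.linalg.solve(hess, grad)        # Newton step
    return 1.0 / (len(x) * (1.0 + u @ lam))

x = np.random.default_rng(2).normal(size=(200, 2))
p = el_weights(x, t=np.array([0.1, -0.1]))
print(p @ x)        # the weighted means now hit the targets (0.1, -0.1)
```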

9.
We evaluate the performance of several volatility models in estimating one-day-ahead Value-at-Risk (VaR) of seven stock market indices under a number of distributional assumptions. Because all return series exhibit volatility clustering and long-range memory, we examine GARCH-type models, including fractionally integrated models, under normal, Student-t and skewed Student-t distributions. Consistent with the idea that the accuracy of VaR estimates is sensitive to the adequacy of the volatility model used, we find that the AR(1)-FIAPARCH(1,d,1) model under a skewed Student-t distribution outperforms all the models we considered, including widely used ones such as GARCH(1,1) or HYGARCH(1,d,1). The superior performance of the skewed Student-t FIAPARCH model holds for all stock market indices and for both long and short trading positions. Our findings can be explained by the fact that the skewed Student-t FIAPARCH model jointly accounts for the salient features of financial time series: fat tails, asymmetry, volatility clustering and long memory. In the same vein, because it fails to account for most of these stylized facts, the RiskMetrics model provides the least accurate VaR estimates. Our results corroborate the calls for the use of more realistic assumptions in financial modelling.
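
As background for how such one-day-ahead VaR numbers are produced, here is a minimal sketch with a plain GARCH(1,1) and a symmetric standardised Student-t; the skewed Student-t of the paper replaces the symmetric quantile with an asymmetric one, and the parameter values below are placeholders for what maximum likelihood estimation (omitted here) would deliver.

```python
import numpy as np
from scipy import stats

def garch_filter(r, omega, alpha, beta):
    """GARCH(1,1) variance recursion: s2_t = omega + alpha*r_{t-1}^2 + beta*s2_{t-1}."""
    s2 = np.empty(len(r))
    s2[0] = r.var()                           # initialise at the sample variance
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1]**2 + beta * s2[t - 1]
    return s2

def var_next_day(r, omega, alpha, beta, nu, level=0.01):
    """One-day-ahead parametric VaR with unit-variance Student-t innovations."""
    s2 = garch_filter(r, omega, alpha, beta)
    s2_next = omega + alpha * r[-1]**2 + beta * s2[-1]
    q = stats.t.ppf(level, nu) * np.sqrt((nu - 2) / nu)   # standardised t quantile
    return np.sqrt(s2_next) * q

r = 0.01 * np.random.default_rng(3).standard_normal(1000)  # placeholder returns
print(var_next_day(r, omega=1e-6, alpha=0.08, beta=0.90, nu=8))
```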

10.
《Statistica Neerlandica》1948,2(1-2):40-54
Summary  (A necessary correction of the control chart limits for averages of samples in the case of stratified sampling).
Application of stratified sampling results in smaller sampling fluctuations than when the same total number of individuals is drawn at random from the superposed strata.
The ratio of the standard errors of the averages obtained by these two sampling methods may be expressed by a factor φ (0 ≤ φ ≤ 1). The probability limits between which the random-sampling results would normally fluctuate should be corrected according to this factor.
A few properties of φ are discussed and illustrated graphically. Remarks are added about the relation between the shape of the distribution of the sample averages and the population from which the individuals are drawn, and about the difficulties that arise when the populations are non-Gaussian.
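
A small numerical illustration of φ, assuming proportional allocation and ignoring finite-population corrections (all figures invented):

```python
import numpy as np

# phi = SE(stratified mean) / SE(simple-random-sample mean), same total n
N_h  = np.array([600.0, 400.0])    # stratum sizes
S_h  = np.array([2.0, 8.0])        # within-stratum standard deviations
mu_h = np.array([10.0, 30.0])      # stratum means
n = 100
W_h = N_h / N_h.sum()
n_h = n * W_h                      # proportional allocation

var_strat = np.sum(W_h**2 * S_h**2 / n_h)
mu = np.sum(W_h * mu_h)
# total variance = within-stratum part + between-stratum part
var_srs = (np.sum(W_h * S_h**2) + np.sum(W_h * (mu_h - mu)**2)) / n
phi = np.sqrt(var_strat / var_srs)
print(phi)    # about 0.48 here: the control limits shrink by this factor
```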

11.
M. J. Ahsan  S. U. Khan 《Metrika》1982,29(1):71-78
The problem of allocating sample sizes to the strata in multivariate stratified surveys, where, apart from the cost of enumerating the selected individuals, there is an overhead cost associated with each stratum, is formulated as a non-linear programming problem. The variances of the posterior distributions of the means of the various characters are subject to constraints, and the total cost is minimized. The main problem is broken into subproblems, for each of which the objective function turns out to be convex. When the number of subproblems is large, an approach is indicated for obtaining an approximate solution by solving only a small number of subproblems.

12.
Lawrence Lessner 《Socio》2008,42(4):271-285
As the AIDS epidemic developed in the United States, emphasis turned from estimates of HIV incidence in male populations to estimates of HIV prevalence in the larger population, and to estimates of prevalence in sentinel populations. The concern was that heterosexual contact with male intravenous drug users was responsible for increased HIV prevalence among young women. Increases in HIV infection in women of childbearing age would result in increases in the number of HIV-infected children. Thus, women of childbearing age would become an especially important sentinel population. The Newborn Seroprevalence Project was part of a national population-based survey to assess HIV prevalence among women of childbearing age. Blood from every newborn was obtained and tested for the HIV antibody. The observed proportion of seropositive births during a given year in a given age/race/region stratum was used to estimate the HIV prevalence of all women in that stratum. The purpose of this paper is to estimate the stratum-specific and population-wide HIV prevalence and prevalence proportions for New York City women of childbearing age in 1991, as well as to estimate the bias that results from using the sample proportion in these efforts.

13.
We consider classes of multivariate distributions which can model skewness and are closed under orthogonal transformations. We review two classes of such distributions proposed in the literature and focus our attention on a particular, yet quite flexible, subclass of one of these classes. Members of this subclass are defined by affine transformations of univariate (skewed) distributions that ensure the existence of a set of coordinate axes along which there is independence and the marginals are known analytically. The choice of an appropriate m-dimensional skewed distribution is then restricted to the simpler problem of choosing m univariate skewed distributions. We introduce a Bayesian model comparison setup for selection of these univariate skewed distributions. The analysis does not rely on the existence of moments (allowing for any tail behaviour) and uses equivalent priors on the common characteristics of the different models. Finally, we apply this framework to multi-output stochastic frontiers using data from Dutch dairy farms.
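
A minimal sampler in the spirit of this subclass, under stated assumptions: univariate skew-normal coordinates stand in for whichever univariate skewed families one would actually select, and a random orthogonal matrix supplies the coordinate axes along which independence holds (the affine location and scale parts are omitted).

```python
import numpy as np
from scipy import stats

def sample_orthogonal_skew(n, skews, rng):
    """Draw n points in m dimensions: independent skew-normal coordinates
    (shape parameters `skews`) rotated by a random orthogonal matrix."""
    m = len(skews)
    z = np.column_stack([stats.skewnorm.rvs(a, size=n, random_state=rng)
                         for a in skews])
    A, _ = np.linalg.qr(rng.standard_normal((m, m)))   # orthogonal via QR
    return z @ A.T

x = sample_orthogonal_skew(10_000, skews=[4.0, -2.0, 0.0],
                           rng=np.random.default_rng(5))
print(x.mean(axis=0))       # skewed, dependent-looking margins after rotation
```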

14.
李鹭 《价值工程》2022,(6):68-70
Full-face hard-rock strata have strong self-stability, so shield tunnelling through them carries relatively low construction safety risk; the difficulty of controlling construction quality, however, increases substantially. In water-rich full-face hard-rock strata in particular, the quality of conventional synchronous grouting cannot be effectively guaranteed, leading to quality defects such as leakage and uplift of the completed tunnel. Drawing on the construction of the jointly built utility tunnel of Shenzhen Metro Line 14, this paper addresses, for shield tunnelling in water-rich full-face hard-rock strata, the synchronous grouting…

15.
Sample surveys are widely used to obtain information about totals, means, medians and other parameters of finite populations. In many applications, similar information is desired for subpopulations, such as individuals in specific geographic areas and socio-demographic groups. When surveys are conducted at the national or a similarly high level, a probability sample may contain only a few sampling units from many subpopulations that were unplanned at the design stage. Cost considerations may also lead to low sample sizes in individual small areas. Estimating the parameters of these subpopulations with satisfactory precision and evaluating their accuracy are serious challenges for statisticians. To overcome these difficulties, statisticians pool information across the small areas via suitable model assumptions, administrative archives and census data. In this paper, we develop an array of small area quantile estimators. The novelty is the introduction of a semiparametric density ratio model for the error distribution in the unit-level nested error regression model. In contrast, the existing methods are usually most effective when the response values are jointly normal. We also propose a resampling procedure for estimating the mean square errors of these estimators. Simulation results indicate that the new methods have superior performance when the population distributions are skewed and remain competitive otherwise.

16.
Abstract. Intertemporal cost-of-living variability is analyzed for households with differing income levels and family characteristics. These indexes are based upon the parameter estimates of a comprehensive system of expenditure equations, the quadratic expenditure system. Despite considerable differences in the group-specific share parameters, as well as nonlinearities in the Engel curves for each group, little variation occurs in these indexes for several U.S. price series over the 1967–1984 period. As a result, we find little evidence that group-specific fixed-weight indexes are better cost-of-living approximations than a general Consumer Price Index, even though all substitution-bias estimates, by income and household type, are quite small.

17.
王业强 《价值工程》2021,40(3):102-103
Drawing on rock-socketed pile construction with rotary drilling rigs on a section of the Malaysia East Coast Rail Link project, and addressing the disagreements among the contractor, the supervising engineer and the subcontractor over identifying the surface of moderately weathered rock, this paper proposes combining several determination methods: consulting the geological exploration boreholes, monitoring the penetration rate of the rotary drilling rig, analysing rock samples retrieved from the pile, and performing point-load tests on the retrieved samples, so that the final determination is more convincing. In complex strata, boulders, weak interlayers, karst caves and other geological features frequently occur, which makes identifying the moderately weathered rock surface especially important for rock-socketed piles. The determination process for the moderately weathered rock layer is described in detail in the context of the rock-socketed pile construction of this project.

18.
Sequential city growth: Empirical evidence
Using two comprehensive datasets on populations of cities and metropolitan areas for a large set of countries, I present three new empirical facts about the evolution of city growth. First, the distribution of cities' growth rates is skewed to the right in most countries and decades. Second, within a country, the average rank of each decade's fastest-growing cities tends to rise over time. Finally, this rank increases faster in periods of rapid growth in urban population. These facts can be interpreted as evidence in favor of the hypothesis that historically, urban agglomerations have followed a sequential growth pattern: Within a country, the initially largest city is the first to grow rapidly for some years. At some point, the growth rate of this city slows down and the second-largest city then becomes the fastest-growing one. Eventually, the third-largest city starts growing fast as the two largest cities slow down, and so on.

19.
Urban economists and location theorists have long employed land use models with a continuum of agents distributed over a continuum of locations. However, these continuous models have been criticized on behavioral grounds, in that individual households can consume only zero amounts of land in equilibrium. Hence, the central purpose of this paper is to propose an alternative interpretation of these continuous models as limiting approximations of discrete population models. In particular, it is shown that for large population sizes, the population distributions of the classical continuous model uniformly approximate the equilibrium population distributions generated by an appropriately defined class of discrete population models.

20.
In this paper we model Value-at-Risk (VaR) for daily asset returns using a collection of parametric univariate and multivariate models of the ARCH class based on the skewed Student distribution. We show that models relying on a symmetric density for the error term underperform skewed-density models when both the left and right tails of the return distribution must be modelled. Thus, VaR for traders holding both long and short positions is not adequately modelled by the usual normal or Student distributions. We suggest using an APARCH model based on the skewed Student distribution (combined with a time-varying correlation in the multivariate case) to fully take into account the fat left and right tails of the return distribution. This allows adequate modelling of large returns on both long and short trading positions. The performance of the univariate models is assessed on daily data for three international stock indexes and three US stocks of the Dow Jones index. In a second application, we consider a portfolio of three US stocks and model its long and short VaR using a multivariate skewed Student density. Copyright © 2003 John Wiley & Sons, Ltd.
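
The long/short asymmetry can be made concrete with the Fernández and Steel construction of a skewed Student density, one common choice for such models (the paper's exact standardisation may differ). The sketch below computes the left-tail quantile relevant for long positions and the right-tail quantile relevant for short positions; with skewness parameter ξ < 1 the left tail is heavier, so the two VaR figures differ in magnitude.

```python
import numpy as np
from scipy import stats

def skewt_ppf(p, nu, xi):
    """Quantile of a Fernandez-Steel skewed Student-t with skewness xi > 0
    (xi = 1 recovers the symmetric t; xi < 1 skews the density left)."""
    p0 = 1.0 / (1.0 + xi**2)                 # probability mass left of the mode
    if p < p0:
        return stats.t.ppf(p * (1 + xi**2) / 2.0, nu) / xi
    return xi * stats.t.ppf(0.5 + (p - p0) * (1 + xi**2) / (2 * xi**2), nu)

nu, xi, sigma = 6.0, 0.85, 0.012             # illustrative parameter values
var_long  = sigma * skewt_ppf(0.05, nu, xi)  # left tail: long positions
var_short = sigma * skewt_ppf(0.95, nu, xi)  # right tail: short positions
print(var_long, var_short)                   # unequal magnitudes under skewness
```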
