Similar Literature
A total of 20 similar documents were found.
1.
王超群 《价值工程》2021,(3):188-189
In statistical process control (SPC), monitoring multivariate data remains an important and challenging problem. Nonparametric control charts are useful in SPC when knowledge of the underlying process distribution is lacking or limited, especially when the process measurements are multivariate. This paper combines the Wilcoxon rank-sum test with a generalized weighted moving average (GWMA) control scheme to construct the charting statistic, yielding a new multivariate nonparametric control chart for monitoring shifts in the location parameter of multivariate data. Theoretical and numerical studies show that the proposed chart delivers satisfactory detection performance for location shifts under arbitrary data distributions.
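The following Python sketch illustrates the univariate building block of such a scheme: a standardized Wilcoxon rank-sum statistic comparing each incoming sample with a Phase I reference sample, smoothed with generalized weighted moving average (GWMA) weights. The reference data, sample sizes and the design parameters q and alpha are illustrative assumptions and not taken from the paper, which treats the multivariate case.

import numpy as np
from scipy.stats import rankdata

def standardized_rank_sum(reference, sample):
    """Standardized Wilcoxon rank-sum statistic of `sample` against `reference`."""
    m, n = len(reference), len(sample)
    ranks = rankdata(np.concatenate([reference, sample]))
    w = ranks[m:].sum()                        # rank sum of the monitored sample
    mean_w = n * (m + n + 1) / 2.0
    var_w = m * n * (m + n + 1) / 12.0
    return (w - mean_w) / np.sqrt(var_w)

def gwma_chart(reference, samples, q=0.9, alpha=0.7):
    """GWMA-smoothed charting statistics for a stream of samples."""
    z = np.array([standardized_rank_sum(reference, s) for s in samples])
    stats = []
    for t in range(1, len(z) + 1):
        i = np.arange(1, t + 1)
        weights = q ** ((i - 1) ** alpha) - q ** (i ** alpha)   # GWMA weights
        stats.append(np.sum(weights * z[t - 1::-1]))            # newest observation first
    return np.array(stats)

rng = np.random.default_rng(0)
ref = rng.normal(size=200)                                  # in-control reference data
stream = [rng.normal(loc=0.5, size=5) for _ in range(30)]   # process with a location shift
print(gwma_chart(ref, stream)[:5])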

2.
A variance-weighted Kuiper statistic for goodness of fit is studied. The exact finite sample distribution can be obtained through modification of Noé's (1972) algorithm. Asymptotic distribution theory for the statistic is available from Jaeschke (1979) and Eicker (1979), but this theory does not lead to useful approximations with finite sample sizes less than 100. Monte Carlo power studies demonstrate that the weighted Kuiper statistic is especially sensitive to alternatives that are not stochastically ordered relative to the postulated null distribution.
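As a rough illustration of the kind of statistic described above, the sketch below computes a weighted Kuiper-type statistic for a fully specified null distribution, weighting the empirical-process deviations by sqrt(F0(1-F0)). The exact definition and the points at which the supremum is taken may differ from those used in the paper.

import numpy as np

def weighted_kuiper(x, cdf):
    x = np.sort(np.asarray(x))
    n = len(x)
    u = cdf(x)                                  # probability-integral transform under the null
    i = np.arange(1, n + 1)
    w = np.sqrt(u * (1.0 - u))                  # variance weight
    d_plus = np.max((i / n - u) / w)            # weighted one-sided deviations
    d_minus = np.max((u - (i - 1) / n) / w)
    return d_plus + d_minus                     # Kuiper combines both directions

rng = np.random.default_rng(1)
sample = rng.uniform(size=50)
print(weighted_kuiper(sample, lambda t: t))     # test against the U(0,1) null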

3.
The American Psychological Association Task Force recommended that researchers always report and interpret effect sizes for quantitative data. However, no such recommendation was made for qualitative data. Thus, the first objective of the present paper is to provide a rationale for reporting and interpreting effect sizes in qualitative research. Arguments are presented that effect sizes enhance the process of verstehen/hermeneutics advocated by interpretive researchers. The second objective of this paper is to provide a typology of effect sizes in qualitative research. Examples are given illustrating various applications of effect sizes. For instance, when conducting typological analyses, qualitative analysts only identify emergent themes; yet, these themes can be quantitized to ascertain the hierarchical structure of emergent themes. The final objective is to illustrate how inferential statistics can be utilized in qualitative data analyses. This can be accomplished by treating words arising from individuals, or observations emerging from a particular setting, as sample units of data that represent the total number of words/observations existing from that sample member/context. Heuristic examples are provided to demonstrate how inferential statistics can be used to provide more complex levels of verstehen than is presently undertaken in qualitative research.
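A small illustration of "quantitizing" emergent themes as described above: each theme is coded 0/1 per participant, and the frequency effect size of a theme is the share of participants who expressed it. The theme names and the coding matrix below are hypothetical.

import numpy as np

themes = ["workload", "support", "autonomy"]
# rows = participants, columns = themes (1 = theme present in that interview)
coding = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
])
effect_sizes = coding.mean(axis=0)               # frequency effect size per theme
for theme, es in sorted(zip(themes, effect_sizes), key=lambda t: -t[1]):
    print(f"{theme}: {es:.0%} of participants")  # hierarchical ordering of themes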

4.
We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985, Nonparametric analysis of optimizing behavior with measurement error, Journal of Econometrics 30(1/2), 445–458]. The test is based on the general Pareto–Koopmans notion of efficiency, and does not require price data. Statistical inference is based on the sampling distribution of the L norm of errors. The test statistic can be computed using a simple enumeration algorithm. The finite sample properties of the test are analyzed by means of a Monte Carlo simulation using real-world data of large EU commercial banks.

5.
Deep and persistent disadvantage is an important, but statistically rare, phenomenon in the population, and sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. Survey samples are typically designed to produce estimates of population characteristics of planned areas. The sample sizes are calculated so that the survey estimator for each of the planned areas is of a desired level of precision. However, in many instances, estimators are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This has led to the development of a class of indirect estimators that make use of information from related areas through modelling. A model is used to link similar areas to enhance the estimation of unplanned areas; in other words, they borrow strength from the other areas. Doing so improves the precision of estimated characteristics in the small area, especially in areas with smaller sample sizes. Social science researchers have increasingly employed small area estimation to provide localised estimates of population characteristics from surveys. We explore how to extend this approach within the context of deep and persistent disadvantage in Australia. We find that because of the unique circumstances of the Australian population distribution, direct estimates of disadvantage have substantial variation, but by applying small area estimation, there are significant improvements in precision of estimates.

6.
We consider exact procedures for testing the equality of means (location parameters) of two Laplace populations with equal scale parameters based on corresponding independent random samples. The test statistics are based on either the maximum likelihood estimators or the best linear unbiased estimators of the Laplace parameters. By conditioning on certain quantities we manage to express their exact distributions as mixtures of ratios of linear combinations of standard exponential random variables. This allows us to find their exact quantiles and tabulate them for several sample sizes. The powers of the tests are compared either numerically or by simulation. Exact confidence intervals for the difference of the means corresponding to those tests are also constructed. The exact procedures are illustrated via a real data example.
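The sketch below illustrates one statistic of this type: the maximum likelihood estimators of the Laplace locations are the sample medians, the common scale is estimated by pooled mean absolute deviations, and the null distribution is approximated here by Monte Carlo simulation rather than by the exact mixture representation derived in the paper.

import numpy as np

def laplace_location_stat(x, y):
    mx, my = np.median(x), np.median(y)
    pooled_scale = (np.abs(x - mx).sum() + np.abs(y - my).sum()) / (len(x) + len(y))
    return (mx - my) / pooled_scale             # location- and scale-invariant under the null

def monte_carlo_pvalue(x, y, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    observed = abs(laplace_location_stat(x, y))
    count = 0
    for _ in range(n_sim):
        xs = rng.laplace(size=len(x))           # common null: location 0, scale 1
        ys = rng.laplace(size=len(y))
        count += abs(laplace_location_stat(xs, ys)) >= observed
    return count / n_sim

rng = np.random.default_rng(2)
a = rng.laplace(loc=0.0, size=15)
b = rng.laplace(loc=1.0, size=15)
print(monte_carlo_pvalue(a, b))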

7.
Sample design and sample allocation methods are developed for random digit dialling in household telephone surveys. The proposed method is based on a two-way stratification of telephone numbers. A weighted probability proportional to size sample allocation technique is used, with auxiliary variables on the telephone coverage rates within the local telephone exchanges of each substratum. This makes the sampling design nearly "self-weighting" in residential numbers when the prior information is well assigned. A computer program generates random numbers for the local areas within the existing phone capacities. A simulation study has shown a greater sample allocation gain for the weighted probability proportional to size measures than for other sample allocation methods. The amount of dialling required to obtain the sample is less than for proportional allocation. For some methods, the gain in sample allocation is also observed to decrease as sample sizes increase.
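An illustrative sketch of coverage-weighted probability-proportional-to-size allocation across strata of telephone exchanges; the stratum sizes, coverage rates and total sample size below are hypothetical, and the rounding rule is a simple placeholder.

import numpy as np

bank_sizes = np.array([12000, 8000, 20000, 5000])   # telephone numbers in each exchange stratum
coverage = np.array([0.55, 0.40, 0.65, 0.30])       # assumed residential coverage rates
total_sample = 2000

measure = bank_sizes * coverage                      # coverage-weighted size measure
allocation = np.floor(total_sample * measure / measure.sum()).astype(int)
allocation[:total_sample - allocation.sum()] += 1    # distribute the rounding remainder
print(allocation, allocation.sum())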

8.
Bootstrapping Financial Time Series
It is well known that time series of returns are characterized by volatility clustering and excess kurtosis. Therefore, when modelling the dynamic behavior of returns, inference and prediction methods based on independent and/or Gaussian observations may be inadequate. As bootstrap methods are not, in general, based on any particular assumption about the distribution of the data, they are well suited for the analysis of returns. This paper reviews the application of bootstrap procedures for inference and prediction of financial time series. In relation to inference, bootstrap techniques have been applied to obtain the sample distribution of statistics for testing, for example, autoregressive dynamics in the conditional mean and variance, unit roots in the mean, fractional integration in volatility and the predictive ability of technical trading rules. On the other hand, bootstrap procedures have been used to estimate the distribution of returns, which is of interest, for example, for Value at Risk (VaR) models or for prediction purposes. Although the application of bootstrap techniques to the empirical analysis of financial time series is very broad, there are few analytical results on the statistical properties of these techniques when applied to heteroscedastic time series. Furthermore, there are quite a few papers where the bootstrap procedures used are not adequate.
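As a minimal example of one bootstrap scheme of the kind reviewed above, the sketch below applies a residual bootstrap to an AR(1) model of returns to approximate the sampling distribution of the estimated autoregressive coefficient. The AR(1) model and parameter values are illustrative; the paper covers a much wider range of schemes, including ones tailored to conditional heteroscedasticity.

import numpy as np

def fit_ar1(x):
    y, z = x[1:], x[:-1]
    phi = np.sum((y - y.mean()) * (z - z.mean())) / np.sum((z - z.mean()) ** 2)
    c = y.mean() - phi * z.mean()
    resid = y - c - phi * z
    return c, phi, resid - resid.mean()

def residual_bootstrap_ar1(x, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    c, phi, resid = fit_ar1(x)
    phis = []
    for _ in range(n_boot):
        eps = rng.choice(resid, size=len(x), replace=True)  # resample centred residuals
        xb = np.empty(len(x))
        xb[0] = x[0]
        for t in range(1, len(x)):                          # rebuild the series recursively
            xb[t] = c + phi * xb[t - 1] + eps[t]
        phis.append(fit_ar1(xb)[1])
    return np.percentile(phis, [2.5, 97.5])                 # bootstrap interval for phi

rng = np.random.default_rng(3)
returns = np.empty(500)
returns[0] = 0.0
for t in range(1, 500):
    returns[t] = 0.2 * returns[t - 1] + rng.normal(scale=0.01)
print(residual_bootstrap_ar1(returns))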

9.
In this paper, we provide a segmentation procedure for mean-nonstationary time series. The segmentation is obtained by casting the problem into the framework of detecting structural breaks in trending regression models in which the regressors are generated by suitably smooth functions. As test statistics we propose to use the maximally selected likelihood ratio statistics and a related statistic based on partial sums of weighted residuals. The main theoretical contribution of the paper establishes the extreme value distribution of these statistics and their consistency. To circumvent the slow convergence to the extreme value limit, we propose to employ a version of the circular bootstrap. This procedure is completely data-driven and does not require knowledge of the time series structure. In an empirical part, we show in a simulation study and in applications to air carrier traffic and S&P 500 data that the finite sample performance is very satisfactory.
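The sketch below illustrates the two ingredients in a simplified form: a maximally selected statistic based on partial sums of residuals from a fitted linear trend, and a circular block bootstrap used to approximate its null distribution. The block length and the trend model are illustrative choices, not those of the paper.

import numpy as np

def max_cusum(x):
    t = np.arange(len(x))
    beta = np.polyfit(t, x, 1)                  # fit a linear trend
    resid = x - np.polyval(beta, t)
    s = np.cumsum(resid - resid.mean())         # partial sums of residuals
    return np.max(np.abs(s)) / (np.std(resid) * np.sqrt(len(x)))

def circular_bootstrap_pvalue(x, block=20, n_boot=499, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    observed = max_cusum(x)
    wrapped = np.concatenate([x, x[:block]])    # wrap the series circularly
    count = 0
    for _ in range(n_boot):
        starts = rng.integers(0, n, size=int(np.ceil(n / block)))
        xb = np.concatenate([wrapped[s:s + block] for s in starts])[:n]
        count += max_cusum(xb) >= observed
    return count / n_boot

rng = np.random.default_rng(4)
series = np.concatenate([rng.normal(0, 1, 150), rng.normal(2, 1, 150)])  # one mean break
print(circular_bootstrap_pvalue(series))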

10.
Bernhard Klar 《Metrika》2000,52(3):237-252
Smooth tests are frequently used for testing the goodness of fit of a parametric family of distributions. One reason for the popularity of the smooth tests is the diagnostic properties commonly attributed to them. In recent years, however, it has been realized that these tests are strictly non-diagnostic when used conventionally. The paper examines how the smooth test statistics must be rescaled in order to obtain procedures having diagnostic properties at least for large sample sizes.

11.
In this paper, we present a practical methodology for variance estimation for multi-dimensional measures of poverty and deprivation of households and individuals, derived from sample surveys with complex designs and fairly large sample sizes. The measures considered are based on fuzzy representation of individuals' propensity to deprivation in monetary and diverse non-monetary dimensions. We believe this to be the first original contribution for estimating standard errors for such fuzzy poverty measures. The second objective is to describe and numerically illustrate computational procedures and difficulties in producing reliable and robust estimates of sampling error for such complex statistics. We attempt to identify some of these problems and provide solutions in the context of actual situations. A detailed application based on European Union Statistics on Income and Living Conditions data for 19 NUTS2 regions in Spain is provided.

12.
Ansgar Steland 《Metrika》1998,47(1):251-264
The bootstrap, which provides powerful approximations for many classes of statistics, is studied for simple linear rank statistics employing bounded and smooth score functions. To verify consistency we view a rank statistic as a statistic induced by a statistical functional ψ which is evaluated at a pair of dependent signed measures. Thus, we can apply the von Mises method to verify asymptotic results for the bootstrap. The strong consistency of the bootstrap distribution estimator is derived for the bootstrap based on resampling from the original data. Further, the residual bootstrap is studied. The accuracy of the bootstrap approximations for small sample sizes is studied by simulations. The simulations indicate that the bootstrap provides better results than a normal approximation.
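A minimal sketch of bootstrapping a simple linear rank statistic with a bounded, smooth score function: the statistic is the sum over i of c_i * phi(R_i/(n+1)) with regression constants c_i and score phi(u) = u - 1/2, and bootstrap resamples are drawn from the original data. The constants and the score function are illustrative.

import numpy as np
from scipy.stats import rankdata

def linear_rank_statistic(x, c):
    scores = rankdata(x) / (len(x) + 1) - 0.5   # bounded, smooth score function
    return np.sum(c * scores)

def bootstrap_distribution(x, c, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample from the original data
        stats[b] = linear_rank_statistic(x[idx], c)
    return stats

rng = np.random.default_rng(5)
x = rng.normal(size=40)
c = np.linspace(-1, 1, 40)                      # regression constants
boot = bootstrap_distribution(x, c)
print(np.percentile(boot, [2.5, 97.5]))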

13.
Between 1982 and 1988 a growth study was carried out at the Division of Pediatric Oncology of the University Hospital of Groningen. A special feature of the project was that sample sizes were small and that ages at entry could be very different. In addition, the intended design was not fully adhered to. This paper highlights some aspects of the statistical analysis, which is based on (1) reference scores, and (2) statistical procedures allowing for an irregular pattern of measurement times caused by missing data and shifted measurement times.

14.
Nonparametric estimation and inferences of conditional distribution functions with longitudinal data have important applications in biomedical studies. We propose in this paper an estimation approach based on time-varying parametric models. Our model assumes that the conditional distribution of the outcome variable at each given time point can be approximated by a parametric model, but the parameters are smooth functions of time. Our estimation is based on a two-step smoothing method, in which we first obtain the raw estimators of the conditional distribution functions at a set of disjoint time points, and then compute the final estimators at any time by smoothing the raw estimators. Asymptotic properties, including the asymptotic biases, variances and mean squared errors, are derived for the local polynomial smoothed estimators. Applicability of our two-step estimation method is demonstrated through a large epidemiological study of childhood growth and blood pressure. Finite sample properties of our procedures are investigated through simulation study.
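A hedged sketch of the two-step idea: at each distinct measurement time the outcome distribution is approximated by a normal working model (raw step-one estimates of mean and standard deviation), and the raw estimates are then smoothed over time by a simple kernel-weighted local-linear fit. The normal working model, kernel and bandwidth are illustrative assumptions.

import numpy as np

def raw_estimates(times, y):
    grid = np.unique(times)
    mu = np.array([y[times == t].mean() for t in grid])         # step one: raw means
    sd = np.array([y[times == t].std(ddof=1) for t in grid])    # step one: raw SDs
    return grid, mu, sd

def local_linear(grid, values, t0, bandwidth=1.0):
    u = (grid - t0) / bandwidth
    w = np.maximum(1 - u ** 2, 0.0)             # Epanechnikov-type kernel weights
    X = np.column_stack([np.ones_like(grid), grid - t0])
    WX = X * w[:, None]
    beta = np.linalg.solve(WX.T @ X, WX.T @ values)
    return beta[0]                               # smoothed value at t0

rng = np.random.default_rng(6)
times = np.repeat(np.arange(10), 30)             # 10 visits, 30 subjects each
y = rng.normal(loc=0.3 * times, scale=1.0)
grid, mu_raw, sd_raw = raw_estimates(times, y)
mu_smooth = [local_linear(grid, mu_raw, t, bandwidth=2.0) for t in grid]  # step two
print(np.round(mu_smooth, 2))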

15.
Multilevel structural equation modeling (multilevel SEM) has become an established method to analyze multilevel multivariate data. The first useful estimation method was the pseudobalanced method. This method is approximate because it assumes that all groups have the same size, and ignores imbalance when it exists. Full information maximum likelihood (ML) estimation is now available, often combined with robust chi-squares and standard errors to accommodate unmodeled heterogeneity (MLR). In addition, diagonally weighted least squares (DWLS) methods have become available as estimation methods. This article compares the pseudobalanced estimation method, ML(R), and two DWLS methods by simulating a multilevel factor model with unbalanced data. The simulations included different sample sizes at the individual and group levels and different intraclass correlations (ICC). The within-group part of the model posed no problems. In the between part of the model, the different ICC sizes had no effect. There is a clear interaction effect between the number of groups and the estimation method. ML reaches unbiasedness fastest, then the two DWLS methods, then MLR, and then the pseudobalanced method (which needs more than 200 groups). We conclude that both ML(R) and DWLS are genuine improvements on the pseudobalanced approximation. With small sample sizes, the robust methods are not recommended.

16.
Two stochastic nonparametric procedures are developed to evaluate the significance of violations of weak separability. When the data have measurement error, we show that the necessary and sufficient weak separability conditions of Varian [Varian, H., 1983. Nonparametric tests of consumer behavior. Review of Economic Studies 50, 99–110] must also satisfy the Afriat inequalities. The tests detect weak separability with high probability for weakly separable data. In addition, the procedures correctly reject weak separability for both nonseparable and random utility simulated data sets. The tests also fail to reject weak separability for a monetary and consumption data set which suggests that measurement error may be the source of the observed violations.
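The sketch below shows the basic revealed-preference building block underlying such nonparametric tests: a check of the Generalized Axiom of Revealed Preference (GARP) for observed price-quantity data. The full weak separability test in the paper layers Afriat-type inequalities and a stochastic treatment of measurement error on top of checks of this kind.

import numpy as np

def satisfies_garp(prices, quantities):
    """prices, quantities: (T, n) arrays of observed price and bundle vectors."""
    T = len(prices)
    expenditure = np.einsum('ti,si->ts', prices, quantities)   # p_t' q_s for all t, s
    own = np.diag(expenditure)
    revealed = expenditure <= own[:, None]                     # q_t directly revealed preferred to q_s
    for k in range(T):                                         # Warshall transitive closure
        revealed |= revealed[:, [k]] & revealed[[k], :]
    strict = expenditure < own[:, None]                        # strict direct revealed preference
    # GARP violated if q_t is revealed preferred to q_s while q_s is strictly preferred to q_t
    return not np.any(revealed & strict.T)

prices = np.array([[1.0, 2.0], [2.0, 1.0]])
quantities = np.array([[2.0, 1.0], [1.0, 2.0]])
print(satisfies_garp(prices, quantities))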

17.
This paper studies goodness-of-fit tests for the bivariate Poisson distribution. Specifically, we propose and study several Cramér–von Mises type tests based on the empirical probability generating function. They are consistent against fixed alternatives for adequate choices of the weight function involved in their definition. They are also able to detect local alternatives converging to the null at a certain rate. The bootstrap can be used to consistently estimate the null distribution of the test statistics. A simulation study investigates the goodness of the bootstrap approximation and compares their powers for finite sample sizes. Extensions for testing goodness-of-fit for the multivariate Poisson distribution are also discussed.
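A rough sketch of a Cramér–von Mises-type statistic built from the empirical probability generating function (pgf) of bivariate counts, compared with the pgf of a fitted bivariate Poisson model and integrated over a grid on [0,1]^2 with uniform weight. The moment estimators and the grid are illustrative choices, and the bootstrap calibration used in the paper is omitted here.

import numpy as np

def empirical_pgf(x, y, t1, t2):
    return np.mean(t1[..., None] ** x * t2[..., None] ** y, axis=-1)

def fitted_bp_pgf(x, y, t1, t2):
    lam3 = max(np.cov(x, y)[0, 1], 0.0)          # moment estimate of the common-shock parameter
    lam1, lam2 = x.mean() - lam3, y.mean() - lam3
    return np.exp(lam1 * (t1 - 1) + lam2 * (t2 - 1) + lam3 * (t1 * t2 - 1))

def cvm_pgf_statistic(x, y, grid_size=30):
    t = np.linspace(0, 1, grid_size)
    t1, t2 = np.meshgrid(t, t)
    diff = empirical_pgf(x, y, t1, t2) - fitted_bp_pgf(x, y, t1, t2)
    return len(x) * np.mean(diff ** 2)           # CvM-type integral approximated on the grid

rng = np.random.default_rng(7)
z = rng.poisson(0.5, size=200)                   # common shock generates a bivariate Poisson pair
x = rng.poisson(1.0, size=200) + z
y = rng.poisson(1.5, size=200) + z
print(cvm_pgf_statistic(x, y))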

18.
Modeling conditional distributions in time series has attracted increasing attention in economics and finance. We develop a new class of generalized Cramér–von Mises (GCM) specification tests for time series conditional distribution models using a novel approach, which embeds the empirical distribution function in a spectral framework. Our tests check a large number of lags and are therefore expected to be powerful against neglected dynamics at higher order lags, which is particularly useful for non-Markovian processes. Despite using a large number of lags, our tests do not suffer much from loss of a large number of degrees of freedom, because our approach naturally downweights higher order lags, which is consistent with the stylized fact that economic or financial markets are more affected by recent past events than by remote past events. Unlike the existing methods in the literature, the proposed GCM tests cover both univariate and multivariate conditional distribution models in a unified framework. They exploit the information in the joint conditional distribution of underlying economic processes. Moreover, a class of easy-to-interpret diagnostic procedures are supplemented to gauge possible sources of model misspecifications. Distinct from conventional CM and Kolmogorov–Smirnov (KS) tests, which are also based on the empirical distribution function, our GCM test statistics follow a convenient asymptotic N(0,1) distribution and enjoy the appealing "nuisance parameter free" property that parameter estimation uncertainty has no impact on the asymptotic distribution of the test statistics. Simulation studies show that the tests provide reliable inference for sample sizes often encountered in economics and finance.

19.
In recent decades several methods have been developed for detecting differential item functioning (DIF), and many studies have aimed to identify both the conditions under which these methods may or may not be adequate and the factors which affect their power and Type I error. This paper describes a Monte Carlo experiment that was carried out in order to analyse the effect of reference group sample size, focal group sample size and the interaction of the two on the power and Type I error of the Mantel–Haenszel (MH) and Logistic regression (LR) procedures. The data were generated using a three-parameter logistic model, the design was fully-crossed factorial with 12 experimental conditions arising from the crossing of the two main factors, and the dependent variables were power and the rate of false positives calculated across 100 replications. The results enabled the significant factors to be identified and the two statistics to be compared. Practical recommendations are made regarding use of the procedures by psychologists interested in the development and analysis of psychological tests.
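For reference, the sketch below computes the Mantel–Haenszel DIF chi-square for one studied item: examinees are stratified by total score, a 2x2 table (group by item response) is formed in each stratum, and the continuity-corrected statistic is aggregated across strata. The data-generating model used for the demonstration is hypothetical.

import numpy as np

def mantel_haenszel_dif(item, group, total_score):
    """item: 0/1 responses; group: 0 = reference, 1 = focal; total_score: matching variable."""
    obs_sum, exp_sum, var_sum = 0.0, 0.0, 0.0
    for s in np.unique(total_score):
        mask = total_score == s
        a = np.sum((group[mask] == 0) & (item[mask] == 1))   # reference group, correct
        b = np.sum((group[mask] == 0) & (item[mask] == 0))
        c = np.sum((group[mask] == 1) & (item[mask] == 1))   # focal group, correct
        d = np.sum((group[mask] == 1) & (item[mask] == 0))
        n = a + b + c + d
        if n < 2:
            continue
        obs_sum += a
        exp_sum += (a + b) * (a + c) / n                      # E[a] under no DIF
        var_sum += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    return (abs(obs_sum - exp_sum) - 0.5) ** 2 / var_sum      # approx. chi-square(1) under H0

rng = np.random.default_rng(8)
n = 1000
group = rng.integers(0, 2, size=n)
ability = rng.normal(size=n)
total_score = np.clip(np.round(ability * 3 + 10), 0, 20).astype(int)  # illustrative matching score
p_correct = 1 / (1 + np.exp(-(ability - 0.3 * group)))                # mild DIF against the focal group
item = rng.binomial(1, p_correct)
print(mantel_haenszel_dif(item, group, total_score))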

20.
We characterize the robustness of subsampling procedures by deriving a formula for the breakdown point of subsampling quantiles. This breakdown point can be very low for moderate subsampling block sizes, which implies the fragility of subsampling procedures, even when they are applied to robust statistics. This instability also arises for data-driven block size selection procedures minimizing the minimum confidence interval volatility index, but it can be mitigated if a more robust calibration method can be applied instead. To overcome these robustness problems, we introduce a consistent robust subsampling procedure for M-estimators and derive explicit subsampling quantile breakdown point characterizations for MM-estimators in the linear regression model. Monte Carlo simulations in two settings where the bootstrap fails show the accuracy and robustness of the robust subsampling relative to standard subsampling.
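A minimal sketch of the basic subsampling construction whose robustness is analysed above: the statistic (here the sample median) is recomputed on random subsamples of size b, and the empirical quantiles of the recentred, rescaled subsample statistics give a confidence interval. The subsample size and the sqrt(n) convergence rate are illustrative assumptions.

import numpy as np

def subsampling_ci(x, block=20, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    theta_hat = np.median(x)
    stats = []
    for _ in range(1000):                       # random subsamples of size `block`
        idx = rng.choice(n, size=block, replace=False)
        stats.append(np.sqrt(block) * (np.median(x[idx]) - theta_hat))
    lo, hi = np.percentile(stats, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return theta_hat - hi / np.sqrt(n), theta_hat - lo / np.sqrt(n)

rng = np.random.default_rng(9)
data = rng.standard_t(df=3, size=200)
print(subsampling_ci(data))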
