Similar Documents
20 similar documents found.
1.
Because measurement errors distort every relationship under study, statistical or otherwise, interest in data quality is growing; that interest is the main justification for this research. This paper presents a new measurement procedure, the letter scale, which avoids many of the problems associated with the response modalities traditionally used in attitudinal research, especially ordinal categorical scales. The paper analyzes the error composition of the scores obtained with this new procedure. Its validity is also examined: the observed variance is decomposed to determine which part is "valid", which part is random error (attenuating relationships), and which is correlated error (magnifying relationships). Structural equation models provide estimates of measurement quality: (i) reliability and (ii) construct validity, method effect, and residual variance. The letter scale is also evaluated from a different perspective, using Information Theory measures to assess the amount of information transmitted. In both cases, the relative merits of the new procedure are discussed against other common response modalities.
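The information-theoretic side of such an evaluation can be sketched in a few lines. The following is a minimal illustration, not the paper's procedure: the data, the noise level, and the five-category discretization are all assumptions chosen only to show how "information transmitted" by a scale could be quantified as mutual information between a latent attitude and the observed response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a latent attitude observed through a noisy
# categorical scale; none of this reproduces the paper's letter scale.
n = 100_000
latent = rng.normal(size=n)
noisy = latent + rng.normal(scale=0.5, size=n)    # random measurement error
cuts = np.quantile(latent, [0.2, 0.4, 0.6, 0.8])  # thresholds -> 5 categories
true_cat = np.digitize(latent, cuts)
obs_cat = np.digitize(noisy, cuts)                # observed scale response

# Mutual information I(true; observed) from the empirical joint table,
# in bits: a crude measure of how much information the scale transmits.
joint = np.histogram2d(true_cat, obs_cat, bins=5)[0] / n
px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
nz = joint > 0
mi = np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))
print(f"transmitted information ~ {mi:.2f} bits (max log2(5) ~ 2.32)")
```

Stronger measurement error widens the joint table's off-diagonal mass and drives the transmitted information toward zero.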

2.
When screening a production process for nonconforming items, the objective is to improve the average outgoing quality level. Because of measurement errors, specification limits cannot be checked directly; test limits are therefore required that meet a given requirement, here a prescribed bound on the consumer loss. Classical test limits are based on normality of both the product characteristic and the measurement error. In practice, nonnormality often occurs in both. Nonnormality of the product characteristic has been investigated recently; in this paper attention is focused on the measurement error. First, it is shown that nonnormality can cause the test limit to fail seriously. New test limits are therefore derived, which have the desired robustness property: a small loss under normality and a large gain in case of nonnormality, compared to the normal test limit. Monte Carlo results illustrate that the asymptotic theory agrees with moderate-sample behaviour.
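As a toy illustration of the classical normal setting that the paper departs from, a test limit can be found by Monte Carlo as the least strict limit that keeps the consumer loss under the bound. Every number below (the spec limit, the error scale, the loss bound) is our assumption, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch (not the paper's derivation): choose a test limit t
# below the upper specification limit U so that the consumer loss --
# P(item nonconforming | item accepted) -- stays under a prescribed bound.
U, sigma_m, bound = 1.0, 0.15, 0.001
n = 1_000_000
x = rng.normal(0.0, 0.5, n)              # true characteristic
y = x + rng.normal(0.0, sigma_m, n)      # measurement with error

for t in np.linspace(U, U - 4 * sigma_m, 200):   # tighten until bound is met
    accepted = y <= t
    loss = np.mean(x[accepted] > U)      # fraction of accepted items out of spec
    if loss <= bound:
        print(f"test limit t = {t:.3f} (offset {U - t:.3f}), consumer loss {loss:.2e}")
        break
```

Replacing the two `rng.normal` draws with heavy-tailed or skewed distributions shows how a limit calibrated under normality can fail, which is the failure mode the paper's robust limits address.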

3.
Following the recognition that so-called non-measurable data in sociological research can be measured indirectly, many kinds of indices and scales have been constructed to grasp the essence of a complex reality. In attitude research in particular, the Guttman scale, among others, has been used fruitfully. Notwithstanding many critical remarks, new developments in scaling theory and scaling techniques warrant an optimistic view of sociology as an exact empirical science. Statistical thinking and methods will have to play a larger part in this development than they have so far.

4.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval, an "interval estimator") to estimate a parameter of interest. A very simple question is: can we also use a distribution function (a "distribution estimator") to estimate a parameter in frequentist inference, in the style of a Bayesian posterior? The answer is affirmative, and the confidence distribution is a natural choice of such a distribution estimator. The concept has a long history, and its interpretation has long been entangled with fiducial inference: historically it was misconstrued as a fiducial concept and was never fully developed in the frequentist framework. In recent years, the confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews those developments, along with a modern definition and interpretation of the concept, covering distributional inference based on confidence distributions and its extensions, optimality issues, and applications. Under the new developments, the concept subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p-value) functions, normalized likelihood functions and, in some cases, Bayesian priors and posteriors. The discussion stays entirely within the school of frequentist inference, with emphasis on applications that provide useful inference tools for problems where frequentist methods with good properties were previously unavailable or hard to obtain. Although it also draws attention to some differences and similarities among the frequentist, fiducial and Bayesian approaches, the review is not intended to re-open the philosophical debate that has lasted more than two hundred years; on the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
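For a concrete feel of what a "distribution estimator" looks like, here is the textbook confidence distribution for a normal mean (our illustration; the data and parameter values are made up, not from the article). Its median reproduces the point estimate and its quantiles reproduce confidence intervals, so one function carries point, interval, and p-value inference at once:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# A textbook sketch: the confidence distribution for a normal mean mu,
# based on the t statistic from a sample of size n.
x = rng.normal(loc=3.0, scale=2.0, size=25)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

def cd(mu):
    """H(mu) = F_{t(n-1)}(sqrt(n) * (mu - xbar) / s): a distribution on mu."""
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

# The CD delivers the usual estimators as functionals of one object:
median = xbar                                   # CD median = point estimate
lo, hi = (xbar + stats.t.ppf(q, df=n - 1) * s / np.sqrt(n)
          for q in (0.025, 0.975))              # CD quantiles = 95% interval
print(f"CD median {median:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
print(f"H(3.0) = {cd(3.0):.3f}")  # significance (p-value) function at mu = 3
```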

5.
The focus of this study is the fit between the item content of scales measuring humorous coping and basic concepts of stress and coping theory. To investigate this fit, 81 items from seven currently available humorous coping scales were subjected to a facet analysis using the tool of a mapping sentence. Three facets derived from stress and coping theory were part of this mapping sentence: external demands, humorous responses, and coping aims. Because humorous coping is claimed to be related to physical health, special attention was paid to two health-related coping aims: cognitive reappraisal and response-focused coping. Five raters categorized the facets and their respective categories. Some humorous coping scales underrepresented "external demands" and "humorous responses", and only a few scales covered the "aims" facet adequately. Reliability and agreement parameters varied considerably among scales, at both the facet and the category level; the Waterloo University Humor Inventory (WUHI) was a positive exception to this pattern. Findings are discussed in the light of specific characteristics of the scales included. Improvements to the measurement of humorous coping in health-related research are proposed, as well as adaptations to the rating procedures.

6.
Conformity testing is a systematic examination of the extent to which an entity conforms to specified requirements. Such testing is performed in industry as well as in regulatory agencies in a variety of fields. In this paper we discuss conformity testing under measurement or sampling uncertainty. Although the situation has many analogies to statistically testing a hypothesis about the unknown value of the measurand, there are no generally accepted rules for handling measurement uncertainty when testing for conformity. Since the objective of a conformity test is usually to provide assurance of conformity, we suggest that an appropriate statistical test should be devised so that there is only a small probability of declaring conformity when the entity does not in fact conform. An operational way of formulating this principle is to require that once an entity has been declared conforming, that declaration could not be altered even if the entity were investigated with better (more precise) measuring instruments or measurement procedures. Some industries and agencies designate specification limits with the measurement uncertainty already built in; this practice is not invariant under changes of measurement procedure. We therefore suggest that conformity testing should instead compare a confidence interval for the value of the measurand with limiting values designated without regard to the measurement uncertainty. Such a procedure is in line with the recently established practice of reporting measurement uncertainty as "an interval of values that could reasonably be attributed to the measurand". The price to be paid for reliable assurance of conformity is a relatively large risk that the procedure will fail to establish conformity for entities that only marginally conform. We suggest a two-stage procedure that may improve this situation and provide better discriminatory ability, and in an example we illustrate the determination of the power function of such a two-stage procedure.
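The interval-inside-limits rule is easy to state in code. The sketch below is our reading of the suggestion, with a t-based confidence interval and invented limits; the paper's two-stage refinement is not shown:

```python
import numpy as np
from scipy import stats

# Sketch of the suggested rule: declare conformity only if a confidence
# interval for the measurand lies entirely inside limits that were set
# without regard to measurement uncertainty.
def conforms(measurements, lower, upper, conf=0.95):
    m = np.asarray(measurements, dtype=float)
    n, mean = len(m), m.mean()
    se = m.std(ddof=1) / np.sqrt(n)
    half = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1) * se
    lo, hi = mean - half, mean + half
    return lower <= lo and hi <= upper   # the whole interval must conform

# A more precise instrument narrows the interval around the measurand, which
# is the intuition behind the invariance requirement described above.
print(conforms([9.8, 10.1, 9.9, 10.0, 10.2], lower=9.0, upper=11.0))  # True
print(conforms([9.8, 10.9, 9.0, 11.4, 10.2], lower=9.0, upper=11.0))  # False
```

The second call shows the price mentioned in the abstract: the sample mean conforms, but the interval is too wide to establish conformity.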

7.
When comparing the prognosis of more than two groups in clinical trials, researchers may use multiple comparison procedures to determine which treatments actually differ from one another. Methods of controlling the familywise error (FWE) rate for multiple comparisons of survival curves have received attention in the statistical literature: adjustments such as Bonferroni, Holm's, Steele's, and the closed procedure based on the logrank test have been studied. If hazards cross, adjustments based on the logrank test may not be the most appropriate. Chi (2005) developed multiple testing procedures based on weighted Kaplan–Meier statistics, which may perform better than the logrank for non-proportional-hazards alternatives. The aim of this research is to propose multiple testing procedures based on the Lin and Wang (2004) statistic for all pairwise comparisons; simulation studies have shown this statistic can be more powerful than the logrank for certain crossing hazards. Through simulation, the FWE rate and power of the Bonferroni and Holm's adjustments based on the Lin and Wang statistic are studied, with the same adjustment procedures based on the logrank and Wilcoxon statistics included for comparison. The methods are applied to data from the bone marrow transplant registry.
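The adjustment machinery itself is generic. Below is a minimal sketch of Holm-adjusted all-pairwise comparisons using the ordinary logrank test from the lifelines package on simulated data (group labels, scales, and censoring rates are all invented); the Lin–Wang statistic, which is not, to our knowledge, available in standard libraries, would simply replace the p-value computation:

```python
from itertools import combinations
import numpy as np
from lifelines.statistics import logrank_test
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)

# Simulated survival data: (event times, event indicators) per group.
groups = {g: (rng.exponential(scale, 80), rng.random(80) < 0.8)
          for g, scale in {"A": 1.0, "B": 1.3, "C": 2.0}.items()}

# All pairwise logrank p-values, then Holm's step-down FWE adjustment.
pairs = list(combinations(groups, 2))
pvals = [logrank_test(groups[a][0], groups[b][0],
                      event_observed_A=groups[a][1],
                      event_observed_B=groups[b][1]).p_value
         for a, b in pairs]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")

for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4f}, reject = {r}")
```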

8.
Given that Likert scales are increasingly common in social research, it is necessary to determine which methodology is most suitable for analysing the data they produce. Although the categorization of these scales means the results should be treated as ordinal data, they are often analysed with techniques designed for cardinal measures. One of the most widely used techniques for studying the construct validity of data is factor analysis, whether exploratory or confirmatory, and this method uses correlation matrices (generally Pearson) to obtain factor solutions. In this context, and by means of simulation studies, we illustrate the advantages of using polychoric rather than Pearson correlations, bearing in mind that the latter require quantitative variables measured on interval scales and a monotonic relationship between the variables. The results show that solutions obtained using polychoric correlations reproduce the measurement model used to generate the data more accurately.
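A minimal simulation (ours, not the paper's design) shows the attenuation that motivates polychoric correlations: categorizing bivariate-normal data into a 5-point Likert scale shrinks the Pearson correlation below the latent correlation that generated the data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Latent bivariate-normal responses with known correlation rho, then
# categorized into 5 ordered categories via fixed thresholds.
rho = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=50_000)
cuts = [-1.5, -0.5, 0.5, 1.5]            # thresholds -> 5 categories
likert = np.digitize(z, cuts)            # applied elementwise

pearson_latent = np.corrcoef(z.T)[0, 1]
pearson_likert = np.corrcoef(likert.T)[0, 1]
print(f"latent rho = {rho}, Pearson on latent = {pearson_latent:.3f}, "
      f"Pearson on 5-point Likert = {pearson_likert:.3f}")
# A polychoric estimator (offered by several SEM/psychometrics packages)
# models the thresholds and recovers roughly 0.6 from the categorized data;
# the Pearson coefficient on the categories systematically underestimates it.
```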

9.
Traditional estimation theory generally starts from point estimators, and confidence regions with a given confidence level are constructed from them. However, this approach works only in some special cases and, more seriously, rests on the unrealistic but mathematically necessary assumption of a generally unbounded parameter space. The procedures derived in this paper instead start from a bounded measurement range containing the potential values of the parameter of interest. For a given measurement range and a given reliability requirement, measurement procedures including a point estimator are developed. The results are complete measurement procedures for distribution parameters; most precise procedures are derived and called complete Neyman measurement procedures.

10.
Data about personal networks and their characteristics are increasingly used in social science research, especially research on quality of life, social support, and similar topics (Fischer, 1982; Marsden, 1987; van der Poel, 1993b). Since all data about a person's social network are usually obtained from the respondent himself, the quality of such measurements is a very important issue. Among other factors, the type of social support can affect the quality of social network measurement (Ferligoj and Hlebec, 1998, 1999), and differences in the stability of measurement between the core and the extended personal network have also been found (Marsden, 1990; Morgan et al., 1997): the closer and more important an alter is, the more likely it is that (s)he will be named in any measurement (Hoffmeyer-Zlotnik, 1990; Van Groenou et al., 1990; Morgan et al., 1997). This paper presents the results of a recent study on the quality of measurement of tie characteristics in different personal subnetworks. The multitrait-multimethod (MTMM) approach was used to estimate reliability and validity, and a meta-analysis of the reliability and validity estimates was done by hierarchical clustering. The data were collected in 2000 by computer-assisted face-to-face and telephone interviews from a random sample of 1033 residents of Ljubljana.

11.
In this paper, we attempt to characterize parametric families of functions such that the statement “a function is an element of the parametric family” is meaningful with respect to a given scale of measurement (a statement is said to be meaningful if its truth or falsity is unchanged when admissible transformations are applied to all of the scales in the statement). A few special cases of the problem are solved for nominal, ordinal, and some quantitative scales. As economic applications, axiomatizations of homothetic production functions and the Cobb–Douglas production function are given.
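To make the notion concrete, here is a worked example of the kind of invariance at stake (our illustration; the paper's axiomatization is more general). Production inputs are ratio-scale quantities, so the admissible transformations are positive rescalings, and the Cobb–Douglas family is closed under them:

```latex
% Ratio-scale admissible transformations x_i -> k_i x_i with k_i > 0:
f(x_1, x_2) = A\,x_1^{a}x_2^{b}
\quad\Longrightarrow\quad
f(k_1 x_1,\, k_2 x_2) = \bigl(A\,k_1^{a}k_2^{b}\bigr)\,x_1^{a}x_2^{b}
                      = A'\,x_1^{a}x_2^{b}, \qquad A' > 0 .
```

Since the rescaled function is again Cobb–Douglas, the truth of "f is a Cobb–Douglas production function" does not depend on the units chosen, which is exactly the meaningfulness condition in the abstract.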

12.
A preliminary survey of quality management practice at selected enterprises in Shandong Province was carried out. The results show that managers in Shandong enterprises take quality fairly seriously: quality management and quality inspection departments have been set up almost everywhere, quality system construction receives attention, and QC circle activities are run well. However, workers' enthusiasm for quality is low, quality responsibility systems are incomplete, and statistical techniques are not used effectively; quality work still needs to be strengthened.

13.
Journal of Econometrics, 2002, 111(2): 169–194
The impact of response measurement error in duration data is investigated using small-parameter asymptotic approximations and compared with the effect of hazard function heterogeneity. The approximations lead to a specification test for detecting measurement error, which is shown to be related to the class of Information Matrix tests. In a commonly used class of models, the test statistic is shown to be exactly pivotal. The second-order asymptotic properties of the alternative forms of the test statistic are derived, and the quality of the approximations and the performance of the test are investigated via Monte Carlo experimentation.

14.
Consider a linear regression model, and suppose that our aim is to find a confidence interval for a specified linear combination of the regression parameters. In practice, it is common to perform a Durbin–Watson pretest of the null hypothesis of zero first-order autocorrelation of the random errors against the alternative of positive first-order autocorrelation. If this null hypothesis is accepted, the confidence interval centered on the ordinary least squares estimator is used; otherwise, the confidence interval centered on the feasible generalized least squares estimator is used. For any given design matrix and parameter of interest, we compare the confidence interval resulting from this two-stage procedure with the confidence interval that is always centered on the feasible generalized least squares estimator, in two ways. First, we compare the coverage probability functions of these confidence intervals. Second, we compute the scaled expected length of the two-stage confidence interval, scaling by the expected length of the interval centered on the feasible generalized least squares estimator with the same minimum coverage probability. These comparisons can be used to choose the better confidence interval prior to any examination of the observed response vector.
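The coverage side of such a comparison is straightforward to simulate. The sketch below is our construction, not the paper's analytical comparison: the design, the AR(1) error parameter, and the single Durbin–Watson cutoff (a rough 5% critical value) are all assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(5)

# Monte Carlo sketch of the two-stage procedure: Durbin-Watson pretest, then
# an OLS interval if the pretest "accepts" zero autocorrelation, otherwise a
# feasible-GLS (GLSAR) interval. dw_lower is an assumed rough critical value.
n, beta, rho, reps, dw_lower = 50, 1.0, 0.5, 2000, 1.59
X = sm.add_constant(np.linspace(0, 1, n))
cover = 0
for _ in range(reps):
    e = np.zeros(n)
    for t in range(1, n):                      # AR(1) errors
        e[t] = rho * e[t - 1] + rng.normal()
    y = X @ np.array([0.0, beta]) + e
    ols = sm.OLS(y, X).fit()
    if durbin_watson(ols.resid) > dw_lower:    # pretest accepts rho = 0
        ci = ols.conf_int()[1]                 # OLS interval for the slope
    else:                                      # feasible GLS under AR(1)
        ci = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=5).conf_int()[1]
    cover += ci[0] <= beta <= ci[1]
print(f"two-stage coverage of nominal 95% interval: {cover / reps:.3f}")
```

Comparing this estimate with the coverage of the always-FGLS interval, over a grid of rho values, mimics the first of the paper's two comparisons.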

15.
Retrospective reports in survey interviews and questionnaires are subject to many types of recall error, which affect completeness, consistency, and dating accuracy. Concerns about this problem have led to the development of so-called calendar instruments, or timeline techniques: aided-recall procedures designed to help respondents gain better access to long-term memory by providing a graphical time frame in which life history information can be represented. To obtain more insight into the potential benefits of calendar methodology, this paper reviews the application of calendar instruments, their design characteristics, and their effects on data quality. Calendar techniques are currently used in a variety of fields, including life course research, epidemiology, and family planning studies. Despite the growing interest in these methods, their application often lacks sufficient theoretical foundation, and little attention has been paid to their effectiveness. Several recent studies, however, have demonstrated that calendar techniques can improve some aspects of data quality compared with more traditional survey methods. While calendar instruments have been shown to be potentially beneficial to retrospective data quality, there is an apparent need for methodological research that generates more systematic knowledge about their application in social surveys.

16.
Journal of Econometrics, 2002, 108(2): 317–342
This paper proposes the use of the bootstrap for the most commonly applied procedures in inequality, mobility, and poverty measurement. In addition to simple inequality index estimation, the scenarios considered are inequality difference tests for correlated data, decompositions by sub-group or income source, decompositions of inequality changes, and mobility and poverty index estimation. Besides showing the consistency of the bootstrap for these scenarios, the paper also develops simple ways to deal with longitudinal correlation and panel attrition or non-response. In principle, all the proposed procedures can be handled by the δ-method, but Monte Carlo evidence suggests that the simplest possible bootstrap procedure should be preferred in practice: it achieves the same accuracy as the δ-method while taking the stochastic dependencies in the data into account without explicitly having to deal with their covariance structure. If a variance estimate is available, the studentized version of the bootstrap may improve accuracy, but substantially so only for relatively small sample sizes. All results allow different observations to carry different sampling weights.
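What "the simplest possible bootstrap procedure" amounts to can be sketched for one index. The example below, our choice of the Gini coefficient on synthetic lognormal incomes rather than anything from the paper, resamples observations with replacement and reads off percentile limits:

```python
import numpy as np

rng = np.random.default_rng(6)

def gini(y):
    """Gini coefficient via the sorted-rank formula."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    return (2 * np.arange(1, n + 1) - n - 1) @ y / (n * y.sum())

income = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # synthetic incomes
boot = np.array([gini(rng.choice(income, size=len(income), replace=True))
                 for _ in range(2000)])                  # simple i.i.d. bootstrap
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Gini = {gini(income):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
# For longitudinal or clustered data, resampling would be done at the unit
# (e.g. household) level so the bootstrap respects the dependence structure,
# in the spirit of the paper's treatment of longitudinal correlation.
```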

17.
In this paper we consider 100% inspection of a product in which several characteristics have to satisfy given specification limits. A 100% inspection procedure may be necessary to bring the percentage of nonconforming items down to a level acceptable to the consumer. If one could observe the actual values of the characteristics, this percentage could be brought down to zero; quite often this is not possible, however, because a measurement error occurs in measuring the characteristics. It is therefore common practice to inspect each characteristic by comparing its measurement to a test limit that is slightly stricter than the corresponding specification limit, and to accept an item if each measurement conforms to its test limit. However, instead of inspecting an individual characteristic using only its own measurement, it is (much) more efficient to use the measurements of the other characteristics as well, especially when some of the characteristics are highly correlated. In this paper it is shown how the measurements of all the characteristics can be used to test whether an item is conforming.
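One way such joint use of measurements can work is sketched below. This is our Gaussian formulation, not the paper's actual test, and all covariances and limits are invented: condition the true characteristics on the full measurement vector, then accept only when each characteristic is likely within spec.

```python
import numpy as np
from scipy import stats

# T ~ N(mu, S_T) are the true characteristics, M = T + E the measurements
# with error E ~ N(0, S_E). Then T | M is normal, and an item is accepted
# when every characteristic is in spec with high conditional probability.
mu = np.zeros(2)
S_T = np.array([[1.0, 0.9], [0.9, 1.0]])   # highly correlated characteristics
S_E = 0.1 * np.eye(2)                       # measurement error covariance
U = np.array([1.5, 1.5])                    # upper specification limits

def accept(m, p_min=0.95):
    gain = S_T @ np.linalg.inv(S_T + S_E)   # regression of T on M
    cond_mean = mu + gain @ (m - mu)
    cond_sd = np.sqrt(np.diag(S_T - gain @ S_T))
    p_in_spec = stats.norm.cdf(U, loc=cond_mean, scale=cond_sd)
    return bool(np.all(p_in_spec >= p_min))

m = np.array([1.2, 0.2])   # characteristic 1 near its limit, 2 well inside
print(accept(m))           # True: the low second reading pulls the first
                           # characteristic's conditional mean down, so an item
                           # that fails a univariate check can be accepted here
```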

18.
John Odland, John Jakubs. Socio, 1977, 11(5): 265–271
A method of estimating preference functions for alternative urban travel modes using non-metric scaling and conjoint measurement is introduced. The method treats travel alternatives as collections of generic attributes and disaggregates preference orderings for the alternative modes into components associated with those attributes. Preference functions are fitted for individual respondents, and alternative methods of estimating collective preference functions for the group of respondents are examined, with particular attention to the error associated with aggregating individual responses. The methods are designed to be effective with relatively modest quantities of survey data.

19.
Wang Guangsheng (王光生). Value Engineering (价值工程), 2012, (12): 35–36
Because trees can green a street and transform the urban landscape within a short time, they are widely used in modern urban landscaping projects. Their large size, however, makes transplanted trees difficult to establish: without standardized handling during transplanting, the trees can easily wither and die, and post-transplant care also has a strong effect on survival. This paper describes the general procedure for transplanting landscape trees, the problems to watch for, and maintenance techniques for after transplanting.

20.
Since the work of Cliff and Ord (1973), increasing attention has been paid to the unique statistical and econometric problems associated with the use of spatial or areal data. This paper shows that the way in which spatial data are aggregated, or gerrymandered, alters the estimation results of a model. Specifically, a well-known model developed by Kain, which measures the loss in black jobs in a metropolitan area resulting from residential segregation, is estimated, and it is shown that alternative areal aggregations (gerrymanders) of the same data can produce diametrically opposed conclusions: with the same model, blacks either gain or lose jobs as a result of residential segregation.
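The aggregation effect is easy to reproduce on synthetic data. The sketch below is entirely our construction; it does not use Kain's model or segregation data. It fits the same individual-level relationship through the area means of many random zonings and shows that the areal slope can take either sign:

```python
import numpy as np

rng = np.random.default_rng(7)

# Individual-level data with a weakly positive relationship plus noise.
n, n_areas = 200, 10
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)

def areal_slope(assignment):
    """OLS slope through the area means implied by a zoning assignment."""
    mx = np.array([x[assignment == a].mean() for a in range(n_areas)])
    my = np.array([y[assignment == a].mean() for a in range(n_areas)])
    return np.polyfit(mx, my, 1)[0]

slopes = []
for _ in range(5000):                     # many alternative equal-size zonings
    assignment = rng.permutation(np.repeat(np.arange(n_areas), n // n_areas))
    slopes.append(areal_slope(assignment))
slopes = np.array(slopes)

print(f"individual-level slope: {np.polyfit(x, y, 1)[0]:.3f}")
print(f"areal slopes range from {slopes.min():.3f} to {slopes.max():.3f}")
# Even random zonings produce areal slopes of opposite signs here; a
# deliberate gerrymander can push the aggregated estimate further still.
```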
