Similar Articles
20 similar articles found (search time: 47 ms)
1.
The American Psychological Association Task Force recommended that researchers always report and interpret effect sizes for quantitative data. However, no such recommendation was made for qualitative data. Thus, the first objective of the present paper is to provide a rationale for reporting and interpreting effect sizes in qualitative research. Arguments are presented that effect sizes enhance the process of verstehen/hermeneutics advocated by interpretive researchers. The second objective of this paper is to provide a typology of effect sizes in qualitative research. Examples are given illustrating various applications of effect sizes. For instance, when conducting typological analyses, qualitative analysts only identify emergent themes; yet, these themes can be quantitized to ascertain the hierarchical structure of emergent themes. The final objective is to illustrate how inferential statistics can be utilized in qualitative data analyses. This can be accomplished by treating words arising from individuals, or observations emerging from a particular setting, as sample units of data that represent the total number of words/observations existing from that sample member/context. Heuristic examples are provided to demonstrate how inferential statistics can be used to provide more complex levels of verstehen than is presently undertaken in qualitative research.

2.
We present some general results on Fisher information (FI) contained in upper (or lower) record values and associated record times generated from a sequence of i.i.d. continuous variables. For the record data obtained from a random sample of fixed size, we establish an interesting relationship between its FI content and the FI in the data consisting of sequential maxima. We also consider the record data from an inverse sampling plan (Samaniego and Whitaker, 1986). We apply the general results to evaluate the FI in upper as well as lower records data from the exponential distribution for both sampling plans. Further, we discuss the implication of our results for statistical inference from these record data. Received: December 2001 Acknowledgements. This research was supported by Fondo Nacional de Desarrollo Cientifico y Tecnologico (FONDECYT) grants 7990089 and 1010222 of Chile. We would like to thank the Department of Statistics at the University of Concepción for its hospitality during the stay of H. N. Nagaraja in Chile in March of 2000, when the initial work was done. We are grateful to the two referees for various comments that led to improvements in the paper.
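As a concrete illustration of the record data discussed above, a short sketch that extracts upper record values and their record times from a sequence (the data are hypothetical; the FI computations themselves are beyond this sketch):

```python
def upper_records(seq):
    """Return (record_values, record_times): an observation is an
    upper record if it exceeds every earlier observation; the first
    observation is a record by convention."""
    values, times = [], []
    current_max = float("-inf")
    for i, x in enumerate(seq, start=1):
        if x > current_max:
            current_max = x
            values.append(x)
            times.append(i)
    return values, times

vals, times = upper_records([0.7, 0.3, 1.4, 0.9, 2.2, 1.1])
# vals == [0.7, 1.4, 2.2], times == [1, 3, 5]
```

Lower records are obtained symmetrically by flipping the comparison.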

3.
Teams are pervasive in today's world, and rightfully so, as we need them. Drawing upon the existing extensive body of research surrounding the topic of teamwork, we delineate nine “critical considerations” that serve as a practical heuristic by which HR leaders can determine what is needed when they face situations involving teamwork. Our heuristic is not intended to be the definitive set of all considerations for teamwork, but instead consolidates key findings from a vast literature to provide an integrated understanding of the underpinnings of teamwork—specifically, what should be considered when selecting, developing, and maintaining teams. This heuristic is designed to help those in practice diagnose team‐based problems by providing a clear focus on relevant aspects of teamwork. To this end, we first define teamwork and its related elements. Second, we offer a high‐level conceptualization of and justification for the nine selected considerations underlying the heuristic, which is followed by a more in‐depth synthesis of related literature as well as empirically‐driven practical guidance. Third, we conclude with a discussion regarding how this heuristic may best be used from a practical standpoint, as well as offer areas for future research regarding both teamwork and its critical considerations. © 2014 Wiley Periodicals, Inc.

4.
Recent survey literature shows an increasing interest in survey designs that adapt data collection to characteristics of the survey target population. Given a specified quality objective function, the designs attempt to find an optimal balance between quality and costs. Finding the optimal balance may not be straightforward as corresponding optimisation problems are often highly non‐linear and non‐convex. In this paper, we discuss how to choose strata in such designs and how to allocate these strata in a sequential design with two phases. We use partial R‐indicators to build profiles of the data units where more or less attention is required in the data collection. In allocating cases, we look at two extremes: surveys that are run only once, or infrequent, and surveys that are run continuously. We demonstrate the impact of the sample size in a simulation study and provide an application to a real survey, the Dutch Crime Victimisation Survey.
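A hedged sketch of the partial R-indicator idea mentioned above, using the common representativeness-indicator form R = 1 - 2*S(rho) together with a between-stratum (unconditional partial) component; the propensities and stratum labels below are hypothetical:

```python
import math

def r_indicator(propensities):
    """Representativeness indicator R = 1 - 2*S(rho), where S(rho) is
    the standard deviation of the estimated response propensities."""
    n = len(propensities)
    mean = sum(propensities) / n
    s = math.sqrt(sum((p - mean) ** 2 for p in propensities) / n)
    return 1 - 2 * s

def partial_r_unconditional(propensities, strata):
    """Unconditional partial R-indicator for one stratification variable:
    square root of the between-stratum variance of the propensities.
    Large values flag strata needing extra data collection effort."""
    n = len(propensities)
    overall = sum(propensities) / n
    groups = {}
    for p, h in zip(propensities, strata):
        groups.setdefault(h, []).append(p)
    between = sum(len(g) / n * (sum(g) / len(g) - overall) ** 2
                  for g in groups.values())
    return math.sqrt(between)

props = [0.8, 0.8, 0.4, 0.4]
strata = ["urban", "urban", "rural", "rural"]
r = r_indicator(props)                       # 1 - 2*0.2 = 0.6
pr = partial_r_unconditional(props, strata)  # 0.2: all variation is between strata
```

In an adaptive design, the stratum driving the partial indicator (here the low-propensity "rural" group) would be targeted in the second phase.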

5.
Health Effects of Air Pollution: A Statistical Review
We critically review and compare epidemiological designs and statistical approaches to estimate associations between air pollution and health. More specifically, we aim to address the following questions:
1. Which epidemiological designs and statistical methods are available to estimate associations between air pollution and health?
2. What are the recent methodological advances in the estimation of the health effects of air pollution in time series studies?
3. What are the main methodological challenges and future research opportunities relevant to regulatory policy?
In question 1, we identify strengths and limitations of time series, cohort, case‐crossover and panel sampling designs. In question 2, we focus on time series studies and review statistical methods for: (1) combining information across multiple locations to estimate overall air pollution effects; (2) estimating the health effects of air pollution taking model uncertainties into account; (3) investigating the consequences of exposure measurement error in the estimation of the health effects of air pollution; and (4) estimating air pollution‐health exposure‐response curves. Here, we also discuss the extent to which these statistical contributions have addressed key substantive questions. In question 3, within a set of policy‐relevant questions, we identify research opportunities and point out current data limitations.
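One of the methods reviewed above, combining information across multiple locations, is commonly done by inverse-variance (fixed-effect) pooling of location-specific estimates. A minimal sketch with hypothetical numbers, not tied to any dataset in the review:

```python
def pool_city_estimates(estimates, std_errors):
    """Fixed-effect (inverse-variance) pooling of city-specific air
    pollution effect estimates into one overall effect estimate."""
    weights = [1 / se ** 2 for se in std_errors]
    total = sum(weights)
    pooled = sum(w * b for w, b in zip(weights, estimates)) / total
    pooled_se = (1 / total) ** 0.5
    return pooled, pooled_se

# two hypothetical city-level effect estimates with equal precision
est, se = pool_city_estimates([0.5, 0.9], [0.1, 0.1])
# equal precision -> the pooled estimate is the simple average, 0.7
```

Hierarchical (random-effects) models, which the time-series literature typically prefers, additionally allow for between-city heterogeneity; this sketch shows only the fixed-effect special case.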

6.
The paper identifies some common problems encountered in quantitative methodology and provides information on current best practice to resolve these problems. We first discuss issues pertaining to variable measurement and concerns regarding the underlying relationships among variables. We then highlight several advances in estimation methodology that may circumvent issues encountered in common practice. Finally, we discuss approaches that move beyond existing research designs, including the development and use of datasets that embody linkages across levels of analysis, or combine qualitative and quantitative methods.

7.
Berthold Heiligers, Metrika (2002), 54(3): 191–213
E-optimality of approximate designs in linear regression models is paired with a dual problem of nonlinear Chebyshev approximation. When the regression functions form a totally positive system, the information matrices of designs for subparameters turn out to be “almost” totally positive, a property which allows us to solve the nonlinear Chebyshev problem. We thereby obtain explicit formulae for E-optimal designs in terms of equi-oscillating generalized polynomials. The considerations unify and generalize known results on E-optimality for particular regression setups.
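The E-optimality criterion underlying the abstract above is the smallest eigenvalue of the design's information matrix, which an E-optimal design maximizes. A hedged numerical sketch for simple linear regression (the design points and weights are illustrative, not the paper's):

```python
import numpy as np

def e_criterion(points, weights, f):
    """Smallest eigenvalue of the information matrix
    M(xi) = sum_i w_i * f(x_i) f(x_i)^T of an approximate design;
    an E-optimal design maximizes this criterion."""
    M = sum(w * np.outer(f(x), f(x)) for x, w in zip(points, weights))
    return float(np.linalg.eigvalsh(M).min())

# simple linear regression, f(x) = (1, x), design region [-1, 1]
f = lambda x: np.array([1.0, x])
balanced = e_criterion([-1.0, 1.0], [0.5, 0.5], f)  # M is the 2x2 identity
lopsided = e_criterion([0.0, 1.0], [0.5, 0.5], f)
# the balanced two-point design has the larger smallest eigenvalue
```

The paper's contribution is an explicit characterization of such optimal designs via Chebyshev approximation rather than numerical search.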

8.
A typology of mixed methods research designs
The mixed methods paradigm is still in its adolescence, and, thus, is still relatively unknown and confusing to many researchers. In general, mixed methods research represents research that involves collecting, analyzing, and interpreting quantitative and qualitative data in a single study or in a series of studies that investigate the same underlying phenomenon. Over the last several years, a plethora of research designs have been developed. However, the number of designs that currently prevail leaves the doctoral student, the beginning researcher, and even the experienced researcher who is new to the field of mixed methods research with the challenge of selecting optimal mixed methods designs. This paper presents a three-dimensional typology of mixed methods designs that represents an attempt to rise to the challenge of creating an integrated typology of mixed methods designs. An example for each design is included as well as a notation system that fits our eight-design framework. This paper won the James E. McLean outstanding paper award.

9.
Carlos N. Bouza, Metrika (2009), 70(3): 267–277
This paper is devoted to the analysis of the estimation of the mean of a sensitive variable. The use of a randomized response (RR) procedure gives the interviewee confidence that his or her privacy is protected. We assume that a simple random sampling with replacement design is used for selecting a sample. The behavior of the RR procedure, when ranked set sampling is the design used, is developed under three different ranking criteria. The usual gain in accuracy associated with the use of ranked set sampling is exhibited by only one of the designs. The behavior of the models is illustrated using data provided by a study of samples of persons infected with AIDS.
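The abstract does not specify which RR procedure is used; as an illustration of the general randomized response idea, here is Warner's classic estimator (an assumption for illustration, not necessarily the paper's model):

```python
def warner_estimate(yes_proportion, p_design):
    """Warner's randomized response estimator of a sensitive proportion pi.
    Each respondent answers the sensitive question with probability
    p_design and its complement otherwise, so that
    P(yes) = p*pi + (1 - p)*(1 - pi); solving for pi gives the estimator."""
    if p_design == 0.5:
        raise ValueError("p_design must differ from 0.5")
    return (yes_proportion - (1 - p_design)) / (2 * p_design - 1)

# hypothetical survey: 40% "yes" answers under a p = 0.7 randomizing device
pi_hat = warner_estimate(0.4, 0.7)   # (0.4 - 0.3) / 0.4 = 0.25
```

Because the respondent's answer never reveals which question was posed, privacy is protected, at the cost of inflated variance relative to direct questioning.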

10.

This study was conducted to explain the contextual factors associated with the decline in the total fertility rate (TFR) in order to help policymakers. A qualitative approach and Leichter's contextual analysis framework were applied. Participants were selected using a purposive sampling method, and interviews continued until data saturation was reached. Individuals with knowledge of and perspectives on population policies were included in the study to improve the research credibility, and data validity was supported by seeking maximum variety in sample selection. The results were classified into four groups: situational, structural, cultural, and environmental factors. Situational factors included political sanctions, drought, and road accidents. Structural factors involved government policies, the absence of monitoring, inattention to required conditions, housing status, employment status, economic status, and other issues. Cultural factors comprised seven categories, including divorce, socio-economic development, women's employment, marriage age, and urbanization; environmental factors included international treaties and Western influence. By recognizing and better understanding the factors that reduce fertility, policymakers and administrators in the field of demographic policy can devise more accurate strategies to increase the TFR.

11.
The most common form of data for socio-economic studies comes from survey sampling. Often the designs of such surveys are complex and use stratification as a method for selecting sample units. A parametric regression model is widely employed for the analysis of such survey data. However the use of a parametric model to represent the relationship between the variables can be inappropriate. A natural alternative is to adopt a nonparametric approach. In this article we address the problem of estimating the finite population mean under stratified sampling. A new stratified estimator based on nonparametric regression is proposed for stratification with proportional allocation, optimum allocation and post-stratification. We focus on an educational and labor-related context with natural populations to test the proposed nonparametric method. Simulated populations have also been considered to evaluate the practical performance of the proposed method.
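A hedged sketch of the two ingredients above: the design-based stratified mean and a crude local-average stand-in for the nonparametric regression component (the paper's actual estimator is more refined; the names and data here are illustrative only):

```python
def stratified_mean(strata_sizes, stratum_means):
    """Design-based stratified estimator of the finite population mean:
    ybar_st = sum_h (N_h / N) * ybar_h."""
    N = sum(strata_sizes)
    return sum(Nh / N * yh for Nh, yh in zip(strata_sizes, stratum_means))

def local_mean(x_sample, y_sample, x0, bandwidth):
    """Crude nonparametric regression estimate at x0 (a local average),
    standing in for the smoother a model-assisted estimator would use."""
    near = [y for x, y in zip(x_sample, y_sample) if abs(x - x0) <= bandwidth]
    return sum(near) / len(near)

# hypothetical: two strata of sizes 600 and 400 with sample means 10 and 20
est = stratified_mean([600, 400], [10.0, 20.0])  # 0.6*10 + 0.4*20 = 14.0
# hypothetical auxiliary data: smooth the y-x relationship near x0 = 2
fit = local_mean([1.0, 2.0, 3.0, 10.0], [1.1, 2.1, 2.8, 9.9], 2.0, 1.0)
```

In a model-assisted version, fitted values from the smoother replace or adjust the raw stratum means; the design weights N_h/N are retained.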

12.
Counting the number of units is not always practical during the sampling of particulate materials: it is often much easier to sample a fixed volume or fixed mass of particles. Hence, a class of sampling designs is proposed which leads to samples that have approximately a constant mass or a constant volume. For these sampling designs, estimators were derived which are a ratio of arbitrary sample totals. A Taylor expansion was used to obtain a first-order approximation for the expected value and variance in the limit of a large batch-to-sample size ratio. Furthermore, a π-estimator for a ratio of batch totals was found by deriving expressions for the first- and second-order inclusion probabilities. Practical application of the π-estimator is limited because it requires inaccessible batch information. However, when the denominator of the estimated batch ratio is the batch size, the π-estimator becomes equal to a sample total divided by the sample size in the limit of a large sample-to-particle size ratio. As a consequence, the obtained sample ratio becomes an unbiased estimator for the corresponding batch ratio. Retaining unbiasedness, the Horvitz–Thompson estimator for the variance, which also contains inaccessible batch information, is replaced by an estimator containing sample information only. Practical application of this estimator is illustrated for the sampling of slag produced during the production of steel.
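A minimal sketch of a ratio-of-sample-totals estimator with a first-order Taylor (linearization) variance approximation, in the spirit of the abstract above (simplified: equal-probability sampling and no finite population correction; the data are hypothetical):

```python
def ratio_estimate(y, x):
    """Estimate a batch ratio sum(Y)/sum(X) by the corresponding ratio
    of sample totals, with a first-order Taylor variance approximation."""
    n = len(y)
    r = sum(y) / sum(x)
    xbar = sum(x) / n
    # linearization residuals e_i = y_i - r * x_i
    e = [yi - r * xi for yi, xi in zip(y, x)]
    s2 = sum(ei ** 2 for ei in e) / (n - 1)
    var = s2 / (n * xbar ** 2)  # fpc and unequal weights omitted for brevity
    return r, var

r, v = ratio_estimate([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
# y is exactly proportional to x, so r == 2 and the variance term vanishes
```

The paper's π-estimator additionally weights each unit by its inclusion probability; this sketch shows only the self-weighting limit the abstract describes.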

13.
In this paper a sampling scheme for selecting a sample of size two is formulated in which the inclusion probability of a population unit is proportional to its size.
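For a without-replacement design of size n = 2 with inclusion probabilities proportional to size, the target first-order inclusion probabilities are pi_i = 2 * x_i / sum(x), provided no unit's size exceeds half the total. A small sketch computing these targets (the paper's actual drawing mechanism is not reproduced here):

```python
def pps_inclusion_probabilities(sizes, n=2):
    """Target first-order inclusion probabilities pi_i = n * x_i / sum(x)
    for a without-replacement design of fixed size n; this requires every
    pi_i <= 1, i.e. no unit's size may exceed sum(x) / n."""
    total = sum(sizes)
    probs = [n * x / total for x in sizes]
    if any(p > 1 for p in probs):
        raise ValueError("a unit's size exceeds total/n; pi_i would exceed 1")
    return probs

# hypothetical unit sizes
probs = pps_inclusion_probabilities([10, 20, 30, 40], n=2)
# probs sums to n = 2: [0.2, 0.4, 0.6, 0.8]
```

Constructing a scheme whose second-order inclusion probabilities also behave well is the nontrivial part such papers address.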

14.
In-depth interviewing is a promising method. Alas, traditional in-depth interview sample designs prohibit generalizing. Yet, after acknowledging this limitation, in-depth interview studies generalize anyway. Generalization appears unavoidable; thus, sample design must be grounded in plausible ontological and epistemological assumptions that enable generalization. Many in-depth interviewers reject such designs. The paper demonstrates that traditional sampling for in-depth interview studies is indefensible given plausible ontological conditions, and engages the epistemological claims that purportedly justify traditional sampling. The paper finds that the promise of in-depth interviewing will go unrealized unless interviewers adopt ontologically plausible sample designs. Otherwise, in-depth interviewing can only provide existence proofs, at best.

15.
The sampling designs used in organizational research have been less than consistent across different reported studies. In this analysis we examine the reported relationships among several key organizational variables in ten separate previously published studies to determine the degree to which major differences in sample designs have influenced findings. We isolate that portion of reported associations due to unique characteristics of the samples and report the association these ‘sample design effects’ have with particular sample designs. Results indicate that homogeneous samples of organizations inflate reported relationships yet leave significant sources of variation uncontrolled in sample selection. An alternative to sampling of homogeneous organizations is suggested by the fact that larger and probabilistically selected samples are also associated with larger reported relationships among organizational variables.

16.
Auditing Quality of Research in Social Sciences
A growing body of studies involves complex research processes facing many interpretations and iterations during the analyses. Complex research generally has an explorative, in-depth qualitative nature. Because these studies rely less on standardized procedures of data gathering and analysis, it is often not clear how quality was ensured or assured, and one cannot easily find techniques suitable for assessing the quality of such complex research processes. In this paper, we discuss and present a suitable validation procedure. We first discuss how ‘diagnosing’ quality involves three generic criteria. Next, we present findings of previous research on possible procedures to assure the quality of research in social sciences. We introduce the audit procedure designed by Halpern [(1983) Auditing Naturalistic Inquiries: The Development and Application of a Model. Unpublished doctoral dissertation, Indiana University], which we found an appropriate starting point for a suitable procedure for quality judgment. Subsequently, we present a redesign of the original procedure, with accompanying guidelines for the researcher (the auditee) and for the evaluator of the quality of the study (the auditor). With that design, we aim to enable researchers to bring forward their explorative qualitative studies as stronger and equally valuable compared to studies that can rely on standardized procedures.

17.
p‐Values are commonly transformed to lower bounds on Bayes factors, so‐called minimum Bayes factors. For the linear model, a sample‐size adjusted minimum Bayes factor over the class of g‐priors on the regression coefficients has recently been proposed (Held & Ott, The American Statistician 70(4), 335–341, 2016). Here, we extend this methodology to logistic regression to obtain a sample‐size adjusted minimum Bayes factor for 2 × 2 contingency tables. We then study the relationship between this minimum Bayes factor and two‐sided p‐values from Fisher's exact test, as well as less conservative alternatives, with a novel parametric regression approach. It turns out that for all p‐values considered, the maximal evidence against the point null hypothesis is inversely related to the sample size. The same qualitative relationship is observed for minimum Bayes factors over the more general class of symmetric prior distributions. For the p‐values from Fisher's exact test, the minimum Bayes factors do not, on average, tend to the large‐sample bound as the sample size becomes large, but for the less conservative alternatives the large‐sample behaviour is as expected.
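The paper's sample-size adjusted bound is not reproduced here; as a baseline, the classic sample-size-free minimum Bayes factor bound of Sellke, Bayarri and Berger, BF >= -e * p * ln(p) for p < 1/e, can be sketched as:

```python
import math

def min_bayes_factor(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor in favour
    of a point null hypothesis, given a two-sided p-value:
    -e * p * ln(p) for p < 1/e, and 1 otherwise."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.e * p * math.log(p) if p < 1 / math.e else 1.0

bf = min_bayes_factor(0.05)
# roughly 0.41: evidence much weaker than the "1 in 20" a p-value suggests
```

The abstract's point is that, unlike this fixed bound, the sample-size adjusted minimum Bayes factors shrink the maximal evidence against the null as n grows.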

18.
Post‐stratification weighting is a technique used in public opinion polling to minimize discrepancies between population parameters and realized sample characteristics. The current paper provides a weighting tutorial to organizational surveyors who may otherwise be unfamiliar with the rationale behind the practice as well as “when and how to do” such weighting. The primary reasons to weight include: (1) reducing the effect of frame, sampling, and nonresponse bias in point estimates, and, relatedly, (2) correcting for aggregation error resulting from over‐ and underrepresentation of constituent groups. We briefly compare and contrast traditions within public opinion and organizational polling contexts and present a hybrid taxonomy of sampling procedures that organizational surveyors may find useful in situating their survey efforts within a methodological framework. Next, we extend the existing HRM literature focused on survey nonresponse to a broader lens concerned with population misrepresentation. It is from this broadened methodological framework that we introduce the practice of weighting as a remedial strategy for misrepresentation. We then provide sample weighting algorithms and standard error corrections that can be applied to organizational survey data and make our data and procedures available to individuals who may wish to use our examples as they learn “how to weight.” © 2018 Wiley Periodicals, Inc.
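A minimal sketch of the cell-weighting step described above: each post-stratification cell receives a weight equal to its known population share divided by its realized sample share (the cell names and numbers below are hypothetical):

```python
def poststratification_weights(pop_shares, sample_counts):
    """Cell weight = population share / realized sample share.
    Applying these weights makes the weighted sample reproduce the
    known population distribution across the cells."""
    n = sum(sample_counts.values())
    return {cell: pop_shares[cell] / (sample_counts[cell] / n)
            for cell in sample_counts}

w = poststratification_weights(
    pop_shares={"men": 0.5, "women": 0.5},
    sample_counts={"men": 200, "women": 300},
)
# men are underrepresented (0.4 of the sample), so their weight exceeds 1
```

With many crossed cells, raking (iterative proportional fitting) is used instead of direct cell weighting; the principle is the same.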

19.
The purpose of this mixed methods case study was to examine the generalization practices in qualitative research published in a reputable qualitative journal. To accomplish this, all qualitative research articles published in Qualitative Report since its inception in 1990 (n = 273) were examined. A quantitative analysis of all 125 empirical qualitative research articles revealed that a significant proportion (i.e., 29.6%) of studies involved generalizations beyond the underlying sample that were made inappropriately by the author(s). A qualitative analysis identified the types of over-generalizations that occurred, which included making general recommendations for future practice and providing general policy implications based only on a few cases. Thus, a significant proportion of articles published in Qualitative Report lack what we call interpretive consistency.

20.
The advent of massive amounts of textual, audio, and visual data has spurred the development of econometric methodology to transform qualitative sentiment data into quantitative sentiment variables, and to use those variables in an econometric analysis of the relationships between sentiment and other variables. We survey this emerging research field and refer to it as sentometrics, which is a portmanteau of sentiment and econometrics. We provide a synthesis of the relevant methodological approaches, illustrate with empirical results, and discuss useful software.
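A deliberately simple sketch of the pipeline described above: lexicon-based scoring of texts, then aggregation into a sentiment time series usable as an econometric variable (the lexicon and documents are toy examples; actual sentometrics methods are far richer):

```python
def sentiment_series(documents, lexicon):
    """Turn dated texts into a quantitative sentiment variable:
    score each document by the mean lexicon value of its matched words,
    then average the document scores within each date."""
    by_date = {}
    for date, text in documents:
        scores = [lexicon[w] for w in text.lower().split() if w in lexicon]
        if scores:  # skip documents with no lexicon matches
            by_date.setdefault(date, []).append(sum(scores) / len(scores))
    return {d: sum(v) / len(v) for d, v in sorted(by_date.items())}

# toy lexicon and dated documents
lex = {"good": 1.0, "strong": 1.0, "weak": -1.0, "bad": -1.0}
series = sentiment_series(
    [("2024-01", "strong growth good outlook"),
     ("2024-01", "weak demand"),
     ("2024-02", "bad results")],
    lex,
)
# series == {"2024-01": 0.0, "2024-02": -1.0}
```

The resulting series can then enter a regression or forecasting model alongside conventional economic variables.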


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)