Similar Documents
20 similar documents found.
1.
This research note addresses the challenge of how to optimally measure acquiescence response style (ARS) and extreme response style (ERS). This is of crucial importance in assessing results from studies that have tried to identify antecedents of response styles (such as age, education level, national culture). Using survey data from the Netherlands, a comparison is made between the traditional method and a more recently proposed method of measuring ARS and ERS (i.e., the convergent validity across both methods is assessed). The traditional method is based on an ad hoc set of related items. The alternative method uses a set of randomly sampled items to optimize heterogeneity and representativeness of the items. It is found that the traditional method may lead to response style measures that are suboptimal for estimating levels of ARS and ERS as well as relations of ARS and ERS with other variables (like hypothesized antecedents). Recommendations on how to measure response styles are provided.
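As a rough illustration of how such response-style indices are commonly computed (a minimal Python sketch; the scoring conventions below are standard practice, not necessarily this note's exact operationalization, and all names are ours): ARS is often scored as the proportion of items answered with an agreement category, ERS as the proportion answered with an endpoint category, over a heterogeneous item set.

```python
import numpy as np

def response_style_indices(X, agree_codes=(4, 5), extreme_codes=(1, 5)):
    """Per-respondent ARS and ERS indices from an n x k matrix X of
    5-point Likert responses (one row per respondent).

    ARS: proportion of items answered with an agreement category.
    ERS: proportion of items answered with an endpoint category.
    """
    ars = np.isin(X, agree_codes).mean(axis=1)
    ers = np.isin(X, extreme_codes).mean(axis=1)
    return ars, ers

# Toy usage: 3 respondents, 6 randomly sampled heterogeneous items.
X = np.array([[5, 5, 4, 5, 1, 5],
              [3, 2, 3, 4, 3, 2],
              [1, 5, 1, 5, 1, 5]])
ars, ers = response_style_indices(X)
print(ars)  # respondent 1 agrees on 5 of 6 items
print(ers)  # respondent 3 uses only endpoints -> ERS = 1.0
```

The note's point is that the item set X matters: a random, heterogeneous sample of items separates style from content better than an ad hoc set of related items.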

2.
In survey research, acquiescence response style/set (ARS) and extreme response style/set (ERS) may distort the measurement of attitudes. How response bias is evoked is still a subject of research. A key question is whether it may be evoked by external factors (e.g. test conditions or fatigue) or whether it could be the result of internal factors (e.g. personality or social characteristics). In the first part of this study we explore whether scale length—the manipulated test condition—influences the occurrence of ERS and/or ARS, by varying scale length from 5 to 11 categories. To this end we apply a latent class factor model that allows for diagnosing and correcting for ERS and ARS simultaneously. Results show that ERS occurs regardless of scale length. Furthermore, we find only weak evidence of ARS. In a second step we check whether ERS might reflect an internal personal style by (a) linking it to external measures of ERS, and (b) correlating it with a personality profile and socio-demographic characteristics. Results show that ERS is reasonably stable across questionnaires and that it is associated with the selected personality profile and with age.

3.
Every organisation is unique and thus practises different cultural values. Determining the values that constitute an organisation's culture is a challenging task. This paper therefore expounds the value-based culture related to organisational performance, drawing on a literature review and experts' views. The study aimed to test the psychometric properties of a questionnaire of value-based culture items for performance measurement, based on the value-based total performance excellence model. 400 questionnaires were distributed at a selected Institution of Higher Learning (IHL), a public university in Malaysia. The data collected were analysed using predictive analytics software and analysis of moment structures software, both version 18. A Structural Equation Modeling (SEM) technique, the confirmatory factor analysis (CFA) approach, was employed to test the 6-factor hypothesized model of value-based culture, comprising the values of citizenship, consultation, caring, trust, respect and quality. The results suggested that 2 of the 6 proposed core values, consultation and trust, dominantly explained the culture of the selected university. The findings also pave the way for empowering value-based cultures, especially at IHLs. However, future research should be conducted to reaffirm this model as representing the values embraced by universities in Malaysia.

4.
This paper describes Mokken scale analysis as a method for assessing the unidimensionality of a set of items. As a nonparametric stochastic version of Guttman scale analysis, the Mokken model provides a useful starting point in scale construction since it does not impose severe restrictions on the functional form of the item trace lines. It requires only that the item trace lines are monotonically increasing and that they do not cross. After describing the Mokken method, we illustrate it by analyzing six abortion items from the 1975–1984 NORC General Social Surveys. In contrast to earlier parametric analyses of these items (regular and probit factor analyses), we find that these items form a single dimension. We argue that the two-dimensional solution of these earlier analyses is an artifact of the differences in the difficulty of the items.
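For readers unfamiliar with Mokken scaling, here is a minimal sketch of Loevinger's H coefficient, the scalability measure underlying it (the definition is standard; the function name and toy data are ours):

```python
import numpy as np

def loevinger_H(X):
    """Loevinger's scalability coefficient H for an n x k matrix of
    dichotomous (0/1) item scores, the basis of Mokken scale analysis.

    H = sum of observed item-pair covariances divided by the sum of the
    maximum covariances attainable given the item marginals (i.e. under
    a perfect Guttman scale).  H = 1 for a perfect Guttman scale.
    """
    n, k = X.shape
    p = X.mean(axis=0)
    cov = np.cov(X, rowvar=False, bias=True)
    num, den = 0.0, 0.0
    for i in range(k):
        for j in range(i + 1, k):
            pi, pj = sorted((p[i], p[j]))   # pi <= pj
            num += cov[i, j]
            den += pi * (1.0 - pj)          # max covariance given marginals
    return num / den

# Toy usage: a perfect Guttman pattern on 4 items yields H = 1.
X = np.array([[1, 1, 1, 1],
              [1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0]])
print(loevinger_H(X))
```

In applied work, items with H above a conventional threshold (often 0.3) are retained as forming a Mokken scale.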

5.
A supersaturated design is a factorial design in which the number of effects to be estimated is greater than the number of runs. It is used in many experiments for screening purposes, i.e., for studying a large number of factors and identifying the active ones. In this paper, we propose a method for screening out the important factors from a large set of potentially active variables through the symmetrical uncertainty measure combined with the information gain measure. We develop an information-theoretic analysis method using Shannon entropy and other entropy measures such as Rényi entropy, Havrda–Charvát entropy, and Tsallis entropy, assuming generalized linear models for a Bernoulli response. This method is quite advantageous as it enables us to use supersaturated designs for analyzing data under generalized linear models. An empirical study demonstrates that the method performs well, giving low Type I and Type II error rates for every entropy measure used. Moreover, the proposed method is more efficient, in terms of error rates, than the existing ROC methodology for identifying the significant factors for a dichotomous response.
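A minimal sketch of the symmetrical uncertainty measure named in the abstract, in its Shannon-entropy form (the paper also uses Rényi, Havrda–Charvát, and Tsallis entropies; the screening example below is a toy of our own):

```python
import numpy as np
from collections import Counter

def entropy(x):
    """Shannon entropy (in bits) of a discrete sample."""
    n = len(x)
    return -sum((c / n) * np.log2(c / n) for c in Counter(x).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1].
    I(X; Y) = H(X) + H(Y) - H(X, Y) is the information gain."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))
    info_gain = hx + hy - hxy
    return 2.0 * info_gain / (hx + hy)

# Toy screening: factor a drives the Bernoulli response, factor b is noise.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 200)
b = rng.integers(0, 2, 200)
y = np.where(rng.random(200) < 0.9, a, 1 - a)   # y mostly follows a
print(symmetrical_uncertainty(a, y))  # high   -> screen in
print(symmetrical_uncertainty(b, y))  # near 0 -> screen out
```

Factors with high SU against the response are declared active, which is how a supersaturated design with more factors than runs can still be analyzed.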

6.
Appropriate modelling of Likert‐type items should account for the scale level and for the specific role of the neutral middle category, which is present in most Likert‐type items in common use. Powerful hierarchical models that account for both aspects are proposed. To avoid biased estimates, the models separate out the neutral category when modelling the effects of explanatory variables on the outcome. The main model put forward uses binary response models as building blocks in a hierarchical way. It has the advantage that it can easily be extended to include response style effects and non‐linear smooth effects of explanatory variables. By a simple transformation of the data, available software for binary response variables can be used to fit the model. The proposed hierarchical model can be used to investigate the effects of covariates on single Likert‐type items and also to analyse a combination of items. For both cases, estimation tools are provided. The usefulness of the approach is illustrated by applying the methodology to a large data set.
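To illustrate the kind of data transformation hinted at in the abstract, here is a hypothetical sketch that expands a 5-point response into hierarchical binary pseudo-observations; the paper's exact decomposition may differ, and all names are ours:

```python
import pandas as pd

def expand_likert(y, levels=5):
    """Expand a 5-point Likert response into the binary pseudo-observations
    of a hierarchical decision model that separates the neutral category:
      node 1: non-neutral (1) vs neutral (0);
      node 2 (if non-neutral): agree side (1) vs disagree side (0);
      node 3 (if non-neutral): extreme (1) vs moderate (0).
    Each (node, value) row can be fed to standard binary-response software
    with node indicators as covariates.
    """
    mid = (levels + 1) // 2          # category 3 on a 5-point scale
    rows = [("non_neutral", int(y != mid))]
    if y != mid:
        rows.append(("agree_side", int(y > mid)))
        rows.append(("extreme", int(y in (1, levels))))
    return rows

data = [expand_likert(y) for y in [3, 5, 2, 4]]
print(pd.DataFrame([(i, n, v) for i, obs in enumerate(data) for n, v in obs],
                   columns=["obs", "node", "value"]))
```

Because every node is a plain binary outcome, any logistic or probit routine can fit the whole hierarchy on the stacked pseudo-data, which is the practical appeal the abstract mentions.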

7.
To avoid the response set phenomenon, handbooks suggest composing item batteries of both positively and negatively formulated items. While doing a factor analysis, however, researchers usually neglect the effect this may have on the fit of the factor solution. In that case, the fit between a plausibly interpretable factor solution and the observed correlations may prove very unsatisfactory. In our opinion, however, when both positively and negatively formulated items are used, it seems unrealistic to base the factor analysis on the content of the items only; the difference in phrasing may also influence the intercorrelations of the items. In this paper we discuss some possible additional assumptions that follow from this consideration and examine to what extent the resulting analysis decisions may improve the fit.

8.
This study investigated the performance of multiple imputation with the Expectation-Maximization (EM) algorithm and the Markov chain Monte Carlo (MCMC) method. We compared the accuracy of imputation on real data, set up two extreme scenarios, and conducted both empirical and simulation studies to examine the effects of missing data rates and of the number of items used for imputation. In the empirical study, the scenario represented the item with the highest missing rate from the domain with the fewest items. In the simulation study, we selected the domain with the most items, and the imputed item had the lowest missing rate. The empirical results showed no significant difference between the EM algorithm and the MCMC method for item imputation, and the number of items used for imputation likewise had little impact. Compared with the actual observed values, the middle responses of 3 and 4 were over-imputed, and the extreme responses of 1, 2 and 5 were under-represented. Similar patterns occurred for domain imputation: again there was no significant difference between the EM algorithm and the MCMC method, and the number of items used for imputation had little impact. In the simulation study, we chose the environmental domain to examine the effects of the following variables: EM algorithm versus MCMC method, missing data rates, and number of items used for imputation. Again, there was no significant difference between the EM algorithm and the MCMC method. The accuracy rates did not significantly decrease as the proportion of missing data increased. The number of items used for imputation contributed somewhat to the accuracy of imputation, but not as much as expected.

9.
Item response theory (IRT) has recently been proposed as a framework to measure deprivation. It allows a latent measure of deprivation to be derived from a set of dichotomous items indicating deprivation, and the determinants of deprivation to be analysed. We investigate further the use of IRT models in the field of deprivation measurement. First, the paper emphasises the importance of item selection and the Mokken Scale Procedure is applied to select the items to be included in the scale of deprivation. Second, we apply the one- and the two-parameter probit IRT models for dichotomous items to two different sets of items, in order to highlight different empirical results. Finally, we introduce a graphical tool, the Item Characteristic Curve (ICC), and analyse the determinants of deprivation in Luxembourg. The empirical illustration is based on the fourth wave of the “Liewen zu Lëtzebuerg” Luxembourg socioeconomic panel (PSELL-3).
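For reference, the one- and two-parameter probit IRT models mentioned above, written as item characteristic curves (standard forms; the notation is ours):

```latex
% One- and two-parameter probit IRT models for dichotomous items,
% written as item characteristic curves (ICCs):
P(y_{ij} = 1 \mid \theta_i) = \Phi(\theta_i - b_j)
\qquad \text{(one-parameter)}
P(y_{ij} = 1 \mid \theta_i) = \Phi\!\bigl(a_j(\theta_i - b_j)\bigr)
\qquad \text{(two-parameter)}
```

Here θ_i is individual i's latent deprivation, b_j the severity of item j, a_j its discrimination, and Φ the standard normal distribution function; the ICC is the plot of this probability against θ.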

10.
The present paper considers some new models for the analysis of multidimensional contingency tables. Although the theoretical background appeared already in Haberman (1974), prescribed conditional interaction (PCIN) models were introduced by Rudas (1987), and their mathematical properties were worked out by Leimer and Rudas (1988). These models are defined by prescribing the values of certain conditional interactions in the contingency table. Conditional interaction is defined here as the logarithm of an appropriately defined conditional odds ratio. This conditional odds ratio is a conditional version of a generalization of the well-known odds ratio of a 2×2 table and of the three-factor interaction term of a 2×2×2 table, and it applies to any number of dimensions and any number of categories of the variables. The well-known log-linear (LL) models are special PCIN models. Estimated frequencies under PCIN models and tests of fit can be computed using existing statistical software (e.g. BMDP). The paper describes the class of PCIN models and compares it to the class of association models of Goodman (1981). As LL models are widely used in the analysis of social mobility tables, the application of more general PCIN models is illustrated there.
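As a concrete anchor (standard definitions in our notation, not necessarily the paper's), the two lowest-dimensional cases of the interaction being generalized are:

```latex
% 2x2 table: log odds ratio.
\log \mathrm{OR} = \log \frac{p_{11}\,p_{22}}{p_{12}\,p_{21}}
% 2x2x2 table: three-factor interaction, the log ratio of the conditional
% odds ratios of variables 1 and 2 across the levels of variable 3.
\lambda_{123} = \log \frac{p_{111}\,p_{221}/(p_{121}\,p_{211})}
                          {p_{112}\,p_{222}/(p_{122}\,p_{212})}
```

A PCIN model fixes the values of such conditional interactions (rather than setting them to zero, as LL models do), which is why LL models arise as the special case with prescribed values of zero.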

11.
A ‘trilemma’ procedure is introduced for collecting ‘dominance data’ (i.e. rankings of a set of items along a scale of relevance, preference, etc.). Trilemmas are three-way forced choices in which the three items comprising each trilemma are selected on the basis of a multidimensional scaling (MDS) solution for the item set, ensuring that each choice is as stark and informative as possible. A questionnaire designed on this principle is easily understood and rapidly administered. The data are convenient to record and show less fluctuation among informants than existing techniques. We demonstrate the procedure with a set of 45 short generalisations about behaviour, designed for assessing child attachment. A three-dimensional ‘map’ of these items was obtained by applying MDS to multiple sets of similarity data. The same structure emerged from English-language and Japanese translations of the items. Thirty trilemmas based on this map were used to rank the items by degree of association with the Japanese concept of amae, characterising the concept in terms of its behavioural correlates.

12.
This article explores which leadership qualities public managers regard as important for public innovation. It is based on a survey of 365 senior public managers in Copenhagen, Rotterdam and Barcelona. Five perspectives on leadership were identified and tested using a number of items. Some of these proved to be more robust than others. Analysis of the three cities reveals a nuanced set of leadership styles, which include a transformational style, and one that is more dedicated to motivating employees, risk-taking and including others in decision-making. This suggests the need for more research on leadership and public-sector innovation.

13.
Standard randomized response (RR) models deal primarily with surveys that require a yes or no response to a sensitive question, or a choice among a set of nominal categories. In contrast, Eichhorn and Hayre (1983) considered survey models involving a quantitative response variable and proposed an RR technique for them. Such models are very useful in studies involving a measured response variable that is highly sensitive in nature. Eichhorn and Hayre obtained an unbiased estimate of the expectation of the quantitative response variable of interest. In this note we propose a procedure that uses a design parameter (controlled by the experimenter) to generalize Eichhorn and Hayre's results. The procedure yields an estimate of the desired expectation with a uniformly smaller variance.
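A toy simulation of the base Eichhorn–Hayre multiplicative scrambling scheme (our own sketch; the generalized design-parameter procedure of this note is not reproduced here, and the distributions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sensitive quantitative variable (never seen by the interviewer).
x = rng.gamma(shape=2.0, scale=500.0, size=5000)   # e.g. undeclared income

# Multiplicative scrambling: each respondent privately draws s from a
# known distribution and reports only z = x * s, never x itself.
mu_s, sd_s = 1.0, 0.25          # known scrambler moments (design choice)
s = rng.normal(mu_s, sd_s, size=5000)
z = x * s

# Unbiased estimator of E[x]: E[z] / E[s], since s is independent of x.
print(x.mean(), z.mean() / mu_s)
```

Shrinking the scrambler's variance lowers the variance of the estimate at the cost of privacy protection, which is exactly the kind of trade-off an experimenter-controlled design parameter can tune.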

14.
Consider n sets of objects, each set consisting of m distinct types (for instance, n place settings, each made up of m distinct dishes and silverware pieces). Suppose s items are drawn at random from the mn items. The number of complete sets (each consisting of all m types) in the sample of s is asymptotically Poisson distributed with parameter (a/m)^m if s = a·n^(1−1/m) and n → ∞. This fact can be interpreted in terms of a certain limit theorem for a sequence of i.i.d. Bernoulli random variables.
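A quick Monte Carlo check of the stated limit (our own sketch; the parameter names follow the abstract):

```python
import numpy as np

rng = np.random.default_rng(2)

def complete_sets(n, m, s):
    """Draw s items from n sets x m types without replacement;
    count the sets from which all m types were drawn."""
    idx = rng.choice(n * m, size=s, replace=False)  # item (i*m + j) = type j of set i
    sets = idx // m
    return np.sum(np.bincount(sets, minlength=n) == m)

n, m, a = 2000, 3, 2.0
s = int(a * n ** (1 - 1 / m))           # s = a * n^(1 - 1/m)
counts = [complete_sets(n, m, s) for _ in range(2000)]
print(np.mean(counts), (a / m) ** m)    # both near (a/m)^m = 0.296
```

Heuristically, each set is complete with probability roughly (s/(mn))^m, so the expected count is about n·(s/(mn))^m = (a/m)^m, matching the Poisson parameter.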

15.
We consider revenue-optimal mechanism design for the case with one buyer and two items, when the buyer's valuations are independent and additive. We obtain two sets of structural results on the optimal mechanisms, which can be summarized in one conclusion: under certain distributional conditions, the optimal mechanisms have simple menus. The first set of results states that, under a condition requiring the types to be concentrated on lower values, the optimal menu can be sorted in ascending order. Applying the theorem, we derive a revenue-monotonicity theorem stating that stochastically dominated distributions yield less revenue. The second set of results states that, under certain conditions requiring the types to be distributed more evenly or concentrated on higher values, the optimal mechanisms have only a few menu items. Our first result here states that, for certain such distributions, the optimal menu contains at most 4 menu items; the condition admits power density functions. Our second result works under a weaker condition, for which the optimal menu contains at most 6 menu items. Our last result in this set concerns the unit-demand setting: it states that for uniform distributions, the optimal menu contains at most 5 menu items.

16.
A statistical process control chart named the mixture cumulative count control chart (MCCC-chart) is suggested in this study, motivated by the existing cumulative count control chart (CCC-chart). The MCCC-chart is constructed from the distribution function of a two-component mixture of geometric distributions, using the number of items inspected until a defective item is observed, n, as the plotting statistic. We observe that the MCCC-chart performs equivalently to the CCC-chart when the number of defective items follows a geometric distribution, and better than the CCC-chart when the number of defective items produced by a process follows a mixture geometric model. The MCCC-chart may be considered a generalized version of the CCC-chart.
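A minimal sketch of how probability control limits could be derived from the mixture-geometric distribution function (the CDF form is standard; the two-sided limit construction and the parameter values are our assumptions, not necessarily the paper's):

```python
import numpy as np

def mixture_geometric_cdf(n, w, p1, p2):
    """CDF of a two-component mixture of geometric distributions:
    F(n) = w*(1-(1-p1)**n) + (1-w)*(1-(1-p2)**n),
    where n is the number of items inspected until the first defective."""
    return w * (1 - (1 - p1) ** n) + (1 - w) * (1 - (1 - p2) ** n)

def mccc_limits(w, p1, p2, alpha=0.0027):
    """Probability control limits for an MCCC-type chart: the smallest n
    with F(n) >= alpha/2 (LCL) and the smallest n with F(n) >= 1 - alpha/2
    (UCL); plotted values of n outside [LCL, UCL] signal a shift."""
    n = np.arange(1, 200_001)
    F = mixture_geometric_cdf(n, w, p1, p2)
    lcl = n[np.searchsorted(F, alpha / 2)]
    ucl = n[np.searchsorted(F, 1 - alpha / 2)]
    return lcl, ucl

# Toy process: 70% of items from a stream with p = 0.001, 30% with p = 0.0001.
print(mccc_limits(w=0.7, p1=0.001, p2=0.0001))
```

Setting w = 1 collapses the mixture to a single geometric distribution, recovering the ordinary CCC-chart as the special case the abstract describes.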

17.
苏连存 《价值工程》2012,31(1):197-198
Many topological indices are closely related to certain physico-chemical properties of molecules, and different topological indices reflect different properties of a molecule. Among them, the Hosoya index is one of the more important topological indices. Let G be a molecular structure graph, i.e., a connected graph with n vertices. The Hosoya index Z(G) of G is the number of subsets of edges such that no two edges in a subset share a vertex in G, i.e., the number of matchings of G, including the empty set. This paper discusses the Hosoya index sequence of a class of four-leaf trees.
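A small sketch computing the Hosoya index by the standard edge recurrence (our own illustration; the paper's four-leaf-tree results are not reproduced):

```python
from functools import lru_cache

def hosoya(edges):
    """Hosoya index Z(G): the number of matchings of G, including the
    empty matching.  Uses the recurrence
        Z(G) = Z(G - e) + Z(G - {u, v})   for any edge e = (u, v),
    since a matching either avoids e or contains it."""
    edges = frozenset(frozenset(e) for e in edges)

    @lru_cache(maxsize=None)
    def z(es):
        if not es:
            return 1                      # only the empty matching
        e = next(iter(es))
        without_e = es - {e}
        # Matchings containing e must exclude every other edge touching e.
        without_uv = frozenset(f for f in without_e if not (f & e))
        return z(without_e) + z(without_uv)

    return z(edges)

# Path P4 (a tree on 4 vertices): matchings are {}, the three single
# edges, and {(1,2),(3,4)}  ->  Z = 5 (Fibonacci, as for all paths).
print(hosoya([(1, 2), (2, 3), (3, 4)]))
```

For paths the recurrence reproduces the Fibonacci numbers, which is why Hosoya index sequences of tree families are natural objects of study.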

18.
This paper provides a framework to test the validity of the static cost-minimizing equilibrium assumptions that form the basis for much of the empirical literature on industrial production. The point of departure in our model is to allow the observed technology to be at a short-run equilibrium where firms minimize variable costs while being constrained by the utilization levels of quasi-fixed factors. The long-run equilibrium is then inferred by minimizing total costs with the optimal adjustment of quasi-fixed factor levels. We use results from the optimization problem to form tests with respect to quantities and prices. In the quantity-space version, departures between the actual and the optimal long-run levels of quasi-fixed factors are tested for statistical significance. A significant non-zero departure implies the rejection of a static equilibrium specification. In the price-space version, the test is cast as a comparison of the market price and the long-run shadow value of a quasi-fixed factor. Although the two versions would give identical results in the non-stochastic case, the rejection powers of these two tests are found to depend on the particular functional form chosen to represent the production process (i.e. the cost function). In an application based on aggregate U.S. manufacturing where capital is taken to be quasi-fixed, we were able to reject the static equilibrium specification. These results cast doubt on the validity of a number of previous empirical studies.
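In symbols (our notation, sketching the standard setup rather than the paper's exact specification):

```latex
% Short-run equilibrium: variable cost minimization given quasi-fixed k.
% Full (long-run) static equilibrium additionally requires the optimal k:
k^{*} = \arg\min_{k}\,\bigl[\,VC(w, y, k) + r_k\,k\,\bigr]
\quad\Longleftrightarrow\quad
-\frac{\partial VC(w, y, k)}{\partial k}\bigg|_{k = k^{*}} = r_k
```

The quantity-space test asks whether the observed k differs significantly from k*; the price-space test asks whether the shadow value −∂VC/∂k differs significantly from the market rental price r_k.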

19.
Factor modelling of a large time series panel has widely proven useful to reduce its cross-sectional dimensionality. This is done by explaining common co-movements in the panel through the existence of a small number of common components, up to some idiosyncratic behaviour of each individual series. To capture serial correlation in the common components, a dynamic structure is used as in traditional (uni- or multivariate) time series analysis of second order structure, i.e. allowing for infinite-length filtering of the factors via dynamic loadings. In this paper, motivated from economic data observed over long time periods which show smooth transitions over time in their covariance structure, we allow the dynamic structure of the factor model to be non-stationary over time by proposing a deterministic time variation of its loadings. In this respect we generalize the existing recent work on static factor models with time-varying loadings as well as the classical, i.e. stationary, dynamic approximate factor model. Motivated from the stationary case, we estimate the common components of our dynamic factor model by the eigenvectors of a consistent estimator of the now time-varying spectral density matrix of the underlying data-generating process. This can be seen as a time-varying principal components approach in the frequency domain. We derive consistency of this estimator in a “double-asymptotic” framework of both cross-section and time dimension tending to infinity. The performance of the estimators is illustrated by a simulation study and an application to a macroeconomic data set.
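Schematically (our notation, following the stationary generalized dynamic factor model):

```latex
% Time-varying dynamic factor structure: common plus idiosyncratic part,
% with an analogous decomposition of the local spectral density at
% rescaled time u = t/T and frequency lambda:
X_t = \chi_t + \xi_t,
\qquad
\Sigma_X(u, \lambda) = \Sigma_\chi(u, \lambda) + \Sigma_\xi(u, \lambda)
```

The common-component spectrum Σχ(u, λ) has reduced rank (the number of dynamic factors), so the leading eigenvectors of a consistent estimator of ΣX(u, λ) recover the common components: time-varying principal components in the frequency domain, as the abstract describes.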

20.
We analyze the effect of Wal-Mart's entry into the grocery market using a unique store-level price panel data set. We use ordinary least squares and two instrumental-variables specifications to estimate the effect of Wal-Mart's entry on competitors' prices of 24 grocery items across several categories. Wal-Mart's price advantage over competitors for these products averages approximately 10%. On average, competitors' response to entry by a Wal-Mart Supercenter is a price reduction of 1–1.2%, mostly due to smaller-scale competitors; the response of the "Big Three" supermarket chains (Albertson's, Safeway, and Kroger) is less than half that size. Low-end grocery stores, which compete more directly with Wal-Mart, cut their prices more than twice as much as higher-end stores. We confirm our results using a falsification exercise, in which we test for Wal-Mart's effect on prices of services that it does not provide, such as movie tickets and dry-cleaning services.
