Similar documents
20 similar documents found.
1.
Gower and Blasius (Quality and Quantity, 39, 2005) proposed the notion of multivariate predictability as a measure of goodness-of-fit in data reduction techniques which is useful for visualizing and screening data. For quantitative variables this leads to the usual sums-of-squares and variance accounted for criteria. For categorical variables, and in particular for ordered categorical variables, they showed how to predict the levels of all variables associated with every point (case). The proportion of predictions which agree with the true category-levels gives the measure of fit. The ideas are very general; as an illustration they used nonlinear principal components analysis. An example of the method is described in this paper using data drawn from 23 countries participating in the International Social Survey Program (1995), paying special attention to two sets of variables concerned with Regional and National Identity. It turns out that the predictability criterion suggests that the fits are rather better than is indicated by “percentage of variance accounted for”.
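A minimal sketch of the fit criterion described above: the measure is simply the proportion of category levels that the low-dimensional solution predicts correctly (the function name and toy data are ours, not the ISSP example from the paper).

```python
# Illustrative sketch of the fit criterion: the proportion of category
# levels that the low-dimensional solution predicts correctly
# (function and toy data are ours, not the ISSP example from the paper).
def prediction_agreement(true_levels, predicted_levels):
    """Fraction of cases whose predicted category matches the true one."""
    matches = sum(t == p for t, p in zip(true_levels, predicted_levels))
    return matches / len(true_levels)

true_levels      = ["low", "mid", "mid", "high", "low"]
predicted_levels = ["low", "mid", "high", "high", "low"]
fit = prediction_agreement(true_levels, predicted_levels)   # 4 of 5 agree
```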

2.
In multivariate analysis, the measure of variance accounted for plays a central role. In this paper, we show that an alternative approach, distance-based multivariate analysis, also yields solutions that can be summarized by a ratio of variances. For classical multivariate analysis, this ratio is equal to the variance accounted for (VAF) and in distance-based multivariate analysis it equals distance accounted for (DAF). We show that DAF in distance-based multivariate analysis can always be made higher than VAF in classical multivariate analysis. This property is illustrated for principal components analysis, multiple correspondence analysis, multiple regression, and analysis of variance.

3.
Statistical agencies often release a masked or perturbed version of survey data to protect the confidentiality of respondents' information. Ideally, a perturbation procedure should provide confidentiality protection without much loss of data quality, so that the released data may practically be treated as original data for making inferences. One major objective is to control the risk of correctly identifying any respondent's records in released data, by matching the values of some identifying or key variables. For categorical key variables, we propose a new approach to measuring identification risk and setting strict disclosure control goals. The general idea is to ensure that the probability of correctly identifying any respondent or surveyed unit is at most ξ, which is pre‐specified. Then, we develop an unbiased post‐randomisation procedure that achieves this goal for ξ>1/3. The procedure allows substantial control over possible changes to the original data, and the variance it induces is of a lower order of magnitude than sampling variance. We apply the procedure to a real data set, where it performs consistently with the theoretical results and quite importantly, shows very little data quality loss.

4.
Several papers have estimated the parameters of Pareto distributions for city sizes in different countries, but only one has attempted to explain the differing magnitudes of these parameters with a set of country-specific explanatory variables. While it is reassuring that there has been some research which advances beyond simple “curve-fitting” to explore the determinants of city size distributions, the existing research uses a two-stage OLS method which yields invalid second-stage standard errors (and, consequently, questionable hypothesis tests). In this paper, we develop candidate one-stage structural models with normal and non-normal errors which accommodate truncated size distributions, potentially Pareto-like shapes, and city-level variables. In general, these new models are nonlinear in parameters. We illustrate with data on U.S. urban areas.

5.
Summary: This paper reviews research situations in medicine, epidemiology and psychiatry, in psychological measurement and testing, and in sample surveys in which the observer (rater or interviewer) can be an important source of measurement error. Moreover, most of the statistical literature on observer variability is surveyed, with attention given to a notational unification of the various models proposed. In the continuous data case, the usual analysis of variance (ANOVA) components of variance models are presented with an emphasis on the intraclass correlation coefficient as a measure of reliability. Other modified ANOVA models, response error models in sample surveys, and related multivariate extensions are also discussed. For the categorical data case, special attention is given to measures of agreement and tests of hypotheses when the data consist of dichotomous responses. In addition, similarities between the dichotomous and continuous cases are illustrated in terms of intraclass correlation coefficients. Finally, measures of agreement, such as kappa and weighted kappa, are discussed in the context of nominal and ordinal data. A proposed unifying framework for the categorical data case is given in the form of concluding remarks.
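As a concrete illustration of one agreement measure surveyed above, here is a minimal computation of Cohen's kappa for two raters giving dichotomous responses (the data and function are our own toy example, not from the review):

```python
# Illustrative sketch of one agreement measure from the review: Cohen's
# kappa for two raters giving dichotomous (0/1) responses (toy data ours).
def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' 0/1 ratings."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    p1 = sum(r1) / n                                   # rater 1 positive rate
    p2 = sum(r2) / n                                   # rater 2 positive rate
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)              # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = [1, 0, 1, 1, 0, 1, 0, 0]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1]
kappa = cohen_kappa(rater1, rater2)   # 0.75 observed vs 0.5 by chance
```

Kappa of 1 indicates perfect agreement; 0 indicates agreement no better than chance.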

6.
A fuzzy-QFD approach to supplier selection
This article suggests a new method that transfers the house of quality (HOQ) approach typical of quality function deployment (QFD) problems to the supplier selection process. To test its efficacy, the method is applied to a supplier selection process for a medium-to-large industry that manufactures complete clutch couplings. The study starts by identifying the features that the purchased product should have (internal variables “WHAT”) in order to satisfy the company's needs, then it seeks to establish the relevant supplier assessment criteria (external variables “HOW”) in order to come up with a final ranking based on the fuzzy suitability index (FSI). The whole procedure was implemented using fuzzy numbers; the application of a fuzzy algorithm allowed the company to define by means of linguistic variables the relative importance of the “WHAT”, the “HOW”-“WHAT” correlation scores, the resulting weights of the “HOW” and the impact of each potential supplier. Special attention is paid to the various subjective assessments in the HOQ process, and symmetrical triangular fuzzy numbers are suggested to capture the vagueness in people's verbal assessments.
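A minimal sketch of the triangular-fuzzy-number arithmetic underlying such a procedure (our own toy version, not the article's full FSI algorithm): linguistic ratings become triangular fuzzy numbers (l, m, u), which are averaged with weights and then defuzzified into a crisp ranking score.

```python
# Illustrative sketch (our own toy, not the article's full FSI algorithm):
# triangular fuzzy numbers (l, m, u) aggregate linguistic scores, and a
# centroid defuzzification turns the fuzzy index into a crisp ranking score.
def tfn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def tfn_scale(a, k):
    return tuple(k * x for x in a)

def tfn_weighted_mean(scores, weights):
    """Weighted mean of triangular fuzzy scores (crisp weights here)."""
    total = sum(weights)
    acc = (0.0, 0.0, 0.0)
    for score, w in zip(scores, weights):
        acc = tfn_add(acc, tfn_scale(score, w / total))
    return acc

def defuzzify(a):
    """Centroid of a triangular fuzzy number."""
    return sum(a) / 3.0

# Linguistic ratings mapped to symmetric triangular fuzzy numbers:
good, fair = (5.0, 7.0, 9.0), (3.0, 5.0, 7.0)
# One supplier scored on two "HOW" criteria, the first twice as important:
fsi = tfn_weighted_mean([good, fair], weights=[2.0, 1.0])
score = defuzzify(fsi)
```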

7.
In this paper, we investigate certain operational and inferential aspects of the invariant Post‐randomization Method (PRAM) as a tool for disclosure limitation of categorical data. Invariant PRAM preserves unbiasedness of certain estimators, but inflates their variances and distorts other attributes. We introduce the concept of strongly invariant PRAM, which does not affect data utility or the properties of any statistical method. However, the procedure seems feasible only in limited situations. We review methods for constructing invariant PRAM matrices and prove that a conditional approach, which can preserve the original data on any subset of variables, yields invariant PRAM. For multinomial sampling, we derive expressions for the variance inflation inflicted by invariant PRAM and for the variances of certain estimators of the cell probabilities, along with their tight upper bounds. We discuss estimation of these quantities, and thereby assess the statistical efficiency loss from applying invariant PRAM. We find a connection between invariant PRAM and creating partially synthetic data using a non‐parametric approach, and compare estimation variance under the two approaches. Finally, we discuss some aspects of invariant PRAM in a general survey context.
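One simple way to build an invariant PRAM matrix, sketched below under our own illustrative construction (not necessarily the paper's), is to mix the identity with resampling from the observed marginal: invariance means the expected released distribution equals the original one.

```python
# Illustrative sketch (not the paper's construction): an invariant PRAM
# transition matrix built by mixing the identity with resampling from the
# marginal p, so the expected released distribution equals the original.
# Row i of P gives Pr(released category j | true category i).
def invariant_pram_matrix(p, theta):
    """Mixture of 'keep the true category' (prob. theta) and 'redraw from p'."""
    k = len(p)
    return [[theta * (i == j) + (1 - theta) * p[j] for j in range(k)]
            for i in range(k)]

def released_distribution(p, P):
    """Expected category distribution after PRAM: p @ P."""
    k = len(p)
    return [sum(p[i] * P[i][j] for i in range(k)) for j in range(k)]

p = [0.5, 0.3, 0.2]                       # true cell probabilities
P = invariant_pram_matrix(p, theta=0.8)   # theta: chance a value is kept
released = released_distribution(p, P)    # equals p, up to rounding
```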

8.
In this paper, some new indices for ordinal data are introduced. These indices have been developed to measure the degree of concentration on the “small” or the “large” values of a variable whose level of measurement is ordinal. Their advantage over other approaches is that they ascribe unequal weights to each class of values. Although they constitute a useful tool in various fields of application, the focus here is on their use in sample surveys, specifically in situations where one is interested in taking into account the “distance” of the responses from the “neutral” category in a given question. The properties of these indices are examined and methods for constructing confidence intervals for their actual values are discussed. The performance of these methods is evaluated through an extensive simulation study.

9.
Using telecommuting as a case study, we demonstrate that definitions, measurement instruments, sampling and sometimes vested interests affect the quality and utility even of seemingly objective and “measurable” data. Little consensus exists with respect to the definition of telecommuting, or to possible distinctions from related terms such as teleworking. Such a consensus is unlikely, since the “best” definition of telecommuting depends on one’s point of reference and purpose. However, differing definitions confound efforts to measure the amount of telecommuting and how it is changing over time. This paper evaluates estimates of the amounts of telecommuting occurring in the U.S. obtained from several different sources: the U.S. Census, the American Housing Survey, several Work at Home supplements to the Current Population Survey, a series of market research surveys, and the trade association-sponsored Telework America surveys. Many of the issues raised here are transferable to other contexts, and indirectly serve as suggestions for improving data collection in the future.

10.
The paper proposes a general framework for modeling multiple categorical latent variables (MCLV). The MCLV models extend latent class analysis and latent transition analysis to allow flexible measurement and structural components between endogenous categorical latent variables and exogenous covariates. Therefore, modeling frameworks from conventional structural equation models, for example CFA and MIMIC models, are feasible in the MCLV setting. Parameter estimation for the MCLV models is performed using the generalized expectation–maximization (E–M) algorithm. In addition, the adjusted Bayesian information criterion aids model selection. A substantive study of reading development is analyzed to illustrate the feasibility of MCLV models.

11.
Akihiro, Takeshi, Shoko. 《Socio》, 2009, 43(4): 263–273
This paper presents a Data Envelopment Analysis/Malmquist index (DEA/MI) analysis of the change in quality-of-life (QOL), which is defined as the state of a social system as measured by multiple social-indicators. Applying panel data from Japan's 47 prefectures for the period 1975–2002, we identify significant movement in the country's overall QOL using a “cumulative” frontier shift index. Results suggest that Japan's QOL rose during the so-called “bubble economy years” (second half of the 1980s), and then dropped in the succeeding “lost-decade” (1990s). We also identify those prefectures considered most “responsible” for the shift(s) in QOL. Moreover, the use of both upper- and lower-bound DEAs enabled an evaluation of both “good” and “bad” movements in QOL.

12.
Increasing human and social capital by applying job embeddedness theory
Most modern lives are complicated. When employees feel that their organization values the complexity of their entire lives and tries to make it a little easier for them to balance all the conflicting demands, the employees tend to be more productive and stay with those organizations longer. Job embeddedness captures some of this complexity by measuring both the on-the-job and off-the-job components that most contribute to a person's staying. Research evidence as well as ample anecdotal evidence (discussed here and in other places) supports the value of using the job embeddedness framework for developing a world-class retention strategy based on corporate strengths and employee preferences.

To execute their corporate strategy effectively, different organizations require different knowledge, skills and abilities from their people. And because of occupational, geographic, demographic or other differences, these people will have needs that differ from those at other organizations. For that reason, the retention program of the week from international consultants won’t always work. Instead, organizations need to carefully assess the needs and desires of their unique employee base. Then, these organizations need to determine which of these needs they can address in a cost-effective fashion (conferring more benefits than the cost of the program). Many times this requires an investment that will pay off over the longer term, not just a quarter or even a year. Put differently, executives will need to carefully understand the fully loaded costs of turnover (loss of tacit knowledge, reduced customer service, slowed production, lost contracts, lack of internal candidates to lead the organization in the future, etc., in addition to the obvious costs like recruiting, selecting and training new people). Then, these executives need to recognize the expected benefits of various retention practices.
Only then can leaders make informed decisions about strategic investments in human and social capital.

Selected bibliography

A number of articles have influenced our thinking about the importance of connecting employee retention strategies to business strategies:
• R. W. Beatty, M. A. Huselid, and C. E. Schneier. “New HR Metrics: Scoring on the Business Scorecard,” Organizational Dynamics, 2003, 32 (2), 107–121.
• Bradach. “Organizational Alignment: The 7-S Model,” Harvard Business Review, 1998.
• J. Pfeffer. “Producing Sustainable Competitive Advantage Through the Effective Management of People,” Academy of Management Executive, 1995 (9), 1–13.
• C. J. Collins, and K. D. Clark. “Strategic Human Resources Practices and Top Management Team Social Networks: An Examination of the Role of HR Practices in Creating Organizational Competitive Advantage,” Academy of Management Journal, 2003, 46, 740–752.
The theoretical development and empirical support for the Unfolding Model of turnover are captured in the following articles:
• T. Lee, and T. Mitchell. “An Alternative Approach: The Unfolding Model of Voluntary Employee Turnover,” Academy of Management Review, 1994, 19, 57–89.
• B. Holtom, T. Mitchell, T. Lee, and E. Inderrieden. “Shocks as Causes of Turnover: What They Are and How Organizations Can Manage Them,” Human Resource Management, 2005, 44(3), 337–352.
The development of job embeddedness theory is captured in the following articles:
• T. Mitchell, B. Holtom, T. Lee, C. Sablynski, and M. Erez. “Why People Stay: Using Job Embeddedness to Predict Voluntary Turnover,” Academy of Management Journal, 2001, 44, 1102–1121.
• T. Mitchell, B. Holtom, and T. Lee. “How to Keep Your Best Employees: The Development of an Effective Retention Policy,” Academy of Management Executive, 2001, 15(4), 96–108.
• B. Holtom, and E. Inderrieden. “Integrating the Unfolding Model and Job Embeddedness To Better Understand Voluntary Turnover,” Journal of Managerial Issues, in press.
• D.G. Allen. “Do Organizational Socialization Tactics Influence Newcomer Embeddedness and Turnover?” Journal of Management, 2006, 32, 237–257.
Executive Summary
Employee turnover is costly to organizations. Some of the costs are obvious (e.g., recruiting, selecting, and training expenses) and others are not so obvious (e.g., diminished customer service ability, lack of continuity on key projects, and loss of future leadership talent). Understanding the value inherent in attracting and keeping excellent employees is the first step toward investing systematically to build the human and social capital in an organization. The second step is to identify retention practices that align with the organization's strategy and culture. Through extensive research, we have developed a framework for creating this alignment. We call this theory job embeddedness. Across multiple industries, we have found that job embeddedness is a stronger predictor of important organizational outcomes, such as employee attendance, retention and performance, than the best-known and widely accepted psychological explanations (e.g., job satisfaction and organizational commitment). The third step is to implement the ideas. Throughout this article we discuss examples from the Fortune 100 Best Companies to Work For and many others to demonstrate how job embeddedness theory can be used to build human and social capital by increasing employee retention.

13.
This paper proposes a new method for combining forecasts based on complete subset regressions. For a given set of potential predictor variables we combine forecasts from all possible linear regression models that keep the number of predictors fixed. We explore how the choice of model complexity, as measured by the number of included predictor variables, can be used to trade off the bias and variance of the forecast errors, generating a setup akin to the efficient frontier known from modern portfolio theory. In an application to predictability of stock returns, we find that combinations of subset regressions can produce more accurate forecasts than conventional approaches based on equal-weighted forecasts (which fail to account for the dimensionality of the underlying models), combinations of univariate forecasts, or forecasts generated by methods such as bagging, ridge regression or Bayesian Model Averaging.
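The combination scheme described above can be sketched for the simplest case, subset size k = 1 (toy data and function names are ours; the paper covers general k):

```python
# Illustrative sketch of the complete subset regression idea for subset
# size k = 1: fit y on every single predictor (with intercept), forecast
# from each fitted model, and equal-weight the forecasts. (Toy data ours;
# the paper covers general k and compares against bagging, ridge, and BMA.)
def ols_1d(x, y):
    """Least-squares intercept and slope of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def complete_subset_forecast(predictors, y, x_new):
    """Equal-weighted combination of the forecasts from all size-1 models."""
    forecasts = []
    for series, value in zip(predictors, x_new):
        intercept, slope = ols_1d(series, y)
        forecasts.append(intercept + slope * value)
    return sum(forecasts) / len(forecasts)

y  = [1.0, 2.0, 3.0, 4.0]
x1 = [1.0, 2.0, 3.0, 4.0]            # y = x1
x2 = [4.0, 3.0, 2.0, 1.0]            # y = 5 - x2
yhat = complete_subset_forecast([x1, x2], y, x_new=[5.0, 0.0])
```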

14.
Dynamic stochastic general equilibrium (DSGE) models have recently become standard tools for policy analysis. Nevertheless, their forecasting properties have still barely been explored. In this article, we address this problem by examining the quality of forecasts of the key U.S. economic variables: the three-month Treasury bill yield, the GDP growth rate and GDP price index inflation, from a small-size DSGE model, trivariate vector autoregression (VAR) models and the Philadelphia Fed Survey of Professional Forecasters (SPF). The ex post forecast errors are evaluated on the basis of the data from the period 1994–2006. We apply the Philadelphia Fed “Real-Time Data Set for Macroeconomists” to ensure that the data used in estimating the DSGE and VAR models was comparable to the information available to the SPF. Overall, the results are mixed. When comparing the root mean squared errors for some forecast horizons, it appears that the DSGE model outperforms the other methods in forecasting the GDP growth rate. However, this advantage turned out to be statistically insignificant. Most of the SPF's forecasts of GDP price index inflation and the short-term interest rate are better than those from the DSGE and VAR models.

15.
The paper reviews the literature on maintenance management, integrates key dimensions of maintenance within a taxonomy of maintenance configurations, and explores the impact of differing configurations on contextual factors and operational performance. “Prevention”, “hard maintenance integration” and “soft maintenance integration” were identified as key maintenance variables. Data were collected from 253 Swedish manufacturing companies, and three distinct clusters were identified. “Proactive Maintainers” emphasized preventive maintenance policies. “IT Maintainers” relied on computerized and company-wide integrated information systems for maintenance. “Maintenance Laggers” emphasized all maintenance dimensions to a lesser extent than the others. The importance of maintenance prevention and integration differs between contexts. There were subtle performance differences across the identified configurations, but preventive and integrated maintenance were more important for companies seeking competitive process control and flexibility. No group placed great emphasis on all three maintenance dimensions, yet attaining truly high performance may require a rare mix of the three. This mix of variables could constitute a hypothesized “World Class Maintenance” group.

16.
We propose a measure of predictability based on the ratio of the expected loss of a short‐run forecast to the expected loss of a long‐run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several US macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several non‐parametric extensions of our approach. Copyright © 2001 John Wiley & Sons, Ltd.
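The loss-ratio measure described above can be sketched for the special case of an AR(1) process under squared-error loss (our illustrative case; the paper's measure allows arbitrary losses, information sets, and processes):

```python
# Illustrative sketch of the predictability ratio under squared-error loss
# for an AR(1) process y_t = phi * y_{t-1} + eps_t; the paper's measure is
# far more general (arbitrary losses, information sets, and processes).
def ar1_forecast_error_variance(phi, sigma2, h):
    """Variance of the h-step-ahead forecast error of a stationary AR(1)."""
    return sigma2 * sum(phi ** (2 * i) for i in range(h))

def predictability(phi, sigma2, short_h, long_h):
    """1 - E[loss at short horizon] / E[loss at long horizon] (MSE loss)."""
    return 1.0 - (ar1_forecast_error_variance(phi, sigma2, short_h)
                  / ar1_forecast_error_variance(phi, sigma2, long_h))

# A persistent series (phi = 0.9) is highly predictable one step ahead
# relative to a 40-step horizon:
pred = predictability(phi=0.9, sigma2=1.0, short_h=1, long_h=40)
```

Values near 1 indicate a series whose short-horizon forecasts are far more accurate than its long-horizon ones, i.e. a highly predictable series.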

17.
A directional slacks-based measure of technical inefficiency
Hirofumi, William L. 《Socio》, 2009, 43(4): 274–287
Radial measures of efficiency estimated using linear programming (LP) methods can be biased since slack in the constraints defining the technology suggests that at least one input can be reduced, or one output can be expanded, even though a firm is deemed to be “technically efficient.” In this paper, we propose a directional slacks-based measure of technical inefficiency to account for the potential of slack in technological constraints. When no such slacks exist, directional slacks-based inefficiency collapses to the directional technology distance function. Our proposed measure helps to generalize some of the existing slacks-based measures of inefficiency. We examine the financial services provided by Japanese cooperative Shinkin banks, and estimate their inefficiency during the period 2002–2005. This inefficiency declined slightly during the period. We thus propose that slack is an important source of inefficiency which is often not captured by the directional technology distance function.

18.
It is well known that when property rights to an asset are divided, individual right holders may not have adequate incentives to invest in proper maintenance. In this paper, we examine how mortgage laws affect the nature of the lender's claim to the house, and how that claim in turn affects the incentives of borrowers to invest in home maintenance. The specific law that we examine concerns the right of lenders to pursue a borrower's nonhousing wealth in the event of default if the value of the house is less than the mortgage balance. Most states allow lenders to collect such “deficiency judgments,” while others either prohibit them or make them difficult to obtain. The theoretical model developed in this paper predicts that borrowers will maintain at a higher rate when lenders are allowed to seek deficiency judgments. Intuitively, when borrowers' nonhousing wealth is at risk, they have an incentive to invest more in maintenance in order to reduce the likelihood that the value of the property will fall below the mortgage balance. We attempt to measure this effect using data on household maintenance obtained from the American Housing Survey along with information on differences in mortgage laws across states. We estimate a three-equation simultaneous system relating maintenance expenditures, house value, and mortgage rates. The results confirm that variation in mortgage laws affects homeowner maintenance in the manner predicted by the theory.

19.
E-Leadership and Virtual Teams
In this paper we have identified some key challenges for E-leaders of virtual teams. Among the most salient of these are the following:
• The difficulty of keeping tight and loose controls on intermediate progress toward goals
• Promoting close cooperation among teams and team members in order to integrate deliverables
• Encouraging and recognizing emergent leaders in virtual teams
• Establishing explicit processes for archiving important written documentation
• Establishing and maintaining norms and procedures early in a team’s formation and development
• Establishing proper boundaries between home and work
Virtual team environments magnify the differences between good and bad projects, organizations, teams, and leaders. The nature of such projects is that there is little tolerance for ineffective leadership. There are some specific issues and techniques for mitigating the negative effects of more dispersed employees, but these are merely extensions of good leadership—they cannot make up for the lack of it.

SELECTED BIBLIOGRAPHY

An excellent reference for research on teams is M. E. Shaw, R. M. McIntyre, and E. Salas, “Measuring and Managing for Team Performance: Emerging Principles from Complex Environments,” in R. A. Guzzo and E. Salas, eds., Team Effectiveness and Decision Making in Organizations (San Francisco: Jossey-Bass, 1995). For a fuller discussion of teleworking and performance-management issues in virtual teams, see W. F. Cascio, “Managing a Virtual Workplace,” Academy of Management Executive, 2000, 14(3), 81–90, and also C. Joinson, “Managing Virtual Teams,” HRMagazine, June 2002, 69–73. Several sources discuss the issue of trust in virtual teams: D. Coutu, “Trust in Virtual Teams,” Harvard Business Review, May–June 1998, 20–21; S. L. Jarvenpaa, K. Knoll, and D. E. Leidner, “Is Anybody Out There? Antecedents of Trust in Global Virtual Teams,” Journal of Management Information Systems, 1998, 14(4), 29–64. See also Knoll and Jarvenpaa, “Working Together in Global Virtual Teams,” in M. Igbaria and M. Tan, eds., The Virtual Workplace (Hershey, PA: Idea Group Publishing, 1998).

Estimates of the number of teleworkers vary. For examples, see Gartner Group, Report R-06-6639, November 18, 1998, and also Telework America survey, news release, October 23, 2001. We learned about CPP’s approach to managing virtual work arrangements through David Krantz, personal communication, August 20, 2002, Palo Alto, CA.

There are several excellent references on emergent leaders. For example, see G. Lumsden and D. Lumsden, Communicating in Groups and Teams: Sharing Leadership (Belmont, CA: Wadsworth, 1993); Lumsden and Lumsden, Groups: Theory and Experience, 4th ed. (Boston: Houghton, 1993); R. W. Napier and M. K. Gershenfeld, Groups: Theory and Experience, 4th ed. (Boston: Houghton, 1989); and M. E. Shaw, Group Dynamics: The Psychology of Small Group Behavior, 3rd ed. (New York: McGraw-Hill, 1981).

An excellent source for e-mail style is D. Angell and B. Heslop, The Elements of E-mail Style: Communicate Effectively via Electronic Mail (Reading, MA: Addison-Wesley Publishing Company, 1994). To read more on the growing demand for flexible work arrangements, see “The New World of Work: Flexibility is the Watchword,” Business Week, 10 January 2000, 36. For more on individualism and collectivism, see H. C. Triandis, “Cross-cultural Industrial and Organizational Psychology,” in H. C. Triandis, M. D. Dunnette, and L. M. Hough, eds., Handbook of Industrial and Organizational Psychology, 2nd ed., vol. 4 (Palo Alto, CA: Consulting Psychologists Press, 1994), 103–172.

Executive Summary
As the wired world brings us all closer together, at the same time as we are separated by time and distance, leadership in virtual teams becomes ever more important. Information technology makes it possible to build far-flung networks of organizational contributors, although unique leadership challenges accompany their formation and operation. This paper describes the growth of virtual teams, the various forms they assume, the kinds of information and support they need to function effectively, and the leadership challenges inherent in each form. We then provide workable, practical solutions to each of the leadership challenges identified.

20.
This paper tests the relation between intellectual collaboration and the quality of the intellectual output using academic papers published in prestigious finance journals during 1988–2005. We use the number of authors of a paper to measure the extent of intellectual collaboration and the number of citations that a paper receives (adjusted by the number of years since the paper's publication) as a measure of its intellectual value. Based on empirical tests, we find that papers with more authors are cited more often. This relation does not hold for purely theoretical papers. Coauthoring with a prolific author leads to higher quality papers, but coauthoring with colleagues at the same institution leads to neither higher nor lower quality papers. Papers with four authors are cited most often. Overall, when it comes to intellectual collaboration, our results counter the notion that “too many cooks spoil the broth.”
