Similar Documents
20 similar documents found (search time: 78 ms)
1.
Inspired by the success of geographical clustering in California, many governments pursue cluster policy in the hope of building the next Silicon Valley. In this paper we critically assess the relationship between geographical clustering and public policy. With the help of a range of theoretical insights and case study examples we show that cluster policy is in fact a risky venture, especially when it tries to copy the success of regional ‘best practices’. We therefore advise policy makers to move away from the Silicon Valley model and to start modestly from a place-specific approach of ‘Regional Realism’.

2.
The paper is a preliminary research report and presents a method for generating new records using an evolutionary algorithm (close to, but different from, a genetic algorithm). This method, called the Pseudo-Inverse Function (in short, P-I Function), was designed and implemented at the Semeion Research Centre (Rome). The P-I Function generates new (virtual) data from a small set of observed data. It can be of aid when budget constraints limit the number of interviewees, when a population shows some sociologically interesting trait but its small size seriously affects the reliability of estimates, or in secondary analysis of small samples. The field of application is a research design with one or more dependent variables and a set of independent variables. New cases are estimated by maximizing a fitness function, and the procedure produces as many ‘virtual’ cases as needed, which reproduce the statistical traits of the original population. The algorithm used by the P-I Function is the Genetic Doping Algorithm (GenD), designed and implemented by the Semeion Research Centre; among its features is an innovative crossover procedure, which tends to select individuals with average fitness values rather than those with the best values at each ‘generation’. A particularly thorough research design was set up: (1) the observed sample is split in half to obtain a training and a testing set, which are analysed by means of a back-propagation neural network; (2) testing is performed to find out how good the parameter estimates are; (3) a 10% sample is randomly extracted from the training set and used as a reduced training set; (4) on this narrow basis, GenD calculates the pseudo-inverse of the estimated parameter matrix; (5) the ‘virtual’ data are tested against the testing set (which has never been used for training).
The algorithm was tested on particularly difficult ground, since the data set used as a basis for generating ‘virtual’ cases contains only 44 respondents, randomly sampled from a broader data set taken from the General Social Survey 2002. The major result is that networks trained on the ‘virtual’ resample show a model fit as good as that of the observed data, though ‘virtual’ and observed data differ in some features. It can be seen that GenD ‘refills’ the joint distribution of the independent variables, conditioned on the dependent one. This paper is the result of close collaboration among all authors. Cinzia Meraviglia wrote §1, 3, 4, 6, 7 and 8; Giulia Massini wrote §5; Daria Croce performed some elaborations with neural networks and linear regression; Massimo Buscema wrote §2.
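As an illustration of the selection twist attributed to GenD above, a minimal evolutionary loop can be sketched. This is not Semeion's implementation: the bit-string genomes, one-point crossover, mutation rate, and fitness function are all illustrative assumptions; only the choice of parents with near-average fitness mirrors the abstract's description.

```python
import random

def evolve(fitness, pop_size=30, genome_len=8, generations=40, seed=3):
    """Evolutionary loop whose selection step prefers individuals with
    fitness close to the population average, rather than the fittest."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        avg = sum(scores) / len(scores)
        # rank individuals by closeness of their fitness to the average
        ranked = sorted(range(pop_size), key=lambda i: abs(scores[i] - avg))
        parents = [pop[i] for i in ranked[: pop_size // 2]]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional mutation
                j = rng.randrange(genome_len)
                child[j] = 1 - child[j]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve(sum)  # toy fitness: number of 1-bits in the genome
```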

3.
Although the Big Five Questionnaire for Children (BFQ-C) (C. Barbaranelli et al., Manuale del BFQ-C. Big Five Questionnaire Children, O.S. Organizzazioni, Firenze, 1998) is an ordinal scale, its dimensionality has often been studied using factor analysis with Pearson correlations. In contrast, the present study takes this ordinal metric into account and examines the dimensionality of the scale using factor analysis based on a matrix of polychoric correlations. The sample comprised 852 subjects (40.90% boys and 59.10% girls). As in previous studies, the results obtained through exploratory factor analysis revealed a five-factor structure (extraversion, agreeableness, conscientiousness, emotional instability and openness). However, the results of the confirmatory factor analysis were consistent with both a four- and a five-factor structure, the former showing a slightly better fit and an adequate theoretical interpretation. These data confirm the need for further research into whether the factor ‘Openness’ should be maintained as an independent factor (five-factor model), or whether it would be best to omit it and distribute its items among the factors ‘Extraversion’ and ‘Conscientiousness’ (four-factor model).

4.
Interpretative qualitative social science has attempted to distinguish itself from quantitative social science by rejecting traditional or ‘received’ notions of generalization. Traditional concepts of scientific generalization, it is claimed, are based on a misguided objectivism about the mechanisms operating in the social world, and particularly about the ability of statements to capture such mechanisms in any abstract sense. Instead, interpretativists propose new versions of the generalizability concept, e.g. ‘transferability’, which relies on the researcher's context-dependent judgement of ‘fit’ between two or more case instances. This paper argues that the transferability concept, as outlined and argued by interpretativist methodologists, is thoroughly coextensive with notions of generalizability formalized for natural science and naturalistic social science by philosophers and methodologists of science. Therefore, it may be concluded that the interpretativist claim to a break with received scientific traditions is premature, at least with regard to the issue of generalization.

5.
In the German General Survey 2000 (ALLBUS), the so-called ‘Sealed Envelope Technique’ (SET) was utilized to obtain data on individuals’ self-admitted delinquency. The focus of this article is to discover why, in particular, respondents refused to fill in this confidential questionnaire in spite of the guaranteed anonymity. From the theoretical perspective of subjective expected utility, the assumption is that respondents are interested in maximizing benefits and avoiding social costs in the interview situation. Consequently, the responses provided are optimal realizations of the respondents’ interests. Furthermore, the respondents’ intellectual capacity to understand the questions, the SET applied, the interviewer characteristics, and aspects of the interview situation were presumably responsible for refusals on sensitive questions. The ALLBUS 2000 data confirm these hypotheses. The selectivity of self-reported delinquency concerning fare avoidance and tax evasion also resulted in biased model estimators of the determinants of anticipated future delinquency. Mail surveys are one suggested way of improving data quality on self-admitted acts of delinquency. However, before firm conclusions can be drawn, more empirical data are needed on the processes and mechanisms involved in a respondent’s refusal to answer questions on delinquency. * An empirical assessment of the effectiveness of the ‘Sealed Envelope Technique’ for self-admitted delinquency using the German General Social Survey 2000 data.

6.
The concept of knowledge graphs is introduced as a method to represent the state of the art in a specific scientific discipline. Next, the text-analysis part of the construction of such graphs is considered; here the ‘translation’ from text to graph takes place. The method used here is compared with methods used in other approaches in which texts are analysed.

7.
8.
Based on ‘endogenous’ growth theory, the paper examines the effect of trade liberalization on long-run income per capita and economic growth in Turkey. Although the presumption must be that free trade has a beneficial effect on long-run growth, counter-examples can also be found. This controversy increases the importance of empirical work in this area. Using the most recent data, we employ multivariate cointegration analysis to test the long-run relationship among the variables at hand. In a multivariate context, determinants such as increasing returns to scale and investment in human and physical capital are also included in both the theoretical and empirical work. Our evidence on causality between long-run growth and a number of indicators of trade liberalization confirms the predictions of the ‘new growth theory’. However, the overall effect of possible breaks and/or policy changes and unsustainability in the 1990s looks contradictory and deserves further investigation.

9.
This paper covers the main findings of doctoral research concerned with extending aspects of dilemma theory. In professional practice, the Trompenaars Hampden-Turner Dilemma Reconciliation Process™ is a vehicle that delivers dilemma theory in application. It informs a manager or leader on how to explore the dilemmas they face, how to reconcile the tensions that result, and how to structure the action steps for implementing the reconciled solutions. This vehicle forms the professional practice of the author, who seeks to bring more rigor to consulting practice and thereby also contribute to theory development in the domain. The critical review of dilemma theory reveals that previous authors are inconsistent, and at times invalid, in their use of the terms ‘dilemma theory’, ‘dilemma methodology’, ‘dilemma process’, ‘dilemma reconciliation’, etc. An attempt is therefore made to resolve these inconsistencies by considering whether ‘dilemmaism’ at the meta-level might be positioned as a new paradigm of inquiry for (management) research, one that embodies ontological, epistemological, and methodological premises framing an approach to the resolution of real-world business problems in (multi)disciplinary, (multi)functional and (multi)cultural business environments. This research offers contributions to knowledge, professional practice and theory development through the exploration of the SPID model as a way to make the elicitation of dilemmas more rigorous and structured, and in the broader context of exploring ‘dilemmaism’ as a new paradigm of inquiry.

10.
This paper examines global recessions as a cascade phenomenon: how recessions arising within one or more countries might percolate across a network of connected economies. An agent-based model is set up in which the agents are Western economies. A country has a probability of entering recession in any given year and one of emerging from it the next. In addition, the agents have a threshold propensity, which varies across time, to import a recession from the agents most closely connected to them. The agents are connected on a network, and an agent’s neighbours at any time are either in (state 1) or out (state 0) of recession. If the weighted sum of neighbours in recession exceeds the threshold, the agent also goes into recession. Annual real GDP growth for 17 Western countries over 1871–2006 is used as the data set. The model is able to replicate three key features of the statistical distribution of recessions: the distribution of the number of countries in recession in any given year, the duration of recessions within the individual countries, and the distribution of ‘wait time’ between recessions, i.e. the number of years between them. The network structure is important for the interacting agents to replicate these stylised facts. The country-specific probabilities of entering and emerging from recession by themselves give results which are by no means as well matched to the actual data. We are grateful to an anonymous referee for some extremely helpful comments.
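The cascade mechanism described above can be sketched as a toy threshold model. This is a heavily simplified illustration, not the authors' calibrated model: the fully connected equal-weight network, the fixed entry/exit probabilities, and the constant threshold are all assumptions made for the sketch.

```python
import random

def simulate_recessions(n_countries=17, years=136, p_enter=0.2,
                        p_exit=0.7, threshold=0.5, seed=0):
    """Toy cascade: a country may enter recession spontaneously, may exit
    in the following year, or is pushed in when the share of its network
    neighbours in recession exceeds its threshold."""
    rng = random.Random(seed)
    state = [0] * n_countries  # 1 = in recession, 0 = out
    history = []
    for _ in range(years):
        new_state = []
        for i in range(n_countries):
            # equal-weight, fully connected network (an assumption)
            pressure = (sum(state) - state[i]) / (n_countries - 1)
            if state[i] == 1:
                new_state.append(0 if rng.random() < p_exit else 1)
            elif rng.random() < p_enter or pressure > threshold:
                new_state.append(1)
            else:
                new_state.append(0)
        state = new_state
        history.append(sum(state))  # countries in recession this year
    return history

history = simulate_recessions()  # 136 years, matching 1871-2006
```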

11.
Postulating a linear regression of a variable of interest on an auxiliary variable, with values of the latter known for all units of a survey population, we consider appropriate ways of choosing a sample and estimating the regression parameters. Recalling Thomsen’s (1978) results on the non-existence of ‘design-cum-model’ based minimum variance unbiased estimators of regression coefficients, we apply Brewer’s (1979) ‘asymptotic’ analysis to derive ‘asymptotic-design-cum-model’ based optimal estimators, assuming large population and sample sizes. A variance estimation procedure is also proposed.
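For readers unfamiliar with the setting, the classical regression estimator of a population mean with an auxiliary variable known for all units can be sketched as follows; the paper's optimal-design and variance-estimation results are not reproduced here.

```python
def regression_estimate(y_sample, x_sample, x_pop_mean):
    """Regression estimator of the population mean of y using an auxiliary
    variable x whose population mean is known (the classical set-up)."""
    n = len(y_sample)
    ybar = sum(y_sample) / n
    xbar = sum(x_sample) / n
    sxx = sum((x - xbar) ** 2 for x in x_sample)
    sxy = sum((x - xbar) * (y - ybar)
              for x, y in zip(x_sample, y_sample))
    b = sxy / sxx  # estimated regression slope
    # adjust the sample mean of y by the slope times the gap between the
    # known population mean of x and the sample mean of x
    return ybar + b * (x_pop_mean - xbar)

# With an exact linear relation y = 2x, the estimator recovers 2 * x_pop_mean
estimate = regression_estimate([2.0, 4.0, 6.0], [1.0, 2.0, 3.0], 5.0)
```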

12.
This article suggests one way to systematically code textual data for research. The approach utilizes computer content analysis to examine patterns of emphasized ideas in text as well as the social context or underlying perspective reflected in the text. A conceptual dictionary is used to organize word meanings. An extensive profile of word meanings is used to characterize and discriminate social contexts. Social contexts are analyzed in relation to four reference dimensions (traditional, practical, emotional and analytic) which are used in the social science literature. The approach is illustrated with five widely varying texts, analyzed with selected comparative data. This approach has been useful in many social science investigations to systematically score open-ended textual information. Scores can be incorporated into quantitative analysis with other data, used as a guide to qualitative studies, and used to help integrate the strengths of quantitative and qualitative approaches to research.
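A minimal sketch of the dictionary-based scoring idea, assuming toy word lists for the four reference dimensions named above; the actual conceptual dictionary is far more extensive than these illustrative sets.

```python
def score_text(text, dictionary):
    """Count how often a text's words fall into each conceptual category
    and return the share of words per category."""
    categories = {name: 0 for name in dictionary}
    words = text.lower().split()
    for word in words:
        for name, vocab in dictionary.items():
            if word in vocab:
                categories[name] += 1
    total = len(words)
    return {name: count / total for name, count in categories.items()}

# Illustrative word lists, not the paper's dictionary
dictionary = {
    "traditional": {"duty", "custom", "faith"},
    "practical": {"work", "tool", "plan"},
    "emotional": {"love", "fear", "joy"},
    "analytic": {"data", "model", "test"},
}
profile = score_text("we plan the work and test the model with data",
                     dictionary)
```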

13.
We propose a generic model for multiple-choice situations in the presence of herding and compare it with recent empirical results from a Web-based music market experiment. The model predicts a phase transition between a weak imitation phase and a strong imitation, ‘fashion’ phase, where choices are driven by peer pressure and the ranking of individual preferences is strongly distorted at the aggregate level. The model can be calibrated to reproduce the main experimental results of Salganik et al. (Science, 311, 854–856 (2006)); we show in particular that the value of the social influence parameter can be estimated from the data. In one of the experimental situations, this value is found to be close to the critical value of the model.
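A hedged sketch of a choice-with-herding mechanism of the general kind described above. This is not the authors' model: the linear mixing of intrinsic quality and current download share through a single influence parameter is an illustrative assumption.

```python
import random

def simulate_market(n_agents=1000, n_items=10, influence=0.5, seed=1):
    """Each agent picks one item with probability proportional to a mix of
    intrinsic quality and current popularity (the herding term)."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_items)]  # intrinsic preferences
    downloads = [0] * n_items
    for _ in range(n_agents):
        total = sum(downloads) or 1
        weights = [(1 - influence) * q + influence * d / total
                   for q, d in zip(quality, downloads)]
        r = rng.random() * sum(weights)  # roulette-wheel choice
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if acc >= r:
                downloads[i] += 1
                break
    return quality, downloads

quality, downloads = simulate_market()
```

Raising `influence` strengthens the rich-get-richer feedback, which is the regime where aggregate rankings can detach from intrinsic quality.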

14.
In 2005, Wegmans Food Markets Inc., the family-owned supermarket chain, was awarded the number one spot on the Fortune ‘100 Best Companies to Work For’ list. Wegmans’ recognition illustrates an exemplary case of strategic human resource management embedded in an overall culture of social responsibility, amidst a highly competitive and low-margin industry. We detail Wegmans’ human resource practices and its overall stakeholder orientation, arguing that the treatment of employees as strategic assets constitutes an effective approach to social responsibility. In other words, strategic human resource management can help organizations reconcile the often-cited conflict between profits and principles. We begin with an overview of the contemporary supermarket industry, provide a brief history of Wegmans, and showcase the supermarket chain’s human resource practices. In closing, we discuss Wegmans’ stakeholder orientation and comment on the divide between strategic human resource management and social responsibility research.

15.
We observe statistical properties of blogs that are expected to reflect human social interaction. First, we introduce a basic normalization preprocess that enables us to evaluate the genuine word frequency in blogs, independent of external factors such as spam blogs, server breakdowns, growth in the population of bloggers, and periodic weekly behaviors. After this process, we can confirm that low-frequency words clearly follow an independent Poisson process, as theoretically expected. Second, we focus on each blogger’s basic behaviors and find that bloggers fall into two kinds of behavioral types. Further, Zipf’s law on word frequency is confirmed to hold universally, independent of individual activity types.
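The Poisson property claimed for low-frequency words can be illustrated by simulation: for an independent Poisson process, the variance-to-mean (Fano) ratio of daily counts is close to 1. The rate and horizon below are arbitrary choices for the sketch, not values from the paper.

```python
import math
import random

def daily_counts(rate, days, seed=2):
    """Simulate daily appearance counts of a low-frequency word as an
    independent Poisson process (Knuth's multiplication algorithm)."""
    rng = random.Random(seed)
    threshold = math.exp(-rate)
    counts = []
    for _ in range(days):
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        counts.append(k)
    return counts

counts = daily_counts(rate=3.0, days=10000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
fano = var / mean  # close to 1 for a Poisson process
```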

16.
Within the class of weighted averages of ordered measurements on criteria of ‘equal importance’, we discuss aggregators that help assess to what extent ‘most’ criteria are satisfied, or a ‘good performance across the board’ is attained. All the aggregators discussed are compromises between the simple mean and the very demanding minimum score. The weights of the aggregators are described in terms of the convex polytopes induced by the characteristics of the aggregators, such as concavity. We take the barycentres of the polytopes as representative weights.
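The family of aggregators described, ordered weighted averages interpolating between the mean and the minimum, can be sketched directly; the compromise weights below are illustrative, not the barycentre weights derived in the paper.

```python
def owa(scores, weights):
    """Ordered weighted average: sort the scores in descending order,
    then take the weighted sum with position-based weights."""
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

scores = [0.9, 0.6, 0.8]
mean_score = owa(scores, [1 / 3, 1 / 3, 1 / 3])  # equal weights: the mean
min_score = owa(scores, [0, 0, 1])               # all weight on the worst
# more weight on the weaker ordered positions: a 'performance across the
# board' compromise between mean and minimum (illustrative weights)
compromise = owa(scores, [0.1, 0.3, 0.6])
```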

17.
Summary  A natural conjugate prior distribution for the parameters involved in the noncentral chi-square leads to many known distributions. The applications of the distributions thus obtained are briefly pointed out in evaluating the ‘kill’ probability in the analysis of weapon systems effectiveness. The ‘kill’ probabilities, or the expected coverage, are obtained under a gamma prior distribution and compared with those obtained by McNolty. This paper was read at a symposium on Mathematical Sciences held under the auspices of Delhi University, Delhi, in January 1966.

18.
The aim of this paper is to derive methodology for designing ‘time to event’ experiments. In comparison to estimation, the design aspects of ‘time to event’ experiments have received relatively little attention. We show that gains in the efficiency of parameter estimators and in the use of experimental material can be made using optimal design theory. The types of models considered include classical failure data and accelerated testing situations, and frailty models, each involving covariates which influence the outcome. The objective is to construct an optimal design based on the values of the covariates and the associated model, or indeed a candidate set of models. We consider D-optimality and create compound optimality criteria to derive optimal designs for multi-objective situations which, for example, focus on the number of failures as well as on the estimation of parameters. The approach is motivated and demonstrated using common failure/survival models, for example the Weibull distribution, product assessment and frailty models.
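As a minimal illustration of the D-optimality criterion mentioned above (for a simple linear model rather than the paper's failure-time models), the determinant of the information matrix X'X can be compared across candidate designs:

```python
def d_criterion(design_points):
    """D-optimality criterion for the two-parameter model y = b0 + b1 * x:
    the determinant of the 2x2 information matrix X'X (larger is better)."""
    n = len(design_points)
    s1 = sum(design_points)
    s2 = sum(x * x for x in design_points)
    # X'X = [[n, s1], [s1, s2]], so det = n * s2 - s1^2
    return n * s2 - s1 * s1

# Placing runs at the extremes of [-1, 1] beats clustering near the centre
spread = d_criterion([-1.0, -1.0, 1.0, 1.0])
clustered = d_criterion([-0.2, 0.0, 0.1, 0.2])
```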

19.
In this study it is argued that the perceived distribution of opinions among others is important for opinion research. Three different ways of measuring the perception of opinion distributions in survey research are compared: (a) by means of a question about what most people think about an issue, (b) by means of a question about how many people are perceived to agree with an issue statement, and (c) by means of ‘line-production-boxes’, a special version of magnitude estimation. The results indicate that ‘line-production-boxes’ can improve data quality, but they also have some drawbacks which will have to be dealt with. ‘Line-production-boxes’ give a wealth of information about individual differences in the forms of perceived opinion distributions. Although the normal distribution is used often, many other distribution forms are also used. The method of ‘line-production-boxes’ is compared with the method of estimating percentage points. Although high correlations suggest good concurrent validity, some systematic differences do exist. New research directions are suggested.

20.
Most qualitative researchers do not recommend generalization from qualitative studies, as this research is not based on random samples and statistical controls. The objective of this study is to explore the degree to which in-service teachers understand the controversial aspects of generalization in both qualitative and quantitative educational research, and how this understanding can help with problems teachers face in the classroom. The study is based on 83 participants who had registered for a 10-week course on ‘Methodology of Investigation in Education’ as part of their Master’s degree program. The course is based on 11 readings drawing on a philosophy-of-science perspective (positivism, constructivism, Popper, Kuhn, Lakatos). Course activities included written reports, classroom discussions based on participants’ presentations, and written exams. Based on the results obtained, it is concluded that: (1) almost 91% of the teachers agreed that external generalization to a different social context is feasible; (2) almost 63% of the participants used a fairly inconsistent approach, that is, in a theoretical context they agreed that qualitative research cannot be generalized, yet when asked about the experience of two particular teachers, they agreed that generalization was possible; (3) almost 28% of the participants used a consistent approach. Some of the reasons provided by the participants as to why generalization was feasible are discussed. An analogy is drawn with Piaget’s methodology: it was not based on random samples or statistical treatments, and still his oeuvre has been generalized (criticisms notwithstanding) in both the psychology and education literatures.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号