Similar Literature
20 similar records found (search time: 0 ms)
1.
In data envelopment analysis (DEA), there are two principal methods for identifying and measuring congestion: those of Färe et al. [Färe R, Grosskopf S. When can slacks be used to identify congestion. An answer to W. W. Cooper, L. Seiford, J. Zhu. Socio-Economic Planning Sciences 2001;35:1–10] and Cooper et al. [Cooper WW, Deng H, Huang ZM, Li SX. A one-model approach to congestion in data envelopment analysis. Socio-Economic Planning Sciences 2002;36:231–8]. In the present paper, we focus on the latter work and propose a new method that requires considerably less computation. We then prove a theorem showing that our proposed methodology is equivalent to that of Cooper et al.
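DEA efficiency computations of this kind reduce to solving one linear program per decision-making unit. As a minimal, generic illustration (the standard input-oriented CCR efficiency model, not the authors' congestion model, with invented toy data), one can solve the LP with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector: [theta, lambda_1 .. lambda_n]; minimise theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]          # sum_j lambda_j x_ij <= theta * x_io
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T          # sum_j lambda_j y_rj >= y_ro
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

X = np.array([[2.0], [4.0], [3.0]])      # single input per DMU
Y = np.array([[2.0], [4.0], [1.5]])      # single output per DMU
print(round(ccr_efficiency(X, Y, 2), 3))  # → 0.5: DMU 2 is inefficient
```

An efficient unit (here DMU 0 or 1) scores 1; inefficient units score below 1.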

2.
3.
For reasons of methodological convenience, statistical models analysing judicial decisions tend to focus on the duration of custodial sentences. These sentences are, however, quite rare (7% of the total in England and Wales), which generates a serious problem of selection bias. Typical adjustments employed in the literature, such as Tobit models, rest on questionable assumptions and are incapable of discriminating between different types of non-custodial sentences (such as discharges, fines, community orders, or suspended sentences). Here we implement an original approach that models custodial and non-custodial sentence outcomes simultaneously, avoiding problems of selection bias while making the most of the information recorded for each of them. This is achieved by employing the Pina-Sánchez et al. (Br J Criminol 59:979–1001, 2019) scale of sentence severity as the outcome variable of a Bayesian regression model. A sample of 7242 theft offences sentenced in the Crown Court is used to further illustrate: (a) the pervasiveness of selection bias in studies restricted to custodial sentences, which leads us to question the external validity of previous studies limited to custodial sentence length; and (b) the inadequacy of Tobit models and similar methods used in the literature to adjust for such bias.
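The selection-bias problem described here can be illustrated with a simple simulation (a hypothetical sketch, not the paper's Bayesian model or data): regressing a continuous severity outcome on a covariate using only the custodial, i.e. highest-severity, cases attenuates the estimated effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
harm = rng.normal(size=n)                           # hypothetical case-severity covariate
severity = 2.0 + 1.5 * harm + rng.normal(size=n)    # latent sentence-severity scale
custodial = severity > 3.0                          # only the severest cases get custody

def ols_slope(x, y):
    """Slope coefficient from a one-regressor OLS fit."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

full = ols_slope(harm, severity)                            # all sentences
selected = ols_slope(harm[custodial], severity[custodial])  # custodial only
# `selected` is attenuated relative to `full`: the selection bias at issue.
```

Modelling the full severity scale, as the paper proposes, sidesteps this truncation entirely.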

4.
5.
This paper presents an interactive visualization tool for the qualitative exploration of multivariate data that may exhibit cyclic or periodic behavior. Glyphs are used to encode each multivariate data point, and linear, stacked, and spiral glyph layouts are employed to help convey both intra-cycle and inter-cycle relationships within the data. Users may interactively select glyph and layout types, modify cycle lengths and the number of cycles to display, and select the specific data dimensions to be included. We validate the usefulness of the system with case studies and describe our future plans for expanding the system's capabilities.
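A spiral layout of the kind described can be sketched by placing glyph centres on an Archimedean spiral, one cycle per turn, so that observations at the same within-cycle phase align radially across cycles (an illustrative layout assuming evenly spaced observations; the function name is ours, not the paper's):

```python
import math

def spiral_layout(n_points, cycle_length, turn_spacing=1.0):
    """Place n_points glyph centres on an Archimedean spiral: one full
    turn per cycle, so observations at the same within-cycle phase
    line up radially across cycles."""
    positions = []
    for i in range(n_points):
        turns = i / cycle_length          # completed + fractional cycles
        theta = 2 * math.pi * turns       # angle encodes within-cycle phase
        r = turn_spacing * turns          # radius encodes elapsed cycles
        positions.append((r * math.cos(theta), r * math.sin(theta)))
    return positions

pts = spiral_layout(24, cycle_length=12)
# pts[0] and pts[12] share phase 0: same angle, one turn apart.
```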

6.
Reform of higher vocational education requires that students obtain, upon graduation, both a diploma and the corresponding vocational qualification certificate. For the construction cost major, the corresponding qualification is the "Construction Cost Clerk" (建设工程造价员) certificate. Degree education and vocational qualification certification must proceed in parallel within higher vocational teaching, and the two are complementary. Based on an analysis of the new edition of the Construction Cost Clerk examination syllabus, and in light of actual teaching conditions, curriculum reform and training-base construction for the higher vocational construction cost major should be carried out in a targeted manner.

7.
A majority of manufacturers make use of some form of enterprise systems (ES), yet on average, the financial impact of ES adoption is essentially neutral. We propose that in an ES environment of easy information access, competitive success depends, in part, on the policies regulating enterprise information use. To explore this proposition, we examine the efficient use of different types of enterprise information in the realization of strategic performance. Efficient firms will devote fewer resources to information use to achieve the same strategic performance as less efficient firms.

8.
9.
It is well known that the standard Breusch and Pagan (1980) LM test for cross-equation correlation in a SUR model is not appropriate for testing cross-sectional dependence in panel data models when the number of cross-sectional units (n) is large and the number of time periods (T) is small. In fact, a scaled version of this LM test was proposed by Pesaran (2004) and its finite sample bias was corrected by Pesaran et al. (2008). This was done in the context of a heterogeneous panel data model. This paper derives the asymptotic bias of this scaled version of the LM test in the context of a fixed effects homogeneous panel data model. This asymptotic bias is found to be a constant related to n and T, which suggests a simple bias corrected LM test for the null hypothesis. Additionally, the paper carries out some Monte Carlo experiments to compare the finite sample properties of this proposed test with existing tests for cross-sectional dependence.
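Assuming the statistic in question is Pesaran's (2004) scaled LM, LM_s = sqrt(1/(n(n-1))) * sum_{i<j} (T * rho_ij^2 - 1), and that the constant bias referred to is n/(2(T-1)) (our reading of the fixed-effects result, stated here as an assumption), a minimal sketch from a (T, n) residual matrix is:

```python
import numpy as np

def scaled_lm(resid):
    """Pesaran (2004) scaled LM statistic from a (T, n) residual matrix,
    plus a bias-corrected version subtracting the constant n/(2*(T-1))
    (assumed form of the fixed-effects correction)."""
    T, n = resid.shape
    rho = np.corrcoef(resid, rowvar=False)   # n x n pairwise correlation matrix
    i, j = np.triu_indices(n, k=1)           # each pair (i, j), i < j, once
    lm = np.sqrt(1.0 / (n * (n - 1))) * np.sum(T * rho[i, j] ** 2 - 1.0)
    return lm, lm - n / (2.0 * (T - 1))

rng = np.random.default_rng(0)
stat, stat_bc = scaled_lm(rng.normal(size=(10, 50)))   # T = 10 periods, n = 50 units
# Under independence the uncorrected statistic is inflated by about n/(2(T-1)).
```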

10.
User-Generated Content (UGC) is becoming a powerful data source to support emergency management. Managers usually face two difficulties in practical emergency management. First, the topics of information required for emergency management change over time. Second, the value of the same microblog post changes across emergency phases. The contributions of this study lie in the following aspects. First, this paper develops a multiphase dynamic assessment model. Second, an approach for the dynamic evaluation of UGC is proposed. Third, this paper presents an effective quantification method to assess the dynamic value of social media data.

11.
Measuring the performance of Non-Profit Organizations (NPOs) is a complicated issue; data envelopment analysis (DEA) has been a popular quantitative tool in the literature. However, subjective opinions about NPOs can obscure their actual performance, and this problem is seldom considered. In this study, we use qualitative DEA as a tool to find the inputs and outputs that these NPOs emphasize. Most DEA models are built on quantitative data and have difficulty describing the qualitative performance of NPOs. This paper proposes a new perspective for computing the efficiency of a Decision Making Unit from qualitative data using the affinity set. The DEA model for qualitative data can be traced back to the work of Cook et al. in 1993. Our contribution avoids the identical efficiency scores produced by the model of Cook et al., and a combinatorial optimization technique is used to solve the new problem. Finally, we find that most NPOs would like to obtain more resources from outside but, interestingly, do not like to be officially monitored. Quantitative DEA should therefore be applied to NPOs with care.

12.
13.
Recommender systems have been extensively studied to present items, such as movies, music and books, that are likely to interest the user. Researchers have indicated that integrated medical information systems are becoming an essential part of modern healthcare systems. Such systems have evolved into integrated enterprise-wide systems; in particular, they can be considered a type of enterprise information system or ERP system addressing the needs of the healthcare sector. As part of such efforts, nursing care plan recommender systems can provide clinical decision support, nursing education, and clinical quality control, and serve as a complement to existing practice guidelines. We propose to use correlations among nursing diagnoses, outcomes and interventions to create a recommender system for constructing nursing care plans. In the current study, we used nursing diagnosis data to develop the methodology. Our system utilises a prefix-tree structure common in itemset mining to construct a ranked list of suggested care plan items based on previously entered items. Unlike common commercial systems, our system makes sequential recommendations based on user interaction, modifying the ranked list of suggested items at each step in care plan construction. We rank items based on traditional association-rule measures such as support and confidence, as well as a novel measure that anticipates which selections might improve the quality of future rankings. Since the multi-step nature of our recommendations presents problems for traditional evaluation measures, we also present a new evaluation method based on average ranking position and use it to test the effectiveness of different recommendation strategies.
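The support and confidence measures used for ranking are standard association-rule quantities; a minimal sketch on invented care-plan data (item names are hypothetical, not from the study):

```python
def rule_metrics(plans, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent
    over a list of item sets."""
    a = frozenset(antecedent)
    both = a | frozenset(consequent)
    n = len(plans)
    n_a = sum(1 for t in plans if a <= t)        # plans containing the antecedent
    n_both = sum(1 for t in plans if both <= t)  # plans containing both sides
    support = n_both / n
    confidence = n_both / n_a if n_a else 0.0
    return support, confidence

# Hypothetical care plans: items already charted for four patients.
plans = [frozenset(t) for t in (
    {"acute pain", "pain level", "analgesic administration"},
    {"acute pain", "pain level"},
    {"acute pain", "analgesic administration"},
    {"impaired mobility", "ambulation"},
)]
s, c = rule_metrics(plans, {"acute pain"}, {"pain level"})  # support 0.5, confidence 2/3
```

In the paper's setting the antecedent would be the items entered so far, and candidate next items would be ranked by such scores.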

14.
Individuals living in society are bound together by a social network and, in many social and economic situations, individuals learn by observing the behavior of others in their local environment. This process is called social learning. Learning in incomplete networks, where different individuals have different information, is especially challenging: because of the lack of common knowledge, individuals must draw inferences about the actions others have observed, as well as about their private information. This paper reports an experimental investigation of learning in three-person networks and uses the theoretical framework of Gale and Kariv (Games Econ Behav 45:329–346, 2003) to interpret the data generated by the experiments. The family of three-person networks includes several non-trivial architectures, each of which gives rise to its own distinctive learning patterns. To test the usefulness of the theory in interpreting the data, we adapt the Quantal Response Equilibrium (QRE) model of McKelvey and Palfrey (Games Econ Behav 10:6–38, 1995; Exp Econ 1:9–41, 1998). We find that the theory can account for the behavior observed in the laboratory in a variety of networks and informational settings. This provides important support for the use of QRE to interpret experimental data.
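The logit form of quantal response replaces exact best response with choice probabilities proportional to exp(lambda * expected payoff); a minimal sketch of that response function (payoff values invented):

```python
import math

def logit_response(payoffs, lam):
    """Logit quantal response: probabilities proportional to
    exp(lam * payoff). lam = 0 gives uniform randomness; as lam
    grows, choice approaches exact best response."""
    w = [math.exp(lam * u) for u in payoffs]
    z = sum(w)
    return [x / z for x in w]

print(logit_response([1.0, 2.0], lam=0.0))   # [0.5, 0.5]
p = logit_response([1.0, 2.0], lam=5.0)      # p[1] near 1: close to best response
```

In a QRE, every player's beliefs are consistent with everyone responding according to this noisy rule, which is what lets the model fit the deviations from Bayesian play observed in the lab.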

15.
16.
This study examines trends in the distribution of gross earnings in Hungary since 1988, using official household budget surveys and enterprise-based earnings surveys. We find significant growth in inequality since 1988, to levels comparable with those of Western Europe. There is little evidence of a serious discrepancy between the two data sources used.
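A standard summary of earnings inequality in comparisons of this kind is the Gini coefficient; a minimal sketch using the mean-rank formula (data invented):

```python
def gini(earnings):
    """Gini coefficient via the rank formula on sorted data:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n.
    0 = perfect equality, (n - 1) / n = maximal concentration."""
    x = sorted(earnings)
    n = len(x)
    cum = sum(i * xi for i, xi in enumerate(x, start=1))
    return (2.0 * cum) / (n * sum(x)) - (n + 1.0) / n

print(round(gini([1, 1, 1, 1]), 3))   # 0.0: perfect equality
print(round(gini([0, 0, 0, 4]), 3))   # 0.75: one earner takes everything
```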

17.
Online communities have become an important source of knowledge and new ideas. This paper considers the potential of crowdsourcing as a tool for data analysis to address the increasing problems companies face in dealing with "Big Data". By exposing a problem to a large number of participants proficient in different analytical techniques, crowd competitions can very quickly advance the technical frontier of what is possible with a given dataset. The empirical setting of the research is Kaggle, the world's leading online platform for data analytics, which operates as a knowledge broker between companies aiming to outsource predictive modelling competitions and a network of over 100,000 data scientists who compete to produce the best solutions. The paper follows an exploratory case study design and focuses on the efforts of Dunnhumby, the consumer insight company behind the success of the Tesco Clubcard, to find and leverage the enormous potential of the collective brain to predict shopper behaviour. By adopting a crowdsourcing approach to data analysis, Dunnhumby were able to extract information from their own data that was previously unavailable to them. Significantly, crowdsourcing enabled Dunnhumby to experiment with over 2000 modelling approaches to their data rather than relying on the traditional internal biases within their R&D units.

18.
We present a new approach for the economic analysis of countries, which we apply to the case of the Netherlands. Our study is based on a novel way to quantify exported products' complexity and countries' fitness which has recently been introduced in the literature. Adopting a framework in which products are clustered into sectors, we compare the different branches of the export of the Netherlands, taking into account the time evolution of their volumes, complexities and competitiveness over the years 1995–2010. The High Tech and Life Sciences sectors combine high-quality products with low competitiveness; the opposite holds for Horticulture and Energy. We analyze the Chemicals sector in detail, finding a declining global complexity that is mostly driven by a shift towards products of lower quality. A growth forecast is also provided. In light of our results we suggest a differentiation in policy between the country's self-defined industrial sectors.
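The fitness/complexity quantification referred to is presumably the iterative algorithm of Tacchella et al. (2012); a minimal sketch on an invented binary export matrix (an assumption about the method, not the paper's own code):

```python
import numpy as np

def fitness_complexity(M, n_iter=200):
    """Fitness-complexity iteration on a binary country-by-product
    matrix M: diversified countries gain fitness, while products also
    exported by low-fitness countries lose complexity."""
    F = np.ones(M.shape[0])               # country fitness
    Q = np.ones(M.shape[1])               # product complexity
    for _ in range(n_iter):
        F_new = M @ Q                     # fitness: sum of exported complexities
        Q_new = 1.0 / (M.T @ (1.0 / F))   # complexity: penalised by weak exporters
        F = F_new / F_new.mean()          # normalise to the mean at each step
        Q = Q_new / Q_new.mean()
    return F, Q

# Toy data: country 0 exports all three products, country 1 only product 0.
M = np.array([[1, 1, 1],
              [1, 0, 0]], dtype=float)
F, Q = fitness_complexity(M)
# F[0] > F[1]; Q[0] is lowest, since the weak country also exports product 0.
```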

19.
20.
In this paper we propose a new technology able to map the underlying connection scheme among several psychological variables in a single individual. Nine patients with chronic heart failure completed, at regular intervals, two electronic questionnaires evaluating depression (STAI, short form) and anxiety (STAY-6). Individual semantic maps were developed with Auto Contractive Map, a new data-mining tool based on artificial neural networks, acting on the small data set formed by the questionnaire items administered serially over time. The clinical psychologist involved in the clinical evaluation of the cases was asked to score the consistency between the information emerging from the graph depicting the structure of the main relationships among items and the clinical picture resulting from the psychological interview. All cases received overall judgments of good consistency, suggesting that the mathematical architecture of the system is able to capture, in the dynamics of item-value variations over time, the underlying construct of the patient's psychological status. This technology is promising for remote monitoring of patients' psychological condition in different settings, with the possibility of implementing personalized psychological interventions.
