13 similar documents found.
1.
Design and Implementation of a Data-Exchange Middleware for Multiple Data Sources
Heterogeneous data exchange in networked environments is a core enabling technology for data integration, e-commerce applications, and Web service composition systems. Based on the "collaborator" pattern, and taking relational databases and XML documents as representative heterogeneous data sources, this article presents a data exchange model for Web service environments, describes the algorithmic ideas behind the model's four basic operations, and shows how Web service composition can be used to implement the exchange. An example validates the effectiveness of the proposed data exchange method.
2.
《Enterprise Information Systems》2013,7(3):353-363
Product lifecycle modelling defines and represents product lifecycle data and maintains data interdependencies. To build a complete, reusable and highly consistent product lifecycle information model, the product lifecycle is divided into five stages: requirement analysis, conceptual design, engineering design, manufacturing, and service and support. Accordingly, five stage product models (requirement analysis model, conceptual design model, engineering design model, manufacturing model, and service and support model) are discussed. To integrate all information of a product lifecycle and support a networked manufacturing mode, the key elements of product lifecycle modelling are discussed and a framework for product lifecycle modelling is proposed. Further, the relationships and evolution of the product models at different stages are described. Finally, a Web-based integration framework is proposed to support interoperability of distributed product data sources.
3.
A general framework for frontier estimation with panel data
The main objective of the paper is to present a general framework for estimating production frontier models with panel data. A sample of firms i = 1, ..., N is observed over several time periods t = 1, ..., T. In this framework, nonparametric stochastic models for the frontier are analyzed. The usual parametric formulations of the literature are viewed as particular cases, and the convergence of the obtained estimators in this general framework is investigated. Special attention is devoted to the role of N and of T in the speeds of convergence of the obtained estimators. First, a very general model is investigated, in which almost no restriction is imposed on the structure of the model or of the inefficiencies. This model is estimable from a nonparametric point of view but needs large values of T and of N to obtain reliable estimates of the individual production functions and of the frontier function. Then more specific nonparametric firm-effect models are presented. In these cases, only NT must be large to estimate the common production function; but again both large N and large T are needed for estimating individual efficiencies and the frontier. The methods are illustrated through a numerical example with real data.
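The class of firm-effect frontier models the abstract refers to is commonly written as follows (the notation here is illustrative, not taken from the paper):

```latex
y_{it} = f(x_{it}) - u_i + v_{it}, \qquad i = 1,\dots,N,\quad t = 1,\dots,T,
```

where $f$ is the frontier (production) function, $u_i \ge 0$ is a firm-specific inefficiency term and $v_{it}$ is symmetric noise; the nonparametric setting leaves $f$ unspecified rather than imposing a parametric form such as Cobb-Douglas.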
4.
《Enterprise Information Systems》2013,7(4):424-469
A significant challenge that faces IT management is that of aligning the IT infrastructure of an enterprise with its business goals and practices, also called business-IT alignment. A particular business-IT alignment approach, the foundation for execution approach, was well-accepted by practitioners due to a novel construct, called the operating model (OM). The OM supports business-IT alignment by directing the coherent and consistent design of business and IT components. Even though the OM is a popular construct, our previous research detected the need to enhance the OM, since the OM does not specify methods to identify opportunities for data sharing and process reuse in an enterprise. In this article, we address one of the identified deficiencies in the OM. We present a process reuse identification framework (PRIF) that could be used to enhance the OM in identifying process reuse opportunities in an enterprise. We applied design research to develop PRIF as an artefact, where the development process of PRIF was facilitated by means of the business-IT alignment model (BIAM). We demonstrate the use of the PRIF as well as report on the results of evaluating PRIF in terms of its usefulness and ease-of-use, using experimentation and a questionnaire.
5.
《Enterprise Information Systems》2013,7(4):523-542
Process dynamic modelling for service businesses is a key technique for service-oriented information systems and service business management, and the workflow model of business processes is the core of a service system. Service business workflow simulation is the prevalent approach to analysing service business processes dynamically. The generic method for service business workflow simulation is based on discrete-event queueing theory, which lacks flexibility and scalability. In this paper, we propose a service-workflow-oriented framework for the process simulation of service businesses that uses multi-agent cooperation to address these issues. The social rationality of agents is introduced into the proposed framework. Adopting rationality as a social factor in decision-making strategies, flexible scheduling of activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.
6.
This paper presents an interactive visualization tool for the qualitative exploration of multivariate data that may exhibit cyclic or periodic behavior. Glyphs are used to encode each multivariate data point, and linear, stacked, and spiral glyph layouts are employed to help convey both intra-cycle and inter-cycle relationships within the data. Users may interactively select glyph and layout types, modify cycle lengths and the number of cycles to display, and select the specific data dimensions to be included. We validate the usefulness of the system with case studies and describe our future plans for expanding the system's capabilities.
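The spiral layout the abstract mentions can be sketched as follows. This is a generic Archimedean-spiral placement, not the paper's actual implementation: one revolution per cycle, so consecutive values sit next to each other along the spiral (intra-cycle) while values one cycle apart line up radially (inter-cycle).

```python
import math

def spiral_layout(n_points, cycle_length, spacing=1.0):
    """Place n_points on an Archimedean spiral, one revolution per cycle.

    Points with the same phase (index mod cycle_length) share the same
    angular direction, so values one cycle apart are radial neighbours,
    while consecutive values are neighbours along the spiral.
    """
    positions = []
    for i in range(n_points):
        angle = 2 * math.pi * i / cycle_length     # phase within the cycle
        radius = spacing * (1 + i / cycle_length)  # grows one step per revolution
        positions.append((radius * math.cos(angle), radius * math.sin(angle)))
    return positions
```

Glyphs drawn at these positions inherit the layout's periodic structure; the linear and stacked layouts would replace only the position computation.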
7.
《Enterprise Information Systems》2013,7(3):272-302
Data sharing in today's information society poses a threat to individual privacy and organisational confidentiality. k-anonymity is a widely adopted model to prevent the owner of a record from being re-identified. By generalising and/or suppressing certain portions of the released dataset, it guarantees that no record can be uniquely distinguished from at least k-1 other records. A key requirement for the k-anonymity problem is to minimise the information loss resulting from data modifications. This article proposes a top-down approach to solve this problem. It first treats each record as a vertex and the similarity between two records as an edge weight to construct a complete weighted graph. Then, an edge-cutting algorithm divides the complete graph into multiple trees/components. Components with more than 2k-1 vertices are subsequently split to guarantee that each resulting component has between k and 2k-1 vertices. Finally, a generalisation operation is applied to the vertices in each component (i.e. equivalence class) to ensure all the records inside have identical quasi-identifier values. We prove that the proposed approach has polynomial running time and a theoretical performance guarantee of O(k). Empirical experiments show that our approach yields substantial improvements over the baseline heuristic algorithms, as well as over the bottom-up approach with the same approximation bound O(k). Compared to the baseline bottom-up O(log k)-approximation algorithm, when the required k is smaller than 50, the adopted top-down strategy achieves similar information loss while spending much less computing time. This makes our approach a strong choice for the k-anonymity problem when both data utility and runtime matter, especially when k is smaller than 50 and the record set is large enough that runtime is a concern.
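The top-down split-then-generalise pipeline the abstract describes can be sketched as follows. The splitting heuristic below (cut the largest group at its widest dimension) is a simplified stand-in for the paper's edge-cutting algorithm, but it preserves the key invariants: every equivalence class ends up with between k and 2k-1 members, and generalisation makes all quasi-identifiers inside a class identical.

```python
def top_down_kanon(records, k):
    """Toy top-down partition: repeatedly split the largest group at its
    widest dimension until every group has between k and 2k-1 members.
    (Stand-in for the paper's graph edge-cutting algorithm.)"""
    groups = [list(range(len(records)))]
    done = []
    while groups:
        g = groups.pop()
        if len(g) <= 2 * k - 1:
            done.append(g)
            continue
        # split on the dimension with the widest value spread
        dim = max(range(len(records[0])),
                  key=lambda d: max(records[i][d] for i in g)
                              - min(records[i][d] for i in g))
        g.sort(key=lambda i: records[i][dim])
        mid = max(k, min(len(g) // 2, len(g) - k))  # keep both halves >= k
        groups += [g[:mid], g[mid:]]
    return done

def generalise(classes, records):
    """Replace each numeric quasi-identifier with its [min, max] range
    per equivalence class, so all records in a class become identical."""
    out = {}
    for cls in classes:
        ranges = [(min(records[i][d] for i in cls),
                   max(records[i][d] for i in cls))
                  for d in range(len(records[0]))]
        for i in cls:
            out[i] = ranges
    return out
```

The information loss the paper minimises would be measured on the widths of these ranges; the real algorithm chooses the cuts to keep them small.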
8.
9.
10.
Dengue, also known as break-bone fever, is a tropical disease transmitted by mosquitoes. If the similarity between dengue-infected users can be identified, it can help government health agencies manage an outbreak more effectively. To find similarities between dengue cases, users' personal and health information are the two fundamental requirements; identifying similar symptoms, causes, effects, predictions and treatment procedures is important. In this paper, an effective framework is proposed that finds similar patients suffering from dengue using a keyword-aware domain thesaurus and case-based reasoning. The paper uses an ontology-dependent domain thesaurus technique to extract relevant keywords and then builds cases with the help of case-based reasoning. Similar cases can be shared with users, nearby hospitals and health organisations to manage the problem more adequately. Two million case bases were generated to test the proposed similarity method. Experimental evaluation of the proposed framework showed high accuracy and a low error rate in finding similar dengue cases compared to the UPCC and IPCC algorithms. The framework developed in this paper targets dengue but can easily be extended to other domains.
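The keyword-plus-retrieval idea can be sketched as follows. The synonym dictionary here is a toy stand-in for the ontology-based domain thesaurus, and Jaccard similarity stands in for whatever case-similarity measure the framework actually uses; the `retrieve` step mirrors the retrieve phase of case-based reasoning.

```python
def normalise(keywords, thesaurus):
    """Map raw keywords onto canonical terms via a synonym dictionary
    (a stand-in for the ontology-based domain thesaurus)."""
    return {thesaurus.get(k.lower(), k.lower()) for k in keywords}

def similarity(case_a, case_b, thesaurus):
    """Jaccard similarity between the normalised keyword sets of two cases."""
    a, b = normalise(case_a, thesaurus), normalise(case_b, thesaurus)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(query, case_base, thesaurus):
    """CBR retrieve step: return the stored case most similar to the query."""
    return max(case_base, key=lambda c: similarity(query, c, thesaurus))
```

In the full framework the retrieved cases, not just one best match, would be shared with nearby hospitals and health organisations.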
11.
SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres
《Enterprise Information Systems》2013,7(7):743-767
Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture for cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. In addition, we formulate a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting the performance requirements of different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is effective in improving overall performance and reducing resource energy cost.
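The core computation behind SLA-driven tier sizing can be illustrated with a plain M/M/m queue (the paper's hybrid queueing model is more elaborate; this is a minimal sketch under textbook assumptions): given an arrival rate, a per-VM service rate and an SLA response-time target, find the smallest VM count that meets the target.

```python
import math

def erlang_c(m, offered_load):
    """Probability that an arriving request must wait in an M/M/m queue
    (Erlang C), with offered_load = lambda / mu."""
    s = sum(offered_load**k / math.factorial(k) for k in range(m))
    last = offered_load**m / (math.factorial(m) * (1 - offered_load / m))
    return last / (s + last)

def mean_response_time(m, lam, mu):
    """Mean response time of an M/M/m tier: mean wait plus mean service."""
    pw = erlang_c(m, lam / mu)
    return pw / (m * mu - lam) + 1.0 / mu

def vms_for_sla(lam, mu, target):
    """Smallest VM count whose mean response time meets the SLA target."""
    m = max(1, math.ceil(lam / mu))  # stability requires m * mu > lam
    while m * mu <= lam or mean_response_time(m, lam, mu) > target:
        m += 1
    return m
```

The paper's optimisation problem would run a search like this per tier, trading the SLA constraint off against VM (energy) cost to maximise provider profit.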
12.
Alphonse Aklamanu Shlomo Y. Tarba 《International Journal of Human Resource Management》2016,27(22):2790-2822
The literature on human resource management (HRM) indicates that HRM plays an important role in merger and acquisition (M&A) integration success, but pays little attention to the mechanisms for knowledge sharing in post-M&A integration. Limited work has been carried out to provide understanding on how social capital and HRM practices influence intra-organizational knowledge sharing in M&A integration. This paper primarily focuses on the phenomenon of social capital and HRM practices – one of the primary means by which knowledge sharing can occur within firms. The main aim of this paper is to provide an alternative framework that introduces the literature on HRM and social capital to discuss how HRM practices and the various dimensions of social capital may enhance knowledge sharing in post-M&A integration. Drawing on the literature on social capital and HRM, we offer an alternative view on the issue of knowledge sharing in M&A integration by explaining how specific HRM practices that have an impact on employees’ knowledge, skills and abilities for participating in knowledge sharing activities may depend on relational, cognitive and structural social capital. We isolate a number of HRM practices and social capital variables that may enhance knowledge sharing in post-M&A integration, and develop a research model and propositions for future empirical investigation.
13.
This article examines the impact of fixed effects production functions vis-à-vis stochastic production frontiers on technical efficiency measures. An unbalanced panel consisting of 96 Vermont dairy farmers for the 1971–1984 period was used in the analysis. The models examined incorporated both time-variant and time-invariant technical efficiency. The major source of variation in efficiency levels across models stemmed from the assumption made concerning the distribution of the one-sided term in the stochastic frontiers. In general, the fixed effects technique was found superior to the stochastic production frontier methodology. Despite the fact that the results of various statistical tests revealed the superiority of some specifications over others, the overall conclusion of the study is that the efficiency analysis was fairly consistent throughout all the models considered.
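For the fixed-effects route to time-invariant technical efficiency, a standard device (notation mine, not the article's) is to estimate firm intercepts and measure inefficiency relative to the best-performing firm:

```latex
y_{it} = \alpha_i + \beta^{\top} x_{it} + v_{it}, \qquad
\hat{u}_i = \max_{j}\hat{\alpha}_j - \hat{\alpha}_i \;\ge\; 0,
```

so no distributional assumption on the one-sided term is needed; the stochastic-frontier alternative instead assumes a distribution (e.g. half-normal) for $u_i$, which is exactly the assumption the article identifies as the major source of variation across models.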