Similar Literature
20 similar documents found.
1.
In this paper, an approach for reorganizing Web sites based on user access patterns is proposed. Our goal is to build adaptive Web sites by evolving the site structure to facilitate user access. The approach consists of three steps: preprocessing, page classification, and site reorganization. In preprocessing, the pages on a Web site are processed to create an internal representation of the site, and page access information is extracted from the Web server log. In page classification, the pages are classified into two categories, index pages and content pages, based on this access information. In site reorganization, the Web site is then examined to find better ways to organize and arrange its pages, using an algorithm we have developed for this purpose. Our experiments on a large real data set show that the approach is efficient and practical for adaptive Web sites.
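The abstract names the page-classification step but not its decision rule, so the following is only a minimal sketch of one plausible access-based heuristic, not the authors' algorithm: it assumes, hypothetically, that index pages show short median dwell times and many outgoing links, and the function name, thresholds and input format are all invented for illustration.

```python
# Hypothetical heuristic: pages with many outlinks and short median dwell
# times are treated as "index" pages; everything else is "content".
from statistics import median

def classify_pages(outlinks, dwell_times, dwell_cutoff=30.0, link_cutoff=10):
    """outlinks: {url: outgoing-link count}; dwell_times: {url: [seconds per view]}."""
    labels = {}
    for url, n_links in outlinks.items():
        dwell = median(dwell_times.get(url, [0.0]))
        labels[url] = ("index" if dwell < dwell_cutoff and n_links >= link_cutoff
                       else "content")
    return labels

print(classify_pages({"/": 40, "/news/article1": 3},
                     {"/": [5, 8], "/news/article1": [120, 95]}))
# -> {'/': 'index', '/news/article1': 'content'}
```

In a real site-reorganization pipeline both cutoffs would have to be tuned against the server log rather than fixed a priori.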

2.
Information theory, while claiming universality, ignores civilisational and spiritual perspectives on knowledge. Moreover, the information society heralded by many as the victory of humanity over darkness is merely capitalism in disguise, now commodifying selves as well. This essay argues for a more communicative approach wherein futures can be created through authentic global conversations: a gaia of civilisations. Current trends, however, do not point in that direction; instead, we are moving towards temporal and cultural impoverishment. Is the Web then the iron cage, or can a global ohana (family, civil society) be created through cybertechnologies? Answering these and other questions is possible only when we move to layers of analysis outside conventional understandings of information and the information era, and to a paradigm where communication and culture are central.

3.
Business-to-Business (B2B) interoperations are an important part of today's global economy. Business process standards are developed to provide a common understanding of the information shared between trading partners. These standards, however, mainly capture the syntax of the transactions, not their semantics. This paper proposes the use of ontologies as the basis for standards development and presents an ontology for the ebXML Business Process Specification Schema (ebBP), with the aim of supporting the capture and sharing of the semantics embedded within B2B processes as well as enabling knowledge deduction and reasoning over the shared knowledge. The paper utilises the Ontology-based Standards Development methodology (OntoStanD) as a methodological approach for designing ontological models of standards. This research demonstrates how Semantic Web technologies can be utilised as a basis for standards development and representation in order to improve standards-based interoperability between trading partners.
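As a hedged illustration of the general idea (encoding process semantics as an ontology rather than plain XML syntax), the sketch below builds a few RDF triples with rdflib. The namespace, class and property names are invented for the example; they are not taken from the actual ebBP ontology or from OntoStanD.

```python
# Invented mini-ontology for a B2B transaction; NOT the real ebBP ontology.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/ebbp-demo#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Declare a (made-up) BusinessTransaction class and one instance of it.
g.add((EX.BusinessTransaction, RDF.type, RDFS.Class))
g.add((EX.PlaceOrder, RDF.type, EX.BusinessTransaction))
g.add((EX.PlaceOrder, RDFS.label, Literal("Place Order")))
g.add((EX.PlaceOrder, EX.initiatingRole, Literal("Buyer")))

print(g.serialize(format="turtle"))
```

Once process knowledge is in triple form, a reasoner or SPARQL queries can derive facts that a purely syntactic schema cannot express, which is the interoperability gain the paper argues for.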

4.
In this era of global challenges in energy policy, the siting of facilities connected to the development of energy systems matters more than ever. At the same time, the spread of these facilities has often been controversial in surrounding communities. This article advances the debate on this phenomenon by focusing on an aspect of siting controversies that has become a game changer in recent years yet has received remarkably little attention: the role of Web 2.0 in siting conflicts. To explore the impact of Web 2.0, the paper uses a case study approach, examining the influence of Internet access in two siting conflicts associated with shale gas prospecting in Poland between 2012 and 2014. The possibilities that Web 2.0 offers to residents and other local actors in siting conflicts – access to knowledge, the ability to reframe the local debate using international resources, and the mobilization of a network of support by sharing their version of the story – influence the dynamics of risk communication during siting controversies.

5.
Tuomo Uotila. Futures, 2007, 39(9): 1117-1130
A central subcategory of futures research is technology foresight. There is a concern that today's technology foresight processes do not serve technology-political decision-making and companies' strategy processes well enough. The regional level, and the inclusion of a wide variety of actors and organizations, also needs to be emphasized. There is a danger that the results of foresight processes are not absorbed into regional strategy-making, leading to a "black hole of interpretation and implementation of foresight knowledge". Knowledge in particular, but also data and information, are crucial concepts in foresight processes, and an important issue is how to transform foresight information into future-oriented innovation knowledge. Concrete tools and institutional settings are needed to enhance data, information and knowledge quality in foresight processes and strategy work. This article investigates the limitations of established foresight processes and planning approaches, the limitations in the practical utilization of their results, and the quality of data, information and knowledge as a concrete tool and systematic response to those limitations. The article is partly based on empirical results from a technology foresight survey undertaken in Finland in 2005. The research responds to societal and academic interest by combining the fields of (i) futures research and (ii) data, information and knowledge quality. Future-oriented considerations are not routine tasks, which makes it especially challenging and important to ensure that these processes benefit from data, information and knowledge of good quality.

6.
Implications of Web assurance services on e-commerce
The ongoing rapid growth in the popularity of the Internet is having a revolutionary impact on the way companies do business. Doing business online has become a necessity, not an option. However, some consumers are not completely comfortable using the Internet for transacting business because of concerns about the security of their transactions. In these situations, consumer trust and confidence can be enhanced by a Web assurance service such as AICPA Trust Services. Building on prior studies, this study provides comprehensive information on current reporting requirements and the differences among Web assurance services, together with the results of a recent consumer survey eliciting perceptions of these services. The theoretical foundation of the study is the Assurance Gaps Model [Burke, K. G., Kovar, S. E., & Prenshaw, P. J. (2004). Unraveling the Expectations Gap: An Assurance Gaps Model and illustrative application. Advances in Accounting Behavioral Research, 7, 169–193]. E-business consumers (users of Web assurance services) can be dichotomized into older and younger consumers, who have different expectations based on information asymmetries. Findings indicate that consumers value Web assurance services, but younger consumers place greater value on these services than older consumers.

7.
This paper introduces the Web server log file and assesses its potential as a research instrument for measuring and interpreting the use of corporate reporting information. Measuring Investor Relations output, which includes annual financial reports but covers a wider range of corporate reporting and market-informing activity, has proven difficult in the past due to the lack of effective research methodologies for accessing such activity. This paper highlights the growth in the provision of online Investor Relations information and details how its use can be measured using activity logs recorded as a Web server fulfils user requests for information over the Internet. The paper analyses the limitations of this methodology for measuring the use of Investor Relations output and illustrates its possible application by drawing on data from a UK FTSE 100 company. Finally, the paper concludes that this methodology has significant potential for measuring the use of online Investor Relations information, and can therefore make valuable contributions to corporate reporting research and policy making in this area.
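As a rough sketch of the kind of measurement the paper describes, the snippet below counts successful requests for investor-relations pages in a server log. It assumes the Apache Common Log Format and a hypothetical `/investors/` URL prefix; a real study would also need to filter crawlers and sessionize visits.

```python
# Count successful GET requests for IR pages in a Common Log Format file.
import re
from collections import Counter

LOG_RE = re.compile(r'(\S+) \S+ \S+ \[(.*?)\] "(\S+) (\S+) \S+" (\d{3}) (\S+)')

def count_ir_requests(lines, prefix="/investors/"):  # prefix is an assumption
    hits = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip malformed lines
        method, path, status = m.group(3), m.group(4), m.group(5)
        if method == "GET" and status == "200" and path.startswith(prefix):
            hits[path] += 1
    return hits

sample = ['1.2.3.4 - - [10/Oct/2000:13:55:36 +0000] '
          '"GET /investors/annual-report.pdf HTTP/1.0" 200 2326']
print(count_ir_requests(sample))  # Counter({'/investors/annual-report.pdf': 1})
```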

8.
Using the information visualization tool CiteSpace, this study analyses the state of knowledge transfer research in China and abroad. Taking the bibliographic records on knowledge transfer indexed in ISI Web of Knowledge and the China National Knowledge Infrastructure (CNKI) as the data source, it examines in detail the temporal distribution and disciplinary distribution of knowledge transfer research papers, their main research topics, and the related funded projects in China, and reaches the following conclusions: (1) the temporal distribution of knowledge transfer publications roughly follows an S-shaped curve, with 2009 and 2010 marking the peak of the field; (2) the research is dispersed across disciplines, but Business & Economics, Computer Science, Engineering, Operations Research & Management Science, and Information Science & Library Science are its core disciplines; (3) funded projects on knowledge transfer are characterized by multidisciplinary crossover and an applied orientation; (4) research mainly concentrates on models of knowledge transfer and its influencing factors, though many gaps remain.

9.
Extensible business reporting language (XBRL) is an XML-based method for financial reporting. XBRL was developed to provide users with an efficient and effective means of preparing and exchanging financial information over the Internet. However, like other unprotected data coded in XML, XBRL (document) files (henceforth "documents") are vulnerable to threats against their integrity. Anyone can easily create and manipulate an XBRL document without authorization. In addition, business and financial information in XBRL can be misinterpreted, or used without the organization's consent or knowledge. Extensible assurance reporting language (XARL) was developed by Boritz and No (2003) to enable assurance providers to report on the integrity of XBRL documents distributed over the Internet. Providing assurance on XBRL documents using XARL could help users and companies reduce the uncertainty about the integrity of those documents and provide users with trustworthy information that they could place warranted reliance upon. A limitation of the initial conception of XARL was its tight linkage with the XBRL document and the comparatively primitive approach to codifying the XARL taxonomy. In this paper, we have reconceptualized the idea of XARL as a stand-alone service for providing assurance on potentially any XML-based information being shared over the Internet. While our illustrative application in this paper continues to be XBRL-coded financial information, the code that underlies this version of XARL is a significant revision of our earlier implementation, is compatible with the latest version of XBRL, and moves XARL into the Web services arena.
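The abstract does not reproduce the XARL mechanics, so the following is only a generic sketch of the underlying integrity idea: an assurance provider publishes a keyed digest over a document, and a consumer verifies that the bytes have not changed. This is plain HMAC hashing, not the XARL taxonomy or its Web service interface; the key and document are placeholders.

```python
# Generic integrity check: sign a document's bytes, verify before relying on it.
import hashlib
import hmac

SECRET = b"assurance-provider-key"  # hypothetical provider key

def sign_document(xbrl_bytes: bytes) -> str:
    return hmac.new(SECRET, xbrl_bytes, hashlib.sha256).hexdigest()

def verify_document(xbrl_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_document(xbrl_bytes), signature)

doc = b"<xbrl>...instance document...</xbrl>"
sig = sign_document(doc)
print(verify_document(doc, sig))         # True
print(verify_document(doc + b"x", sig))  # False: tampering is detected
```

A production scheme would use public-key signatures (e.g. XML Signature) so that verification does not require sharing the signing key.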

10.
Stock index tracking requires building a portfolio of stocks (a replica) whose behavior is as close as possible to that of a given stock index. Typically, far fewer stocks should appear in the replica than in the index, and there should be no low-frequency or integrated (persistent) components in the tracking error. The latter property is not satisfied by many commonly used methods for index tracking, which are based on the in-sample minimization of a loss function but do not take into account the dynamic properties of the index components. Moreover, most existing methods do not take into account the known structure of the index weight system. In this paper we represent the index components with a dynamic factor model, in which the price of each stock in the index is driven by a set of common and idiosyncratic factors. Factors can be either integrated or stationary. We develop a procedure that, in a first step, builds a replica that is driven by the same persistent factors as the index. This procedure is grounded in recent results which suggest the application of principal component analysis for factor estimation even for integrated processes. In a second step, it is also possible to refine the replica so that it minimizes a specific loss function, as in the traditional approach. In both steps the replica weights depend on the existing information on the index weight system. An extended set of Monte Carlo simulations and an application to the most widely used index in the European stock market, the EuroStoxx50 index, provide substantial support for our approach.
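A toy sketch of the two-step logic on synthetic data follows: extract a common factor from the price panel by principal components, pick the stocks most exposed to it, then fit replica weights by least squares. The data-generating process, replica size `k`, and the equal-weighted index are all invented, and the paper's treatment of the index weight system is not reproduced here.

```python
# Toy two-step index replica: PCA factor selection, then least-squares weights.
import numpy as np

rng = np.random.default_rng(0)
T, N, k = 500, 50, 5                      # periods, stocks, replica size
common = np.cumsum(rng.normal(size=(T, 1)), axis=0)  # one integrated factor
loadings = rng.normal(1.0, 0.3, size=(1, N))
noise = np.cumsum(rng.normal(scale=0.3, size=(T, N)), axis=0)
prices = common @ loadings + noise
index = prices.mean(axis=1)               # equal-weighted toy index

# Step 1: first principal component of the demeaned panel estimates the factor.
X = prices - prices.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
exposure = np.abs(Vt[0])                  # each stock's weight in PC1
replica_ids = np.argsort(exposure)[-k:]   # the k stocks most driven by it

# Step 2: least-squares weights so the replica tracks the index in-sample.
w, *_ = np.linalg.lstsq(prices[:, replica_ids], index, rcond=None)
tracking_error = index - prices[:, replica_ids] @ w
print("in-sample RMSE:", np.sqrt((tracking_error ** 2).mean()))
```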

11.
As an increasing number of Web sites, such as e-businesses, come to consist of an increasing number of pages, users find it more difficult to reach their target pages rapidly. Ill-structured Web site design also prevents users from accessing the target pages rapidly. In this paper, we describe two complementary approaches to Web usage mining as a key solution to these issues. First, we describe an adaptable recommendation system called the L-R system, which constructs user models by classifying the Web access logs and extracting access patterns based on the transition probabilities of page accesses, and recommends relevant pages to users based on both the user models and the Web structures. We have evaluated the prototype system and observed positive effects. Second, we describe another approach to constructing user models, which clusters Web access logs based on access patterns. These user models also help to discover unexpected access paths corresponding to ill-formed Web site design.
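The abstract mentions transition probabilities of page accesses; the sketch below shows the simplest version of that idea, a first-order Markov model estimated from sessionized logs. It is a generic illustration, not the actual L-R system, and it ignores the Web-structure component the paper combines it with.

```python
# First-order Markov recommender: estimate page-to-page transition
# probabilities from sessions, then recommend the likeliest next pages.
from collections import Counter, defaultdict

def fit_transitions(sessions):
    counts = defaultdict(Counter)
    for pages in sessions:
        for a, b in zip(pages, pages[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def recommend(model, current_page, top_n=3):
    nxt = model.get(current_page, {})
    return sorted(nxt, key=nxt.get, reverse=True)[:top_n]

sessions = [["/", "/products", "/cart"],
            ["/", "/products", "/faq"],
            ["/", "/faq"]]
model = fit_transitions(sessions)
print(recommend(model, "/"))  # ['/products', '/faq']
```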

12.
Information professionals performing investigative analysis related to business activity must routinely associate data from a diverse range of Web-based general-interest business and financial information sources. XBRL has become an integral part of the financial data landscape. At the same time, Open Data initiatives have contributed relevant financial, economic, and business data to the pool of publicly available information on the Web, but the use of XBRL in combination with Open Data remains at an early stage of realisation. In this paper we argue that Linked Data technology, created for Web-scale information integration, can accommodate XBRL data and make it easier to combine with open datasets. This can provide the foundations for a global data ecosystem of interlinked and interoperable financial and business information, with the potential to leverage XBRL beyond its current regulatory and disclosure role. We outline the uses of Linked Data technologies to facilitate XBRL consumption in conjunction with non-XBRL Open Data, report on current activities, and highlight the remaining information consolidation challenges faced by both XBRL and Web technologies.
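To make the Linked Data argument concrete, here is a hedged sketch that expresses one XBRL-style fact as RDF and links the reporting entity to a DBpedia resource. The predicate names and the entity URI are invented for illustration; actual work in this space would reuse a published XBRL-to-RDF vocabulary rather than ad hoc terms.

```python
# One financial fact as RDF, linked to Open Data via owl:sameAs.
from rdflib import Graph, Namespace, Literal, XSD
from rdflib.namespace import OWL

EX = Namespace("http://example.org/facts#")    # hypothetical vocabulary
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
g.bind("ex", EX)
g.add((EX.fact1, EX.concept, Literal("Revenue")))
g.add((EX.fact1, EX.value, Literal("1000000", datatype=XSD.decimal)))
g.add((EX.fact1, EX.entity, EX.AcmeCorp))
# The link that opens the fact up to the wider Web of data:
g.add((EX.AcmeCorp, OWL.sameAs, DBR.Acme_Corporation))

print(g.serialize(format="turtle"))
```

Once facts and entities share URIs with open datasets, a single SPARQL query can join regulatory filings with, say, macroeconomic Open Data, which is the consolidation the paper is after.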

13.
In this paper, we describe how an integrated Web-based application, code-named FOCI (Flexible Organizer for Competitive Intelligence), can help the knowledge worker in gathering, organizing, tracking and disseminating competitive intelligence (CI). It combines novel user-configurable clustering, trend analysis and visualization techniques to manage information gathered from the Web. FOCI allows its users to define and personalize the organization of the information clusters into portfolios according to their needs and preferences. The personalized portfolios created are saved and can subsequently be tracked and shared with other users. The paper runs through an example to show how the use of a predefined domain template, coupled with personalization, can greatly enhance the organization and tracking of CI gathered from the Web.
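As a hedged sketch of the clustering step such a CI organizer performs (not FOCI's own technique, which the abstract does not specify), the snippet below groups gathered text snippets with TF-IDF vectors and k-means, where the cluster count stands in for a user-configurable portfolio layout.

```python
# Cluster gathered CI snippets: TF-IDF features + k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "competitor launches new cloud product",
    "competitor cloud pricing announced",
    "quarterly earnings beat analyst forecasts",
    "earnings guidance raised for next quarter",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print(label, doc)  # cloud stories vs. earnings stories
```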

14.
Sam Cole. Futures, 1997, 29(4-5)
Geographers deal with global and local space and their interrelationships, and thus bring new insights, perspectives, and methods to global questions. This is appealing to futurists, since the principle of 'think globally, act locally' has been an inspiration for many years. In this paper I explore how old and new approaches in geography, as well as new information technologies such as the World Wide Web (WWW) and Geographic Information Systems (GIS), might contribute to global modeling. I also briefly review the history of global economy models to draw lessons for future attempts to construct global models, not least how prevailing paradigms and institutional expediency determine the intellectual effort and its impact. I then describe some of the new directions being taken by global modelers, quantitative geographers and regional scientists in the 1990s, the possibilities and challenges for the next few years, and their contribution to the knowledge-building process and its context.

15.
Although the accounting profession has embraced a competency-based approach in the education and training of students, some educators struggle to adapt the delivery and assessment of their accounting programmes to bring them in line with these outcomes. The challenge for all educators is to seek ways to marry the curriculum, the design and delivery of the syllabus, and assessment in such a way as to maximize students' learning in relation to priority goals. The aim of this paper is to discuss how the IFAC curriculum on the general knowledge of IT (IEPS 2) could be analysed using an alternative approach based on critical learning outcomes, to develop a syllabus that would enable educators to deliver and assess it in line with the learning outcomes and competency requirements. The newly developed syllabus should direct educators to adopt a holistic approach in the delivery and assessment of the IT course. This approach should ensure that students understand how information technology can support them as accountants in producing information in the format required by users.

16.
Expanding use of Web 2.0 technologies has generated complex information dynamics that are propelling organizations in unexpected directions, redrawing boundaries and shifting relationships. Using research on user-generated content, we examine online rating and ranking mechanisms and analyze how their performance reconfigures relations of accountability. Our specific interest is in the use of so-called "social media" such as TripAdvisor, where participant reviews are used to rank the popularity of services provided by the travel sector. Although ranking mechanisms are not new, they become "power-charged", to use Donna Haraway's term, when enacted through Web 2.0 technologies. As such, they perform a substantial redistribution of accountability. We draw on data from an on-going field study of small businesses in a remote geographical area for whom TripAdvisor has changed 'the rules of the game', and we explore the moral and strategic implications of this transformation.

17.
This article describes the PROMOTE® approach to defining and implementing a Service-Based Enterprise Knowledge Management System (E-KMS), developed during the EC-funded project PROMOTE (IST-1999-11658). The aim is to define a modelling language that is used to analyse, document and implement an E-KMS on the basis of so-called Knowledge Management Processes (KMPs). KMPs define the knowledge interaction between knowledge workers in a process-oriented manner and consist of activities that are supported by knowledge management key actions (KAs), such as searching, categorising or storing information. These key actions are in turn supported by Knowledge Management Services (KM-Services). The PROMOTE® prototype is briefly discussed to illustrate the KMP models and the service-based E-KMS.
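The abstract implies a containment hierarchy (a KMP consists of activities, each supported by key actions, each mapped to a KM-Service). The following is a speculative data-structure reading of that terminology, not anything taken from the PROMOTE project's own metamodel; all names are illustrative.

```python
# Speculative reading of the KMP hierarchy described in the abstract.
from dataclasses import dataclass, field

@dataclass
class KeyAction:
    name: str        # e.g. "search", "categorise", "store"
    km_service: str  # the KM-Service assumed to implement the action

@dataclass
class Activity:
    name: str
    key_actions: list = field(default_factory=list)

@dataclass
class KnowledgeManagementProcess:
    name: str
    activities: list = field(default_factory=list)

kmp = KnowledgeManagementProcess("Answer customer query", [
    Activity("Find prior cases", [KeyAction("search", "FullTextSearchService")]),
    Activity("File the answer", [KeyAction("store", "DocumentStoreService")]),
])
print(kmp.name, "->", [a.name for a in kmp.activities])
```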

18.
Jussi T. Koski. Futures, 2001, 33(6): 657
In this article the problem of information overflow and its consequences, at both the individual and the organisational level, is discussed from a multidisciplinary and speculative perspective. It is argued that information overflow leads to the inflation of information and causes stress and fatigue, which is why infoglut impairs knowledge productivity in organisations. It is also argued that the prevention and cure of infoglut is partly personal and partly a matter of developing organisational structures and cultures. Within the overall framework of information glut and knowledge productivity, the following themes are also discussed: productive laziness as a precondition of creativity; the intelligent organisation as a context of knowledge productivity; trust and the externalisation of tacit knowledge; the interconnections between creativity and knowledge productivity; and enhancing knowledge productivity through intelligent management.

19.
Most major corporations in the U.S. (and a growing number of companies around the world) report some level of financial information on their Web sites. However, it is not clear that stakeholders are fully satisfied with this Web-based data. The time and effort devoted to the mechanics of Web retrieval are actually increasing because of the difficulty of finding pages and specific data within the enormity of the public Web (over 1 billion pages) or of many corporate intranets. One way to deal with this vast information source would be to automate the Web search mechanics by developing and using intelligent software agents. However, developing these agents in the current Web environment is very problematic. Three factors are preconditions for effective utilization of the Web. First, appropriate metadata representation of financial reporting information on the Web is required to improve the accuracy of searches (the resource discovery problem). Second, accounting data points within Web pages should be reliably parsable (the attribute recognition problem). Third, standard mechanisms are required that will encourage or require corporations to report in a consistent fashion. The reality of the Web is that it falls far short of a reliable communication medium for accounting and financial information on all three of these factors. The eXtensible Markup Language (XML) provides a method to tag financial information, greatly improving the automation of information location and retrieval and offering technical solutions to the resource discovery and attribute recognition problems. However, if every company were free to develop its own labels for its XML tags, searching for financial information would be only marginally improved. The recent development, by a consortium led by the American Institute of CPAs (AICPA), of the so-called "eXtensible Business Reporting Language" (XBRL) is an initiative to develop an XML-based, Web-based business reporting specification. Widespread adoption of XBRL would mean that both humans and intelligent software agents could operate on financial information disseminated on the Web with a high degree of accuracy and reliability. XBRL provides rich research opportunities, including new taxonomies, database accounting, financial statement assurance, intelligent agents, human/computer interfaces, the standard development process, adoption incentives, global adoption, and formal ontologies.
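To illustrate why shared tags help software agents (the abstract's attribute recognition problem), the fragment below tags one revenue figure and retrieves it by element name. The element and namespace are simplified stand-ins, not a real XBRL taxonomy; real instance documents carry context and unit definitions that this sketch collapses into attributes.

```python
# A tagged fact can be located reliably, unlike a number scraped from HTML.
import xml.etree.ElementTree as ET

doc = """
<xbrl xmlns:gaap="http://example.org/us-gaap-demo">
  <gaap:Revenues contextRef="FY2023" unitRef="USD">1000000</gaap:Revenues>
</xbrl>
"""
root = ET.fromstring(doc)
ns = {"gaap": "http://example.org/us-gaap-demo"}
for fact in root.findall("gaap:Revenues", ns):
    print(fact.get("contextRef"), fact.get("unitRef"), fact.text)
# -> FY2023 USD 1000000
```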

20.
To our knowledge, this paper is the first study of the effect of information arrival on the lead–lag relationship amongst related spot instruments. Based on a large data set of ultra-high-frequency transaction prices, time-stamped to the millisecond, for the S&P500 index and its two most liquid tracking ETFs, we find that their lead–lag relationship is affected by the rate of information arrival, proxied by the unexpected trading volume of these instruments. Specifically, when information arrives, the leadership of the leading instrument may strengthen or weaken depending on whether the leading or the lagging instrument responds to that information. An increase in the unexpected volume of the leader strengthens its leadership, whereas an increase in the unexpected volume of the lagger weakens it. Beyond the strength of leadership, an increase in unexpected volume in response to information arrival may also have opposite effects on the lead–lag correlation coefficient, depending on whether the volume increase belongs to the leader or the lagger. Finally, we find that sophisticated investors have a more significant effect on the lead–lag relationship than non-sophisticated ones.
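A toy sketch of the basic lead–lag measurement follows: the cross-correlation of two return series at different displacements, with the correlation-maximizing lag indicating which series leads and by how much. The synthetic data and the simple correlation estimator are illustrative only; the paper's millisecond-level methodology is not reproduced here.

```python
# Find the lag that maximizes cross-correlation between two return series.
import numpy as np

rng = np.random.default_rng(1)
leader = rng.normal(size=1000)
# The "lagger" echoes the leader two ticks later (np.roll is a toy device).
lagger = 0.7 * np.roll(leader, 2) + 0.3 * rng.normal(size=1000)

def lead_lag_corr(x, y, max_lag=5):
    """corr(x_t, y_{t+lag}) for each lag; positive lag means x leads y."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            out[lag] = np.corrcoef(x[:-lag], y[lag:])[0, 1]
        elif lag < 0:
            out[lag] = np.corrcoef(x[-lag:], y[:lag])[0, 1]
        else:
            out[lag] = np.corrcoef(x, y)[0, 1]
    return out

corrs = lead_lag_corr(leader, lagger)
print(max(corrs, key=corrs.get))  # 2: the leader leads by two ticks
```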
