Similar Documents
20 similar documents were found.
1.
Forecasts have traditionally served as the basis for planning and executing supply chain activities. Forecasts drive supply chain decisions, and they have become critically important due to increasing customer expectations, shortening lead times, and the need to manage scarce resources. Over the last ten years, advances in technology and data collection systems have resulted in the generation of huge volumes of data on a wide variety of topics and at great speed. This paper reviews the impact that this explosion of data is having on product forecasting and how it is improving it. While much of this review will focus on time series data, we will also explore how such data can be used to obtain insights into consumer behavior, and the impact of such data on organizational forecasting.

2.
3.
Recent advances in information technology have led to profound changes in global manufacturing. This study focuses on the theoretical and practical challenges and opportunities arising from the Internet of Things (IoT) as it enables new ways of operating supply chains, partly based on big-data analytics, and changes the nature of industries. We intend to reveal the working principles of the IoT and the implications of big-data analytics for supply chain operational performance, particularly with regard to the dynamics of operational coordination and optimization for supply chains that leverage big data obtained from smart connected products (SCPs), and the governance mechanism of big-data sharing. Building on literature closely related to our focal topic, we analyze and deduce the substantial influence of disruptive technologies and emerging business models, including the IoT, big-data analytics and SCPs, on many aspects of supply chains, such as consumer value judgment, product development, resource allocation, operations optimization, revenue management and network governance. Furthermore, we propose several research directions and corresponding research schemes for these new situations. This study aims to promote future research in the field of big data-driven supply chain management with the IoT, help firms improve data-driven operational decisions, and provide governments with a reference for advancing and regulating the development of the IoT and big-data industries.

4.
HR and analytics: why HR is set to fail the big data challenge
The HR world is abuzz with talk of big data and the transformative potential of HR analytics. This article takes issue with optimistic accounts, which hail HR analytics as a ‘must have’ capability that will ensure HR's future as a strategic management function while transforming organisational performance for the better. It argues that unless the HR profession wises up to both the potential and drawbacks of this emerging field and engages operationally and strategically to develop better methods and approaches, it is unlikely that existing practices of HR analytics will deliver transformational change. Indeed, it is possible that current trends will seal the exclusion of HR from strategic, board-level influence while doing little to benefit organisations and actively damaging the interests of employees.

5.
Through building and testing theory, the practice of research animates data for human sense-making about the world. The IS field began in an era when research data was scarce; in today's age of big data, it is now abundant. Yet, IS researchers often enact methodological assumptions developed in a time of data scarcity, and many remain uncertain how to systematically take advantage of the new opportunities afforded by big data. How should we adapt our research norms, traditions, and practices to reflect newfound data abundance? How can we leverage the availability of big data to generate cumulative and generalizable knowledge claims that are robust to threats to validity? To date, IS academics have largely welcomed the arrival of big data as an overwhelmingly positive development. A common refrain in the discipline is: more data is great, IS researchers know all about data, and we are a well-positioned discipline to leverage big data in research and teaching. In our opinion, many benefits of big data will be realized only with a thoughtful understanding of the implications of big data availability and, increasingly, a deliberate shift in IS research practices. We argue for the need to revisit and extend the traditional models that guide much of IS research. Based on our analysis, we propose a research approach that incorporates consideration of big data—and associated implications such as data abundance—into a classic approach to building and testing theory. We close our commentary by discussing the implications of this hybrid approach for the organization, execution, and evaluation of theory-informed research. Our recommendations on how to update one approach to IS research practice may have relevance to all theory-informed researchers who seek to leverage big data.

6.
High-dimensional data poses various problems for the application of data envelopment analysis (DEA). In this study, we focus on a “big data” problem related to the considerably large dimensions of the input-output data. The four most widely used approaches to guiding dimension reduction in DEA are compared via Monte Carlo simulation: principal component analysis (PCA-DEA), which is based on the idea of aggregating inputs and outputs, and efficiency contribution measurement (ECM), the average efficiency measure (AEC), and regression-based detection (RB), which are based on the idea of variable selection. We compare the performance of these methods under different scenarios using a new comparison benchmark for the simulation test. In addition, we discuss the effect of initial variable selection in RB for the first time. Based on the results, we offer more reliable guidelines on how to choose an appropriate method.
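To illustrate the aggregation idea behind PCA-DEA mentioned in this abstract, the following is a minimal sketch (not taken from the paper) that compresses correlated inputs with principal components and then scores each decision-making unit with a standard input-oriented CCR DEA linear program. The data, dimensions, number of retained components, and the shift applied to make the component scores positive are all illustrative assumptions.

```python
# Minimal PCA-DEA sketch: aggregate correlated inputs with PCA, then run an
# input-oriented CCR DEA model on the reduced data (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_dmu = 30
X = rng.uniform(1.0, 10.0, size=(n_dmu, 6))   # 6 hypothetical inputs
Y = rng.uniform(1.0, 10.0, size=(n_dmu, 2))   # 2 hypothetical outputs

# Step 1: aggregate the input dimensions into a few principal components.
X_pc = PCA(n_components=2).fit_transform(X)
# PC scores can be negative, which violates DEA's positivity assumption;
# a simple workaround for this sketch is to shift them to strictly positive values.
X_pc = X_pc - X_pc.min(axis=0) + 1.0

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form), solved as an LP."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij - theta * x_io <= 0   (one row per input i)
    A_in = np.hstack([-X[[o]].T, X.T])
    b_in = np.zeros(m)
    # -sum_j lambda_j * y_rj <= -y_ro             (one row per output r)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun  # optimal theta, the efficiency score

scores = [ccr_efficiency(X_pc, Y, o) for o in range(n_dmu)]
print(np.round(scores, 3))
```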

7.
In this study, big data studies published between January 2015 and June 2018 are reviewed and several highly cited papers are identified, indicating a growing interest in the area of big data. Papers and proceedings from international peer-reviewed journals and ranked conferences were reviewed. We employed principal component analysis together with citation and co-citation analysis to identify the research themes emanating from these studies. The citation and co-citation analysis reveals the cross-functional nature of big data research, which permeates different business sectors and is influenced by themes in engineering and information management.

8.
Quality of service (QoS) determines a service's usability and utility, both of which influence the service selection process. QoS varies from one service provider to another, and each web service has its own methodology for evaluating it. The lack of a transparent QoS evaluation model makes service selection challenging. Moreover, most QoS evaluation processes do not consider historical data, which not only helps in obtaining more accurate QoS values but also supports future prediction, recommendation and knowledge discovery. QoS-driven service selection demands a model in which QoS can be provided as a service to end users. This paper proposes a layered QaaS (quality as a service) model, along the same lines as platform as a service and software as a service, where users provide QoS attributes as inputs and the model returns services satisfying the user's QoS expectations. The paper covers all the key aspects in this context: selection of data sources, their transformation, and the evaluation, classification and storage of QoS. It uses server logs as the source for evaluating QoS values, a common methodology for their evaluation, and big data technologies for their transformation and analysis. The paper also establishes that Spark outperforms Pig with respect to evaluating QoS from logs.
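To make the log-based QoS evaluation concrete, here is a minimal, hypothetical Spark sketch (not the paper's implementation) that derives two common QoS attributes, average response time and availability, from a structured web-server log. The file name, log schema, and column names are assumptions.

```python
# Hypothetical PySpark sketch: derive per-service QoS attributes
# (average response time, availability) from a structured server log.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("qos-from-logs").getOrCreate()

# Assumed log schema: service, timestamp, http_status, response_ms
logs = spark.read.csv("server_log.csv", header=True, inferSchema=True)

qos = (
    logs.groupBy("service")
        .agg(
            F.avg("response_ms").alias("avg_response_ms"),          # responsiveness
            (F.sum(F.when(F.col("http_status") < 500, 1).otherwise(0))
               / F.count("*")).alias("availability"),               # share of non-5xx responses
            F.count("*").alias("requests"),                         # traffic volume
        )
)

qos.orderBy(F.col("avg_response_ms")).show()
```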

9.
Understanding the developmental trajectories of big data analytics in the corporate context is highly relevant for information systems research and practice. This study presents a comprehensive bibliometric analysis of applications of big data analytics in enterprises. The sample for this study contained a total of 1727 articles from the Scopus database. The sample was analyzed with techniques such as bibliographic coupling, citation analysis, co-word analysis, and co-authorship analysis. Findings from the co-citation analysis identified four major thematic areas in the extant literature. The evolution of these thematic areas was documented with dynamic co-citation analysis.

10.
Nowadays, issues such as limited natural resources, environmental problems, social matters, and the significance of resilience in the agricultural supply chain (ASC) have drawn considerable attention worldwide. In this research, a five-level multi-objective stochastic mixed-integer linear programming model is designed for the tea supply chain (TSC) in Iran. The objective functions of the proposed network are to minimize the total costs of the supply chain (SC), total water consumption, and non-resilience measures, and to maximize the job opportunities created by facilities. Accounting for uncertainty in SC networks is highly beneficial because parameters such as demand are subject to variation; consequently, a robust possibilistic optimization (RPO) approach is implemented to manage the uncertainty. Given the multi-objective nature of the problem, the weighted-normalized-extended goal programming (WNEGP) approach is employed to solve the model. To validate the model, real data were collected from the tea organization of Iran; the parameters were gathered according to three aspects of big data: volume, velocity, and variety. The results confirm the functionality of the model with regard to planning strategy and show that spending more on the SC yields an effective sustainable-resilient-responsive network. In terms of managerial insights, this study offers a far-reaching perspective to managers, especially in the ASC, for developing their industries. Finally, sensitivity analyses are discussed for key parameters such as demand, the robustness coefficients, and the values of the objective functions in various states; these analyses show how sustainability and resiliency affect supply chain efficiency.
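For readers unfamiliar with the solution method named above, a generic weighted-extended goal programming formulation (a sketch of the general idea, not the paper's exact WNEGP model) minimizes both the normalized weighted sum and the worst case of the unwanted goal deviations:

\[
\min_{x,\,d,\,D}\;\; \lambda D + (1-\lambda)\sum_{k=1}^{K} \frac{w_k d_k}{N_k}
\quad\text{s.t.}\quad
f_k(x) + d_k \ge g_k,\qquad \frac{w_k d_k}{N_k} \le D,\qquad d_k \ge 0,\quad x \in \mathcal{X},
\]

with goals $g_k$, unwanted deviations $d_k$, weights $w_k$, normalization constants $N_k$, and $\lambda\in[0,1]$ trading off the sum of deviations against the maximum deviation $D$.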

11.
The seed of this special section was the workshop “30 Years of Cointegration and Dynamic Factor Models Forecasting and its Future with Big Data”, held at FUNCAS in Madrid in February 2019. In this editorial, we describe the main contributions of the 13 papers published within the special section towards forecasting in the context of non-stationary Big Data using cointegration or Dynamic Factor Models.

12.
Our planet is gradually moving towards an urbanized world. Modern urban agglomerations are nowadays turning into advanced information hubs that support the smart management of dynamic urban systems. The currently popular notion of ‘smart cities’ aims to provide a new perspective for sustainable and high-performance strategies of city stakeholders in our ‘urban century’. In this context, digital information technology provides a new tool for efficient and effective management and planning of urban space, inter alia in the fields of transportation, environment, public facilities and advanced service provision to citizens. This paper aims to offer, first, a concise overview of the emerging opportunities of information and communication technology (ICT) for smart urban policy; digital technology, in particular, appears to provide novel pathways for modern planning strategies in smart cities. Against this background, the paper sketches out the complex force field of global urbanisation phenomena and highlights the data and information needs for strategic planning of cities (using, inter alia, the so-called ‘urban piazza’ strategy framework). Secondly, various new decision support tools that are currently emerging and that offer promising new scope for handling complex urban management issues (for instance, accessibility, congestion, safety or sustainability) are briefly presented. Finally, the potential of such digital data systems for urban management and policy is concisely illustrated by means of some recent applications in the area of smartphone data systems. The paper concludes with a discussion of the challenges ahead for urban policy, inter alia by paying attention to institutional and governance aspects of ‘big digital data’ management in urban systems.

13.
The development of technology strategies is often supported by strategic frameworks. Although standards can be critical in fostering technological innovation, particularly by supporting knowledge diffusion, their importance is often neglected by commonly used strategic frameworks. This paper presents a framework which uses the knowledge that needs to transition between key anticipated innovation activities to anticipate potential standardisation needs for emerging technologies. The framework draws attention to strategic considerations and dimensions that might otherwise be overlooked, including different types of standards; standardisation stakeholders; the alignment, coordination, and sequencing of standards; and how these all change over time. A technology roadmapping based framework was used because it explicitly characterises the alignment, coordination, and sequencing of innovation activities (over time) and can be configured to draw out information against the other strategic considerations and dimensions listed above. The principles and utility of the framework are demonstrated in three contrasting case studies: synthetic biology, additive manufacturing, and smart grid. These show how standards mediate between innovation actors by codifying and diffusing knowledge and can enhance and catalyse innovation. The proposed framework can be used to reveal where standards might be used to support innovation, better characterise the types of standards needed, identify the stakeholders needed to develop them, and highlight any potential alignment, coordination, and sequencing issues related to standardisation activities.

14.
Online communities have become an important source for knowledge and new ideas. This paper considers the potential of crowdsourcing as a tool for data analysis to address the increasing problems faced by companies in trying to deal with “Big Data”. By exposing the problem to a large number of participants proficient in different analytical techniques, crowd competitions can very quickly advance the technical frontier of what is possible using a given dataset. The empirical setting of the research is Kaggle, the world's leading online platform for data analytics, which operates as a knowledge broker between companies aiming to outsource predictive modelling competitions and a network of over 100,000 data scientists who compete to produce the best solutions. The paper follows an exploratory case study design and focuses on the efforts by Dunnhumby, the consumer insight company behind the success of the Tesco Clubcard, to find and leverage the enormous potential of the collective brain to predict shopper behaviour. By adopting a crowdsourcing approach to data analysis, Dunnhumby were able to extract information from their own data that was previously unavailable to them. Significantly, crowdsourcing effectively enabled Dunnhumby to experiment with over 2000 modelling approaches to their data rather than relying on the traditional internal biases within their R&D units.

15.
With the development of cyber physical systems and big data technology, there is huge potential to apply them to achieve personalization and improve resource efficiency in Industry 4.0. As Industry 4.0 is a relatively new concept that originated from an advanced manufacturing vision supported by the German government in 2011, there are only a few existing surveys on either cyber physical systems or big data in Industry 4.0, and even fewer surveys on the intersection between the two. However, cyber physical systems are closely related to big data in nature: for example, cyber physical systems continuously generate large amounts of data, which require big data techniques to process and which can help improve system scalability, security, and efficiency. Therefore, we conduct this survey to bring more attention to this critical intersection and to highlight future research directions towards achieving full autonomy in Industry 4.0.

16.
In this paper, we use several indicators of trade informativeness to search for informed traders on the final trading days of Banco Popular, the first and only bank resolution case to date in the euro area. In particular, we use the model proposed by Preve and Tse (2013) to estimate the adjusted daily probability of informed trading and the probability of symmetric order-flow shock using high-frequency transaction data. Our empirical results indicate that, in anticipation of a possible liquidation of the bank, informed investors reacted to the bad news by placing more weight on it, and that Banco Popular experienced large increases in both buy and sell orders during the last days of trading, when the bank registered a significant depletion of its deposit base. Moreover, we find evidence supporting the presence of insider trading and illiquidity, especially after speculation in the media that the bank could face liquidation. Our study has important implications for market participants and regulatory authorities.
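As background for the informed-trading indicators mentioned above, the classical probability of informed trading (PIN) of Easley et al., on which adjusted measures such as the one used here build, is (a baseline sketch, not the paper's exact specification):

\[
\mathrm{PIN} \;=\; \frac{\alpha\mu}{\alpha\mu + \varepsilon_b + \varepsilon_s},
\]

where $\alpha$ is the probability of an information event, $\mu$ the arrival rate of informed trades, and $\varepsilon_b$, $\varepsilon_s$ the arrival rates of uninformed buy and sell orders. As the abstract notes, the adjusted measures estimated in the paper refine this baseline, in particular by accounting for symmetric order-flow shocks.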

17.
In this paper, we estimate a dynamic structural model of employment at the firm level. Our dataset consists of a balanced panel of 2790 Greek manufacturing firms. The empirical evidence from this dataset highlights three important stylized facts: (a) there are periods in which firms decide not to change their labour input, (b) there are periods of large employment changes (the lumpy nature of labour adjustment), and (c) employment spikes are commonly followed by periods of smooth, low employment growth. Following Cooper and Haltiwanger [Cooper, R.W. and Haltiwanger, J., “On the Nature of Capital Adjustment Costs”, Review of Economic Studies, 2006; 73(3); 611–633], we consider a dynamic discrete choice model with a general specification of adjustment costs including convex, non-convex and “disruption of production” components. We use a method of simulated moments procedure to estimate the structural parameters. Our results indicate considerable fixed costs in Greek employment adjustment.
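One common way to write a general adjustment-cost specification of the kind referred to above (a sketch in the spirit of Cooper and Haltiwanger, not the paper's exact parameterization) combines fixed, convex, and disruption-of-production components:

\[
C(L, L') \;=\;
\underbrace{F\cdot\mathbf{1}\{L'\neq L\}}_{\text{non-convex (fixed)}}
\;+\;
\underbrace{\frac{\gamma}{2}\Big(\frac{L'-L}{L}\Big)^{2} L}_{\text{convex}}
\;+\;
\underbrace{(1-\lambda)\,R(A,L')\cdot\mathbf{1}\{L'\neq L\}}_{\text{disruption of production}},
\]

where $F$ is a fixed cost incurred whenever employment changes, $\gamma$ scales the quadratic component, and a fraction $1-\lambda$ of revenue $R$ is lost in periods of adjustment.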

18.
ABSTRACT

This article focuses on the managerial discretion that public managers experience. More specifically, it discusses how managerialism is an embedded ideological stance that influences understandings of public sector governance. I argue that managers’ perceptions of discretion are affected by these understandings. The analysis draws on empirical data from a longitudinal study, demonstrating how public managers engage discourses emanating from managerialism in order to rationalize increased discretion. The findings suggest that customer perspectives function as a rationalizing factor in public managers’ transition towards increased discretion. As such, this article contributes to knowledge about managerial discretion as well as managerialism.

19.
Due to the existence of free software and pedagogical guides, the use of Data Envelopment Analysis (DEA) has become further democratized in recent years. Nowadays, it is quite usual for practitioners and decision makers with little or no knowledge of operational research to run their own efficiency analyses. Within DEA, several alternative models allow for an environmental adjustment. Four such models, each user-friendly and easily accessible to practitioners and decision makers, are applied to empirical data on 90 primary schools in the State of Geneva, Switzerland. Results show that the majority of the alternative models deliver divergent results. From a political and a managerial standpoint, these diverging results could lead to potentially ineffective decisions. As no consensus emerges on the best model to use, practitioners and decision makers may be tempted to select the model that is right for them, in other words, the model that best reflects their own preferences. Further studies should investigate how an appropriate multi-criteria decision analysis method could help decision makers select the right model.
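As context for what "environmental adjustment" means here, one standard way a non-discretionary environmental factor $z$ can enter an input-oriented DEA model (a generic Banker-Morey-style sketch, not necessarily one of the four models compared in the paper) is to constrain it like an input but leave it unscaled by the efficiency score $\theta$:

\[
\min_{\theta,\lambda}\ \theta
\quad\text{s.t.}\quad
\sum_{j=1}^{n}\lambda_j x_{ij} \le \theta\, x_{io}\ \ \forall i,
\qquad
\sum_{j=1}^{n}\lambda_j z_{kj} \le z_{ko}\ \ \forall k,
\qquad
\sum_{j=1}^{n}\lambda_j y_{rj} \ge y_{ro}\ \ \forall r,
\qquad
\lambda_j \ge 0 .
\]

An alternative family of models instead regresses first-stage efficiency scores on the environmental variables in a second stage, which is one reason the approaches can produce divergent results.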

20.
This article examines the impact of fixed effects production functions vis-à-vis stochastic production frontiers on technical efficiency measures. An unbalanced panel consisting of 96 Vermont dairy farmers for the 1971–1984 period was used in the analysis. The models examined incorporated both time-variant and time-invariant technical efficiency. The major source of variation in efficiency levels across models stemmed from the assumption made concerning the distribution of the one-sided term in the stochastic frontiers. In general, the fixed effects technique was found superior to the stochastic production frontier methodology. Despite the fact that the results of various statistical tests revealed the superiority of some specifications over others, the overall conclusion of the study is that the efficiency analysis was fairly consistent throughout all the models considered.
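To clarify the comparison, a standard stochastic production frontier (a generic textbook form, not the article's exact specification) decomposes the error into statistical noise and a one-sided inefficiency term, and it is the distributional assumption on the latter that the article identifies as the main driver of differences across models:

\[
\ln y_{it} \;=\; \beta_0 + \sum_{k}\beta_k \ln x_{kit} + v_{it} - u_{it},
\qquad v_{it}\sim N(0,\sigma_v^{2}),\quad u_{it}\ge 0,
\qquad \mathrm{TE}_{it} = \exp(-u_{it}),
\]

where $v$ is noise and $u$ is inefficiency with an assumed one-sided distribution (for example half-normal or exponential). The fixed-effects alternative instead recovers a time-invariant firm effect directly from the panel and ranks technical efficiency from it, without distributional assumptions on $u$.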
