Similar Documents
A total of 20 similar documents were found (search time: 23 ms).
1.
Approximate Bayesian Computation (ABC) has become increasingly prominent as a method for conducting parameter inference in a range of challenging statistical problems, most notably those characterized by an intractable likelihood function. In this paper, we focus on the use of ABC not as a tool for parametric inference, but as a means of generating probabilistic forecasts, or for conducting what we refer to as ‘approximate Bayesian forecasting’. The four key issues explored are: (i) the link between the theoretical behavior of the ABC posterior and that of the ABC-based predictive; (ii) the use of proper scoring rules to measure the (potential) loss of forecast accuracy when using an approximate rather than an exact predictive; (iii) the performance of approximate Bayesian forecasting in state space models; and (iv) the use of forecasting criteria to inform the selection of ABC summaries in empirical settings. The primary finding of the paper is that ABC can provide a computationally efficient means of generating probabilistic forecasts that are nearly identical to those produced by the exact predictive, in a fraction of the time required by an exact method.
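As an illustration of the mechanics behind this kind of likelihood-free predictive, here is a minimal rejection-ABC sketch for a toy Gaussian model; the model, prior, tolerance and summary statistic are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: observed data y_t ~ N(theta, 1); the "intractable" likelihood
# is bypassed by simulating data and comparing summary statistics.
y_obs = rng.normal(1.5, 1.0, size=100)
s_obs = y_obs.mean()                      # summary statistic

n_draws, tol, accepted = 50_000, 0.1, []
for _ in range(n_draws):
    theta = rng.normal(0.0, 5.0)          # draw from the prior
    y_sim = rng.normal(theta, 1.0, size=100)
    if abs(y_sim.mean() - s_obs) < tol:   # accept if summaries are close
        accepted.append(theta)

# Approximate Bayesian forecast: push each accepted draw through the
# one-step-ahead predictive y_{T+1} | theta ~ N(theta, 1).
theta_abc = np.array(accepted)
y_forecast = rng.normal(theta_abc, 1.0)
print(f"{len(theta_abc)} accepted draws; "
      f"predictive mean {y_forecast.mean():.3f}, sd {y_forecast.std():.3f}")
```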

2.
In this field-based study, we interview top- and middle-level managers at Insteel Industries and conduct statistical analysis of firm-level data in order to shed light on whether activity-based costing (ABC) provides new information to managers and whether activity-based management (ABM) significantly influences product and customer-related decisions. We find that after the ABC analysis, Insteel undertook a number of process improvements that resulted in significant cost savings. Additionally, Insteel displayed a higher propensity to discontinue, or raise the prices of, products that were found comparatively unprofitable in the ABC study, and to discontinue comparatively unprofitable customers. Thus we provide empirical evidence that ABC influences both strategic and operational managerial decisions.

3.
闵亨锋 《物流科技》2007,30(6):93-95
Traditional activity-based costing (ABC) is particularly cumbersome when used to calculate logistics costs, which makes it difficult to popularize. Time-driven activity-based costing (TDABC) overcomes this procedural complexity: it merges resource drivers and activity drivers into a single process, making the calculation of logistics costs simpler and the method easier to adopt. This paper introduces the accounting steps of TDABC, the differences between TDABC and traditional ABC, and the concrete application of TDABC in logistics cost accounting.
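To make the TDABC calculation concrete, the following sketch applies the two time-driven parameters (cost per minute of practical capacity and unit times per activity) to made-up warehouse figures; all numbers and activity names are hypothetical, not from the paper.

```python
# Time-driven ABC needs two estimates per department: (1) the cost per unit of
# time of supplied capacity, and (2) the unit times consumed by each activity.
capacity_cost = 180_000.0        # quarterly cost of the warehouse team (currency units)
practical_minutes = 600_000.0    # practical capacity in minutes
cost_per_minute = capacity_cost / practical_minutes   # resource-driver step folded in

# activity -> (minutes per transaction, number of transactions)
activities = {
    "receive pallet":  (5.0, 20_000),
    "pick order line": (2.5, 80_000),
    "ship order":      (8.0, 15_000),
}

used_minutes = 0.0
for name, (minutes, volume) in activities.items():
    used_minutes += minutes * volume
    print(f"{name:15s}: {minutes * volume * cost_per_minute:12,.2f}")

# Unlike traditional ABC, the unused (idle) capacity is made visible.
idle = (practical_minutes - used_minutes) * cost_per_minute
print(f"cost of unused capacity: {idle:,.2f}")
```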

4.
Starting from an analysis of the actual state of activity-based costing (ABC) in Chinese enterprises, this paper first uses sample analysis to show that Chinese enterprises currently have no ABC implementation in the true sense of the term. It then discusses the bottlenecks Chinese enterprises face in implementing ABC and the concrete methods by which ABC can be implemented.

5.
A theory of the temporary organization
The idea of the firm as an eternal entity possibly came in with the era of industrialism. In any case, the practical consequences of this idea contrast sharply with many ideas about projects and temporary organizations. Mainstream organization theory is based upon the assumption that organizations are or should be permanent; theories on temporary organizational settings (e.g., projects) are much less prevalent. In this article, we address the need for a theory of temporary organizations, thus seeking to supplement traditional project management wisdom. We also suggest some components of such a theory by elaborating on certain ideas about projects. “Action”, as opposed to “decision”, is one such component that is central to a theory of the temporary organization. In some respects we are thus dealing with antipoles, in other respects with concepts similar to those in established mainstream organizational theory. The role of “time” in the firm differs from its role in the temporary organization. These differences have several important implications, and we are able to suggest a coherent outline of a theory which we believe could be useful and which also covers several important aspects of temporary organizations.

6.
Research on the application of ABC theory in ERP systems
陈志祥  黄艳芬 《价值工程》2004,23(4):110-113
ERP (Enterprise Resource Planning) is an integrated computer system that supports the operation of supply chain systems. Activity-Based Costing (ABC) theory shares a closely related principle with supply chain management: process-oriented performance management and optimization. Applying ABC theory within an ERP system covers standard cost calculation, financial management, inventory management, and production planning and control. Embedding ABC theory in ERP ties the ERP subsystems closely together, forms a process-oriented mode of production organization and management, and facilitates both the analysis of changes in the enterprise value chain and the re-engineering and optimization of business processes.

7.
To remedy the shortcoming of the traditional ABC classification method, which classifies inventory items solely by the cumulative percentage of each item's inventory capital in total inventory capital, a fuzzy-evaluation-based ABC classification method for spare parts is proposed. The method is applied to classify the spare parts of an oilfield company, and the results show that, compared with traditional ABC classification, it identifies the critical spare parts among a large number of items more scientifically and effectively, improving both the effectiveness of spare-part classification and the focus of spare-part management.
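A minimal sketch of how a fuzzy-evaluation style ABC classification might be computed, assuming a weighted composite of normalized criterion scores followed by rank-based cut-offs; the criteria, weights, parts and cut-offs below are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical criteria scores in [0, 1] for each spare part: annual cost share,
# criticality to production, and lead-time risk; the weights are illustrative.
parts = ["pump seal", "valve", "bearing", "gasket", "impeller", "sensor"]
scores = np.array([
    [0.90, 0.80, 0.60],
    [0.40, 0.95, 0.70],
    [0.70, 0.30, 0.20],
    [0.10, 0.20, 0.10],
    [0.55, 0.60, 0.85],
    [0.20, 0.90, 0.40],
])
weights = np.array([0.5, 0.3, 0.2])

# Composite evaluation score, then split the ranked list into A/B/C using
# arbitrary 20% / 50% rank cut-offs.
composite = scores @ weights
order = np.argsort(composite)[::-1]
n = len(parts)
for rank, idx in enumerate(order):
    cls = "A" if rank < 0.2 * n else ("B" if rank < 0.5 * n else "C")
    print(f"{parts[idx]:10s} score={composite[idx]:.2f} class={cls}")
```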

8.
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing traditional idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions.
The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches.
This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, they can be applied in any field, irrespective of the type of response.

9.
In this paper we present the physics of the city, a new approach to investigating urban dynamics. In particular, we focus on observing and modeling citizens' mobility. Since social dynamics are in principle not directly observable, our main idea is that by observing human mobility processes we can deduce some features and characteristics of the social dynamics. We define the automata gas paradigm and write a crowding equation able to predict, in a statistical sense, the threshold between a self-organized crowd and a chaotic one, which we interpret as the emergence of a possible panic scenario. We also show some specific results obtained on the Venezia pedestrian network. First, by analyzing the network we estimate the complexity of Venice; second, by measuring the pedestrian flow on some bridges we find significant statistical correlations, and from the experimental data we derive two different bridge flow profiles depending on the pedestrian populations. Furthermore, considering a reduced portion of the city, i.e. Punta della Dogana, we build a theoretical model via a Markov approach, with a stationary state solution. Finally, by implementing some individual characteristics of pedestrians, we simulate the flows and find good agreement with the empirical distributions. We stress that these results can form the basis for constructing an E-governance mobility system.
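For the Markov part of such a model, a stationary occupation vector can be obtained as the left eigenvector of the transition matrix associated with eigenvalue one; the sketch below uses a hypothetical 4-node pedestrian network, not the Punta della Dogana data.

```python
import numpy as np

# Hypothetical 4-node pedestrian network (e.g., squares and bridges); P[i, j] is
# the probability that a walker at node i moves to node j in one time step.
P = np.array([
    [0.1, 0.5, 0.3, 0.1],
    [0.2, 0.2, 0.4, 0.2],
    [0.3, 0.3, 0.2, 0.2],
    [0.1, 0.4, 0.4, 0.1],
])

# The stationary state solves pi = pi P; take the left eigenvector of P
# associated with eigenvalue 1 and normalise it to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()
print("stationary pedestrian occupation:", np.round(pi, 3))
```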

10.
Logistics cost is a core competitive advantage of third-party logistics (3PL) companies, and innovation in its accounting methods is key to improving management. Time-driven activity-based costing (TDABC) is well suited to 3PL companies, but systematic research on it is still lacking. Taking Tianshui Logistics Company as an example, and in response to the problems faced by activity-based costing (ABC), this paper designs a TDABC logistics cost accounting system and builds a logistics cost accounting model. The study shows that the system and model can accurately reflect the logistics cost of services and, moreover, can reveal the cost of idle capacity.

11.
Statistical Inference in Nonparametric Frontier Models: The State of the Art
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
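For readers unfamiliar with the estimator itself, the following sketch computes input-oriented FDH efficiency scores on toy data: the score is the smallest radial contraction of a firm's inputs attainable at an observed unit producing at least as much output. The data are invented.

```python
import numpy as np

# Toy data: 6 firms, 2 inputs (columns of X), 1 output (y). Values are illustrative.
X = np.array([[4, 3], [6, 2], [5, 5], [8, 6], [3, 4], [7, 7]], dtype=float)
y = np.array([10, 12, 11, 14, 8, 13], dtype=float)

def fdh_input_efficiency(x0, y0, X, y):
    """Input-oriented FDH score: the best (smallest) radial scaling of x0 that
    reaches an observed unit dominating (x0, y0) in output."""
    dominating = y >= y0                         # free disposability of outputs
    ratios = np.max(X[dominating] / x0, axis=1)  # scaling needed to envelop each candidate
    return ratios.min()

for i in range(len(y)):
    score = fdh_input_efficiency(X[i], y[i], X, y)
    print(f"firm {i}: FDH efficiency = {score:.3f}")
```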

12.
This paper presents and evaluates a way of making product-to-product tables from Use and Make matrices, which is of immediate relevance to any statistical office that makes input-output tables. Two ways of making a product-to-product table are in common practice: one based on the product-technology assumption and the other on the industry-technology assumption. The industry-technology assumption is recognized as highly implausible but is often used because the product-technology assumption frequently leads to small negative flows which make no economic sense. This paper shows how a slight adjustment in the product-technology assumption leads to an algorithm that is certain to avoid negative flows yet keeps close to the spirit of the product-technology idea. Some details of the application of this method to the USA table for 1992 are reported. Similar applications to every American table since 1958 have given consistently sensible results. A computer program for the method is available.
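A minimal numerical sketch of the unadjusted product-technology calculation, which solves U = A V' for the product-by-product coefficient matrix A; the 3x3 Use and Make matrices are invented, and the sketch only flags negative coefficients rather than applying the paper's adjustment.

```python
import numpy as np

# Toy example: U is the Use matrix (products in rows, industries in columns);
# V is the Make matrix (industries in rows, products in columns). Under the
# product-technology assumption, inputs of industry j are A times its product
# mix, so U = A V' and A = U (V')^{-1}.
U = np.array([[20.,  5.,  2.],
              [ 8., 30.,  4.],
              [ 3.,  6., 25.]])
V = np.array([[90.,  4.,  1.],
              [ 3., 80.,  2.],
              [ 1.,  5., 70.]])

A = U @ np.linalg.inv(V.T)
print(np.round(A, 4))

# Small negative flows, if any, signal where the assumption conflicts with the
# data; the paper's adjusted assumption is designed to avoid them, whereas a
# naive fix would simply truncate them at zero.
print("negative coefficients:", (A < 0).sum())
```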

13.
Statistical offices are responsible for publishing accurate statistical information about many different aspects of society. This task is complicated considerably by the fact that data collected by statistical offices generally contain errors. These errors have to be corrected before reliable statistical information can be published. This correction process is referred to as statistical data editing. Traditionally, data editing was mainly an interactive activity with the aim to correct all data in every detail. For that reason the data editing process was both expensive and time-consuming. To improve the efficiency of the editing process it can be partly automated. One often divides the statistical data editing process into the error localisation step and the imputation step. In this article we restrict ourselves to discussing the former step, and provide an assessment, based on personal experience, of several selected algorithms for automatically solving the error localisation problem for numerical (continuous) data. Our article can be seen as an extension of the overview article by Liepins, Garfinkel & Kunnathur (1982). All algorithms we discuss are based on the (generalised) Fellegi–Holt paradigm that says that the data of a record should be made to satisfy all edits by changing the fewest possible (weighted) number of fields. The error localisation problem may have several optimal solutions for a record. In contrast to what is common in the literature, most of the algorithms we describe aim to find all optimal solutions rather than just one. As numerical data mostly occur in business surveys, the described algorithms are mainly suitable for business surveys and less so for social surveys. For four of the algorithms, we compare both their complexity and their computing times on six realistic data sets.
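To show the Fellegi–Holt idea in miniature, the sketch below brute-forces the smallest (equally weighted) set of fields whose change allows a single record to satisfy two toy linear edits; the edits, values and the case-by-case feasibility checker are illustrative only and are not one of the surveyed algorithms.

```python
from itertools import combinations

# One numerical record and two edits: turnover = costs + profit, and
# profit <= 0.6 * turnover. All values and weights are made up.
record = {"turnover": 100.0, "costs": 90.0, "profit": 40.0}
weights = {"turnover": 1.0, "costs": 1.0, "profit": 1.0}

def edits_satisfiable(fixed):
    """Can both edits be met by imputing the non-fixed fields?
    (Toy checker, written out case by case for these two edits.)"""
    t, c, p = fixed.get("turnover"), fixed.get("costs"), fixed.get("profit")
    if t is not None and c is not None and p is not None:
        return abs(t - (c + p)) < 1e-9 and p <= 0.6 * t
    if c is not None and p is not None:        # turnover free: t := c + p
        return p <= 0.6 * (c + p)
    if t is not None and c is not None:        # profit free: p := t - c
        return (t - c) <= 0.6 * t
    if t is not None and p is not None:        # costs free: c := t - p
        return p <= 0.6 * t
    return True                                 # two or more fields free

# Fellegi-Holt: change the fewest (weighted) fields so that some imputation
# satisfies all edits; with unit weights, enumerating by subset size suffices.
fields = list(record)
best = None
for r in range(len(fields) + 1):
    for subset in combinations(fields, r):
        fixed = {k: v for k, v in record.items() if k not in subset}
        if edits_satisfiable(fixed):
            w = sum(weights[k] for k in subset)
            if best is None or w < best[0]:
                best = (w, subset)
    if best is not None:
        break
print("fields to change:", best[1], "weight:", best[0])
```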

14.
This article presents an approach to integrating life cycle assessment (LCA) into an activity-based costing (ABC) model to develop a steering system that takes into account both financial costs and associated environmental impacts. By combining the formalism of LCA and ABC matrix calculations, we show how impact assessment results can be affiliated with costs to jointly and simultaneously compute the costs and environmental impacts of products and activities. The conditions of integration are developed following the four-step structure of LCA. The proposal is applied to a simplified case study of the ‘Classic Pen Company.’ The developed ABC-LCA approach paves the way for further test applications, which are considered useful in the context of environmental indicators for strategic steering, communication with customers and forecasting or simulation.
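A small sketch of the shared matrix step: the same product-by-driver matrix propagates activity cost rates and activity impact rates, so costs and environmental impacts come out of one calculation. The driver quantities and rates below are invented and are not the Classic Pen figures.

```python
import numpy as np

# Rows of D give each product's consumption of three activity drivers;
# rate_cost is cost per driver unit, rate_co2 an LCA impact per driver unit.
products = ["blue pen", "red pen", "custom pen"]
D = np.array([
    [1500, 12,  4],    # driver units: machine hours, setups, orders
    [ 900, 30, 10],
    [ 200, 45, 25],
], dtype=float)
rate_cost = np.array([20.0, 150.0, 60.0])   # currency per driver unit
rate_co2  = np.array([ 3.5,  12.0,  1.2])   # kg CO2-eq per driver unit

# Coupling ABC and LCA formalisms: one driver matrix, two rate vectors.
cost   = D @ rate_cost
impact = D @ rate_co2
for name, c, e in zip(products, cost, impact):
    print(f"{name:11s} cost={c:10,.0f}  impact={e:9,.1f} kg CO2-eq")
```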

15.
Bull and bear markets are a common way of describing cycles in equity prices. To fully describe such cycles one would need to know the data generating process (DGP) for equity prices. We begin with a definition of bull and bear markets and use an algorithm based on it to sort a given time series of equity prices into periods that can be designated as bull and bear markets. The rule to do this is then studied analytically and it is shown that bull and bear market characteristics depend upon the DGP for capital gains. By simulation methods we examine a number of DGPs that are known to fit the data quite well—random walks, GARCH models, and models with duration dependence. We find that a pure random walk provides as good an explanation of bull and bear markets as the more complex statistical models. In the final section of the paper we look at some asset pricing models that appear in the literature from the viewpoint of their success in producing bull and bear markets which resemble those in the data. Copyright © 2002 John Wiley & Sons, Ltd.
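A hedged sketch of a dating rule in this spirit, applied to a simulated random-walk price series: it is a simplified local-extrema (Bry–Boschan-like) rule with an alternation fix-up, not the authors' exact algorithm, and the window length is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated monthly log prices from a pure random walk, one of the DGPs the
# paper examines.
log_p = np.cumsum(rng.normal(0.005, 0.04, size=240))

def date_phases(x, window=8):
    """Mark peaks/troughs as points that are maxima/minima within +/- window
    months; the stretch after a trough is a bull phase, after a peak a bear phase."""
    n = len(x)
    turning = []  # (index, 'P' or 'T')
    for t in range(window, n - window):
        seg = x[t - window: t + window + 1]
        if x[t] == seg.max():
            turning.append((t, "P"))
        elif x[t] == seg.min():
            turning.append((t, "T"))
    # enforce alternation: keep the more extreme of consecutive same-type points
    alt = []
    for t, kind in turning:
        if alt and alt[-1][1] == kind:
            keep_new = (x[t] > x[alt[-1][0]]) if kind == "P" else (x[t] < x[alt[-1][0]])
            if keep_new:
                alt[-1] = (t, kind)
        else:
            alt.append((t, kind))
    return alt

for t, kind in date_phases(log_p):
    print(f"month {t:3d}: {'peak' if kind == 'P' else 'trough'}")
```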

16.
Activity-based costing (ABC) has incomparable advantages over traditional costing. However, what ABC accounts for is still an incomplete cost, because it does not include the cost of capital. This paper attempts to use the ABC approach to trace the relevant cost of capital to product costs, so as to correctly assess the value that products create for the enterprise and, in turn, provide more accurate information for pricing, internal management, and other decisions.

17.
陈涛  杨柳 《物流技术》2012,(17):293-295,325
By analyzing the cost-control methods of port logistics enterprises, this paper argues that activity-based costing is an effective means of cost control. Taking Lianyungang Port Logistics Co., Ltd. as an example, it analyzes the day-to-day control of a port logistics enterprise and, drawing on the relevant data, explores the concrete application of activity-based costing.

18.
In toxicity studies, model mis‐specification could lead to serious bias or faulty conclusions. As a prelude to subsequent statistical inference, model selection plays a key role in toxicological studies. It is well known that the Bayes factor and the cross‐validation method are useful tools for model selection. However, exact computation of the Bayes factor is usually difficult and sometimes impossible and this may hinder its application. In this paper, we recommend utilizing the simple Schwarz criterion to approximate the Bayes factor for the sake of computational simplicity. To illustrate the importance of model selection in toxicity studies, we consider two real data sets. The first data set comes from a study of dietary fortification with carbonyl iron in which the Bayes factor and the cross‐validation are used to determine the number of sub‐populations in a mixture normal model. The second example involves a developmental toxicity study in which the selection of dose–response functions in a beta‐binomial model is explored.
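For reference, the Schwarz approximation invoked here can be written as follows (a standard result, not specific to this paper), with \(\hat L_m\) the maximized likelihood of model \(m\), \(k_m\) its number of free parameters and \(n\) the sample size:

\[
\mathrm{BIC}_m = -2\log \hat L_m + k_m \log n, \qquad
\log B_{12} \approx -\tfrac{1}{2}\bigl(\mathrm{BIC}_1 - \mathrm{BIC}_2\bigr),
\]

so model 1 is favoured over model 2 when its BIC is lower; the approximation avoids the often intractable marginal-likelihood integrals at the cost of an \(O(1)\) error on the log scale.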

19.
Principal component analysis (PCA) is a method of choice for dimension reduction. In the current context of data explosion, online techniques that do not require storing all data in memory are indispensable to perform the PCA of streaming data and/or massive data. Despite the wide availability of recursive algorithms that can efficiently update the PCA when new data are observed, the literature offers little guidance on how to select a suitable algorithm for a given application. This paper reviews the main approaches to online PCA, namely, perturbation techniques, incremental methods and stochastic optimisation, and compares the most widely employed techniques in terms of statistical accuracy, computation time and memory requirements using artificial and real data. Extensions of online PCA to missing data and to functional data are detailed. All studied algorithms are available in the R package onlinePCA on CRAN.
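As one concrete instance of the stochastic-optimisation family reviewed here, the sketch below applies Oja's single-pass update for the leading principal component to synthetic streaming data; the step-size schedule and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Streaming data: 2000 observations in 10 dimensions with one dominant direction.
true_dir = rng.normal(size=10)
true_dir /= np.linalg.norm(true_dir)
X = rng.normal(size=(2000, 10)) + 3.0 * rng.normal(size=(2000, 1)) * true_dir

# Oja's stochastic-approximation update for the first principal component:
# w <- w + gamma_t * x (x' w), followed by renormalisation. Each observation
# is seen once and never stored, which is the point of online PCA.
w = rng.normal(size=10)
w /= np.linalg.norm(w)
for t, x in enumerate(X, start=1):
    gamma = 1.0 / (100 + t)               # decreasing step size
    w += gamma * x * (x @ w)
    w /= np.linalg.norm(w)

print("alignment with true direction:", abs(w @ true_dir))
```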

20.
Many production/inventory systems contain thousands of stock keeping units (SKUs). In general, it is not computationally (or conceptually) feasible to consider every one of these items individually in the development of control policies and strategies. Our objective here is to develop a methodology for defining groups to support strategic planning for the operations function. Accordingly, such groups should take into consideration all product characteristics which have a significant impact on the particular operations management problem of interest. These characteristics can include many of the attributes which are used in other functional groupings and will most certainly go beyond the cost and volume attributes used in ABC analysis. The ORG methodology is based on statistical clustering and can utilize a full range of operationally significant item attributes. It considers both statistical measures of discrimination and the operational consequences associated with implementing policies derived on the basis of group membership. The main departures of this analysis from earlier work are: 1) the approach can handle any combination of item attribute information that is important for strategy purposes, 2) management's interest in defining groups on the basis of operational factors can be accommodated, 3) statistical discrimination is considered directly, 4) group definition reflects the performance of management policies which are based (in part) on group membership, and 5) the method can be applied successfully to systems with a large number of SKUs. The specific application which motivated development of the ORG methodology was an analysis of distribution strategy for the service parts division of a major automobile manufacturer. The manufacturer was interested in developing optimal inventory stocking policies, which took into account the complexities of its multiechelon distribution network, supplier relationships and customer service targets for each market segment. This manufacturer stocked over 300,000 part numbers in an extensive network with approximately 50 distribution centers and thousands of dealer locations (i.e., 1.5 million SKU/location combinations). The results of this application indicated that the advantage of using operationally relevant data for grouping and for defining generic, group-based policies for controlling inventory can be substantial. The ORG methodology can be of value to operations managers in industries with a large number of diverse items.
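The ORG methodology itself is richer than plain clustering, but a minimal sketch of the grouping step might look like the following, using synthetic operationally relevant SKU attributes and k-means with an arbitrary number of groups.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic SKU attributes that matter operationally (illustrative): annual
# demand, unit cost, demand variability (CV), and replenishment lead time.
n_sku = 5000
attrs = np.column_stack([
    rng.lognormal(4.0, 1.0, n_sku),   # annual demand
    rng.lognormal(2.0, 0.8, n_sku),   # unit cost
    rng.uniform(0.2, 2.0, n_sku),     # coefficient of variation of demand
    rng.integers(1, 60, n_sku),       # lead time in days
])

# Standardise so no attribute dominates, then form a manageable number of
# groups; group-level policies (e.g., service targets) can then be set per cluster.
Z = StandardScaler().fit_transform(attrs)
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(Z)
for g in range(6):
    members = attrs[km.labels_ == g]
    print(f"group {g}: {len(members):5d} SKUs, "
          f"median demand {np.median(members[:, 0]):8.1f}, "
          f"median lead time {np.median(members[:, 3]):4.1f} days")
```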

