Similar Documents (20 results)
1.
Data mining applies traditional statistical tools as well as artificial intelligence algorithms to the analysis of large datasets. Data mining has proven very effective in many fields, including business. This paper reviews applications of data mining relevant to the service industry and demonstrates the primary business functions and data mining methods. A typical industry data mining process is described, analytic tools are reviewed, and major software tools are noted.

2.
Companies greatly benefit from knowing how problems with data quality influence the performance of segmentation techniques and which techniques are more robust to these problems than others. This study investigates the influence of problems with data accuracy – an important dimension of data quality – on three prominent segmentation techniques for direct marketing: RFM (recency, frequency, and monetary value) analysis, logistic regression, and decision trees. For the two real-life direct marketing data sets analyzed, the results demonstrate that (1) under optimal data accuracy, decision trees are preferred over RFM analysis and logistic regression; (2) the introduction of data accuracy problems deteriorates the performance of all three segmentation techniques; and (3) as data becomes less accurate, decision trees remain superior to logistic regression and RFM analysis. Overall, this study recommends the use of decision trees in the context of customer segmentation for direct marketing, even under the suspicion of data accuracy problems.
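
A minimal sketch, assuming synthetic data and additive Gaussian noise as a stand-in for data-accuracy problems, of how the robustness comparison between decision trees and logistic regression could be reproduced; the study's real data sets, its RFM benchmark, and its evaluation metrics are not recreated here.

```python
# Sketch: simulating data-accuracy problems to compare the robustness of
# decision trees and logistic regression. Synthetic data and Gaussian noise
# injection are assumptions, not the study's actual setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}

for noise in [0.0, 0.5, 1.0, 2.0]:          # increasing data-accuracy problems
    rng = np.random.default_rng(0)
    X_noisy = X_train + rng.normal(scale=noise, size=X_train.shape)
    scores = {name: m.fit(X_noisy, y_train).score(X_test, y_test)
              for name, m in models.items()}
    print(f"noise={noise:.1f}", scores)
```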

3.
Data mining techniques have numerous applications in credit scoring of customers in the banking field. One of the most popular data mining techniques is the classification method. Previous research has demonstrated that using feature selection (FS) algorithms and ensemble classifiers can improve banks' performance on credit scoring problems. In this domain, the main issue is the simultaneous, hybrid utilization of several FS and ensemble learning classification algorithms, together with their parameter settings, in order to achieve higher performance in the proposed model. Accordingly, the present paper develops a hybrid data mining model of feature selection and ensemble learning classification algorithms in three stages. The first stage deals with data gathering and pre-processing. In the second stage, four FS algorithms are employed: principal component analysis (PCA), genetic algorithm (GA), information gain ratio, and the relief attribute evaluation function. Here, the parameter settings of the FS methods are based on the classification accuracy obtained from the support vector machine (SVM) classification algorithm. After choosing the appropriate model for each selected feature set, the selected features are applied to the base and ensemble classification algorithms, and the best FS algorithm with its parameter settings is passed on to the modeling stage. In the third stage, the classification algorithms are applied to the dataset prepared by each FS algorithm. The results show that PCA is the best FS algorithm in the second stage, and that in the third stage the artificial neural network (ANN) adaptive boosting (AdaBoost) method achieves the highest classification accuracy. Ultimately, the paper verifies and proposes the hybrid model as an effective and robust model for credit scoring.
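
A sketch of how the three-stage structure could be prototyped with scikit-learn. The synthetic data, the candidate feature-selection settings, and the AdaBoost ensemble over decision stumps (standing in for the paper's ANN-AdaBoost model, since scikit-learn's MLPClassifier cannot serve as an AdaBoost base learner) are assumptions, not the paper's configuration.

```python
# Sketch of the three-stage structure: (1) data, (2) choose a feature-selection
# method by SVM cross-validation accuracy, (3) train an ensemble on the winner.
# The data set and parameter grid are placeholders, not the paper's.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stage 1: gathered and pre-processed data (synthetic stand-in for credit data).
X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           random_state=0)

# Stage 2: evaluate candidate FS settings by the accuracy of an SVM classifier.
candidates = {
    "PCA(10)": PCA(n_components=10),
    "PCA(15)": PCA(n_components=15),
    "InfoGain(10)": SelectKBest(mutual_info_classif, k=10),
}
scores = {name: cross_val_score(make_pipeline(StandardScaler(), fs, SVC()),
                                X, y, cv=5).mean()
          for name, fs in candidates.items()}
best = max(scores, key=scores.get)
print("FS selection accuracies:", scores, "-> best:", best)

# Stage 3: fit an ensemble classifier on the best-performing feature subset.
# AdaBoost over decision stumps stands in for the paper's ANN-AdaBoost model.
model = make_pipeline(StandardScaler(), candidates[best],
                      AdaBoostClassifier(n_estimators=200, random_state=0))
print("ensemble CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```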

4.
Successful product line design and development often require a balance of technical and market tradeoffs. Quantitative methods for optimizing product attribute levels using preference elicitation (e.g., conjoint) data are useful for many product types. However, products with substantial engineering content involve critical tradeoffs in the ability to achieve those desired attribute levels. Technical tradeoffs in product design must be made with an eye toward market consequences, particularly when heterogeneous market preferences make differentiation and strategic positioning critical to capturing a range of market segments and avoiding cannibalization. We present a unified methodology for product line optimization that coordinates positioning and design models to achieve realizable firm-level optima. The approach overcomes several shortcomings of prior product line optimization models by incorporating a general Bayesian account of consumer preference heterogeneity, managing attributes over a continuous domain to alleviate issues of combinatorial complexity, and avoiding solutions that are impossible to realize. The method is demonstrated for a line of dial-readout scales, using physical models and conjoint-based consumer choice data. The results show that the optimal number of products in the line is not necessarily equal to the number of market segments, that an optimal single product for a heterogeneous market differs from that for a homogeneous one, and that the representational form for consumer heterogeneity has a substantial impact on the design and profitability of the resulting optimal product line, even for the design of a single product. The method is managerially valuable because it yields product line solutions efficiently, accounting for marketing-based preference heterogeneity as well as the engineering-based constraints with which product attributes can be realized.
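
A toy illustration, under invented assumptions (simulated individual partworth draws, a logit choice rule, a quadratic engineering cost, a single continuous design attribute), of how heterogeneous preferences and engineering cost jointly determine line-level profit; the paper's coordinated positioning and design models are far richer than this.

```python
# Toy illustration of profit evaluation for a candidate product line under
# heterogeneous preferences. Partworth draws, the logit choice rule, and the
# quadratic cost are invented placeholders, not the paper's models.
import numpy as np

rng = np.random.default_rng(0)
n_consumers = 2000
# Heterogeneous tastes: each consumer weights "accuracy" and price differently.
beta_acc = rng.normal(1.0, 0.6, n_consumers)
beta_price = -np.abs(rng.normal(0.08, 0.03, n_consumers))

def expected_profit(line):
    """line: list of (accuracy_level, price) tuples for the product line."""
    utils = np.column_stack([beta_acc * a + beta_price * p for a, p in line])
    utils = np.column_stack([utils, np.zeros(n_consumers)])   # outside option
    shares = np.exp(utils) / np.exp(utils).sum(axis=1, keepdims=True)
    profit = 0.0
    for j, (a, p) in enumerate(line):
        unit_cost = 5.0 + 0.4 * a ** 2        # engineering cost rises with accuracy
        profit += shares[:, j].mean() * (p - unit_cost)
    return profit

# Grid search over a single continuous design attribute for a two-product line.
best = max(((a1, a2, expected_profit([(a1, 20.0), (a2, 35.0)]))
            for a1 in np.linspace(1, 6, 11) for a2 in np.linspace(1, 6, 11)),
           key=lambda t: t[2])
print("best accuracy levels and per-consumer profit:", best)
```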

5.
6.
The recency/frequency/monetary value (RFM) segmentation framework remains a mainstay in the retailing industry for quantifying consumer value. However, the RFM model does not consider the added value travelers ascribe to auxiliary services. We extend the RFM framework to an RFM-P model by considering the likelihood of purchasing ancillary services during travel (P). We propose four traveler groups based on the RFM-P model, using unique sales data provided by a Chinese airline. The four customer groups were compared through the lens of personal and scenario characteristics to estimate travelers' purchasing behaviour. The results help managers make market decisions and fill gaps left by consumer value theory and the RFM model in the retailing industry.
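
A minimal sketch of how an RFM score might be extended with an ancillary-purchase dimension P and travelers split into four groups; the column names, quintile scoring, and median-split grouping are assumptions, since the paper's exact RFM-P construction is not reproduced here.

```python
# Sketch: extending RFM scores with an ancillary-purchase propensity P and
# splitting travelers into four groups. Column names, quintile scoring and the
# median-split grouping are assumptions, not the paper's exact construction.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "days_since_last_trip": rng.integers(1, 400, 1000),   # recency
    "trips_per_year": rng.poisson(4, 1000),                # frequency
    "total_spend": rng.gamma(2.0, 800.0, 1000),            # monetary value
    "ancillary_purchases": rng.integers(0, 10, 1000),      # bags, seats, meals, ...
})

def score(s, reverse=False):
    labels = range(5, 0, -1) if reverse else range(1, 6)
    return pd.qcut(s.rank(method="first"), 5, labels=labels).astype(int)

df["R"] = score(df["days_since_last_trip"], reverse=True)  # recent = high score
df["F"] = score(df["trips_per_year"])
df["M"] = score(df["total_spend"])
df["P"] = score(df["ancillary_purchases"])                  # the added dimension

rfm = df[["R", "F", "M"]].mean(axis=1)
df["group"] = np.select(
    [(rfm >= rfm.median()) & (df["P"] >= df["P"].median()),
     (rfm >= rfm.median()) & (df["P"] < df["P"].median()),
     (rfm < rfm.median()) & (df["P"] >= df["P"].median())],
    ["high RFM / high P", "high RFM / low P", "low RFM / high P"],
    default="low RFM / low P")
print(df["group"].value_counts())
```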

7.
Banks increasingly need to target and acquire new customers, detect fraud in real time, and segment products through analysis of their customers. By doing so, they can serve their customers better and increase the effectiveness of the company. For this purpose, various data mining methods are used that enable the extraction of interesting, nontrivial, implicit, previously unknown, and potentially useful patterns or knowledge from huge amounts of data. Traditional data mining methods include classification rule tasks, for which a number of methods exist, for example the Random Forest or C4.5 algorithms. However, the accuracy of these methods drops significantly when some data in the databases is missing, and these methods are not always optimal for very large databases. The aim of our work is to verify a possible solution to these problems using an algorithm based on artificial ant colonies, which has been successful in other areas. We therefore tested its applicability and accuracy in marketing and business intelligence and compared it with the methods used so far. The experimental results showed that the presented algorithm is effective, robust, and suitable for processing very large files. It was also found that this algorithm outperforms the previously used algorithms in accuracy. The algorithm is easily implementable on different platforms and can be recommended for use in banking and business intelligence.
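
A toy, heavily simplified sketch of a pheromone-guided rule-construction loop in the spirit of ant-colony classification; the tiny categorical data set, the purity heuristic, and all parameters are invented, and this is not the authors' algorithm.

```python
# Toy sketch of a single-rule, ant-colony-style construction loop: ants
# assemble IF-THEN terms guided by pheromone and a purity heuristic, and rule
# quality reinforces the pheromone. Data set and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n, attrs, values = 500, 4, 3
X = rng.integers(0, values, (n, attrs))                 # tiny categorical table
y = ((X[:, 0] == 2) & (X[:, 1] != 0)).astype(int)       # hidden target concept

terms = [(a, v) for a in range(attrs) for v in range(values)]
tau = np.ones(len(terms))                               # pheromone per term

def covered(rule):
    mask = np.ones(n, dtype=bool)
    for a, v in rule:
        mask &= (X[:, a] == v)
    return mask

def quality(rule):
    mask = covered(rule)
    if mask.sum() == 0:
        return 0.0, 0
    pred = int(y[mask].mean() >= 0.5)                   # majority class of covered rows
    tp = ((y == pred) & mask).sum(); fn = ((y == pred) & ~mask).sum()
    fp = ((y != pred) & mask).sum(); tn = ((y != pred) & ~mask).sum()
    return (tp / (tp + fn)) * (tn / (tn + fp)), pred

best_rule, best_q, best_pred = [], 0.0, 0
for ant in range(200):
    rule, used = [], set()
    while len(used) < attrs:
        # Heuristic: class purity of each candidate term's covered subset.
        eta = np.array([max(y[X[:, a] == v].mean(), 1 - y[X[:, a] == v].mean())
                        if a not in used and (X[:, a] == v).any() else 0.0
                        for a, v in terms])
        probs = tau * eta
        if probs.sum() == 0:
            break
        idx = rng.choice(len(terms), p=probs / probs.sum())
        rule.append(terms[idx]); used.add(terms[idx][0])
        if covered(rule).sum() < 10:                    # stop before the rule gets too specific
            rule.pop(); break
    q, pred = quality(rule)
    if q > best_q:
        best_rule, best_q, best_pred = rule, q, pred
    for a, v in rule:                                   # reinforce terms used by this ant
        tau[terms.index((a, v))] *= (1 + q)
    tau /= tau.mean()                                   # keep pheromone bounded

print("best rule:", best_rule, "-> class", best_pred, f"(quality {best_q:.2f})")
```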

8.
Classification analysis is an important tool to support decision making in customer-centric applications such as proactively identifying churners or selecting responsive customers for direct-marketing campaigns. Whereas the development of novel classification algorithms is a popular avenue for research, corresponding advancements are rarely adopted in corporate practice. This lack of diffusion may be explained by a high degree of uncertainty regarding the superiority of novel classifiers over well-established counterparts in customer-centric settings. To overcome this obstacle, an empirical study is undertaken to assess the ability of several novel as well as traditional classifiers to form accurate predictions and effectively support decision making. The results provide strong evidence for the appropriateness of novel methods and indicate that they offer economic benefits under a variety of conditions. Therefore, an increase in the use of these procedures can be recommended.

9.
Consumer direct delivery of packages ordered over the Internet has grown at well over 25% per year over the past 10 years and now accounts for over $100 billion in sales in the U.S. alone. Retailers have rushed to capitalize on what has commonly been labeled multi-channel retailing, while logisticians have faced a challenge in devising efficient methods of delivering billions of packages to customer homes. Inefficient deliveries in this “last mile” of the supply chain have led to numerous business collapses as well as a substantial increase in delivery costs. We present a study which examines the effect of two factors (customer density and delivery window length) on the overall efficiency of the delivery route. Data are collected based on empirically derived settings from interviews with several practicing managers. Results provide insight for logistics and marketing managers who must balance customer desires for convenience with business desires for efficiency. The data show that offering a 3-hour delivery window is 30–45% more expensive than offering unattended delivery (a 9-hour delivery window). The results provide a tool for managers to address the tradeoffs between various settings for the independent variables (customer density and delivery window length) and the overall route efficiency.

10.
This paper expands prior work on the Sequential Binary Programming (SBP) algorithm as a framework for cost-sensitive classification. The field of cost-sensitive learning has provided a number of methods to adapt predictive data mining from engineering and hard science applications to those in commerce. This discussion empirically tests the theoretical limitations of classical cost-sensitive algorithms and outlines the conditions under which various methods (specifically SBP) should be implemented in favor of others.
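
SBP itself is not reproduced here; for context, the sketch below shows the classical minimum-expected-cost thresholding baseline that cost-sensitive studies typically compare against, with an invented cost matrix.

```python
# Sketch of a classical cost-sensitive baseline (minimum expected-cost
# thresholding), not of SBP itself. The cost matrix is an invented example.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

C_FP, C_FN = 1.0, 10.0          # missing a positive is 10x worse than a false alarm
X, y = make_classification(n_samples=4000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

p = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
threshold = C_FP / (C_FP + C_FN)             # predict positive when expected cost is lower
pred = (p >= threshold).astype(int)

cost = C_FP * ((pred == 1) & (y_te == 0)).sum() + C_FN * ((pred == 0) & (y_te == 1)).sum()
cost_default = C_FP * ((p >= 0.5) & (y_te == 0)).sum() + C_FN * ((p < 0.5) & (y_te == 1)).sum()
print(f"total cost: cost-sensitive threshold={cost:.0f}, default 0.5 cut-off={cost_default:.0f}")
```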

11.
Determining the customer satisfaction elements of retailing after-sales services has been well explored; however, increasing competition in this area demands investigation of the actual instrumentality of these elements for customer satisfaction. In the present research, we propose a framework for assessing the instrumentality of after-sales services on customer satisfaction. The Kano model and the SERVQUAL framework were used to categorize customer satisfaction elements. In addition, to address behavioral dissimilarities among customers, RFM clustering was used to analyse 243,180 customers of automobile after-sales services. Accordingly, a dissatisfaction decrement index and a satisfaction increment index were measured for every cluster separately. We identified a group of 21 quality elements and demonstrated their instrumentality on customer satisfaction, and the RFM clustering reveals the preferences and desires of the customers in each cluster. While earlier papers have identified the influential factors of after-sales services on customer satisfaction, this is the first time that the instrumentality of after-sales services has been identified. Accordingly, this study demonstrates how different after-sales service quality elements affect customer satisfaction, and the results can help companies allocate their resources more efficiently.
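
A small sketch of how the satisfaction increment and dissatisfaction decrement indices could be computed per cluster, assuming the common Berger-style better/worse coefficients over Kano category counts; the example counts are invented, and the paper may define the indices differently.

```python
# Sketch: satisfaction increment and dissatisfaction decrement indices per
# customer cluster, using Berger-style coefficients over Kano category counts.
# The example counts are invented; the paper may define the indices differently.
import pandas as pd

# Kano category counts (Attractive, One-dimensional, Must-be, Indifferent)
# for one quality element, tallied separately within each RFM cluster.
counts = pd.DataFrame(
    {"A": [40, 12], "O": [30, 25], "M": [10, 45], "I": [20, 18]},
    index=["cluster_1", "cluster_2"])

total = counts[["A", "O", "M", "I"]].sum(axis=1)
counts["satisfaction_increment"] = (counts["A"] + counts["O"]) / total
counts["dissatisfaction_decrement"] = -(counts["O"] + counts["M"]) / total
print(counts[["satisfaction_increment", "dissatisfaction_decrement"]])
```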

12.
Funding pressures amidst the slow economic recovery from the late-2000s recession have forced universities, as well as other not-for-profit organizations, to increase the volume and sophistication of their direct marketing activities. The efficiency of direct marketing strategies is linked to an organization's ability to effectively target individuals. In this paper, we present a finite-mixture model framework to segment the alumni population of a university in the midwestern United States. Much of the research on customer segmentation summarizes response data (e.g., purchase and contribution histories) via recency, frequency, and monetary value (RFM) statistics. Individuals sharing similar RFM characteristics are grouped together, the rationale being that the best predictor of future behavior is past behavior. Summary statistics such as RFM, however, introduce aggregation bias that masks the dynamics of purchase/contribution behavior. Accordingly, we implement latent-class segmentation models in which alumni are classified based on how an individual's contribution sequence compares to those of other individuals. The framework's capability to process contribution sequences, i.e., longitudinal data, provides fundamental new insights into donor contribution behavior, and provides a rigorous mechanism to infer and segment the population based on unobserved heterogeneities (as well as on other observable characteristics). Specifically, we analyze Markov mixture models to segment alumni based on contribution-behavior patterns, under the assumption of serially dependent contribution sequences, and use the expectation–maximization algorithm to obtain parameter estimates for each segment. Through an extensive empirical study, we highlight the substantive insights gained by processing the full contribution sequences and establish the presence of three distinct classes of alumni in the population, each with a discernible contribution pattern. The proposed framework, collectively, provides a basis to tailor direct marketing policies to optimize specific performance criteria (e.g., profits).
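
A compact sketch of fitting a mixture of first-order Markov chains to give/no-give contribution sequences with EM; the binary encoding, two segments, synthetic sequences, and random initialization are simplifications of the paper's latent-class framework.

```python
# Compact sketch of EM for a mixture of first-order Markov chains over binary
# yearly give/no-give sequences. Two segments, binary states, and synthetic
# data are simplifications of the paper's latent-class framework.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, p_first, trans_m, length=10):
    seqs = np.empty((n, length), dtype=int)
    seqs[:, 0] = rng.random(n) < p_first
    for t in range(1, length):
        seqs[:, t] = rng.random(n) < trans_m[seqs[:, t - 1], 1]
    return seqs

# Two synthetic donor segments: "loyal" and "sporadic" givers.
X = np.vstack([simulate(300, 0.8, np.array([[0.4, 0.6], [0.1, 0.9]])),
               simulate(700, 0.2, np.array([[0.9, 0.1], [0.6, 0.4]]))])

K, S = 2, 2
pi = np.full(K, 1.0 / K)                                 # segment sizes
init = rng.uniform(0.3, 0.7, (K, S)); init /= init.sum(1, keepdims=True)
trans = rng.uniform(0.3, 0.7, (K, S, S)); trans /= trans.sum(2, keepdims=True)

for _ in range(100):
    # E-step: per-sequence log-likelihood under each segment's Markov chain.
    loglik = np.stack([np.log(init[k, X[:, 0]]) +
                       np.log(trans[k, X[:, :-1], X[:, 1:]]).sum(axis=1)
                       for k in range(K)], axis=1) + np.log(pi)
    resp = np.exp(loglik - loglik.max(1, keepdims=True))
    resp /= resp.sum(1, keepdims=True)
    # M-step: re-estimate sizes, initial-state and transition probabilities.
    pi = resp.mean(0)
    for k in range(K):
        init[k] = np.bincount(X[:, 0], weights=resp[:, k], minlength=S)
        init[k] /= init[k].sum()
        for s in range(S):
            w = resp[:, k, None] * (X[:, :-1] == s)
            trans[k, s] = [(w * (X[:, 1:] == s2)).sum() for s2 in range(S)]
            trans[k, s] /= trans[k, s].sum()

print("segment sizes:", np.round(pi, 2))
print("transition matrices:\n", np.round(trans, 2))
```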

13.
The purpose of this paper is to provide a value co-creation management framework for the banking industry using data analysis. Moreover, a multi-channel segmentation approach is developed to identify customer segments based on the use of each channel. Managing value co-creation can be defined as determining the channels a customer may use and the kinds of encounters that lie in these channels, since different types of encounters have different impacts on the customer. The model is built on the basis of the related literature and collaboration with a customer relationship manager in a large bank in Iran. A multi-channel segmentation model is then developed based on the RFM variables of the five banking channels for each customer. Next, about 11,000 customers of the bank are segmented by comparing the k-means and DBSCAN algorithms. Finally, by mapping the customer segments onto the value co-creation framework, three general groups are identified based on the type of encounter that lies in each channel, and twenty-seven smaller groups are recognized based on the desirability of the customers' use of the channels.
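
A minimal sketch of the k-means versus DBSCAN comparison on standardized per-channel RFM features; the five channel names, the synthetic blob data, and the silhouette-based comparison are assumptions rather than the paper's data or evaluation.

```python
# Sketch: comparing k-means and DBSCAN on standardized per-channel RFM
# features. Channel names, synthetic data and the silhouette comparison are
# assumptions, not the paper's actual data or evaluation.
import pandas as pd
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

channels = ["branch", "atm", "pos", "internet", "mobile"]
cols = [f"{c}_{m}" for c in channels for m in ("recency", "frequency", "monetary")]
X_raw, _ = make_blobs(n_samples=2000, n_features=len(cols), centers=4,
                      cluster_std=2.0, random_state=0)   # stand-in for real usage data
features = pd.DataFrame(X_raw, columns=cols)

X = StandardScaler().fit_transform(features)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
db = DBSCAN(eps=2.0, min_samples=15).fit(X)

print("k-means silhouette:", round(silhouette_score(X, km.labels_), 3))
core = db.labels_ != -1                                   # exclude DBSCAN noise points
if core.any() and len(set(db.labels_[core])) > 1:
    print("DBSCAN silhouette:", round(silhouette_score(X[core], db.labels_[core]), 3))
print("DBSCAN clusters found:", len(set(db.labels_)) - (1 if -1 in db.labels_ else 0))
```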

14.
This paper first analyses the block adaptive quantization (BAQ) algorithm commonly used for synthetic aperture radar (SAR) raw data, and then discusses in detail two transform-domain coding algorithms based on block adaptive quantization: FFT-based block adaptive quantization (FFT-BAQ) and wavelet-transform-based block adaptive quantization (WT-BAQ). The images reconstructed from data compressed by these two algorithms are compared with the images obtained from plain block adaptive quantization. The results show that transform-domain coding techniques can improve the compression performance of SAR raw data.
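
A minimal sketch of plain 2-bit BAQ, assuming a block size of 128 samples and the standard Lloyd-Max levels for unit-variance Gaussian data; FFT-BAQ and WT-BAQ would apply the same per-block quantization to transform-domain coefficients instead of the raw samples.

```python
# Minimal sketch of 2-bit block adaptive quantization (BAQ) of SAR raw data:
# estimate the standard deviation per block, normalize, and quantize with
# Lloyd-Max levels for unit-variance Gaussian data. Block size and exact
# quantizer levels are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(0, 50.0, 4096).astype(np.float32)     # stand-in for one SAR echo line

BLOCK = 128
THRESHOLDS = np.array([-0.9816, 0.0, 0.9816])          # 2-bit Lloyd-Max decision levels
LEVELS = np.array([-1.510, -0.4528, 0.4528, 1.510])    # reconstruction levels

codes, sigmas = [], []
for block in raw.reshape(-1, BLOCK):
    sigma = block.std()                                 # side information kept per block
    codes.append(np.digitize(block / sigma, THRESHOLDS))
    sigmas.append(sigma)

# Decompression: map 2-bit codes back through the levels and rescale per block.
recon = np.concatenate([LEVELS[c] * s for c, s in zip(codes, sigmas)])
snr = 10 * np.log10(np.mean(raw ** 2) / np.mean((raw - recon) ** 2))
print(f"2-bit BAQ reconstruction SNR: {snr:.1f} dB")
```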

15.
Customer relationship management (CRM) is not only a management philosophy but also a new management mechanism aimed at improving the relationship between an enterprise and its customers, as well as a class of management software and technology. Implementing customer relationship management is an important way for logistics enterprises to acquire customers and strengthen their market competitiveness. Effective customer relationship management is inseparable from customer data analysis, and data mining is the basic technique and method for such analysis. Data mining technology thus provides a strong technical guarantee for the success of CRM in logistics enterprises.

16.
With the increasing use of point-of-sale terminals at stores, banks are seeking a bigger share of such financial exchanges. An important problem for banks is to identify the most profitable professions. For this purpose, a new application for guild segmentation is proposed, using recency, frequency, and monetary (RFM)-based clustering and a customer lifetime value analysis containing two extensions of RFM. The methodology is applied to real data from an Iranian state bank. The findings reveal that this methodology is applicable in practice and could be very effective for the managers of other banks.

17.
Based on the difference in weighting granularity, this paper introduces the principles of the per-RB-MRT and full-BW-EBB algorithms and gives a theoretical analysis of their characteristics and performance differences. A TD-LTE uplink/downlink simulation platform conforming to the 3GPP specifications is constructed, and the performance of the two algorithms in the presence of channel estimation errors (including channel estimation accuracy and channel information delay) is analysed theoretically and studied by simulation. The simulation results show that the per-RB-MRT algorithm approaches its optimal theoretical performance when the uplink channel estimates agree well with the actual downlink channel values, whereas the full-BW-EBB algorithm exploits the relatively stable spatial correlation of the channel and therefore maintains robust performance in poorer channel environments. These simulation results provide an important reference for algorithm selection in practical scenarios.
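
A toy numerical contrast between per-RB MRT weights (one weight vector per resource block, taken from that block's channel estimate) and a single full-bandwidth EBB weight taken from the averaged channel covariance; the antenna count, channel model, and estimation-error model are invented and far simpler than the 3GPP-compliant simulation platform described above.

```python
# Toy contrast of per-RB MRT (one weight per resource block from that RB's
# channel estimate) vs full-bandwidth EBB (one weight from the averaged
# channel covariance). Antenna count, channel and error models are invented.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rb, err = 4, 50, 0.5                        # antennas, resource blocks, est. error

h = (rng.normal(size=(n_rb, n_tx)) + 1j * rng.normal(size=(n_rb, n_tx))) / np.sqrt(2)
noise = (rng.normal(size=h.shape) + 1j * rng.normal(size=h.shape)) / np.sqrt(2)
h_est = np.sqrt(1 - err ** 2) * h + err * noise     # imperfect channel estimates

# per-RB-MRT: match each resource block's estimated channel individually.
w_mrt = h_est.conj() / np.linalg.norm(h_est, axis=1, keepdims=True)

# full-BW-EBB: principal eigenvector of the wideband averaged covariance.
R = (h_est.conj().T @ h_est) / n_rb
w_ebb = np.linalg.eigh(R)[1][:, -1]

gain_mrt = np.mean(np.abs(np.sum(h * w_mrt, axis=1)) ** 2)
gain_ebb = np.mean(np.abs(h @ w_ebb) ** 2)
print(f"estimation error {err}: MRT gain {gain_mrt:.2f}, EBB gain {gain_ebb:.2f}")
```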

18.
In Finland, the Finnish Association for Swimming Instruction and Life Saving (SUH) and Statistics Finland (SF) both provide nationwide data on unintentional drowning. The SUH database relies on rapid reporting from a newspaper clipping service and additional local police information, whereas the SF database relies on the later release of the death certificate information, which is based on extensive medico-legal investigation. The aim of the study was to explore the main differences between the SUH and SF databases for drowning and to evaluate the capacity of the former to characterize drowning events in Finland from 1998 to 2000. Computerized files of death certificates tabulated by SF were linked with the SUH database by deterministic methods. The SF and SUH databases allowed the identification of 704 and 567 unintentional drownings, respectively, giving unintentional drowning rates of 4.5 and 3.6 per 100,000 per year. Of the 704 drownings described by SF, 418 (59.4%) were also found in the SUH database. The SUH database markedly underreported drowning fatalities in certain settings, such as bath, ditch and swimming pool drownings; fall- and land-traffic-related drownings; and drownings occurring in South Finland. The narrative text of SUH drownings contributed limited information to characterize the drowning events. It was concluded that the newspaper-based SUH data provide more timely data on individual drownings but are not representative of all drownings. Conversely, the SF vital statistics data are more accurate but may take up to 2 years to become available. Both SUH and SF data provide little detailed information on drowning events. A multidisciplinary national surveillance system for drowning is necessary to provide more accurate and timely drowning data, analyse risk factors and design follow-up studies for developing and monitoring prevention strategies.

19.
Based on aggregated population census data, this paper combines demographic analysis methods with genetic optimization modelling to propose a feasibility analysis method for population and family planning target programming. Using aggregated census data as an example, the computational and analytical process of the proposed feasibility analysis method is demonstrated.

20.
Within stream restoration practice there has been little use of formal decision analysis methods for evaluating tradeoffs in selecting restoration sites and design alternatives. Restoration planning suffers from poorly defined objectives, confusion of objectives and means, and a lack of consideration of tradeoffs. Multicriteria decision analysis (MCDA) methods have the potential to improve restoration decision making by quantifying non-economic objectives, communicating tradeoffs, facilitating consistent and explicit valuation, and focusing negotiation on ultimate objectives. To explore the potential usefulness of MCDA, we first review restoration practices and define the characteristics of projects that are good candidates for MCDA. We also present two case studies. The first study is a prioritization of stream reaches for restoration that illustrates how value judgments can affect such decisions. The second study addresses the proposed removal of the Ballville Dam on the Sandusky River in Ohio. An important challenge in the dam removal decision is the linking of habitat improvements to changes in species populations and ecological services that people value. The analysis shows how MCDA can assist decision making by clarifying tradeoffs, in this case by showing that the key issues are conflicts among ecological criteria, not all of which are improved by restoration.
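
A minimal weighted-sum MCDA sketch showing how different stakeholder weight sets change the ranking of alternatives; the criteria, scores, and weights are invented and do not come from the reach-prioritization or Ballville Dam case studies.

```python
# Minimal weighted-sum MCDA sketch: normalized criterion scores times
# stakeholder weights, showing how value judgments shift the ranking.
# Criteria, scores and weights are invented, not those of the case studies.
import pandas as pd

scores = pd.DataFrame(
    {"fish_habitat": [0.9, 0.4, 0.6],
     "sediment_risk": [0.2, 0.8, 0.5],     # higher = worse, so weighted negatively
     "recreation":    [0.7, 0.6, 0.3],
     "cost_millions": [12.0, 4.0, 7.0]},
    index=["remove dam", "partial breach", "no action"])

norm = (scores - scores.min()) / (scores.max() - scores.min())
weight_sets = {"ecology-first": {"fish_habitat": 0.5, "sediment_risk": -0.2,
                                 "recreation": 0.1, "cost_millions": -0.2},
               "cost-first":    {"fish_habitat": 0.2, "sediment_risk": -0.1,
                                 "recreation": 0.1, "cost_millions": -0.6}}

for name, w in weight_sets.items():
    total = sum(norm[c] * w[c] for c in w)
    print(name, "->", total.sort_values(ascending=False).to_dict())
```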
