Similar Literature
20 similar documents found
1.
New Advances in Research on Key Techniques of Personal Credit Scoring   (Cited: 1; self-citations: 0, citations by others: 1)
This paper surveys, from a systems-theory perspective, the frontier issues in the development of personal credit scoring. The latest research on key credit-scoring techniques is classified in detail and comprehensively compared along three dimensions: data preprocessing, screening of the indicator system, and model design. On this basis, the paper identifies the outstanding difficulties in personal credit scoring research and its future directions.
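As an illustration of the three stages the survey distinguishes (data preprocessing, indicator screening, model design), the following minimal sketch chains them with scikit-learn. The file name, column names and chosen hyperparameters are hypothetical; this is not the survey's own pipeline.

```python
# Minimal credit-scoring pipeline sketch: preprocessing -> indicator screening -> model design.
# Assumes a hypothetical CSV of numeric applicant features plus a binary "default" label.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("applicants.csv")                     # hypothetical file
X, y = df.drop(columns=["default"]), df["default"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

scorecard = Pipeline([
    ("impute", SimpleImputer(strategy="median")),      # data preprocessing
    ("scale", StandardScaler()),
    ("screen", SelectKBest(f_classif, k=10)),          # indicator screening
    ("model", LogisticRegression(max_iter=1000)),      # model design
])
scorecard.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, scorecard.predict_proba(X_te)[:, 1]))
```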

2.
Design and Application of a Points-Based Customer Tiering Model for Securities Brokerages   (Cited: 1; self-citations: 1, citations by others: 1)
Services in China's securities industry are highly homogeneous and customer loyalty to brokerage brands is low; effectively measuring customer value and continuously tracking how it changes is therefore a prerequisite for brokerages to offer differentiated services and raise customer loyalty. By studying customer-tiering methods for brokerages, this paper points out the shortcomings of traditional customer classification schemes and proposes a customer-tiering model based on customer points. Real trading data from the customers of four branch offices are used for an empirical analysis, which further demonstrates the distinctive advantages of points-based tiering in quantifying customer value, identifying the most valuable customers, marketing to new customers and serving existing ones, thereby providing brokerages with a new tool for improving customer relationship management.
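The abstract does not disclose the scoring formula, so the sketch below is a hedged stand-in only: it assigns hypothetical points from a few trading metrics and cuts customers into tiers by quantile. The weights, columns and tier labels are invented, not the paper's model.

```python
# Hypothetical points-based customer tiering sketch (weights, columns and cut-offs are assumptions).
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "assets":      [2e6, 5e5, 8e4, 3e6, 1.2e5, 6e5],    # account assets (CNY)
    "commission":  [1.5e4, 4e3, 6e2, 2.2e4, 9e2, 5e3],  # annual commission paid
    "tenure_yrs":  [6, 2, 1, 8, 1, 3],
})

# Score each dimension on a 0-100 scale by percentile rank, then combine with assumed weights.
for col in ["assets", "commission", "tenure_yrs"]:
    customers[col + "_pts"] = customers[col].rank(pct=True) * 100
customers["points"] = (0.5 * customers["assets_pts"]
                       + 0.4 * customers["commission_pts"]
                       + 0.1 * customers["tenure_yrs_pts"])

# Split into tiers by points quantile: top 20% "diamond", next 30% "gold", rest "standard".
customers["tier"] = pd.qcut(customers["points"], q=[0, 0.5, 0.8, 1.0],
                            labels=["standard", "gold", "diamond"])
print(customers[["customer_id", "points", "tier"]])
```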

3.
With the development of computer and network information technology, economies of scale have greatly reduced the cost of attending to individual customers. In an Internet era where the "long tail" theory is increasingly applicable, the banking industry should re-examine its past management principles and methods, rebuild its customer hierarchy and adopt targeted marketing measures.

4.
High-quality data are the foundation on which commercial banks correctly assess the credit quality of their customers. Using the principles and methods of fuzzy mathematics, this paper builds a multi-level fuzzy comprehensive evaluation model of data quality and applies it to the data in a loan-customer database. The method has important practical significance for establishing and implementing customer credit evaluation systems in commercial banks.
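For readers unfamiliar with the technique, here is a minimal single-level fuzzy comprehensive evaluation sketch. The criteria, weights and membership degrees are invented for illustration and do not reproduce the paper's multi-level model.

```python
# Single-level fuzzy comprehensive evaluation sketch for data quality.
# Weights and membership matrix are illustrative assumptions.
import numpy as np

criteria = ["completeness", "accuracy", "consistency", "timeliness"]
grades   = ["excellent", "good", "fair", "poor"]

# Weight vector over criteria (sums to 1).
w = np.array([0.35, 0.30, 0.20, 0.15])

# Membership matrix R: row i gives the degree to which criterion i
# belongs to each quality grade (each row sums to 1).
R = np.array([
    [0.5, 0.3, 0.2, 0.0],   # completeness
    [0.4, 0.4, 0.1, 0.1],   # accuracy
    [0.2, 0.5, 0.2, 0.1],   # consistency
    [0.3, 0.3, 0.3, 0.1],   # timeliness
])

# Weighted-average composition B = w . R, then pick the grade with maximum membership.
B = w @ R
print(dict(zip(grades, B.round(3))))
print("overall data quality:", grades[int(B.argmax())])
```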

5.
    
This paper contributes to the empirical literature on Islamic finance by investigating the features of Islamic and conventional banks in Gulf Cooperation Council (GCC) countries over the period 2003–2010. We use parametric and non-parametric classification models (linear discriminant analysis, logistic regression, classification trees and neural networks) to examine whether financial ratios can be used to distinguish between Islamic and conventional banks. Univariate results show that Islamic banks are, on average, more profitable, more liquid, better capitalized, and have lower credit risk than conventional banks. We also find that Islamic banks are, on average, less involved in off-balance sheet activities and have more operating leverage than their conventional peers. Results from classification models show that the two types of banks may be differentiated in terms of credit and insolvency risk, operating leverage and off-balance sheet activities, but not in terms of profitability and liquidity. More interestingly, we find that the recent global financial crisis had a negative impact on the profitability of both Islamic and conventional banks, although with a time shift. Finally, results show that logistic regression obtained slightly higher classification accuracies than the other models.
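A hedged sketch of the kind of classifier comparison described above, using scikit-learn stand-ins (a decision tree and a multilayer perceptron take the place of the paper's classification tree and neural network). The financial-ratio dataset and its columns are hypothetical.

```python
# Compare four classifiers on bank financial ratios (hypothetical data layout):
# label 1 = Islamic bank, 0 = conventional bank.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

df = pd.read_csv("gcc_bank_ratios.csv")        # hypothetical file of financial ratios
X, y = df.drop(columns=["is_islamic"]), df["is_islamic"]

models = {
    "LDA":       LinearDiscriminantAnalysis(),
    "Logit":     LogisticRegression(max_iter=1000),
    "Tree":      DecisionTreeClassifier(max_depth=4, random_state=0),
    "NeuralNet": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy = {acc:.3f}")
```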

6.
Inspired by Alexander and Dakos (2020), we shed more light on the adequacy of data in the cryptocurrency literature by analysing the scaling properties and underlying processes of the main cryptocurrency databases (Coinmarketcap, Coingecko, BraveNewCoin and Cryptocompare) and exchange platforms (Coinbase, Bitstamp, Bittrex, Cexio and Exmo). Our results show that coin-ranking sites, such as Coinmarketcap, Coingecko and BraveNewCoin (i) include most of the cryptocurrency trading activity and (ii) are essentially characterised by the same underlying processes as the main exchange platforms (Coinbase and Bitstamp) and alternative coin-ranking sites (Cryptocompare), regardless of the possible issues arising from the aggregation of different exchanges to compute a unique cryptocurrency price. Therefore, we state that these databases are appropriate to conduct research. At any rate, we observe that all the databases analysed in this paper show the same underlying process for most liquid cryptocurrencies; consequently, scholars could use any of them for their studies, as long as they consider the different trading activity included by each database. This result is supported by an empirical analysis focused on weak-form market efficiency, since we report the same degree of efficiency regardless of the database and exchange platform. Nevertheless, we recognise the need for further research, given the gap in the literature and the black-box method used by coin-ranking sites to compute a unique cryptocurrency price.
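The weak-form efficiency comparison mentioned above can be approximated with a simple check for serial correlation in returns. The sketch below is illustrative only: it simulates a price series as a stand-in for data pulled from a coin-ranking site and is not the paper's procedure.

```python
# Weak-form efficiency sketch: check daily log returns for serial dependence.
# The price series is simulated as a stand-in for data from a coin-ranking site.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0, 0.02, 1000))))  # random-walk stand-in
returns = np.log(prices).diff().dropna()

# Under weak-form efficiency, returns should show no meaningful serial correlation.
for lag in (1, 2, 5, 10):
    print(f"lag {lag:>2} autocorrelation: {returns.autocorr(lag):+.3f}")
```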

7.
Detecting Accounting Information Distortion in Listed Companies Using a Classification and Regression Tree Algorithm   (Cited: 1; self-citations: 0, citations by others: 1)
A classification and regression tree (CART) model built on 26 financial variables is used to detect accounting information distortion. The results show that the model correctly identifies more than 80% of firms with distorted accounting information while keeping the type II error rate below 20%. The empirical analysis also finds that firms whose retained earnings are less than 2% of total assets are particularly prone to accounting information distortion. Finally, the authors validate this result on eight years of data, showing that its detection power is excellent.
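A minimal CART sketch in scikit-learn that mirrors this setup only in spirit: the file, the financial variables and the label are hypothetical stand-ins for the paper's 26 variables, and no result of the paper is reproduced.

```python
# CART sketch for flagging distorted accounting information.
# The labelled dataset and its columns are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

df = pd.read_csv("listed_company_financials.csv")           # hypothetical file
X, y = df.drop(columns=["distorted"]), df["distorted"]      # 1 = distorted accounting information
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

cart = DecisionTreeClassifier(criterion="gini", max_depth=5, min_samples_leaf=20, random_state=0)
cart.fit(X_tr, y_tr)

# Type II error here = distorted firms classified as clean (false negatives).
tn, fp, fn, tp = confusion_matrix(y_te, cart.predict(X_te)).ravel()
print("detection rate:", tp / (tp + fn), " type II error rate:", fn / (tp + fn))
print(export_text(cart, feature_names=list(X.columns)))     # inspect the fitted splitting rules
```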

8.
Using a unique database of daily trading activity, the present study examines the ability of active Australian equity managers to earn superior risk‐adjusted returns. We find evidence of superior trade performance, where performance is a function of stock size. Our findings indicate that active equity managers are able to successfully exploit private information more readily in stocks ranked 101–150 by market‐cap, where the degree of analyst coverage, information flows and market efficiency are lower than for large‐cap stocks. We also find evidence of manager specialization. Our evidence provides further support of the value of active investment management in Australian equities.

9.
The present paper questions the financial efficiency of the most used market portfolio proxies in Spain and Mexico (IBEX35 and IPC) in order to determine whether these can be considered proper market portfolio proxies. The paper asks whether they can be used as "neutrals", following the Black & Litterman (1992) proposals in portfolio management. For this purpose, two discrete event simulations that use the Markowitz-Tobin-Sharpe-Lintner model (Markowitz, 1987, p.5) are performed with monthly data of the stock members of these indices in a February 2001 to December 2010 time window. The results are compared using the Sharpe ratio (Sharpe, 1966) and show that the equilibrium assumptions in the market do not hold, leading to the conclusion that these market portfolio proxies are inefficient.
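As a small, hedged illustration of the comparison criterion, the following sketch computes monthly Sharpe ratios for an equal-weight "index" proxy and a Markowitz tangency-style portfolio built from simulated constituent returns. It is not the paper's discrete event simulation; the asset count, return process and risk-free rate are invented.

```python
# Sharpe-ratio comparison sketch: equal-weight index proxy vs. Markowitz tangency portfolio,
# on simulated monthly constituent returns (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_months, rf = 10, 120, 0.003              # rf = assumed monthly risk-free rate
rets = rng.normal(0.008, 0.05, size=(n_months, n_assets))

mu, cov = rets.mean(axis=0), np.cov(rets, rowvar=False)

# Tangency weights: proportional to inverse-covariance times excess mean returns.
w_tan = np.linalg.solve(cov, mu - rf)
w_tan = w_tan / w_tan.sum()
w_idx = np.full(n_assets, 1.0 / n_assets)            # equal-weight stand-in for the index

def sharpe(weights):
    port = rets @ weights
    return (port.mean() - rf) / port.std(ddof=1)

print("index proxy Sharpe:       ", round(sharpe(w_idx), 3))
print("tangency portfolio Sharpe:", round(sharpe(w_tan), 3))
```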

10.
    
The aim of this paper is to compare several predictive models that combine feature selection techniques with data mining classifiers in the context of credit risk assessment, in terms of accuracy, sensitivity and specificity statistics. The t‐statistic, the Bhattacharyya statistic, the area between the receiver operating characteristic curve and the random classifier, the Wilcoxon statistic, relative entropy, and genetic algorithms were used for the feature selection task. The selected features are used to train the support vector machine (SVM) classifier, a backpropagation neural network, a radial basis function neural network, linear discriminant analysis and a naive Bayes classifier. Results from three datasets using a 10‐fold cross‐validation technique showed that the SVM provides the best accuracy under all feature selection techniques adopted in the study for all three datasets. Therefore, the SVM is an attractive classifier to be used in real applications for bankruptcy prediction in corporate finance and financial risk management in financial institutions. In addition, we found that our best results are superior to earlier studies on the same datasets.
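One filter-plus-classifier combination of the kind compared above can be sketched as follows: a univariate F-test filter (a stand-in for the criteria listed, not one of them) feeding an SVM under 10-fold cross-validation. The credit dataset and its columns are hypothetical.

```python
# Filter-based feature selection + SVM under 10-fold cross-validation (illustrative sketch).
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

df = pd.read_csv("credit_dataset.csv")             # hypothetical credit-risk dataset
X, y = df.drop(columns=["default"]), df["default"]

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=15)),      # filter stand-in for t-test / Wilcoxon / etc.
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])
scores = cross_validate(clf, X, y, cv=10, scoring=["accuracy", "recall", "precision"])
print({k: v.mean().round(3) for k, v in scores.items() if k.startswith("test_")})
```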

11.
    
We propose a combined method for bankruptcy prediction based on fuzzy set qualitative comparative analysis (fsQCA) and convolutional neural networks (CNN). CNNs are currently being applied in various fields, and in some areas provide higher performance than traditional models. In our proposed method, a CNN uses variables calibrated as fuzzy sets to improve prediction accuracy. In addition, there are no published studies on the effect of feature selection at the input level of convolutional neural networks. Therefore, this study compares four well-known feature selection methods used in financial distress prediction (t-test, stepwise discriminant analysis, stepwise logistic regression and partial least squares discriminant analysis) to investigate their effect on classification performance. The results show that fuzzy convolutional neural networks (FCNN) lead to better performance than traditional methods.
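The fuzzy-calibration step that precedes the CNN can be illustrated with a Ragin-style direct calibration, which maps a raw variable to a membership score in [0, 1] via three anchors. This is a generic sketch under invented thresholds, not the paper's calibration or its network.

```python
# Direct calibration of a raw variable into fuzzy-set membership (Ragin-style),
# the kind of preprocessing that would feed the CNN; thresholds are invented.
import numpy as np

def calibrate(x, full_non, crossover, full_in):
    """Map raw values to [0, 1] membership via log-odds anchored at
    -3 (full non-membership), 0 (crossover) and +3 (full membership)."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x <= crossover,
        3.0 * (x - crossover) / (crossover - full_non),
        3.0 * (x - crossover) / (full_in - crossover),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

roa = np.array([-0.15, -0.02, 0.01, 0.05, 0.12])    # invented return-on-assets values
print(calibrate(roa, full_non=-0.10, crossover=0.0, full_in=0.10).round(3))
```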

12.
Investor sentiment is widely recognized as the major determinant of cryptocurrency prices. Although earlier research has revealed the influence of investor sentiment on cryptocurrency prices, it has not yet generated cohesive empirical findings on an important question: how effective is investor sentiment in predicting cryptocurrency prices? To address this gap, we propose a novel prediction model based on the Bitcoin Misery Index, which uses cryptocurrency trading data rather than judgments from individuals who are not Bitcoin investors, together with bagged support vector regression (BSVR), to forecast Bitcoin prices. The empirical analysis covers the period between March 2018 and May 2022. The results of this study suggest that adding the sentiment index significantly enhances the predictive performance of BSVR. Moreover, the proposed prediction system, enhanced with an automatic feature selection component, outperforms state-of-the-art methods for predicting cryptocurrency prices over the next 30 days.
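A hedged sketch of bagged support vector regression with a sentiment feature. The data file, column names, lag choices and the 30-day horizon handling are assumptions, not the paper's pipeline; the `estimator` keyword assumes scikit-learn 1.2 or later (older versions call it `base_estimator`).

```python
# Bagged SVR (BSVR) sketch: predict the Bitcoin price 30 days ahead from lagged price
# and a sentiment index. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import BaggingRegressor
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("btc_daily.csv")                    # hypothetical: columns "price", "sentiment"
df["target"] = df["price"].shift(-30)                # price 30 days ahead
df["price_lag1"] = df["price"].shift(1)
df = df.dropna()

X = df[["price", "price_lag1", "sentiment"]]
y = df["target"]
split = int(len(df) * 0.8)                           # chronological train/test split

bsvr = make_pipeline(
    StandardScaler(),
    BaggingRegressor(estimator=SVR(kernel="rbf", C=10.0), n_estimators=25, random_state=0),
)
bsvr.fit(X.iloc[:split], y.iloc[:split])
print("test MAE:", mean_absolute_error(y.iloc[split:], bsvr.predict(X.iloc[split:])))
```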

13.
This article looks at the role that information sharing plays in supporting new models of public service delivery. It sets out the barriers to information sharing, attempts to overcome them and considers some of the factors involved in shaping a new direction for information sharing, such as changing public expectations and the rapidly changing regulatory environment.

14.
This paper investigates the out-of-sample predictability of monthly market as well as size, value, and momentum premiums. We use a sample from each of the US and the Swiss stock markets between 1989 and 2007. Using the Swiss sample provides an important new perspective as the repeated evaluation of the same (US) data set leads to data mining problems. To avoid data mining in our predictability study, we test both statistical significance and robustness in the two samples. Our key results are as follows. We find no robust indication that the market premium is predictable, which is also true for the momentum and value premiums. It cannot be excluded that the results from the US may be caused by data mining in light of the results from the Swiss sample. However, the size premium seems to be somewhat predictable, due to the credit spread. We theorize that there are three possible reasons for this rare evidence for predictability. First, predictability may have disappeared over the last decade, as academic research made the respective information public. Second, predictability seems, as we demonstrate, not to be robust to the choice of methodology. Third, robustness tests in the Swiss sample reveal that many of the supposedly statistically significant interrelations from the US sample may be attributed to randomness, which, in that case, would be data mining. Therefore, we think that future discussions of predictability should address the issue of data mining by applying robustness tests.

15.
The present study, based on data for delisted and active corporations in the Australian materials industry, is an attempt to develop a systematic way of selecting corporate failure‐related features. We empirically tested the proposed procedure using three datasets. The first dataset contains 82 financial economic factors from the corporations' financial statements. The second dataset comprises 73 relevant financial ratios, which either directly or indirectly measure a corporation's propensity to fail, and are derived from the first dataset. The third dataset is a parsimonious dataset obtained by applying a combination of a filter and a wrapper to preprocess the first dataset. The robustness of this preprocessed dataset is tested by comparing its performance with the first and second datasets in two statistical (logistic regression and naïve Bayes) and two machine learning (decision tree, neural network) classes of prediction models. Tests of prediction accuracy and reliability, using computational (ROC curve, AUC) and statistical (Cochran's Q statistic) criteria, show that the third dataset outperforms the other two datasets in all four prediction models, achieving accuracies ranging from 81 per cent to 84 per cent.
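The filter-plus-wrapper preprocessing can be sketched as below: an F-test filter followed by recursive feature elimination wrapped around logistic regression. This is a generic stand-in under stated assumptions, not the study's exact procedure; the 82-factor dataset and its label are hypothetical.

```python
# Filter + wrapper feature selection sketch for corporate failure prediction.
# The dataset of financial factors and the "failed" label are hypothetical.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

df = pd.read_csv("materials_corporations.csv")      # hypothetical file
X, y = df.drop(columns=["failed"]), df["failed"]

selector = Pipeline([
    ("scale", StandardScaler()),
    ("filter", SelectKBest(f_classif, k=30)),                                       # filter stage
    ("wrapper", RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)),   # wrapper stage
    ("model", LogisticRegression(max_iter=1000)),
])
print("10-fold AUC:", cross_val_score(selector, X, y, cv=10, scoring="roc_auc").mean().round(3))
```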

16.
eXtensible Business Reporting Language (XBRL) is a language for the electronic communication of business and financial data which is revolutionizing business reporting around the world. It is a tool to bridge potential language barriers and unify financial reporting. This has appeal to foreign investors, among others, who can rely on information in XBRL‐tagged financial reports to make investment decisions without having to translate financial statements from the local language. In 2008, Israel required most public companies to adopt International Financial Reporting Standards (IFRS) for financial reporting and to use an XBRL‐tagged reporting format, as part of an aggressive effort to make its capital markets more transparent and attractive for foreign investors. In this paper, we study all Israeli public companies and analyze the accuracy and reliability of their XBRL‐tagged financial statements that are available on MAGNA, the Israel Securities Authority's electronic system. We describe the process by which the XBRL‐based data were collected and reported. We document, categorize, and analyze deficiencies in the XBRL‐tagged filings, and inconsistencies between them and the Hebrew‐based annual reports. We observe pervasive data entry errors resulting in inaccurate XBRL‐generated financial reports, which went undetected for over one year. Further, first-year XBRL reporting (in conjunction with IFRS adoption) did not increase foreign investment in the Israeli capital markets. This analysis allows us to better understand the benefits and challenges of the adoption of XBRL.

17.
A hotly debated question in finance is whether the higher stock returns under Democratic presidencies relative to Republican presidencies represent an abnormal return, a risk premium, or a mere statistical fluke. This paper investigates whether this presidential premium is due to spurious-regression bias, data mining, or economic policy uncertainty. Decomposing the presidential premium into expected and unexpected components, we find that over two-thirds of the premium is unexpected, which is inconsistent with the spurious-regression bias explanation. The presidential premium is not explained by data mining given that it persists in the post-publication period, and it remains robust even if we purge returns of their covariation with economic policy uncertainty.
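The expected/unexpected decomposition referred to above amounts to splitting returns into the fitted values and residuals of a predictive regression. The sketch below illustrates that idea on simulated data with a single invented predictor; it is not the paper's specification.

```python
# Decompose a return series into expected (fitted) and unexpected (residual) components
# with a predictive regression; the predictor and the data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240
policy_uncertainty = rng.normal(size=n)                       # stand-in lagged predictor
returns = 0.005 + 0.002 * policy_uncertainty + rng.normal(0, 0.04, n)

X = sm.add_constant(policy_uncertainty)
fit = sm.OLS(returns, X).fit()
expected, unexpected = fit.fittedvalues, fit.resid
print("share of return variance that is unexpected:",
      round(unexpected.var() / returns.var(), 3))
```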

18.
    
The audit of financial statements is a complex and highly specialized process. Digitalization and the increasing automation of transaction processing create new challenges for the auditors who carry out those audits. New data analysis techniques offer the opportunity to improve the auditing of financial statements and to overcome the limitations of traditional audit procedures when faced with increasingly large amounts of financially relevant transactions that are processed automatically or semi-automatically by computer systems. This study discusses process mining as a novel data analysis technique which has been receiving increased attention in audit practice. Process mining makes it possible to analyse business processes in an automated manner. This study investigates how process mining can be integrated into contemporary audits by reviewing the relevant audit standards and incorporating the results from a field study. It demonstrates the feasibility of embedding process mining within financial statement audits in accordance with contemporary audit standards and generally accepted audit practices. Implementation of process mining increases the reliability of audit conclusions and improves the robustness of audit evidence by replacing manual audit procedures. Process mining, as a novel data mining technique, provides auditors with the means to keep pace with current technological developments and challenges.
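As a toy illustration of what "analysing business processes in an automated manner" can mean, the sketch below mines directly-follows relations from a small event log with pandas. Real audit engagements would typically use a dedicated process-mining toolkit; the purchase-to-pay log here is invented.

```python
# Minimal process-mining sketch: count directly-follows relations in an event log.
# The event log is an invented purchase-to-pay example.
import pandas as pd
from collections import Counter

log = pd.DataFrame({
    "case":      ["PO-1", "PO-1", "PO-1", "PO-2", "PO-2", "PO-2", "PO-2"],
    "activity":  ["create order", "approve", "pay invoice",
                  "create order", "pay invoice", "approve", "pay invoice"],
    "timestamp": pd.to_datetime([
        "2023-01-02", "2023-01-03", "2023-01-10",
        "2023-01-05", "2023-01-06", "2023-01-07", "2023-01-08"]),
})

# For each case, order events by time and count pairs of activities that directly follow each other.
dfg = Counter()
for _, events in log.sort_values("timestamp").groupby("case"):
    acts = events["activity"].tolist()
    dfg.update(zip(acts, acts[1:]))

for (a, b), n in dfg.items():
    print(f"{a} -> {b}: {n}")
# An auditor would flag control deviations such as "pay invoice" occurring before "approve".
```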

19.
    
The accounting fraud detection models developed on financial data prepared under US Generally Accepted Accounting Principles (GAAP) in the current literature achieve significantly weaker performance than models based on financial data prepared under different accounting standards. This study contributes to the US GAAP accounting fraud data mining literature by attaining higher model performance than reported in the prior literature. Financial data from the 10-K forms of 320 fraudulent financial statements (80 fraudulent companies) and 1,200 nonfraudulent financial statements (240 nonfraudulent companies) were collected from the US Securities and Exchange Commission. The eight most commonly used data mining techniques were applied to develop prediction models. The results were cross-validated on a testing dataset and then compared, on the parameters of accuracy, F-measure, and type I and II errors, with existing studies from the US, China, Greece, and Taiwan. The developed predictive models for accounting fraud achieved performance comparable to that of models built on data from other accounting standards. Moreover, the developed models also significantly outperformed existing studies based on US GAAP financial data (accuracy by 10.5%, F-measure by 16.1%, type I error by 12.2% and type II error by 15.2%). Furthermore, this study provides an extensive literature review encompassing recent accounting fraud theory. It enhances the existing US fraud data mining literature with a performance comparison of studies based on other accounting standards.
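The comparison parameters named above (accuracy, F-measure, and type I and type II errors) can all be read off a confusion matrix. A minimal sketch on invented label vectors follows.

```python
# Computing accuracy, F-measure and type I / type II error rates from predictions.
# Labels: 1 = fraudulent statement, 0 = non-fraudulent. The vectors are invented.
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score

y_true = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:     ", accuracy_score(y_true, y_pred))
print("F-measure:    ", round(f1_score(y_true, y_pred), 3))
print("type I error: ", fp / (fp + tn))   # non-fraudulent flagged as fraudulent
print("type II error:", fn / (fn + tp))   # fraudulent statements missed
```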

20.
    
The capability of identifying customers who are more likely to respond to a product is an important issue in direct marketing. This paper investigates the impact of feature selection on predictive models which predict reordering demand of small and medium‐sized enterprise customers in a large online job‐advertising company. Three well‐known feature subset selection techniques in data mining, namely correlation‐based feature selection (CFS), subset consistency (SC) and symmetrical uncertainty (SU), are applied in this study. The results show that the predictive models using SU outperform those without feature selection and those with the CFS and SC feature subset evaluators. This study has examined and demonstrated the significance of applying the feature‐selection approach to enhance the accuracy of predictive modelling in a direct‐marketing context.
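Of the three evaluators, symmetrical uncertainty is the simplest to show in code: SU(X, Y) = 2·I(X;Y) / (H(X) + H(Y)), i.e. mutual information normalized by the two entropies. The sketch below computes it for a discrete feature against a binary response; the toy data are invented.

```python
# Symmetrical uncertainty (SU) between a discrete feature and the response.
# SU = 2 * I(X;Y) / (H(X) + H(Y)); the toy data are invented.
import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def symmetrical_uncertainty(x, y):
    hx, hy = entropy(x), entropy(y)
    return 2.0 * mutual_info_score(x, y) / (hx + hy) if (hx + hy) > 0 else 0.0

feature  = ["A", "A", "B", "B", "C", "C", "A", "B"]   # e.g. hypothetical customer segment
reorders = [ 1,   1,   0,   0,   1,   0,   1,   0 ]   # 1 = customer reordered
print("SU(feature, reorder):", round(symmetrical_uncertainty(feature, reorders), 3))
```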
