Similar Literature
20 similar documents found.
1.
In this paper we propose a method to enhance the performance of knowledge-based decision-support systems, whose knowledge is by nature volatile and incomplete in a dynamically changing situation, by providing meta-knowledge augmented by the Qualitative Reasoning (QR) approach. The proposed system aims to overcome the potential incompleteness of the knowledge base. Using the deep meta-knowledge incorporated into the QR module, along with the knowledge gained from applying inductive learning, we identify the ongoing processes and amplify the effect of each pending process on the attribute values. In doing so, we apply the QR models to enhance or reveal patterns that are otherwise less obvious. The enhanced patterns can eventually be used to improve the classification of the data samples. The success factor hinges on the completeness of the QR process knowledge base: with enough processes taking place, the influence of each process will lead prediction in a direction that better reflects the current trend. The preliminary results are successful and shed light on a smooth introduction of Qualitative Reasoning from the physical laboratory to the business domain. © 2001 John Wiley & Sons, Ltd.

2.
Over the last decades, there has been a growing interest in applying artificial intelligence techniques to solve a spectrum of financial problems. A number of studies have shown promising results in using artificial neural networks (ANNs) to guide investment trading. Given the expanding role of ANNs in financial trading, this paper proposes the use of a hybrid neural network, which consists of two independent ANN architectures, and comparatively evaluates its performance against independent ANNs and econometric models in the trading of a financial-engineered (synthetic) derivative composed of options on foreign exchange futures. We examine the financial profitability and the market timing ability of the competing neural network models and statistically compare their attributes with those based on linear and nonlinear statistical projections. A random walk model and the option pricing method are also included as benchmarks for comparison. Our empirical investigation finds that, for each of the currencies analysed, trading strategies guided by the proposed dual network are financially profitable and yield a more stable stream of investment returns than the other models. Statistical results strengthen the notion that diffusion of information contents and cross-validation between the independent components within the dual network are able to reduce bias and extreme decision making over the long run. Moreover, the results are robust with respect to different levels of transaction costs. Copyright © 2009 John Wiley & Sons, Ltd.

3.
This work proposes a model for the valuation of branch offices of banks based on the rough set theory, which could be used as the basis for a decision-making system for dimensioning strategies of a financial entity. It compares the rough set approach with the competing discriminant analysis methodology using a common set of data from 421 branches. We pay special attention to data reduction and the creation of decision rules that will allow future branches to be classified. These rules could constitute the basis for the evaluation of the viability of dimensioning strategies for a financial entity. In order to evaluate the predictive capabilities of the decision rules, we present the results of cross-validation tests assessing the ability of the model to classify new branches. It appears that the rough sets approach provides a favourable tool for the valuation of branch offices. Copyright © 2004 John Wiley & Sons, Ltd.

4.
Using a sample of Shanghai and Shenzhen A-share listed firms during 2009–2020, we examine how customer concentration influences firms' digital transformation. In this study, we construct a proxy for digital transformation based on a text analysis approach. Our baseline results show that customer concentration hinders digital transformation at the firm level. Moreover, we design a series of tests, including instrumental variables and 2SLS regression, to mitigate endogeneity concerns, and we still find results consistent with the baseline regression. The results hold after multiple robustness tests. Furthermore, this negative effect of customer concentration on digital transformation is more pronounced when firms are subject to 1) more market competition, 2) more financing constraints, 3) higher transaction costs, and 4) less efficient use of resources. Overall, our results demonstrate the role of customer concentration in inhibiting firms' digital transformation from the perspective of supply chain management.

5.
This paper reports the results of a research project which examines the feasibility of developing a machine-independent audit trail analyser (MIATA). MIATA is a knowledge-based system which performs intelligent analysis of operating system audit trails. Such a system is proposed as a decision support tool for auditors when assessing the risk of unauthorized user activity in multi-user computer systems. It is also relevant to the provision of a continuous assurance service to clients by internal and external auditors. Monitoring user activity in system audit trails manually is impractical because of the vast quantity of events recorded in those audit trails. Even if done manually, an expert security auditor would be needed to look for two main types of events: user activity rejected by the system's security settings (failed actions) and users behaving abnormally (e.g. unexpected changes in activity such as the purchasing clerk attempting to modify payroll data). A knowledge-based system is suited to applications that require expertise to perform well-defined, yet complex, monitoring activities (e.g. controlling nuclear reactors and detecting intrusions in computer systems). To permit machine-independent intelligent audit trail analysis, an anomaly-detection approach is adopted. Time series forecasting methods are used to develop and maintain the user profile database (knowledge base) that allows identification of users with rejected behaviour as well as abnormal behaviour. The knowledge-based system maintains this knowledge base and permits reporting on the potential intruder threats (summarized in Table I). The intelligence of the MIATA system lies in its ability to handle audit trails from any system, its knowledge base capturing rejected user activity and detecting anomalous activity, and its reporting capabilities focusing on known methods of intrusion. MIATA also updates user profiles and forecasts of behaviour on a daily basis; as such, it also 'learns' from changes in user behaviour. The feasibility of generating machine-independent audit trail records, and the applicability of the anomaly-detection approach and time series forecasting methods, are demonstrated using three case studies. These results support the proposal that developing a machine-independent audit trail analyser is feasible. Such a system will be an invaluable aid to an auditor in detecting potential computer intrusions and monitoring user activity. Copyright © 2004 John Wiley & Sons, Ltd.
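
To make the anomaly-detection idea concrete, here is a minimal sketch (not MIATA's actual implementation; the smoothing parameter, threshold and example numbers are illustrative) of profiling a user's daily activity with an exponential-smoothing forecast and flagging days that deviate strongly from it:

```python
# Hypothetical sketch of profile-based anomaly detection: forecast a user's
# daily activity from past counts and flag days far from the forecast.
def detect_anomaly(history, today, alpha=0.3, k=3.0):
    """history: past daily event counts for one user; today: today's count."""
    if len(history) < 2:
        return False  # not enough data to profile this user yet
    forecast, errors = history[0], []
    for x in history[1:]:
        errors.append(x - forecast)                    # one-step-ahead error
        forecast = alpha * x + (1 - alpha) * forecast  # exponential smoothing
    sigma = (sum(e * e for e in errors) / len(errors)) ** 0.5
    return abs(today - forecast) > k * max(sigma, 1e-9)

# A purchasing clerk whose activity suddenly triples is flagged:
print(detect_anomaly([12, 10, 11, 13, 12, 11, 10], today=35))  # True
```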

6.
Accurate prediction of stock market price is of great importance to many stakeholders. Artificial neural networks (ANNs) have shown robust capability in predicting stock price return, future stock price and the direction of stock market movement. The major aim of this study is to predict the next trading day closing price of the Qatar Exchange (QE) Index using historical data from 3 January 2010 to 31 December 2012. A multilayer perceptron ANN architecture was used as a prediction model with 10 market technical indicators as input variables. The experimental results indicate that ANNs are an effective modelling technique for predicting the QE Index with high accuracy, outperforming the well-established autoregressive integrated moving average models. To the best of our knowledge, this is the first attempt to use ANNs to predict the QE Index, and its performance results are comparable to, and sometimes better than, many stock market predictions reported in the literature. The ANN model also revealed that the weighted and simple moving averages are the most important technical indicators in predicting the QE Index, and the accumulation/distribution oscillator is the least important such indicator. The analysis results also indicated that the ANNs are resilient to stock market volatility. Copyright © 2014 John Wiley & Sons, Ltd.
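
As a rough illustration of this kind of setup (not the authors' exact architecture or data; the synthetic inputs below merely stand in for the 10 technical indicators), a multilayer perceptron regressor can be wired up as follows:

```python
# Minimal sketch: an MLP mapping 10 indicator values to the next day's close.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))          # stand-in for 10 technical indicators
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=500)  # stand-in target

X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

scaler = StandardScaler().fit(X_train)   # scale inputs before training
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("out-of-sample R^2:", model.score(scaler.transform(X_test), y_test))
```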

7.
Large merger and acquisition (M&A) samples feature the pervasive presence of repetitive acquirers. They offer an attractive empirical context for revealing the presence of acquirer fixed effects (permanent abnormal performance). But M&A panel data are quite heterogeneous; just a few acquirers undertake many M&As. Does this feature affect statistical inference? To investigate the issue, our study relies on simulations based on real data sets. The results suggest the existence of a bias, confirming suspicions reported in the extant literature about the validity of fixed-effect regression-based statistics (R-square, adjusted R-square and fixed-effects Fisher tests) used to detect the presence and significance of acquirer fixed effects. We introduce a new resampling method to detect acquirer fixed effects with attractive statistical properties (size and power) for samples of acquirers that complete at least five acquisitions. The proposed method confirms the presence of acquirer fixed effects, but only for a marginal fraction of the acquirer population. This result is robust to endogenous attrition and varying time periods between successive transactions.

8.
This paper focuses on the liquidity of electronic stock markets, applying a sequential estimation approach to models of volume duration with increasing threshold values. A modified ACD model with a Box–Tukey transformation and a flexible generalized beta distribution is proposed to capture the changing cluster structure of duration processes. The estimation results with German XETRA data reveal the market's absorption limit for high volumes of shares, expanding the time costs of illiquidity when trading these quantities.
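
The Box–Tukey transformation is commonly understood as a shifted Box–Cox power transform; a minimal sketch under that assumption (the parameter names lam and c are illustrative, not the paper's notation):

```python
# Sketch of a shifted (Box-Tukey style) power transformation: a Box-Cox
# transform applied after adding a shift constant c to the duration.
import math

def box_tukey(x, lam, c):
    z = x + c
    if z <= 0:
        raise ValueError("x + c must be positive")
    return math.log(z) if lam == 0 else (z ** lam - 1.0) / lam

durations = [0.5, 1.2, 3.4, 8.9]   # illustrative volume durations
print([round(box_tukey(d, lam=0.5, c=1.0), 3) for d in durations])
```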

9.
In this paper, we propose a goal-based investment model that is suitable for personalized wealth management. The model only requires a few intuitive inputs such as size of wealth, investment amount, and consumption goals from individual investors. In particular, a priority level can be assigned to each consumption goal and the model provides a holistic solution based on a sequential approach starting with the highest priority. This allows strict prioritization by maximizing the probability of achieving higher priority goals that are not affected by goals with lower priorities. Furthermore, the proposed model is formulated as a linear program that efficiently finds the optimal financial plan. With its simplicity, flexibility, and computational efficiency, the proposed goal-based investment model provides a new framework for automated investment management services.
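
A stylized sketch of the sequential idea in a toy one-period setting (the formulation, numbers and variable names are illustrative, not the paper's model): maximize the funded fraction of the highest-priority goal, lock it in as an equality constraint, then re-solve for the next goal.

```python
# Stylised sketch of the sequential (lexicographic) goal approach in a toy
# one-period setting; the formulation and numbers are illustrative only.
from scipy.optimize import linprog

wealth = 100.0
goals = [60.0, 50.0, 30.0]          # required amounts, highest priority first
n = len(goals)
locked = []                          # (goal index, locked funded fraction)

for i in range(n):
    c = [0.0] * n
    c[i] = -1.0                      # linprog minimizes, so maximize f_i
    A_ub = [goals]                   # sum_j goals[j] * f_j <= wealth
    b_ub = [wealth]
    A_eq = [[1.0 if j == k else 0.0 for j in range(n)] for k, _ in locked]
    b_eq = [v for _, v in locked]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq or None, b_eq=b_eq or None, bounds=[(0, 1)] * n)
    locked.append((i, res.x[i]))     # lower-priority goals cannot reduce this

for i, frac in locked:
    print(f"goal {i + 1}: {frac:.0%} funded")  # 100%, 80%, 0%
```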

10.
Valuation of vulnerable American options with correlated credit risk
This article evaluates vulnerable American options based on the two-point Geske and Johnson method. In accordance with the martingale approach, we provide analytical pricing formulas for European and multi-exercisable options under risk-neutral measures. Employing Richardson's extrapolation then yields the values of vulnerable American options. To demonstrate the accuracy of our proposed method, we use numerical examples to compare the values of vulnerable American options from our proposed method with the benchmark values from the least-squares Monte Carlo simulation method. We also perform sensitivity analyses for vulnerable American options and show how their prices vary with the correlation between the underlying assets and the option writer's assets.
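
For reference, the two-point Geske–Johnson extrapolation takes the standard form below (generic notation, not necessarily the article's):

```latex
% P_1: value of the (vulnerable) European option, exercisable only at T;
% P_2: value of the option exercisable at T/2 and T.
P_{\text{American}} \approx P_2 + (P_2 - P_1) = 2P_2 - P_1 .
```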

11.
This paper compares the performance of artificial neural networks (ANNs) with that of the modified Black model in both pricing and hedging short sterling options. Using high-frequency data, standard and hybrid ANNs are trained to generate option prices. The hybrid ANN is significantly superior to both the modified Black model and the standard ANN in pricing call and put options. Hedge ratios for hedging short sterling options positions using short sterling futures are produced using the standard and hybrid ANN pricing models, the modified Black model, and also standard and hybrid ANNs trained directly on the hedge ratios. The performance of hedge ratios from ANNs directly trained on actual hedge ratios is significantly superior to those based on a pricing model, and to the modified Black model. Copyright © 2012 John Wiley & Sons, Ltd.

12.
This paper introduces a novel conceptual framework to support the creation of knowledge representations based on enriched semantic vectors, using the classical vector space model extended with ontological support. The work focuses on collaborative engineering projects, where knowledge plays a key role in the process. Collaboration is the arena, engineering projects are the target, and knowledge is the currency that brings harmony to the arena, since it can support innovation and, hence, successful collaboration. The test bed for the assessment of the approach comes from the Building and Construction sector, which faces significant problems in exchanging, sharing and integrating information among actors. Semantic gaps, or lack of meaning definition at the conceptual and technical levels, are problems that fundamentally originate in the use of representations to map the 'world' into models in an endeavour to anticipate other actors' views, vocabulary and even motivations. One of the primary research challenges addressed in this work is the formalization and representation of document contents, where most existing approaches are limited to the explicit, word-based information in the document. The research described in this paper explores how traditional knowledge representations can be enriched with implicit information derived from the complex relationships (semantic associations) modelled by domain ontologies, in addition to the information presented in documents, providing a baseline for facilitating knowledge interpretation and sharing between humans and machines. Preliminary results were collected using a clustering algorithm for document classification, indicating that the proposed approach does improve the precision and recall of classifications. Future work and open issues are also discussed. Copyright © 2015 John Wiley & Sons, Ltd.
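
A toy illustration of the enrichment idea (the mini-ontology, terms and weights below are all hypothetical): terms implied by a document through semantic associations are added to its vector, so two documents can match even without shared surface vocabulary.

```python
# Toy sketch: add weight for ontology-related terms that a document implies
# but does not state, then compare documents by cosine similarity.
import math

ontology = {"beam": ["load-bearing", "structure"],   # semantic associations
            "hvac": ["ventilation", "energy"]}

def enrich(term_freq, weight=0.5):
    vec = dict(term_freq)
    for term, related in ontology.items():
        if term in term_freq:
            for r in related:                         # implicit information
                vec[r] = vec.get(r, 0.0) + weight * term_freq[term]
    return vec

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

d1 = enrich({"beam": 3, "design": 1})
d2 = enrich({"structure": 2, "design": 2})  # overlaps only via the ontology
print(round(cosine(d1, d2), 3))
```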

13.
We propose two novel approaches for feature selection and ranking tasks based on simulated annealing (SA) and Walsh analysis, both of which use a support vector machine as the underlying classifier. These approaches are inspired by one of the key problems in the insurance sector: predicting the insolvency of a non-life insurance company. This prediction is based on accounting ratios, which measure the health of the companies. The approaches proposed provide a set of ratios (the SA approach) and a ranking of the ratios (the Walsh analysis ranking) that would allow a decision about the financial state of each company studied. The proposed feature selection methods are applied to predicting the insolvency of several Spanish non-life insurance companies, yielding state-of-the-art results in the tests performed. Copyright © 2005 John Wiley & Sons, Ltd.
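
A compact sketch of the SA wrapper (the cooling schedule, scoring and data are illustrative stand-ins for the paper's configuration and accounting ratios):

```python
# Sketch of simulated-annealing feature selection around an SVM classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=15, random_state=0)
rng = np.random.default_rng(0)

def score(mask):
    # Cross-validated accuracy of an SVM on the selected "ratios".
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean() if mask.any() else 0.0

mask = rng.random(15) < 0.5                  # random initial feature subset
cur = best = score(mask)
best_mask, T = mask.copy(), 1.0
for _ in range(100):
    cand = mask.copy()
    cand[rng.integers(15)] ^= True           # flip one feature in or out
    s = score(cand)
    # Accept improvements always, worse moves with Boltzmann probability.
    if s > cur or rng.random() < np.exp((s - cur) / T):
        mask, cur = cand, s
    if s > best:
        best, best_mask = s, cand.copy()
    T *= 0.95                                # geometric cooling
print(f"best CV accuracy {best:.3f} with features {np.flatnonzero(best_mask)}")
```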

14.
We analyse the implications of three different factors (preprocessing method, data distribution and training mechanism) on the classification performance of artificial neural networks (ANNs). We use three preprocessing approaches: no preprocessing, division by the maximum absolute values, and normalization. We study the implications of input data distributions by using five datasets with different distributions: the real data, uniform, normal, logistic and Laplace distributions. We test two training mechanisms: one belonging to the gradient-descent techniques, improved by a retraining procedure, and the other a genetic algorithm (GA), which is based on the principles of natural evolution. The results show statistically significant influences of all individual and combined factors on both training and testing performances. A major difference from other related studies is that for both training mechanisms we train the network using as starting solution the one obtained when constructing the network architecture; in other words, we use a hybrid approach by refining a previously obtained solution. We found that when the starting solution has relatively low accuracy rates (80–90%) the GA clearly outperformed the retraining procedure, whereas the difference was smaller or non-existent when the starting solution had relatively high accuracy rates (95–98%). As reported in other studies, we found little to no evidence of crossover operator influence on the GA performance. Copyright © 2005 John Wiley & Sons, Ltd.
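
The three preprocessing variants can be sketched as follows (assuming "normalization" means z-score standardization here; the paper's exact scheme may differ):

```python
# Sketch of the three preprocessing variants named in the study.
import numpy as np

X = np.array([[2.0, 200.0], [4.0, 400.0], [6.0, 900.0]])

X_raw    = X                                     # no preprocessing
X_maxabs = X / np.abs(X).max(axis=0)             # division by max |value|
X_norm   = (X - X.mean(axis=0)) / X.std(axis=0)  # normalization (z-score)

for name, M in [("raw", X_raw), ("maxabs", X_maxabs), ("norm", X_norm)]:
    print(name, M.round(2).tolist())
```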

15.
This study investigates customer behaviour and activity in the banking sector and uses various feature transformation techniques to convert the behavioural data into different data structures. Feature selection is then performed to generate feature subsets from the transformed datasets. Several classification methods used in the literature are applied to the original and transformed feature subsets. The proposed combined knowledge mining model enables us to conduct a benchmark study on the prediction of bank customer behaviour. A real bank customer dataset, drawn from 24,000 active and inactive customers, is used for an experimental analysis, which sheds new light on the role of feature engineering in bank customer classification. This paper's detailed systematic analysis of the modelling of bank customer behaviour can help banking institutions take the right steps to increase their customers' activity.

16.
Finance Research Letters, 2014, 11(3): 194–202
This paper studies the hedging performance of the static replication approach proposed by Derman, Ergener, and Kani (DEK, 1995) for continuous barrier options under the constant elasticity of variance (CEV) model of Cox (1975) and Cox and Ross (1976), and then focuses on how to improve the DEK method. Given the time-varying volatility feature of the CEV model, I show that the DEK static hedging portfolio exhibits serious mismatches of the theta values on the barrier, particularly when one of the component options of the portfolio is near expiration, which primarily explains why static portfolio values are greater than zero on the barrier except at the matching points. The DEK method is then improved (hereafter, the improved DEK method) by re-forming a static replication portfolio consisting of plain vanilla options and cash-or-nothing binary options with different maturities, so as to match both the value-matching condition and the theta-matching condition on the barrier. The numerical analyses indicate that under the CEV model, the improved DEK method significantly reduces replication errors for an up-and-out call option.
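
In generic notation (a sketch, not the paper's exact formulation), the improved portfolio's weights w_j on component options V_j are chosen so that both conditions hold at each matching point (t_i, B) on the barrier, where a knocked-out option is worth zero:

```latex
\sum_j w_j \, V_j(t_i, B) = 0
\qquad \text{(value matching)},
\qquad
\sum_j w_j \, \frac{\partial V_j}{\partial t}(t_i, B) = 0
\qquad \text{(theta matching)}.
```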

17.
The aim of this article is to propose a new approach to the estimation of the mortality rates based on two extended Milevsky and Promislov models: the first one with colored excitations modeled by Gaussian linear filters and the second one with excitations modeled by a continuous non-Gaussian process. The exact analytical formulas for theoretical mortality rates based on Gaussian linear scalar filter models have been derived. The theoretical values obtained in both cases were compared with theoretical mortality rates based on a classical Lee–Carter model, and verified on the basis of empirical Polish mortality data. The obtained results confirm the usefulness of the switched model based on the continuous non-Gaussian process for modeling mortality rates.
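
For reference, the classical Lee–Carter benchmark models the central death rate m_{x,t} at age x in year t as

```latex
% a_x, b_x: age effects; k_t: period mortality index.
\ln m_{x,t} = a_x + b_x \, k_t + \varepsilon_{x,t}.
```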

18.
Summary

An estimator which is a linear function of the observations and which minimises the expected square error within the class of linear estimators is called an “optimal linear” estimator. Such an estimator may also be regarded as a “linear Bayes” estimator in the spirit of Hartigan (1969). Optimal linear estimators of the unknown mean of a given data distribution have been described by various authors; corresponding “linear empirical Bayes” estimators have also been developed.

The present paper exploits the results of Lloyd (1952) to obtain optimal linear estimators, based on order statistics, of the location and/or scale parameter(s) of a continuous univariate data distribution. Related "linear empirical Bayes" estimators, which can be applied in the absence of exact knowledge of the optimal estimators, are also developed. This approach allows one to extend the results to the case of censored samples.
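
In standard notation, Lloyd's (1952) construction is generalized least squares on the ordered sample: with E[x] = mu 1 + sigma alpha and Cov[x] = sigma^2 Omega for the vector x of order statistics, the best linear unbiased estimators are

```latex
(\hat{\mu}, \hat{\sigma})^{\top}
  = \left( A^{\top} \Omega^{-1} A \right)^{-1} A^{\top} \Omega^{-1} \mathbf{x},
\qquad A = (\mathbf{1}, \boldsymbol{\alpha}).
```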

19.
Abstract

The popular domain-specific approach to risk reduction has created the illusion that efficient risk reduction can be delivered solely by using methods offered by the specific domain. As a result, many industries have been deprived of efficient risk reduction strategies and solutions. This paper argues that risk reduction is underpinned by domain-independent methods and principles which, combined with knowledge from the specific domain, help to generate effective risk reduction solutions. In this respect, the paper introduces a powerful method for reducing the likelihood of computational errors based on combining the domain-independent method of segmentation with local knowledge of the chain rule for differentiation. The paper also demonstrates that lack of knowledge of domain-independent principles for risk reduction leads to missed opportunities to reduce the risk of failure, even in a mature field like stress analysis. The domain-independent methods for risk reduction do not rely on reliability data or knowledge of the physical mechanisms underlying possible failure modes, and are particularly well suited to developing new designs with unknown failure mechanisms and no failure history. In many cases, reliability improvement and risk reduction by using the domain-independent methods come at no extra cost or at a relatively small cost. The presented domain-independent methods work across unrelated domains, as demonstrated by the supplied examples, which range over engineering and technology, computer science, project management, health risk management, business and mathematics. The domain-independent risk reduction methods presented in this paper promote building products and systems characterised by high reliability and resilience.

20.
In this article, we introduce a new method to approximate Markov perfect equilibrium in large-scale Ericson and Pakes (1995)-style dynamic oligopoly models that are not amenable to exact solution due to the curse of dimensionality. The method is based on an algorithm that iterates an approximate best response operator using an approximate dynamic programming approach. The method, based on mathematical programming, approximates the value function with a linear combination of basis functions. We provide results that lend theoretical support to our approach. We introduce a rich yet tractable set of basis functions, and test our method on important classes of models. Our results suggest that the approach we propose significantly expands the set of dynamic oligopoly models that can be analyzed computationally.
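
The core approximation replaces the exact value function over industry states s with a linear combination of K basis functions phi_k (generic notation; the weights r_k are found by mathematical programming within the approximate best-response iteration):

```latex
V(s) \;\approx\; \sum_{k=1}^{K} r_k \, \phi_k(s).
```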
