Similar Articles
20 similar articles found.
1.
Falsifying financial statements involves the manipulation of financial accounts by overstating assets, sales and profit, or understating liabilities, expenses or losses. This paper explores the effectiveness of an innovative classification methodology in detecting firms that issue falsified financial statements (FFS) and identifies the factors associated with FFS. The methodology is based on the concepts of multicriteria decision aid (MCDA) and the application of the UTADIS classification method (UTilités Additives DIScriminantes). A sample of 76 Greek firms (38 with FFS and 38 non-FFS), described by ten financial ratios, is used for detecting factors associated with FFS. A jackknife procedure is employed for model validation and for comparison with multivariate statistical techniques, namely discriminant and logit analysis. The results indicate that the proposed MCDA methodology outperforms the traditional statistical techniques that are widely used for FFS detection. Furthermore, the results indicate that the investigation of financial information can be helpful in identifying FFS, and they highlight the importance of financial ratios such as total debt to total assets, inventories to sales, net profit to sales, and sales to total assets.
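The jackknife validation described here is a leave-one-out scheme: each firm is held out in turn, the classifier is fit on the remaining 75, and the held-out firm is scored. A minimal sketch follows, using logistic regression and linear discriminant analysis as stand-ins for the logit and discriminant benchmarks; UTADIS itself requires an MCDA solver, and the data below are synthetic placeholders for the 76-firm sample.

```python
# Leave-one-out (jackknife) validation of two FFS classifiers.
# Sketch only: logistic regression and LDA stand in for the logit and
# discriminant benchmarks discussed in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in for the 76-firm sample described by ten financial ratios.
X, y = make_classification(n_samples=76, n_features=10, random_state=0)

loo = LeaveOneOut()
for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(model, X, y, cv=loo).mean()
    print(f"{name}: jackknife accuracy = {acc:.3f}")
```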

2.
Taking the construction of the standards system for the civil explosives industry as a methodological case study, this paper explains in detail how the standards-system construction methodology is applied. It analyzes the key implementation steps in depth, including objective analysis of the standards system, analysis of standards requirements, analysis of standards applicability, design of the standards system structure, compilation of the standards system table and the standards planning table, preparation of the standards system research report, and improvement and maintenance of the standards system, thereby providing concrete guidance and a case reference for applying the standards-system methodology.

3.
In this article, we adopt an efficiency approach to the two-group linear programming method of discriminant analysis (DA), using principles taken from data envelopment analysis (DEA), to predict group membership in an insurance underwriting scheme. Using an empirical insurance database, we illustrate the effectiveness of our model as a decision-making tool for distinguishing among automobile insurance applicants by contrasting our hybrid model with both statistical and LP methods of discriminant analysis. For this insurance application, we find that our hybrid model significantly outperforms the more traditional methods in separation and misclassification outcomes.
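The LP side of such hybrid models typically descends from goal-programming discriminant formulations. As an illustrative sketch (not the authors' exact hybrid model), the classic minimize-sum-of-deviations (MSD) formulation can be solved with scipy: find weights w and a cutoff c so the two groups fall on opposite sides of c, minimizing total boundary violations.

```python
# Freed-Glover style MSD (minimize sum of deviations) LP discriminant,
# a sketch of the linear-programming side such hybrid models build on.
import numpy as np
from scipy.optimize import linprog

def msd_discriminant(XA, XB):
    """Fit w, c so that w@x <= c-1 for group A and w@x >= c+1 for group B,
    minimizing the total constraint violation (deviations d >= 0)."""
    p = XA.shape[1]
    nA, nB = len(XA), len(XB)
    n = nA + nB
    # variable vector z = [w (p), c (1), d (n)]
    cost = np.concatenate([np.zeros(p + 1), np.ones(n)])
    A_ub = np.zeros((n, p + 1 + n))
    b_ub = -np.ones(n)
    A_ub[:nA, :p] = XA           #  w@x - c - d <= -1   (group A side)
    A_ub[:nA, p] = -1.0
    A_ub[nA:, :p] = -XB          # -w@x + c - d <= -1   (group B side)
    A_ub[nA:, p] = 1.0
    A_ub[:, p + 1:] = -np.eye(n)
    bounds = [(None, None)] * (p + 1) + [(0, None)] * n
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p], res.x[p]

rng = np.random.default_rng(0)
XA = rng.normal(0.0, 1.0, (30, 3))   # e.g. accepted applicants (synthetic)
XB = rng.normal(2.0, 1.0, (30, 3))   # e.g. rejected applicants (synthetic)
w, c = msd_discriminant(XA, XB)
print("misclassified A:", np.sum(XA @ w > c), " B:", np.sum(XB @ w < c))
```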

4.
For managing credit risk, commercial banks use various scoring methodologies to evaluate the financial performance of client firms. This paper upgrades the quantitative analysis used in the financial performance modules of state-of-the-art credit scoring methodologies. This innovation should help lending officers at the branch level filter out poor-risk applicants. A Data Envelopment Analysis (DEA) based methodology was applied to current data for 82 industrial/manufacturing firms comprising the credit portfolio of one of Turkey's largest commercial banks. Using financial ratios, DEA synthesizes a firm's overall performance into a single financial efficiency score, the "credibility score". Results were validated by various supporting (regression and discriminant) analyses and, most importantly, by expert judgments based on data or on current knowledge of the firms.
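As a hedged sketch of the scoring idea: the input-oriented CCR model in multiplier form computes, for each firm, the most favorable ratio weights subject to no firm scoring above one; the optimal value is the efficiency ("credibility") score. The input/output assignments below are placeholders, not the bank's actual ratio choices.

```python
# Input-oriented CCR DEA in multiplier form: a firm's score is
# max u@y0  s.t.  v@x0 = 1  and  u@Y_j - v@X_j <= 0 for every firm j.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """X: (n, m) input ratios, Y: (n, s) output ratios; efficiency of firm j0."""
    n, m = X.shape
    s = Y.shape[1]
    cost = np.concatenate([-Y[j0], np.zeros(m)])       # maximize u@y0
    A_ub = np.hstack([Y, -X])                          # u@Y_j - v@X_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]  # v@x0 = 1
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

rng = np.random.default_rng(1)
X = rng.uniform(1, 5, (82, 2))   # e.g. leverage, expense ratios (placeholders)
Y = rng.uniform(1, 5, (82, 2))   # e.g. profitability, liquidity (placeholders)
scores = [ccr_efficiency(X, Y, j) for j in range(len(X))]
print("efficient firms:", sum(s > 0.999 for s in scores))
```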

5.
A value chain framework for guiding financial firms in their credit decisions is urgently needed, as the COVID-19 pandemic has highlighted, yet missing from the extant literature, particularly for firms that lend to industries sensitive to value and supply chain bottlenecks. This study creates knowledge in value chain finance, a large, untapped, and under-researched market. It constructs, confirms, and validates a value chain framework for assessing risks in lending to agro and food processing firms, for which value chain risks are major business concerns globally. To pursue the objectives of the study, we use a novel methodology that integrates the Modified Delphi technique, exploratory factor analysis, confirmatory factor analysis, and discriminant analysis. Based on testing and analysis of primary data, including loan data, a framework comprising six factors is proposed for use in conjunction with the existing risk assessment models of finance companies to improve the quality of their credit decisions, contributing to their performance sustainability.

6.
In this paper we compare classical econometrics, calibration and Bayesian inference in the context of the empirical analysis of factor demands. Our application is based on a popular flexible functional form for the firm's cost function, namely Diewert's Generalized Leontief function, and uses the well-known Berndt and Wood 1947–1971 KLEM data on the US manufacturing sector. We illustrate how the Gibbs sampling methodology can easily be used to calibrate parameter values and elasticities on the basis of previous knowledge from alternative studies on the same data, but with different functional forms. We rely on a system of mixed non-informative diffuse priors for some key parameters and informative tight priors for others. Within the Gibbs sampler, we employ rejection sampling to incorporate parameter restrictions, which are suggested by economic theory but in general rejected by economic data. Our results show that values of those parameters that relate to non-informative priors are almost equal to the standard SUR estimates, whereas differences emerge for those parameters to which we have assigned informative priors. Moreover, discrepancies are apparent in some crucial parameter estimates obtained with or without rejection sampling.
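The rejection-sampling device is simple to sketch in a toy setting: inside each Gibbs iteration, a parameter block is redrawn from its conditional posterior until a theory-imposed restriction holds. The example below uses a flat-prior linear regression with a sign restriction on the slope; it illustrates only the mechanism, not the Generalized Leontief system itself.

```python
# Gibbs sampler for a linear regression with a sign restriction imposed
# by rejection sampling. Toy model with flat priors (an assumption here).
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 1.0 - 0.5 * x + rng.normal(scale=0.8, size=n)   # true slope is negative
X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y

sigma2 = 1.0
draws = []
for it in range(3000):
    # beta | sigma2, y ~ N(b_ols, sigma2 (X'X)^-1), redrawn until slope <= 0
    while True:
        beta = rng.multivariate_normal(b_ols, sigma2 * XtX_inv)
        if beta[1] <= 0:        # restriction suggested by theory
            break
    # sigma2 | beta, y ~ Inverse-Gamma(n/2, RSS/2)
    rss = np.sum((y - X @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / rss)
    if it >= 500:               # discard burn-in
        draws.append(beta)
print("posterior mean of slope:", np.mean([d[1] for d in draws]))
```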

7.
The objective of the study was to develop a valid measurement scale for green human resource management (HRM). Although common green HRM practices have been described in much of the literature, previous studies focused on only a small number of functions when integrating environmental management with HRM. Additionally, the measurement of green HRM practices still calls for empirical validation. The two-stage methodology of structural equation modeling in AMOS was employed for data analysis. Exploratory factor analysis revealed seven dimensions of the construct, measured by 28 items. Confirmatory factor analysis confirmed the factor structure. The measurement instrument demonstrated convergent and discriminant validity, and several model fit indices indicated good model fit. The study provides supplementary evidence on the underlying structure of the construct that can be valuable to researchers and practitioners in this area.

8.
9.
The performance on small and medium-size samples of several techniques for solving the classification problem in discriminant analysis is investigated. The techniques considered are two widely used parametric statistical techniques (Fisher's linear discriminant function and Smith's quadratic function), and a class of recently proposed nonparametric estimation techniques based on mathematical programming (linear and mixed-integer programming). A simulation study is performed, analyzing the relative performance of the above techniques in the two-group case, for various small sample sizes and moderate group overlap, across six different data conditions. Training samples as well as validation samples are used to assess the classificatory performance of the techniques. The degree of group overlap and the sample sizes selected for analysis in this paper are of practical interest because they closely reflect the conditions of many real data sets. The results of the experiment show that Smith's quadratic function tends to be superior on both training and validation samples when the variances–covariances across groups are heterogeneous, while the mixed-integer technique performs best on the training samples when the variances–covariances are equal, and on validation samples with equal variances and discrete uniform independent variables. The mixed-integer technique and the quadratic discriminant function are also found to be more sensitive than the other techniques to sample size, giving disproportionately inaccurate results on small samples.
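A condensed Monte Carlo version of this design is easy to reproduce for the two parametric rules (the mathematical-programming rules need an LP/MIP solver and are omitted here): simulate small training samples under equal and heterogeneous covariances, then score each rule on a larger validation sample.

```python
# Monte Carlo sketch of the small-sample comparison: Fisher's linear
# rule vs Smith's quadratic rule under equal and unequal covariances.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(3)

def trial(n_per_group, sd2):
    """One replication: fit on a small sample, score on a validation sample."""
    Xtr = np.vstack([rng.normal(0, 1, (n_per_group, 2)),
                     rng.normal(1, sd2, (n_per_group, 2))])
    ytr = np.repeat([0, 1], n_per_group)
    Xva = np.vstack([rng.normal(0, 1, (200, 2)),
                     rng.normal(1, sd2, (200, 2))])
    yva = np.repeat([0, 1], 200)
    return [m.fit(Xtr, ytr).score(Xva, yva)
            for m in (LinearDiscriminantAnalysis(),
                      QuadraticDiscriminantAnalysis())]

for sd2, label in [(1.0, "equal covariances"), (3.0, "heterogeneous")]:
    acc = np.mean([trial(15, sd2) for _ in range(200)], axis=0)
    print(f"{label}: LDA {acc[0]:.3f}  QDA {acc[1]:.3f}")
```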

10.
Many production/inventory systems contain thousands of stock keeping units (SKUs). In general, it is not computationally (or conceptually) feasible to consider every one of these items individually in the development of control policies and strategies. Our objective here is to develop a methodology for defining groups to support strategic planning for the operations function. Accordingly, such groups should take into consideration all product characteristics which have a significant impact on the particular operations management problem of interest. These characteristics can include many of the attributes used in other functional groupings and will most certainly go beyond the cost and volume attributes used in ABC analysis.

The ORG methodology is based on statistical clustering and can utilize a full range of operationally significant item attributes. It considers both statistical measures of discrimination and the operational consequences of implementing policies derived on the basis of group membership. The main departures of this analysis from earlier work are: 1) the approach can handle any combination of item attribute information that is important for strategy purposes, 2) management's interest in defining groups on the basis of operational factors can be accommodated, 3) statistical discrimination is considered directly, 4) group definition reflects the performance of management policies which are based (in part) on group membership, and 5) the method can be applied successfully to systems with a large number of SKUs.

The specific application which motivated development of the ORG methodology was an analysis of distribution strategy for the service parts division of a major automobile manufacturer. The manufacturer was interested in developing optimal inventory stocking policies that took into account the complexities of its multiechelon distribution network, supplier relationships and customer service targets for each market segment. This manufacturer stocked over 300,000 part numbers in an extensive network with approximately 50 distribution centers and thousands of dealer locations (i.e., 1.5 million SKU/location combinations). The results of this application indicated that the advantage of using operationally relevant data for grouping, and for defining generic, group-based policies for controlling inventory, can be substantial. The ORG methodology can be of value to operations managers in industries with a large number of diverse items.
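As a minimal sketch of the clustering step (attribute names are illustrative, not the manufacturer's data), operationally significant SKU attributes can be standardized and grouped with k-means; the ORG methodology adds discrimination measures and policy-performance checks on top of such a grouping.

```python
# Statistical clustering of SKUs on operationally relevant attributes,
# a minimal stand-in for the ORG grouping step.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_skus = 5000
attributes = np.column_stack([
    rng.lognormal(2, 1, n_skus),    # annual demand volume (illustrative)
    rng.lognormal(1, 0.5, n_skus),  # unit cost
    rng.uniform(1, 30, n_skus),     # replenishment lead time, days
    rng.beta(2, 5, n_skus),         # demand variability (CV proxy)
])
Z = StandardScaler().fit_transform(attributes)  # put attributes on one scale
groups = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(Z)
print("SKUs per group:", np.bincount(groups))
```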

11.
The Australian knowledge management standard regards the modern organization as a knowledge ecosystem. Current knowledge management research, however, lacks study of the fundamental characteristics of knowledge ecosystems and of methodologies for managing them. By analyzing how ecosystem theory is applied in the Australian knowledge management standard, this article derives the main characteristics of the organizational knowledge ecosystem. On this basis, and drawing on complexity theory, it proposes a complexity methodology for knowledge ecosystems, providing a foundation for shifting how organizations think about knowledge management.

12.
Although economic theory assumes that risk is of central importance in financial decision making, it is difficult to measure the uncertainty faced by investors. Commonly used empirical proxies for risk (such as the moving standard deviation of the returns on an asset) are not firmly grounded in economic theory. Risk measures have been developed by other studies, but these are often based on subjective weights attached to a range of objective component indicators, are difficult to replicate, and are not strictly consistent with the underlying theory. The contribution of this article is to develop a methodology for constructing rational-expectations-consistent empirical risk measures. It has the advantages of being explicitly consistent with economic theory and easily replicable. We illustrate this methodology by specific application to the South African context. The time-varying risk measure developed in this article is consistent with a rational expectations application of the expectations hypothesis. The constructed measure is a broad one (it includes political risk and peso problems, for instance) and reflects investors' perceptions of systematic risk.

13.
In Part I, exact results for univariate ("p = 1") two-group ("k = 2") classification problems were derived assuming normality and equality of the variances. In Part IIa, asymptotic results for multivariate ("p > 1") two-group classification and discrimination problems are based on the corresponding assumptions of multivariate normality and equality of the covariance matrices. The results (4.6.5), (4.6.6) and (4.6.7) are believed to be new.
The asymptotic results in Section 4.6, together with results presented elsewhere in the literature, constitute the basis of various detailed proposals to deal with problems from actual statistical practice. Most of these proposals are modifications or specifications of existing ones. We shall pay some attention to (I) testing whether differences exist. But we are mainly interested in: (II) constructing a discriminant function, (III) assigning the individual under classification, and (IV) constructing a confidence interval for "the" posterior probability that the individual under classification belongs to Population 2.
An important role in our theory is played by various techniques for selecting variables in discriminant analysis. The need for such techniques follows from Section 4.10. The consequences of building in a selection technique are discussed in Section 4.12. One of our proposals motivates the theory presented in Chapter 3 and is mentioned here for that reason: employ a large part of the data, say 70%, in order to construct a discriminant function (via a selection of variables); by applying this function to the rest of the data, the exact univariate theory of Part I becomes applicable. Part IIb will contain a chapter on applications.

14.
This paper provides a method for the real-time monitoring of job stress in emergency department (ED) physicians. It is implemented in a Decision Support System (DSS) designed for patient-to-physician assignment after triage. Our concept of job stress includes not only the workload but also time pressure and uncertainty. A job stress function is estimated based on the consensus views of ED physicians obtained through a novel methodology involving stress factor analysis, questionnaire design, and the statistical analysis of expert opinions. The resulting stress score enables the assessment of job stress using workload data from the ED physicians’ whiteboard. These data can be used for the real-time measurement and monitoring of ED physician job stress in a stochastic and dynamic environment, which is the main novelty of this method as compared to previous workload and stress measurement proposals. A further advantage of this methodology is that it is general enough to be adapted to physician job stress monitoring in any ED. The use of the DSS for ED patient-flow management reduces job stress and spreads it more evenly among the whole team of physicians, while also improving other important ED performance measures such as arrival-to-provider time and the percentage of compliance with patient waiting time targets. A case study illustrates the application of the methodology for the construction of a stress-score, the monitoring of physician stress levels, and ED patient-flow management.
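A stress score of this kind can be sketched as a weighted sum of whiteboard indicators, with the weights standing in for the expert-consensus estimates; all factor names and weights below are hypothetical placeholders, not the paper's estimated function. The DSS can then route the next patient to the least-stressed physician.

```python
# Minimal sketch of a real-time stress score built from whiteboard data.
# Factor names and weights are hypothetical; in the paper they come from
# the expert-consensus estimation step.
from dataclasses import dataclass

@dataclass
class PhysicianState:
    patients_assigned: int   # current workload
    patients_critical: int   # acuity-driven time pressure
    undiagnosed: int         # uncertainty: cases without a working diagnosis

WEIGHTS = {"patients_assigned": 1.0, "patients_critical": 2.5,
           "undiagnosed": 1.5}   # hypothetical consensus weights

def stress_score(s: PhysicianState) -> float:
    """Weighted sum of workload, time-pressure and uncertainty indicators."""
    return (WEIGHTS["patients_assigned"] * s.patients_assigned
            + WEIGHTS["patients_critical"] * s.patients_critical
            + WEIGHTS["undiagnosed"] * s.undiagnosed)

def assign_patient(team: list[PhysicianState]) -> int:
    """DSS rule of thumb: route the new patient to the least-stressed physician."""
    return min(range(len(team)), key=lambda i: stress_score(team[i]))

team = [PhysicianState(4, 1, 2), PhysicianState(2, 0, 1), PhysicianState(5, 2, 0)]
print("assign next patient to physician", assign_patient(team))
```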

15.
The standard approach to measuring total factor productivity can produce biased results if the data are drawn from a market that is not in long-run competitive equilibrium. This article presents a methodology for adjusting data on output and variable inputs to the values they would take if the market were in long-run competitive equilibrium, given the fixed inputs and input prices. The method uses nonstochastic, parametric translog cost frontiers and calculates equilibrium values for output and variable inputs using an iterative linear programming procedure. Data from seven industries for 1970–1979 are used to illustrate the methodology.

16.
The present paper tests a new model comparison methodology by comparing multiple calibrations of three agent-based models of financial markets on the daily returns of 24 stock market indices and exchange rate series. The models chosen for this empirical application are the herding model of Gilli and Winker (2003), its asymmetric version by Alfarano et al. (2005), and the more recent model by Franke and Westerhoff (2011), all of which share a common lineage with the herding model introduced by Kirman (1993). In addition, standard ARCH processes are included for each financial series to provide a benchmark for the explanatory power of the models. The methodology provides a consistent and statistically significant ranking of the three models. More importantly, it also reveals that the best performing model, Franke and Westerhoff, is generally not distinguishable from an ARCH-type process, suggesting that their explanatory power for the data is similar.
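The shared Kirman (1993) mechanism is easy to simulate: agents switch between two sentiment camps through a small idiosyncratic probability (eps) and a recruitment term (delta) proportional to the other camp's size. The parameter values below are illustrative; when eps is small relative to delta, the population herds into one camp for long stretches.

```python
# Simulation sketch of the Kirman (1993) herding mechanism common to
# the three agent-based models: idiosyncratic switching plus recruitment.
import numpy as np

rng = np.random.default_rng(5)
N, eps, delta = 100, 0.002, 0.05
k = N // 2                        # number of "optimists"
path = []
for t in range(20000):
    p_up = (1 - k / N) * (eps + delta * k / (N - 1))       # pessimist converts
    p_dn = (k / N) * (eps + delta * (N - k) / (N - 1))     # optimist converts
    u = rng.random()
    if u < p_up:
        k += 1
    elif u < p_up + p_dn:
        k -= 1
    path.append(2 * k / N - 1)    # sentiment index in [-1, 1]
print("share of time in extreme sentiment (|x| > 0.8):",
      np.mean(np.abs(np.array(path)) > 0.8))
```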

17.
In recent years an avalanche of literature has been published in the field of multiple criteria analysis. This methodology for decision-making and evaluation serves to find the best compromise solutions among alternative choice options, taking into account the existence of conflicting judgment criteria.

The present paper focuses attention on one particular class of multiple criteria methods, viz. those in which the available information (impacts and policy priorities) is measured in an ordinal sense. This low level of measurement precludes the application of standard numerical methods. For this problem a new method, the so-called regime analysis, is devised in the paper.

18.
The purpose of this study is to extend the empirical analysis of Thompson (1979) to generate individual company costs of equity capital from statistical analysis of cross-sectional industry data. The paper uses a new random coefficients regression approach on 18 cross-sections of 75 electric utilities for the years 1963–1980. The results show rank stability between years, and plots of individual company costs of equity accord with simple facts about the companies. The procedure does not allow calculation of standard errors of the estimates, but when compared with existing cost-of-capital methods it is shown to be a useful addition to the methodology of estimation.

19.
Concomitant variables in finite mixture models

20.
The linear mixed-effects model has been widely used for the analysis of continuous longitudinal data. This paper demonstrates that the linear mixed model can be adapted and used for the analysis of structured repeated measurements. A computational advantage of the proposed methodology is that there is no extra burden on the analyst since any software for linear mixed-effects models can be used to fit the proposed models. Two data sets from clinical psychology are used as motivating examples and to illustrate the methods.
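The paper's computational point, that standard mixed-model software suffices, can be illustrated with statsmodels: a random-intercept model fit to simulated repeated measurements. The data below are stand-ins, not the clinical psychology examples.

```python
# Fitting a linear mixed-effects model with off-the-shelf software:
# random intercepts for subjects measured repeatedly over time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_subj, n_obs = 40, 5
subj = np.repeat(np.arange(n_subj), n_obs)
time = np.tile(np.arange(n_obs), n_subj)
b = rng.normal(0, 2, n_subj)                  # subject-level random intercepts
y = 10 + 0.8 * time + b[subj] + rng.normal(0, 1, len(subj))
df = pd.DataFrame({"y": y, "time": time, "subject": subj})

model = smf.mixedlm("y ~ time", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```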
