Similar Literature
1.
Robust statistics, an active research area within the discipline, refines and supplements classical statistical methods; its simplest and most accessible aspect is the robustness of individual statistics. This paper analyzes several commonly used statistics in depth, exposes their lack of robustness, presents several robust alternatives, and compares them with the classical statistics to demonstrate the advantages and practical value of robust statistics.
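As a hedged illustration of the robustness comparison this abstract describes (the paper's specific statistics and data are not given here, so the contaminated sample below is invented), a short Python sketch contrasts the sample mean with two common robust alternatives, the median and the 20% trimmed mean:

```python
import numpy as np
from scipy import stats

# A clean sample, and the same sample with one gross outlier appended.
clean = np.array([9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1])
contaminated = np.append(clean, 100.0)  # e.g., a recording error

for name, x in [("clean", clean), ("contaminated", contaminated)]:
    print(f"{name:>12}: mean={np.mean(x):6.2f}  "
          f"median={np.median(x):5.2f}  "
          f"20% trimmed mean={stats.trim_mean(x, 0.2):5.2f}")
```

A single outlier drags the mean far from the bulk of the data, while the median and trimmed mean barely move, which is exactly the kind of robustness advantage the paper examines.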

2.
This paper studies the relationship between financial development and economic growth at the prefecture level, using 2000-2006 data on a sample of 336 prefecture-level cities across China's 31 provinces. Quantile regression results show that the effects of financial development and the control variables on economic growth differ significantly, and fluctuate, across the conditional quantiles of growth. Compared with classical conditional-mean regression, conditional quantile regression reveals much richer information about the data-generating process, providing strong support for statistical modeling that integrates the spatial and temporal features of the regional finance-growth relationship.
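As a hedged sketch of the conditional-quantile technique the abstract refers to (the city panel is not reproduced here; `fin` and `growth` below are invented stand-in variables on synthetic data), one could fit the same linear specification at several quantiles with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 336
fin = rng.uniform(0.0, 1.0, n)                               # financial-development proxy
growth = 1.0 + 2.0 * fin + (0.5 + fin) * rng.normal(size=n)  # noise scale grows with fin
df = pd.DataFrame({"fin": fin, "growth": growth})

# The same linear model estimated at the 10th, 50th, and 90th percentiles.
for q in (0.10, 0.50, 0.90):
    res = smf.quantreg("growth ~ fin", df).fit(q=q)
    print(f"tau={q:.2f}: slope on fin = {res.params['fin']:.3f}")
```

Because the noise scale here increases with `fin`, the estimated slope rises across quantiles, the kind of heterogeneity a single conditional-mean regression would average away.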

3.
In risk analysis, Bayesian methods are more adaptable and flexible than classical methods for building decision frameworks, estimating risk distributions, and parameterizing models, but they also have drawbacks. Robust methods remedy some of these limitations: an analysis of problems under uncertainty shows that, when accurate and complete statistical information is lacking, robust Bayesian methods yield more reliable inference.
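The abstract does not pin down a model; one hedged sketch of the robust-Bayesian idea (a small class of conjugate normal priors standing in for incomplete prior information; all numbers invented) is to report the range of posterior means over the prior class rather than a single number:

```python
import numpy as np

# Normal data with known variance; conjugate normal priors on the mean.
x_bar, n, sigma2 = 1.5, 10, 4.0  # sample mean, sample size, known variance

def posterior_mean(mu0, tau2):
    """Posterior mean of the parameter under a N(mu0, tau2) prior."""
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # weight on the data
    return w * x_bar + (1.0 - w) * mu0

# A class of priors encoding vague, incomplete prior information.
priors = [(0.0, 1.0), (0.0, 4.0), (1.0, 1.0), (2.0, 0.5)]
means = [posterior_mean(mu0, tau2) for mu0, tau2 in priors]
print(f"posterior mean ranges over [{min(means):.3f}, {max(means):.3f}]")
```

A narrow range indicates the inference is robust to the prior; a wide range signals that, with the statistical information available, the conclusion hinges on prior assumptions, which is the situation where the abstract argues robust Bayesian methods earn their keep.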

4.
Transportation is a pillar of the national economy, and forecasts of transportation industry output give the relevant departments a theoretical basis for development planning. Using a province's historical data on gross industrial and agricultural output, fixed-asset investment, and transportation industry output, this paper builds a statistical regression model, solves it with MATLAB, refines the model in light of the results, and thereby offers a feasible method for forecasting transportation industry output.
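The paper solves its model in MATLAB; a rough Python analogue of the multiple regression it describes (the series below are invented placeholders, not the province's actual data) might look like this:

```python
import numpy as np
import statsmodels.api as sm

# Invented placeholder series: gross industrial-agricultural output,
# fixed-asset investment, and transportation industry output, by year.
indag  = np.array([120.0, 135.0, 152.0, 171.0, 190.0, 214.0])
invest = np.array([ 30.0,  34.0,  40.0,  47.0,  52.0,  60.0])
transp = np.array([  8.1,   9.0,  10.2,  11.5,  12.6,  14.3])

X = sm.add_constant(np.column_stack([indag, invest]))
model = sm.OLS(transp, X).fit()
print(model.params)                        # intercept and two slopes
print(model.predict([[1.0, 240.0, 68.0]])) # forecast for a hypothetical year
```

Residual diagnostics on a fit like this are what would motivate the model-refinement step the abstract mentions.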

5.
李爽楠 《云南金融》2012,(5X):202-203
Based on data for 218 used BMW cars in the Philadelphia area, this paper uses a bootstrap-based regression model to examine the factors that drive used-car prices and to explore a reasonable price-appraisal system. After specifying the regression model, the bootstrap method is introduced: resampling the small original dataset with replacement yields bootstrap samples on which statistical inference is carried out. The fitted model performs well, and it can be used to appraise a fair and reasonable transaction price for a given used car, offering a useful reference for personnel and agencies engaged in used-car appraisal and trading in China.
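A minimal sketch of the pairs bootstrap the abstract describes (the 218-car dataset is not available here, so synthetic mileage/price data stand in; variable names are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 218
mileage = rng.uniform(10, 120, n)                  # thousands of miles
price = 30 - 0.15 * mileage + rng.normal(0, 2, n)  # thousands of dollars

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Resample (x, y) pairs with replacement and refit the model each time.
boot = np.array([ols_slope(mileage[idx], price[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope = {ols_slope(mileage, price):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

With only a couple of hundred observations, the bootstrap distribution of the coefficients gives interval estimates without leaning on normal-theory standard errors, which is the appeal the paper highlights.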


7.
At the Central Economic Work Conference, monetary policy was returned from "appropriately loose" to "prudent," and it may move onto a stable-but-tightening track. The bond market will face challenges, including tighter liquidity, larger swings, weaker issuer enthusiasm, and rising potential default risk among enterprises, but it will also see favorable opportunities: an improving policy environment, growing bank demand for bond investments, and accelerating internationalization. Bond issuance and regulation will need timely adjustment to cope with this situation.

8.
Policy
《证券导刊》2011,(1):6-6
Second rate hike of the year: monetary policy swiftly returns to prudence. On December 25, 2010, the People's Bank of China announced that, effective December 26, it would raise the benchmark RMB deposit and lending rates of financial institutions, lifting the one-year benchmark deposit and lending rates by 0.25 percentage points each. This was the central bank's second rate hike of the year and its first since the central authorities announced the shift to a prudent monetary policy. Industry experts say this shows monetary policy is rapidly returning to prudence and that a rate-hike cycle may have begun.

9.
陈涛 《中国金融家》2011,(10):34-35
Since the beginning of this year, with inflation climbing steadily, stabilizing prices has become the primary goal of the prudent monetary policy. The central bank has continued to manage liquidity by repeatedly raising the reserve requirement ratio and interest rates; although the severe inflation picture has clearly not yet fundamentally reversed, the cumulative effects of the prudent policy have begun to show.

10.
In 2011, China is implementing a proactive fiscal policy and a prudent monetary policy, making macroeconomic regulation more targeted, flexible, and effective. This means the "appropriately loose" monetary policy in place since 2008 will formally give way to a "prudent" one.

11.
Accounting Retrenchment and Accounting Externalization: An Approach to Accounting Internationalization
The expansion of accounting's functions and of accounting information has left accounting overloaded, and the accounting system has become inefficient relative to the market. Accounting should retrench appropriately, supplying raw information and core accounting procedures while handing the remaining functions to more efficient market mechanisms such as accounting services, on-demand financial reporting, and eXtensible Business Reporting Language (XBRL). Retrenchment and externalization would weaken accounting's dependence on technology, political economy, and the environment, narrow the differences among national accounting systems, and thereby further accounting internationalization. Internationalization is an inevitable trend, and it will take the shape of uniformity within small regions and broad similarity with minor differences across larger ones.

12.
This article applies machine learning techniques to credibility theory and proposes a regression-tree-based algorithm to integrate covariate information into credibility premium prediction. The recursive binary algorithm partitions a collective of individual risks into mutually exclusive subcollectives and applies the classical Bühlmann-Straub credibility formula for the prediction of individual net premiums. The algorithm provides a flexible way to integrate covariate information into the prediction of individual net premiums. It is appealing for capturing nonlinear and/or interaction covariate effects. It automatically selects influential covariates for premium prediction and requires no additional ex ante variable selection procedure. The superior prediction accuracy of the proposed algorithm is demonstrated by extensive simulation studies. The proposed method is applied to U.S. Medicare data for illustration purposes.
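The article's tree-growing criterion is not reproduced here, but a hedged sketch of the classical Bühlmann-Straub step it applies within each subcollective (claim ratios and exposure weights below are simulated) can be written directly from the standard estimators:

```python
import numpy as np

def buhlmann_straub(X, w):
    """Bühlmann-Straub credibility premiums.
    X, w: (risks x periods) arrays of claim ratios and exposure weights."""
    m_i = w.sum(axis=1)                      # per-risk total exposure
    Xbar_i = (w * X).sum(axis=1) / m_i       # per-risk weighted mean
    m = m_i.sum()
    Xbar = (m_i * Xbar_i).sum() / m          # collective weighted mean

    I, T = X.shape
    s2 = (w * (X - Xbar_i[:, None]) ** 2).sum() / (I * (T - 1))  # within-risk variance
    c = m - (m_i ** 2).sum() / m
    tau2 = max(((m_i * (Xbar_i - Xbar) ** 2).sum() - (I - 1) * s2) / c, 0.0)

    Z = m_i / (m_i + s2 / tau2) if tau2 > 0 else np.zeros(I)  # credibility factors
    return Z * Xbar_i + (1 - Z) * Xbar

rng = np.random.default_rng(1)
X = rng.gamma(2.0, 0.5, size=(5, 6))    # claim ratios, 5 risks over 6 periods
w = rng.uniform(1.0, 3.0, size=(5, 6))  # exposure weights
print(buhlmann_straub(X, w))
```

In the proposed algorithm these premiums would be computed inside each leaf of the fitted regression tree rather than over the whole collective, letting the partition carry the covariate information.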

13.
Using the theory of stationary Markov chains, we uncover a previously unknown property of the behavior of betas. Specifically, if the cross-sectional distribution of betas is stationary over time, then the set of firms that remain in an arbitrarily chosen beta interval between one period and the next will not regress toward the mean. This surprising result occurs in spite of the well-known fact that the set of all the firms in the interval will exhibit the regression tendency. Our empirical tests indicate that betas behave in remarkable accordance with this prediction.
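As a hedged simulation of the property described (a stationary AR(1) is only one convenient data-generating process with a stationary cross-section; the paper's Markov-chain setting is more general), one can compare next-period means for all firms that start in a beta interval versus those that remain in it:

```python
import numpy as np

rng = np.random.default_rng(7)
n, rho = 200_000, 0.7
beta0 = rng.normal(1.0, 0.3, n)  # stationary cross-section of betas
beta1 = 1.0 + rho * (beta0 - 1.0) + rng.normal(0.0, 0.3 * np.sqrt(1 - rho**2), n)

lo, hi = 1.3, 1.5                           # an arbitrarily chosen beta interval
in0 = (beta0 >= lo) & (beta0 <= hi)         # firms starting in the interval
stay = in0 & (beta1 >= lo) & (beta1 <= hi)  # firms that remain in it

print(f"all entrants: {beta0[in0].mean():.3f} -> {beta1[in0].mean():.3f}")
print(f"stayers:      {beta0[stay].mean():.3f} -> {beta1[stay].mean():.3f}")
```

The full set of firms in the interval drifts back toward the cross-sectional mean of 1.0, while the set of stayers does not, mirroring the distinction the paper draws.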

14.
15.
16.
Insurance claims data usually contain a large number of zeros and exhibit fat-tail behavior. Misestimation of one end of the tail impacts the other end of the tail of the claims distribution and can affect both the adequacy of premiums and needed reserves to hold. In addition, insured policyholders in a portfolio are naturally non-homogeneous. It is an ongoing challenge for actuaries to build a predictive model that will simultaneously capture these peculiar characteristics of claims data and policyholder heterogeneity. Such models can help make improved predictions and thereby ease the decision-making process. This article proposes the use of spliced regression models for fitting insurance loss data. A primary advantage of spliced distributions is their flexibility to model different segments of the claims distribution with different parametric models. The threshold that breaks the segments is assumed to be a parameter, and this presents an additional challenge in the estimation. Our simulation study demonstrates the effectiveness of using multistage optimization for likelihood inference and at the same time the repercussions of model misspecification. For purposes of illustration, we consider three-component spliced regression models: the first component contains zeros, the second component models the middle segment of the loss data, and the third component models the tail segment of the loss data. We calibrate these proposed models and evaluate their performance using a Singapore auto insurance claims dataset. The estimation results show that the spliced regression model performs better than the Tweedie regression model in terms of tail fitting and prediction accuracy.
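A hedged sketch of the three-component structure (a point mass at zero, a lognormal body, and a Pareto tail past a threshold; these parametric choices, and the omitted continuity adjustment, are illustrative simplifications, not necessarily the article's exact models):

```python
import numpy as np
from scipy import stats

def spliced_logpdf(y, p0, pb, mu, sig, theta, alpha):
    """Log-density: P(y=0)=p0; lognormal body on (0, theta] with weight pb;
    Pareto(alpha) tail on (theta, inf) with weight 1-p0-pb."""
    body = stats.lognorm(s=sig, scale=np.exp(mu))
    out = np.full(y.shape, -np.inf)
    out[y == 0] = np.log(p0)
    mid = (y > 0) & (y <= theta)
    out[mid] = np.log(pb) + body.logpdf(y[mid]) - np.log(body.cdf(theta))
    tail = y > theta
    out[tail] = (np.log(1 - p0 - pb) + np.log(alpha)
                 + alpha * np.log(theta) - (alpha + 1) * np.log(y[tail]))
    return out

y = np.array([0.0, 0.0, 1.2, 3.5, 0.8, 25.0, 140.0])  # toy claim amounts
print(spliced_logpdf(y, p0=0.3, pb=0.6, mu=0.5, sig=1.0,
                     theta=10.0, alpha=1.8).sum())
```

Treating `theta` as a parameter makes the likelihood awkward to optimize jointly; holding it fixed, maximizing over the remaining parameters, and then profiling across a grid of thresholds is the kind of multistage strategy the article's simulation study evaluates.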

17.
A problem that sometimes occurs in undertaking empirical research in accounting and finance is that the theoretically correct form of the relation between the dependent and independent variables is not known, although often thought or assumed to be monotonic. In addition, transformations of disclosure measures and independent variables are proxies for underlying constructs and hence, while theory may specify a functional form for the underlying theoretical construct, it is unlikely to hold for empirical proxies. In order to cope with this problem, a number of accounting disclosure studies have transformed variables so that the statistical analysis is more meaningful. One approach that has been advocated in such circumstances is to rank the data and then apply regression techniques, a method that has been used recently in a number of accounting disclosure studies. This paper reviews a number of transformations, including the Rank Regression procedure. Because of the inherent properties of ranks and their use in regression analysis, an extension is proposed that provides an alternative mapping that replaces the data with their normal scores. The normal scores approach retains the advantages of using ranks but has other beneficial characteristics, particularly in hypothesis testing. Regressions based on untransformed data, on the log odds ratio of the dependent variable, on ranks, and regression using normal scores are applied to data on the disclosure of information in the annual reports of companies in Japan and Saudi Arabia. It is found that regression using normal scores has some advantages over ranks that, in part, depend on the structure of the data. However, the case studies demonstrate that no one procedure is best but that multiple approaches are helpful to ensure the results are robust across methods.
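A hedged sketch of the normal-scores transformation the paper advocates (random data stand in for the disclosure indices; the rank-to-score mapping below uses the common r/(n+1) convention, one of several):

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

def normal_scores(x):
    """Replace data with normal scores: rank, then invert the normal CDF."""
    ranks = stats.rankdata(x)              # average ranks for ties
    return stats.norm.ppf(ranks / (len(x) + 1))

rng = np.random.default_rng(3)
disclosure = rng.beta(2, 5, 100)   # bounded disclosure index (synthetic)
size = rng.lognormal(3, 1, 100)    # heavily skewed regressor (synthetic)

# Regress normal scores on normal scores, instead of ranks or raw values.
y = normal_scores(disclosure)
X = sm.add_constant(normal_scores(size))
print(sm.OLS(y, X).fit().params)
```

Like rank regression, this frees the analysis from an unknown monotonic functional form, but the scores' approximate normality makes standard hypothesis tests better behaved, which is the advantage the paper reports.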

18.
We provide some examples of how quantile regression can be used to investigate heterogeneity in pay‐firm size and pay‐performance relationships for U.S. CEOs. For example, do conditionally (predicted) high‐wage managers have a stronger relationship between pay and performance than conditionally low‐wage managers? Our results using data over a decade show, for some standard specifications, there is considerable heterogeneity in the returns‐to‐firm performance across the conditional distribution of wages. Quantile regression adds substantially to our understanding of the pay‐performance relationship. This heterogeneity is masked when using more standard empirical techniques.

19.
Regression analysis occupies a pivotal place among methods of economic analysis and has contributed greatly to making modern Western economics scientific, although many shortcomings and open problems remain. This paper introduces the scope and preconditions of modern regression analysis and the meaning of the random disturbance term. It focuses on two common misuses of economic regression models, "spurious regression" and the failure to account for heteroskedasticity and autocorrelation, illustrates them with examples, and summarizes the points to watch when applying regression models and the lessons they teach.
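As a hedged illustration of the "spurious regression" misuse the paper dissects (the series are simulated, not the paper's own examples): regressing one random walk on an independent one routinely produces a large R-squared and t-statistic despite there being no true relationship.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(8)
T = 500
x = np.cumsum(rng.normal(size=T))  # two independent random walks:
y = np.cumsum(rng.normal(size=T))  # no true relationship between them

res = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R^2 = {res.rsquared:.2f}, t-stat on x = {res.tvalues[1]:.1f}")
print(f"Durbin-Watson = {durbin_watson(res.resid):.2f}")  # near 0 flags trouble
```

A Durbin-Watson statistic near zero reveals the heavily autocorrelated residuals typical of spurious regressions; differencing the series or testing for cointegration is the standard remedy.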

20.
Abstract

This case study illustrates the analysis of two possible regression models for bivariate claims data. Estimates or forecasts of loss distributions under these two models are developed using two methods of analysis: (1) maximum likelihood estimation and (2) the Bayesian method. These methods are applied to two data sets consisting of 24 and 1,500 paired observations, respectively. The Bayesian analyses are implemented using Markov chain Monte Carlo via WinBUGS, as discussed in Scollnik (2001). A comparison of the analyses reveals that forecasted total losses can be dramatically underestimated by the maximum likelihood estimation method because it ignores the inherent parameter uncertainty.
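A hedged toy version of the case study's point (a conjugate exponential-gamma model stands in for the article's bivariate models and the WinBUGS MCMC; all numbers are invented): the plug-in MLE forecast understates tail probabilities relative to a posterior that carries parameter uncertainty.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
claims = rng.exponential(scale=10.0, size=24)  # small sample, as in the 24-pair data

# MLE plug-in: the rate is fixed at its point estimate.
p_mle = np.exp(-50.0 / claims.mean())          # P(next claim > 50) at the MLE

# Bayesian: Gamma(a0, b0) prior on the rate; the posterior is conjugate.
a0, b0 = 1.0, 1.0
a_n, b_n = a0 + len(claims), b0 + claims.sum()
rates = stats.gamma(a=a_n, scale=1.0 / b_n).rvs(100_000, random_state=1)
p_bayes = np.exp(-50.0 * rates).mean()         # posterior predictive tail probability

print(f"P(claim > 50): MLE plug-in {p_mle:.4f} vs Bayesian {p_bayes:.4f}")
```

Averaging the tail probability over the posterior, rather than plugging in a single point estimate, is what avoids the systematic underestimation of forecasted losses that the case study attributes to maximum likelihood.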

