Related Articles
1.
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g_{0j}. The consequences of different choices for g_{0j} are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g_{0j} in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
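As a rough illustration of how a g-prior yields closed-form posterior model probabilities in this setting, the sketch below enumerates candidate regressor subsets and weights them by their Bayes factors against the null (intercept-only) model. It assumes the common parameterisation in which the slopes have prior variance g·σ²(X_j'X_j)⁻¹ with flat priors on the intercept and log σ, under which the Bayes factor is (1+g)^{(n-1-k_j)/2}[1+g(1-R_j²)]^{-(n-1)/2}; the paper's g_{0j} parameterisation and its MC3 sampling over model space are not reproduced, and all data and variable names are illustrative.

```python
import itertools
import numpy as np

def log_bayes_factor(y, X_j, g):
    """Log Bayes factor of the model with regressors X_j (no intercept column)
    against the intercept-only model, under a Zellner g-prior on the slopes."""
    n, k = X_j.shape
    yc = y - y.mean()
    Xc = X_j - X_j.mean(axis=0)            # centre out the intercept
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    rss = np.sum((yc - Xc @ beta) ** 2)
    r2 = 1.0 - rss / np.sum(yc ** 2)
    return 0.5 * (n - 1 - k) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))

def posterior_model_probs(y, X, g):
    """Posterior probabilities over all subsets of columns of X (uniform model prior)."""
    p = X.shape[1]
    models, logbf = [], []
    for r in range(p + 1):
        for subset in itertools.combinations(range(p), r):
            models.append(subset)
            logbf.append(0.0 if r == 0 else log_bayes_factor(y, X[:, list(subset)], g))
    logbf = np.array(logbf)
    w = np.exp(logbf - logbf.max())         # normalise on the log scale for stability
    return models, w / w.sum()

rng = np.random.default_rng(0)
n, X = 100, rng.normal(size=(100, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)
models, probs = posterior_model_probs(y, X, g=n)   # g = n, a 'unit information'-style choice
print(models[int(np.argmax(probs))], probs.max())
```

With many regressors, full enumeration of subsets becomes infeasible, which is exactly where an MC3-type sampler over model space comes in.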

2.
In what contexts is it desirable that the government, rather than the private sector, takes on the role of an insurer and helps people reduce risks? Our discussion implies that while in a number of areas individuals benefit from well-designed insurance provided by their government, ill-designed public policies (for example, existing pay-as-you-go pension systems) force individuals to insure against their government. It is further discussed how governments could improve their risk-managing role in many areas by using income-contingent loans, provided the country has high-quality institutions and governance. Such loans to artists, sportspeople, flood victims or collapsing financial institutions would replace the existing nonrepayable transfers, grants, subsidies and bailouts. Using a simple efficiency-equity-sustainability framework for comparing income-contingent schemes with conventional public and private insurance policies, we document that this would enable governments to extend their insurance assistance to a greater number of people and institutions – in a way that is not only equitable but also efficient and fiscally sustainable.

3.
We investigate a novel database of 10,217 extreme operational losses from the Italian bank UniCredit. Our goal is to shed light on the dependence between the severity distribution of these losses and a set of macroeconomic, financial, and firm-specific factors. To do so, we use generalized Pareto regression techniques, where both the scale and shape parameters are assumed to be functions of these explanatory variables. We perform the selection of the relevant covariates with a state-of-the-art penalized-likelihood estimation procedure relying on L1-penalty terms. A simulation study indicates that this approach efficiently selects covariates of interest and tackles spurious regression issues encountered when dealing with integrated time series. Lastly, we illustrate the impact of different economic scenarios on the required capital for operational risk. Our results have important implications in terms of risk management and regulatory policy.
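A stripped-down version of the idea, generalised Pareto regression in which the log-scale depends on covariates, the shape is held constant for simplicity, and the coefficients carry an L1 penalty, might look as follows. It is only a sketch under those assumptions, using a generic optimiser rather than the paper's dedicated penalized-likelihood procedure, and the simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_gpd_nll(params, z, W, lam):
    """Negative log-likelihood of GPD exceedances z with log-scale W @ beta and
    constant shape xi, plus an L1 penalty on beta (intercept excluded)."""
    beta, xi = params[:-1], params[-1]
    if xi <= 1e-6:                           # keep the shape positive in this sketch
        return np.inf
    sigma = np.exp(W @ beta)
    t = 1.0 + xi * z / sigma
    if np.any(t <= 0):                       # outside the GPD support
        return np.inf
    nll = np.sum(np.log(sigma) + (1.0 / xi + 1.0) * np.log(t))
    return nll + lam * np.sum(np.abs(beta[1:]))

rng = np.random.default_rng(1)
n = 2000
W = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])   # intercept + 3 covariates
sigma_true = np.exp(0.5 + 0.8 * W[:, 1])                     # only the first covariate matters
xi_true = 0.3
u = rng.uniform(size=n)
z = sigma_true / xi_true * (u ** (-xi_true) - 1.0)           # GPD draws by inversion

x0 = np.concatenate([np.zeros(W.shape[1]), [0.1]])
fit = minimize(penalized_gpd_nll, x0, args=(z, W, 5.0), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print(fit.x)   # estimated [beta, xi]; near-zero beta entries suggest irrelevant covariates
```

A derivative-free optimiser is used here only because the L1 penalty is not smooth; a proper coordinate-descent or proximal scheme would be the natural choice in practice.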

4.
The recommendation problem has been extensively studied by researchers in the fields of data mining, databases and information retrieval. This study presents the design and realisation of an automated, personalised news recommendation system based on a Chi-square statistics-based K-nearest neighbour (χ2SB-KNN) model. The proposed χ2SB-KNN model has the potential to overcome computational-complexity and information-overload problems, reducing runtime and speeding up execution through the use of the critical value of the χ2 distribution. The proposed recommendation engine can alleviate scalability challenges through combined online pattern discovery and pattern matching for real-time recommendations. This work also presents a novel feature selection method, referred to as the Data Discretisation-Based feature selection method, which is used to select the best features for the proposed χ2SB-KNN algorithm at the preprocessing stage of the classification procedure. The proposed χ2SB-KNN model is implemented as an in-house Java program on an experimental website, the OUC newsreaders' website. Finally, we compare the performance of our system with two baseline methods: traditional Euclidean-distance K-nearest neighbour and naive Bayes. The results show a significant improvement of our method over the baselines studied.
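The combination of χ²-based feature selection with a K-nearest-neighbour classifier can be sketched with standard scikit-learn components, as below. This is not the authors' χ2SB-KNN engine or their Data Discretisation-Based selector, only an indication, on an assumed public corpus, of how χ² statistics can prune the feature space before neighbour search.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Any labelled news corpus works; 20 newsgroups is used here purely for illustration.
news = fetch_20newsgroups(subset="train",
                          categories=["sci.space", "rec.autos", "talk.politics.misc"])
X_train, X_test, y_train, y_test = train_test_split(news.data, news.target, random_state=0)

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    SelectKBest(chi2, k=2000),               # keep the features with the largest chi-square scores
    KNeighborsClassifier(n_neighbors=15, metric="cosine"),
)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```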

5.
Summary  The generalized ridge estimator, which considers generalizations of mean square error, is presented, and a mathematical rule for determining the optimal k-value is discussed. The generalized ridge estimator is compared with the least squares, pseudoinverse, James-Stein-type shrinkage, and principal component estimators, with particular attention to improved adjustment of the regression coefficients. An alternative estimation approach that better integrates a priori information is noted. Finally, combining the generalized ridge and robust regression methods is suggested.
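For concreteness, a minimal numpy sketch of a generalized ridge estimator in canonical (principal-component) coordinates is given below, using the familiar Hoerl-Kennard-type rule k_i = s²/α̂_i² for the per-component shrinkage constants. This is one textbook variant offered for illustration, not the optimal-k rule derived in the paper.

```python
import numpy as np

def generalized_ridge(X, y):
    """Generalized ridge estimate with per-component shrinkage k_i = s2 / alpha_i**2,
    computed in the canonical coordinates given by the SVD of the centred design."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
    alpha_ols = (U.T @ yc) / d                       # OLS coefficients in canonical coordinates
    resid = yc - U @ (d * alpha_ols)
    s2 = resid @ resid / (len(y) - X.shape[1] - 1)   # residual variance estimate
    k = s2 / alpha_ols**2                            # Hoerl-Kennard-type shrinkage constants
    alpha_gr = d**2 * alpha_ols / (d**2 + k)         # shrink each canonical coefficient
    return Vt.T @ alpha_gr                           # back to the original coordinates

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 5)) @ np.diag([3, 2, 1, 0.3, 0.1])   # ill-conditioned design
beta_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=80)
print(generalized_ridge(X, y))
```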

6.
Estimation with longitudinal Y having nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y,X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity based on estimating equations constructed using an instrument Z, which is part of X related to Y but unrelated to the dropout propensity conditioned on Y and other covariates. Population means and other parameters in the nonparametric distribution of Y can be estimated based on inverse propensity weighting with estimated propensity. To improve efficiency, we derive a model-assisted regression estimator making use of information provided by the covariates and previously observed Y-values in the longitudinal setting. The model-assisted regression estimator is protected from model misspecification and is asymptotically normal and more efficient when the working models are correct and some other conditions are satisfied. The finite-sample performance of the estimators is studied through simulation, and an application to the HIV-CD4 data set is also presented as illustration.
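The core inverse-propensity-weighting step can be illustrated with a deliberately simplified sketch: a parametric dropout propensity that depends on the possibly missing outcome, identified through moment conditions built from an instrument Z, followed by an IPW mean. The longitudinal structure, the GMM weighting for over-identified cases, and the model-assisted regression refinement of the paper are all omitted; the data and names are illustrative.

```python
import numpy as np
from scipy.optimize import root
from scipy.special import expit

rng = np.random.default_rng(3)
n = 5000
Z = rng.normal(size=n)                      # instrument: related to Y, not to dropout given Y
Y = 1.0 + 0.8 * Z + rng.normal(size=n)
p_obs = expit(1.5 - 0.7 * Y)                # nonignorable: response depends on Y itself
R = rng.uniform(size=n) < p_obs             # R = 1 means Y is observed
Y_obs = np.where(R, Y, 0.0)                 # placeholder where Y is missing

def moments(theta):
    """Just-identified estimating equations E[(R/pi(Y) - 1) * (1, Z)] = 0."""
    pi = expit(theta[0] + theta[1] * Y_obs)  # only entries with R == 1 are actually used
    resid = np.where(R, 1.0 / pi, 0.0) - 1.0
    return np.array([resid.mean(), (resid * Z).mean()])

theta_hat = root(moments, x0=np.array([1.0, 0.0])).x
pi_hat = expit(theta_hat[0] + theta_hat[1] * Y_obs)
mu_hat = np.sum(np.where(R, Y_obs / pi_hat, 0.0)) / n   # Horvitz-Thompson-type IPW mean
print(theta_hat, mu_hat, Y.mean())
```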

7.
The individual accounts in China's basic pension insurance for urban enterprise employees embody, in essence, a right to old-age support rather than a property right; the fundamental significance of individual-account entitlements under basic pension insurance lies in safeguarding enterprise employees' social right to old-age support. China began piloting the full funding of these individual accounts in 2001. Funding the accounts has not only failed to defuse the risk of pension payment shortfalls, but has also made it harder to preserve and grow the value of the pension funds, with the degree of fund depreciation increasing as the funded share rises. In addition, funding the individual accounts has pushed up the savings rate, suppressed investment and consumption demand, and thereby crowded out economic development. To address these problems, a fairer and more sustainable pension system should be established, using notional (non-financial) individual accounts to give fundamental expression to enterprise employees' property rights and social pension rights and to resolve the problems that have arisen under the funded-account approach.

8.
Motivated by examples from the automobile industry, insurance, retailing, and multinational strategy, we study an organizational structure we refer to as "partial delegation." In a bargaining problem between an informed party and an uninformed party, partial delegation involves the informed party delegating bargaining to an agent while retaining control of its private information. We show that partial delegation enables the informed party to earn information rents without creating quantity distortions. First-best quantities are traded in equilibrium. We argue that partial delegation allows an informed party to implement efficient trade with outside parties by endogenously improving its bargaining power.

9.
The main goal of both Bayesian model selection and classical hypotheses testing is to make inferences with respect to the state of affairs in a population of interest. The main differences between the two approaches are the explicit use of prior information by Bayesians and the explicit use of null distributions by classicists. Formalization of prior information in prior distributions is often difficult. In this paper two practical approaches for specifying prior distributions (encompassing priors and training data) are presented. The computation of null distributions is relatively easy. However, as will be illustrated, a straightforward interpretation of the resulting p-values is not always easy. Bayesian model selection can be used to compute posterior probabilities for each of a number of competing models. This provides an alternative to the currently prevalent testing of hypotheses using p-values. Both approaches are compared and illustrated using case studies. Each case study fits within the framework of the normal linear model, that is, analysis of variance and multiple regression.
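One of the devices mentioned, the encompassing-prior approach, reduces the Bayes factor of an inequality-constrained model against the unconstrained (encompassing) model to a ratio of two proportions: the share of posterior draws versus the share of prior draws that satisfy the constraint. Below is a minimal sketch for an ANOVA-type constraint mu1 < mu2 < mu3 under an assumed conjugate setup with known error variance; it is illustrative only, not a reproduction of the paper's case studies.

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated one-way ANOVA data with three groups
means_true, sigma, n_per = [0.0, 0.4, 0.9], 1.0, 30
data = [rng.normal(m, sigma, n_per) for m in means_true]

# Encompassing model: vague normal priors mu_j ~ N(0, tau^2), error variance known
tau2, n_draws = 100.0, 200_000
prior_draws = rng.normal(0.0, np.sqrt(tau2), size=(n_draws, 3))

# Conjugate posteriors for each group mean (normal-normal updating)
post_draws = np.empty_like(prior_draws)
for j, y in enumerate(data):
    post_var = 1.0 / (1.0 / tau2 + len(y) / sigma**2)
    post_mean = post_var * (y.sum() / sigma**2)
    post_draws[:, j] = rng.normal(post_mean, np.sqrt(post_var), n_draws)

constraint = lambda m: (m[:, 0] < m[:, 1]) & (m[:, 1] < m[:, 2])
f_post = constraint(post_draws).mean()      # posterior mass satisfying mu1 < mu2 < mu3
c_prior = constraint(prior_draws).mean()    # prior mass satisfying the constraint (about 1/6)
print("Bayes factor (constrained vs encompassing):", f_post / c_prior)
```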

10.
The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy-to-implement semiparametric estimation framework with parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root-n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data-generating processes. We then provide an empirical illustration where we estimate the effect of health insurance on doctor visits. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.

11.
Long-term insurance contracts are widespread, particularly in public health and the labor market. Such contracts typically involve monthly or annual premia which are related to the insured's risk profile. A given profile may change, based on observed outcomes which depend on the insured's prevention efforts. The aim of this paper is to analyze the latter relationship. In a two-period optimal insurance contract in which the insured's risk profile is partly governed by her effort on prevention, we find that both the insured's risk aversion and prudence play a crucial role. If absolute prudence is greater than twice absolute risk aversion, moral hazard justifies setting a higher premium in the first period but also greater premium discrimination in the second period. This result provides insights on the trade-offs between long-term insurance and the incentives arising from risk classification, as well as between inter- and intragenerational insurance.
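In standard expected-utility notation the condition quoted in the abstract can be written out explicitly. With u the insured's utility of wealth (u' > 0, u'' < 0), absolute risk aversion and absolute prudence are defined as below; this is a textbook rendering of the condition, not a quotation of the paper's model.

```latex
A(w) = -\frac{u''(w)}{u'(w)}, \qquad
P(w) = -\frac{u'''(w)}{u''(w)}, \qquad
P(w) > 2A(w) \;\Longleftrightarrow\; u'''(w)\,u'(w) > 2\bigl(u''(w)\bigr)^{2}.
```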

12.
In this article, we develop a modern perspective on Akaike's information criterion and Mallows's Cp for model selection, and propose generalisations to spherically and elliptically symmetric distributions. Despite the differences in their respective motivations, Cp and Akaike's information criterion are equivalent in the special case of Gaussian linear regression. In this case, they are also equivalent to a third criterion, an unbiased estimator of the quadratic prediction loss, derived from loss estimation theory. We then show that the form of the unbiased estimator of the quadratic prediction loss under a Gaussian assumption still holds under a more general distributional assumption, the family of spherically symmetric distributions. One of the features of our results is that our criterion does not rely on the specificity of the distribution, but only on its spherical symmetry. The same kind of criterion can be derived for a family of elliptically contoured distributions, which allows correlations, when considering the invariant loss. More specifically, the unbiasedness property is relative to a distribution associated with the original density.
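The Gaussian-linear-regression equivalence referred to here is easy to state. For a candidate model with p coefficients, residual sum of squares RSS_p, and error variance σ² treated as known, a standard rendering (not a quotation from the article) is:

```latex
C_p = \frac{\mathrm{RSS}_p}{\sigma^2} - n + 2p, \qquad
\mathrm{AIC} = \frac{\mathrm{RSS}_p}{\sigma^2} + 2p + n\log(2\pi\sigma^2), \qquad
\hat\delta_p = \mathrm{RSS}_p + (2p - n)\,\sigma^2 .
```

The three quantities differ only by terms that do not depend on the candidate model (AIC equals Cp plus a constant, and the unbiased prediction-loss estimate equals σ² times Cp), so they rank models identically; this is the equivalence the article takes as its starting point.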

13.
Inherent Defects Insurance (IDI) is a type of construction quality insurance, applied in China mainly to residential projects. IDI requires quality control and risk assessment throughout the entire construction process. Conventional techniques such as levelling surveys and GPS cannot meet IDI's need for large-scale, rapid, long-term monitoring of at-risk buildings. Optical remote sensing is an observation technology capable of monitoring ground features over large areas and can provide data support for IDI monitoring work. This paper reviews recent literature on the application of optical remote sensing to building-height monitoring, introduces and summarizes classification, edge-detection and thresholding methods, and argues that optical remote sensing has considerable application value and broad prospects in the IDI insurance industry.
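To make the named operations concrete, the toy sketch below applies global thresholding and Sobel edge detection to a synthetic optical image containing a single bright 'building'. It only illustrates the two basic operations on assumed data; it does not reconstruct any of the building-height methods surveyed in the paper.

```python
import numpy as np
from scipy import ndimage

# Synthetic grey-scale scene: a bright rectangular "building" on a darker background.
rng = np.random.default_rng(6)
scene = rng.normal(0.2, 0.05, size=(200, 200))
scene[60:140, 80:150] += 0.6

# Thresholding: pixels above a global threshold are labelled as building candidates.
mask = scene > 0.5

# Edge detection: the Sobel gradient magnitude outlines the building footprint.
gx, gy = ndimage.sobel(scene, axis=0), ndimage.sobel(scene, axis=1)
edges = np.hypot(gx, gy) > 1.0

labels, n_objects = ndimage.label(mask)
print("objects found:", n_objects,
      "footprint pixels:", int(mask.sum()),
      "edge pixels:", int(edges.sum()))
```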

14.
Efficient sets with and without the expected utility hypothesis
Consider a feasible set, X, of c.d.f.'s. Assume that the set of decision makers, who must choose from X, includes non-expected utility decision makers who are risk averse in some weaker sense. We show that in this case the efficient set of X expands relative to the expected utility case. We characterize the efficient sets for each notion of risk aversion, including the expected utility case. It is also shown that limited-coverage insurance policies, which are not efficient under the expected utility hypothesis, belong to the efficient set when weakly risk-averse non-expected utility functionals are assumed to exist.

15.
Survey Estimates by Calibration on Complex Auxiliary Information
In the last decade, calibration estimation has developed into an important field of research in survey sampling. Calibration is now an important methodological instrument in the production of statistics. Several national statistical agencies have developed software designed to compute calibrated weights based on auxiliary information available in population registers and other sources. This paper reviews some recent progress and offers some new perspectives. Calibration estimation can be used to advantage in a range of different survey conditions. This paper examines several situations, including estimation for domains in one-phase sampling, estimation for two-phase sampling, and estimation for two-stage sampling with integrated weighting. Typical of those situations is complex auxiliary information, a term that we use for information made up of several components. An example occurs when a two-stage sample survey has information both for units and for clusters of units, or when estimation for domains relies on information from different parts of the population. Complex auxiliary information opens up more than one way of computing the final calibrated weights to be used in estimation. They may be computed in a single step or in two or more successive steps. Depending on the approach, the resulting estimates do differ to some degree. All significant parts of the total information should be reflected in the final weights. The effectiveness of the complex information is mirrored by the variance of the resulting calibration estimator. Its exact variance is not presentable in simple form. Close approximation is possible via the corresponding linearized statistic. We define and use automated linearization as a shortcut in finding the linearized statistic. Its variance is easy to state, to interpret and to estimate. The variance components are expressed in terms of residuals, similar to those of standard regression theory. Visual inspection of the residuals reveals how the different components of the complex auxiliary information interact and work together toward reducing the variance.
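The single-step linear calibration referred to here can be sketched in a few lines: starting from design weights d_i, find calibrated weights w_i = d_i(1 + x_i'λ) that reproduce known population totals of the auxiliary variables exactly. The sketch below uses the standard chi-square-distance (GREG-type) solution on simulated data; the two-phase, two-stage and integrated-weighting variants discussed in the paper are not shown.

```python
import numpy as np

def linear_calibration(d, X, totals):
    """Calibrated weights w = d * (1 + X @ lam) such that sum_i w_i x_i = totals."""
    T = (X * d[:, None]).T @ X                    # sum_i d_i x_i x_i'
    lam = np.linalg.solve(T, totals - X.T @ d)
    return d * (1.0 + X @ lam)

rng = np.random.default_rng(5)
N, n = 10_000, 400
x_pop = np.column_stack([np.ones(N), rng.gamma(2.0, 1.0, N)])   # auxiliary variables, known totals
y_pop = 3.0 + 2.0 * x_pop[:, 1] + rng.normal(size=N)
sample = rng.choice(N, size=n, replace=False)                   # simple random sample
d = np.full(n, N / n)                                           # design weights
w = linear_calibration(d, x_pop[sample], x_pop.sum(axis=0))     # calibrate to the known totals

print("HT estimate:   ", d @ y_pop[sample])
print("calibrated:    ", w @ y_pop[sample])
print("true total:    ", y_pop.sum())
print("totals matched:", np.allclose(x_pop[sample].T @ w, x_pop.sum(axis=0)))
```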

16.
The phenomenon of adverse selection caused by asymmetric information dominates the insurance market. In this paper, based on principal-agent theory, we establish a two-period dynamic insurance contract model with a low-compensation period. The model uses the low-compensation period, together with increases and decreases in the bonus, as tools to identify policyholders' risk types. We prove that this model achieves a strict Pareto improvement relative to the two-period static insurance contract model with a low-compensation period. Moreover, we analyze the conclusions graphically, which can help insurance companies design more comprehensive insurance contracts.

17.
A. S. Young, Metrika (1987) 34(1), 185-194
Summary  It has been asserted in the past that any Bayesian treatment of the model selection problem in regression using some form of continuous loss structure would always lead to using the largest possible model (Leamer 1979; Chow 1981). We show in this paper that, provided the distinction between the choice of a model and the estimation of its parameters is maintained, the Kullback-Leibler information measure can be used in a Bayesian context to derive a criterion which may lead to parsimony of parameters in regression analysis. The regression models are taken as restrictions of a general class of distributions which includes the true n-variate distribution of the variable y. Separate criteria for the cases of known and unknown variance of y are obtained. In the limiting situation when prior opinions about the parameters are weak, these criteria reduce to special cases of the generalized Cp and AIC criteria (Atkinson 1981). Relationship with other criteria is discussed.

18.
Recently, extended risk classification has become an important issue in life insurance and annuity markets. Using various risk factors, one can construct various risk classes. This enables insurers to provide more equitable life insurance and annuity benefits for individuals in different risk classes and to manage mortality/longevity risk more efficiently. This article discusses the development of a mortality model that reflects the impact of various risk factors on mortality. The model uses a Markov process combined with generalized linear models. The model is used to illustrate how the various risk factors influence actuarial present values of life insurance and annuity benefits.
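The final step, turning class-specific mortality into actuarial present values, can be illustrated with a small sketch: given one-year death probabilities q_x for a risk class (invented Gompertz-like numbers here, not rates fitted with the paper's Markov/GLM model), compute the APV of a whole-life insurance of 1 and of a whole-life annuity-due of 1 per year.

```python
import numpy as np

def apv_insurance_and_annuity(q, v):
    """APV of a unit whole-life insurance (paid at the end of the year of death) and of a
    whole-life annuity-due of 1 per year, from one-year death probabilities q."""
    p = 1.0 - q
    survival = np.concatenate([[1.0], np.cumprod(p)])    # k-year survival probabilities kp_x
    k = np.arange(len(q))
    A = np.sum(v ** (k + 1) * survival[:-1] * q)          # sum of v^(k+1) * kp_x * q_{x+k}
    a_due = np.sum(v ** k * survival[:-1])                # sum of v^k * kp_x
    return A, a_due

# Illustrative mortality for two risk classes (made-up rates, ages 65..120)
ages = np.arange(65, 121)
q_standard = np.minimum(1.0, 0.0005 * np.exp(0.095 * (ages - 30)))
q_impaired = np.minimum(1.0, 1.8 * q_standard)            # higher-mortality risk class
v = 1.0 / 1.03                                            # 3% interest

for name, q in [("standard", q_standard), ("impaired", q_impaired)]:
    A, a = apv_insurance_and_annuity(q, v)
    print(f"{name:9s}  insurance APV = {A:.4f}   annuity-due APV = {a:.3f}")
```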

19.
The standard one-period model for insurance demand does not consider the interaction between the present and the future. Reflecting this observation, we analyze intertemporal insurance demand and saving in a two-period model with multiple loss states. When an individual has no access to a capital market, we first find that an actuarially fair premium does not guarantee full insurance in general, unlike in the standard approach. The income stream and discount factors are also important in determining insurance demand. Second, insurance is neither an inferior good nor a Giffen good. Third, an increase in the concavity of the utility function does not always lead to an increase in insurance demand. The current income level and changes in downside risk aversion affect insurance demand. When the individual has access to a capital market, we further have the following observations. Fourth, an actuarially fair premium leads to full insurance. Fifth, insurance is an inferior good and can be a Giffen good under decreasing absolute risk aversion (DARA). An increase in the interest rate leads to a lower insurance demand and a higher saving when the relative risk aversion is less than unity. Lastly, an increase in the concavity of the utility function leads to an increase in insurance demand and a decrease in saving. Taken together, our findings show that the standard results are not obtainable if insurance demand is considered in isolation from the capital market.
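A stylized two-period formulation with capital-market access (one possible formalisation, not the paper's exact model) chooses coverage α and saving s to maximise discounted expected utility over loss states:

```latex
\max_{\alpha,\,s}\; u\bigl(y_1 - \alpha P - s\bigr)
+ \beta \sum_{j=1}^{m} \pi_j\,
u\bigl(y_2 + (1+r)s - L_j + \alpha I_j\bigr),
```

where y_1 and y_2 are the income stream, P the premium per unit of coverage, r the interest rate, β the discount factor, and (π_j, L_j, I_j) the probability, loss and indemnity in state j. The no-capital-market case corresponds to imposing s = 0, which is where the fair-premium/full-insurance result can break down.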

20.
There are surveys that gather precise information on an outcome of interest, but measure continuous covariates by a discrete number of intervals, in which case the covariates are interval censored. For applications with a second independent dataset precisely measuring the covariates, but not the outcome, this paper introduces a semiparametrically efficient estimator for the coefficients in a linear regression model. The second sample serves to establish point identification. An empirical application investigating the relationship between income and body mass index illustrates the use of the estimator.

