Similar Documents
20 similar documents found (search time: 578 ms)
1.
This paper analyzes the single-period portfolio selection problem on the location-scale return family. The skew normal distribution, after recentering and reparameterization, is shown to belong to this family. The recentered and reparameterized distribution, called the factor-recentered skew normal, can be expressed as a skew factor model characterized by one location parameter and two scale parameters. Risk preference over the scale parameter is non-monotonic: risk-averse investors prefer a larger (smaller) scale when the scale is negative (positive). The three-parameter efficient set is part of a conical surface bounded by two lines, and positive-skewness and negative-skewness portfolios do not coexist in the efficient set. Numerical cases under constant absolute risk aversion are analyzed using its closed-form certainty equivalent, and an asset pricing formula that nests the CAPM is obtained.
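The closed-form certainty equivalent under constant absolute risk aversion can be sketched for a generic skew-normal return. A minimal Python illustration, assuming the standard SN(xi, omega, alpha) parameterization and its known moment generating function rather than the paper's factor-recentered form:

```python
import numpy as np
from scipy.stats import norm, skewnorm

def cara_ce_skewnormal(xi, omega, alpha, a):
    """CARA certainty equivalent CE = -(1/a) log E[exp(-a R)] for
    R ~ SN(xi, omega, alpha), using the skew-normal MGF
    M(t) = 2 exp(xi t + omega^2 t^2 / 2) Phi(omega delta t)."""
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    log_mgf = (np.log(2.0) - a * xi + 0.5 * omega ** 2 * a ** 2
               + norm.logcdf(-omega * delta * a))
    return -log_mgf / a

# Monte Carlo sanity check: the two numbers should agree closely
rng = np.random.default_rng(0)
xi, omega, alpha, a = 0.05, 0.2, 3.0, 2.0
r = skewnorm.rvs(alpha, loc=xi, scale=omega, size=1_000_000, random_state=rng)
print(cara_ce_skewnormal(xi, omega, alpha, a),
      -np.log(np.mean(np.exp(-a * r))) / a)
```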

2.

The presence of outliers in the data has implications for stochastic frontier analysis, and indeed any performance analysis methodology, because they may lead to imprecise parameter estimates and, crucially, lead to an exaggerated spread of efficiency predictions. In this paper we replace the normal distribution for the noise term in the standard stochastic frontier model with a Student’s t distribution, which generalises the normal distribution by adding a shape parameter governing the degree of kurtosis. This has the advantages of introducing flexibility in the heaviness of the tails, which can be determined by the data, as well as containing the normal distribution as a limiting case, and we outline how to test against the standard model. Monte Carlo simulation results for the maximum simulated likelihood estimator confirm that the model recovers appropriate frontier and distributional parameter estimates under various values of the true shape parameter. The simulation results also indicate the influence of a phenomenon we term ‘wrong kurtosis’ in the case of small samples, which is analogous to the issue of ‘wrong skewness’ previously identified in the literature. We apply a Student’s t-half normal cost frontier to data for highways authorities in England, and this formulation is found to be preferred by statistical testing to the comparator normal-half normal cost frontier model. The model yields a significantly narrower range of efficiency predictions, which are non-monotonic at the tails of the residual distribution.
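A maximum simulated likelihood estimator for a Student's t-half normal frontier of this kind can be sketched by integrating the half-normal inefficiency out by simulation. The code below is a generic toy implementation, not the authors' estimator; the starting values and draw count are arbitrary:

```python
import numpy as np
from scipy.stats import t as student_t
from scipy.optimize import minimize

# Cost frontier y = x'b + v + u with t-distributed noise v and
# half-normal inefficiency u >= 0, estimated by simulated likelihood.
rng = np.random.default_rng(1)
R = 200  # common random draws per observation

def neg_sim_loglik(theta, y, X, z):
    b = theta[:X.shape[1]]
    sigma_v = np.exp(theta[-3])
    sigma_u = np.exp(theta[-2])
    df = 2.0 + np.exp(theta[-1])           # keep the shape parameter above 2
    eps = y - X @ b                        # composed error, shape (n,)
    u = sigma_u * np.abs(z)                # half-normal draws, shape (n, R)
    dens = student_t.pdf((eps[:, None] - u) / sigma_v, df) / sigma_v
    return -np.sum(np.log(dens.mean(axis=1) + 1e-300))

n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ np.array([1.0, 0.8]) + 0.3 * rng.standard_t(df=5, size=n)
     + np.abs(rng.normal(0.0, 0.4, size=n)))   # inefficiency raises cost
z = rng.normal(size=(n, R))
fit = minimize(neg_sim_loglik, x0=np.zeros(5), args=(y, X, z),
               method="Nelder-Mead", options={"maxiter": 5000})
```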


3.
In this paper we present inference methods based on an 'incorrect' criterion, in the sense that optimizing this criterion does not directly provide a consistent estimator of the parameter of interest. Moreover, the argument of the criterion, called the auxiliary parameter, may have a larger dimension than the parameter of interest. A second step, based on simulations, delivers a consistent and asymptotically normal estimator of the parameter of interest. Various testing procedures are also proposed. The methods described here require only that the model can be simulated, so they should be useful for models whose complexity rules out a direct approach. Various fields of application are suggested (microeconometrics, finance, macroeconometrics).
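A toy version of the two-step logic, assuming an AR(1) model matched through a single auxiliary statistic, might look as follows; all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

def simulate_ar1(rho, shocks):
    y = np.zeros_like(shocks)
    for t in range(1, len(y)):
        y[t] = rho * y[t - 1] + shocks[t]
    return y

def auxiliary_stat(y):
    # the 'incorrect' criterion: a simple lag-1 regression coefficient
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

n, rho_true = 1000, 0.6
beta_obs = auxiliary_stat(simulate_ar1(rho_true, rng.normal(size=n)))

shocks_sim = rng.normal(size=(10, n))   # fixed draws keep the criterion smooth

def ii_criterion(rho):
    # second step: match the auxiliary statistic computed on simulated paths
    beta_sim = np.mean([auxiliary_stat(simulate_ar1(rho, e)) for e in shocks_sim])
    return (beta_obs - beta_sim) ** 2

rho_hat = minimize_scalar(ii_criterion, bounds=(-0.99, 0.99), method="bounded").x
```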

4.
In this paper maximum-likelihood estimates of the parameters of the two-level CES function, obtained by direct estimation of this function, are given. In addition, the authors show how a Bayesian analysis may help resolve the difficulties related to, but not specific to, this particular estimation problem. It is shown that numerical integration of the posterior distribution may indicate which parameter has to be pinned down, and at which value, when multicollinearity precludes unconditional maximization of the likelihood. This approach is likely to have a wider field of application.

5.
Tsung-Shan Tsou, Metrika, 2010, 71(1): 101–115
This paper introduces a way of modifying the bivariate normal likelihood function. The adjusted likelihood generates valid likelihood inferences for the regression parameter of interest even when the bivariate normal assumption fails. The retained asymptotic validity requires no knowledge of the true underlying joint distribution, so long as its second moments exist. The extension to multivariate situations is straightforward in theory yet appears computationally arduous. Nevertheless, it is illustrated that implementing this seemingly sophisticated procedure is almost effortless, requiring only outputs from existing statistical software. The efficacy of the proposed parametric approach is demonstrated via simulations.

6.
This paper introduces a distinct log-linear approach to the analysis of correlates of Guttman-scale positions. I introduce new models, which are referred to as the log-quadratic models. The models are obtained by imposing certain structural constraints on specific quasi-independence models used by Goodman for the Guttman-scale analysis. The derivation of the models owes in part to early studies of Guttman and Suchman on the intensity of attitudes as a structural component of the Guttman scale. The log-quadratic models employ three important structural parameters that characterize (1) the extent of scalability, (2) the mean Guttman-scale position, and (3) the extent of polarization of scale-type response patterns. The third parameter may also be interpreted in certain cases as a measure of attitudinal intensity. These parameters can be used efficiently as dependent variables for a multinomial logit analysis with status covariates. An application examines the effects of region, occupational status, marital status, age cohort and sex on the patterns of responses to a set of questions about mistreatment by tax authorities. A comparison of results with the application of Schwartz's latent-class models to the same data is also made.

7.
In spite of Taguchi's robust parameter design (Introduction to Quality Engineering: Designing Quality into Products and Processes, 1986, Asian Productivity Organization, Tokyo), tolerance design is still important at the design stage of products and processes. Taguchi's proposal and related methods for tolerance design, however, do not efficiently use the information that can be obtained from the parameter design experiment. In this paper we introduce a new method for tolerance design based on the response surface approach to parameter design. It is a flexible method because non-normal distributions of the noise factors and the quality characteristic are allowed; moreover, no new physical experiment needs to be performed. Essentially, the tolerances of the noise factors are maximized, subject to constraints ensuring that the mean value of the quality characteristic remains on target and the fraction nonconforming stays below a pre-specified maximum. Some aspects of model uncertainty are discussed, and the method is illustrated by means of an example.
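The constrained tolerance maximization can be mimicked with a crude Monte Carlo line search; the fitted response surface below is invented for illustration, and the on-target constraint on the mean is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
SPEC_LO, SPEC_HI, P_MAX = 9.0, 11.0, 0.01  # spec limits, max fraction nonconforming

def response(z):
    # hypothetical response surface fitted from a parameter design experiment
    return 10.0 + 0.8 * z + 0.3 * z ** 2

def frac_nonconforming(tol, n=200_000):
    z = rng.uniform(-tol, tol, size=n)      # noise factor held within +/- tol
    y = response(z)
    return np.mean((y < SPEC_LO) | (y > SPEC_HI))

# widest tolerance whose simulated fraction nonconforming stays admissible
tol_star = max(t for t in np.linspace(0.01, 2.0, 200)
               if frac_nonconforming(t) <= P_MAX)
```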

8.
Tamás Rudas, Metrika, 1999, 50(2): 163–172
A measure of the fit of a statistical model can be obtained by estimating the relative size of the largest fraction of the population where a distribution belonging to the model may be valid. This is the mixture index of fit, suggested for contingency table models by Rudas, Clogg and Lindsay (1994), and it is extended here to models involving continuous observations. In particular, the approach is applied to regression models with normal and uniform error structures. Best fit, as measured by the mixture index of fit, is obtained with minimax estimation of the regression parameters. Therefore, whenever minimax estimation is used for these problems, the mixture index of fit provides a natural approach for measuring model fit and for variable selection.
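Since the index is attained at the minimax estimate of the regression parameters, a sketch of minimax (Chebyshev) regression as a linear program may be useful; this is a generic formulation, not code from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, k = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.uniform(-0.5, 0.5, size=n)

# variables v = (b_1..b_k, t); minimize t subject to |y - Xb| <= t
c = np.r_[np.zeros(k), 1.0]
A_ub = np.vstack([np.c_[X, -np.ones(n)],     #  Xb - t <= y
                  np.c_[-X, -np.ones(n)]])   # -Xb - t <= -y
b_ub = np.r_[y, -y]
sol = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (k + 1))
b_minimax, max_abs_resid = sol.x[:k], sol.x[k]
```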

9.
This paper investigates the effect that covariate measurement error has on a treatment effect analysis built on an unconfoundedness restriction in which there is conditioning on error-free covariates. The approach uses small parameter asymptotic methods to obtain the approximate effects of measurement error for estimators of average treatment effects. The approximations can be estimated using data on observed outcomes, the treatment indicator and error-contaminated covariates, without employing additional information from validation data or instrumental variables. The results can be used in a sensitivity analysis to probe the potential effects of measurement error on the evaluation of treatment effects.

10.
The newsstand location problem is a type of objective optimization problem. In the mathematical model solved by the exact centre-of-gravity method, the problem is an iterative programming problem and cannot be solved directly with the conventional Excel Solver approach. This paper introduces intermediate variables to carry out the iteration step by step: at each step the error between two successive coordinate pairs is compared, and the target location is reached within a finite time and number of iterations. The result agrees with that obtained by manual calculation.
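The iteration described here is essentially the Weiszfeld procedure for the weighted minisum location problem; a minimal sketch with made-up demand points and weights:

```python
import numpy as np

pts = np.array([[1.0, 2.0], [4.0, 1.0], [3.0, 5.0], [6.0, 4.0]])  # demand points (toy)
w = np.array([10.0, 5.0, 8.0, 12.0])                              # demand weights (toy)

x = (w[:, None] * pts).sum(axis=0) / w.sum()   # start at the plain centre of gravity
for _ in range(1000):
    d = np.maximum(np.linalg.norm(pts - x, axis=1), 1e-12)  # guard zero distances
    x_new = ((w / d)[:, None] * pts).sum(axis=0) / (w / d).sum()
    if np.linalg.norm(x_new - x) < 1e-8:       # compare successive coordinate pairs
        x = x_new
        break
    x = x_new
print("located site:", x)
```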

11.
In this paper we compare classical econometrics, calibration and Bayesian inference in the context of the empirical analysis of factor demands. Our application is based on a popular flexible functional form for the firm's cost function, namely Diewert's Generalized Leontief function, and uses the well-known Berndt and Wood 1947–1971 KLEM data on the US manufacturing sector. We illustrate how the Gibbs sampling methodology can easily be used to calibrate parameter values and elasticities on the basis of previous knowledge from alternative studies on the same data but with different functional forms. We rely on a system of mixed non-informative diffuse priors for some key parameters and informative tight priors for others. Within the Gibbs sampler, we employ rejection sampling to incorporate parameter restrictions that are suggested by economic theory but generally rejected by economic data. Our results show that the parameters assigned non-informative priors are estimated at values almost equal to the standard SUR estimates, whereas differences emerge for the parameters assigned informative priors. Moreover, noticeable discrepancies arise in some crucial parameter estimates obtained with and without rejection sampling.
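The rejection step inside such a Gibbs sampler can be sketched generically: a full-conditional draw is kept only if it satisfies the theory-implied restriction. The conditional and the restriction below are placeholders, not the paper's Generalized Leontief setup:

```python
import numpy as np

rng = np.random.default_rng(5)

def draw_restricted(mean, cov, restriction, max_tries=10_000):
    """Draw from N(mean, cov) truncated to the region where restriction holds."""
    for _ in range(max_tries):
        theta = rng.multivariate_normal(mean, cov)
        if restriction(theta):
            return theta
    raise RuntimeError("restriction region has negligible mass under the conditional")

# e.g. keep only draws with a non-positive second (cross) parameter
theta = draw_restricted(np.array([0.1, -0.2]), 0.05 * np.eye(2),
                        lambda th: th[1] <= 0.0)
```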

12.
The score test statistic for testing whether an error covariance is zero is derived for a normal linear recursive model for fully observed, censored or grouped data. The test, obtained by regarding non-zero error covariances as arising from correlated random parameter variation, is shown to be closely related to the Information Matrix test. It turns out that the statistic, which is asymptotically N(0,1) under the null, examines the sample covariance of appropriately defined residuals.
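In the fully observed case the flavour of the statistic can be illustrated as follows; this is a hand-rolled approximation using an outer-product variance estimate, not the paper's exact derivation for censored or grouped data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
y1 = 1.0 + 0.5 * x + rng.normal(size=n)
y2 = 0.5 + 0.8 * y1 + rng.normal(size=n)   # recursive structure

def ols_resid(y, Z):
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ beta

e1 = ols_resid(y1, np.column_stack([np.ones(n), x]))
e2 = ols_resid(y2, np.column_stack([np.ones(n), y1]))
prod = e1 * e2                             # product of the two equations' residuals
score_stat = np.sqrt(n) * prod.mean() / prod.std()  # approximately N(0,1) under H0
```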

13.
钟会生, 张金团, 《价值工程》, 2011, 30(19): 95–96
The four acoustic testing parameters (sound velocity, acoustic transit time, amplitude, and dominant frequency) are the main basis for judging the integrity of a pile foundation. In practical pile testing, however, no single parameter is precise enough on its own to judge the quality of the pile shaft; a judgment must be reached by comprehensive comparison, since relying on the anomaly of any one parameter alone easily leads to contradictory conclusions. Moreover, the PSD and sound velocity parameters can be grouped as a single parameter.

14.
Characterizations of gamma-minimax predictors for linear combinations of the unknown parameter and a random variable having the multinomial distribution are established under arbitrary squared error loss in two situations: when the sample size is fixed and when the sample size is a realization of a random variable. It is assumed throughout that the available vague prior information about the unknown parameter can be described by a class of priors whose vector of first moments belongs to a suitable convex and compact set. Several known gamma-minimax and minimax results can be obtained from the characterizations derived in the present paper.

15.
Predicting the evolution of mortality rates plays a central role for life insurance and pension funds. Various stochastic frameworks have been developed to model mortality patterns by taking into account the main stylized facts driving these patterns. However, relying on the prediction of one specific model can be too restrictive and can lead to some well-documented drawbacks, including model misspecification, parameter uncertainty, and overfitting. To address these issues we first consider mortality modeling in a Bayesian negative-binomial framework to account for overdispersion and the uncertainty about the parameter estimates in a natural and coherent way. Model averaging techniques are then considered as a response to model misspecifications. In this paper, we propose two methods based on leave-future-out validation and compare them to standard Bayesian model averaging (BMA) based on marginal likelihood. An intensive numerical study is carried out over a large range of simulation setups to compare the performances of the proposed methodologies. An illustration is then proposed on real-life mortality datasets, along with a sensitivity analysis to a Covid-type scenario. Overall, we found that both methods based on an out-of-sample criterion outperform the standard BMA approach in terms of prediction performance and robustness.
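One simple way to turn leave-future-out scores into model weights, in the pseudo-BMA spirit, is sketched below; the scores are placeholders standing in for fitted Bayesian mortality models evaluated on a held-out future window:

```python
import numpy as np

def lfo_weights(log_pred_scores):
    """log_pred_scores[m]: total out-of-sample log predictive density of model m."""
    s = np.asarray(log_pred_scores, dtype=float)
    w = np.exp(s - s.max())   # subtract the max for numerical stability
    return w / w.sum()

# e.g. three candidate mortality models scored on the left-out years
print(lfo_weights([-1520.3, -1498.7, -1505.2]))  # nearly all weight on model 2
```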

16.
We develop in this paper a novel portfolio selection framework featuring double robustness in both return distribution modeling and portfolio optimization. While predicting future return distributions is always the most compelling challenge in investment, any underlying distribution can always be well approximated by a mixture distribution, provided the component list of the mixture includes all possible distributions corresponding to the scenario analysis of potential market modes. Adopting a mixture distribution enables us to (1) reduce the problem of distribution prediction to a parameter estimation problem in which the mixture weights are estimated under a Bayesian learning scheme and the corresponding credible regions of the mixture weights are obtained as well, and (2) harmonize information from different channels, such as historical data, market-implied information and investors' subjective views. We further formulate a robust mean-CVaR portfolio selection problem to deal with the inherent uncertainty in predicting future return distributions. Employing duality theory, we show that the robust portfolio selection problem via learning with a mixture model can be reformulated as a linear program or a second-order cone program, which can be solved effectively in polynomial time. We present the results of simulation analyses and preliminary empirical tests to illustrate the significance of the proposed approach and demonstrate its pros and cons.
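The non-robust core of this formulation, a scenario-based mean-CVaR linear program in the Rockafellar-Uryasev form with scenarios drawn from a two-mode Gaussian mixture, can be sketched as follows; the robustification over the credible region of the mixture weights is omitted, and all parameters are toy values:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n_assets, S, beta = 4, 1000, 0.95

# mixture scenarios: a "calm" and a "stressed" market mode
calm = rng.normal(0.001, 0.01, size=(S, n_assets))
stress = rng.normal(-0.02, 0.04, size=(S, n_assets))
R = np.where((rng.random(S) < 0.8)[:, None], calm, stress)

# variables v = (w, alpha, u); minimize CVaR = alpha + mean(u) / (1 - beta)
c = np.r_[np.zeros(n_assets), 1.0, np.ones(S) / (S * (1 - beta))]
A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])  # u_s >= -R_s'w - alpha
b_ub = np.zeros(S)
A_eq = np.r_[np.ones(n_assets), 0.0, np.zeros(S)][None, :]  # budget: sum w = 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * S
sol = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w_opt = sol.x[:n_assets]   # long-only minimum-CVaR weights
```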

17.
Estimation of the one-sided error component in stochastic frontier models may erroneously attribute firm characteristics to inefficiency if heterogeneity is unaccounted for. However, unobserved inefficiency heterogeneity has been little explored. In this work, we propose to capture it through a random parameter that may affect the location, the scale, or both parameters of a truncated normal inefficiency distribution, using a Bayesian approach. Our findings with two real data sets suggest that including a random parameter in the inefficiency distribution captures latent heterogeneity and can be used to validate the suitability of observed covariates for distinguishing heterogeneity from inefficiency. Relevant effects are also found in separating and shrinking individual posterior efficiency distributions when heterogeneity affects the location and scale parameters of the one-sided error distribution, consequently affecting the estimated mean efficiency scores and rankings. In particular, including heterogeneity simultaneously in both parameters of the inefficiency distribution in models that satisfy the scaling property decreases the uncertainty around the mean scores and reduces the overlap of the posterior efficiency distributions, providing both more reliable efficiency scores and rankings.

18.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization in which the characteristic polynomial of the AR model is factorized in the complex domain, and where inference questions of interest and their solution are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots, and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is being carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.
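Parameterizing the AR spectrum by its reciprocal roots is easy to illustrate; a minimal sketch for a stationary conjugate root pair, without the Gegenbauer component:

```python
import numpy as np

def ar_spectrum(omega, roots, sigma2=1.0):
    """Spectral density of an AR(p) written through its reciprocal roots r_j,
    phi(z) = prod_j (1 - r_j z), with |r_j| < 1 for stationarity."""
    roots = np.asarray(roots)
    z = np.exp(-1j * np.asarray(omega))[:, None]   # e^{-i omega} as a column
    denom = np.prod(1.0 - roots[None, :] * z, axis=1)
    return sigma2 / (2.0 * np.pi) / np.abs(denom) ** 2

omega = np.linspace(0.0, np.pi, 256)
spec = ar_spectrum(omega, [0.9 * np.exp(1j * 0.5), 0.9 * np.exp(-1j * 0.5)])
# the spectrum peaks near omega = 0.5, the argument of the root pair
```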

19.
A three-dimensional finite element model is built with the Ansys finite element program and combined with a worked engineering example of a composite soil-nail support structure in which advance-installed soil mixing piles serve as the water-sealing curtain. The soil excavation and the joint action of soil nails, soil and piles are simulated, and the deformation trend is obtained. A parametric study then yields the relation curves between soil parameters and structural deformation, which offers useful guidance to design and construction engineers.

20.
This paper presents a simple constructive proof of the existence and uniqueness of the equilibrium or optimal solution for a class of residential land use problems. Our approach is based on the concept of boundary rent curves and provides a direct computational algorithm. Although the approach works only for a limited class of problems, this class includes most land use problems for which definite results can be obtained from comparative statics or stability analysis.

