Similar Literature (20 results)
1.
Decision-makers often collect and aggregate experts’ point predictions about continuous outcomes, such as stock returns or product sales. In this article, we model experts as Bayesian agents and show that means, including the (weighted) arithmetic mean, trimmed means, median, geometric mean, and essentially all other measures of central tendency, do not use all of the information in the predictions. Intuitively, they treat idiosyncratic differences as arising from error rather than private information and hence do not update the prior with all available information. Updating means with this unused information improves their expected accuracy but depends on the experts’ prior and information structure, which cannot be estimated from a single prediction per expert. In many applications, however, experts consider multiple stocks, products, or other related items at the same time. For such contexts, we introduce ANOVA updating – an unsupervised technique that updates means based on experts’ predictions of multiple outcomes from a common population. The technique is illustrated on several real-world datasets.

2.
A surprising number of important problems can be cast in the framework of estimating a mean and variance using data arising from a two-stage structure. The first stage is a random sampling of "units" with some quantity of interest associated with the unit. The second stage produces an estimate of that quantity and usually, but not always, an estimated standard error, which may change considerably across units. Heteroscedasticity in the estimates over different units can arise for a number of reasons, including variation associated with the unit and changing sampling effort over units. This paper presents a broad discussion of the problem of making inferences for the population mean and variance associated with the unobserved true values at the first stage of sampling. A careful discussion of the causes of heteroscedasticity is given, followed by an examination of ways in which inferences can be carried out in a manner that is robust to the nature of the within unit heteroscedasticity. Among the conclusions are that under any type of heteroscedasticity, an unbiased estimate of the mean and the variance of the estimated mean can be obtained by using the estimates as if they were true unobserved values from the first stage. The issue of using the mean versus a weighted average which tries to account for the heteroscedasticity is also discussed. An unbiased estimate of the population variance is given and the variance of this estimate and its covariance with the estimated mean is provided under various types of heteroscedasticity. The two-stage setting arises in many contexts including the one-way random effects models with replication, meta-analysis, multi-stage sampling from finite populations and random coefficients models. We will motivate and illustrate the problem with data arising from these various contexts with the goal of providing a unified framework for addressing such problems.
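A minimal sketch of the headline conclusion may help: with per-unit estimates and their standard errors, the unweighted mean of the estimates is used as if the estimates were the true first-stage values, and a moment-style correction (the between-unit variance of the estimates minus the average squared standard error) gives an unbiased estimate of the population variance under independent within-unit errors. The function and variable names below are illustrative, not the paper's notation.

```python
import numpy as np

def two_stage_moments(estimates, std_errors):
    """Estimate the population mean and variance of the unobserved true unit
    values from per-unit estimates and their (possibly heteroscedastic)
    standard errors."""
    estimates = np.asarray(estimates, dtype=float)
    se2 = np.asarray(std_errors, dtype=float) ** 2
    n = estimates.size

    # The unweighted mean of the estimates is unbiased for the population mean
    # even when the within-unit variances differ across units.
    mean_hat = estimates.mean()
    # Its variance can be estimated from the spread of the estimates alone.
    var_mean_hat = estimates.var(ddof=1) / n

    # The sample variance of the estimates overstates the between-unit variance
    # by the average within-unit (measurement) variance; subtract it off.
    # Truncation at zero is a practical safeguard, at the cost of a small bias.
    var_units_hat = estimates.var(ddof=1) - se2.mean()

    return mean_hat, var_mean_hat, max(var_units_hat, 0.0)

# Example: 50 units, true values N(10, 2^2), unit-specific measurement error.
rng = np.random.default_rng(0)
true_vals = rng.normal(10.0, 2.0, size=50)
ses = rng.uniform(0.5, 3.0, size=50)
ests = true_vals + rng.normal(0.0, ses)
print(two_stage_moments(ests, ses))
```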

3.
王莹, 张市芳, 谢世强 《价值工程》(Value Engineering), 2012, 31(1): 159-161
This paper first defines intuitionistic trapezoidal fuzzy numbers and their operational laws. Based on these laws, intuitionistic trapezoidal fuzzy geometric aggregation operators are given, namely the intuitionistic trapezoidal fuzzy weighted geometric (IT-WG) operator, the intuitionistic trapezoidal fuzzy ordered weighted geometric (IT-OWG) operator, and the intuitionistic trapezoidal fuzzy hybrid geometric (IT-HG) operator. On this basis, a multi-attribute group decision-making method is proposed for the situation in which the attribute weights are known and the experts' evaluations are given as intuitionistic trapezoidal fuzzy information. Finally, the selection of an optimal product design scheme is analysed as an illustrative example, and the results verify the effectiveness of the method.

4.
The influence of the choice of the weights on the value of an index number.
Price and quantity index numbers are weighted averages of groups of price and quantity ratios, and they are convenient instruments for indicating the general tendency of such groups, especially if the number of basic ratios is considerable. The frequent use of index numbers is due to the fact that they can often be applied to problems for which, strictly speaking, an index number would have to be used that is derived from the same group of ratios but based on a different set of weights.
Two typical examples of such problems are given.
The use of a set of weights differing from the appropriate one is only justified, however, when the index number is rather insensitive to changes in the set of weights. A simple formula is derived showing that the relative change of an index number due to a change in the set of weights is equal to the product of the (weighted) coefficient of variation of the basic ratios, the (weighted) standard deviation of the relative changes of the weights, and the (weighted) coefficient of correlation between the ratios and the relative changes. The system of weights used in the calculation of these three factors is the same and is equal to the set of true weights belonging to the problem under consideration.
The practical use of the formula is demonstrated on the problem of index numbers of costs frequently encountered in cost-accounting practice.
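The verbal statement of the sensitivity formula can be written out compactly. In notation chosen here (not the original article's), let $w_i$ be the true weights (summing to one), $r_i$ the basic ratios, and $\delta_i$ the relative changes of the weights, normalised so that $\sum_i w_i\delta_i = 0$; the index with true weights is $I=\sum_i w_i r_i$ and the reweighted index is $I'=\sum_i w_i(1+\delta_i)\,r_i$. Then

```latex
\frac{I' - I}{I}
  \;=\;
  \rho_w(r,\delta)\;\cdot\;\frac{\sigma_w(r)}{\bar r_w}\;\cdot\;\sigma_w(\delta),
\qquad
\bar r_w=\sum_i w_i r_i,\quad
\sigma_w^2(r)=\sum_i w_i\,(r_i-\bar r_w)^2 ,
```

with $\sigma_w(\delta)$ and the weighted correlation $\rho_w(r,\delta)$ defined analogously: the relative change of the index number is the product of the weighted coefficient of variation of the ratios, the weighted standard deviation of the relative weight changes, and their weighted correlation, all computed with the true weights.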

5.
For many companies, automatic forecasting has come to be an essential part of business analytics applications. The large amounts of data available, the short life-cycle of the analysis and the acceleration of business operations make traditional manual data analysis infeasible in such environments. In this paper, an automatic forecasting support system that comprises several methods and models is developed in a general state space framework built in the SSpace toolbox written for Matlab. Some of the models included are well known, such as exponential smoothing and ARIMA, but we also propose a new model family that has been used only very rarely in this context, namely unobserved components models. Additional novelties include the use of unobserved components models in an automatic identification environment and the comparison of their forecasting performances with those of exponential smoothing and ARIMA models estimated using different software packages. The new system is tested empirically on a daily dataset of all of the products sold by a franchise chain in Spain (166 products over a period of 517 days). The system works well in practice, and the proposed automatic unobserved components models compare very favorably with other methods and other well-known software packages in forecasting terms.
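A minimal illustration of the kind of automatic selection such a system performs — fitting exponential smoothing, ARIMA, and an unobserved components model to the same daily series and keeping the candidate with the best information criterion — can be written with statsmodels. This is only a sketch of the general idea; the paper's system is built in the SSpace toolbox for Matlab and uses its own identification logic, and the synthetic series below stands in for the franchise data.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.statespace.structural import UnobservedComponents

def auto_forecast(y, horizon=14):
    """Fit three candidate model families and forecast with the best one."""
    candidates = {
        "ets": ExponentialSmoothing(y, trend="add", seasonal="add",
                                    seasonal_periods=7).fit(),
        "arima": SARIMAX(y, order=(1, 1, 1),
                         seasonal_order=(1, 0, 1, 7)).fit(disp=False),
        "uc": UnobservedComponents(y, level="local linear trend",
                                   freq_seasonal=[{"period": 7}]).fit(disp=False),
    }
    # AICs from different model families are only roughly comparable;
    # a hold-out comparison would be a more careful selection rule.
    best_name, best_fit = min(candidates.items(), key=lambda kv: kv[1].aic)
    return best_name, best_fit.forecast(horizon)

# Synthetic daily sales with a weekly cycle, in place of the franchise data.
rng = np.random.default_rng(1)
t = np.arange(517)
y = 50 + 0.02 * t + 8 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 3, t.size)
name, fc = auto_forecast(y)
print(name, np.asarray(fc)[:7].round(1))
```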

6.
The present paper is concerned with the analysis of repeated transition frequency tables, for example, occupational mobility data measured in different cohorts. The association present in such a table will be modeled by a distance in Euclidean space. A large distance corresponds to a small association; a small distance corresponds to a large association. A more direct interpretation is that more transitions occur between categories that are close together in a social space. It is assumed that the same social structure (space) exists for the different slices (cohorts/time points) of a table, but that the dimensions of this space are weighted for the different slices, i.e., for each slice the dimensions are stretched or squeezed. We will propose a model, discuss an algorithm to obtain maximum likelihood estimates and apply the model to an empirical data set.

7.
This paper develops a statistical method for defining housing submarkets. The method is applied using household survey data for Sydney and Melbourne, Australia. First, principal component analysis is used to extract a set of factors from the original variables for both local government area (LGA) data and a combined set of LGA and individual dwelling data. Second, factor scores are calculated and cluster analysis is used to determine the composition of housing submarkets. Third, hedonic price equations are estimated for each city as a whole, for a priori classifications of submarkets, and for submarkets defined by the cluster analysis. The weighted mean squared errors from the hedonic equations are used to compare alternative classifications of submarkets. In Melbourne, the classification derived from a K-means clustering procedure on individual dwelling data is significantly better than classifications derived from all other methods of constructing housing submarkets. In some other cases, the statistical analysis produces submarkets that are better than the a priori classification, but the improvement is not significant.
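The three-step procedure (principal components on the dwelling/area variables, cluster analysis on the factor scores, then a hedonic equation per submarket compared by weighted mean squared error) can be sketched with scikit-learn. The column names and the log-price hedonic form below are illustrative assumptions, not the variables used for Sydney and Melbourne.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def submarket_wmse(df, attr_cols, n_factors=3, n_clusters=5, seed=0):
    """PCA -> K-means submarkets -> per-submarket hedonic OLS -> weighted MSE."""
    X = StandardScaler().fit_transform(df[attr_cols])
    scores = PCA(n_components=n_factors).fit_transform(X)      # step 1: factor scores
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(scores)     # step 2: submarkets
    wmse, n_total = 0.0, len(df)
    for k in np.unique(labels):                                # step 3: hedonic per submarket
        sub = df[labels == k]
        y = np.log(sub["price"])
        resid = y - LinearRegression().fit(sub[attr_cols], y).predict(sub[attr_cols])
        wmse += (len(sub) / n_total) * np.mean(resid ** 2)     # weight by submarket share
    return wmse

# Illustrative data in place of the Sydney/Melbourne survey.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"bedrooms": rng.integers(1, 6, n),
                   "land_area": rng.uniform(100, 900, n),
                   "dist_cbd": rng.uniform(1, 40, n)})
df["price"] = np.exp(11 + 0.15 * df["bedrooms"] + 0.001 * df["land_area"]
                     - 0.02 * df["dist_cbd"] + rng.normal(0, 0.2, n))
print(round(submarket_wmse(df, ["bedrooms", "land_area", "dist_cbd"]), 4))
```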

8.
We propose a simple way of predicting time series with recurring seasonal periods. Missing values of the time series are estimated and interpolated in a preprocessing step. We combine several forecasting methods by taking the weighted mean of forecasts that were generated with time-domain models which were validated on left-out parts of the time series. The hybrid model is a combination of a neural network ensemble, an ensemble of nearest trajectory models and a model for the 7-day cycle. We apply this approach to the NN5 time series competition data set.
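A compressed sketch of the combination step: each candidate model is scored on a left-out validation window and the final forecast is a weighted mean, with weights here taken inversely proportional to the validation error — one plausible choice, not necessarily the weighting the authors use. The two toy forecasters stand in for the neural network and nearest-trajectory ensembles.

```python
import numpy as np

def combine_forecasts(y, models, val_len=56, horizon=56):
    """Validate each model on the tail of the series, then average forecasts
    with weights inversely proportional to validation MAE."""
    train, val = y[:-val_len], y[-val_len:]
    errors, forecasts = [], []
    for fit_and_forecast in models:
        val_fc = fit_and_forecast(train, val_len)          # forecast the held-out part
        errors.append(np.mean(np.abs(val - val_fc)))
        forecasts.append(fit_and_forecast(y, horizon))     # refit on the full series
    w = 1.0 / np.asarray(errors)
    w /= w.sum()
    return w @ np.vstack(forecasts)

# Two toy "time-domain models": a seasonal-naive and a drift forecaster.
seasonal_naive = lambda y, h: np.resize(y[-7:], h)
drift = lambda y, h: y[-1] + (y[-1] - y[0]) / (len(y) - 1) * np.arange(1, h + 1)

rng = np.random.default_rng(2)
t = np.arange(735)
y = 100 + 5 * np.sin(2 * np.pi * t / 7) + 0.01 * t + rng.normal(0, 1, t.size)
print(combine_forecasts(y, [seasonal_naive, drift])[:7].round(2))
```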

9.
This paper derives series for capital utilization, labour effort and total factor productivity (TFP) for the UK from a general equilibrium model with variable utilization and labour adjustment costs. Capital utilization tracks survey-based measures closely, but persistent movements in total hours worked mean our labour effort series is not as highly correlated with its comparators. Our estimated TFP series is less cyclical than the traditional Solow residual, although a weighted average of capital utilization and labour effort – aggregate factor utilization – and the Solow residual are not closely related.

10.
In this paper, we introduce a new algorithm for estimating non-negative parameters from Poisson observations of a linear transformation of the parameters. The proposed objective function fits both a weighted least squares (WLS) and a minimum χ² estimation framework, and results in a convex optimization problem. Unlike conventional WLS methods, the weights do not need to be estimated from the data, but are incorporated in the objective function. The iterative algorithm is derived from an alternating projection procedure in which "distance" is determined by the chi-squared test statistic, which is interpreted as a measure of the discrepancy between two distributions. This may be viewed as an alternative to the Kullback-Leibler divergence, which corresponds to maximum likelihood (ML) estimation. The algorithm is similar in form to, and shares many properties with, the expectation maximization algorithm for ML estimation. In particular, we show that every limit point of the algorithm is an estimator, and the sequence of projected (by the linear transformation into the data space) means converges. Despite the similarities, we show that the new estimators are quite distinct from ML estimators, and obtain conditions under which they are identical.
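The objective described can be written out explicitly. With $y$ the Poisson counts, $A$ the known non-negative linear transformation, and $x \ge 0$ the parameters (notation chosen here, not taken from the paper), the minimum chi-squared / weighted least squares criterion is

```latex
\hat{x} \;=\; \arg\min_{x \ge 0}\;
  \sum_{i} \frac{\bigl(y_i - (Ax)_i\bigr)^2}{(Ax)_i},
```

which is Pearson's $\chi^2$ discrepancy between the observed counts and their fitted means, and at the same time a weighted least squares criterion whose weights $1/(Ax)_i$ are part of the objective rather than estimated from the data. The maximum-likelihood counterpart instead minimizes the Kullback-Leibler-type discrepancy $\sum_i \bigl[(Ax)_i - y_i \log (Ax)_i\bigr]$ (up to terms not involving $x$).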

11.
This paper is concerned with the statistical inference on seemingly unrelated varying coefficient partially linear models. By combining the local polynomial and profile least squares techniques, and estimating the contemporaneous correlation, we propose a class of weighted profile least squares estimators (WPLSEs) for the parametric components. It is shown that the WPLSEs achieve the semiparametric efficiency bound and are asymptotically normal. For the non-parametric components, by applying the undersmoothing technique, and taking the contemporaneous correlation into account, we propose an efficient local polynomial estimation. The resulting estimators are shown to have mean-squared errors smaller than those of estimators that neglect the contemporaneous correlation. In addition, a class of variable selection procedures is developed for simultaneously selecting significant variables and estimating unknown parameters, based on the non-concave penalized and weighted profile least squares techniques. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures perform as efficiently as if one knew the true submodels. The proposed methods are evaluated in extensive simulation studies and applied to a set of real data.

12.
The main purpose of this paper is to unify and extend the existing theory of 'estimated zeros' in log-linear and logit models. To this end it is shown that every generalized linear model (GLM) can be embedded in a larger model with a compact parameter space and a continuous likelihood (a 'CGLM'). Clearly, in a CGLM the maximum likelihood estimate (MLE) always exists, easing a major data analysis problem. In the mean-value parametrization, the construction of the CGLM is remarkably simple; except in a rather pathological and rare case, the estimated expected values are always finite. In the β-parametrization, however, the compactification is more complex; the MLE need not correspond to a finite β, as is well known for estimated zeros in log-linear models. The boundary distributions of CGLMs are classified into four categories: 'inadmissible', 'degenerate', 'Chentsov', and 'constrained'. For a large class of GLMs, including all GLMs with canonical link functions and probit models, the MLE in the corresponding CGLM exists and is unique. Even stronger, the likelihood has no other local maxima. We give equivalent algebraic and geometric conditions (in the vein of Haberman (1974, 1977) and Albert and Anderson (1984), respectively), necessary for the existence of the MLE in the GLM corresponding to a finite β. For a large class of GLMs these conditions are also sufficient. Even for log-linear models this seems to be a new result.

13.
A broad class of generalized linear mixed models, e.g. variance components models for binary data, percentages or count data, will be introduced by incorporating additional random effects into the linear predictor of a generalized linear model structure. Parameters are estimated by a combination of quasi-likelihood and iterated MINQUE (minimum norm quadratic unbiased estimation), the latter being numerically equivalent to REML (restricted, or residual, maximum likelihood). First, conditional upon the additional random effects, observations on a working variable and weights are derived by quasi-likelihood, using iteratively re-weighted least squares. Second, a linear mixed model is fitted to the working variable, employing the weights for the residual error terms, by iterated MINQUE. The latter may be regarded as a least squares procedure applied to squared and product terms of error contrasts derived from the working variable. No full distributional assumptions are needed for estimation. The model may be fitted with standard software for weighted regression and REML.
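The quasi-likelihood step that produces the working variable and weights is the standard iteratively re-weighted least squares construction; in the usual GLM notation (not necessarily the paper's), with linear predictor $\eta_i = g(\mu_i)$, link $g$, variance function $V$ and dispersion $\phi$,

```latex
z_i \;=\; \eta_i + (y_i - \mu_i)\, g'(\mu_i),
\qquad
w_i \;=\; \frac{1}{\phi\, V(\mu_i)\, g'(\mu_i)^2},
```

so that, conditionally on the random effects, $z_i$ has approximate mean $\eta_i$ and variance $1/w_i$; the linear mixed model with residual weights $w_i$ is then fitted to $z$ by iterated MINQUE/REML.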

14.
This paper applies a productivity index based on the Hicks-Moorsteen (HM) productivity index to the analysis of stock exchange firms. A decomposition of the Hicks-Moorsteen productivity index is provided that uses data envelopment analysis (DEA) models. In addition, a new interpretation and decomposition is offered for components of the HM productivity index that were not interpreted in previous studies. These components are named output production and input consumption in terms of changes in the efficiency frontier. Furthermore, by comparing the components of the HM productivity index with the productivity value, the optimal values of output production and input consumption are determined. The analysis also shows how to identify the optimal value of a firm over a period of time. The proposed approach is applied to 26 pharmaceutical manufacturers listed on the Tehran Stock Exchange. Data have been collected for six consecutive years (2009–2014), and financial ratios are used to evaluate the companies. The results show that the Hicks-Moorsteen productivity index provides more managerial insight than other productivity indexes, such as the Malmquist productivity index.
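For reference, one common period-$t$ formulation of the Hicks-Moorsteen index from the general productivity literature (stated here in generic notation, not the paper's) is the ratio of a Malmquist output quantity index to a Malmquist input quantity index, each built from distance functions that are computed with DEA:

```latex
\mathrm{HM}_t
 \;=\;
 \frac{D_O^{\,t}(x_t,\,y_{t+1}) \,/\, D_O^{\,t}(x_t,\,y_t)}
      {D_I^{\,t}(y_t,\,x_{t+1}) \,/\, D_I^{\,t}(y_t,\,x_t)},
```

where $D_O$ and $D_I$ are output and input distance functions, $x$ and $y$ are input and output vectors, and values above one indicate productivity growth between periods $t$ and $t+1$.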

15.
This paper empirically supports the hypothesis that a sinusoidal model can be used successfully to decompose time-series data into its components. Since the length of the seasonal cycle is known, this study documents how one makes use of this known length to infer characteristics of the more general non-seasonal cycle. By examining the ratios of the lengths of the longer to the shorter sine waves in the resulting fit of a sinusoidal model, one is able to determine which sine waves are estimating the same cycle and what the average length of that cycle is. A non-linear trend is estimated by adding a sine wave to the linear trend.
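A minimal version of fitting such a model — a linear trend, plus the seasonal sine wave of known length, plus an additional sine wave whose period is left free — can be estimated by non-linear least squares with scipy. The synthetic monthly series, the single free-period cycle, and the starting values are illustrative choices, not the paper's specification.

```python
import numpy as np
from scipy.optimize import curve_fit

SEASONAL_PERIOD = 12  # length of the known seasonal cycle (monthly data assumed)

def sinusoidal_model(t, a0, a1, b1, c1, b2, c2, p2):
    """Linear trend + seasonal sine of known period + free-period sine."""
    seasonal = b1 * np.sin(2 * np.pi * t / SEASONAL_PERIOD + c1)
    cycle = b2 * np.sin(2 * np.pi * t / p2 + c2)   # non-seasonal cycle, period p2 estimated
    return a0 + a1 * t + seasonal + cycle

# Synthetic series: trend, 12-month seasonality, and a ~40-month cycle.
rng = np.random.default_rng(3)
t = np.arange(240.0)
y = (5 + 0.05 * t + 3 * np.sin(2 * np.pi * t / 12 + 0.4)
     + 2 * np.sin(2 * np.pi * t / 40 + 1.0) + rng.normal(0, 0.5, t.size))

p0 = [y.mean(), 0.0, 1.0, 0.0, 1.0, 0.0, 36.0]     # rough starting values
params, _ = curve_fit(sinusoidal_model, t, y, p0=p0, maxfev=20000)
print("estimated cycle length:", round(params[-1], 1), "months")
```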

16.
Small area estimation is concerned with methodology for estimating population parameters associated with a geographic area defined by a cross-classification that may also include non-geographic dimensions. In this paper, we develop constrained estimation methods for small area problems: those requiring smoothness with respect to similarity across areas, such as geographic proximity or clustering by covariates, and benchmarking constraints, requiring weighted means of estimates to agree across levels of aggregation. We develop methods for constrained estimation decision theoretically and discuss their geometric interpretation. The constrained estimators are the solutions to tractable optimisation problems and have closed-form solutions. Mean squared errors of the constrained estimators are calculated via bootstrapping. Our approach assumes the Bayes estimator exists and is applicable to any proposed model. In addition, we give special cases of our techniques under certain distributional assumptions. We illustrate the proposed methodology using web-scraped data on Berlin rents aggregated over areas to ensure privacy.
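The benchmarking constraint itself — weighted means of the area estimates agreeing with an aggregate-level figure — can be illustrated with the simplest possible adjustment, a ratio (raking) correction. This is only a generic example of the constraint being discussed, not the paper's decision-theoretic constrained estimator, and the numbers are invented.

```python
import numpy as np

def ratio_benchmark(estimates, weights, benchmark):
    """Scale area-level estimates so their weighted mean equals the benchmark."""
    estimates = np.asarray(estimates, dtype=float)
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    factor = benchmark / np.dot(weights, estimates)
    return factor * estimates

# Toy example: mean-rent estimates for four areas, weighted by dwelling counts,
# benchmarked to a city-wide mean of 11.20 (all values illustrative).
areas = np.array([9.8, 11.5, 12.4, 10.1])
w = np.array([400, 250, 150, 200])
adjusted = ratio_benchmark(areas, w, 11.20)
print(adjusted.round(2), np.dot(w / w.sum(), adjusted).round(2))
```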

17.
The two-parameter Pareto distribution provides a reasonably good fit to the distributions of income and property value, and explains many empirical phenomena. For censored data, the two parameters are usually estimated by the maximum likelihood estimator, which is computationally involved. This investigation proposes a weighted least squares estimator for the parameters. Such a method is comparatively concise and easy to understand, and can be applied to either complete or truncated data. Simulation studies are conducted to show the feasibility of the proposed method. The simulations demonstrate that the weighted least squares estimator performs better than unweighted least squares estimators, and that it is very close to the maximum likelihood estimator.
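The weighted least squares idea can be sketched directly from the Pareto survival function: for a two-parameter Pareto with scale $x_m$ and shape $\alpha$, $\log S(x) = \alpha\log x_m - \alpha\log x$, so regressing the log empirical survival probabilities on $\log x$ with suitable weights recovers both parameters. The particular weights below (more weight on the low-order statistics, where the empirical survival curve is less noisy) are one plausible choice, not necessarily those proposed in the paper.

```python
import numpy as np

def pareto_wls(x):
    """Weighted least squares fit of a two-parameter Pareto (scale x_m, shape alpha)
    based on the log empirical survival function."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    surv = 1.0 - i / (n + 1.0)   # empirical survival, kept away from 0
    w = n - i + 1.0              # illustrative weights: larger where survival is high
    X = np.column_stack([np.ones(n), np.log(x)])
    y = np.log(surv)
    XtW = X.T * w                # apply weights without forming a diagonal matrix
    beta = np.linalg.solve(XtW @ X, XtW @ y)   # weighted normal equations
    alpha = -beta[1]
    x_m = np.exp(beta[0] / alpha)
    return x_m, alpha

# Simulation check against the true parameters.
rng = np.random.default_rng(4)
x_m_true, alpha_true = 2.0, 1.5
sample = x_m_true * (1.0 + rng.pareto(alpha_true, size=2000))
print(pareto_wls(sample))   # should be close to (2.0, 1.5)
```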

18.
The class of weighted M-estimators is defined. The ratio of the asymptotic variance of the weighted estimator to the asymptotic variance of the optimally weighted estimator is defined as the inefficiency. A Kantorovich inequality is proved, its implications are investigated for the misweighted mean and misweighted median, and the results are applied to a batch of demographic data.

19.
Iterated weighted least squares (IWLS) is investigated for estimating the regression coefficients in a linear model with symmetrically distributed errors. The variances of the errors are not specified; it is not assumed that they are unknown functions of the explanatory variables nor that they are given in some parametric way.
IWLS is carried out in a random number of steps, of which the first one is OLS. In each step the error variance at time t is estimated with a weighted sum of m squared residuals in the neighbourhood of t, and the coefficients are estimated using WLS. Furthermore, an estimate of the covariance matrix is obtained. If this estimate is minimal in some sense, the iteration process is stopped.
Asymptotic properties of IWLS are derived for increasing sample size n. Some particular cases show that the asymptotic efficiency can be increased by allowing more than two steps. Even asymptotic efficiency with respect to WLS with the true error variances can be obtained if m is not fixed but tends to infinity with n and if the heteroskedasticity is smooth.
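A two-step version of the procedure (omitting the random-length iteration and the stopping rule) can be sketched as follows: OLS first, then a local error-variance estimate from a moving average of m squared residuals in time order, then WLS with the inverse of that estimate. The choice of m, the simple moving window, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def iwls_two_step(X, y, m=15):
    """One IWLS refinement: OLS, local variance from m neighbouring squared
    residuals (observations assumed ordered in time), then weighted LS."""
    X = np.column_stack([np.ones(len(y)), X])          # add an intercept column
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid2 = (y - X @ beta_ols) ** 2
    # Moving-average estimate of the error variance around each time point.
    kernel = np.ones(m) / m
    sigma2 = np.convolve(resid2, kernel, mode="same")
    w = 1.0 / sigma2
    XtW = X.T * w
    beta_wls = np.linalg.solve(XtW @ X, XtW @ y)
    cov_wls = np.linalg.inv(XtW @ X)                   # estimated covariance of beta_wls
    return beta_wls, cov_wls

# Smoothly heteroskedastic example: error standard deviation grows over time.
rng = np.random.default_rng(5)
n = 400
x = rng.normal(size=n)
sd = 0.5 + 2.0 * np.linspace(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, sd)
beta, cov = iwls_two_step(x, y)
print(beta.round(3))
```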

20.

This paper proposes a semiparametric smooth-varying coefficient input distance frontier model with multiple outputs and multiple inputs, panel data, and determinants of technical inefficiency for the Indonesian banking industry during the period 2000 to 2015. The technology parameters are unknown functions of a set of environmental factors that shift the input distance frontier non-neutrally. The computationally simple constraint weighted bootstrapping method is employed to impose the regularity constraints on the distance function. As a by-product, total factor productivity (TFP) growth is estimated and decomposed into technical change, scale component, and efficiency change. The distance elasticities, marginal effects of the environmental factors on the distance elasticities, temporal behavior of technical efficiency, and also TFP growth and its components are investigated.

