Similar Documents
20 similar documents found.
1.
Suppose we are given a cake, represented by the unit interval, to be divided among agents who evaluate the pieces of the cake by nonatomic probability measures. It is known that the unit interval can be divided into contiguous, connected pieces and these assigned to the agents in such a way that the values of the pieces are equal according to the individual agents' measures. Such a division is said to be equitable and simple. In this paper we show that an equitable and simple division also exists in the case of dividing a two-dimensional cake represented by the unit square. In this case, by a simple division we mean first dividing the unit square by horizontal cuts, and then partitioning the resulting rectangles by vertical cuts. We give a method for obtaining a proportional and simple division of this cake. Furthermore, we prove the existence of a proportional, equitable and simple division.

2.
We study Pareto optimal partitions of a “cake” among n players. Each player uses a countably additive non-atomic probability measure to evaluate the size of pieces of cake. We present two geometric pictures appropriate for this study and consider the connection between these pictures and the maximization of convex combinations of measures, which we studied in Barbanel and Zwicker [Barbanel, J.B., Zwicker, W., 1997. Two applications of a theorem of Dvoretsky, Wald, and Wolfovitz to cake division. Theory and Decision 43, 203–207].

3.
It is well known that linear equations subject to cross-equation aggregation restrictions can be ‘stacked’ and estimated simultaneously. However, if every equation contains the same set of regressors, a number of single-equation estimation procedures can be employed. The applicability of ordinary least squares is widely recognized, but the article demonstrates that the class of applicable estimators is much broader than OLS. Under specified conditions, the class includes instrumental variables, generalized least squares, ridge regression, two-stage least squares, k-class estimators, and indirect least squares. Transformations of the original equations and other related matters are also discussed.

4.
A geometric interpretation is developed for so-called reconciliation methodologies used to forecast time series that adhere to known linear constraints. In particular, a general framework is established that nests many existing popular reconciliation methods within the class of projections. This interpretation facilitates the derivation of novel theoretical results. First, reconciliation via projection is guaranteed to improve forecast accuracy with respect to a class of loss functions based on a generalised distance metric. Second, the Minimum Trace (MinT) method minimises expected loss for this same class of loss functions. Third, the geometric interpretation provides a new proof that forecast reconciliation using projections results in unbiased forecasts, provided that the initial base forecasts are also unbiased. Approaches for dealing with biased base forecasts are proposed. An extensive empirical study of Australian tourism flows demonstrates the theoretical results of the paper and shows that bias correction prior to reconciliation outperforms alternatives that only bias-correct or only reconcile forecasts.
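The projection view of reconciliation can be sketched numerically. The following is a minimal illustration, not the paper's implementation: for a summing matrix S describing a two-series hierarchy, the projection S(SᵀW⁻¹S)⁻¹SᵀW⁻¹ with W = I gives OLS reconciliation, and substituting an estimated base-error covariance for W would give a MinT-style projection. All names and numbers are made up for the example.

```python
import numpy as np

# Summing matrix for a two-series hierarchy: total = A + B.
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

def reconcile(base, S, W=None):
    """Project base forecasts onto the coherent subspace spanned by S.

    With W = I this is OLS reconciliation; supplying an estimate of the
    base-forecast error covariance as W gives a MinT-style projection.
    """
    if W is None:
        W = np.eye(S.shape[0])
    Winv = np.linalg.inv(W)
    G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)
    return S @ (G @ base)

base = np.array([10.0, 6.0, 5.0])   # incoherent: 6 + 5 != 10
tilde = reconcile(base, S)
# tilde is coherent: tilde[0] == tilde[1] + tilde[2]
```

The reconciled forecasts satisfy the aggregation constraint exactly, whatever the (incoherent) base forecasts were.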

5.
Non-negative matrix factorisation (NMF) is an increasingly popular unsupervised learning method. However, parameter estimation in the NMF model is a difficult high-dimensional optimisation problem. We consider algorithms of the alternating least squares type. Solutions to the least squares problem fall into two categories. The first category is iterative algorithms, which include the majorise–minimise (MM) algorithm, coordinate descent, gradient descent and the Févotte–Cemgil expectation–maximisation (FC-EM) algorithm. We introduce a new family of iterative updates based on a generalisation of the FC-EM algorithm. The coordinate descent, gradient descent and FC-EM algorithms are special cases of this new EM family of iterative procedures. Curiously, we show that the MM algorithm is never a member of our general EM algorithm. The second category is based on cone projection. We describe and prove the validity of a cone projection algorithm tailored to the non-negative least squares problem. We compare the algorithms on a test case and on the problem of identifying mutational signatures in human cancer. We generally find that cone projection is an attractive choice. Furthermore, in the cancer application, we find that a mix-and-match strategy performs better than running each algorithm in isolation.
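As a concrete instance of the iterative category, here is the classical Lee–Seung multiplicative update for the Frobenius-norm NMF loss, one standard member of the MM class mentioned above; it is a sketch only, not the paper's EM family or its cone-projection algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_mm(V, rank, iters=200, eps=1e-9):
    """Lee–Seung multiplicative updates for the Frobenius-norm NMF loss.

    A classical instance of the majorise–minimise (MM) class: each update
    multiplies by a non-negative ratio, so W and H stay non-negative.
    """
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, rank))
    H = rng.uniform(0.1, 1.0, (rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = rng.uniform(0.0, 1.0, (6, 5))
W, H = nmf_mm(V, rank=2)
err = np.linalg.norm(V - W @ H)   # reconstruction error after fitting
```

Because the updates are multiplicative, non-negativity never has to be enforced explicitly, which is what makes this family attractive despite its sometimes slow convergence.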

6.
An important problem in statistics is to study the effect of one or two factors on a dependent variable. This type of problem can be formulated as a regression problem (by using dummy (0,1) variables to represent the levels of the factors), and the standard least squares (LS) analysis is well known. The least absolute value (LAV) analysis is less well known, but is certainly becoming more widely used, especially in exploratory data analysis. The purpose of this report is to present a didactic treatment of visual display methods useful in exploratory data analysis. These visual display techniques (stem-and-leaf, box-and-whisker, and two-way plots) are presented for both the least squares and the least absolute value analyses of a two-way classification model.
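The contrast between the LS and LAV fits of a dummy-variable one-factor model can be seen with a tiny example: the LS fit of each cell is the cell mean, while the LAV fit is the cell median, which resists outliers. The data below are a hypothetical toy set, not from the report.

```python
import numpy as np

# One-factor model with dummy coding: the LS fit of each cell is its mean,
# while the LAV (L1) fit is its median, which resists the outlier in group B.
groups = {"A": np.array([1.0, 2.0, 3.0]),
          "B": np.array([4.0, 5.0, 60.0])}   # 60 is an outlier

ls_fit  = {g: float(y.mean())      for g, y in groups.items()}
lav_fit = {g: float(np.median(y))  for g, y in groups.items()}

print(ls_fit)   # {'A': 2.0, 'B': 23.0}
print(lav_fit)  # {'A': 2.0, 'B': 5.0}
```

A single wild observation drags the LS cell estimate from 5 to 23, while the LAV estimate is unmoved, which is why LAV summaries pair naturally with exploratory displays such as box-and-whisker plots.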

7.
This paper studies a new type of barrier option, min–max multi-step barrier options with diverse multiple up or down barrier levels placed in the sub-periods of the option’s lifetime. We develop the explicit pricing formula of this type of option under the Black–Scholes model and explore its applications and possible extensions. In particular, the min–max multi-step barrier option pricing formula can be used to approximate double barrier option prices and compute prices of complex barrier options such as discrete geometric Asian barrier options. As a practical example of directly applying the pricing formula, we introduce and evaluate a re-bouncing equity-linked security. The main theorem of this work is capable of handling the general payoff function, from which we obtain the pricing formulas of various min–max multi-step barrier options. The min–max multi-step reflection principle, the boundary-crossing probability of min–max multi-step barriers with icicles, is also derived.
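Closed-form barrier prices of this kind are typically checked against brute-force simulation. The sketch below prices only the simplest special case, a single up-and-out barrier call under Black–Scholes, by Monte Carlo with discrete barrier monitoring; all parameter values are illustrative and the function name is made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def up_and_out_call_mc(S0, K, B, r, sigma, T, steps=100, paths=20000):
    """Monte Carlo price of an up-and-out call under Black–Scholes.

    A brute-force check for the simplest single-barrier case; the paper's
    formulas handle min–max multi-step barriers in closed form.
    """
    dt = T / steps
    z = rng.standard_normal((paths, steps))
    # Simulate log-price paths under the risk-neutral measure.
    logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1)
    S = np.exp(logS)
    alive = S.max(axis=1) < B                 # knocked out if barrier touched
    payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
    return float(np.exp(-r * T) * payoff.mean())

price = up_and_out_call_mc(S0=100, K=100, B=120, r=0.02, sigma=0.2, T=1.0)
```

Since surviving paths finish below the barrier, the discounted payoff is bounded above by B − K, which gives a quick sanity check on the simulation.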

8.
A radically new approach to statistical modelling, which combines mathematical techniques of Bayesian statistics with the philosophy of the theory of competitive on-line algorithms, has arisen over the last decade in computer science (to a large degree, under the influence of Dawid's prequential statistics). In this approach, which we call "competitive on-line statistics", it is not assumed that data are generated by some stochastic mechanism; the bounds derived for the performance of competitive on-line statistical procedures are guaranteed to hold (and not just hold with high probability or on the average). This paper reviews some results in this area; the new material in it includes the proofs for the performance of the Aggregating Algorithm in the problem of linear regression with square loss.
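The Aggregating Algorithm for square-loss linear regression admits a compact recursive form, often called AAR. The sketch below is one standard formulation, not necessarily the exact variant analysed in the paper: at each step the current input is folded into the regularised Gram matrix before the prediction is made, which is the twist that yields worst-case (rather than average-case) loss guarantees.

```python
import numpy as np

def aar_predict(X, y, a=1.0):
    """On-line predictions of the Aggregating Algorithm for square-loss
    regression: at step t, the current input x_t is included in the
    regularised Gram matrix before predicting.
    """
    n, d = X.shape
    A = a * np.eye(d)          # regularised Gram matrix
    b = np.zeros(d)            # running sum of y_s * x_s
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        preds[t] = x @ np.linalg.solve(A + np.outer(x, x), b)
        A += np.outer(x, x)    # update with the observed pair
        b += y[t] * x
    return preds

# Constant target: predictions shrink towards 1 from below as data accrue.
X = np.ones((4, 1))
y = np.ones(4)
p = aar_predict(X, y)          # [0, 1/3, 1/2, 3/5]
```

Note the deliberate asymmetry: the Gram matrix already contains x_t when the prediction is computed, but the response sum does not yet contain y_t, so no peeking at the label occurs.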

9.
I derive the exact distribution of the exactly identified instrumental variable estimator using a geometric approach. The approach provides a decomposition of the exact estimator. The results show that geometric reasoning can efficiently yield the distribution of the estimation error. The often strikingly non-normal shape of the instrumental variable estimator, in the case of weak instruments and small samples, follows intuitively from the geometry of the problem. The method allows intuitive interpretations of how the shape of the distribution is determined by instrument quality and endogeneity. The approach can also be used to derive the exact distribution of any ratio of stochastic variables.

10.
A class of partially generalized least squares estimators and a class of partially generalized two-stage least squares estimators in regression models with heteroscedastic errors are proposed. By using these estimators a researcher can attain higher efficiency than that attained by the least squares or the two-stage least squares estimators, without explicitly estimating each component of the heteroscedastic variances. However, the efficiency is not as high as that of the generalized least squares or the generalized two-stage least squares estimator calculated using knowledge of the true variances; hence the use of the term ‘partially’.

11.
Charles H. Reilly, Socio, 1991, 25(4): 251–267
Three satellite system synthesis problems—the satellite location problem, the signal protection problem, and the arc allotment problem—are described and formulated. In each of these problems, satellite administrations are to be allotted some portion of the geostationary orbit for the purpose of deploying their satellites, subject to angle of elevation and electromagnetic interference constraints. A simple heuristic procedure is suggested as a means for solving these three problems. Sufficient conditions for the successful application of the heuristic and solution-value bounds for each problem are presented. The heuristic is applied to each problem for a six-satellite example. Additionally, computational results are presented for an example with 183 satellites worldwide.

12.
Computation and analysis of multiple structural change models
In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical application of the procedures. We first address the problem of estimating the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most O(T²) least-squares operations for any number of breaks. Our method can be applied to both pure and partial structural change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program. Copyright © 2002 John Wiley & Sons, Ltd.
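For a single break in a mean-shift model, the dynamic-programming search reduces to an exhaustive scan over split points that minimises the total sum of squared residuals. A minimal sketch of that special case, not the authors' GAUSS implementation:

```python
import numpy as np

def ssr(y):
    """Sum of squared residuals of a mean-only segment."""
    return float(((y - y.mean()) ** 2).sum())

def one_break(y, trim=2):
    """Estimate a single break date by minimising the total SSR over all
    admissible split points: the one-break special case of the
    dynamic-programming search (mean-shift model only, for illustration).
    """
    return min(range(trim, len(y) - trim),
               key=lambda k: ssr(y[:k]) + ssr(y[k:]))

# Noiseless level shift at index 20: the scan recovers it exactly.
y = np.concatenate([np.zeros(20), np.full(20, 5.0)])
k = one_break(y)   # -> 20
```

With multiple breaks, recomputing every segment SSR for every partition would be exponential; the dynamic-programming trick is to tabulate segment SSRs once and combine them recursively, which is what keeps the operation count at O(T²).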

13.
This paper analyzes the net migration rate of the school-age population at substate levels (minor civil divisions) between 1960 and 1970. The investigation employs a series of estimation procedures to carry out the analyses: unbiased techniques (ordinary least squares and maximum R² improvement), biased techniques (ridge regression, generalized ridge regression and principal components regression), and robust techniques (least absolute deviation and Hill–Holland). For each of these estimation procedures, a model was developed based on a sample of 50 minor civil divisions in the State of New Jersey. A cross-validation was performed on two subsequent samples of 50 minor civil divisions each in order to measure the reduction in shrinkage of the square of the multiple correlation.

14.
We reconsider the question of allocating resources in large teams using decentralized procedures. Our assumption on the distribution of producers is that it approximates a given underlying distribution in producer space; thus we relax prior approaches which use replicas or i.i.d. sampling. We examine when solving the allocation problem for the underlying distribution yields an appropriate solution for the specific sample. We then show how market-like mechanisms may be used to obtain a decentralized decision process which is asymptotically optimal.

15.
The reproducibility crisis, that is, the fact that many scientific results are difficult to replicate, pointing to their unreliability or falsehood, is a hot topic in the recent scientific literature, and statistical methodologies, testing procedures and p-values in particular, are at the centre of the debate. Assessing the extent of the problem (the reproducibility rate or the false discovery rate) and the role of contributing factors remains an open problem. Replication experiments, that is, systematic replications of existing results, may offer relevant information on these issues. We propose a statistical model to deal with such information, in particular to estimate the reproducibility rate and the effect of some study characteristics on its reliability. We analyse data from a recent replication experiment in psychology, finding a reproducibility rate broadly coherent with other assessments from the same experiment. Our results also confirm the expected role of some contributing factors (unexpectedness of the result and room for bias), while they suggest that the similarity between the original study and the replica is not so relevant, thus mitigating some criticism directed at replication experiments.

16.
The RCP model is a new project-financing model that has emerged to overcome the financing difficulties of non-operating and quasi-operating infrastructure projects. Its basic principle is to use high-yield resource development projects to compensate for the investment in non-operating or quasi-operating infrastructure projects, thereby ensuring that investors earn a reasonable return. This paper analyzes in detail the basic principle and operating procedure of this project-financing model, and discusses its advantages and disadvantages as well as the issues requiring attention in its use. The emergence of the RCP project-financing model is of practical significance for alleviating the shortage of infrastructure construction funds and accelerating the pace of infrastructure construction.

17.
Since the 1990s, the Akaike Information Criterion (AIC) and its various modifications/extensions, including BIC, have found wide applicability in econometrics as objective procedures that can be used to select parsimonious statistical models. The aim of this paper is to argue that these model selection procedures invariably give rise to unreliable inferences, primarily because their choice within a prespecified family of models (a) assumes away the problem of model validation, and (b) ignores the relevant error probabilities. This paper argues for a return to the original statistical model specification problem, as envisaged by Fisher (1922), where the task is understood as one of selecting a statistical model in such a way as to render the particular data a truly typical realization of the stochastic process specified by the model in question. The key to addressing this problem is to replace the trade-off between goodness-of-fit and parsimony with statistical adequacy as the sole criterion for deciding when a fitted model accounts for the regularities in the data.

18.
The economic theory of option pricing imposes constraints on the structure of call functions and state price densities. Except in a few polar cases, it does not prescribe functional forms. This paper proposes a nonparametric estimator of option pricing models which incorporates various restrictions (such as monotonicity and convexity) within a single least squares procedure. The bootstrap is used to produce confidence intervals for the call function and its first two derivatives and to calibrate a residual regression test of shape constraints. We apply the techniques to option pricing data on the DAX.

19.
The best guesses of unknown coefficients specified in Theil's model of introspection are like predictions, not like de Finetti's previsions, and are therefore not the values taken by random variables. Constrained least squares procedures can be formulated which are free of these difficulties. The ridge estimator is a simple version of a constrained least squares estimator which can be made operational even when little prior information is available. Our operational ridge estimators are nearly minimax and are no less stable than least squares in the presence of high multicollinearity. Finally, we present the ridge estimates for the Rotterdam demand model.
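The ridge estimator discussed above has the familiar closed form (XᵀX + kI)⁻¹Xᵀy, with k = 0 recovering OLS. The sketch below illustrates its stabilising effect under near-collinearity; the data are simulated purely for the example and the choice k = 1 is arbitrary, not one of the paper's operational rules.

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives OLS."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(2)
x1 = rng.standard_normal(50)
x2 = x1 + 0.01 * rng.standard_normal(50)   # nearly collinear regressor
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.standard_normal(50)

b_ols   = ridge(X, y, 0.0)
b_ridge = ridge(X, y, 1.0)
# The ridge coefficient vector is never longer than the OLS one:
# shrinkage tames the instability caused by near-collinearity.
```

The Euclidean norm of the ridge solution decreases monotonically in k, which is the precise sense in which the estimator is "not less stable" than least squares under multicollinearity.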

20.
We study the optimal stopping problems embedded in a typical mortgage. Despite the possibly non-rational behaviour of the typical mortgage borrower, such problems are worth solving for the lender to hedge against prepayment risk, and because many mortgage-backed securities pricing models incorporate this suboptimality via a so-called prepayment function which can depend, at time t, on whether prepayment is optimal or not. We state the prepayment problem in the context of optimal stopping theory and present an algorithm to solve it via weak convergence of computationally simple trees. Numerical results for the Vasicek model and the CIR model are also presented. The procedure is extended to the case when both prepayment and default are possible: in this case, we present a new method of building two-dimensional computationally simple trees, and we apply it to the optimal stopping problem.
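Backward induction on a computationally simple tree can be illustrated on the textbook optimal stopping problem, the American put on a CRR binomial tree; the paper builds analogous trees for Vasicek and CIR short-rate dynamics, which this sketch does not attempt. All parameter values are illustrative.

```python
import numpy as np

def american_put_binomial(S0, K, r, sigma, T, N=200):
    """Optimal stopping by backward induction on a CRR binomial tree."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    disc = np.exp(-r * dt)
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    j = np.arange(N + 1)
    S = S0 * u**j * d**(N - j)                # terminal stock prices
    V = np.maximum(K - S, 0.0)                # terminal payoff
    for n in range(N - 1, -1, -1):
        j = np.arange(n + 1)
        S = S0 * u**j * d**(n - j)
        cont = disc * (p * V[1:] + (1 - p) * V[:-1])   # continuation value
        V = np.maximum(K - S, cont)           # stop (exercise) if better
    return float(V[0])

price = american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0)
```

At every node the holder compares immediate exercise with the discounted expected value of continuing; the same compare-and-stop recursion drives the prepayment (and prepayment-plus-default) trees in the paper, with the stock lattice replaced by a short-rate lattice.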
