Similar Documents
20 similar documents found.
1.
For $f$ an anonymous exactly strongly consistent social choice function (ESC SCF) and $x$ an alternative, define $b_x = b(f)_x$ to be the size of a minimal blocking coalition for $x$. By studying the correspondence between $f$ and $\{b(f)_x\}$, we establish the existence, uniqueness and monotonicity of ESC SCFs. We also prove the following conjecture of B. Peleg: a non-constant anonymous ESC SCF depends on the knowledge of every player's full preference profile.

2.
Consider the model
$$A(L)x_t \equiv B(L)y_t + C(L)z_t = u_t, \qquad t = 1, \ldots, T, \qquad (1)$$
where $A(L) = (B(L) : C(L))$ is a matrix of polynomials in the lag operator, so that $L^r x_t = x_{t-r}$, and $y_t$ is a vector of $n$ endogenous variables; $B(L) = \sum_{s=0}^{k} B_s L^s$ with $B_0 \equiv I_n$ and the remaining $B_s$ $n \times n$ square matrices; and $C(L) = \sum_{s=0}^{k} C_s L^s$ with each $C_s$ $n \times m$.

Suppose that $u_t$ satisfies $R(L)u_t = e_t$, where $R(L) = \sum_{s=0}^{r} R_s L^s$, $R_0 = I_n$, and each $R_s$ is an $n \times n$ square matrix; $e_t$ may be white noise, or generated by a vector moving average stochastic process.

Now writing $\Psi(L) = R(L)A(L)$, it is assumed that, ignoring the implicit restrictions which follow from eq. (1), $\Psi(L)$ can be consistently estimated, so that if the equation $\Psi(L)x_t = e_t$ has a moving average error stochastic process, suitable conditions [see E.J. Hannan] for the identification of the unconstrained model are satisfied, and that the appropriate conditions (lack of multicollinearity) on the data second moment matrices discussed by Hannan are also satisfied. The essential conditions for identification of $A(L)$ and $R(L)$ can then be considered by requiring that for the true $\Psi(L)$ eq. (1) has a unique solution for $A(L)$ and $R(L)$.

There are three types of lack of identification to be distinguished. In the first there are a finite number of alternative factorisations. Apart from a factorisation condition which will be satisfied with probability one, a necessary and sufficient condition for lack of identification is that $A(L)$ has a latent root $\lambda$ in the sense that for some non-zero vector $\beta$, $\beta'A(\lambda) = 0$.

The second concept of lack of identification corresponds to the Fisher conditions for local identifiability based on the derivatives of the constraints. It is shown that a necessary and sufficient condition for the model to be locally unidentified in this sense is that $R(L)$ and $A(L)$ have a common latent root, i.e., that for some vectors $\delta$ and $\beta$, $R(\lambda)\delta = 0$ and $\beta'A(\lambda) = 0$.

Finally, it is shown that only if further conditions are satisfied will this lead to local unidentifiability in the sense that there are solutions of the equation $\Psi(z) = R(z)A(z)$ in any neighbourhood of the true values.
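To make these objects concrete, here is a small numpy sketch (coefficient matrices invented, and restricted to the square case $m = 0$ for simplicity) that forms $\Psi(L) = R(L)A(L)$ by convolving coefficient matrices and then compares the latent roots of $A(L)$ and $R(L)$, i.e. the roots of $\det A(z)$ and $\det R(z)$; a shared root is the local-unidentifiability condition above.

```python
# Toy numpy sketch (matrices invented, not from the paper).
import numpy as np

def poly_mat_mul(R, A):
    """Psi_s = sum_{i+j=s} R_i A_j, so that Psi(L) = R(L) A(L)."""
    Psi = [np.zeros((R[0].shape[0], A[0].shape[1]))
           for _ in range(len(R) + len(A) - 1)]
    for i, Ri in enumerate(R):
        for j, Aj in enumerate(A):
            Psi[i + j] += Ri @ Aj
    return Psi

def latent_roots(coeffs):
    """Roots of det(sum_s coeffs[s] z^s) = 0, via interpolating det."""
    n, k = coeffs[0].shape[0], len(coeffs) - 1
    deg = n * k                                   # max degree of det
    zs = np.linspace(-2.0, 2.0, deg + 1)          # interpolation nodes
    vals = [np.linalg.det(sum(C * z**s for s, C in enumerate(coeffs)))
            for z in zs]
    c = np.linalg.solve(np.vander(zs, deg + 1, increasing=True), vals)
    return np.roots(c[::-1])                      # np.roots wants high degree first

I = np.eye(2)
A = [I, np.array([[0.5, 0.2], [0.0, 0.5]])]       # A(L) = I + A1 L
R = [I, np.array([[-0.3, 0.0], [0.1, -0.3]])]     # R(L) = I + R1 L
Psi = poly_mat_mul(R, A)                          # Psi(L) = R(L) A(L)
print(latent_roots(A))                            # roots where beta'A(lambda) = 0
print(latent_roots(R))                            # no shared root -> locally identified
```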

3.
The paper rationalizes certain functional forms for index numbers with functional forms for the underlying aggregator function. An aggregator functional form is said to be 'flexible' if it can provide a second order approximation to an arbitrary twice differentiable linearly homogeneous function. An index number functional form is said to be 'superlative' if it is exact for (i.e., consistent with) a 'flexible' aggregator functional form. The paper shows that a certain family of index number formulae is exact for the 'flexible' quadratic mean of order $r$ aggregator function, $\left(\sum_i \sum_j a_{ij}\, x_i^{r/2} x_j^{r/2}\right)^{1/r}$, defined by Denny and others. For $r = 2$, the resulting quantity index is Irving Fisher's ideal index. The paper also utilizes the Malmquist quantity index in order to rationalize the Törnqvist-Theil quantity index in the nonhomothetic case. Finally, the paper attempts to justify the Jorgenson-Griliches productivity measurement technique for the case of discrete (as opposed to continuous) data.
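As a numerical illustration of the $r = 2$ case, the sketch below (invented price and quantity data; the share-weighted form of the mean-of-order-$r$ quantity index is assumed here, not quoted from the paper) checks that the index collapses to Fisher's ideal index.

```python
# Quadratic mean of order r quantity index; r = 2 gives Fisher's ideal index,
# the geometric mean of the Laspeyres and Paasche quantity indexes.
import numpy as np

p0 = np.array([1.0, 2.0, 3.0]); x0 = np.array([10.0, 5.0, 2.0])
p1 = np.array([1.2, 1.8, 3.5]); x1 = np.array([9.0, 6.0, 2.5])

s0 = p0 * x0 / (p0 @ x0)                 # base-period expenditure shares
s1 = p1 * x1 / (p1 @ x1)                 # current-period expenditure shares

def q_index(r):
    rel = x1 / x0                        # quantity relatives
    return ((s0 @ rel ** (r / 2)) ** (1 / r)
            * (s1 @ rel ** (-r / 2)) ** (-1 / r))

laspeyres = (p0 @ x1) / (p0 @ x0)
paasche = (p1 @ x1) / (p1 @ x0)
print(q_index(2.0), np.sqrt(laspeyres * paasche))   # identical for r = 2
```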

4.
Unlike other popular measures of income inequality, the Gini coefficient is not decomposable, i.e., the coefficient $G(X)$ of a composite population $X = X_1 \cup \cdots \cup X_r$ cannot be computed in terms of the sizes, mean incomes and Gini coefficients of the components $X_i$. In this paper upper and lower bounds (best possible for $r = 2$) for $G(X)$ in terms of these data are given. For example, $G(X_1 \cup \cdots \cup X_r) \ge \sum \alpha_i G(X_i)$, where $\alpha_i$ is the proportion of the population in $X_i$. One of the tools used, which may be of interest for other applications, is a lower bound for $\int_0^\infty f(x)g(x)\,dx$ (converse to Cauchy's inequality) for monotone decreasing functions $f$ and $g$.
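The lower bound quoted above is easy to check numerically; in the sketch below (invented lognormal incomes) the Gini coefficient is computed from the mean absolute difference.

```python
# Check G(X1 u X2) >= alpha1*G(X1) + alpha2*G(X2) on simulated incomes.
import numpy as np

def gini(x):
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()   # mean absolute difference
    return mad / (2 * x.mean())

rng = np.random.default_rng(5)
X1 = rng.lognormal(0.0, 0.5, 300)
X2 = rng.lognormal(1.0, 0.8, 700)
X = np.concatenate([X1, X2])
a1, a2 = len(X1) / len(X), len(X2) / len(X)
print(gini(X), a1 * gini(X1) + a2 * gini(X2))      # left-hand side is larger
```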

5.
This paper considers some problems associated with estimation and inference in the normal linear regression model
$$y_t = \sum_{j=1}^{m_0} \beta_j x_{tj} + \varepsilon_t, \qquad \operatorname{var}(\varepsilon_t) = \sigma^2, \qquad (1)$$
when $m_0$ is unknown. The regressors are taken to be stochastic and assumed to satisfy V. Grenander's (1954) conditions almost surely. It is further supposed that estimation and inference are undertaken in the usual way, conditional on a value of $m_0$ chosen to minimize the estimation criterion function
$$EC(m, T) = \hat\sigma^2_m + m\,g(T)$$
with respect to $m$, where $\hat\sigma^2_m$ is the maximum likelihood estimate of $\sigma^2$. It is shown that, subject to weak side conditions, if $g(T) \xrightarrow{a.s.} 0$ and $Tg(T) \xrightarrow{a.s.} \infty$ then this estimate is weakly consistent. It follows that estimates conditional on the chosen value of $m_0$ are asymptotically efficient, and inference undertaken in the usual way is justified in large samples. When $g(T)$ converges to a positive constant with probability one, then in large samples $m_0$ will never be chosen too small, but the probability of choosing $m_0$ too large remains positive.

The results of the paper are stronger than similar ones [R. Shibata (1976), R.J. Bhansali and D.Y. Downham (1977)] in that a known upper bound on $m_0$ is not assumed. The strengthening is made possible by the assumptions of strictly exogenous regressors and normally distributed disturbances. The main results are used to show that if the model selection criteria of H. Akaike (1974), T. Amemiya (1980), C.L. Mallows (1973) or E. Parzen (1979) are used to choose $m_0$ in (1), then in the limit the probability of choosing $m_0$ too large is at least 0.2883. The approach taken by G. Schwarz (1978) leads to a consistent estimator of $m_0$, however. These results are illustrated in a small sampling experiment.
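A toy simulation (my own setup, not the paper's experiment) shows the two regimes of $g(T)$: with $Tg(T)$ bounded the chosen order exceeds $m_0$ with non-vanishing probability, while a Schwarz-like $g(T) = \log T/T$ essentially never over-selects.

```python
# Choose m minimising sigma2_hat(m) + m*g(T) for two choices of g(T).
import numpy as np

rng = np.random.default_rng(6)
T, m0, M, reps = 400, 2, 6, 500
over_aic = over_bic = 0
for _ in range(reps):
    X = rng.normal(size=(T, M))
    y = X[:, :m0] @ np.ones(m0) + rng.normal(size=T)
    s2 = np.zeros(M + 1)
    for m in range(1, M + 1):
        resid = y - X[:, :m] @ np.linalg.lstsq(X[:, :m], y, rcond=None)[0]
        s2[m] = resid @ resid / T                  # ML estimate of sigma^2
    crit_a = [s2[m] + m * 2.0 / T for m in range(1, M + 1)]      # T*g(T) bounded
    crit_b = [s2[m] + m * np.log(T) / T for m in range(1, M + 1)] # T*g(T) -> inf
    over_aic += int(np.argmin(crit_a) + 1 > m0)
    over_bic += int(np.argmin(crit_b) + 1 > m0)
# First frequency stays well away from zero (cf. the 0.2883 bound above),
# the second goes to zero as T grows.
print(over_aic / reps, over_bic / reps)
```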

6.
When a variable is dropped from a least-squares regression equation, there can be no change in sign of any coefficient that is more significant than the coefficient of the omitted variable. More generally, a constrained least squares estimate of a parameter $\beta_j$ must lie in the interval $\left(\hat\beta_j - V_{jj}^{1/2}|t|,\ \hat\beta_j + V_{jj}^{1/2}|t|\right)$, where $\hat\beta_j$ is the unconstrained estimate of $\beta_j$, $V_{jj}^{1/2}$ is the standard error of $\hat\beta_j$, and $t$ is the t-value for testing the (univariate) restriction.
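The special case in the first sentence (dropping one regressor) can be verified directly; the sketch below (simulated data) confirms that every constrained coefficient stays inside the stated interval built from the omitted variable's t-value.

```python
# Drop one regressor and check each remaining coefficient stays within
# beta_hat_j +/- se_j * |t|, where t is the t-value of the dropped variable.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, -0.5, 0.0, 0.3]) + rng.normal(size=n)

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    return beta, s2 * np.linalg.inv(X.T @ X)       # estimates, covariance

beta, V = ols(X, y)
j_drop = 2
t = beta[j_drop] / np.sqrt(V[j_drop, j_drop])      # t-value of omitted variable
beta_c, _ = ols(np.delete(X, j_drop, axis=1), y)   # constrained fit
for b_c, j in zip(beta_c, [j for j in range(k) if j != j_drop]):
    half = np.sqrt(V[j, j]) * abs(t)
    print(j, beta[j] - half <= b_c <= beta[j] + half)   # always True
```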

7.
Much work in econometrics and statistics has been concerned with comparing Bayesian and non-Bayesian estimation results, while much less has involved comparisons of Bayesian and non-Bayesian analyses of hypotheses. Some issues arising in this latter area that are mentioned and discussed in the paper are: (1) Is it meaningful to associate probabilities with hypotheses? (2) What concept of probability is to be employed in analyzing hypotheses? (3) Is a separate theory of testing needed? (4) Must a theory of testing be capable of treating both sharp and non-sharp hypotheses? (5) How is prior information incorporated in testing? (6) Does the use of power functions in practice necessitate the use of prior information? (7) How are significance levels determined when sample sizes are large, and what are the interpretations of P-values and tail areas? (8) How are conflicting results provided by asymptotically equivalent testing procedures to be reconciled? (9) What is the rationale for the '5% accept-reject syndrome' that afflicts econometrics and applied statistics? (10) Does it make sense to test a null hypothesis with no alternative hypothesis present? And (11) how are the results of analyses of hypotheses to be combined with estimation and prediction procedures? Brief discussions of these issues with references to the literature are provided.

Since there is much controversy concerning how hypotheses are actually analyzed in applied work, the results of a small survey relating to 22 articles employing empirical data published in leading economic and econometric journals in 1978 are presented. The major results of this survey indicate that there is widespread use of the 1% and 5% levels of significance in non-Bayesian testing, with no systematic relation between choice of significance level and sample size. Also, power considerations are not generally discussed in empirical studies; in fact there was a discussion of power in only one of the articles surveyed. Further, there was very little formal or informal use of prior information in testing hypotheses, and practically no attention was given to the effects of tests or pre-tests on the properties of subsequent tests or estimation results. These results indicate that there is much room for improvement in applied analyses of hypotheses.

Given the findings of the survey of applied studies, it is suggested that Bayesian procedures for analyzing hypotheses may be helpful in improving applied analyses. In this connection, the paper presents a review of some Bayesian procedures and results for analyzing sharp and non-sharp hypotheses with explicit use of prior information. In general, Bayesian procedures have good sampling properties and enable investigators to compute posterior probabilities and posterior odds ratios associated with alternative hypotheses quite readily. The relationships of several posterior odds ratios to usual non-Bayesian testing procedures are clearly demonstrated. Also, a relation between the P-value or tail area and a posterior odds ratio is described in detail in the important case of hypotheses about a mean of a normal distribution.

Other examples covered in the paper include posterior odds ratios for the hypotheses that (1) $\beta_i > 0$ and $\beta_i < 0$, where $\beta_i$ is a regression coefficient; (2) data are drawn from either of two alternative distributions; (3) $\theta = 0$, $\theta > 0$ and $\theta < 0$, where $\theta$ is the mean of a normal distribution; (4) $\beta = 0$ and $\beta \neq 0$, where $\beta$ is a vector of regression coefficients; (5) $\beta_2 = 0$ vs. $\beta_2 \neq 0$, where $\beta' = (\beta_1' : \beta_2')$ is a vector of regression coefficients and $\beta_1$'s value is unrestricted. In several cases, tabulations of odds ratios are provided. Bayesian versions of the Chow test for equality of regression coefficients and of the Goldfeld-Quandt test for equality of disturbance variances are given. Also, an application of Bayesian posterior odds ratios to a regression model selection problem utilizing the Hald data is reported.

In summary, the results reported in the paper indicate that operational Bayesian procedures for analyzing many hypotheses encountered in model selection problems are available. These procedures yield posterior odds ratios and posterior probabilities for competing hypotheses. These posterior odds ratios represent the weight of the evidence supporting one model or hypothesis relative to another. Given a loss structure, as is well known, one can choose among hypotheses so as to minimize expected loss. Also, with posterior probabilities available and an estimation or prediction loss function, it is possible to choose a point estimate or prediction that minimizes expected loss by averaging over alternative hypotheses or models. Thus the Bayesian approach for analyzing competing models or hypotheses provides a unified framework that is extremely useful in solving a number of model selection problems.
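As a minimal sketch of the P-value versus posterior-odds contrast discussed above (assuming a known-variance normal mean, a $N(0, \tau^2)$ prior under the alternative, and prior odds of one; this is a generic Bayes factor, not the paper's tabulations):

```python
# Posterior odds K_01 for the sharp null theta = 0 about a normal mean.
import numpy as np
from scipy import stats

def posterior_odds(xbar, n, sigma=1.0, tau=1.0):
    se2 = sigma**2 / n
    m0 = stats.norm.pdf(xbar, 0.0, np.sqrt(se2))          # marginal under H0
    m1 = stats.norm.pdf(xbar, 0.0, np.sqrt(se2 + tau**2))  # marginal under H1
    return m0 / m1                                         # prior odds taken as 1

for n in (10, 100, 1000):
    xbar = 1.96 / np.sqrt(n)            # data held at the 5% critical point
    p = 2 * (1 - stats.norm.cdf(xbar * np.sqrt(n)))
    print(n, round(p, 3), round(posterior_odds(xbar, n), 2))
# The P-value stays at 0.05 while the odds in favour of the null grow with n,
# the large-sample tension the paper's P-value discussion addresses.
```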

8.
This paper provides explicit estimates of the eigenvalues of the covariance matrix of an autoregressive process of order one. Explicit error bounds are also established in closed form. Typically, such an error bound is given by $\varepsilon_k = \left(4/(n+1)\right)^{1/2} \rho^2 \sin\!\left(k\pi/(n+1)\right)$, so that the approximations improve as the size of the matrix increases. In other words, the accuracy of the approximations increases as direct computations become more costly.
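The paper's closed-form estimates are not reproduced here, but the setting is easy to explore numerically: the sketch below builds the AR(1) (Kac-Murdock-Szegő) covariance matrix and compares its exact spectrum with the familiar textbook approximation; the parameters are arbitrary.

```python
# Exact eigenvalues of the AR(1) covariance matrix versus the classical
# approximation lambda_k ~ (1 - rho^2)/(1 - 2 rho cos(k pi/(n+1)) + rho^2).
import numpy as np
from scipy.linalg import toeplitz

n, rho = 50, 0.6
Sigma = toeplitz(rho ** np.arange(n))            # (Sigma)_{ij} = rho^{|i-j|}
exact = np.sort(np.linalg.eigvalsh(Sigma))[::-1]
k = np.arange(1, n + 1)
approx = (1 - rho**2) / (1 - 2 * rho * np.cos(k * np.pi / (n + 1)) + rho**2)
approx = np.sort(approx)[::-1]
print(np.max(np.abs(exact - approx)))            # shrinks as n grows
```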

9.
This paper is concerned with the estimation of the model $\operatorname{MED}(y|x) = x\beta$ from a random sample of observations on $(\operatorname{sgn} y, x)$. Manski (1975) introduced the maximum score estimator of the normalized parameter vector $\beta_1 = \beta/\|\beta\|$. In the present paper, strong consistency is proved. It is also proved that the probability that the maximum score estimate lies outside any fixed neighborhood of $\beta_1$ goes to zero at an exponential rate.
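A brute-force toy version of the maximum score idea (two regressors, grid search over the unit circle, heavy-tailed median-zero errors of my choosing):

```python
# Maximise the score sum_i sgn(y_i) * sgn(x_i'b) over unit vectors b.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 2))
beta = np.array([1.0, 2.0])
y_sign = np.sign(X @ beta + rng.standard_cauchy(n))    # MED(y|x) = x beta

angles = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
B = np.column_stack([np.cos(angles), np.sin(angles)])  # candidate unit vectors
scores = y_sign @ np.sign(X @ B.T)                     # score of each candidate
b_hat = B[np.argmax(scores)]
print(b_hat, beta / np.linalg.norm(beta))              # close for large n
```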

10.
11.
We consider an N-period planning horizon with known demands $D_t$, ordering cost $A_t$, procurement cost $C_t$ and holding cost $H_t$ in period $t$. The dynamic lot-sizing problem is one of scheduling procurement $Q_t$ in each period in order to meet demand and minimize cost.

The Wagner-Whitin algorithm for dynamic lot sizing has often been misunderstood as requiring inordinate computational time and storage requirements. We present an efficient computer implementation of the algorithm which requires low core storage, thus enabling it to be potentially useful on microcomputers.

The recursive computations can be stated as follows:
$$M_{jk} = A_j + C_j Q_j + \sum_{t=j}^{k-1} H_t \sum_{r=t+1}^{k} D_r,$$
$$F_k = \min_{1 \le j \le k}\,[F_{j-1} + M_{jk}], \qquad F_0 = 0,$$
where $M_{jk}$ is the cost incurred by procuring in period $j$ for all periods $j$ through $k$, and $F_k$ is the minimal cost for periods 1 through $k$. Our implementation relies on the following observations regarding these computations:
$$M_{j,j} = A_j + C_j D_j,$$
$$M_{j,k+1} = M_{jk} + D_{k+1}\Big(C_j + \sum_{t=j}^{k} H_t\Big).$$
Using this recursive relationship, the number of computations can be greatly reduced. Specifically, $\tfrac{3}{2}N^2 - \tfrac{1}{2}N$ additions and $\tfrac{1}{2}N^2 + \tfrac{1}{2}N$ multiplications are required. This is insensitive to the data.

A FORTRAN implementation on an Amdahl 470 yielded computation times (in $10^{-3}$ seconds) of $T = -0.249 + 0.0239N + 0.00446N^2$. Problems with $N = 500$ were solved in under two seconds.
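The recursions above translate directly into a short dynamic program; the following Python sketch (an illustrative translation, not the paper's FORTRAN code) uses the cheap update $M_{j,k} = M_{j,k-1} + D_k(C_j + \sum_{t=j}^{k-1} H_t)$ together with the forward recursion for $F_k$.

```python
def wagner_whitin(D, A, C, H):
    """Minimal-cost lot sizing. D, A, C, H: demand, ordering cost, unit
    procurement cost and unit holding cost per period (length-N lists)."""
    N = len(D)
    F = [0.0] + [float("inf")] * N   # F[k]: minimal cost for periods 1..k
    pred = [0] * (N + 1)             # pred[k]: ordering period behind F[k]
    cumH = [0.0]                     # cumH[t] = H_1 + ... + H_t
    for h in H:
        cumH.append(cumH[-1] + h)
    M = [0.0] * (N + 1)              # M[j] holds M_{j,k} for the current k
    for k in range(1, N + 1):
        M[k] = A[k - 1] + C[k - 1] * D[k - 1]        # M_{k,k} = A_k + C_k D_k
        for j in range(1, k):                        # M_{j,k-1} -> M_{j,k}
            M[j] += D[k - 1] * (C[j - 1] + cumH[k - 1] - cumH[j - 1])
        for j in range(1, k + 1):
            if F[j - 1] + M[j] < F[k]:
                F[k], pred[k] = F[j - 1] + M[j], j
    orders, k = [], N                # recover the ordering periods
    while k > 0:
        orders.append(pred[k])
        k = pred[k] - 1
    return F[N], orders[::-1]

print(wagner_whitin(D=[20, 50, 10, 50, 50, 10, 20, 40, 20, 30],
                    A=[100] * 10, C=[3] * 10, H=[1] * 10))
```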

12.
13.
A simple method of obtaining asymptotic expansions for the densities of sufficient estimators is described. It is an extension of the one developed by O. Barndorff-Nielsen and D.R. Cox (1979) for exponential families. A series expansion in powers of $n^{-1}$ is derived of which the first term has an error of order $n^{-1}$ which can effectively be reduced to $n^{-3/2}$ by renormalization. The results obtained are similar to those given by H.E. Daniels's (1954) saddlepoint method but the derivations are simpler. A brief treatment of approximations to conditional densities is given. Theorems are proved which extend the validity of the multivariate Edgeworth expansion to parametric families of densities of statistics which need not be standardized sums of independent and identically distributed vectors. These extensions permit the treatment of problems arising in time series analysis. The technique is used by J. Durbin (1980) to obtain approximations to the densities of partial serial correlation coefficients.
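For a feel of the saddlepoint method referred to above, the sketch below approximates the density of the mean of $n$ i.i.d. Exp(1) variables, where the exact answer is a Gamma density; here the relative error is a constant Stirling correction that renormalization removes, echoing the $n^{-3/2}$ remark.

```python
# Saddlepoint density approximation (Daniels 1954) for the mean of n Exp(1).
import numpy as np
from scipy import stats

n = 10
x = np.linspace(0.4, 2.0, 9)
K = lambda t: -np.log(1 - t)          # CGF of Exp(1)
t_hat = 1 - 1 / x                     # solves K'(t) = x
Kpp = 1 / (1 - t_hat) ** 2            # K''(t_hat) = x^2
f_sp = np.sqrt(n / (2 * np.pi * Kpp)) * np.exp(n * (K(t_hat) - t_hat * x))
f_exact = stats.gamma.pdf(x, a=n, scale=1 / n)
print(np.max(np.abs(f_sp / f_exact - 1)))   # ~1/(12n): constant relative error
```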

14.
Insufficient attention has been focused on the ubiquitous problem of excess inventory levels. This paper develops two models of different complexity for determining if stock levels are economically unjustifiable and, if so, for determining inventory retention levels. The models indicate how much stock should be retained for regular use and how much should be disposed of at a salvage price for a given item.

The first model illustrates the basic logic of this approach. The net benefit realized by disposing of some quantity of excess inventory is depicted by the following equation:

NET BENEFIT = SALVAGE REVENUE + HOLDING COST SAVINGS − REPURCHASE COSTS − REORDER COSTS.

This relationship can be depicted mathematically as a parabolic function of the time supply retained. Using conventional optimization, the following optimum solution is obtained:
$$t_0 = \frac{P - P_s + C/Q}{PF} + \frac{Q}{2R},$$
where $t_0$ is the optimum time supply retained; $P$ is the per-unit purchase price; $P_s$ is the per-unit salvage price; $C$ is the ordering cost; $Q$ is the usual item lot size; $F$ is the holding cost fraction; and $R$ is the annual demand for the item. Any items in excess of the optimum time supply should be sold at the salvage price.

The second model adjusts holding cost savings, repurchase costs, and reorder costs to account for present value considerations and for inflation. The following optimum relationship is derived:
$$\left(\frac{PFR}{2k} - \frac{PFtR}{2}\right)e^{-kt} + \left[\frac{PFQ}{2} + PQ(i-k) + \frac{C(i-k)}{e^{(i-k)Q/R} - 1}\right]e^{(i-k)t} - P_sR - \frac{PFR}{2k} = 0,$$
where $i$ is the expected inflation rate and $k$ is the required rate of return. Unfortunately this relationship cannot be solved analytically for $t_0$; Newton's Method can be used to find a numerical solution.

The solutions obtained by the two models are compared. Not surprisingly, the present value correction tends to reduce the economic time supply to be retained, since repurchase costs and reorder costs are incurred in the future while salvage revenue and holding cost savings are realized immediately. Additionally, both models are used to derive a relationship describing the minimum economic salvage value, which is the minimum salvage price for which excess inventory should be sold.

The simple model, which does not correct for time values, can be used by any organization with the sophistication to use an EOQ. The present value model, which includes an inflation correction, is more complex but can readily be used on a microcomputer. These models are appropriate for independent-demand items. It is believed that these models can reduce inventory investment and improve bottom-line performance.
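Plugging hypothetical numbers into the first model's closed form (as reconstructed above) gives a feel for the magnitudes; every value below is invented for illustration.

```python
# Economic retention time for the simple (undiscounted) model.
P, Ps, C, Q, F, R = 10.0, 9.0, 50.0, 400.0, 0.25, 2000.0   # hypothetical
t0 = (P - Ps + C / Q) / (P * F) + Q / (2 * R)   # optimum time supply (years)
retain = R * t0                                  # units kept for regular use
stock = 3000.0                                   # hypothetical units on hand
dispose = max(0.0, stock - retain)               # sell this much at Ps
print(round(t0, 3), round(retain, 1), round(dispose, 1))
```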

15.
16.
Experience using twenty-one actual economic series suggests that using the Box-Cox transform does not consistently produce superior forecasts. The procedure used was to consider transformations $x^{(\lambda)} = (x^\lambda - 1)/\lambda$, where $\lambda$ is chosen by maximum likelihood, a linear ARIMA model fitted to $x^{(\lambda)}$ and forecasts produced, and finally forecasts constructed for the original series. A main problem found was that no value of $\lambda$ appeared to produce normally distributed data, and so the maximum likelihood procedure was inappropriate.
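The procedure is easy to reproduce in outline with scipy's Box-Cox utilities; the sketch below uses a simulated positive series and a naive AR(1) stand-in for the ARIMA step (no claim about the paper's 21 series).

```python
# Box-Cox by maximum likelihood, forecast on the transformed scale, invert.
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(2)
x = np.exp(np.cumsum(rng.normal(0.01, 0.05, 200)))   # positive trending series
y, lam = stats.boxcox(x)                             # ML estimate of lambda
phi = np.polyfit(y[:-1], y[1:], 1)                   # naive AR(1) fit
y_fc = phi[0] * y[-1] + phi[1]                       # one-step forecast of y
print(lam, inv_boxcox(y_fc, lam))                    # forecast on original scale
```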

17.
This paper investigates a theoretical relationship between the rank-size rule and city size distributions. First, a method of relating a certain city size distribution to ranked city size is formulated by employing order statistics. Second, it is shown that there do not exist city size distributions which satisfy the rank-size rule. Third, an alternative rank-size rule is proposed as $E(P_{(r)})\,\Gamma(r)/\Gamma(r-\gamma) = c$, which is equivalent to the Pareto city size distribution. Last, an alternative statistical test for the rank-size rule is proposed to overcome a shortcoming of the conventional test. Along this line, the Hokkaido region data is analyzed.
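Under my reading of the formula above ($\gamma = 1/a$ for a Pareto exponent $a$), the proposed rule is easy to verify by simulation: the adjusted product is flat in the rank $r$, while the plain rank-size product $P_{(r)} \cdot r$ drifts unless $a = 1$.

```python
# Monte Carlo check: E(P_(r)) * Gamma(r)/Gamma(r - gamma) is constant in r
# for Pareto sizes with gamma = 1/a (my reading, not quoted from the paper).
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
a, N, reps = 1.2, 200, 20000
gamma_ = 1.0 / a
sizes = rng.pareto(a, size=(reps, N)) + 1.0     # classical Pareto, x_m = 1
ranked = -np.sort(-sizes, axis=1)               # descending order statistics
EP = ranked.mean(axis=0)                        # Monte Carlo E(P_(r))
r = np.arange(1, N + 1)
adj = EP * np.exp(gammaln(r) - gammaln(r - gamma_))
print(adj[4:14].round(2))                       # roughly constant in r
print((EP * r)[4:14].round(2))                  # plain rank-size product drifts
```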

18.
This paper is concerned with the discrete time stochastic volatility model $Y_i = \exp(X_i/2)\eta_i$, $X_{i+1} = b(X_i) + \sigma(X_i)\xi_{i+1}$, where only $(Y_i)$ is observed. The model is rewritten (taking $Z_i = \log Y_i^2$) as a particular hidden model: $Z_i = X_i + \varepsilon_i$, $X_{i+1} = b(X_i) + \sigma(X_i)\xi_{i+1}$, where $(\xi_i)$ and $(\varepsilon_i)$ are independent sequences of i.i.d. noise. Moreover, the sequences $(X_i)$ and $(\varepsilon_i)$ are independent and the distribution of $\varepsilon$ is known. Our aim is then to estimate the functions $b$ and $\sigma^2$ when only observations $Z_1, \ldots, Z_n$ are available. We propose to estimate $bf$ and $(b^2+\sigma^2)f$ and study the integrated mean square error of projection estimators of these functions on automatically selected projection spaces. By a ratio strategy, estimators of $b$ and $\sigma^2$ are then deduced. The mean square risk of the resulting estimators is studied and their rates are discussed. Lastly, simulation experiments are provided: constants in the penalty functions defining the estimators are calibrated and the quality of the estimators is checked on several examples.
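The reduction to the hidden model is easy to see by simulation; the sketch below (toy drift and volatility of my choosing, Gaussian $\eta$) forms $Z_i = \log Y_i^2 = X_i + \varepsilon_i$ and checks the known mean of $\varepsilon_i = \log \eta_i^2$.

```python
# Simulate the SV model and its hidden-model rewrite Z = X + eps.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
X = np.zeros(n)
for i in range(n - 1):
    # toy choices b(x) = 0.9x, sigma(x) = 0.3 (not from the paper)
    X[i + 1] = 0.9 * X[i] + 0.3 * rng.normal()
eta = rng.normal(size=n)
Y = np.exp(X / 2) * eta                 # only (Y_i) would be observed
Z = np.log(Y ** 2)                      # Z_i = X_i + eps_i, eps_i = log eta_i^2
# For Gaussian eta, E[eps] = -(log 2 + Euler-Mascheroni) ~ -1.27, known shift
print((Z - X).mean())
```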

19.
This paper considers a continuous three-phase polynomial regression model with two threshold points for dependent data with heteroscedasticity. We assume the model is polynomial of order zero in the middle regime, and is polynomial of higher orders elsewhere. We denote this model by $\mathcal{M}_2$, which includes models with one or no threshold points, denoted by $\mathcal{M}_1$ and $\mathcal{M}_0$, respectively, as special cases. We provide an ordered iterative least squares (OiLS) method when estimating $\mathcal{M}_2$ and establish the consistency of the OiLS estimators under mild conditions. When the underlying model is $\mathcal{M}_1$ and is $(d_0-1)$th-order differentiable but not $d_0$th-order differentiable at the threshold point, we further show the $O_p(N^{-1/(d_0+2)})$ convergence rate of the OiLS estimators, which can be faster than the $O_p(N^{-1/(2d_0)})$ convergence rate given in Feder when $d_0 \ge 3$. We also apply a model-selection procedure for selecting $\mathcal{M}_\kappa$, $\kappa = 0, 1, 2$. When the underlying model exists, we establish the selection consistency under the aforementioned conditions. Finally, we conduct simulation experiments to demonstrate the finite-sample performance of our asymptotic results.

20.