Similar Articles
20 similar articles found.
1.
This article presents a unified treatment of simultaneous system estimation. A general class of full-information estimators is proposed, called the K-matrix-class (KMC). It is shown that the K-matrix-class includes both full-information maximum-likelihood and three-stage least-squares estimators as special cases, and that the k-class can be regarded as a subclass of the K-matrix-class. Conditions under which KMC estimators are consistent (similar to those of the k-class estimators) are given. Furthermore, as a full-information generalization of the double k-class estimators, the double K-matrix-class (DKMC) estimators are proposed.

2.
In credibility theory, an unobservable random vector Y is approximated by a random vector Ŷ in a pre-assigned set A of admitted estimators for Y. The credibility approximation Ŷ for Y is best in the sense that it is the vector V ∈ A minimizing the distance between V and Y. The credibility estimator depends on observable random variables but also on unknown structure parameters. In practical models, the latter can be estimated from realizations of the observable random variables.

3.
A sufficient condition is derived in this paper for the consistency and asymptotic normality of the k-class estimators (k stochastic or nonstochastic) as the concentration parameter increases indefinitely, with the sample size either staying fixed or also increasing. It is further shown that the limited-information maximum likelihood estimator satisfies this condition. Since large sample size implies a large concentration parameter, but not vice versa, the usual conditions for consistency and asymptotic normality of the k-class estimators as the sample size increases can be inferred from the results given in this paper. But more importantly, the results in this paper shed further light on the small-sample properties of the stochastic k-class estimators and can serve as a starting point for the derivation of asymptotic approximations for these estimators as the concentration parameter goes to infinity, while the sample size either stays fixed or also goes to infinity.
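For reference, the k-class family that recurs throughout these abstracts can be sketched numerically as follows (a minimal illustration with made-up data; `Z` collects the right-hand-side variables of the structural equation and `W` the instruments):

```python
import numpy as np

def k_class(y, Z, W, k):
    """k-class estimator of delta in y = Z delta + u with instruments W.
    k = 0 reduces to OLS, k = 1 to two-stage least squares (2SLS)."""
    n = len(y)
    P_W = W @ np.linalg.solve(W.T @ W, W.T)   # projection onto the instrument space
    M_W = np.eye(n) - P_W                     # annihilator of the instruments
    A = Z.T @ (np.eye(n) - k * M_W) @ Z
    b = Z.T @ (np.eye(n) - k * M_W) @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
n = 200
W = rng.normal(size=(n, 3))                              # instruments
Z = W @ rng.normal(size=(3, 2)) + rng.normal(size=(n, 2))
y = Z @ np.array([1.0, -2.0]) + rng.normal(size=n)

ols = np.linalg.lstsq(Z, y, rcond=None)[0]
assert np.allclose(k_class(y, Z, W, 0.0), ols)           # k = 0 is OLS
P = W @ np.linalg.solve(W.T @ W, W.T)
tsls = np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)
assert np.allclose(k_class(y, Z, W, 1.0), tsls)          # k = 1 is 2SLS
```

Intermediate values 0 < k < 1, and stochastic k as in the LIML estimator, interpolate between these two endpoints.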

4.
In our earlier paper [Srivastava, Agnihotri and Dwivedi (1980)], the dominance of the double k-class over the k-class with respect to the exact mean squared error matrix criterion was established. It was observed that, given a member of the k-class, one can choose a member of the double k-class that provides an improved estimator of the coefficients. This result prompted us to study the exact finite-sample properties of the double k-class estimator. For this, we have considered a structural equation containing two endogenous variables and have investigated the properties of double k-class estimators of the coefficients of the explanatory endogenous variables, assuming the characterizing scalars to be non-stochastic.

5.
In this paper a simple modification of the usual k-class estimators has been suggested so that for 0 ≦ k ≦ 1 the problem of the non-existence of moments disappears. These modified estimators can be interpreted either as Bayes estimators or as constrained estimators subject to the restriction that the squared length of the coefficient vector is less than or equal to a given number.

6.
It is well known that linear equations subject to cross-equation aggregation restrictions can be ‘stacked’ and estimated simultaneously. However, if every equation contains the same set of regressors, a number of single-equation estimation procedures can be employed. The applicability of ordinary least squares is widely recognized, but the article demonstrates that the class of applicable estimators is much broader than OLS. Under specified conditions, the class includes instrumental variables, generalized least squares, ridge regression, two-stage least squares, k-class estimators, and indirect least squares. Transformations of the original equations and other related matters are also discussed.
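The core claim, that with a common regressor matrix single-equation methods remain valid, can be checked numerically: for a two-equation system sharing the same X, stacked GLS reproduces equation-by-equation OLS (a sketch with made-up data; `Sigma` plays the role of the cross-equation error covariance):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 3
X = rng.normal(size=(n, p))
Y = rng.normal(size=(n, 2))                    # two left-hand-side variables

# Equation-by-equation OLS on the common regressor matrix X.
ols = np.linalg.solve(X.T @ X, X.T @ Y)        # p x 2, one column per equation

# Stacked GLS with an arbitrary cross-equation error covariance Sigma.
Sigma = np.array([[2.0, 0.7], [0.7, 1.0]])
Xs = np.kron(np.eye(2), X)                     # block-diagonal stacked design
ys = Y.T.reshape(-1)                           # eq-1 observations, then eq-2
Om_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
gls = np.linalg.solve(Xs.T @ Om_inv @ Xs, Xs.T @ Om_inv @ ys)

# With identical regressors, system GLS collapses to OLS.
assert np.allclose(gls.reshape(2, p).T, ols)
```

The identity holds because X̃′Ω⁻¹X̃ = Σ⁻¹ ⊗ (X′X), so the Σ⁻¹ factors cancel and the GLS solution reduces to (I ⊗ (X′X)⁻¹X′) applied to the stacked data.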

7.
The TSLS and LIML estimators are evaluated by means of a new class of limited-information estimators, the so-called Ω-class estimators. Under certain assumptions the Ω-class estimator is a maximum-likelihood estimator. These assumptions are superfluous, however, if we view the Ω-class as a class of minimum-distance estimators; all the members are shown to be consistent under general conditions. Besides the TSLS and LIML estimators, some other interesting members are introduced, and it is shown that, under certain conditions, the Ω-class estimators are weighted averages of different TSLS estimators. The use of TSLS in small samples is criticized; an alternative estimator is proposed.

8.
In this paper ridge-like Bayesian estimators of structural coefficients have been used to form partially restricted reduced-form estimators. These partially restricted reduced-form estimators are simple in form and possess finite sampling moments and risk, in contrast to other restricted reduced-form estimators that possess no finite moments and have infinite risk relative to quadratic loss functions. The usual k-class implied partially restricted reduced-form estimators with 0 ≦ k ≦ 1 do not possess finite moments unless the degree of overidentification (or the excess of sample size over the number of coefficients) of the structural equation being estimated is suitably restricted.

9.
Given an arbitrary function x: R^l → R^l satisfying Walras' law and homogeneity, Debreu decomposed x into the sum of l ‘individually rational’ functions: x(p) = Σ(k=1..l) x̄k(p). Here we find explicit utility functions uk, constructed on the basis of a simple geometric intuition, which give rise to Debreu's excess demands x̄k(p).

10.
In the usual linear model y = Xβ + u, the error vector u is not observable, and the vector r of least-squares residuals has a singular covariance matrix that depends on the design matrix X. We approximate u by a vector r1 = G(JA′y + Kz) of uncorrelated ‘residuals’, where G and (J, K) are orthogonal matrices, A′X = 0 and A′A = I, while z is either 0 or a random vector uncorrelated with u satisfying E(z) = E(J′u) = 0, V(z) = V(J′u) = σ²I. We prove that r1 − r is uncorrelated with r − u for any such r1, extending the results of Neudecker (1969). Building on results of Hildreth (1971) and Tiao and Guttman (1967), we show that the BAUS residual vector rh = r + P1z, where P1 is an orthonormal basis for X, minimizes each characteristic root of V(r1 − u), while the vector rb of Theil's BLUS residuals minimizes each characteristic root of V(Jrb − u), cf. Grossman and Styan (1972). We find that tr V(rh − u) < tr V(Jrb − u) if and only if the average of the singular values of P1K is less than 1/2, and give examples to show that BAUS is often better than BLUS in this sense.
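The singularity of the least-squares residual covariance that motivates constructions such as BLUS and BAUS is easy to verify numerically (a sketch; M = I − X(X′X)⁻¹X′ is the residual-maker matrix, so V(r) = σ²M has rank n − p):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 4
X = rng.normal(size=(n, p))
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # residual-maker matrix

# V(r) = sigma^2 * M is singular: M is idempotent with rank n - p,
# so p of its eigenvalues are (numerically) zero.
assert np.linalg.matrix_rank(M) == n - p
assert np.allclose(M @ M, M)                        # idempotent

y = X @ rng.normal(size=p) + rng.normal(size=n)
r = M @ y                                           # least-squares residuals
assert np.allclose(X.T @ r, 0)                      # residuals orthogonal to X
```

Because M depends on X, the residual vector r cannot have the scalar covariance σ²I of u itself, which is exactly the gap the uncorrelated-residual constructions close.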

11.
Unlike other popular measures of income inequality, the Gini coefficient is not decomposable, i.e., the coefficient G(X) of a composite population X = X1 ∪ … ∪ Xr cannot be computed in terms of the sizes, mean incomes and Gini coefficients of the components Xi. In this paper upper and lower bounds (best possible for r = 2) for G(X) in terms of these data are given. For example, G(X1 ∪ … ∪ Xr) ≧ Σ αi G(Xi), where αi is the proportion of the population in Xi. One of the tools used, which may be of interest for other applications, is a lower bound for ∫₀^∞ f(x)g(x)dx (a converse to Cauchy's inequality) for monotone decreasing functions f and g.
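The within-group lower bound can be checked with the mean-difference form of the Gini coefficient (a sketch; G = (mean absolute difference)/(2 × mean) is one standard definition, not necessarily the paper's exact formulation):

```python
import numpy as np

def gini(x):
    """Gini coefficient via the mean absolute difference: G = MAD / (2 * mean)."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()
    return mad / (2 * x.mean())

# Two components with equal internal inequality but very different mean incomes.
X1, X2 = [1.0, 2.0], [1000.0, 2000.0]
X = X1 + X2
alpha = [len(X1) / len(X), len(X2) / len(X)]    # population proportions
lower = alpha[0] * gini(X1) + alpha[1] * gini(X2)
assert gini(X) >= lower - 1e-12                 # G(X1 ∪ X2) ≥ Σ αi G(Xi)
```

Here both components have G(Xi) = 1/6, so the bound is 1/6, while the combined population's Gini coefficient is substantially larger because of the between-group income gap.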

12.
The Stein-rule estimator for regression problems has been studied by several authors, including Sclove (1968) and others listed in Vinod's (1978) survey. Ullah and Ullah (1978) provide expressions for the mean squared error (MSE) of the double k-class (KK) estimator with parameters k1 and k2. When k2 = 1 the Stein-rule estimator becomes a special case of KK, and an optimal choice of k1 is known. This paper explores the optimal theoretical choice of k1 and k2. We note that negative choices of k2 are permissible, and that there is a large range of choices for k1 and k2 where the MSE of the Stein-rule estimator can be reduced for regression problems based on multicollinear data. A simulation experiment is included.

13.
For f an anonymous exactly strongly consistent social choice function (ESC SCF) and x an alternative, define bx = b(f)x to be the size of a minimal blocking coalition for x. By studying the correspondence between f and {b(f)x}, we establish the existence, uniqueness and monotonicity of ESC SCFs. We also prove the following conjecture of B. Peleg: a non-constant anonymous ESC SCF depends on the knowledge of every player's full preference profile.

14.
15.
A simple method of obtaining asymptotic expansions for the densities of sufficient estimators is described. It is an extension of the one developed by O. Barndorff-Nielsen and D.R. Cox (1979) for exponential families. A series expansion in powers of n^(-1) is derived, of which the first term has an error of order n^(-1) that can effectively be reduced to n^(-3/2) by renormalization. The results obtained are similar to those given by H.E. Daniels's (1954) saddlepoint method, but the derivations are simpler. A brief treatment of approximations to conditional densities is given. Theorems are proved which extend the validity of the multivariate Edgeworth expansion to parametric families of densities of statistics which need not be standardized sums of independent and identically distributed vectors. These extensions permit the treatment of problems arising in time series analysis. The technique is used by J. Durbin (1980) to obtain approximations to the densities of partial serial correlation coefficients.

16.
Asymptotic expansions of three alternative classes of structural variance estimators associated with the k-class estimators of structural coefficients are derived for two parameter sequences: a sequence in which the non-centrality parameter increases while the sample size stays fixed (called the large-μ or small-disturbance sequence), and one in which the number of observations increases. The accuracy of the approximations to small-sample distributions is numerically examined with the help of Monte Carlo studies. Properties of the sum of squared residuals of an estimated structural equation are also found from our study.

17.
We consider an N-period planning horizon with known demand Dt, ordering cost At, procurement cost Ct and holding cost Ht in period t. The dynamic lot-sizing problem is one of scheduling procurement Qt in each period in order to meet demand and minimize cost. The Wagner-Whitin algorithm for dynamic lot sizing has often been misunderstood as requiring inordinate computational time and storage. We present an efficient computer implementation of the algorithm which requires low core storage, thus enabling it to be potentially useful on microcomputers. The recursive computations can be stated as follows:

Mjk = Aj + Cj·Qj + Σ(t=j..k−1) Ht · Σ(r=t+1..k) Dr,
Fk = min(1≦j≦k) [F(j−1) + Mjk],  F0 = 0,

where Mjk is the cost incurred by procuring in period j for all periods j through k, and Fk is the minimal cost for periods 1 through k. Our implementation relies on the following observations regarding these computations:

Mjj = Aj + Cj·Dj,
Mj,k+1 = Mjk + D(k+1)·(Cj + Σ(t=j..k) Ht).

Using this recursive relationship, the number of computations can be greatly reduced. Specifically, (3/2)N² − (1/2)N additions and (1/2)N² + (1/2)N multiplications are required. This is insensitive to the data. A FORTRAN implementation on an Amdahl 470 yielded computation times (in 10^(-3) seconds) of T = −.249 + .0239N + .00446N². Problems with N = 500 were solved in under two seconds.

18.
19.
Consider the model

A(L)xt = B(L)yt + C(L)zt = ut,  t = 1,…,T,   (1)

where A(L) = (B(L) : C(L)) is a matrix of polynomials in the lag operator, so that L^r·xt = x(t−r), and yt is a vector of n endogenous variables,

B(L) = Σ(s=0..k) Bs·L^s,

with B0 = In and the remaining Bs n × n square matrices,

C(L) = Σ(s=0..k) Cs·L^s,

and Cs is n × m. Suppose that ut satisfies R(L)ut = et, where

R(L) = Σ(s=0..r) Rs·L^s,

R0 = In, and Rs is an n × n square matrix. et may be white noise, or generated by a vector moving-average stochastic process.

Now writing Ψ(L) = R(L)A(L), it is assumed that, ignoring the implicit restrictions which follow from eq. (1), Ψ(L) can be consistently estimated, so that if the equation Ψ(L)xt = et has a moving-average error stochastic process, suitable conditions [see E.J. Hannan] for the identification of the unconstrained model are satisfied, and that the appropriate conditions (lack of multicollinearity) on the data second-moment matrices discussed by Hannan are also satisfied. The essential conditions for identification of A(L) and R(L) can then be considered by requiring that, for the true Ψ(L), eq. (1) has a unique solution for A(L) and R(L).

There are three types of lack of identification to be distinguished. In the first, there are a finite number of alternative factorisations. Apart from a factorisation condition which will be satisfied with probability one, a necessary and sufficient condition for lack of identification is that A(L) has a latent root λ in the sense that, for some non-zero vector β,

β′A(λ) = 0.

The second concept of lack of identification corresponds to the Fisher conditions for local identifiability based on the derivatives of the constraints. It is shown that a necessary and sufficient condition for the model to be locally unidentified in this sense is that R(L) and A(L) have a common latent root, i.e., that for some vectors δ and β,

R(λ)δ = 0 and β′A(λ) = 0.

Finally, it is shown that only if further conditions are satisfied will this lead to local unidentifiability in the sense that there are solutions of the equation Ψ(z) = R(z)A(z) in any neighbourhood of the true values.

20.
Much work in econometrics and statistics has been concerned with comparing Bayesian and non-Bayesian estimation results, while much less has involved comparisons of Bayesian and non-Bayesian analyses of hypotheses. Some issues arising in this latter area that are mentioned and discussed in the paper are: (1) Is it meaningful to associate probabilities with hypotheses? (2) What concept of probability is to be employed in analyzing hypotheses? (3) Is a separate theory of testing needed? (4) Must a theory of testing be capable of treating both sharp and non-sharp hypotheses? (5) How is prior information incorporated in testing? (6) Does the use of power functions in practice necessitate the use of prior information? (7) How are significance levels determined when sample sizes are large, and what are the interpretations of P-values and tail areas? (8) How are conflicting results provided by asymptotically equivalent testing procedures to be reconciled? (9) What is the rationale for the ‘5% accept-reject syndrome’ that afflicts econometrics and applied statistics? (10) Does it make sense to test a null hypothesis with no alternative hypothesis present? And (11) how are the results of analyses of hypotheses to be combined with estimation and prediction procedures? Brief discussions of these issues, with references to the literature, are provided.

Since there is much controversy concerning how hypotheses are actually analyzed in applied work, the results of a small survey relating to 22 articles employing empirical data published in leading economic and econometric journals in 1978 are presented. The major results of this survey indicate that there is widespread use of the 1% and 5% levels of significance in non-Bayesian testing, with no systematic relation between choice of significance level and sample size. Also, power considerations are not generally discussed in empirical studies; in fact, there was a discussion of power in only one of the articles surveyed. Further, there was very little formal or informal use of prior information in testing hypotheses, and practically no attention was given to the effects of tests or pre-tests on the properties of subsequent tests or estimation results. These results indicate that there is much room for improvement in applied analyses of hypotheses.

Given the findings of the survey of applied studies, it is suggested that Bayesian procedures for analyzing hypotheses may be helpful in improving applied analyses. In this connection, the paper presents a review of some Bayesian procedures and results for analyzing sharp and non-sharp hypotheses with explicit use of prior information. In general, Bayesian procedures have good sampling properties and enable investigators to compute posterior probabilities and posterior odds ratios associated with alternative hypotheses quite readily. The relationships of several posterior odds ratios to the usual non-Bayesian testing procedures are clearly demonstrated. Also, a relation between the P-value or tail area and a posterior odds ratio is described in detail in the important case of hypotheses about the mean of a normal distribution.

Other examples covered in the paper include posterior odds ratios for the hypotheses that (1) βi > 0 and βi < 0, where βi is a regression coefficient; (2) the data are drawn from either of two alternative distributions; (3) θ = 0, θ > 0 and θ < 0, where θ is the mean of a normal distribution; (4) β = 0 and β ≠ 0, where β is a vector of regression coefficients; (5) β2 = 0 vs. β2 ≠ 0, where β′ = (β′1, β′2) is a vector of regression coefficients and β1's value is unrestricted. In several cases, tabulations of odds ratios are provided. Bayesian versions of the Chow test for equality of regression coefficients and of the Goldfeld-Quandt test for equality of disturbance variances are given. Also, an application of Bayesian posterior odds ratios to a regression model selection problem utilizing the Hald data is reported.

In summary, the results reported in the paper indicate that operational Bayesian procedures for analyzing many hypotheses encountered in model selection problems are available. These procedures yield posterior odds ratios and posterior probabilities for competing hypotheses. These posterior odds ratios represent the weight of the evidence supporting one model or hypothesis relative to another. Given a loss structure, as is well known, one can choose among hypotheses so as to minimize expected loss. Also, with posterior probabilities available and an estimation or prediction loss function, it is possible to choose a point estimate or prediction that minimizes expected loss by averaging over alternative hypotheses or models. Thus the Bayesian approach for analyzing competing models or hypotheses provides a unified framework that is extremely useful in solving a number of model selection problems.
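The kind of posterior odds computation discussed above can be sketched for the normal-mean case (an illustrative Bayes factor under a conjugate N(0, τ²) prior on the mean under H1; this is a textbook construction, not necessarily the paper's exact formula, and all parameter values are made up):

```python
import math

def posterior_odds_sharp_null(xbar, n, sigma, tau, prior_odds=1.0):
    """Posterior odds of H0: theta = 0 vs H1: theta ~ N(0, tau^2),
    given a sample mean xbar of n observations from N(theta, sigma^2).
    Posterior odds = prior odds * Bayes factor, where the Bayes factor is the
    ratio of the marginal densities of xbar under H0 and H1."""
    s2 = sigma ** 2 / n                    # sampling variance of xbar

    def normal_pdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

    # Under H0, xbar ~ N(0, s2); under H1, xbar ~ N(0, s2 + tau^2) marginally.
    bf01 = normal_pdf(xbar, s2) / normal_pdf(xbar, s2 + tau ** 2)
    return prior_odds * bf01

# A sample mean of exactly zero favours the sharp null ...
assert posterior_odds_sharp_null(0.0, 50, 1.0, 1.0) > 1.0
# ... while a mean several standard errors from zero favours the alternative.
assert posterior_odds_sharp_null(1.0, 50, 1.0, 1.0) < 1.0
```

The same ratio, evaluated as the sample size grows with the tail area held fixed, is what drives the divergence between P-values and posterior odds that the paper discusses.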


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号