Similar Documents (20 results)
1.
When a variable is dropped from a least-squares regression equation, there can be no change in sign of any coefficient that is more significant than the coefficient of the omitted variable. More generally, a constrained least-squares estimate of a parameter $\beta_j$ must lie in the interval $(\hat\beta_j - V_{jj}^{1/2}|t|,\ \hat\beta_j + V_{jj}^{1/2}|t|)$, where $\hat\beta_j$ is the unconstrained estimate of $\beta_j$, $V_{jj}^{1/2}$ is the standard error of $\hat\beta_j$, and $t$ is the $t$-value for testing the (univariate) restriction.
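The interval result is easy to check numerically. Below is a minimal sketch: the simulated design, the hand-rolled ols helper, and all constants are illustrative assumptions, not part of the paper. Dropping the last regressor corresponds to the restriction $\beta_4 = 0$, so $t$ is that coefficient's $t$-value in the unconstrained fit.

```python
# Check: each retained coefficient moves by at most its standard error
# times |t| of the omitted variable when that variable is dropped.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([1.0, 2.0, -1.0, 0.3]) + rng.normal(size=n)

def ols(X, y):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])   # unbiased variance estimate
    se = np.sqrt(s2 * np.diag(XtX_inv))
    return beta, se

beta_u, se_u = ols(X, y)           # unconstrained fit
t_omit = beta_u[3] / se_u[3]       # t-value of the variable to be dropped
beta_c, _ = ols(X[:, :3], y)       # constrained fit (last column dropped)

for j in range(3):
    assert abs(beta_c[j] - beta_u[j]) <= se_u[j] * abs(t_omit) + 1e-10
```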

2.
Consider the model
$$A(L)x_t = B(L)y_t + C(L)z_t = u_t, \qquad t = 1, \dots, T,$$
where $A(L) = (B(L) : C(L))$ is a matrix of polynomials in the lag operator, so that $L^r x_t = x_{t-r}$, and $y_t$ is a vector of $n$ endogenous variables; $B(L) = \sum_{s=0}^{k} B_s L^s$ with $B_0 = I_n$ and the remaining $B_s$ being $n \times n$ square matrices; and $C(L) = \sum_{s=0}^{k} C_s L^s$ with each $C_s$ of dimension $n \times m$.

Suppose that $u_t$ satisfies $R(L)u_t = e_t$, where $R(L) = \sum_{s=0}^{r} R_s L^s$, $R_0 = I_n$, and each $R_s$ is an $n \times n$ square matrix; $e_t$ may be white noise, or generated by a vector moving-average stochastic process.

Now writing $\Psi(L) = R(L)A(L)$, it is assumed that, ignoring the implicit restrictions which follow from eq. (1), $\Psi(L)$ can be consistently estimated, so that if the equation $\Psi(L)x_t = e_t$ has a moving-average error stochastic process, suitable conditions [see E.J. Hannan] for the identification of the unconstrained model are satisfied, and that the appropriate conditions (lack of multicollinearity) on the data second-moment matrices discussed by Hannan are also satisfied. The essential conditions for identification of $A(L)$ and $R(L)$ can then be considered by requiring that, for the true $\Psi(L)$, eq. (1) has a unique solution for $A(L)$ and $R(L)$.

There are three types of lack of identification to be distinguished. In the first there are a finite number of alternative factorisations. Apart from a factorisation condition which will be satisfied with probability one, a necessary and sufficient condition for lack of identification is that $A(L)$ has a latent root $\lambda$, in the sense that $\beta' A(\lambda) = 0$ for some non-zero vector $\beta$.

The second concept of lack of identification corresponds to the Fisher conditions for local identifiability based on the derivatives of the constraints. It is shown that a necessary and sufficient condition for the model to be locally unidentified in this sense is that $R(L)$ and $A(L)$ have a common latent root, i.e., that $R(\lambda)\delta = 0$ and $\beta' A(\lambda) = 0$ for some vectors $\delta$ and $\beta$.

Finally, it is shown that only if further conditions are satisfied will this lead to local unidentifiability in the sense that there are solutions of the equation $\Psi(z) = R(z)A(z)$ in any neighbourhood of the true values.
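The factorisation $\Psi(L) = R(L)A(L)$ at the heart of the identification argument is simply a convolution of coefficient matrices. The sketch below, with made-up dimensions and values, shows how $\Psi$'s coefficients are formed; it is a minimal illustration, not the paper's estimation procedure.

```python
# Multiply two matrix lag polynomials by convolving their coefficients.
import numpy as np

def lagpoly_mul(R, A):
    """Coefficient matrices of Psi(L) = R(L) A(L) by convolution."""
    out = [np.zeros((R[0].shape[0], A[0].shape[1]))
           for _ in range(len(R) + len(A) - 1)]
    for s, Rs in enumerate(R):
        for q, Aq in enumerate(A):
            out[s + q] += Rs @ Aq
    return out

rng = np.random.default_rng(1)
n, m = 2, 1
A = [np.hstack([np.eye(n), rng.normal(size=(n, m))]),   # A_0 = (B_0 : C_0), B_0 = I_n
     rng.normal(size=(n, n + m))]                       # A_1 (k = 1)
R = [np.eye(n), rng.normal(size=(n, n))]                # R_0 = I_n, R_1 (r = 1)
Psi = lagpoly_mul(R, A)                                 # r + k + 1 = 3 coefficient matrices
```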

3.
For $f$ an anonymous exactly strongly consistent social choice function (ESC SCF) and $x$ an alternative, define $b_x = b(f)_x$ to be the size of a minimal blocking coalition for $x$. By studying the correspondence between $f$ and $\{b(f)_x\}$, we establish the existence, uniqueness and monotonicity of ESC SCFs. We also prove the following conjecture of B. Peleg: a non-constant anonymous ESC SCF depends on the knowledge of every player's full preference profile.

4.
Unlike other popular measures of income inequality, the Gini coefficient is not decomposable, i.e., the coefficient $G(X)$ of a composite population $X = X_1 \cup \dots \cup X_r$ cannot be computed in terms of the sizes, mean incomes and Gini coefficients of the components $X_i$. In this paper upper and lower bounds (best possible for $r = 2$) for $G(X)$ in terms of these data are given. For example, $G(X_1 \cup \dots \cup X_r) \ge \sum \alpha_i G(X_i)$, where $\alpha_i$ is the proportion of the population in $X_i$. One of the tools used, which may be of interest for other applications, is a lower bound for $\int_0 f(x)g(x)\,dx$ (a converse to Cauchy's inequality) for monotone decreasing functions $f$ and $g$.
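The lower bound quoted in the abstract is easy to illustrate numerically. A minimal sketch follows; the two lognormal sample populations and the standard sample Gini formula are assumptions for illustration only.

```python
# Check the bound G(X1 u X2) >= alpha1*G(X1) + alpha2*G(X2) on sample data.
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Standard sample formula: G = sum_i (2i - n - 1) x_i / (n * sum x)
    return ((2 * np.arange(1, n + 1) - n - 1) @ x) / (n * x.sum())

X1 = np.random.default_rng(2).lognormal(0.0, 0.5, size=300)
X2 = np.random.default_rng(3).lognormal(1.0, 1.0, size=700)
X = np.concatenate([X1, X2])
alpha1, alpha2 = len(X1) / len(X), len(X2) / len(X)
assert gini(X) >= alpha1 * gini(X1) + alpha2 * gini(X2) - 1e-12
```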

5.
This paper considers some problems associated with estimation and inference in the normal linear regression model
$$y_t = \sum_{j=1}^{m_0} \beta_j x_{tj} + \epsilon_t, \qquad \mathrm{var}(\epsilon_t) = \sigma^2,$$
when $m_0$ is unknown. The regressors are taken to be stochastic and assumed to satisfy V. Grenander's (1954) conditions almost surely. It is further supposed that estimation and inference are undertaken in the usual way, conditional on a value of $m_0$ chosen to minimize, with respect to $m$, the estimation criterion function
$$\hat\sigma^2_m + m\,g(T),$$
where $\hat\sigma^2_m$ is the maximum likelihood estimate of $\sigma^2$. It is shown that, subject to weak side conditions, if $g(T) \xrightarrow{a.s.} 0$ and $T g(T) \xrightarrow{a.s.} \infty$, then this estimate is weakly consistent. It follows that estimates conditional on the chosen value of $m_0$ are asymptotically efficient, and inference undertaken in the usual way is justified in large samples. When $g(T)$ converges to a positive constant with probability one, then in large samples $m_0$ will never be chosen too small, but the probability of choosing $m_0$ too large remains positive.

The results of the paper are stronger than similar ones [R. Shibata (1976), R.J. Bhansali and D.Y. Downham (1977)] in that a known upper bound on $m_0$ is not assumed. The strengthening is made possible by the assumptions of strictly exogenous regressors and normally distributed disturbances. The main results are used to show that if the model selection criteria of H. Akaike (1974), T. Amemiya (1980), C.L. Mallows (1973) or E. Parzen (1979) are used to choose $m_0$ in (1), then in the limit the probability of choosing $m_0$ too large is at least 0.2883. The approach taken by G. Schwarz (1978) leads to a consistent estimator of $m_0$, however. These results are illustrated in a small sampling experiment.
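The role of the penalty $g(T)$ can be seen in a small simulation sketch. Everything below (the nested design, the true order $m_0 = 2$, and the "AIC-like" and "BIC-like" penalty constants) is an assumption for illustration; the BIC-like choice satisfies $Tg(T) \to \infty$ and so selects consistently, while the AIC-like choice keeps a positive overfitting probability.

```python
# Order selection by minimizing sigma_hat^2_m + m * g(T) for two penalties.
import numpy as np

rng = np.random.default_rng(4)
T, m_true, m_max = 400, 2, 6
X = rng.normal(size=(T, m_max))
y = X[:, :m_true] @ np.array([1.0, -0.5]) + rng.normal(size=T)

def sigma2_hat(m):
    beta, *_ = np.linalg.lstsq(X[:, :m], y, rcond=None)
    r = y - X[:, :m] @ beta
    return r @ r / T

for name, g in [("AIC-like", 2 / T), ("BIC-like", np.log(T) / T)]:
    m_hat = min(range(1, m_max + 1), key=lambda m: sigma2_hat(m) + m * g)
    print(name, "selects m =", m_hat)
```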

6.
We consider an N-period planning horizon with known demand $D_t$, ordering cost $A_t$, procurement cost $C_t$ and holding cost $H_t$ in period $t$. The dynamic lot-sizing problem is one of scheduling procurement $Q_t$ in each period in order to meet demand and minimize cost.

The Wagner-Whitin algorithm for dynamic lot sizing has often been misunderstood as requiring inordinate computational time and storage. We present an efficient computer implementation of the algorithm which requires low core storage, thus enabling it to be potentially useful on microcomputers. The recursive computations can be stated as follows:
$$M_{jk} = A_j + C_j Q_j + \sum_{t=j}^{k-1} H_t \sum_{r=t+1}^{k} D_r,$$
$$F_k = \min_{1 \le j \le k}\,\big[F_{j-1} + M_{jk}\big], \qquad F_0 = 0,$$
where $M_{jk}$ is the cost incurred by procuring in period $j$ for all periods $j$ through $k$, and $F_k$ is the minimal cost for periods 1 through $k$. Our implementation relies on the following observations regarding these computations:
$$M_{jj} = A_j + C_j D_j,$$
$$M_{j,k+1} = M_{jk} + D_{k+1}\Big(C_j + \sum_{t=j}^{k} H_t\Big).$$
Using this recursive relationship, the number of computations can be greatly reduced. Specifically, $\tfrac{3}{2}N^2 - \tfrac{1}{2}N$ additions and $\tfrac{1}{2}N^2 + \tfrac{1}{2}N$ multiplications are required. This is insensitive to the data.

A FORTRAN implementation on an Amdahl 470 yielded computation times (in $10^{-3}$ seconds) of $T = -0.249 + 0.0239N + 0.00446N^2$. Problems with $N = 500$ were solved in under two seconds.

7.
In the usual linear model $y = X\beta + u$, the error vector $u$ is not observable, and the vector $r$ of least-squares residuals has a singular covariance matrix that depends on the design matrix $X$. We approximate $u$ by a vector $r_1 = G(JA'y + Kz)$ of uncorrelated 'residuals', where $G$ and $(J, K)$ are orthogonal matrices, $A'X = 0$ and $A'A = I$, while $z$ is either 0 or a random vector uncorrelated with $u$ satisfying $E(z) = E(J'u) = 0$, $V(z) = V(J'u) = \sigma^2 I$. We prove that $r_1 - r$ is uncorrelated with $r - u$, for any such $r_1$, extending the results of Neudecker (1969). Building on results of Hildreth (1971) and Tiao and Guttman (1967), we show that the BAUS residual vector $r_h = r + P_1 z$, where $P_1$ is an orthonormal basis for $X$, minimizes each characteristic root of $V(r_1 - u)$, while the vector $r_b$ of Theil's BLUS residuals minimizes each characteristic root of $V(Jr_b - u)$, cf. Grossman and Styan (1972). We find that $\mathrm{tr}\,V(r_h - u) < \mathrm{tr}\,V(Jr_b - u)$ if and only if the average of the singular values of $P_1'K$ is less than $\tfrac{1}{2}$, and give examples to show that BAUS is often better than BLUS in this sense.

8.
The paper rationalizes certain functional forms for index numbers with functional forms for the underlying aggregator function. An aggregator functional form is said to be 'flexible' if it can provide a second-order approximation to an arbitrary twice differentiable linearly homogeneous function. An index number functional form is said to be 'superlative' if it is exact for (i.e., consistent with) a 'flexible' aggregator functional form. The paper shows that a certain family of index number formulae is exact for the 'flexible' quadratic mean of order $r$ aggregator function, $\big(\sum_i \sum_j a_{ij} x_i^{r/2} x_j^{r/2}\big)^{1/r}$, defined by Denny and others. For $r = 2$, the resulting quantity index is Irving Fisher's ideal index. The paper also utilizes the Malmquist quantity index in order to rationalize the Törnqvist-Theil quantity index in the nonhomothetic case. Finally, the paper attempts to justify the Jorgenson-Griliches productivity measurement technique for the case of discrete (as opposed to continuous) data.
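For the $r = 2$ case mentioned above, the Fisher ideal quantity index is the geometric mean of the Laspeyres and Paasche indexes. A short sketch follows; the price and quantity vectors are made-up assumptions.

```python
# Fisher ideal quantity index from base (0) and current (1) period data.
import numpy as np

p0 = np.array([1.0, 2.0, 3.0]); q0 = np.array([10.0, 5.0, 2.0])   # base period
p1 = np.array([1.2, 1.8, 3.5]); q1 = np.array([9.0, 6.0, 3.0])    # current period

laspeyres = (p0 @ q1) / (p0 @ q0)    # quantities valued at base-period prices
paasche   = (p1 @ q1) / (p1 @ q0)    # quantities valued at current-period prices
fisher    = np.sqrt(laspeyres * paasche)
```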

9.
This paper is concerned with the estimation of the model $\mathrm{MED}(y\,|\,x) = x\beta$ from a random sample of observations on $(\mathrm{sgn}\,y,\ x)$. Manski (1975) introduced the maximum score estimator of the normalized parameter vector $\beta_1 = \beta/\|\beta\|$. In the present paper, strong consistency is proved. It is also proved that the maximum score estimate lies outside any fixed neighborhood of $\beta_1$ with probability that goes to zero at exponential rate.
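The maximum score idea is easy to sketch: choose the unit-norm direction that maximizes agreement between $\mathrm{sgn}(x'b)$ and $\mathrm{sgn}\,y$. The grid search over angles (two regressors only), the Cauchy errors, and all constants below are illustrative assumptions, not the paper's algorithm.

```python
# Crude maximum score estimation of a unit-norm direction in two dimensions.
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=(n, 2))
beta_true = np.array([0.8, 0.6])                       # unit norm by construction
y_sign = np.sign(x @ beta_true + 0.3 * rng.standard_cauchy(n))  # median-zero error

def score(b):
    return np.mean(np.sign(x @ b) == y_sign)           # fraction of matched signs

angles = np.linspace(0.0, 2.0 * np.pi, 2000)
cands = np.column_stack([np.cos(angles), np.sin(angles)])
b_hat = cands[np.argmax([score(b) for b in cands])]
```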

10.
This paper investigates a theoretical relationship between the rank-size rule and city size distributions. First, a method of relating a certain city size distribution to ranked city size is formulated by employing order statistics. Second, it is shown that there do not exist city size distributions which satisfy the rank-size rule. Third, an alternative rank-size rule is proposed as $E(P_r)\,\Gamma(r)/\Gamma(r - \gamma) = c$, which is equivalent to the Pareto city size distribution. Last, an alternative statistical test for the rank-size rule is proposed to overcome a shortcoming of the conventional test. Along this line, data for the Hokkaido region are analyzed.
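The empirical regularity at issue can be sketched quickly: for Pareto-distributed sizes, log size is roughly linear in log rank. The simulation below uses the conventional log-log regression (the very test whose shortcoming the paper addresses); the sample size and exponent are assumptions.

```python
# Conventional rank-size check on simulated Pareto city sizes.
import numpy as np

rng = np.random.default_rng(6)
sizes = np.sort(rng.pareto(1.0, size=500) + 1.0)[::-1]    # classical Pareto, exponent ~1
ranks = np.arange(1, len(sizes) + 1)
slope = np.polyfit(np.log(ranks), np.log(sizes), 1)[0]    # near -1 under the rule
print(f"log-log slope: {slope:.2f}")
```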

11.
Insufficient attention has been focused on the ubiquitous problem of excess inventory levels. This paper develops two models of different complexity for determining whether stock levels are economically unjustifiable and, if so, for determining inventory retention levels. The models indicate how much stock should be retained for regular use and how much should be disposed of at a salvage price for a given item.

The first model illustrates the basic logic of this approach. The net benefit realized by disposing of some quantity of excess inventory is depicted by the following equation:

NET BENEFIT = SALVAGE REVENUE + HOLDING COST SAVINGS − REPURCHASE COSTS − REORDER COSTS.

This relationship can be depicted mathematically as a parabolic function of the time supply retained. Using conventional optimization, the following optimum solution is obtained:
$$t_0 = \frac{P - P_s + C/Q}{PF} + \frac{Q}{2R},$$
where $t_0$ is the optimum time supply retained; $P$ is the per-unit purchase price; $P_s$ is the per-unit salvage price; $C$ is the ordering cost; $Q$ is the usual item lot size; $F$ is the holding cost fraction; and $R$ is the annual demand for the item. Any items in excess of the optimum time supply should be sold at the salvage price.

The second model adjusts holding cost savings, repurchase costs, and reorder costs to account for present value considerations and for inflation. The following optimum relationship is derived:
$$\frac{PFR}{2k} - \frac{PFtR}{2}e^{-kt} + \left[\frac{PFQ}{2} + PQ(i-k) + \frac{C(i-k)}{e^{(i-k)Q/R} - 1}\right]e^{(i-k)t} - P_s R - \frac{PFR}{2k} = 0,$$
where $i$ is the expected inflation rate and $k$ is the required rate of return. Unfortunately this relationship cannot be solved analytically for $t_0$; Newton's method can be used to find a numerical solution.

The solutions obtained by the two models are compared. Not surprisingly, the present value correction tends to reduce the economic time supply to be retained, since repurchase costs and reorder costs are incurred in the future while salvage revenue and holding cost savings are realized immediately. Additionally, both models are used to derive a relationship describing the minimum economic salvage value, the minimum salvage price for which excess inventory should be sold.

The simple model, which does not correct for time values, can be used by any organization sophisticated enough to use an EOQ. The present value model, which includes an inflation correction, is more complex but can readily be used on a microcomputer. These models are appropriate for independent-demand items. It is believed that these models can reduce inventory investment and improve bottom-line performance.
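Taking the simple-model formula as reconstructed above at face value, the retention decision is a one-line computation. The function name and all parameter values below are assumptions for illustration.

```python
# Back-of-envelope retention time supply from the reconstructed simple model:
# t0 = (P - Ps + C/Q) / (P*F) + Q / (2*R), in years.
def retention_time_supply(P, Ps, C, Q, F, R):
    """Optimal time supply of excess stock to retain for regular use."""
    return (P - Ps + C / Q) / (P * F) + Q / (2.0 * R)

t0 = retention_time_supply(P=10.0, Ps=4.0, C=25.0, Q=200.0, F=0.25, R=1200.0)
on_hand_years = 2.0                          # current time supply, say
dispose_years = max(0.0, on_hand_years - t0) # time supply to sell at salvage
```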

12.
13.
This paper provides explicit estimates of the eigenvalues of the covariance matrix of an autoregressive process of order one. Explicit error bounds are also established in closed form. Typically, such an error bound is given by $\varepsilon_k = (4(n+1))^{1/2}\,\rho^2 \sin\!\big(\frac{k\pi}{n+1}\big)$, so that the approximations improve as the size of the matrix increases. In other words, the accuracy of the approximations increases as direct computations become more costly.
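The object studied here is easy to construct and examine directly for modest $n$. A quick numerical sketch follows; the values of $\rho$, $\sigma^2 = 1$ and $n$ are assumptions.

```python
# Exact eigenvalues of the AR(1) covariance (Toeplitz) matrix for modest n.
import numpy as np
from scipy.linalg import toeplitz

rho, n = 0.6, 50
Sigma = toeplitz(rho ** np.arange(n)) / (1 - rho ** 2)   # AR(1) autocovariances
eigs = np.linalg.eigvalsh(Sigma)                          # exact eigenvalues
print(eigs.min(), eigs.max())
```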

14.
15.
16.
Experience with twenty-one actual economic series suggests that using the Box-Cox transform does not consistently produce superior forecasts. The procedure used was to consider transformations $x^{(\lambda)} = (x^\lambda - 1)/\lambda$, where $\lambda$ is chosen by maximum likelihood, fit a linear ARIMA model to $x^{(\lambda)}$ and produce forecasts, and finally construct forecasts for the original series. A main problem found was that no value of $\lambda$ appeared to produce normally distributed data, so the maximum likelihood procedure was inappropriate.
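The first two steps of this procedure, estimating $\lambda$ by maximum likelihood and checking whether the transformed data look normal, can be sketched with scipy's Box-Cox utilities. The made-up lognormal series below is an assumption, not one of the twenty-one series studied.

```python
# Box-Cox transform with MLE lambda, then a normality check.
import numpy as np
from scipy import stats

x = np.random.default_rng(7).lognormal(0.0, 0.7, size=200)  # positive series
x_lam, lam = stats.boxcox(x)            # x^(lambda) = (x**lam - 1)/lam, MLE lambda
stat, pval = stats.shapiro(x_lam)       # does any lambda really yield normality?
print(f"lambda = {lam:.2f}, Shapiro-Wilk p = {pval:.3f}")
```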

17.
18.
This paper establishes the relatively weak conditions under which causal inferences from a regression-discontinuity (RD) analysis can be as credible as those from a randomized experiment, and hence under which the validity of the RD design can be tested by examining whether or not there is a discontinuity in any pre-determined (or "baseline") variables at the RD threshold. Specifically, consider a standard treatment evaluation problem in which treatment is assigned to an individual if and only if $V > v_0$, where $v_0$ is a known threshold and $V$ is observable. $V$ can depend on the individual's characteristics and choices, but there is also a random chance element: for each individual, there exists a well-defined probability distribution for $V$. The density function, allowed to differ arbitrarily across the population, is assumed to be continuous. It is formally established that treatment status here is as good as randomized in a local neighborhood of $V = v_0$. These ideas are illustrated in an analysis of U.S. House elections, where the inherent uncertainty in the final vote count is plausible, which would imply that the party that wins is essentially randomized among elections decided by a narrow margin. The evidence is consistent with this prediction, which is then used to generate "near-experimental" causal estimates of the electoral advantage to incumbency.
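The testable implication described above, that baseline covariates should be balanced just either side of the threshold, admits a simple sketch. The simulated data, window width, and two-sample t-test below are illustrative assumptions, not the paper's empirical design.

```python
# Covariate balance check in a narrow window around the RD threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, v0, h = 5000, 0.0, 0.05
V = rng.normal(size=n)                      # running variable (e.g., vote margin)
baseline = 0.5 * V + rng.normal(size=n)     # pre-determined covariate
near = np.abs(V - v0) < h
left = baseline[near & (V <= v0)]
right = baseline[near & (V > v0)]
t, p = stats.ttest_ind(left, right, equal_var=False)
print(f"balance test near threshold: p = {p:.3f}")   # should not reject
```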

19.
20.
A simple method of obtaining asymptotic expansions for the densities of sufficient estimators is described. It is an extension of the one developed by O. Barndorff-Nielsen and D.R. Cox (1979) for exponential families. A series expansion in powers of $n^{-1}$ is derived, of which the first term has an error of order $n^{-1}$ that can effectively be reduced to $n^{-3/2}$ by renormalization. The results obtained are similar to those given by H.E. Daniels's (1954) saddlepoint method but the derivations are simpler. A brief treatment of approximations to conditional densities is given. Theorems are proved which extend the validity of the multivariate Edgeworth expansion to parametric families of densities of statistics which need not be standardized sums of independent and identically distributed vectors. These extensions permit the treatment of problems arising in time series analysis. The technique is used by J. Durbin (1980) to obtain approximations to the densities of partial serial correlation coefficients.
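The flavor of the saddlepoint approximations this paper extends can be shown in a classic textbook case: the density of the mean of $n$ unit exponentials, where the exact answer is a gamma density. The exponential example and constants are illustrative assumptions, not the paper's setting.

```python
# Saddlepoint approximation to the density of the mean of n Exp(1) variables.
import numpy as np
from scipy import stats

def saddlepoint_exp_mean(xbar, n):
    # K(t) = -log(1 - t); solving K'(t) = xbar gives t = 1 - 1/xbar,
    # and K''(t) = xbar^2, so
    # f(xbar) ~ sqrt(n / (2*pi*K''(t))) * exp(n * (K(t) - t*xbar)).
    t = 1.0 - 1.0 / xbar
    return np.sqrt(n / (2 * np.pi * xbar**2)) * np.exp(n * (np.log(xbar) - t * xbar))

n, xbar = 10, 1.3
exact = stats.gamma(a=n, scale=1.0 / n).pdf(xbar)   # mean of n Exp(1) ~ Gamma(n, 1/n)
approx = saddlepoint_exp_mean(xbar, n)
print(f"exact {exact:.4f}  saddlepoint {approx:.4f}")  # close even for small n
```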
