1.
This paper considers some problems associated with estimation and inference in the normal linear regression model when the number of regressors, m, is unknown. The regressors are taken to be stochastic and assumed to satisfy V. Grenander's (1954) conditions almost surely. It is further supposed that estimation and inference are undertaken in the usual way, conditional on a value of m chosen to minimize the estimation criterion function with respect to m, where σ̂²m is the maximum likelihood estimate of σ². It is shown that, subject to weak side conditions, this estimate of m is weakly consistent. It follows that estimates conditional on the chosen value of m are asymptotically efficient, and inference undertaken in the usual way is justified in large samples. When the criterion's penalty term converges to a positive constant with probability one, then in large samples m will never be chosen too small, but the probability of choosing m too large remains positive. The results of the paper are stronger than similar ones [R. Shibata (1976), R.J. Bhansali and D.Y. Downham (1977)] in that a known upper bound on m is not assumed. The strengthening is made possible by the assumptions of strictly exogenous regressors and normally distributed disturbances. The main results are used to show that if the model selection criteria of H. Akaike (1974), T. Amemiya (1980), C.L. Mallows (1973) or E. Parzen (1979) are used to choose m, then in the limit the probability of choosing m too large is at least 0.2883. The approach taken by G. Schwarz (1978) leads to a consistent estimator of m, however. These results are illustrated in a small sampling experiment.
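The contrast drawn above, between fixed-penalty criteria (AIC-type, which over-select with positive probability) and the Schwarz criterion (consistent), can be illustrated with a small simulation. The data-generating process, coefficient values, and the nested-model search below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_order(y, X, penalty):
    # Choose m minimizing log(sigma2_m) + penalty * m / T over nested models,
    # where sigma2_m is the ML error-variance estimate using the first m columns.
    T = len(y)
    best_m, best_crit = 0, np.inf
    for m in range(1, X.shape[1] + 1):
        Xm = X[:, :m]
        beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        resid = y - Xm @ beta
        sigma2 = resid @ resid / T
        crit = np.log(sigma2) + penalty * m / T
        if crit < best_crit:
            best_m, best_crit = m, crit
    return best_m

T = 2000
X = rng.standard_normal((T, 6))
y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + rng.standard_normal(T)  # true m = 3

m_aic = select_order(y, X, penalty=2.0)        # fixed penalty: may over-select
m_bic = select_order(y, X, penalty=np.log(T))  # Schwarz-type growing penalty: consistent
```

Across repeated draws, the fixed-penalty choice exceeds the true order with non-vanishing probability, while the Schwarz-type choice settles on it.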
2.
James R. Evans, Journal of Operations Management, 1985, 5(2): 229-235
We consider an N-period planning horizon with known demand Dt, ordering cost At, procurement cost Ct, and holding cost Ht in period t. The dynamic lot-sizing problem is one of scheduling procurement Qt in each period in order to meet demand and minimize cost. The Wagner-Whitin algorithm for dynamic lot sizing has often been misunderstood as requiring inordinate computational time and storage. We present an efficient computer implementation of the algorithm which requires little core storage, thus enabling it to be potentially useful on microcomputers. The recursive computations can be stated as F_k = min over 1 ≤ j ≤ k of [F_{j-1} + M_{jk}], with F_0 = 0, where M_{jk} is the cost incurred by procuring in period j for all periods j through k, and F_k is the minimal cost for periods 1 through k. Our implementation relies on the observation that M_{jk} can be computed incrementally from M_{j,k-1}. Using this recursive relationship, the number of computations can be greatly reduced: specifically, (N² - N)/2 additions and (N² + N)/2 multiplications are required. This is insensitive to the data. A FORTRAN implementation on an Amdahl 470 yielded computation times (in 10⁻³ seconds) of T = -.249 + .0239N + .00446N². Problems with N = 500 were solved in under two seconds.
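A compact O(N²) rendering of this recursion, with M_{jk} built incrementally as the coverage horizon k is extended, might look like the sketch below (the paper's FORTRAN storage layout is not reproduced; argument names follow the abstract):

```python
def wagner_whitin(D, A, C, H):
    # Dynamic lot sizing: D[t] demand, A[t] ordering cost, C[t] unit
    # procurement cost, H[t] per-unit holding cost in period t (0-indexed).
    # Returns F[N], the minimal cost for periods 1..N.
    N = len(D)
    INF = float("inf")
    F = [0.0] + [INF] * N          # F[k] = minimal cost for periods 1..k
    for j in range(1, N + 1):      # candidate procurement period
        M = A[j - 1]               # M_{jk}: cost of ordering in j to cover j..k
        carry = 0.0                # accumulated holding rate H_j + ... + H_{k-1}
        for k in range(j, N + 1):
            # Extend coverage to period k: units for D_k are bought at C_j
            # and held through periods j..k-1.
            M += (C[j - 1] + carry) * D[k - 1]
            if F[j - 1] + M < F[k]:
                F[k] = F[j - 1] + M
            carry += H[k - 1]
    return F[N]
```

For example, with three periods of demand 10, 20, 30, a 100-per-order setup cost, and unit procurement and holding costs, a single order in period 1 is optimal.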
3.
Insufficient attention has been focused on the ubiquitous problem of excess inventory levels. This paper develops two models of differing complexity for determining whether stock levels are economically unjustifiable and, if so, for determining inventory retention levels. The models indicate how much stock should be retained for regular use and how much should be disposed of at a salvage price for a given item. The first model illustrates the basic logic of this approach. The net benefit realized by disposing of some quantity of excess inventory is depicted by the following equation:

NET BENEFIT = SALVAGE REVENUE + HOLDING COST SAVINGS - REPURCHASE COSTS - REORDER COSTS

This relationship can be depicted mathematically as a parabolic function of the time supply retained. Using conventional optimization, an optimum solution for t0 is obtained, where t0 is the optimum time supply retained; P is the per-unit purchase price; Ps is the per-unit salvage price; C is the ordering cost; Q is the usual item lot size; F is the holding cost fraction; and R is the annual demand for the item. Any items in excess of the optimum time supply should be sold at the salvage price. The second model adjusts holding cost savings, repurchase costs, and reorder costs to account for present value considerations and for inflation. An optimum relationship is derived in which i is the expected inflation rate and k is the required rate of return. Unfortunately, this relationship cannot be solved analytically for t0; Newton's Method can be used to find a numerical solution. The solutions obtained by the two models are compared. Not surprisingly, the present value correction tends to reduce the economic time supply to be retained, since repurchase costs and reorder costs are incurred in the future while salvage revenue and holding cost savings are realized immediately. Additionally, both models are used to derive a relationship describing the minimum economic salvage value, which is the minimum salvage price for which excess inventory should be sold. The simple model, which does not correct for time values, can be used by any organization sophisticated enough to use an EOQ. The present value model, which includes an inflation correction, is more complex but can readily be used on a microcomputer. These models are appropriate for independent demand items. It is believed that these models can reduce inventory investment and improve bottom-line performance.
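The abstract notes that the present-value model's optimality condition cannot be solved analytically for t0 and suggests Newton's Method. A generic Newton iteration of the kind that would be applied is sketched below; the paper's actual first-order condition is not reproduced here, so the function solved in the example is only a placeholder:

```python
def newton(f, fprime, t0, tol=1e-10, max_iter=100):
    # Generic Newton iteration: t <- t - f(t)/f'(t), stopping when the
    # step size falls below tol.
    t = t0
    for _ in range(max_iter):
        step = f(t) / fprime(t)
        t -= step
        if abs(step) < tol:
            return t
    raise RuntimeError("Newton's method did not converge")

# Placeholder condition f(t) = t^2 - 2 (root sqrt(2)); the model's actual
# first-order condition in t0 would be substituted here.
root = newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.0)
```

In practice the parabolic shape of the net-benefit function makes the first-order condition well behaved, so a starting value near the simple model's t0 converges quickly.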
4.
Edward E. Leamer, Journal of Econometrics, 1975, 3(4): 387-390
When a variable is dropped from a least-squares regression equation, there can be no change in sign of any coefficient that is more significant than the coefficient of the omitted variable. More generally, a constrained least squares estimate of a parameter βj must lie in the interval β̂j ± |t|·σ̂j, where β̂j is the unconstrained estimate of βj, σ̂j is the standard error of β̂j, and t is the t-value for testing the (univariate) restriction.
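This interval can be checked numerically for the omitted-variable case. The simulated design and coefficient values below are illustrative assumptions; only the containment property itself comes from the result above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.5, 0.1]) + rng.standard_normal(n)

# Unconstrained OLS with standard errors
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
s2 = resid @ resid / (n - p)
se = np.sqrt(s2 * np.diag(XtX_inv))

k = 2                          # impose the restriction beta_k = 0
t_k = beta[k] / se[k]          # t-value of the restriction

# Constrained OLS: drop column k
Xr = np.delete(X, k, axis=1)
beta_r = np.linalg.lstsq(Xr, y, rcond=None)[0]

# Each remaining coefficient stays within beta_hat_j +/- |t_k| * se_j
j = 0
in_interval = abs(beta_r[j] - beta[j]) <= abs(t_k) * se[j]
```

Since the bound follows algebraically from the normal equations, `in_interval` is true for any data, not just this draw.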
7.
Ishai Oren, Journal of Mathematical Economics, 1981, 8(3): 207-220
For an anonymous exactly strongly consistent social choice function (ESC SCF) and an alternative x, define the blocking size of x to be the size of a minimal blocking coalition for x. By studying the correspondence between the SCF and this blocking-size function, we establish the existence, uniqueness and monotonicity of ESC SCFs. We also prove the following conjecture of B. Peleg: a non-constant anonymous ESC SCF depends on the knowledge of every player's full preference profile.
9.
In the usual linear model y = Xβ + u, the error vector u is not observable, and the vector r of least squares residuals has a singular covariance matrix that depends on the design matrix X. We approximate u by a vector r1 of uncorrelated 'residuals' constructed from a pair of orthogonal matrices and an auxiliary vector z, where z is either 0 or a random vector uncorrelated with u. We prove an uncorrelatedness property that holds for any such r1, extending the results of Neudecker (1969). Building on results of Hildreth (1971) and Tiao and Guttman (1967), we show that the BAUS residual vector (in which P1 is an orthonormal basis for X) minimizes each characteristic root of the covariance matrix of its approximation error, while the vector rb of Theil's BLUS residuals minimizes each characteristic root of the corresponding matrix for BLUS, cf. Grossman and Styan (1972). We characterize when BAUS outperforms BLUS in terms of the average of the singular values of the relevant matrix, and give examples to show that BAUS is often better than BLUS in this sense.
13.
W.E. Diewert, Journal of Econometrics, 1976, 4(2): 115-145
The paper rationalizes certain functional forms for index numbers with functional forms for the underlying aggregator function. An aggregator functional form is said to be 'flexible' if it can provide a second-order approximation to an arbitrary twice differentiable linearly homogeneous function. An index number functional form is said to be 'superlative' if it is exact for (i.e., consistent with) a 'flexible' aggregator functional form. The paper shows that a certain family of index number formulae is exact for the 'flexible' quadratic mean of order r aggregator function, defined by Denny and others. For r = 2, the resulting quantity index is Irving Fisher's ideal index. The paper also utilizes the Malmquist quantity index in order to rationalize the Törnqvist-Theil quantity index in the nonhomothetic case. Finally, the paper attempts to justify the Jorgenson-Griliches productivity measurement technique for the case of discrete (as opposed to continuous) data.
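The r = 2 case mentioned above, Fisher's ideal quantity index, is the geometric mean of the Laspeyres and Paasche quantity indexes and is simple to compute; the sketch below uses the standard textbook formulas, not code from the paper:

```python
def laspeyres_q(p0, p1, q0, q1):
    # Laspeyres quantity index: quantities valued at base-period prices p0.
    return sum(p * q for p, q in zip(p0, q1)) / sum(p * q for p, q in zip(p0, q0))

def paasche_q(p0, p1, q0, q1):
    # Paasche quantity index: quantities valued at current-period prices p1.
    return sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p1, q0))

def fisher_q(p0, p1, q0, q1):
    # Fisher ideal quantity index: geometric mean of the two.
    return (laspeyres_q(p0, p1, q0, q1) * paasche_q(p0, p1, q0, q1)) ** 0.5
```

As a sanity check, if every quantity doubles while prices move arbitrarily, all three indexes equal 2.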
14.
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in an additive model in which the deterministic component belongs to a large class of deterministic regressors and the stochastic component is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression, give a condition for when the additive and extended models are asymptotically equivalent, and provide an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that the associated tests are χ²-distributed.
15.
Don Zagier, Journal of Mathematical Economics, 1983, 12(2): 103-118
Unlike other popular measures of income inequality, the Gini coefficient is not decomposable, i.e., the Gini coefficient of a composite population cannot be computed in terms of the sizes, mean incomes and Gini coefficients of its components. In this paper, upper and lower bounds (best possible for r = 2) for the composite coefficient in terms of these data are given, in which αi is the proportion of the population in component i. One of the tools used, which may be of interest for other applications, is a lower bound for ∫₀^∞ f(x)g(x) dx (a converse to Cauchy's inequality) for monotone decreasing functions f and g.
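The non-decomposability at issue is easy to see numerically: two subpopulations can each have zero inequality while the pooled population does not, so component Gini coefficients alone cannot determine the composite one. The sketch below uses the standard mean-absolute-difference formula, not anything from the paper:

```python
def gini(incomes):
    # Gini coefficient via the mean-absolute-difference formula:
    # G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)
    n = len(incomes)
    mu = sum(incomes) / n
    mad = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return mad / (2 * n * n * mu)

g_a = gini([1, 1])          # component with perfect equality: G = 0
g_b = gini([3, 3])          # another perfectly equal component: G = 0
g_pooled = gini([1, 1, 3, 3])  # pooled population: G = 0.25 > 0
```

Both components have G = 0, yet the pooled coefficient is 0.25, driven entirely by the gap in component means; this is why Zagier's bounds must involve sizes and mean incomes as well.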
18.
R.J. Stroeker, Journal of Econometrics, 1983, 22(3): 269-279
This paper provides explicit estimates of the eigenvalues of the covariance matrix of an autoregressive process of order one. Explicit error bounds are also established in closed form; typically, such a bound shrinks as the size of the matrix grows, so that the approximations improve as direct computations become more costly. In other words, the accuracy of the approximations increases exactly where it is most needed.
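The object being approximated can be inspected directly for moderate sizes. The sketch below builds the AR(1) correlation matrix ρ^|i−j| and checks the classical Szegő-type fact that its eigenvalues lie between the extremes of the AR(1) spectral density; the paper's own closed-form estimates and error bounds are not reproduced here:

```python
import numpy as np

# Correlation matrix of an AR(1) process with parameter rho:
# Sigma[i, j] = rho ** |i - j| (unit-variance normalization)
rho, n = 0.5, 200
idx = np.arange(n)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])

eigvals = np.linalg.eigvalsh(Sigma)

# The eigenvalues lie strictly between the min and max of the AR(1)
# spectral density f(theta) = (1 - rho^2) / (1 - 2*rho*cos(theta) + rho^2),
# i.e., between (1 - rho)/(1 + rho) and (1 + rho)/(1 - rho).
lo = (1 - rho) / (1 + rho)
hi = (1 + rho) / (1 - rho)
```

Direct computation of all n eigenvalues costs O(n³), which is the cost the paper's closed-form estimates avoid.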