20 similar documents found; search time: 46 ms
1.
In the usual linear model y = Xβ + u, the error vector u is not observable and the vector r of least squares residuals has a singular covariance matrix that depends on the design matrix X. We approximate u by a vector of uncorrelated ‘residuals’, where G and are orthogonal matrices, and , while z is either 0 or a random vector uncorrelated with u satisfying , . We prove that is uncorrelated with , for any such r1, extending the results of Neudecker (1969). Building on results of Hildreth (1971) and Tiao and Guttman (1967), we show that the BAUS residual vector , where P1 is an orthonormal basis for X, minimizes each characteristic root of , while the vector rb of Theil's BLUS residuals minimizes each characteristic root of , cf. Grossman and Styan (1972). We find that if and only if the average of the singular values of is less than , and give examples to show that BAUS is often better than BLUS in this sense.
2.
W.E. Diewert 《Journal of econometrics》1976,4(2):115-145
The paper rationalizes certain functional forms for index numbers with functional forms for the underlying aggregator function. An aggregator functional form is said to be ‘flexible’ if it can provide a second-order approximation to an arbitrary twice differentiable linearly homogeneous function. An index number functional form is said to be ‘superlative’ if it is exact for (i.e., consistent with) a ‘flexible’ aggregator functional form. The paper shows that a certain family of index number formulae is exact for the ‘flexible’ quadratic mean of order r aggregator function, defined by Denny and others. For r = 2, the resulting quantity index is Irving Fisher's ideal index. The paper also utilizes the Malmquist quantity index in order to rationalize the Törnqvist-Theil quantity index in the nonhomothetic case. Finally, the paper attempts to justify the Jorgenson-Griliches productivity measurement technique for the case of discrete (as opposed to continuous) data.
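As a concrete illustration of the index formulae discussed here, the following minimal sketch computes the Laspeyres, Paasche, Fisher ideal (the r = 2 superlative case) and Törnqvist-Theil quantity indexes with NumPy. The price and quantity data are purely hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical prices p and quantities q in two periods (0 and 1).
p0 = np.array([1.0, 2.0, 3.0]); q0 = np.array([10.0, 5.0, 2.0])
p1 = np.array([1.2, 1.8, 3.5]); q1 = np.array([9.0, 6.0, 2.5])

# Laspeyres and Paasche quantity indexes.
laspeyres = (p0 @ q1) / (p0 @ q0)
paasche = (p1 @ q1) / (p1 @ q0)

# Fisher ideal quantity index: geometric mean of Laspeyres and Paasche,
# exact for the quadratic mean of order r = 2 aggregator.
fisher = np.sqrt(laspeyres * paasche)

# Tornqvist-Theil quantity index: expenditure-share-weighted geometric mean.
s0 = p0 * q0 / (p0 @ q0)
s1 = p1 * q1 / (p1 @ q1)
tornqvist = np.exp(0.5 * (s0 + s1) @ np.log(q1 / q0))

print(f"Laspeyres {laspeyres:.4f}  Paasche {paasche:.4f}")
print(f"Fisher ideal {fisher:.4f}  Tornqvist-Theil {tornqvist:.4f}")
```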
3.
Ishai Oren 《Journal of Mathematical Economics》1981,8(3):207-220
For an anonymous exactly strongly consistent social choice function (ESC SCF) and an alternative x, define the size of a minimal blocking coalition for x. By studying the correspondence between the SCF and these minimal blocking-coalition sizes, we establish the existence, uniqueness and monotonicity of ESC SCFs. We also prove the following conjecture of B. Peleg: a non-constant anonymous ESC SCF depends on the knowledge of every player's full preference profile.
4.
This paper is concerned with the discrete time stochastic volatility model Yi=exp(Xi/2)ηi, Xi+1=b(Xi)+σ(Xi)ξi+1, where only (Yi) is observed. The model is rewritten as a particular hidden model: Zi=Xi+εi, Xi+1=b(Xi)+σ(Xi)ξi+1, where (ξi) and (εi) are independent sequences of i.i.d. noise. Moreover, the sequences (Xi) and (εi) are independent and the distribution of ε is known. Our aim is then to estimate the functions b and σ2 when only the observations Z1,…,Zn are available. We propose to estimate bf and (b2+σ2)f and study the integrated mean square error of projection estimators of these functions on automatically selected projection spaces. Estimators of b and σ2 are then deduced by a ratio strategy. The mean square risks of the resulting estimators are studied and their rates are discussed. Lastly, simulation experiments are provided: the constants in the penalty functions defining the estimators are calibrated and the quality of the estimators is checked on several examples.
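A minimal simulation sketch of the model and of the hidden-Markov rewriting used above; the drift b and volatility σ chosen here are purely illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative choices (assumptions of this sketch): mean-reverting drift, bounded volatility.
b = lambda x: 0.8 * x
sigma = lambda x: np.sqrt(0.25 + 0.05 * x**2)

X = np.empty(n)
X[0] = 0.0
xi = rng.standard_normal(n)    # state noise (xi_i)
eta = rng.standard_normal(n)   # observation noise (eta_i)
for i in range(n - 1):
    X[i + 1] = b(X[i]) + sigma(X[i]) * xi[i + 1]

Y = np.exp(X / 2) * eta        # only (Y_i) would be observed in practice

# Rewriting as a hidden model: Z_i = log(Y_i^2) = X_i + eps_i with eps_i = log(eta_i^2),
# whose distribution (a log chi-square with one degree of freedom here) is known.
Z = np.log(Y**2)
```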
5.
6.
J.D. Sargan 《Journal of econometrics》1981,16(1):160-161
Consider the model , where is a matrix of polynomials in the lag operator so that , and is a vector of n endogenous variables, , and the remaining are square matrices, , and is . Suppose that satisfies , where , , and is a square matrix; it may be white noise, or generated by a vector moving average stochastic process. Now, writing , it is assumed that, ignoring the implicit restrictions which follow from eq. (1), can be consistently estimated, so that if the equation has a moving average error stochastic process, suitable conditions [see E.J. Hannan] for the identification of the unconstrained model are satisfied, and that the appropriate conditions (lack of multicollinearity) on the data second moment matrices discussed by Hannan are also satisfied. Then the essential conditions for identification of A(L) and R(L) can be considered by requiring that for the true eq. (1) has a unique solution for and . There are three types of lack of identification to be distinguished. In the first, there are a finite number of alternative factorisations. Apart from a factorisation condition which will be satisfied with probability one, a necessary and sufficient condition for lack of identification is that has a latent root λ in the sense that for some non-zero vector β, . The second concept of lack of identification corresponds to the Fisher conditions for local identifiability on the derivatives of the constraints. It is shown that a necessary and sufficient condition for the model to be locally unidentified in this sense is that R(L) and A(L) have a common latent root, i.e., that for some vectors δ and β, . Finally, it is shown that only if further conditions are satisfied will this lead to local unidentifiability in the sense that there are solutions of the equation in any neighbourhood of the true values.
7.
Edward E. Leamer 《Journal of econometrics》1975,3(4):387-390
When a variable is dropped from a least-squares regression equation, there can be no change in sign of any coefficient that is more significant than the coefficient of the omitted variable. More generally, a constrained least squares estimate of a parameter βj must lie in the interval [β̂j − t·s(β̂j), β̂j + t·s(β̂j)], where β̂j is the unconstrained estimate of βj, s(β̂j) is the standard error of β̂j, and t is the t-value for testing the (univariate) restriction.
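A small numerical check of the stated interval; the design, coefficients and sample size are illustrative only. It drops the last regressor and verifies that every remaining coefficient moves by no more than |t|·s(β̂j), where t is the t-value of the dropped variable in the unconstrained fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3
X = rng.standard_normal((n, k))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.standard_normal(n)

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

beta_u, se_u = ols(X, y)          # unconstrained fit
t_drop = beta_u[-1] / se_u[-1]    # t-value for the restriction that the last coefficient is zero
beta_c, _ = ols(X[:, :-1], y)     # constrained fit: last regressor dropped

for j in range(k - 1):
    shift = abs(beta_c[j] - beta_u[j])
    bound = abs(t_drop) * se_u[j]
    print(f"beta_{j}: |change| = {shift:.4f} <= bound {bound:.4f} : {shift <= bound + 1e-12}")
```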
8.
9.
This paper considers some problems associated with estimation and inference in the normal linear regression model , when is unknown. The regressors are taken to be stochastic and assumed to satisfy V. Grenander's (1954) conditions almost surely. It is further supposed that estimation and inference are undertaken in the usual way, conditional on a value of m chosen to minimize the estimation criterion function with respect to m, where σ̂2m is the maximum likelihood estimate of the disturbance variance. It is shown that, subject to weak side conditions and conditions on the criterion, this estimate is weakly consistent. It follows that estimates conditional on the chosen value of m are asymptotically efficient, and inference undertaken in the usual way is justified in large samples. When the relevant penalty converges to a positive constant with probability one, then in large samples m will never be chosen too small, but the probability of choosing m too large remains positive. The results of the paper are stronger than similar ones [R. Shibata (1976), R.J. Bhansali and D.Y. Downham (1977)] in that a known upper bound on m is not assumed. The strengthening is made possible by the assumptions of strictly exogenous regressors and normally distributed disturbances. The main results are used to show that if the model selection criteria of H. Akaike (1974), T. Amemiya (1980), C.L. Mallows (1973) or E. Parzen (1979) are used to choose m in (1), then in the limit the probability of choosing m too large is at least 0.2883. The approach taken by G. Schwarz (1978) leads to a consistent estimator of m, however. These results are illustrated in a small sampling experiment.
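A small Monte Carlo sketch in the spirit of that sampling experiment (true order, coefficients and sample size are illustrative only), contrasting how often an AIC-type criterion and the Schwarz criterion choose too many regressors in nested normal regressions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m_true, m_max, reps = 200, 2, 6, 1000
over_aic = over_bic = 0

for _ in range(reps):
    X = rng.standard_normal((n, m_max))
    y = X[:, :m_true] @ np.ones(m_true) + rng.standard_normal(n)
    aic, bic = [], []
    for m in range(1, m_max + 1):
        Xm = X[:, :m]
        beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        sig2 = np.mean((y - Xm @ beta) ** 2)          # ML estimate of the error variance
        aic.append(n * np.log(sig2) + 2 * m)          # Akaike (1974)
        bic.append(n * np.log(sig2) + np.log(n) * m)  # Schwarz (1978)
    over_aic += (np.argmin(aic) + 1) > m_true
    over_bic += (np.argmin(bic) + 1) > m_true

print("P(AIC overfits) ~", over_aic / reps)  # stays away from 0 (cf. the 0.2883 lower bound above)
print("P(BIC overfits) ~", over_bic / reps)  # shrinks as n grows, consistent selection
```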
10.
11.
James R Evans 《Journal of Operations Management》1985,5(2):229-235
We consider an N-period planning horizon with known demands Dt, ordering cost At, procurement cost Ct and holding cost Ht in period t. The dynamic lot-sizing problem is one of scheduling procurement Qt in each period in order to meet demand and minimize cost. The Wagner-Whitin algorithm for dynamic lot sizing has often been misunderstood as requiring inordinate computational time and storage requirements. We present an efficient computer implementation of the algorithm which requires low core storage, thus enabling it to be potentially useful on microcomputers. The recursive computations can be stated as Fk = min1≤j≤k (Fj−1 + Mjk), where Mjk is the cost incurred by procuring in period j for all of periods j through k, and Fk is the minimal cost for periods 1 through k. Our implementation relies on the observation that Mjk can itself be updated recursively as k increases; using this recursive relationship, the number of computations can be greatly reduced. Specifically, on the order of N2 additions and N2 + N multiplications are required, and this count is insensitive to the data. A FORTRAN implementation on an Amdahl 470 yielded computation times (in milliseconds) of T = −0.249 + 0.0239N + 0.00446N2. Problems with N = 500 were solved in under two seconds.
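A minimal Python sketch of the forward recursion (not the paper's FORTRAN code). The data are illustrative, and the incremental update of Mjk assumes the convention that Ht is the per-unit cost of carrying inventory from period t into period t+1.

```python
import math

# Illustrative data (not from the paper).
D = [20, 50, 10, 50, 50, 10, 20, 40, 20, 30]   # demand D_t
A = [100] * len(D)                              # ordering cost A_t
C = [3] * len(D)                                # unit procurement cost C_t
H = [1] * len(D)                                # holding cost H_t (carry from t to t+1)
N = len(D)

F = [0.0] + [math.inf] * N   # F[k] = minimal cost for periods 1..k, with F[0] = 0
src = [0] * (N + 1)          # src[k] = last procurement period in an optimal plan for 1..k
M = [0.0] * (N + 1)          # M[j] holds M_{j,k} for the current k
unit = [0.0] * (N + 1)       # unit[j] = C_j + H_j + ... + H_{k-1}

for k in range(1, N + 1):
    # Incremental update M_{j,k} = M_{j,k-1} + (C_j + H_j + ... + H_{k-1}) * D_k:
    # the extra demand D_k bought in period j costs its purchase price plus holding to period k.
    for j in range(1, k):
        unit[j] += H[k - 2]
        M[j] += unit[j] * D[k - 1]
    # New option: procure in period k itself.
    M[k] = A[k - 1] + C[k - 1] * D[k - 1]
    unit[k] = C[k - 1]
    # Forward recursion F_k = min_{1<=j<=k} (F_{j-1} + M_{j,k}).
    for j in range(1, k + 1):
        if F[j - 1] + M[j] < F[k]:
            F[k], src[k] = F[j - 1] + M[j], j

print("minimal total cost:", F[N])
plan, k = [], N
while k > 0:                          # backtrack the optimal procurement schedule
    j = src[k]
    plan.append((j, sum(D[j - 1:k])))
    k = j - 1
print("procure (period, quantity):", plan[::-1])
```

Only the vectors F, M and unit need to be kept in memory, which is the low-storage property stressed in the abstract.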
12.
Experience using twenty-one actual economic series suggests that using the Box-Cox transform does not consistently produce superior forecasts. The procedure used was to consider transformations x(λ) = (x^λ − 1)/λ (with x(0) = log x), where λ is chosen by maximum likelihood, fit a linear ARIMA model to x(λ), produce forecasts, and finally construct forecasts for the original series. A main problem found was that no value of λ appeared to produce normally distributed data, and so the maximum likelihood procedure was inappropriate.
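A sketch of that forecasting pipeline using SciPy's maximum-likelihood choice of λ and statsmodels. The series is simulated and the ARIMA(1,1,1) order is illustrative; in practice the order would be chosen per series.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
# Illustrative strictly positive series standing in for an economic time series.
x = np.cumsum(rng.lognormal(mean=0.01, sigma=0.1, size=300)) + 10.0

# Step 1: choose lambda by maximum likelihood and transform.
x_lam, lam = boxcox(x)

# Step 2: fit a linear ARIMA model to the transformed series (order is illustrative).
res = ARIMA(x_lam, order=(1, 1, 1)).fit()

# Step 3: forecast in the transformed scale, then map back to the original scale.
h = 12
fc_lam = res.forecast(steps=h)
fc = inv_boxcox(fc_lam, lam)   # naive back-transform; ignores retransformation bias
print("lambda:", lam, "first forecasts:", fc[:3])
```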
13.
R.J. Stroeker 《Journal of econometrics》1983,22(3):269-279
This paper provides explicit estimates of the eigenvalues of the covariance matrix of an autoregressive process of order one. Explicit error bounds are also established in closed form. Typically, such an error bound decreases with the size of the matrix, so that the approximations improve as the matrix grows. In other words, the accuracy of the approximations increases as direct computations become more costly.
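A numerical sketch comparing the exact eigenvalues of the AR(1) covariance matrix with a classical spectral-density (Kac-Murdock-Szegő-type) approximation evaluated at equally spaced frequencies. Both the value of ρ and this particular approximation are assumptions of the sketch, not the paper's explicit estimates or bounds.

```python
import numpy as np
from scipy.linalg import toeplitz

rho = 0.6
for n in (10, 50, 200):
    # Covariance matrix of a stationary AR(1) process, normalized to unit variance.
    Sigma = toeplitz(rho ** np.arange(n))
    exact = np.sort(np.linalg.eigvalsh(Sigma))
    # Spectral density f(theta) = (1 - rho^2) / (1 - 2 rho cos(theta) + rho^2)
    # evaluated at theta_j = j*pi/(n+1), an approximation to the eigenvalues.
    theta = np.arange(1, n + 1) * np.pi / (n + 1)
    approx = np.sort((1 - rho**2) / (1 - 2 * rho * np.cos(theta) + rho**2))
    print(n, "max abs error:", np.max(np.abs(exact - approx)))
```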
14.
Arnold Zellner 《Journal of econometrics》1981,16(1):151-152
Much work in econometrics and statistics has been concerned with comparing Bayesian and non-Bayesian estimation results, while much less has involved comparisons of Bayesian and non-Bayesian analyses of hypotheses. Some issues arising in this latter area that are mentioned and discussed in the paper are: (1) Is it meaningful to associate probabilities with hypotheses? (2) What concept of probability is to be employed in analyzing hypotheses? (3) Is a separate theory of testing needed? (4) Must a theory of testing be capable of treating both sharp and non-sharp hypotheses? (5) How is prior information incorporated in testing? (6) Does the use of power functions in practice necessitate the use of prior information? (7) How are significance levels determined when sample sizes are large, and what are the interpretations of P-values and tail areas? (8) How are conflicting results provided by asymptotically equivalent testing procedures to be reconciled? (9) What is the rationale for the ‘5% accept-reject syndrome’ that afflicts econometrics and applied statistics? (10) Does it make sense to test a null hypothesis with no alternative hypothesis present? and (11) How are the results of analyses of hypotheses to be combined with estimation and prediction procedures? Brief discussions of these issues with references to the literature are provided. Since there is much controversy concerning how hypotheses are actually analyzed in applied work, the results of a small survey relating to 22 articles employing empirical data published in leading economic and econometric journals in 1978 are presented. The major results of this survey indicate that there is widespread use of the 1% and 5% levels of significance in non-Bayesian testing, with no systematic relation between choice of significance level and sample size. Also, power considerations are not generally discussed in empirical studies; in fact, there was a discussion of power in only one of the articles surveyed. Further, there was very little formal or informal use of prior information in testing hypotheses, and practically no attention was given to the effects of tests or pre-tests on the properties of subsequent tests or estimation results. These results indicate that there is much room for improvement in applied analyses of hypotheses. Given the findings of the survey of applied studies, it is suggested that Bayesian procedures for analyzing hypotheses may be helpful in improving applied analyses. In this connection, the paper presents a review of some Bayesian procedures and results for analyzing sharp and non-sharp hypotheses with explicit use of prior information. In general, Bayesian procedures have good sampling properties and enable investigators to compute posterior probabilities and posterior odds ratios associated with alternative hypotheses quite readily. The relationships of several posterior odds ratios to usual non-Bayesian testing procedures are clearly demonstrated. Also, a relation between the P-value or tail area and a posterior odds ratio is described in detail in the important case of hypotheses about a mean of a normal distribution. Other examples covered in the paper include posterior odds ratios for the hypotheses that (1) and , where is a regression coefficient, (2) data are drawn from either of two alternative distributions, (3) , and where θ is the mean of a normal distribution, (4) and , where β is a vector of regression coefficients, (5) vs. , where is a vector of regression coefficients and 's value is unrestricted. In several cases, tabulations of odds ratios are provided. Bayesian versions of the Chow test for equality of regression coefficients and of the Goldfeld-Quandt test for equality of disturbance variances are given. Also, an application of Bayesian posterior odds ratios to a regression model selection problem utilizing the Hald data is reported. In summary, the results reported in the paper indicate that operational Bayesian procedures for analyzing many hypotheses encountered in model selection problems are available. These procedures yield posterior odds ratios and posterior probabilities for competing hypotheses. These posterior odds ratios represent the weight of the evidence supporting one model or hypothesis relative to another. Given a loss structure, as is well known, one can choose among hypotheses so as to minimize expected loss. Also, with posterior probabilities available and an estimation or prediction loss function, it is possible to choose a point estimate or prediction that minimizes expected loss by averaging over alternative hypotheses or models. Thus it is seen that the Bayesian approach for analyzing competing models or hypotheses provides a unified framework that is extremely useful in solving a number of model selection problems.
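A minimal sketch of one such calculation for a sharp null about a normal mean with known variance, relating a posterior odds ratio to the usual two-sided P-value. The sample size, observed mean and the normal prior under the alternative are all illustrative assumptions of the sketch.

```python
import numpy as np
from scipy import stats

# Illustrative setting: n observations from N(theta, sigma^2) with sigma known.
sigma, n, theta0 = 1.0, 100, 0.0
ybar = 0.22                      # observed sample mean (illustrative)

# H0: theta = theta0  versus  H1: theta ~ N(theta0, tau^2) (prior under the alternative).
tau = 1.0
se = sigma / np.sqrt(n)

# Bayes factor B01 = marginal density of ybar under H0 divided by that under H1.
b01 = stats.norm.pdf(ybar, theta0, se) / stats.norm.pdf(ybar, theta0, np.sqrt(se**2 + tau**2))

# With prior odds of 1, the posterior odds ratio equals the Bayes factor.
z = (ybar - theta0) / se
pval = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, two-sided P-value = {pval:.4f}, posterior odds K01 = {b01:.3f}")
```

Running this with a large n shows how a P-value near 0.03 can coexist with posterior odds close to 1, the kind of divergence between tail areas and posterior odds discussed in the abstract.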
15.
16.
Nonlinearities in the drift and diffusion coefficients influence temporal dependence in diffusion models. We study this link using three measures of temporal dependence: ρ-mixing, β-mixing and α-mixing. Stationary diffusions that are ρ-mixing have mixing coefficients that decay exponentially to zero. When they fail to be ρ-mixing, they are still β-mixing and α-mixing, but the coefficient decay is slower than exponential. For such processes we find transformations of the Markov states that have finite variances but infinite spectral densities at frequency zero. The resulting spectral densities behave like those of stochastic processes with long memory. Finally, we show how state-dependent Poisson sampling alters the temporal dependence.
17.
This paper introduces the concept of a risk parameter in conditional volatility models of the form εt=σt(θ0)ηt and develops statistical procedures to estimate this parameter. For a given risk measure r, the risk parameter is expressed as a function of the volatility coefficients θ0 and the risk, r(ηt), of the innovation process. A two-step method is proposed to successively estimate these quantities. An alternative one-step approach, relying on a reparameterization of the model and the use of a non-Gaussian QML, is proposed. Asymptotic results are established for smooth risk measures, as well as for the Value-at-Risk (VaR). Asymptotic comparisons of the two approaches for VaR estimation suggest a superiority of the one-step method when the innovations are heavy-tailed. For standard GARCH models, the comparison depends only on characteristics of the innovations distribution, not on the volatility parameters. Monte Carlo experiments and an empirical study illustrate the superiority of the one-step approach for financial series.
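A sketch of the two-step idea for VaR (the paper's one-step reparameterized QML is not shown): estimate the volatility coefficients by Gaussian QML, then estimate the innovation risk from the standardized residuals. It assumes the third-party arch package and a simulated return series; in practice one would use observed financial returns.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
returns = rng.standard_t(df=5, size=2000)   # illustrative heavy-tailed return series

# Step 1: estimate the volatility coefficients theta_0 by Gaussian QML (GARCH(1,1)).
am = arch_model(returns, mean="Zero", vol="Garch", p=1, q=1, dist="normal")
res = am.fit(disp="off")
sigma_t = res.conditional_volatility

# Step 2: estimate the risk of the innovation eta_t from the standardized residuals.
eta_hat = returns / sigma_t
alpha = 0.05
q_eta = np.quantile(eta_hat, alpha)         # empirical alpha-quantile of eta

# Conditional VaR implied for the final in-sample period (loss quoted as a positive number);
# a true one-step-ahead forecast would instead use the model's variance forecast.
var_alpha = -sigma_t[-1] * q_eta
print(f"{alpha:.0%} conditional VaR estimate: {var_alpha:.3f}")
```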
18.
We introduce and study the f0-relevance property of a coherent measure of risk on a positions vector space with vector ordering. We show that it is equivalent to a special no-arbitrage condition on bounded positions spaces. Continuity from below leads to representations of f0-relevant coherent measures of risk based on equivalent functionals in Banach subspaces of the order dual. We define and describe f0-martingales in a lattice, and present a solution to the hedging price problem: the asset price process is an order-convergent f0-martingale. Under the f0-relevance hypothesis, we study the relationship between worst conditional mean and value at risk.
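As a purely empirical illustration of the two quantities compared in that last sentence, the sketch below computes value at risk and a simple tail conditional mean from a sample of position outcomes; the paper's worst conditional mean is defined at the level of the risk measure, not from a single sample, and the data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_t(df=4, size=10_000)   # illustrative position outcomes (profit/loss)
alpha = 0.05

# Value at risk at level alpha (loss quoted as a positive number).
var_alpha = -np.quantile(x, alpha)

# Mean loss beyond VaR: a naive empirical proxy for a worst conditional mean.
tail = x[x <= -var_alpha]
tail_mean = -tail.mean()

# The conditional tail mean is at least as large as VaR by construction.
print(f"VaR_{alpha}: {var_alpha:.3f}   tail conditional mean: {tail_mean:.3f}")
```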
19.
An expected rank-size rule: A theoretical relationship between the rank size rule and city size distributions
Atsuyuki Okabe 《Regional Science and Urban Economics》1979,9(1):21-40
This paper investigates a theoretical relationship between the rank-size rule and city size distributions. First, a method of relating a given city size distribution to ranked city sizes is formulated by employing order statistics. Second, it is shown that there do not exist city size distributions which satisfy the rank-size rule. Third, an alternative rank-size rule is proposed, which is equivalent to the Pareto city size distribution. Last, an alternative statistical test for the rank-size rule is proposed to overcome a shortcoming of the conventional test. Along this line, data for the Hokkaido region are analyzed.
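A simulation sketch (all parameters illustrative, and not the paper's proposed test) relating Pareto-distributed city sizes to their ranks via order statistics: for a Pareto exponent a, the log-log slope of rank against size is approximately −a, with a = 1 corresponding to the classical rank-size rule.

```python
import numpy as np

rng = np.random.default_rng(6)
n, a, smin = 500, 1.0, 10_000   # number of cities, Pareto exponent, minimum size (illustrative)

# Draw city sizes from a Pareto distribution and rank them from largest to smallest.
sizes = smin * (1 + rng.pareto(a, size=n))
ranked = np.sort(sizes)[::-1]
ranks = np.arange(1, n + 1)

# Rank(s) ~ n * (smin / s)^a, so log(rank) is roughly linear in log(size) with slope -a.
slope, intercept = np.polyfit(np.log(ranked), np.log(ranks), 1)
print("estimated log-log slope:", slope)   # close to -a
```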
20.
Charles F. Manski 《Journal of econometrics》1985,27(3):313-333
This paper is concerned with the estimation of the model MED(y|x) = xβ from a random sample of observations on (sgn y, x). Manski (1975) introduced the maximum score estimator of the normalized parameter vector β1. In the present paper, strong consistency is proved. It is also proved that the maximum score estimate lies outside any fixed neighborhood of β1 with probability that goes to zero at an exponential rate.
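A minimal sketch of maximum score estimation for a two-regressor design; the data-generating values are illustrative. The estimator searches over unit-length coefficient vectors for the one that matches the observed signs of y most often.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
beta = np.array([1.0, -0.5])
beta1 = beta / np.linalg.norm(beta)        # normalized target parameter

X = rng.standard_normal((n, 2))
u = rng.standard_t(df=3, size=n)           # median-zero errors; distribution otherwise unrestricted
s = np.sign(X @ beta + u)                  # only sgn(y) and x are observed

# Maximum score: choose b on the unit circle to maximize the number of
# observations whose observed sign agrees with sgn(x'b).
angles = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
B = np.column_stack([np.cos(angles), np.sin(angles)])   # candidate unit vectors
score = (np.sign(X @ B.T) == s[:, None]).sum(axis=0)    # score of each candidate
b_hat = B[np.argmax(score)]

print("true normalized beta:", beta1)
print("maximum score estimate:", b_hat)
```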