Similar Documents (20 results)
1.
Experience using twenty-one actual economic series suggests that the Box-Cox transform does not consistently produce superior forecasts. The procedure used was to consider transformations $x^{(\lambda)} = (x^\lambda - 1)/\lambda$, where $\lambda$ is chosen by maximum likelihood; a linear ARIMA model is fitted to $x^{(\lambda)}$ and forecasts produced, and finally forecasts are constructed for the original series. A main problem found was that no value of $\lambda$ appeared to produce normally distributed data, so the maximum likelihood procedure was inappropriate.
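A rough sketch of the procedure described above, using scipy's Box-Cox utilities and statsmodels' ARIMA (illustrative tooling choices, not the paper's original software):

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox
from statsmodels.tsa.arima.model import ARIMA

def boxcox_arima_forecast(x, order=(1, 0, 0), steps=12):
    x_lam, lam = stats.boxcox(x)            # lambda by maximum likelihood
    fit = ARIMA(x_lam, order=order).fit()   # linear ARIMA on x^(lambda)
    f_lam = fit.forecast(steps=steps)       # forecasts on transformed scale
    # Naive back-transformation to the original scale; this gives a
    # median-type rather than a mean forecast, one reason the transform
    # need not forecast better.
    return inv_boxcox(f_lam, lam), lam

rng = np.random.default_rng(0)
x = np.exp(np.cumsum(rng.normal(0.01, 0.05, size=200)))  # a positive series
forecasts, lam = boxcox_arima_forecast(x)
print(f"estimated lambda = {lam:.3f}")
```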

2.
When a variable is dropped from a least-squares regression equation, there can be no change in sign of any coefficient that is more significant than the coefficient of the omitted variable. More generally, a constrained least-squares estimate of a parameter $\beta_j$ must lie in the interval $(\hat\beta_j - V_{jj}^{1/2}|t|,\ \hat\beta_j + V_{jj}^{1/2}|t|)$, where $\hat\beta_j$ is the unconstrained estimate of $\beta_j$, $V_{jj}^{1/2}$ is the standard error of $\hat\beta_j$, and $t$ is the $t$-value for testing the (univariate) restriction.
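A numerical check of this interval in a simulated OLS design (our own toy data): dropping the last regressor moves each remaining coefficient by at most its standard error times the |t|-value of the omitted variable.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(200, 3)))
y = X @ np.array([1.0, 2.0, -0.5, 0.3]) + rng.normal(size=200)

full = sm.OLS(y, X).fit()
restricted = sm.OLS(y, X[:, :-1]).fit()   # impose beta_3 = 0
t_omitted = abs(full.tvalues[-1])         # |t| of the dropped variable

for j in range(3):
    shift = abs(restricted.params[j] - full.params[j])
    bound = full.bse[j] * t_omitted
    print(f"coef {j}: |shift| = {shift:.4f} <= bound = {bound:.4f}")
```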

3.
Consider the model
$$A(L)x_t = B(L)y_t + C(L)z_t = u_t,\qquad t = 1,\ldots,T, \qquad (1)$$
where $A(L) = (B(L) : C(L))$ is a matrix of polynomials in the lag operator, so that $L^r x_t = x_{t-r}$, and $y_t$ is a vector of $n$ endogenous variables; $B(L) = \sum_{s=0}^{k} B_s L^s$, $B_0 = I_n$, and the remaining $B_s$ are $n \times n$ square matrices; $C(L) = \sum_{s=0}^{k} C_s L^s$, and $C_s$ is $n \times m$. Suppose that $u_t$ satisfies $R(L)u_t = e_t$, where $R(L) = \sum_{s=0}^{r} R_s L^s$, $R_0 = I_n$, and $R_s$ is an $n \times n$ square matrix; $e_t$ may be white noise, or generated by a vector moving average stochastic process. Now writing $\Psi(L) = R(L)A(L)$, it is assumed that, ignoring the implicit restrictions which follow from eq. (1), $\Psi(L)$ can be consistently estimated, so that if the equation $\Psi(L)x_t = e_t$ has a moving average error stochastic process, suitable conditions [see E.J. Hannan] for the identification of the unconstrained model are satisfied, and that the appropriate conditions (lack of multicollinearity) on the data second-moment matrices discussed by Hannan are also satisfied. The essential conditions for identification of $A(L)$ and $R(L)$ can then be considered by requiring that for the true $\Psi(L)$ eq. (1) has a unique solution for $A(L)$ and $R(L)$.

There are three types of lack of identification to be distinguished. In the first there are a finite number of alternative factorisations; apart from a factorisation condition which will be satisfied with probability one, a necessary and sufficient condition for lack of identification is that $A(L)$ has a latent root $\lambda$ in the sense that for some non-zero vector $\beta$, $\beta'A(\lambda) = 0$. The second concept of lack of identification corresponds to the Fisher conditions for local identifiability based on the derivatives of the constraints: it is shown that a necessary and sufficient condition that the model is locally unidentified in this sense is that $R(L)$ and $A(L)$ have a common latent root, i.e., that for some vectors $\delta$ and $\beta$, $R(\lambda)\delta = 0$ and $\beta'A(\lambda) = 0$. For the third, it is shown that only if further conditions are satisfied will this lead to local unidentifiability in the sense that there are solutions of the equation $\Psi(z) = R(z)A(z)$ in any neighbourhood of the true values.
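As an illustration of the latent-root condition, the sketch below (hypothetical first-order coefficient matrices; higher-order polynomials would need a companion linearization) computes the roots $\lambda$ with $\det A(\lambda) = 0$ via a generalized eigenproblem and looks for roots common to $R(L)$:

```python
import numpy as np
from scipy.linalg import eig

def latent_roots(M0, M1):
    """Roots lambda with det(M0 + lambda*M1) = 0, i.e. points where some
    nonzero beta satisfies beta' M(lambda) = 0, via the generalized
    eigenproblem M0 v = lambda (-M1) v."""
    w = eig(M0, -M1, right=False)
    return w[np.isfinite(w)]

A0, A1 = np.eye(2), np.array([[-0.5, 0.2], [0.0, -0.4]])
R0, R1 = np.eye(2), np.array([[-0.3, 0.0], [0.1, -0.6]])

roots_A = latent_roots(A0, A1)
roots_R = latent_roots(R0, R1)
# A common latent root of R(L) and A(L) is the necessary and sufficient
# condition for local unidentifiability quoted above.
common = [z for z in roots_A if np.min(np.abs(roots_R - z)) < 1e-8]
print("latent roots of A(L):", np.round(roots_A, 3))
print("latent roots of R(L):", np.round(roots_R, 3))
print("common roots:", np.round(common, 3) if common else "none")
```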

4.
This paper develops a bootstrap theory for models including autoregressive time series with roots approaching unity as the sample size increases. In particular, we consider processes with roots converging to unity at rates slower than $n^{-1}$, which we call weakly integrated processes. It is established that the bootstrap relying on the estimated autoregressive model is generally consistent for weakly integrated processes: both the sample and bootstrap statistics are shown to yield the same normal asymptotics. Moreover, for asymptotically pivotal statistics of weakly integrated processes, the bootstrap is expected to provide an asymptotic refinement and give better approximations to the finite sample distributions than first-order asymptotic theory. The magnitudes of the potential refinements are shown to be proportional to the rate at which the root of the underlying process converges to unity, and the order of bootstrap refinement can be as large as $o(n^{-1/2+\epsilon})$ for any $\epsilon > 0$. Our theory helps to explain the actual improvements, observed by many practitioners, made by using the bootstrap to analyze models with roots close to unity.
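A minimal sketch of the model-based bootstrap for a weakly integrated AR(1); the root $1 - c/\sqrt{n}$ converges to unity slower than $n^{-1}$ (an illustrative design, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
rho = 1 - 5 / np.sqrt(n)       # weakly integrated root

def fit_ar1(x):
    return (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

def simulate(rho, innov):
    x = np.zeros(innov.size)
    for t in range(1, innov.size):
        x[t] = rho * x[t - 1] + innov[t]
    return x

x = simulate(rho, rng.normal(size=n))
rho_hat = fit_ar1(x)
resid = x[1:] - rho_hat * x[:-1]
resid -= resid.mean()          # centred residuals for resampling

# Resample residuals and regenerate series from the *estimated* model.
boot = [fit_ar1(simulate(rho_hat, rng.choice(resid, size=n, replace=True)))
        for _ in range(999)]
print(f"rho_hat = {rho_hat:.4f}, bootstrap s.e. = {np.std(boot):.4f}")
```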

5.
This paper provides explicit estimates of the eigenvalues of the covariance matrix of an autoregressive process of order one. Explicit error bounds are also established in closed form; typically, such an error bound is given by $\varepsilon_k = \big(4/(n+1)\big)^{1/2}\rho^2 \sin\!\big(k\pi/(n+1)\big)$, so that the approximations improve as the size of the matrix increases. In other words, the accuracy of the approximations increases as direct computations become more costly.
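For a concrete look at the eigenvalues being approximated: the covariance matrix of a stationary AR(1) is, up to scale, the Toeplitz matrix with entries $\rho^{|i-j|}$, and its spectrum fills the interval $[(1-\rho)/(1+\rho),\,(1+\rho)/(1-\rho)]$ as $n$ grows.

```python
import numpy as np
from scipy.linalg import toeplitz

def ar1_cov_eigenvalues(rho, n):
    sigma = toeplitz(rho ** np.arange(n))   # Sigma_{ij} = rho^{|i-j|}
    return np.sort(np.linalg.eigvalsh(sigma))

rho = 0.7
for n in (10, 50, 200):
    lam = ar1_cov_eigenvalues(rho, n)
    print(f"n={n:4d}: min={lam[0]:.4f}, max={lam[-1]:.4f}")
print("limits:", (1 - rho) / (1 + rho), (1 + rho) / (1 - rho))
```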

6.
For $f$ an anonymous exactly strongly consistent social choice function (ESC SCF) and $x$ an alternative, define $b_x = b(f)_x$ to be the size of a minimal blocking coalition for $x$. By studying the correspondence between $f$ and $\{b(f)_x\}$, we establish the existence, uniqueness and monotonicity of ESC SCFs. We also prove the following conjecture of B. Peleg: a non-constant anonymous ESC SCF depends on the knowledge of every player's full preference profile.

7.
This paper considers some problems associated with estimation and inference in the normal linear regression model
$$y_t = \sum_{j=1}^{m_0} \beta_j x_{tj} + \varepsilon_t,\qquad \mathrm{var}(\varepsilon_t) = \sigma^2, \qquad (1)$$
when $m_0$ is unknown. The regressors are taken to be stochastic and assumed to satisfy V. Grenander's (1954) conditions almost surely. It is further supposed that estimation and inference are undertaken in the usual way, conditional on a value of $m_0$ chosen to minimize, with respect to $m$, the estimation criterion function
$$EC(m, T) = \hat\sigma^2_m + m\,g(T),$$
where $\hat\sigma^2_m$ is the maximum likelihood estimate of $\sigma^2$. It is shown that, subject to weak side conditions, if $g(T) \xrightarrow{a.s.} 0$ and $Tg(T) \xrightarrow{a.s.} \infty$ then this estimate is weakly consistent. It follows that estimates conditional on the chosen value of $m_0$ are asymptotically efficient, and inference undertaken in the usual way is justified in large samples. When $g(T)$ converges to a positive constant with probability one, then in large samples $m_0$ will never be chosen too small, but the probability of choosing $m_0$ too large remains positive.

The results of the paper are stronger than similar ones [R. Shibata (1976), R.J. Bhansali and D.Y. Downham (1977)] in that a known upper bound on $m_0$ is not assumed. The strengthening is made possible by the assumptions of strictly exogenous regressors and normally distributed disturbances. The main results are used to show that if the model selection criteria of H. Akaike (1974), T. Amemiya (1980), C.L. Mallows (1973) or E. Parzen (1979) are used to choose $m_0$ in (1), then in the limit the probability of choosing $m_0$ too large is at least 0.2883. The approach taken by G. Schwarz (1978), however, leads to a consistent estimator of $m_0$. These results are illustrated in a small sampling experiment.
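The contrast between AIC-type and Schwarz-type penalties is easy to reproduce in a small simulation; the sketch below (our own toy design, using the familiar $\log\hat\sigma^2_m + m\,g(T)$ form of the criteria) estimates the overfitting probabilities:

```python
import numpy as np

rng = np.random.default_rng(7)
T, m_true, m_max, n_rep = 200, 2, 6, 500
over_aic = over_bic = 0
for _ in range(n_rep):
    X = rng.normal(size=(T, m_max))
    y = X[:, :m_true] @ np.ones(m_true) + rng.normal(size=T)
    aic, bic = [], []
    for m in range(1, m_max + 1):
        b, *_ = np.linalg.lstsq(X[:, :m], y, rcond=None)
        s2 = np.mean((y - X[:, :m] @ b) ** 2)
        aic.append(np.log(s2) + 2 * m / T)          # Akaike (1974)
        bic.append(np.log(s2) + m * np.log(T) / T)  # Schwarz (1978)
    over_aic += np.argmin(aic) + 1 > m_true
    over_bic += np.argmin(bic) + 1 > m_true
print(f"P(m0 too large): AIC ~ {over_aic / n_rep:.3f}, "
      f"BIC ~ {over_bic / n_rep:.3f}")
```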

8.
In the usual linear model $y = X\beta + u$, the error vector $u$ is not observable and the vector $r$ of least-squares residuals has a singular covariance matrix that depends on the design matrix $X$. We approximate $u$ by a vector $r_1 = G(JA'y + Kz)$ of uncorrelated 'residuals', where $G$ and $(J, K)$ are orthogonal matrices, $A'X = 0$ and $A'A = I$, while $z$ is either 0 or a random vector uncorrelated with $u$ satisfying $E(z) = E(J'u) = 0$, $V(z) = V(J'u) = \sigma^2 I$. We prove that $r_1 - r$ is uncorrelated with $r - u$, for any such $r_1$, extending the results of Neudecker (1969). Building on results of Hildreth (1971) and Tiao and Guttman (1967), we show that the BAUS residual vector $r_h = r + P_1 z$, where $P_1$ is an orthonormal basis for $X$, minimizes each characteristic root of $V(r_1 - u)$, while the vector $r_b$ of Theil's BLUS residuals minimizes each characteristic root of $V(Jr_b - u)$, cf. Grossman and Styan (1972). We find that $\mathrm{tr}\,V(r_h - u) < \mathrm{tr}\,V(Jr_b - u)$ if and only if the average of the singular values of $P_1'K$ is less than $1/2$, and give examples to show that BAUS is often better than BLUS in this sense.

9.
This paper is concerned with the estimation of the model $\mathrm{MED}(y\,|\,x) = x\beta$ from a random sample of observations on $(\mathrm{sgn}\,y,\ x)$. Manski (1975) introduced the maximum score estimator of the normalized parameter vector $\beta_1 = \beta/\|\beta\|$. In the present paper, strong consistency is proved. It is also proved that the maximum score estimate lies outside any fixed neighborhood of $\beta_1$ with probability that goes to zero at an exponential rate.
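A crude sketch of the maximum score estimator for a two-regressor design: the score (fraction of correctly signed predictions) is maximized over the unit circle by grid search, a toy device; practical implementations use combinatorial optimization.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=(n, 2))
beta = np.array([1.0, -0.5])
y_sign = np.sign(x @ beta + rng.standard_cauchy(n))  # median-zero errors

def score(b):
    return np.mean(y_sign * np.sign(x @ b) > 0)

angles = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
grid = np.c_[np.cos(angles), np.sin(angles)]         # candidates, ||b|| = 1
b1_hat = grid[np.argmax([score(b) for b in grid])]
print("beta1 true:", np.round(beta / np.linalg.norm(beta), 3))
print("beta1 hat: ", np.round(b1_hat, 3))
```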

10.
We correct the limit theory presented in an earlier paper by Hu and Phillips [2004a. Nonstationary discrete choice. Journal of Econometrics 120, 103–138] for nonstationary time series discrete choice models with multiple choices and thresholds. The new limit theory shows that, in contrast to the binary choice model with nonstationary regressors and a zero threshold, where there are dual rates of convergence ($n^{1/4}$ and $n^{3/4}$), all parameters including the thresholds converge at the rate $n^{3/4}$. The presence of nonzero thresholds therefore materially affects rates of convergence. Dual rates of convergence reappear when stationary variables are present in the system. Some simulation evidence is provided, showing how the magnitude of the thresholds affects finite sample performance. A new finding is that predicted probabilities and marginal effect estimates have finite sample distributions that manifest a pile-up, or increasing density, towards the limits of the domain of definition.

11.
Unlike other popular measures of income inequality, the Gini coefficient is not decomposable, i.e., the coefficient $G(X)$ of a composite population $X = X_1 \cup \cdots \cup X_r$ cannot be computed in terms of the sizes, mean incomes and Gini coefficients of the components $X_i$. In this paper upper and lower bounds (best possible for $r = 2$) for $G(X)$ in terms of these data are given. For example, $G(X_1 \cup \cdots \cup X_r) \ge \sum \alpha_i G(X_i)$, where $\alpha_i$ is the proportion of the population in $X_i$. One of the tools used, which may be of interest for other applications, is a lower bound for $\int_0^\infty f(x)g(x)\,dx$ (a converse to Cauchy's inequality) for monotone decreasing functions $f$ and $g$.
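The lower bound is straightforward to check numerically; a minimal sketch on simulated lognormal subpopulations:

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted mean-absolute-difference formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n**2 * x.mean())

rng = np.random.default_rng(11)
x1 = rng.lognormal(mean=0.0, sigma=0.5, size=4000)
x2 = rng.lognormal(mean=1.0, sigma=0.9, size=6000)
x = np.concatenate([x1, x2])

alpha = np.array([x1.size, x2.size]) / x.size    # population proportions
lower = alpha @ np.array([gini(x1), gini(x2)])
print(f"G(X) = {gini(x):.4f} >= weighted component Ginis = {lower:.4f}")
```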

12.
Insufficient attention has been focused on the ubiquitous problem of excess inventory levels. This paper develops two models of different complexity for determining whether stock levels are economically unjustifiable and, if so, for determining inventory retention levels. The models indicate how much stock should be retained for regular use and how much should be disposed of at a salvage price for a given item.

The first model illustrates the basic logic of this approach. The net benefit realized by disposing of some quantity of excess inventory is

NET BENEFIT = SALVAGE REVENUE + HOLDING COST SAVINGS − REPURCHASE COSTS − REORDER COSTS.

This relationship can be expressed mathematically as a parabolic function of the time supply retained. Using conventional optimization, the following optimum solution is obtained:
$$t_0 = \frac{P - P_s + C/Q}{PF} + \frac{Q}{2R},$$
where $t_0$ is the optimum time supply retained; $P$ is the per-unit purchase price; $P_s$ is the per-unit salvage price; $C$ is the ordering cost; $Q$ is the usual item lot size; $F$ is the holding cost fraction; and $R$ is the annual demand for the item. Any items in excess of the optimum time supply should be sold at the salvage price.

The second model adjusts holding cost savings, repurchase costs, and reorder costs to account for present value considerations and for inflation. The following optimum relationship is derived:
$$\left(\frac{PFR}{2k} - \frac{PFtR}{2}\right)e^{-kt} + \left[\frac{PFQ}{2} + PQ(i-k) + \frac{C(i-k)}{e^{(i-k)Q/R} - 1}\right]e^{(i-k)t} - P_sR - \frac{PFR}{2k} = 0,$$
where $i$ is the expected inflation rate and $k$ is the required rate of return. Unfortunately this relationship cannot be solved analytically for $t_0$; Newton's method can be used to find a numerical solution.

The solutions obtained by the two models are compared. Not surprisingly, the present value correction tends to reduce the economic time supply to be retained, since repurchase costs and reorder costs are incurred in the future while salvage revenue and holding cost savings are realized immediately. Additionally, both models are used to derive a relationship describing the minimum economic salvage value, which is the minimum salvage price for which excess inventory should be sold.

The simple model, which does not correct for time values, can be used by any organization sophisticated enough to use an EOQ. The present value model, which includes an inflation correction, is more complex, but can readily be used on a microcomputer. These models are appropriate for independent-demand items. It is believed that they can reduce inventory investment and improve bottom-line performance.
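A minimal sketch of the simple (no time-value) model, using the closed-form $t_0$ as reconstructed above; all parameter values are hypothetical.

```python
def retention_time(P, Ps, C, Q, F, R):
    """Optimum time supply (in years) to retain for regular use."""
    return (P - Ps + C / Q) / (P * F) + Q / (2 * R)

P, Ps, C, Q, F, R = 10.0, 4.0, 50.0, 400, 0.25, 1200.0
t0 = retention_time(P, Ps, C, Q, F, R)
print(f"retain {t0:.2f} years of supply ({t0 * R:.0f} units); "
      f"dispose of any excess at the salvage price {Ps}")
```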

13.
The paper rationalizes certain functional forms for index numbers with functional forms for the underlying aggregator function. An aggregator functional form is said to be 'flexible' if it can provide a second-order approximation to an arbitrary twice differentiable linearly homogeneous function. An index number functional form is said to be 'superlative' if it is exact for (i.e., consistent with) a 'flexible' aggregator functional form. The paper shows that a certain family of index number formulae is exact for the 'flexible' quadratic mean of order $r$ aggregator function, $\big(\sum_i \sum_j a_{ij} x_i^{r/2} x_j^{r/2}\big)^{1/r}$, defined by Denny and others. For $r = 2$, the resulting quantity index is Irving Fisher's ideal index. The paper also utilizes the Malmquist quantity index in order to rationalize the Törnqvist-Theil quantity index in the nonhomothetic case. Finally, the paper attempts to justify the Jorgenson-Griliches productivity measurement technique for the case of discrete (as opposed to continuous) data.
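For the $r = 2$ case, the Fisher ideal quantity index is the geometric mean of the Laspeyres and Paasche quantity indexes; a small illustration with hypothetical two-period price/quantity data:

```python
import numpy as np

p0, q0 = np.array([1.0, 2.0, 3.0]), np.array([10.0, 6.0, 4.0])
p1, q1 = np.array([1.2, 1.8, 3.5]), np.array([11.0, 7.0, 3.0])

laspeyres = (p0 @ q1) / (p0 @ q0)   # quantities valued at base prices
paasche   = (p1 @ q1) / (p1 @ q0)   # quantities valued at current prices
fisher    = np.sqrt(laspeyres * paasche)
print(f"Laspeyres={laspeyres:.4f}  Paasche={paasche:.4f}  Fisher={fisher:.4f}")
```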

14.
Finite sample distributions of studentized inequality measures differ substantially from their asymptotic normal distribution in terms of location and skewness. We study these aspects formally by deriving the second-order expansion of the first and third cumulant of the studentized inequality measure, and we state distribution-free expressions for the bias and skewness coefficients. In the second part we improve over first-order theory by deriving Edgeworth expansions and normalizing transforms. These normalizing transforms are designed to eliminate the second-order term in the distributional expansion of the studentized transform and converge to the Gaussian limit at rate $O(n^{-1})$. This leads to improved confidence intervals, and applying a subsequent bootstrap yields a further improvement to order $O(n^{-3/2})$. We illustrate our procedure with an application to regional inequality measurement in Côte d'Ivoire.

15.
We consider a set $K$ of differentiated commodities. A preference relation on the set of consumption plans is strictly monotonic whenever consuming more of at least one commodity is preferred. It is an easy task to find examples of strictly monotonic preference relations when $K$ is finite or countable. However, it is not easy for spaces like $\ell_\infty([0,1])$, the space of bounded functions on the unit interval.

16.
An infinite-order asymptotic expansion is given for the autocovariance function of a general stationary long-memory process with memory parameter $d \in (-1/2, 1/2)$. The class of spectral densities considered includes as a special case the stationary and invertible ARFIMA($p,d,q$) model. The leading term of the expansion is of the order $O(1/k^{1-2d})$, where $k$ is the autocovariance order, consistent with the well-known power law decay for such processes, and is shown to be accurate to an error of $O(1/k^{3-2d})$. The derivation uses Erdélyi's [Erdélyi, A., 1956. Asymptotic Expansions. Dover Publications, Inc., New York] expansion for Fourier-type integrals when there are critical points at the boundaries of the range of integration, here the frequencies $\{0, 2\pi\}$. Numerical evaluations show that the expansion is accurate even for small $k$ in cases where the autocovariance sequence decays monotonically, and in other cases for moderate to large $k$. The approximations are easy to compute across a variety of parameter values and models.
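For the ARFIMA(0, d, 0) special case the autocovariances are available in closed form, $\gamma(k) = \sigma^2\,\Gamma(1-2d)\Gamma(k+d)/\big(\Gamma(d)\Gamma(1-d)\Gamma(k+1-d)\big)$, which makes the leading-order $k^{2d-1}$ decay easy to check (a sketch using this standard closed form, not the paper's general expansion):

```python
import numpy as np
from scipy.special import gammaln

def acov(k, d):
    """Autocovariance of ARFIMA(0, d, 0) with unit innovation variance."""
    return np.exp(gammaln(1 - 2 * d) - gammaln(d) - gammaln(1 - d)
                  + gammaln(k + d) - gammaln(k + 1 - d))

d = 0.3
c = np.exp(gammaln(1 - 2 * d) - gammaln(d) - gammaln(1 - d))  # decay constant
for k in (10, 100, 1000):
    print(f"k={k:5d}: exact={acov(k, d):.6f}, "
          f"leading term={c * k ** (2 * d - 1):.6f}")
```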

17.
We consider an $N$-period planning horizon with known demands $D_t$, ordering cost $A_t$, procurement cost $C_t$ and holding cost $H_t$ in period $t$. The dynamic lot-sizing problem is one of scheduling procurement $Q_t$ in each period in order to meet demand and minimize cost.

The Wagner-Whitin algorithm for dynamic lot sizing has often been misunderstood as requiring inordinate computational time and storage. We present an efficient computer implementation of the algorithm which requires little core storage, enabling it to be potentially useful on microcomputers. The recursive computations can be stated as follows:
$$M_{jk} = A_j + C_j Q_j + \sum_{t=j}^{k-1} H_t \sum_{r=t+1}^{k} D_r,$$
$$F_k = \min_{1 \le j \le k}\,[F_{j-1} + M_{jk}],\qquad F_0 = 0,$$
where $M_{jk}$ is the cost incurred by procuring in period $j$ for all periods $j$ through $k$, and $F_k$ is the minimal cost for periods 1 through $k$. Our implementation relies on the following observations regarding these computations:
$$M_{j,j} = A_j + C_j D_j,$$
$$M_{j,k+1} = M_{jk} + D_{k+1}\Big(C_j + \sum_{t=j}^{k} H_t\Big),\qquad k \ge j.$$
Using this recursive relationship, the number of computations can be greatly reduced: specifically, $\tfrac{3}{2}N^2 - \tfrac{1}{2}N$ additions and $\tfrac{1}{2}N^2 + \tfrac{1}{2}N$ multiplications are required, and this is insensitive to the data.

A FORTRAN implementation on an Amdahl 470 yielded computation times (in $10^{-3}$ seconds) of $T = -0.249 + 0.0239N + 0.00446N^2$. Problems with $N = 500$ were solved in under two seconds.
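A direct implementation of these recursions (in Python rather than the paper's FORTRAN): each $M_{j,k}$ is extended to $M_{j,k+1}$ with a single update, so the whole schedule costs $O(N^2)$ time and $O(N)$ working storage.

```python
def wagner_whitin(D, A, C, H):
    """D, A, C, H: per-period demand, ordering, unit and holding costs."""
    N = len(D)
    F = [0.0] * (N + 1)      # F[k]: minimal cost for periods 1..k
    best = [0] * (N + 1)     # period that procures for period k (traceback)
    M = [0.0] * N            # M[j]: current M_{j+1,k}
    S = [0.0] * N            # S[j]: C_j + H_j + ... + H_{k-1}
    for k in range(N):
        for j in range(k):   # M_{j,k+1} = M_{j,k} + D_{k+1} (C_j + sum H_t)
            M[j] += D[k] * S[j]
            S[j] += H[k]
        M[k] = A[k] + C[k] * D[k]      # M_{k,k} = A_k + C_k D_k
        S[k] = C[k] + H[k]
        F[k + 1], best[k + 1] = min(
            (F[j] + M[j], j + 1) for j in range(k + 1))
    return F[N], best[1:]

D = [20, 50, 10, 50, 50, 10, 20, 40, 20, 30]
cost, plan = wagner_whitin(D, A=[100] * 10, C=[3] * 10, H=[1] * 10)
print("minimal cost:", cost)
```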

18.
We examine the asymptotic properties of the coefficient of determination, $R^2$, in models with $\alpha$-stable random variables. If the regressor and error term share the same index of stability $\alpha < 2$, we show that the $R^2$ statistic does not converge to a constant but has a nondegenerate distribution on the entire $[0,1]$ interval. We provide closed-form expressions for the cumulative distribution function and probability density function of this limit random variable, and we show that the density function is unbounded at 0 and 1. If the indices of stability of the regressor and error term are unequal, we show that the coefficient of determination converges in probability to either 0 or 1, depending on which variable has the smaller index of stability, irrespective of the value of the slope coefficient. In an empirical application, we revisit the Fama and MacBeth (1973) two-stage regression and demonstrate that in the infinite-variance case the $R^2$ statistic of the second-stage regression converges to 0 in probability even if the slope coefficient is nonzero. We deduce that a small value of the $R^2$ statistic should not, in itself, be used to reject the usefulness of a regression model.
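The nondegenerate limit is easy to see in simulation; a minimal sketch using scipy's levy_stable sampler (our own toy design), showing that $R^2$ stays spread out over $[0,1]$ across replications:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(5)
alpha, n, n_rep = 1.5, 1000, 100     # common stability index alpha < 2
r2 = []
for _ in range(n_rep):
    x = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
    u = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
    y = 1.0 + 0.5 * x + u
    b1, b0 = np.polyfit(x, y, 1)     # OLS slope and intercept
    resid = y - (b0 + b1 * x)
    r2.append(1 - resid.var() / y.var())
print("R^2 quantiles (10%, 50%, 90%):",
      np.round(np.quantile(r2, [0.1, 0.5, 0.9]), 3))
```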

19.
20.
In a sample-selection model with the 'selection' variable $Q$ and the 'outcome' variable $Y$, $Y$ is observed only when $Q = 1$. For a treatment $D$ affecting both $Q$ and $Y$, three effects are of interest: the 'participation' (i.e., selection) effect of $D$ on $Q$, the 'visible performance' (i.e., observed outcome) effect of $D$ on $QY$, and the 'invisible performance' (i.e., latent outcome) effect of $D$ on $Y$. This paper shows the conditions under which the three effects are identified, respectively, by the three corresponding mean differences of $Q$, $QY$, and $Y|Q=1$ across the control ($D=0$) and treatment ($D=1$) groups. Our nonparametric estimators for those effects adopt a two-sample framework and have several advantages over the usual matching methods. First, there is no need to select the number of matched observations. Second, the asymptotic distribution is easily obtained. Third, over-sampling the control/treatment group is allowed. Fourth, there is a built-in mechanism that takes into account the 'non-overlapping support problem', which the usual matching deals with by choosing a 'caliper'. Fifth, a sensitivity analysis to gauge the presence of unobserved confounders is available. A simulation study is conducted to compare the proposed methods with matching methods, and a real data illustration is provided.
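A minimal sketch of the three mean differences on simulated data (our own toy selection design with a randomized treatment, not the paper's estimators or identification conditions):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50_000
D = rng.integers(0, 2, size=n)                               # treatment
Q = (0.2 + 0.3 * D + rng.normal(size=n) > 0).astype(float)   # selection
Y = 1.0 + 0.5 * D + rng.normal(size=n)                       # latent outcome
QY = Q * Y                                                   # observed outcome

def mean_diff(v, mask=None):
    m = np.ones(n, dtype=bool) if mask is None else mask
    return v[m & (D == 1)].mean() - v[m & (D == 0)].mean()

print("participation (Q):        ", round(mean_diff(Q), 3))
print("visible performance (QY): ", round(mean_diff(QY), 3))
print("selected outcome (Y|Q=1): ", round(mean_diff(Y, Q == 1), 3))
```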

