Similar Literature
A total of 20 similar documents were retrieved.
1.
2.
A model of an urban area producing one good for export is presented and solved to yield the employment density function. The production technology is a constant elasticity of substitution (σ) production function, unlike other models which use a Cobb-Douglas function. The shape of the employment density function proves to be sensitive to the elasticity of substitution and can diverge markedly from the usually assumed negative exponential form. The employment density function is a constant when σ = 0; always declines for positive σ, but at an increasing rate if σ ≤ 1/2; at either an increasing rate near the city center and a decreasing rate thereafter, or always at a decreasing rate, if 1/2 < σ < 1; and at a decreasing rate if σ ≥ 1. When the employment density function is differentiated with respect to σ, the sign of the derivative is positive near the city center and generally negative in the outer regions; the city seems to centralize. A decline in transport costs does not always suburbanize employment; it is contingent on the nature of the production technology.

3.
By aggregating simple, possibly dependent, dynamic micro-relationships, it is shown that the aggregate series may have univariate long-memory models and obey integrated, or infinite length, transfer function relationships. A long-memory time series model is one having a spectrum of order ω^{-2d} for small frequencies ω, d > 0. These models have infinite variance for d ≥ 1/2 but finite variance for d < 1/2. For d = 1 one obtains series that need to be differenced to achieve stationarity, but this case is not found to arise from aggregation. It is suggested that if series obeying such models occur in practice as a result of aggregation, then the techniques presently used for their analysis are not appropriate.
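As a rough illustration of the aggregation mechanism (not taken from the paper), the Python sketch below sums a large number of independent AR(1) micro-series whose autoregressive coefficients are drawn from a Beta-type distribution with mass near one; the slowly decaying sample autocorrelations of the aggregate are the long-memory signature described above. All numerical choices (2000 units, Beta(2, 1) coefficients, horizon length) are arbitrary assumptions for the example.

```python
import numpy as np

# Illustrative sketch: aggregate many AR(1) micro-series whose AR coefficients
# cluster near 1; the cross-sectional aggregate behaves like a long-memory series.
rng = np.random.default_rng(0)

N_UNITS, T = 2000, 1000
a = np.sqrt(rng.beta(2.0, 1.0, size=N_UNITS))   # AR coefficients in (0, 1), mass near 1

x = np.zeros(N_UNITS)
aggregate = np.empty(T)
for t in range(T):
    x = a * x + rng.standard_normal(N_UNITS)    # one step of every micro AR(1)
    aggregate[t] = x.sum()                      # cross-sectional aggregate

# Sample autocorrelations of the aggregate: they decay much more slowly
# (hyperbolically) than those of any individual AR(1) component.
dev = aggregate - aggregate.mean()
acf = [np.dot(dev[:-k], dev[k:]) / np.dot(dev, dev) for k in range(1, 51)]
print(np.round(acf[:10], 3))
```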

4.
This short note shows that models with an uncongestible or mildly congestible public good and a (non-atomic) continuum of consumers have an interesting but unfortunate property. Only an infinite level of public good provision in the continuum economy can be approximated by feasible public good levels in a sequence of economies with finite populations. We discuss the theoretical and practical problems this creates for familiar models that employ a continuum of consumers and a finite level of an uncongestible public good.

5.
Much work in econometrics and statistics has been concerned with comparing Bayesian and non-Bayesian estimation results, while much less has involved comparisons of Bayesian and non-Bayesian analyses of hypotheses. Some issues arising in this latter area that are mentioned and discussed in the paper are: (1) Is it meaningful to associate probabilities with hypotheses? (2) What concept of probability is to be employed in analyzing hypotheses? (3) Is a separate theory of testing needed? (4) Must a theory of testing be capable of treating both sharp and non-sharp hypotheses? (5) How is prior information incorporated in testing? (6) Does the use of power functions in practice necessitate the use of prior information? (7) How are significance levels determined when sample sizes are large, and what are the interpretations of P-values and tail areas? (8) How are conflicting results provided by asymptotically equivalent testing procedures to be reconciled? (9) What is the rationale for the '5% accept-reject syndrome' that afflicts econometrics and applied statistics? (10) Does it make sense to test a null hypothesis with no alternative hypothesis present? and (11) How are the results of analyses of hypotheses to be combined with estimation and prediction procedures? Brief discussions of these issues, with references to the literature, are provided.

Since there is much controversy concerning how hypotheses are actually analyzed in applied work, the results of a small survey relating to 22 articles employing empirical data published in leading economic and econometric journals in 1978 are presented. The major results of this survey indicate that there is widespread use of the 1% and 5% levels of significance in non-Bayesian testing, with no systematic relation between the choice of significance level and sample size. Also, power considerations are not generally discussed in empirical studies; in fact, there was a discussion of power in only one of the articles surveyed. Further, there was very little formal or informal use of prior information in testing hypotheses, and practically no attention was given to the effects of tests or pre-tests on the properties of subsequent tests or estimation results. These results indicate that there is much room for improvement in applied analyses of hypotheses.

Given the findings of the survey of applied studies, it is suggested that Bayesian procedures for analyzing hypotheses may be helpful in improving applied analyses. In this connection, the paper presents a review of some Bayesian procedures and results for analyzing sharp and non-sharp hypotheses with explicit use of prior information. In general, Bayesian procedures have good sampling properties and enable investigators to compute posterior probabilities and posterior odds ratios associated with alternative hypotheses quite readily. The relationships of several posterior odds ratios to usual non-Bayesian testing procedures are clearly demonstrated. Also, a relation between the P-value or tail area and a posterior odds ratio is described in detail in the important case of hypotheses about the mean of a normal distribution.

Other examples covered in the paper include posterior odds ratios for the hypotheses that (1) βi > 0 and βi < 0, where βi is a regression coefficient; (2) data are drawn from either of two alternative distributions; (3) θ = 0, θ > 0 and θ < 0, where θ is the mean of a normal distribution; (4) β = 0 and β ≠ 0, where β is a vector of regression coefficients; and (5) β2 = 0 vs. β2 ≠ 0, where β′ = (β′1, β′2) is a vector of regression coefficients and β1's value is unrestricted. In several cases, tabulations of odds ratios are provided. Bayesian versions of the Chow test for equality of regression coefficients and of the Goldfeld-Quandt test for equality of disturbance variances are given. Also, an application of Bayesian posterior odds ratios to a regression model selection problem utilizing the Hald data is reported.

In summary, the results reported in the paper indicate that operational Bayesian procedures for analyzing many hypotheses encountered in model selection problems are available. These procedures yield posterior odds ratios and posterior probabilities for competing hypotheses. These posterior odds ratios represent the weight of the evidence supporting one model or hypothesis relative to another. Given a loss structure, as is well known, one can choose among hypotheses so as to minimize expected loss. Also, with posterior probabilities available and an estimation or prediction loss function, it is possible to choose a point estimate or prediction that minimizes expected loss by averaging over alternative hypotheses or models. Thus the Bayesian approach for analyzing competing models or hypotheses provides a unified framework that is extremely useful in solving a number of model selection problems.
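As a hedged illustration of the P-value versus posterior-odds comparison discussed above (not the paper's own derivation), the sketch below computes the posterior odds of a sharp null about a normal mean when the alternative places a normal prior N(0, τ²) on the mean. The function name and all numerical values are invented for the example.

```python
import numpy as np
from scipy import stats

def posterior_odds_sharp_null(xbar, sigma, n, tau, prior_odds=1.0):
    """Posterior odds of H0: theta = 0 against H1: theta ~ N(0, tau^2),
    given a sample mean xbar of n observations from N(theta, sigma^2)."""
    v0 = sigma**2 / n            # sampling variance of xbar under H0
    v1 = tau**2 + v0             # marginal variance of xbar under H1
    bf01 = np.sqrt(v1 / v0) * np.exp(-0.5 * xbar**2 * (1.0 / v0 - 1.0 / v1))
    return prior_odds * bf01     # posterior odds = prior odds * Bayes factor

xbar, sigma, n, tau = 0.4, 2.0, 100, 1.0
odds = posterior_odds_sharp_null(xbar, sigma, n, tau)
z = xbar / (sigma / np.sqrt(n))
p_value = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"z = {z:.2f}, two-sided P-value = {p_value:.3f}, posterior odds K(H0:H1) = {odds:.2f}")
```

Even for moderately small P-values, the posterior odds can remain close to or above one, which is the kind of divergence between tail areas and posterior odds the paper discusses.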

6.
We perform a large-scale empirical study in order to compare the forecasting performances of single-regime and Markov-switching GARCH (MSGARCH) models from a risk management perspective. We find that MSGARCH models yield more accurate Value-at-Risk, expected shortfall, and left-tail distribution forecasts than their single-regime counterparts for daily, weekly, and ten-day equity log-returns. Also, our results indicate that accounting for parameter uncertainty improves the left-tail predictions, independently of the inclusion of the Markov-switching mechanism.

7.
Insufficient attention has been focused on the ubiquitous problem of excess inventory levels. This paper develops two models of different complexity for determining whether stock levels are economically unjustifiable and, if so, for determining inventory retention levels. The models indicate how much stock should be retained for regular use and how much should be disposed of at a salvage price for a given item.

The first model illustrates the basic logic of this approach. The net benefit realized by disposing of some quantity of excess inventory is

NET BENEFIT = SALVAGE REVENUE + HOLDING COST SAVINGS − REPURCHASE COSTS − REORDER COSTS.

This relationship can be expressed mathematically as a parabolic function of the time supply retained. Using conventional optimization, the following optimum solution is obtained:
t_o = \frac{P - P_s + C/Q}{PF} + \frac{Q}{2R}
where t_o is the optimum time supply retained; P is the per-unit purchase price; P_s is the per-unit salvage price; C is the ordering cost; Q is the usual item lot size; F is the holding cost fraction; and R is the annual demand for the item. Any items in excess of the optimum time supply should be sold at the salvage price.

The second model adjusts holding cost savings, repurchase costs, and reorder costs to account for present value considerations and for inflation. The following optimum relationship is derived:
\left(\frac{PFR}{2k} - \frac{PFtR}{2}\right)e^{-kt} + \frac{\frac{PFQ}{2} + PQ(i-k) + C(i-k)}{e^{(i-k)Q/R} - 1}\,e^{(i-k)t} - P_sR - \frac{PFR}{2k} = 0
where i is the expected inflation rate and k is the required rate of return. Unfortunately, this relationship cannot be solved analytically for t_o; Newton's method can be used to find a numerical solution.

The solutions obtained by the two models are compared. Not surprisingly, the present value correction tends to reduce the economic time supply to be retained, since repurchase costs and reorder costs are incurred in the future while salvage revenue and holding cost savings are realized immediately. Additionally, both models are used to derive a relationship describing the minimum economic salvage value, which is the minimum salvage price for which excess inventory should be sold.

The simple model, which does not correct for time values, can be used by any organization sophisticated enough to use an EOQ. The present value model, which includes an inflation correction, is more complex but can readily be used on a microcomputer. These models are appropriate for independent demand items. It is believed that these models can reduce inventory investment and improve bottom-line performance.
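A minimal Python sketch of the simple (non-discounted) model, based on my reading of the displayed optimum above; the function names, the inversion used for the minimum economic salvage value, and the numerical values are illustrative assumptions rather than the paper's own code.

```python
def optimal_retention_time(P, Ps, C, Q, F, R):
    """Optimum time supply (in years) to retain for regular use; dispose of the rest."""
    return (P - Ps + C / Q) / (P * F) + Q / (2.0 * R)

def minimum_economic_salvage_value(P, C, Q, F, R, T_excess):
    """Salvage price below which none of a T_excess-year supply should be sold
    (obtained by setting the optimal retention time equal to T_excess)."""
    return P + C / Q - P * F * (T_excess - Q / (2.0 * R))

P, Ps, C, Q, F, R = 10.0, 4.0, 25.0, 200.0, 0.25, 1000.0
t_opt = optimal_retention_time(P, Ps, C, Q, F, R)
print(f"retain a {t_opt:.2f}-year supply; dispose of anything beyond that")
print(f"minimum salvage value for a 3-year excess supply: "
      f"{minimum_economic_salvage_value(P, C, Q, F, R, 3.0):.2f}")
```

For the present-value model, the analogous first-order condition has no closed form, so a root finder (e.g. Newton's method seeded with the simple-model solution) would be used instead.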

8.
We consider an N-period planning horizon with known demand D_t, ordering cost A_t, procurement cost C_t, and holding cost H_t in period t. The dynamic lot-sizing problem is one of scheduling procurement Q_t in each period in order to meet demand and minimize cost.

The Wagner-Whitin algorithm for dynamic lot sizing has often been misunderstood as requiring inordinate computational time and storage. We present an efficient computer implementation of the algorithm which requires little core storage, thus making it potentially useful on microcomputers.

The recursive computations can be stated as follows:
M_{jk} = A_j + C_j Q_j + \sum_{t=j}^{k-1} H_t \sum_{r=t+1}^{k} D_r
F_k = \min_{1 \le j \le k} \left[ F_{j-1} + M_{jk} \right], \qquad F_0 = 0
where Mjk is the cost incurred by procuring in period j for all periods j through k, and Fk is the minimal cost for periods 1 through k. Our implementation relies on the following observations regarding these computations:
M_{j,j} = A_j + C_j D_j
M_{j,k+1} = M_{jk} + D_{k+1}\left(C_j + \sum_{t=j}^{k} H_t\right), \qquad k \ge j
Using this recursive relationship, the number of computations can be greatly reduced. Specifically, \tfrac{3}{2}N^2 - \tfrac{1}{2}N additions and \tfrac{1}{2}N^2 + \tfrac{1}{2}N multiplications are required, and this count is insensitive to the data.

A FORTRAN implementation on an Amdahl 470 yielded computation times (in 10^{-3} seconds) of T = -0.249 + 0.0239N + 0.00446N^2. Problems with N = 500 were solved in under two seconds.
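The following Python sketch is one possible low-storage implementation of the recursions above (it is not the paper's FORTRAN code); it keeps only a single row of M values plus the cumulative holding costs, so the extra storage is O(N). The example data at the bottom are arbitrary.

```python
def wagner_whitin(D, A, C, H):
    """Dynamic lot sizing with demands D[t], ordering costs A[t], procurement
    costs C[t] and holding costs H[t] for t = 0..N-1 (0-based periods).
    Returns (F, order_period): F[k] is the minimal cost of meeting demand in
    periods 0..k, and order_period[k] is the last ordering period in an
    optimal plan for horizon 0..k."""
    N = len(D)
    S = [0.0] * (N + 1)                 # S[t] = H[0] + ... + H[t-1]
    for t in range(N):
        S[t + 1] = S[t] + H[t]

    INF = float("inf")
    F = [0.0] + [INF] * N               # F[0] = 0: empty-horizon cost
    M = [0.0] * N                       # M[j] holds M_{j,k} for the current k
    order_period = [0] * N

    for k in range(N):                  # extend the horizon one period at a time
        for j in range(k + 1):
            if j == k:
                M[j] = A[j] + C[j] * D[j]               # base case M_{j,j}
            else:
                M[j] += D[k] * (C[j] + S[k] - S[j])     # M_{j,k} from M_{j,k-1}
            if F[j] + M[j] < F[k + 1]:                  # F_{k} = min_j F_{j-1} + M_{j,k}
                F[k + 1] = F[j] + M[j]
                order_period[k] = j
    return F[1:], order_period

D = [69, 29, 36, 61, 61, 26, 34, 67]
A, C, H = [85.0] * 8, [1.0] * 8, [1.0] * 8
F, orders = wagner_whitin(D, A, C, H)
print(F[-1], orders)
```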

9.
Single-equation instrumental variable estimators (e.g., the k-class) are frequently employed to estimate econometric equations. This paper employs Kadane's (1971) small-σ method and a squared-error matrix loss function to characterize a single-equation class of optimal instruments, A. A is optimal (asymptotically, for a small scalar multiple σ of the model's disturbance) in that all of its members are preferred to all non-members. From this characterization it is shown that all k-class estimators and certain iterative estimators belong to A. However, non-iterative principal component estimators [e.g., Kloek and Mennes (1960)] are unlikely to belong to A. These latter instrumental variable estimators have been advocated [see Amemiya (1966) and Kloek and Mennes (1960)] for estimating 'large' econometric models.

10.
In the theory of revealed preference and in the approach to integrability theory of Hurwicz and Uzawa, certain conditions are proposed implying the existence of a utility function generating the given demand function. This article presents a hypothesis which, under some well-known axioms of those models, is necessary and sufficient for the existence of a continuous utility function. This hypothesis implies the existence of a utility function u with the property that, for every α ∈ R, all of the boundary points of the set {x | u(x) ≥ α} are lower boundary points, a property that is fundamental for the continuity of the utility function.

11.
This paper considers some problems associated with estimation and inference in the normal linear regression model
y_t = \sum_{j=1}^{m_0} \beta_j x_{tj} + \epsilon_t, \qquad \mathrm{var}(\epsilon_t) = \sigma^2
, when m0 is unknown. The regressors are taken to be stochastic and assumed to satisfy V. Grenander's (1954) conditions almost surely. It is further supposed that estimation and inference are undertaken in the usual way, conditional on a value of m0 chosen to minimize the estimation criterion function
EC(m, T) = \hat{\sigma}^2_m + m\,g(T)
, with respect to m, where σ̂²_m is the maximum likelihood estimate of σ². It is shown that, subject to weak side conditions, if g(T) → 0 almost surely and T·g(T) → ∞ almost surely, then this estimate is weakly consistent. It follows that estimates conditional on the chosen value of m_0 are asymptotically efficient, and inference undertaken in the usual way is justified in large samples. When g(T) converges to a positive constant with probability one, then in large samples m_0 will never be chosen too small, but the probability of choosing m_0 too large remains positive.

The results of the paper are stronger than similar ones [R. Shibata (1976), R.J. Bhansali and D.Y. Downham (1977)] in that a known upper bound on m_0 is not assumed. The strengthening is made possible by the assumptions of strictly exogenous regressors and normally distributed disturbances. The main results are used to show that if the model selection criteria of H. Akaike (1974), T. Amemiya (1980), C.L. Mallows (1973) or E. Parzen (1979) are used to choose m_0 in the model above, then in the limit the probability of choosing m_0 too large is at least 0.2883. The approach taken by G. Schwarz (1978), however, leads to a consistent estimator of m_0. These results are illustrated in a small sampling experiment.
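To make the selection rule concrete, here is a hedged Python sketch that fits nested regressions and minimizes σ̂²_m + m·g(T) for a penalty with g(T) → 0 and T·g(T) → ∞. The particular choice g(T) = log(T)/T, the simulated design, and the true order m_0 = 3 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def select_order(y, X, g):
    """Choose the number of regressors m (first m columns of X) by minimizing
    EC(m, T) = sigma_hat^2_m + m * g(T)."""
    T, M = X.shape
    crit = []
    for m in range(1, M + 1):
        beta, *_ = np.linalg.lstsq(X[:, :m], y, rcond=None)
        resid = y - X[:, :m] @ beta
        sigma2_m = resid @ resid / T          # ML estimate of sigma^2 for order m
        crit.append(sigma2_m + m * g(T))
    return int(np.argmin(crit)) + 1, crit

# Simulated example with known true order m0 = 3.
rng = np.random.default_rng(1)
T, M, m0 = 400, 8, 3
X = rng.standard_normal((T, M))
y = X[:, :m0] @ np.array([1.0, -0.8, 0.5]) + rng.standard_normal(T)
m_hat, _ = select_order(y, X, g=lambda T: np.log(T) / T)
print("selected order:", m_hat)
```

In practice the scale of g(T) relative to σ² matters; the point of the sketch is only that a penalty satisfying the two limit conditions picks the true order with high probability in large samples.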

12.
In credibility theory, an unobservable random vector Y is approximated by a random vector Ŷ in a pre-assigned set A of admitted estimators for Y. The credibility approximation Ŷ for Y is best in the sense that it is the vector V ∈ A minimizing the distance between V and Y. The credibility estimator depends on observable random variables, but also on unknown structure parameters. In practical models, the latter can be estimated from realizations of the observable random variables.

13.
This paper presents a difference in the comparative statics of general equilibrium models with land when there are finitely many agents, and when there is a continuum of agents. Restricting attention to quasi-linear and Cobb–Douglas utility, it is shown that with finitely many agents, an increase in the (marginal) commuting cost increases land rent per unit (that is, land rent averaged over the consumer's equilibrium parcel) paid by the consumer located at each fixed distance from the central business district. In contrast, with a continuum of agents, average land rent goes up for consumers at each fixed distance close to the central business district, is constant at some intermediate distance, and decreases for locations farther away. Therefore, there is a qualitative difference between the two types of models, and this difference is potentially testable.

14.
In the usual linear model y = Xβ + u, the error vector u is not observable and the vector r of least squares residuals has a singular covariance matrix that depends on the design matrix X. We approximate u by a vector r_1 = G(JA′y + Kz) of uncorrelated 'residuals', where G and (J, K) are orthogonal matrices, A′X = 0 and A′A = I, while z is either 0 or a random vector uncorrelated with u satisfying E(z) = E(J′u) = 0, V(z) = V(J′u) = σ²I. We prove that r_1 − r is uncorrelated with r − u for any such r_1, extending the results of Neudecker (1969). Building on results of Hildreth (1971) and Tiao and Guttman (1967), we show that the BAUS residual vector r_h = r + P_1 z, where P_1 is an orthonormal basis for X, minimizes each characteristic root of V(r_1 − u), while the vector r_b of Theil's BLUS residuals minimizes each characteristic root of V(Jr_b − u), cf. Grossman and Styan (1972). We find that tr V(r_h − u) < tr V(Jr_b − u) if and only if the average of the singular values of P_1K is less than 1/2, and give examples to show that BAUS is often better than BLUS in this sense.
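A small Python sketch of the BAUS construction as I read it from the abstract: r_h = r + P_1 z with P_1 an orthonormal basis for the column space of X and z uncorrelated with u; the simulated design and the use of an estimated σ² are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma = 200, 3, 1.5
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta = np.array([2.0, 1.0, -0.5])
y = X @ beta + sigma * rng.standard_normal(n)

# Ordinary least-squares residuals r = (I - P)y, whose covariance
# sigma^2 (I - P) is singular and depends on X.
Q, _ = np.linalg.qr(X)               # Q plays the role of P1: orthonormal basis for col(X)
r = y - Q @ (Q.T @ y)

# BAUS-style residuals r_h = r + P1 z, with z uncorrelated with u and
# var(z) = sigma^2 I (sigma^2 replaced here by its usual estimate).
sigma2_hat = r @ r / (n - p)
z = np.sqrt(sigma2_hat) * rng.standard_normal(p)
r_h = r + Q @ z
# cov(r_h) = sigma^2 (I - P) + sigma^2 P = sigma^2 I: the adjusted 'residuals'
# are homoskedastic and uncorrelated, unlike r itself.
print(np.round(r_h[:5], 3))
```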

15.
Unlike other popular measures of income inequality, the Gini coefficient is not decomposable, i.e., the coefficient G(X) of a composite population X = X_1 ∪ … ∪ X_r cannot be computed in terms of the sizes, mean incomes and Gini coefficients of the components X_i. In this paper upper and lower bounds (best possible for r = 2) for G(X) in terms of these data are given. For example, G(X_1 ∪ … ∪ X_r) ≥ Σ_i α_i G(X_i), where α_i is the proportion of the population in X_i. One of the tools used, which may be of interest for other applications, is a lower bound for ∫_0 f(x)g(x) dx (a converse to Cauchy's inequality) for monotone decreasing functions f and g.
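The lower bound can be checked numerically. The Python sketch below computes sample Gini coefficients for two simulated subpopulations and compares G(X) with Σ α_i G(X_i); the lognormal subpopulations and sample sizes are arbitrary choices, not data from the paper.

```python
import numpy as np

def gini(x):
    """Sample Gini coefficient via the rank-based form of the mean absolute difference."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

rng = np.random.default_rng(3)
X1 = rng.lognormal(mean=0.0, sigma=0.5, size=4000)
X2 = rng.lognormal(mean=1.0, sigma=0.9, size=6000)
X = np.concatenate([X1, X2])

alpha = np.array([X1.size, X2.size]) / X.size      # population proportions
lower_bound = alpha[0] * gini(X1) + alpha[1] * gini(X2)
print(f"G(X) = {gini(X):.3f}  vs  lower bound sum_i alpha_i G(X_i) = {lower_bound:.3f}")
```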

16.
17.
Experience with twenty-one actual economic series suggests that using the Box-Cox transform does not consistently produce superior forecasts. The procedure used was to consider transformations x^{(λ)} = (x^λ − 1)/λ, where λ is chosen by maximum likelihood, fit a linear ARIMA model to x^{(λ)} and produce forecasts, and finally construct forecasts for the original series. A main problem found was that no value of λ appeared to produce normally distributed data, so the maximum likelihood procedure was inappropriate.
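A hedged sketch of the forecasting chain described above, using scipy's maximum-likelihood Box-Cox routine and a statsmodels ARIMA fit rather than whatever software the original study used; the simulated series, the ARIMA(1, 1, 1) order, and the naive back-transformation are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

# A toy positive series standing in for one of the economic series.
rng = np.random.default_rng(4)
x = np.exp(np.cumsum(0.01 + 0.05 * rng.standard_normal(200)))

# Step 1: choose lambda by maximum likelihood and transform.
y, lam = stats.boxcox(x)

# Step 2: fit a linear ARIMA model to the transformed series and forecast.
fit = ARIMA(y, order=(1, 1, 1)).fit()
y_fc = fit.forecast(steps=8)

# Step 3: map the forecasts back to the original scale (naive inversion; the
# paper's point is that this chain need not beat forecasting x directly, and
# that no lambda may make the transformed data look normal).
x_fc = np.exp(y_fc) if lam == 0 else (lam * y_fc + 1.0) ** (1.0 / lam)
print(np.round(x_fc, 3))
```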

18.
Given an arbitrary function x: R^l → R^l satisfying Walras' law and homogeneity, Debreu decomposed x into the sum of l 'individually rational' functions, x(p) = Σ_{k=1}^{l} x̄_k(p). Here we find explicit utility functions u_k, constructed on the basis of a simple geometric intuition, which give rise to Debreu's excess demands x̄_k(p).

19.
A simple method of obtaining asymptotic expansions for the densities of sufficient estimators is described. It is an extension of the one developed by O. Barndorff-Nielsen and D.R. Cox (1979) for exponential families. A series expansion in powers of n^{-1} is derived, of which the first term has an error of order n^{-1} that can effectively be reduced to n^{-3/2} by renormalization. The results obtained are similar to those given by H.E. Daniels's (1954) saddlepoint method, but the derivations are simpler. A brief treatment of approximations to conditional densities is given. Theorems are proved which extend the validity of the multivariate Edgeworth expansion to parametric families of densities of statistics which need not be standardized sums of independent and identically distributed vectors. These extensions permit the treatment of problems arising in time series analysis. The technique is used by J. Durbin (1980) to obtain approximations to the densities of partial serial correlation coefficients.

20.