Similar Documents
20 similar documents found.
1.
The purpose of this paper is to develop a dynamic Leontief model of an m-sector economic system in which the production of all goods requires one time period and one primary factor, but no capital stocks of any good, and in which the total value of outputs from all sectors is required to grow at a specified rate in each period. The requirement of a fixed rate of total value growth is less restrictive than the familiar condition of balanced growth across all sectors, and it permits the definition and analysis of interesting finite-period optimization problems. Specific results of the paper include the following: (1) the proof that a value-added maximization problem with an unrestricted initial state will experience consumption in exactly one sector in each time period, and will yield an optimal value function which is linear in the variables that describe the terminal state of the system; (2) the development of an efficient Dantzig-Wolfe procedure for analysis of the total value-added maximization problem where both the initial and terminal states are specified; (3) the derivation of testable properties that will guarantee the attainability of a specified target state from a specified initial state of the system; (4) a formal comparison of some basic characteristics of total value growth and balanced value growth.

2.
Consider a stochastic control problem where the state variable follows a Brownian motion. The flow reward is a function of the state, which can be regulated with a lump-sum and linear cost of adjustment. Using a discrete approximation, a simple exposition of the control problem is developed. The value matching conditions that hold for any given control parameters, and the smooth pasting conditions that hold for the optimal control, are derived. Some extensions and economic applications of the method are also discussed.
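
As a rough illustration of the discrete approximation described in this entry, the sketch below iterates a Bellman operator for an impulse-control problem on a grid: the state follows a random-walk approximation of Brownian motion, and the controller may reset the state at a lump-sum plus linear cost. The quadratic flow reward, the grid bounds, and all parameter values are hypothetical choices, not taken from the paper; the continuation region that emerges has boundaries where the value-matching condition binds.

```python
import numpy as np

# Illustrative only: grid bounds, the quadratic flow reward, and all cost
# parameters are assumed, not taken from the paper.
beta = 0.95            # per-period discount factor
sigma = 1.0            # std. dev. of the Brownian increment over one period
F, c = 2.0, 0.5        # lump-sum and linear adjustment costs
x = np.linspace(-10, 10, 401)      # state grid
reward = -x**2                     # flow reward of being at state x

# up/down moves of one std. dev. approximate the Brownian motion on the grid
up = np.clip(np.searchsorted(x, x + sigma), 0, len(x) - 1)
dn = np.clip(np.searchsorted(x, x - sigma), 0, len(x) - 1)

V = np.zeros_like(x)
for _ in range(2000):
    EV = 0.5 * (V[up] + V[dn])                     # expected continuation value
    keep = reward + beta * EV                      # value of not adjusting
    # value of jumping to the best target j, paying F + c|x_j - x_i|
    adjust = np.max(V[None, :] - F - c * np.abs(x[None, :] - x[:, None]), axis=1)
    V_new = np.maximum(keep, adjust)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# continuation (no-adjustment) region; its boundaries are where value matching binds
inaction = x[keep >= adjust]
print("approximate inaction region: [%.2f, %.2f]" % (inaction.min(), inaction.max()))
```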

3.
This paper considers identification and estimation of a fixed-effects model with an interval-censored dependent variable. In each time period, the researcher observes the interval (with known endpoints) in which the dependent variable lies but not the value of the dependent variable itself. Two versions of the model are considered: a parametric model with logistic errors and a semiparametric model with errors having an unspecified distribution. In both cases, the error disturbances can be heteroskedastic over cross-sectional units as long as they are stationary within a cross-sectional unit; the semiparametric model also allows for serial correlation of the error disturbances. A conditional-logit-type composite likelihood estimator is proposed for the logistic fixed-effects model, and a composite maximum-score-type estimator is proposed for the semiparametric model. In general, the scale of the coefficient parameters is identified by these estimators, meaning that the causal effects of interest are estimated directly in cases where the latent dependent variable is of primary interest (e.g., pure data-coding situations). Monte Carlo simulations and an empirical application to birthweight outcomes illustrate the performance of the parametric estimator.

4.
The decision maker receives signals imperfectly correlated with an unobservable state variable and must take actions whose payoffs depend on the state. The state randomly changes over time. In this environment, we examine the performance of simple linear updating rules relative to Bayesian learning. We show that a range of parameters exists for which linear learning results in exactly the same decisions as Bayesian learning, although not in the same beliefs. Outside this parameter range, we use simulations to demonstrate that the consumption level attainable under the optimal linear rule is virtually indistinguishable from the one attainable under Bayes’ rule, although the respective decisions will not always be identical. These results suggest that simple rules of thumb can have an advantage over Bayesian updating when more complex calculations are more costly to perform than less complex ones. We demonstrate the implications of such an advantage in an evolutionary model where agents “learn to learn.”
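
A minimal simulation of the comparison described here, assuming a two-state Markov environment with symmetric signal noise (all parameter values, including the linear gain k, are made up): it runs a Bayesian filter alongside a simple linear updating rule and reports how often the two imply the same action.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
p_switch, p_correct = 0.05, 0.8    # state-switching and signal-accuracy probabilities (assumed)

# hidden two-state Markov chain and conditionally independent binary signals
state = np.zeros(T, dtype=int)
for t in range(1, T):
    state[t] = state[t - 1] ^ int(rng.random() < p_switch)
signal = np.where(rng.random(T) < p_correct, state, 1 - state)

bayes = np.empty(T)     # Bayesian belief that the current state is 1
linear = np.empty(T)    # belief under the linear rule b <- (1-k)b + k*signal
b, bl, k = 0.5, 0.5, 0.3            # k is a hypothetical fixed gain
for t in range(T):
    b = b * (1 - p_switch) + (1 - b) * p_switch          # prediction step (state may switch)
    like1 = p_correct if signal[t] == 1 else 1 - p_correct
    b = b * like1 / (b * like1 + (1 - b) * (1 - like1))  # Bayes update on the signal
    bl = (1 - k) * bl + k * signal[t]
    bayes[t], linear[t] = b, bl

# decisions: act as if the state is 1 whenever the belief exceeds one half
agree = np.mean((bayes > 0.5) == (linear > 0.5))
print(f"linear and Bayesian decisions agree in {agree:.1%} of periods")
```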

5.
We analyze the possibility of eventual extinction of a replenishable economic asset (natural resource or capital) whose stocks follow a stationary Markov process with zero as an absorbing state. In particular, the stochastic process of stocks is determined by a given sequence of i.i.d. random variables with bounded support and a positive-valued transition function that maps the current level of the stock and the current realization of the random variable to the next period’s stock. Such processes arise naturally in stochastic dynamic models of economic growth and exploitation of natural resources. Under a minimal set of assumptions, the paper identifies conditions for almost sure extinction from all initial stocks as well as conditions under which the stocks enter every neighborhood of zero infinitely often almost surely. Our results emphasize the crucial role played by the nature of the transition function under the worst realization of the random shock and clarify the role of the “average” rate of growth in the context of extinction.
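
A quick Monte Carlo sketch of the kind of process described here (the power-function transition, the uniform shock on a bounded support, and the initial stock are all hypothetical): it simulates many stock paths and records how often they enter a small neighborhood of zero. With this particular transition, the worst shock still allows small stocks to grow, which illustrates the role the entry attributes to the transition function under the worst realization.

```python
import numpy as np

rng = np.random.default_rng(1)

def transition(stock, shock):
    # hypothetical transition function: shock-scaled concave growth; under the
    # worst shock (0.5) small stocks still grow, so extinction should not occur here
    return shock * stock ** 0.8

shock_lo, shock_hi = 0.5, 1.5       # bounded i.i.d. shock support (assumed)
n_paths, T = 2000, 500
x = np.full(n_paths, 0.2)           # common initial stock
hit_small = np.zeros(n_paths, dtype=bool)
for _ in range(T):
    x = transition(x, rng.uniform(shock_lo, shock_hi, size=n_paths))
    hit_small |= x < 1e-3           # entered a small neighborhood of zero

print("share of paths that ever came within 1e-3 of zero:", hit_small.mean())
```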

6.
We propose a categorical time-varying coefficient translog cost function, where each coefficient is expressed as a nonparametric function of a categorical time variable, thereby allowing each time period to have its own set of coefficients. Our application to U.S. electricity firms reveals that this model offers two major advantages over the traditional time trend representation of technical change: (1) it is capable of producing estimates of productivity growth that closely track those obtained using the Törnqvist approximation to the Divisia index; and (2) it can solve a well-known problem commonly referred to as “the problem of trending elasticities”.
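
The categorical-time idea can be mimicked in a toy way by interacting every regressor with year dummies, which is equivalent to estimating a separate coefficient vector per year. The sketch below does this on simulated two-input data; the data-generating process and variable names are placeholders, and the specification is far smaller than a full translog.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, years = 50, [2000, 2001, 2002]
data = []
for yr in years:
    lny, lnp1, lnp2 = (rng.normal(size=n_firms) for _ in range(3))
    # output elasticity drifts over time in this made-up data-generating process
    lnc = (0.5 + 0.05 * (yr - 2000)) * lny + 0.6 * lnp1 + 0.4 * lnp2 \
          + 0.1 * lnp1 * lnp2 + rng.normal(scale=0.1, size=n_firms)
    data.append((yr, lnc, lny, lnp1, lnp2))

# one regression per year is equivalent to a fully interacted categorical-time model
for yr, lnc, lny, lnp1, lnp2 in data:
    X = np.column_stack([np.ones(n_firms), lny, lnp1, lnp2, lnp1 * lnp2])
    beta = np.linalg.lstsq(X, lnc, rcond=None)[0]
    print(yr, np.round(beta, 2))   # each year gets its own coefficient set
```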

7.
Dr. J. Fischer, Metrika, 1982, 29(1): 227–247
Based on sample values of a one-dimensional random variable, a nonparametric maximum likelihood estimate for the unknown probability density is introduced as the solution of an optimization problem in an appropriate Hilbert space. This solution turns out to be a polynomial spline function, and a complete characterization is given using recent results on the differentiability of the optimal value of a parametrized family of optimization problems. An important feature of this estimate is that its support interval results in a quite natural way from the formulation of the problem and is not fixed in advance. The estimator is shown to have a certain consistency property for a special class of density functions. Numerical results will be given in a subsequent paper.

8.
This paper is concerned with inference about a function g that is identified by a conditional quantile restriction involving instrumental variables. The paper presents a test of the hypothesis that g belongs to a finite-dimensional parametric family against a nonparametric alternative. The test is not subject to the ill-posed inverse problem of nonparametric instrumental variable estimation. Under mild conditions, the test is consistent against any alternative model. In large samples, its power is arbitrarily close to 1 uniformly over a class of alternatives whose distance from the null hypothesis is proportional to n^(-1/2), where n is the sample size. Monte Carlo simulations illustrate the finite-sample performance of the test.

9.
The possibility that the effect of monetary policy on output may depend on whether credit conditions are tight or loose can be expressed as a non-linearity in the relation between real money supply and output, of which a simple case is a threshold effect. In this case, consistent with the credit-rationing model of Blinder (1987), the monetary variable has a more powerful effect when it is below some threshold than when it is above. Testing for the importance of this threshold is straightforward if the appropriate threshold value is known a priori, but where the value is not known and must be chosen based on the sample, the testing problem becomes more difficult. We apply recently developed tests applicable in this situation to both US and Canadian data, and find substantial evidence of a threshold effect, particularly in US data. However, the estimated threshold values are high.
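
A bare-bones version of the threshold search described here, on simulated data (the variable names, trimmed grid, and data-generating process are hypothetical, and the inference problem the entry highlights is ignored): for each candidate threshold, the monetary variable enters with a regime-specific coefficient, and the candidate minimizing the sum of squared residuals is kept.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
m = rng.normal(size=n)                                   # monetary variable (simulated)
y = 0.2 + np.where(m <= 0.5, 1.0, 0.2) * m + rng.normal(scale=0.5, size=n)  # output

def ssr(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((y - X @ beta) ** 2)

# trimmed grid of candidate thresholds (15th to 85th percentile of m)
candidates = np.quantile(m, np.linspace(0.15, 0.85, 71))
ssr_by_c = [ssr(np.column_stack([np.ones(n), m * (m <= c), m * (m > c)]), y)
            for c in candidates]
c_hat = candidates[int(np.argmin(ssr_by_c))]
print("estimated threshold:", round(float(c_hat), 3))
```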

10.
The paper focuses on the time series aggregate consumption function for the Hungarian economy. The empirical econometric analysis presented produces several new results. First, it shows that the income and consumption variables used in this type of model by previous studies are I(2) variables. Consequently, error correction models formulated in terms of their first differences are mis-specified. Second, it provides strong empirical evidence supporting the view that consumption (and thus saving) was (real) interest rate elastic during the period under investigation, affecting both the long-run and the short-run relationships between income and consumption. Third, it provides empirical evidence on choosing the proper income variable in the consumption function. The model selection results clearly support the model with the unadjusted total real money income variable. Fourth, it shows that for the period 1960–1986 a correctly specified and stable error correction model can be established. Finally, the analysis shows that when used for the period beyond 1986, this model suffers from a structural break.
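
A generic two-step error-correction sketch on simulated I(1) data (the entry's point is precisely that the Hungarian series are I(2), so this is only the textbook template, not the paper's specification): step 1 estimates the long-run relation, and step 2 regresses the consumption change on the income change and the lagged equilibrium error.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# simulated income and consumption sharing a common stochastic trend (hypothetical DGP)
y = np.cumsum(rng.normal(size=T))
c = 0.8 * y + rng.normal(scale=0.5, size=T)

def ols(X, z):
    return np.linalg.lstsq(X, z, rcond=None)[0]

# step 1: long-run (cointegrating) regression, c_t = a + b*y_t + u_t
a, b = ols(np.column_stack([np.ones(T), y]), c)
ecm_term = c - a - b * y

# step 2: short-run error-correction regression,
#   dc_t = alpha + beta*dy_t + gamma*u_{t-1} + e_t   (gamma < 0 indicates correction)
dc, dy = np.diff(c), np.diff(y)
coef = ols(np.column_stack([np.ones(T - 1), dy, ecm_term[:-1]]), dc)
print("adjustment coefficient gamma ~", round(float(coef[2]), 3))
```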

11.
In this paper, some new indices for ordinal data are introduced. These indices have been developed so as to measure the degree of concentration on the “small” or the “large” values of a variable whose level of measurement is ordinal. Their advantage in relation to other approaches is that they ascribe unequal weights to each class of values. Although they constitute a useful tool in various fields of application, the focus here is on their use in sample surveys and specifically in situations where one is interested in taking into account the “distance” of the responses from the “neutral” category in a given question. The properties of these indices are examined and methods for constructing confidence intervals for their actual values are discussed. The performance of these methods is evaluated through an extensive simulation study.
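
The indices themselves are not reproduced here, but the flavor of "unequal weights by distance from the neutral category" can be illustrated with a toy calculation on a 5-point scale. The counts, weights, and neutral position below are all made up; this is not the paper's definition.

```python
import numpy as np

# counts of responses on a 5-point ordinal scale, category 3 being "neutral" (toy data)
counts = np.array([10, 25, 40, 60, 15], dtype=float)
p = counts / counts.sum()

# hypothetical unequal weights: each category is weighted by its distance from the
# neutral category, so responses far from "neutral" count more than mild ones
neutral = 2                         # zero-based index of the neutral category
dist = np.abs(np.arange(len(p)) - neutral)
w = dist / dist.max()

concentration_large = np.sum(w[neutral:] * p[neutral:])        # mass on the "large" side
concentration_small = np.sum(w[:neutral + 1] * p[:neutral + 1])  # mass on the "small" side
print(concentration_small, concentration_large)
```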

12.
This paper is devoted to the study of infinite horizon continuous time optimal control problems with incentive compatibility constraints that arise in many economic problems, for instance in defining the second best Pareto optimum for the joint exploitation of a common resource, as in Benhabib and Radner [Benhabib, J., Radner, R., 1992. The joint exploitation of a productive asset: a game theoretic approach. Economic Theory, 2: 155–190]. An incentive compatibility constraint is a constraint on the continuation of the payoff function at every time. We prove that the dynamic programming principle holds, the value function is a viscosity solution of the associated Hamilton–Jacobi–Bellman (HJB) equation, and that it is the minimal supersolution satisfying certain boundary conditions. When the incentive compatibility constraint only depends on the present value of the state variable, we prove existence of optimal strategies, and we show that the problem is equivalent to a state constraints problem in an endogenous state region which depends on the data of the problem. Some economic examples are analyzed.

13.
In the last decade, a number of models for the dynamic facility location problem have been proposed. The various models contain differing assumptions regarding the revenues and costs realized in the opening, operation, and closure of a facility, as well as considering which of the facility sites are candidates for acquisition or disposal at the beginning of a time period. Since the problem becomes extremely large for practical applications, much of the research has been directed toward developing efficient solution techniques. Most of the models and solutions assume that the facilities will be disposed of at the end of the time horizon, since distant future conditions usually cannot be forecast with any reasonable degree of accuracy. The problem with this approach is that the “optimal” solution is optimal for only one hypothesized post-horizon facility configuration and may become nonoptimal under a different configuration. Post-optimality analysis is needed to assure management that the “optimal” decision to open or close a facility at a given point in time will not prove to be “nonoptimal” when the planning horizon is extended or when design parameters in subsequent time periods change. If management has some guarantee that the decision to open or close a facility in a given time period will not change, it can safely direct attention to the accuracy of the design parameters within that time period.

This paper proposes a mixed integer linear programming model to determine which of a finite set of warehouse sites will be operating in each time period of a finite planning horizon. The model is general in the sense that it can reflect a number of acquisition alternatives: purchase, lease, or rent. The principal assumptions of the model are: (a) warehouses are assumed to have infinite capacity in meeting customer demand; (b) in each time period, any non-operating warehouse is a candidate for becoming operational, and likewise any operating warehouse is a candidate for disposal; (c) during a given time period, the fixed costs of becoming operational at a site are greater than the disposal value at that site, reflecting the nonrecoverable costs involved in operating a warehouse (these costs are separate from the acquisition and liquidation values of the site); and (d) during a time period, the operation of a warehouse incurs overhead and maintenance costs as well as a depreciation in the disposal value.

To solve the model, it is first simplified and a partial optimal solution is obtained by iterative examination of both lower and upper bounds on the savings realized if a site is opened in a given time period. An attempt is made to fix each warehouse open or closed in each time period. The bounds are based on the delta and omega tests proposed by Efroymson and Ray (1966) and Khumawala (1972), with adjustment for changes in the value of the warehouse between the beginning and end of a time period. A complete optimal solution is obtained by solving the reduced model with Benders' decomposition procedure. The optimal solution is then tested to determine which time periods contain “tentative” decisions that may be affected by post-horizon data, by analyzing the relationship between the lower (or upper) bounds used in the model simplification for each time period. If the warehouse decisions made in a time period satisfy these relationships and are thus unaffected by data changes in subsequent time periods, then the decisions made in earlier time periods will also be unaffected by future changes.
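
For concreteness, a toy multi-period warehouse location model of this general type can be written down with the PuLP modeling library as below. The instance, cost values, and the particular open/operate variable structure are illustrative assumptions, and the sketch deliberately omits the acquisition/disposal values and the bounding tests described above.

```python
import pulp

# Toy instance: 3 candidate sites, 4 customers, 3 periods; all numbers are made up.
sites, customers, periods = range(3), range(4), range(3)
open_cost = {(w, t): 50 for w in sites for t in periods}   # fixed cost of becoming operational
run_cost  = {(w, t): 10 for w in sites for t in periods}   # overhead/maintenance per period
ship_cost = {(w, c, t): 5 * abs(w - c) + 1 for w in sites for c in customers for t in periods}

prob = pulp.LpProblem("dynamic_facility_location", pulp.LpMinimize)
y = pulp.LpVariable.dicts("operate", (sites, periods), cat="Binary")   # site operating in period t
z = pulp.LpVariable.dicts("open",    (sites, periods), cat="Binary")   # site opened at start of t
x = pulp.LpVariable.dicts("serve",   (sites, customers, periods), lowBound=0, upBound=1)

prob += pulp.lpSum(open_cost[w, t] * z[w][t] + run_cost[w, t] * y[w][t]
                   for w in sites for t in periods) \
      + pulp.lpSum(ship_cost[w, c, t] * x[w][c][t]
                   for w in sites for c in customers for t in periods)

for c in customers:
    for t in periods:
        prob += pulp.lpSum(x[w][c][t] for w in sites) == 1      # demand must be met
for w in sites:
    for c in customers:
        for t in periods:
            prob += x[w][c][t] <= y[w][t]                       # serve only from operating sites
    for t in periods:
        prev = y[w][t - 1] if t > 0 else 0                      # warehouses start closed
        prob += z[w][t] >= y[w][t] - prev                       # opening indicator

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([[int(y[w][t].value()) for t in periods] for w in sites])
```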

14.
I consider a semiparametric version of the nonseparable triangular model of Chesher [Chesher, A., 2003. Identification in nonseparable models. Econometrica 71, 1405–1441]. The proposed model is linear in coefficients, where the coefficients are unknown functions of unobserved latent variables. Using a control variable idea and quantile regression methods, I propose a simple two-step estimator for the coefficients evaluated at particular values of the latent variables. Under the condition that the instruments are locally relevant (i.e. they affect a particular conditional quantile of interest of the endogenous variable) I establish consistency and asymptotic normality. Simulation experiments confirm the theoretical results.
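
A generic two-step control-function sketch in the spirit of this estimator, using statsmodels' quantile regression on simulated data (the data-generating process, the single quantile level, and the linear second stage are simplifications, not the paper's exact procedure): the first-stage quantile residual serves as the control variable in the second-stage quantile regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # first-stage disturbance
x = 1.0 + 0.8 * z + u                        # endogenous regressor
y = 0.5 + 1.5 * x + 0.7 * u + rng.normal(size=n)   # outcome depends on u -> endogeneity

tau = 0.5
# step 1: quantile regression of x on the instrument; the residual acts as a control variable
step1 = sm.QuantReg(x, sm.add_constant(z)).fit(q=tau)
v_hat = x - step1.predict(sm.add_constant(z))

# step 2: quantile regression of y on x and the estimated control variable
X2 = sm.add_constant(np.column_stack([x, v_hat]))
step2 = sm.QuantReg(y, X2).fit(q=tau)
print(step2.params)    # coefficient on x should be near 1.5
```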

15.
This paper investigates how monetary policy shocks affect the stock market of the United States (US) conditional on states of investor sentiment. In this regard, we use a recently developed estimator that uses high-frequency surprises as a proxy for the structural monetary policy shocks, which in turn is achieved by integrating the current short-term rate surprises, which are least affected by an information effect, into a vector autoregressive (VAR) model as an exogenous variable. When allowing for time-varying model parameters, we find that, compared to the low investor sentiment regime, the negative reaction of stock returns to contractionary monetary policy shocks is stronger in the state associated with relatively higher investor sentiment. Our results are robust to an alternative sample period (which excludes the zero lower bound) and to alternative model specifications, and they have important implications for academics, investors, and policymakers.
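
A stripped-down illustration of the "surprise as an exogenous variable in a VAR" step, on simulated data and estimated equation by equation with OLS (the series, lag length, and coefficients are placeholders; the time-varying-parameter and sentiment-regime features of the entry are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 400, 3
surprise = rng.normal(size=T)                       # high-frequency policy surprise (simulated)
Y = np.zeros((T, k))                                # e.g. [stock return, output gap, inflation]
A = np.array([[0.2, 0.1, 0.0],
              [0.0, 0.5, 0.1],
              [0.0, 0.1, 0.6]])
b = np.array([-1.5, -0.3, -0.1])                    # contemporaneous response to the surprise
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + b * surprise[t] + rng.normal(scale=0.5, size=k)

# VAR(1) with the surprise as an exogenous regressor, OLS per equation
X = np.column_stack([np.ones(T - 1), Y[:-1], surprise[1:]])
coef = np.linalg.lstsq(X, Y[1:], rcond=None)[0]     # one column of coefficients per equation
print("estimated impact of the surprise on each variable:", coef[-1])
```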

16.
S. Subba Rao, Metrika, 1969, 14(1): 101–116
Summary: This paper considers the effect of postponable interruptions on an M/G/1 queueing process where customers balk with probability n/N (n = 0, 1, 2, …, N) and renege after having waited for a random length of time. The busy period process, in which transitions from any state (m, n) to any other state are permitted without an intermediate passage to the empty state, is investigated using the supplementary variable method and discrete transforms. The general process, in which busy periods alternate with idle periods, is then discussed in terms of the busy period process and renewal distributions, and finally the ergodic properties of the general process are studied by appealing to some results of renewal theory.
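
A small discrete-event simulation of a queue with this balking rule (join-refusal with probability n/N, where n is the number already in the system) and exponential reneging; the arrival rate, patience rate, N, and the lognormal service-time distribution are arbitrary choices for illustration, and no interruptions are modeled.

```python
import heapq, random

random.seed(0)
LAM, N, PATIENCE_RATE, T_END = 1.0, 10, 0.2, 100_000.0

def service_time():
    return random.lognormvariate(-0.5, 0.7)     # stand-in for a general service distribution

events = [(random.expovariate(LAM), "arrival")]
queue = []                  # renege deadlines of customers waiting in line (FIFO)
server_busy = False
served = balked = reneged = 0

def start_next(t):
    """Move the next still-waiting customer into service, skipping reneged ones."""
    global server_busy, served, reneged
    while queue:
        deadline = queue.pop(0)
        if deadline <= t:                        # gave up before service could start
            reneged += 1
            continue
        served += 1
        server_busy = True
        heapq.heappush(events, (t + service_time(), "departure"))
        return
    server_busy = False

while events:
    t, kind = heapq.heappop(events)
    if t > T_END:
        break
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(LAM), "arrival"))
        # drop waiting customers whose patience has already run out
        still = [d for d in queue if d > t]
        reneged += len(queue) - len(still)
        queue[:] = still
        n = len(queue) + (1 if server_busy else 0)
        if random.random() < n / N:              # balk with probability n/N
            balked += 1
        elif server_busy:
            queue.append(t + random.expovariate(PATIENCE_RATE))
        else:
            served += 1
            server_busy = True
            heapq.heappush(events, (t + service_time(), "departure"))
    else:                                        # departure: the server becomes free
        start_next(t)

print(f"served={served}  balked={balked}  reneged={reneged}")
```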

17.
This paper is the sequel to Wijngaard and Stidham (1986). The topic is a countable-state, average-reward semi-Markov decision process with a transition mechanism that is skip-free to the right. The applications are controlled GI/M/1 queues. Skip-free to the right means that state n cannot be reached from states i (< n) without first reaching state n − 1. In such decision processes the reversed optimality equation can be used to estimate the optimal average reward by bisection. Wijngaard and Stidham (1986) show that it is possible to use upper and lower bounds on the value function for this bisection. This paper considers queueing problems where this bisection cannot be used in a standard way. Instead of an upper bound on the value function, it is now possible to use the character of the optimal strategy.

18.
The traditional fuzzy regression model involves two solving processes. First, the extension principle is used to derive the membership function of extrapolated values, and then, attempts are made to include every collected value with a membership degree of at least h in the fuzzy regression interval. However, the membership function of extrapolated values is sometimes highly complex, and it is difficult to determine the h value, i.e., the degree of fit between the input values and the extrapolative fuzzy output values, when the information obtained from the collected data is insufficient. To solve this problem, we proposed a simplified fuzzy regression equation based on Carlsson and Fullér’s possibilistic mean and variance method and used it for modeling the constraints and objective function of a fuzzy regression model without determining the membership function of extrapolative values and the value of h. Finally, we demonstrated the application of our model in forecasting pneumonia mortality. Thus, we verified the effectiveness of the proposed model and confirmed the potential benefits of our approach, in which the forecasting error is very small.
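
For reference, the possibilistic mean and variance that the entry refers to can be computed for a triangular fuzzy number both from the level-set integrals and from the closed-form expressions for the triangular case; the sketch below checks that the two agree. The particular fuzzy number is arbitrary, and the definitions used are the level-set integrals as given by Carlsson and Fullér.

```python
import numpy as np

# Triangular fuzzy number A = (a, alpha, beta): center a, left spread alpha,
# right spread beta; its gamma-level sets are [a - (1-g)*alpha, a + (1-g)*beta].
a, alpha, beta = 10.0, 2.0, 4.0              # an arbitrary example
g = np.linspace(0.0, 1.0, 100_001)
lo = a - (1 - g) * alpha
hi = a + (1 - g) * beta

# level-set integrals (simple Riemann approximation on [0, 1])
mean_num = np.mean(g * (lo + hi))            # integral of g * (a1(g) + a2(g)) dg
var_num = 0.5 * np.mean(g * (hi - lo) ** 2)  # (1/2) * integral of g * (a2(g) - a1(g))^2 dg

# closed forms for the triangular case
print(mean_num, a + (beta - alpha) / 6)      # both ~ 10.3333
print(var_num, (alpha + beta) ** 2 / 24)     # both ~ 1.5
```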

19.
This paper adopts a spatial probit approach to explain interaction effects among cross-sectional units when the dependent variable takes the form of a binary response variable and transitions from state 0 to 1 occur at different moments in time. The model has two spatially lagged variables: one for units that are still in state 0 and one for units that had already transferred to state 1. The parameters are estimated on observations for those units that are still in state 0 at the start of the different time periods, whereas observations on units after they transferred to state 1 are discarded, just as in the literature on duration modeling. Furthermore, neighboring units that had not yet transferred may have a different impact from units that had already transferred. We illustrate our approach with an empirical study of the adoption of inflation targeting for a sample of 58 countries over the period 1985–2008.

20.
This paper considers optimal reinsurance and investment strategies for an insurer under the mean–variance criterion within a game-theoretic framework. Specifically, it is assumed that the surplus process is governed by a Cramér–Lundberg model and that, apart from purchasing reinsurance, the insurer is allowed to invest in a financial market with multiple assets that can all be risky, whose price processes are modeled by jump–diffusion processes. Because the market contains no risk-free (cash) asset, the method of separating the variables is no longer viable. We turn to an alternative approach to solve the extended Hamilton–Jacobi–Bellman equation, and closed-form expressions of the optimal strategies and value function are not only derived but also proved to be unique. Moreover, some special cases of our model are provided and several numerical analyses of our results are presented as well. Under this criterion, in contrast to the existing literature, we find that (i) the value function is not linear but quadratic with respect to the current wealth; (ii) the optimal reinsurance and investment strategies depend on the wealth process; (iii) the parameters of the risky assets (insurance market) have an impact on the optimal reinsurance (investment) policy; and (iv) the safety loading of the insurer affects the optimal strategies.

