Similar Articles
20 similar articles found.
1.
This paper offers a methodology to address the endogeneity of inputs in the directional technology distance function (DTDF)‐based formulation of banking technology which explicitly accommodates the presence of undesirable nonperforming loans—an inherent characteristic of the bank's production due to its exposure to credit risk. Specifically, we model nonperforming loans as an undesirable output in the bank's production process. Since the stochastic DTDF describing banking technology is likely to suffer from the endogeneity of inputs, we propose addressing this problem by considering a system consisting of the DTDF and the first‐order conditions from the bank's cost minimization problem. The first‐order conditions also allow us to identify the ‘cost‐optimal’ directional vector for the banking DTDF, thus eliminating the uncertainty associated with an ad hoc choice of the direction. We apply our cost system approach to the data on large US commercial banks for the 2001–2010 period, which we estimate via Bayesian Markov chain Monte Carlo methods subject to theoretical regularity conditions. We document dramatic distortions in banks' efficiency, productivity growth and scale elasticity estimates when the endogeneity of inputs is assumed away and/or the DTDF is fitted in an arbitrary direction. Copyright © 2015 John Wiley & Sons, Ltd.
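For reference, the standard definition of the directional technology distance function with an undesirable output, and the translation property that parametric estimation exploits, can be stated as follows (textbook forms in generic notation, not the paper's particular parametric specification); here T is the technology set, x inputs, y desirable outputs, b the undesirable output (nonperforming loans) and (g_y, g_b) the direction vector:

```latex
\vec{D}_T(x, y, b; g_y, g_b) = \sup\{\beta \in \mathbb{R} : (x,\; y + \beta g_y,\; b - \beta g_b) \in T\},
\qquad
\vec{D}_T(x,\; y + \alpha g_y,\; b - \alpha g_b;\; g_y, g_b) = \vec{D}_T(x, y, b; g_y, g_b) - \alpha.
```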

2.
This paper studies a stylized model of local interaction where agents choose from an ever increasing set of vertically ranked actions, e.g. technologies. The driving forces of the model are infrequent upward shifts (‘updates’), followed by a rapid process of local imitation (‘diffusion’). Our main focus is on the regularities displayed by the long-run distribution of diffusion waves and their implications for the performance of the system. By integrating analytical techniques and numerical simulations, we come to the following two main conclusions. (1) If non-coordination costs are sufficiently high, the system behaves critically, in the sense customarily used in physics. (2) The performance of the system is optimal at the frontier of the critical region. Heuristically, this may be interpreted as an indication that (performance-sensitive) evolutionary forces induce the system to be placed ‘at the edge of order and chaos’.
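The 'update then diffuse' mechanics can be illustrated with a stylized toy simulation (assumed parameters and simplified probabilistic imitation, not the paper's exact model), which records per-period adoption-wave sizes; under criticality one would expect the wave-size distribution to be heavy-tailed:

```python
import random
from collections import Counter

def simulate(n=200, periods=4000, p_update=0.02, p_imitate=0.7, seed=1):
    # Stylized toy: agents on a ring hold vertically ranked actions (integers).
    random.seed(seed)
    tech = [0] * n
    waves = []
    for _ in range(periods):
        if random.random() < p_update:           # infrequent upward shift
            tech[random.randrange(n)] += 1
        adopters = 0
        new = tech[:]
        for i in range(n):                       # one pass of local imitation
            best = max(tech[i - 1], tech[(i + 1) % n])
            if best > tech[i] and random.random() < p_imitate:
                new[i] = best
                adopters += 1
        tech = new
        if adopters:
            waves.append(adopters)
    return Counter(waves)

print(sorted(simulate().items())[:10])           # wave-size frequencies
```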

3.
The problem of option hedging in the presence of proportional transaction costs can be formulated as a singular stochastic control problem. Hodges and Neuberger [1989. Optimal replication of contingent claims under transactions costs. Review of Futures Markets 8, 222–239] introduced an approach that is based on maximization of the expected utility of terminal wealth. We develop a new algorithm to solve the corresponding singular stochastic control problem and introduce a new approach to option hedging which is closer in spirit to the pathwise replication of Black and Scholes [1973. The pricing of options and corporate liabilities. Journal of Political Economy 81, 637–654]. This new approach is based on minimization of a Black–Scholes-type measure of pathwise risk, defined in terms of a market delta, subject to an upper bound on the hedging cost. We provide an efficient backward induction algorithm for the problem of cost-constrained risk minimization, whose associated singular stochastic control problem is shown to be equivalent to an optimal stopping problem. This algorithm is then modified to solve the singular stochastic control problem associated with utility maximization, which cannot be reduced to an optimal stopping problem. We propose to choose an optimal parameter (risk-aversion coefficient or Lagrange multiplier) in either approach by minimizing the mean squared hedging error and demonstrate that with this “best” choice of the parameter, both approaches have similar performance. We also discuss the different notions of risk in both approaches and propose a volatility adjustment for the risk-minimization approach, which is analogous to that introduced by Zakamouline [2006. European option pricing and hedging with both fixed and proportional transaction costs. Journal of Economic Dynamics and Control 30, 1–25] for the utility maximization approach, thereby providing a unified treatment of both approaches.
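The qualitative structure of such policies, a no-transaction region around the frictionless hedge, can be sketched with a crude band rule around the Black–Scholes market delta (an illustration only; the band width below is an arbitrary assumption and the paper's backward-induction algorithms are far more refined):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_delta(S, K, T, r, sigma):
    # Black-Scholes delta of a European call.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

def rebalance(position, S, K, T, r, sigma, band=0.05):
    """Trade back to the market delta only when the current holding leaves
    a tolerance band around it, a stand-in for the no-transaction region
    that cost-aware optimal policies exhibit."""
    target = bs_delta(S, K, T, r, sigma)
    return target if abs(position - target) > band else position

print(rebalance(0.40, S=100, K=100, T=0.5, r=0.02, sigma=0.25))
```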

4.
A fuzzy-QFD approach to supplier selection
This article suggests a new method that transfers the house of quality (HOQ) approach typical of quality function deployment (QFD) problems to the supplier selection process. To test its efficacy, the method is applied to a supplier selection process for a medium-to-large industry that manufactures complete clutch couplings.

The study starts by identifying the features that the purchased product should have (internal variables “WHAT”) in order to satisfy the company's needs, then it seeks to establish the relevant supplier assessment criteria (external variables “HOW”) in order to come up with a final ranking based on the fuzzy suitability index (FSI). The whole procedure was implemented using fuzzy numbers; the application of a fuzzy algorithm allowed the company to define by means of linguistic variables the relative importance of the “WHAT”, the “HOW”–“WHAT” correlation scores, the resulting weights of the “HOW” and the impact of each potential supplier.

Special attention is paid to the various subjective assessments in the HOQ process, and symmetrical triangular fuzzy numbers are suggested to capture the vagueness in people's verbal assessments.
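A minimal sketch of the fuzzy arithmetic behind such an index, using triangular fuzzy numbers and hypothetical weights and ratings (the paper's full HOQ aggregation involves more stages than this):

```python
def tfn_mul(a, b):
    # Product of two positive triangular fuzzy numbers (l, m, u),
    # in the usual first-order approximation used in fuzzy QFD.
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def centroid(t):
    # Simple centroid defuzzification of a triangular fuzzy number.
    return sum(t) / 3.0

# Hypothetical data: weights of two "HOW" criteria and one supplier's
# fuzzy ratings on them, both encoded as triangular fuzzy numbers.
how_weights = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)]
ratings = [(0.6, 0.8, 1.0), (0.4, 0.6, 0.8)]

fsi = (0.0, 0.0, 0.0)
for w, r in zip(how_weights, ratings):
    fsi = tfn_add(fsi, tfn_mul(w, r))

print("fuzzy suitability index:", fsi, "crisp score:", round(centroid(fsi), 3))
```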

5.
We consider a neo-classical model of optimal economic growth with c.r.r.a. utility in which the traditional deterministic trends representing population growth, technological progress, depreciation and impatience are replaced by Brownian motions with drift. When transformed to ‘intensive’ units, this is equivalent to a stochastic model of optimal saving with diminishing returns to capital. For the intensive model, we give sufficient conditions for optimality of a consumption plan (open-loop control) comprising a finite welfare condition, a martingale condition for shadow prices and a transversality condition as t→∞. We then replace these by conditions for optimality of a plan generated by a consumption function (closed-loop control), i.e. a function expressing log-consumption as a time-invariant, deterministic function of log-capital. Making use of the exponential martingale formula we replace the martingale condition by a non-linear, non-autonomous second-order o.d.e. which an optimal consumption function must satisfy. Economic considerations suggest certain limiting values which the consumption function and its slope should satisfy as log-capital tends to its extreme values, thus defining a two-point boundary value problem (b.v.p.) — or rather, a family of problems, depending on the values of parameters. We prove two theorems showing that a consumption function which solves the appropriate b.v.p. generates an optimal plan. Proofs that a unique solution of each b.v.p. exists are given in a separate paper (Part B).
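Two-point b.v.p.s of this kind are routinely solved by collocation; the sketch below runs scipy's solve_bvp on a placeholder second-order o.d.e. with made-up boundary values, purely to show the shape of the computation, since the abstract does not reproduce the paper's actual equation:

```python
import numpy as np
from scipy.integrate import solve_bvp

def rhs(z, Y):
    # Placeholder right-hand side for H'' = F(z, H, H'); not the paper's o.d.e.
    H, Hp = Y
    return np.vstack([Hp, H - 0.5 * Hp - 1.0])

def bc(Ya, Yb):
    # Illustrative boundary values H(0) = 0.2 and H(5) = 1.0.
    return np.array([Ya[0] - 0.2, Yb[0] - 1.0])

z = np.linspace(0.0, 5.0, 50)
guess = np.vstack([np.linspace(0.2, 1.0, 50), np.full(50, 0.16)])
sol = solve_bvp(rhs, bc, z, guess)
print("converged:", sol.success, "H(2.5) ~", round(float(sol.sol(2.5)[0]), 4))
```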

6.
Due to the complexity of present day supply chains it is important to select the simplest supply chain scheduling decision support system (DSS) which will determine and place orders satisfactorily. We propose to use a generic design framework, termed the explicit filter methodology, to achieve this objective. In doing so we compare the explicit filter approach to the implicit filter approach utilised in previous OR research, the latter focusing on minimising a cost function. Although the eventual results may well be similar with both approaches, it is much clearer to the designer both why and how an ordering system will reduce the Bullwhip effect via the explicit filter approach. The “explicit filter” approach produces a range of DSS designs corresponding to best practice. These may be “mixed and matched” to generate a number of competitive delivery pipelines to suit the specific business scenario.
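The filtering intuition can be made concrete with a toy proportional-controller ordering rule: recovering only a fraction of the inventory gap each period filters high-frequency demand noise out of the order stream, which is precisely a Bullwhip reduction. The sketch below uses assumed demand parameters, not data from the paper:

```python
import random
import statistics as st

def bullwhip_ratio(beta=0.25, periods=2000, seed=7):
    """Order variance over demand variance for a smoothed order-up-to rule:
    order = demand + beta * (target - inventory). Small beta = heavy smoothing."""
    random.seed(seed)
    inventory, target = 0.0, 20.0
    demands, orders = [], []
    for _ in range(periods):
        d = max(0.0, random.gauss(10.0, 2.0))
        inventory -= d
        order = d + beta * (target - inventory)
        inventory += order                      # assume immediate delivery
        demands.append(d)
        orders.append(order)
    return st.variance(orders) / st.variance(demands)

for beta in (1.0, 0.5, 0.25):
    print(f"beta={beta}: bullwhip ratio ~ {bullwhip_ratio(beta):.2f}")
```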

7.
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
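A compact sketch of the machinery (a simplified transcription of the usual g-prior marginal likelihood with centred regressors; the paper's exact constants and intercept treatment differ) that enumerates all candidate models directly instead of running MC3, with g0 = 1/n as one illustrative benchmark-style choice:

```python
import numpy as np
from itertools import combinations

def log_ml(y, X, g):
    # Log marginal likelihood (up to a model-independent constant) of a
    # normal linear model under a Zellner g-prior on the slopes.
    n = len(y)
    yc = y - y.mean()
    tss = yc @ yc
    if X.shape[1] == 0:
        rss, k = tss, 0
    else:
        Xc = X - X.mean(axis=0)
        beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
        rss, k = tss - yc @ Xc @ beta, X.shape[1]
    s = g / (1.0 + g)                       # shrinkage factor
    return 0.5 * k * np.log(s) - 0.5 * (n - 1) * np.log((1 - s) * rss + s * tss)

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)
g0 = 1.0 / n                                # one possible benchmark choice

models = [m for r in range(5) for m in combinations(range(4), r)]
logs = np.array([log_ml(y, X[:, list(m)], g0) for m in models])
probs = np.exp(logs - logs.max())
probs /= probs.sum()
top = max(zip(probs, models))
print("top model:", top[1], "posterior prob ~", round(top[0], 3))
```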

8.
Over the past 40 years extensive research has been conducted in both Europe and North America on the internationalization processes of firms. The present article traces the explicit and implicit impact of the work of the Carnegie School on this research. It shows that the ideas presented in March and Simon [(1958). Organizations. New York: John Wiley & Sons, Inc.] and Cyert and March [(1963). A Behavioral Theory of the Firm. Englewood Cliffs, NJ: Prentice-Hall Inc.] regarding the general approach to managers and decision-making, as well as their concepts of bounded rationality, uncertainty, and limited/problemistic search, all fit very well with empirical findings concerning such internationalization processes. The possible benefit to this research area of a more extensive use of other contributions stemming from the Carnegie School is also discussed.

9.
This paper is devoted to the study of infinite horizon continuous time optimal control problems with incentive compatibility constraints that arise in many economic problems, for instance in defining the second best Pareto optimum for the joint exploitation of a common resource, as in Benhabib and Radner [Benhabib, J., Radner, R., 1992. The joint exploitation of a productive asset: a game theoretic approach. Economic Theory, 2: 155–190]. An incentive compatibility constraint is a constraint on the continuation of the payoff function at every time. We prove that the dynamic programming principle holds, the value function is a viscosity solution of the associated Hamilton–Jacobi–Bellman (HJB) equation, and that it is the minimal supersolution satisfying certain boundary conditions. When the incentive compatibility constraint only depends on the present value of the state variable, we prove existence of optimal strategies, and we show that the problem is equivalent to a state constraints problem in an endogenous state region which depends on the data of the problem. Some economic examples are analyzed.
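For orientation, the unconstrained infinite-horizon discounted problem leads to an HJB equation of the standard form below (generic notation: running payoff f, drift b, diffusion σ, discount rate ρ); the paper's constrained version additionally restricts admissible controls through the continuation-payoff requirement:

```latex
\rho V(x) = \sup_{a \in A} \Big\{ f(x,a) + b(x,a)^{\top} \nabla V(x)
          + \tfrac{1}{2}\,\mathrm{tr}\!\big[\sigma(x,a)\sigma(x,a)^{\top} D^{2}V(x)\big] \Big\}
```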

10.
Linear-quadratic approximation, external habit and targeting rules
We examine the linear-quadratic approximation of nonlinear dynamic stochastic optimization problems. A discrete-time version of Magill [1977a. A local analysis of N-sector capital accumulation under uncertainty. Journal of Economic Theory 15(2), 211–219] is generalized to models with forward-looking variables, paying special attention to second-order conditions. This is the ‘large distortions’ case in the literature. We apply the approach to monetary policy in a DSGE model with external habit in consumption. We then develop a condition for ‘target-implementability’, a concept related to ‘targeting rules’. Finally, we extend the approach to a comparison between cooperative and non-cooperative equilibria in a two-country model and show that the ‘small distortions’ approximation is inappropriate for this exercise.
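The starting point of any such method is the second-order expansion of the period objective around a steady state (standard form in generic notation; the paper's contribution concerns when the resulting LQ problem is a valid approximation with forward-looking variables and large distortions):

```latex
U(x_t,u_t) \approx \bar{U} + U_x^{\top}\tilde{x}_t + U_u^{\top}\tilde{u}_t
  + \tfrac{1}{2}\big( \tilde{x}_t^{\top} U_{xx} \tilde{x}_t
  + 2\,\tilde{x}_t^{\top} U_{xu} \tilde{u}_t
  + \tilde{u}_t^{\top} U_{uu} \tilde{u}_t \big),
\qquad \tilde{x}_t = x_t - \bar{x},\; \tilde{u}_t = u_t - \bar{u}
```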

11.
Assuming differentiable monotonicity and differentiable convexity of utility functions, we show that if y and z are allocations of a pure exchange economy with z optimal and preferred to y by every agent, then there is a trade curve of finite length from y to z. We make no assumption on utility functions designed to ‘keep away from the boundary’. The conclusion need not hold if z is not optimal, unless a special boundary condition is assumed and (l, m) ≠ (2, 2).

12.
Material requirements planning (MRP) is a planning and information system that has widespread application in discrete-parts manufacturing. The purpose of this article is to introduce ideas that can improve the flow of material through complex manufacturing systems operating under MRP, and that can increase the applicability of MRP within diverse manufacturing environments.

MRP models the flow of material by assuming that items flow from work station to work station in the same batches that are used in production. That is, once work starts on a batch of a certain item at a certain work station, the entire batch will be produced before any part of the batch will be transported to the next work station on its routing plan. Clearly, efficiency can be increased if some parallelism can be introduced. The form of parallelism investigated here is overlapping operations.

Overlapping operations occurs when the transportation of partial batches to a downstream work station is allowed while work proceeds to complete the batch at the upstream work station. The potential efficiencies to be gained are the following:
• Reduced work-in-process inventory
• Reduced floor space requirements
• Reduced size of transfer vehicles
Additional costs may accrue through additional cost of transportation of partial batches and through additional costs of control.

Some MRP software vendors provide the data processing capability for overlapping operations. However, the user is given little or no guidance on overlapping percentages or amounts. It is our intent to provide a simple, robust technique to MRP users who would like to overlap operations and gain some or all of the above efficiencies.

An optimal lot-sizing technique is derived by considering a generic two-work-station segment of a manufacturing system. Under the assumptions of constant demand and identical production rates, a cost function that considers setup costs, inventory holding costs and transportation costs is derived. This cost function is minimized subject to the constraint that the production batch is an integer multiple of the transfer batch. We solve for the optimal production batch, the optimal transfer batch, and the integer number relating them. Solutions are obtained as closed-form, easy-to-evaluate formulas.

By introducing more parallelism, overlapping operations can reduce lead time. However, this will not happen without modification of MRP logic to accommodate such reduced lead time. We derive a formula that shows how a significant lead time compression can easily be obtained and implemented in MRP.

We consider an example to illustrate the application of the technique on typical data from the electronics industry. The outcome showed a cost savings of approximately 22.5% over the standard MRP approach.

Overlapping operations extends the applicability of MRP to an increasing number of situations that are not modeled faithfully by conventional MRP logic. Three such situations that occur often are the following:
• Limited size of transfer vehicles dictates that several transfers should be planned.
• Lead time requirements prohibit nonoverlapped operations.
Our analysis suggests how to accommodate these difficult practical situations into MRP.

Overlapping operations in material requirements planning provides an enhancement that allows wider applicability, shortened lead times, and lower total costs. It may be applied selectively to any two work stations where it is deemed appropriate. Due to the structure of the cost function, it is possible to make the transfer lot-sizing decisions independent of the production lot-sizing decisions. Therefore, significant improvements can be made through overlapping with minimum disruption to the existing MRP system machinery. It is our conviction that overlapping operations is an important concept that can and will impact MRP. We suggest the approach presented here as a systematic way to implement overlapping.
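The structure of the trade-off can be sketched numerically (a stylized cost function with assumed parameters, not the paper's exact derivation): with production batch Q constrained to an integer multiple n of the transfer batch q, the cost for fixed n is convex in q with an EOQ-like minimiser, so a short search over n recovers the optimal pair:

```python
from math import sqrt

def best_batches(D=5000, S=120.0, ct=8.0, h_up=0.6, h_dn=0.3, n_max=20):
    """Stylized annual cost: setups S*D/Q + transfers ct*D/q + holding
    h_up*Q/2 + h_dn*q/2, with Q = n*q. All parameter values are assumptions."""
    best = None
    for n in range(1, n_max + 1):
        q = sqrt(2.0 * D * (S / n + ct) / (h_up * n + h_dn))   # EOQ-like point
        cost = S * D / (n * q) + ct * D / q + (h_up * n + h_dn) * q / 2.0
        if best is None or cost < best[0]:
            best = (cost, n, q)
    cost, n, q = best
    return n, q, n * q, cost

n, q, Q, cost = best_batches()
print(f"n={n}, transfer batch q~{q:.0f}, production batch Q~{Q:.0f}, cost~{cost:.0f}")
```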

13.
An enterprise is owned jointly by m agents, the ith agent's share being θi > 0 where ∑iθi=1. The enterprise is able to produce some non-negative n-vector x of goods where x lies in some convex production set X. An operation consists of choosing a vector from X and distributing it among the agents. The problem is to find an operation such that the value of the ith agent's bundle measured in a given price system is proportional to θi and such that the operation is (Pareto) optimal with respect to the agents' preferences. It is shown under standard assumptions that operations which are both optimal and proportional always exist. It is also shown that these operations are unique if (a) X is given by a separable production function, and (b) X represents production of a single good over n time periods.

14.
15.
I propose a technique, counting ‘equations’ and ‘unknowns’, for determining when the posterior distributions of the parameters of a linear regression process converge to their true values. This is applied to examples and to the infinite-horizon optimal control of this linear regression process with learning, and in particular to the problem of a monopolist seeking to maximize profits with an unknown demand curve. Such a monopolist faces a tradeoff between choosing an action to maximize the current-period reward and to maximize the information value of that action. I use the above technique to determine the monopolist's limiting behavior and to determine whether in the limit it learns the true parameter values of the demand curve.
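The learning side of the problem is conjugate Bayesian regression; the sketch below updates beliefs about a linear demand curve q = a - b*p + e under exogenously dithered prices (made-up numbers). The paper's question is precisely what happens when prices are instead chosen optimally and may fail to generate enough variation for this convergence:

```python
import numpy as np

def posterior_update(mean, prec, X, y, noise_var=1.0):
    # Conjugate update for linear regression with known noise variance:
    # posterior precision and posterior mean of the coefficient vector.
    post_prec = prec + X.T @ X / noise_var
    post_mean = np.linalg.solve(post_prec, prec @ mean + X.T @ y / noise_var)
    return post_mean, post_prec

rng = np.random.default_rng(3)
a_true, b_true = 10.0, 0.8
mean, prec = np.array([5.0, 0.5]), np.eye(2) * 0.1   # vague prior on (a, b)
for _ in range(200):
    p = 4.0 + rng.uniform(-1, 1)                     # exogenous experimentation
    q = a_true - b_true * p + rng.normal()
    mean, prec = posterior_update(mean, prec, np.array([[1.0, -p]]), np.array([q]))
print("posterior mean of (a, b):", np.round(mean, 2))
```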

16.
Parameter estimation under model uncertainty is a difficult and fundamental issue in econometrics. This paper compares the performance of various model averaging techniques. In particular, it contrasts Bayesian model averaging (BMA) — currently one of the standard methods used in growth empirics — with a new method called weighted-average least squares (WALS). The new method has two major advantages over BMA: its computational burden is trivial and it is based on a transparent definition of prior ignorance. The theory is applied to and sheds new light on growth empirics where a high degree of model uncertainty is typically present.
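Both methods are instances of the generic model-averaged estimator below (standard notation, not specific to this paper). They differ in how the weights are formed: BMA sets w_j equal to the posterior probability of model M_j, which in general means visiting up to 2^k models, whereas WALS constructs the weights so that computation grows only linearly in the number of regressors k:

```latex
\hat{\beta}_{\mathrm{MA}} = \sum_{j=1}^{J} w_j\,\hat{\beta}_j,
\qquad w_j \ge 0,\quad \sum_{j=1}^{J} w_j = 1
```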

17.
In this paper identification problems in linear structural models are discussed. A special but practically important form of these problems, structural identifiability, is defined. Thereafter it is shown that this problem can be studied — without empirical data — by a set system defined on the set of free parameters of a model. This set system is the identification structure, or the matroid, of the model. It gives insight into the dependence of unidentified parameters. The advantages of this approach in research planning and in correcting nonidentifiable models are demonstrated. Finally we introduce the FORTRAN program LISRAN, which is designed to analyze a given linear structural model and to provide its identification structure.
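A generic numerical counterpart of such an analysis is a local identifiability check via the rank of the Jacobian of the parameter-to-moments map (a sketch of the general idea, not LISRAN's algorithm). In the made-up example below, theta1 and theta2 enter the observable moments only through their product, so the rank deficiency flags them as not separately identified:

```python
import sympy as sp

t1, t2, t3 = sp.symbols('theta1 theta2 theta3')
# Hypothetical map from free parameters to observable moments.
moments = sp.Matrix([t1 * t2, t1 * t2 + t3, t3 ** 2])
J = moments.jacobian([t1, t2, t3])
print("Jacobian rank:", J.rank(), "of 3 parameters")  # rank 2 -> underidentified
```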

18.
Consider the design problem for the approximately linear model with serially correlated errors. The correlation structure is the qth-order moving average process, MA(q), with emphasis on q = 1, 2. The optimal design is derived using a Bayesian approach. The Bayesian designs derived with various priors are compared with the classical designs with respect to some specific correlated structures. The results show that any prior knowledge about the sign of the MA(q) process parameters leads to designs that are considerably more efficient than the classical ones based on homoscedastic assumptions.
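The role of the sign of the MA parameter can be seen directly in the information matrix X' V^{-1} X of a straight-line fit with MA(1) errors; the sketch below (a generic D-optimality comparison with assumed designs, not the paper's Bayesian derivation) shows how the log-determinant of a given ordered design changes with the sign of theta:

```python
import numpy as np

def log_det_info(design, theta):
    # log det of X' V^-1 X for y = b0 + b1*x with MA(1) errors:
    # Var(e_t) = 1 + theta^2, Cov(e_t, e_{t+1}) = theta (unit innovations).
    x = np.asarray(design, dtype=float)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    V = (1 + theta ** 2) * np.eye(n) + theta * (np.eye(n, k=1) + np.eye(n, k=-1))
    M = X.T @ np.linalg.solve(V, X)
    return np.linalg.slogdet(M)[1]

spread = np.linspace(-1, 1, 8)                        # equally spaced design
extremes = np.array([-1, -1, -1, -1, 1, 1, 1, 1])     # replicated endpoints
for theta in (-0.5, 0.5):
    print(f"theta={theta:+.1f}: spread {log_det_info(spread, theta):.3f}, "
          f"extremes {log_det_info(extremes, theta):.3f}")
```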

19.
In tackling administrative reform and in the hope of improving the effective allocation of resources, most European governments have shown a growing interest in adopting private sector management models in the public administration. The assumption underlying this paper is that the decisive variables in the different national contexts have to do with the relationships between the central and the peripheral administrative levels, and the way in which administrative actors at the two levels interpret their roles and participate in the reform process. The paper examines the case of the reform of the Italian Ministry of Finance. In seeking to improve its performance and the services it provides, the ministry reform is intended to introduce a management system in which the key concepts are the planning, programming and control of administrative action and results. According to reform rhetoric, shaping a new class of administrative managers at the local level is the crux of the question. However, research results hint that the “creation” of this new local executive staff is yet to be completed. The working hypothesis advanced is that this is due to local executives’ lack of confidence in the “system”, inasmuch as the reform process has so far been characterised by a tendency to give them responsibility without autonomy and autonomy without control. The greater their lack of trust, the lesser their willingness to risk the consequences of failure and the greater their tendency to stick to defensive positions and to return to previous “bureaucratic” conceptions and ways of operating.

20.
Development of partnership with suppliers is widely recognised today as a potent tool for supply chain improvement. To develop an effective partnership, it is necessary to have a small supply base, which in turn requires an effort to reduce the supply base to a manageable level. Despite its overwhelming importance, models of supply base reduction are rare. Supplier sorting methods, used for pre-selection of suppliers and sometimes seen as methods for supply base reduction, have limitations ranging from (1) requirement of an exhaustive database of historical information (case-based reasoning), (2) inability to predefine the number of elements in a cluster (cluster analysis) and (3) inability to identify suppliers who are both highly capable as well as high performers (data envelopment analysis). In the present work, we develop a systematic framework for carrying out the supply base reduction process. The study assumes two important dimensions of suppliers—performance and capability. Performance of a supplier represents short-term effects on the achievement of supply chain objectives while supplier capability indicates long-term effects. Many of the performance and capability factors are imprecise in nature. In order to account for the imprecision involved in numerous subjective characteristics of suppliers, we use a fuzzy set approach to measure the imprecision of these factors and rank a potential list of suppliers against their performance and capability. We then display their ranks in a ‘capability–performance matrix’ that helps a decision maker arrange the suppliers in decreasing order of preference. The desired numbers of suppliers are finally selected on the basis of this ordered list. The suggested framework will be of immense help to practising managers in reducing the supply base—a prerequisite for building a strong supplier partnership and developing an effective supply chain.
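A toy version of the final classification step (hypothetical fuzzy scores and an arbitrary threshold; the paper's ranking procedure is richer) that defuzzifies triangular scores on the two dimensions and places each supplier in a cell of the capability-performance matrix:

```python
def centroid(t):
    # Centroid defuzzification of a triangular fuzzy number (l, m, u).
    return sum(t) / 3.0

# Hypothetical fuzzy scores on the paper's two supplier dimensions.
suppliers = {
    "S1": {"performance": (0.7, 0.8, 0.9), "capability": (0.6, 0.7, 0.8)},
    "S2": {"performance": (0.8, 0.9, 1.0), "capability": (0.3, 0.4, 0.5)},
    "S3": {"performance": (0.3, 0.4, 0.5), "capability": (0.7, 0.8, 0.9)},
}

cut = 0.6                       # arbitrary threshold splitting the matrix cells
for name, s in sorted(suppliers.items()):
    p, c = centroid(s["performance"]), centroid(s["capability"])
    cell = (("high" if c >= cut else "low") + "-capability / "
            + ("high" if p >= cut else "low") + "-performance")
    print(f"{name}: capability={c:.2f}, performance={p:.2f} -> {cell}")
```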

