Similar articles
20 similar articles found (search time: 31 ms)
1.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case in which the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.
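The corrected OLS idea mentioned in this abstract can be sketched in a few lines: fit OLS, then shift the intercept up by the largest residual so that every observation lies on or below the frontier and the one-sided inefficiencies are non-negative. This is an illustrative sketch, not the paper's exact estimator; the function name `cols_frontier` is our own.

```python
import numpy as np

def cols_frontier(X, y):
    """Corrected OLS (COLS) sketch for a deterministic production frontier.

    Fit y = a + X b + e by OLS, then shift the intercept upward by the
    largest residual so every observation lies on or below the frontier.
    Returns the corrected coefficients and one-sided inefficiencies u >= 0.
    """
    n = len(y)
    Z = np.column_stack([np.ones(n), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    beta = beta.copy()
    beta[0] += resid.max()                      # corrected intercept
    u = -(y - Z @ beta)                         # u_i >= 0, zero for the best unit
    return beta, u
```

By construction the most efficient observation sits exactly on the shifted frontier (u = 0), which is why COLS alone gives no distribution theory for the inefficiencies — the point the abstract's inference results address.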

2.
This paper investigates the relationship between secure implementability (Saijo et al. in Theor Econ 2:203–229, 2007) and full implementability in truthful strategies (Nicolò in Rev Econ Des 8:373–382, 2004). Although secure implementability is in general stronger than full implementability in truthful strategies, this paper shows that both properties are equivalent under the social choice function that satisfies non-wastefulness (Li and Xue in Econ Theory, doi:10.1007/s00199-012-0724-0) in pure exchange economies with Leontief utility functions.

3.
This paper shows how scale efficiency can be measured from an arbitrary parametric hyperbolic distance function with multiple outputs and multiple inputs. It extends the methods introduced by Ray (J Product Anal 11:183–194, 1998) and by Balk (J Product Anal 15:159–183, 2001) and Ray (2003), which measure scale efficiency from a single-output multi-input distance function and from a multi-output multi-input distance function, respectively. The method developed in the present paper differs from Ray's and Balk's in that it allows for simultaneous contraction of inputs and expansion of outputs. Theorems applicable to an arbitrary parametric hyperbolic distance function are introduced first, and then their uses in measuring scale efficiency are illustrated with the translog functional form.

4.
In response to a question raised by Knox Lovell, we develop a method for estimating directional output distance functions with endogenously determined direction vectors based on exogenous normalization constraints. This is reminiscent of the Russell measure proposed by Färe and Lovell (J Econ Theory 19:150–162, 1978). Moreover, it is related to the slacks-based directional distance function introduced by Färe and Grosskopf (Eur J Oper Res 200:320–322, 2010a; Eur J Oper Res 206:702, 2010b). Here we show how to use the slacks-based function to estimate the optimal directions.

5.
While the high prevalence of mental illness in workplaces is more readily documented in the literature than it was ten or so years ago, it continues to remain largely within the medical and health sciences fields. This may account for the lack of information about mental illness in workplaces (Dewa et al. Healthcare Papers 5:12–25, 2004) among operational managers and human resource departments, even though such illnesses affect on average 17 % to 20 % of employees in any 12-month period (MHCC 2012; SAMHSA 2010; ABS 2007). As symptoms of mental illness have the capacity to impact negatively on employee work performance and/or attendance, the ramifications for employee performance management systems can be significant, particularly when employees choose to deliberately conceal their illness, such that any work concerns appear to derive from issues other than illness (Dewa et al. Healthcare Papers 5:12–25, 2004; De Lorenzo 2003). When employee non-disclosure of a mental illness impacts negatively in the workplace, it presents a very challenging issue in relation to performance management for both operational managers and human resource staff. Without documented medical evidence to show that impaired work performance and/or attendance is attributable to a mental illness, the issue of performance management arises. Currently, when there is no documented medical illness, performance management policies are often brought into place to improve employee performance and/or attendance by establishing achievable employee targets. Yet, given that in any twelve-month period at least a fifth of the workforce sustains a mental illness (MHCC 2012; SAMHSA 2010; ABS 2007), and that non-disclosure is significant (Barney et al. BMC Public Health 9:1–11, 2009; Munir et al. Social Science & Medicine 60:1397–1407, 2005), such targets may be unachievable for employees with a hidden mental illness.
It is for these reasons that this paper reviews the incidence of mental illness in Western economies, its costs, and the reasons why it is often concealed, and proposes the adoption of what are termed 'Buffer Stage' policies as an added tool that organisations may wish to utilise in the management of hidden medical illnesses such as mental illness.

6.
We consider multiple-principal multiple-agent models of moral hazard: principals compete through mechanisms in the presence of agents who take unobservable actions. In this context, we provide a rationale for restricting principals to make use of simple mechanisms, which correspond to direct mechanisms in the standard framework of Myerson (J Math Econ 10:67–81, 1982). Our results complement those of Han (J Econ Theory 137(1):610–626, 2007), who analyzes a complete information setting where agents' actions are fully contractible.

7.
We study a keyword auction model where bidders have constrained budgets. In the absence of budget constraints, Edelman et al. (Am Econ Rev 97(1):242–259, 2007) and Varian (Int J Ind Organ 25(6):1163–1178, 2007) analyze "locally envy-free equilibrium" or "symmetric Nash equilibrium" bidding strategies in generalized second-price auctions. However, bidders often have to set their daily budgets when they participate in an auction; once a bidder's payment reaches his budget, he drops out of the auction. This raises an important strategic issue that has been overlooked in the previous literature: bidders may change their bids to inflict higher prices on their competitors, because under the generalized second-price rule the per-click price paid by a bidder is the next highest bid. We provide budget thresholds under which the equilibria analyzed in Edelman et al. (Am Econ Rev 97(1):242–259, 2007) and Varian (Int J Ind Organ 25(6):1163–1178, 2007) are sustained as "equilibria with budget constraints" in our setting. We then consider a simple environment with one position and two bidders and show that a search engine's revenue with budget constraints may be larger than its revenue without budget constraints.
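The generalized second-price pricing rule described in this abstract is simple to state in code: rank bidders by bid, and the bidder in each position pays the next-highest bid per click. The sketch below illustrates only that rule (budget dynamics are omitted); the function name `gsp_prices` is our own.

```python
def gsp_prices(bids, n_positions):
    """Generalized second-price sketch: rank bidders by bid; the bidder in
    position k pays the (k+1)-th highest bid per click. Returns a list of
    (bidder_index, price_per_click) pairs, one per filled position."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    alloc = []
    for k in range(min(n_positions, len(bids))):
        # price is the next bid down the ranking (0 if no lower bidder)
        next_bid = bids[order[k + 1]] if k + 1 < len(bids) else 0.0
        alloc.append((order[k], next_bid))
    return alloc

# e.g. gsp_prices([5, 1, 3], 2) -> [(0, 3), (2, 1)]
```

Because each bidder's price is set by the bid just below, a budget-constrained rival can raise its bid (without winning a higher slot) purely to raise a competitor's per-click price — the strategic issue the abstract highlights.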

8.
Classical optimal strategies are notorious for producing remarkably volatile portfolio weights over time when applied with parameters estimated from data. This is predominantly explained by the difficulty of estimating expected returns accurately. In Lindberg (Bernoulli 15:464–474, 2009), a new parameterization of the drift rates was proposed with the aim of circumventing this difficulty, and a continuous time mean–variance optimal portfolio problem was solved. This approach was further developed in Alp and Korn (Decis Econ Finance 34:21–40, 2011a) to a jump-diffusion setting. In the present paper, we solve a different portfolio problem under the market parameterization in Lindberg (Bernoulli 15:464–474, 2009). Here, the admissible investment strategies are given as the amounts of money to be held in each stock and are allowed to be adapted stochastic processes. In the references above, the admissible strategies are the deterministic and bounded fractions of the total wealth. The optimal strategy we derive is not the same as in Lindberg (Bernoulli 15:464–474, 2009), but it can still be viewed as investing equally in each of the n Brownian motions in the model. As a consequence of the problem assumptions, the optimal final wealth can become non-negative. The present portfolio problem is also solved in Alp and Korn (Submitted, 2011b), using the L2-projection approach of Schweizer (Ann Probab 22:1536–1575, 1995). However, our method of proof is direct and much more accessible.

9.
Two recent meta-analyses use variants of the Baily et al. (Brookings Papers Econ Act Microecon 1:187–267, 1992) (BHC) decompositions to ask whether recent robust growth in aggregate labor productivity (ALP) across 25 countries is due to lower barriers to input reallocation. They find weak gains from measured reallocation and strong within-plant productivity gains. We show these findings may arise because BHC indices decompose ALP growth using plant-level output-per-labor (OL) as a proxy for the marginal product of labor and changes in OL as a proxy for changes in plant-level productivity. We provide simple examples to show that (1) reallocation growth from labor should track marginal changes in labor weighted by the marginal product of labor, (2) BHC reallocation growth can be positively correlated, negatively correlated, or uncorrelated with actual growth arising from the reallocation of inputs, and (3) BHC indices can mistake growth from reallocation for growth from productivity, principally because OL is a perfect index of neither marginal products nor plant-level productivity. We then turn to micro-level data from Chile, Colombia, and Slovenia, and we find for the first two that BHC indices report weak or negative growth from labor reallocation. Using the reallocation definition based on marginal products, we find a positive and robust role for labor reallocation in all three countries and a reduced role of plant-level technical efficiency in growth. We close by exploring potential corrections to the BHC decompositions, but here we have limited success.
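A BHC-style decomposition of the kind discussed here splits the change in share-weighted aggregate productivity into a within-plant term and a reallocation (share-shift) term. The sketch below uses the midpoint-weight variant, under which the two terms sum exactly to the total ALP change; it is an illustration of the accounting identity, not the paper's preferred marginal-product-based definition, and the function name is our own.

```python
def bhc_decompose(shares0, prod0, shares1, prod1):
    """BHC-style decomposition sketch (midpoint-weight variant).

    ALP_t = sum_i s_it * p_it, with labor shares s and output-per-worker p.
    Splits the change in ALP into a within-plant term (share-weighted
    productivity changes) and a reallocation term (productivity-weighted
    share changes). With midpoint weights the two terms sum exactly to
    the total change in ALP.
    """
    within = sum(0.5 * (s0 + s1) * (p1 - p0)
                 for s0, s1, p0, p1 in zip(shares0, shares1, prod0, prod1))
    realloc = sum(0.5 * (p0 + p1) * (s1 - s0)
                  for s0, s1, p0, p1 in zip(shares0, shares1, prod0, prod1))
    return within, realloc
```

The paper's critique applies to exactly this construction: the p terms are output-per-labor levels, so the reallocation term need not track growth from moving labor toward plants with high *marginal* products.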

10.
The economy of Fiji has witnessed a pervasive role of information and communications technology (ICT) on one hand and an increase in lifestyle diseases on the other. The government, however, has put in place policies to exploit the gains from ICT and has increased budget allocations to combat some of the burgeoning health problems in its effort to modernize the economy. In this paper, we explore the short-run and long-run effects of health expenditure and ICT on per worker output within the augmented Solow framework (Solow in Q J Econ 70:65–94, 1956) and the autoregressive distributed lag bounds procedure (Pesaran et al. in J Appl Econ 16:289–326, 2001) over the period 1979–2010. The results show that health expenditure has a positive and significant effect in the short-run only (0.11 %). ICT has a positive and significant effect both in the short-run (0.90 %) and the long-run (0.62 %). Further, the Granger-causality tests reveal a strong bi-directional causality between health expenditure and per worker output, a unidirectional strong causation from capital per worker to ICT development, and a weak causation from ICT to per worker output.

11.
This paper carries out an empirical investigation into the contribution of rural transformation, which can produce efficiency gains over and above those associated with technical progress, to total factor productivity in China during the post-reform period 1980–2010. For the first time for China, the roles of rural transformation and technical progress are examined whilst structural breaks are taken into account. We employ the methods of Bai and Perron (Econometrica 66:47–68, 1998; J Appl Econom 18:1–22, 2003a; Econom J 6:72–78, 2003b), which allow for multiple structural breaks at unknown dates and can be applied to both pure and partial structural changes. We also evaluate the robustness of our results by employing alternative production functions and two capital series. Two structural breaks, near the Tiananmen Square incident in 1989 and the implementation of further reforms and opening-up measures in 1995, were identified for both capital series. We found the contribution of rural transformation to total factor productivity to be significant and positive across all regimes. However, its importance to the growth of total factor productivity has been declining over time, while that of technical progress has been increasing.

12.
The purpose of this note is twofold. First, we survey the study of the percolation phase transition on the Hamming hypercube $\{0,1\}^{m}$ obtained in the series of papers (Borgs et al. in Random Struct Algorithms 27:137–184, 2005; Borgs et al. in Ann Probab 33:1886–1944, 2005; Borgs et al. in Combinatorica 26:395–410, 2006; van der Hofstad and Nachmias in Hypercube percolation, Preprint 2012). Secondly, we explain how this study can be performed without the use of the so-called "lace expansion" technique. To that aim, we provide a novel simple proof that the triangle condition holds at the critical probability.

13.
Qiqing Yu, Yuting Hsu, Kai Yu. Metrika 2014, 77(8):995–1011
The non-parametric likelihood L(F) for censored data, including univariate or multivariate right-censored, doubly-censored, interval-censored, or masked competing risks data, was proposed by Peto (Appl Stat 22:86–91, 1973). It does not involve censoring distributions. In the literature, several noninformative conditions have been proposed to justify L(F) so that the GMLE can be consistent (see, for example, Self and Grossman in Biometrics 42:521–530, 1986, or Oller et al. in Can J Stat 32:315–326, 2004). We present the necessary and sufficient (N&S) condition so that $L(F)$ is equivalent to the full likelihood under the non-parametric set-up. The statement is false under the parametric set-up. Our condition is slightly different from the noninformative conditions in the literature. We present two applications to our cancer research data that satisfy the N&S condition but have dependent censoring.

14.
This paper investigates the existence of heterogeneous technologies in the US commercial banking industry through the nondynamic panel threshold effects estimation technique proposed by Hansen (Econometrica 64:413–430, 1999; Econometrica 68:575–603, 2000a). We employ total assets as the threshold variable, which is typically considered a proxy for a bank's size in the banking literature. We modify the threshold effects model to allow for time-varying effects, wherein these are modeled by a time polynomial of degree two as in the Cornwell et al. (J Econom 46:185–200, 1990) model. Threshold effects estimation allows us to sort banks into discrete groups based on their size in a structural and consistent manner. We determine seven such distinct technology-groups within which banks are allowed to share the same technology parameters. We provide estimates of individual and group efficiency scores, as well as of returns to scale and measures of technological change for each group. The presence of the threshold(s) is tested via the bootstrap procedure outlined in Hansen (Econometrica 64:413–430, 1999), and the relationship between bank size and efficiency ratios is investigated.
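The core of Hansen-style threshold estimation can be sketched as a grid search: for each candidate threshold of the sorting variable (here, bank size), fit separate regressions on the two regimes and keep the split that minimizes the combined sum of squared residuals. This is a minimal single-threshold sketch, not the paper's panel estimator with time-varying effects; the function name `threshold_split` is our own.

```python
import numpy as np

def threshold_split(q, X, y, trim=0.15):
    """Hansen-style threshold sketch: for each candidate threshold gamma in
    the trimmed support of q, fit separate OLS regressions on the regimes
    {q <= gamma} and {q > gamma}, and return the gamma minimizing the
    combined sum of squared residuals."""
    def ssr(mask):
        Z = np.column_stack([np.ones(mask.sum()), X[mask]])
        beta, *_ = np.linalg.lstsq(Z, y[mask], rcond=None)
        r = y[mask] - Z @ beta
        return float(r @ r)

    # trim the tails so both regimes always contain enough observations
    grid = np.quantile(q, np.linspace(trim, 1 - trim, 50))
    return min(grid, key=lambda g: ssr(q <= g) + ssr(q > g))
```

Inference on the estimated threshold is non-standard, which is why the abstract points to Hansen's bootstrap procedure rather than conventional t-tests.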

15.
Individuals living in society are bound together by a social network and, in many social and economic situations, individuals learn by observing the behavior of others in their local environment. This process is called social learning. Learning in incomplete networks, where different individuals have different information, is especially challenging: because of the lack of common knowledge, individuals must draw inferences about the actions others have observed, as well as about their private information. This paper reports an experimental investigation of learning in three-person networks and uses the theoretical framework of Gale and Kariv (Games Econ Behav 45:329–346, 2003) to interpret the data generated by the experiments. The family of three-person networks includes several non-trivial architectures, each of which gives rise to its own distinctive learning patterns. To test the usefulness of the theory in interpreting the data, we adapt the Quantal Response Equilibrium (QRE) model of McKelvey and Palfrey (Games Econ Behav 10:6–38, 1995; Exp Econ 1:9–41, 1998). We find that the theory can account for the behavior observed in the laboratory in a variety of networks and informational settings. This provides important support for the use of QRE to interpret experimental data.

16.
This paper proposes a new two-step stochastic frontier approach to estimate technical efficiency (TE) scores for firms in different groups adopting distinct technologies. Analogous to Battese et al. (J Prod Anal 21:91–103, 2004), the metafrontier production function allows for calculating comparable TE measures, which can be decomposed into group-specific TE measures and technology gap ratios. The proposed approach differs from Battese et al. (J Prod Anal 21:91–103, 2004) and O'Donnell et al. (Empir Econ 34:231–255, 2008) mainly in the second step, where a stochastic frontier analysis model is formulated and applied to obtain the estimates of the metafrontier, instead of relying on programming techniques. The so-derived estimators have desirable statistical properties and enable statistical inferences to be drawn. While the within-group variation in firms' technical efficiencies is frequently assumed to be associated with firm-specific exogenous variables, the between-group variation in technology gaps can be specified as a function of some exogenous variables to take account of group-specific environmental differences. Two empirical applications are illustrated and the results appear to support the use of our model.

17.
This paper considers the provision of some important municipal services and applies the non-parametric double-bootstrap model of Simar and Wilson (J Econ 136(1):31–64, 2007), based on a truncated regression, to estimate the effect of a group of relevant factors, which include the political sign of the governing party and the type of management, on robust DEA (Data Envelopment Analysis) estimates. Previous conditions, like separability, must hold for meaningful first-stage efficiency estimates and second-stage regressions. After some confusion in the literature, Simar and Wilson (J Prod Anal 36(2):205–218, 2011b) clarify that their work of 2007 actually defines a statistical model where truncated (but not censored, i.e., Tobit, not Ordinary Least Squares) regression yields a consistent estimation of model features. They demonstrate that conventional, likelihood-based approaches to inference are invalid, and they develop a bootstrap approach that yields valid inference in second-stage regressions when these are appropriate. The results reveal a significant relation between efficiency and all the variables analysed and that municipalities governed by progressive parties are more efficient.

18.
Social scientists often consider multiple empirical models of the same process. When these models are parametric and non-nested, the null hypothesis that two models fit the data equally well is commonly tested using methods introduced by Vuong (Econometrica 57(2):307–333, 1989) and Clarke (Am J Political Sci 45(3):724–744, 2001; J Confl Resolut 47(1):72–93, 2003; Political Anal 15(3):347–363, 2007). The objective of each is to compare the Kullback–Leibler Divergence (KLD) of the two models from the true model that generated the data. Here we show that both of these tests are based upon a biased estimator of the KLD, the individual log-likelihood contributions, and that the Clarke test is not proven to be consistent for the difference in KLDs. As a solution, we derive a test based upon cross-validated log-likelihood contributions, which represent an unbiased KLD estimate. We demonstrate the CVDM test's superior performance via simulation, then apply it to two empirical examples from political science. We find that the test's selection can diverge from those of the Vuong and Clarke tests and that this can ultimately lead to differences in substantive conclusions.
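The Vuong statistic that this abstract takes as its starting point is built from exactly the per-observation log-likelihood contributions it critiques: form the differences d_i between the two models' contributions and standardize their mean. A minimal sketch (the classical statistic, not the paper's cross-validated correction; the function name is our own):

```python
import math

def vuong_statistic(loglik_a, loglik_b):
    """Vuong-style test sketch for non-nested models.

    Given per-observation log-likelihood contributions of models A and B,
    form d_i = ll_a_i - ll_b_i and return z = sqrt(n) * mean(d) / sd(d).
    Large positive z favours A, large negative z favours B; |z| below the
    normal critical value (about 1.96 at 5%) means no significant
    difference in fit.
    """
    n = len(loglik_a)
    d = [a - b for a, b in zip(loglik_a, loglik_b)]
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / n
    return math.sqrt(n) * mean / math.sqrt(var)
```

The paper's point is that in-sample contributions make this a biased estimate of the KLD difference; its remedy is to compute the contributions on held-out (cross-validated) observations instead.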

19.
Qingming Zou, Zhongyi Zhu. Metrika 2014, 77(2):225–246
The single-index model is an important tool in multivariate nonparametric regression. This paper deals with M-estimators for the single-index model. Unlike the existing M-estimators for the single-index model, the unknown link function is approximated by a B-spline, and the M-estimators for the parameter and the nonparametric component are obtained in one step. The proposed M-estimator of the unknown function is shown to attain the optimal global rate of convergence for nonparametric regression established by Stone (Ann Stat 8:1348–1360, 1980; Ann Stat 10:1040–1053, 1982), and the M-estimator of the parameter is $\sqrt{n}$-consistent and asymptotically normal. A small-sample simulation study shows that the M-estimators proposed in this paper are robust. An application to real data illustrates the estimator's usefulness.

20.
Based on the seminal paper of Farrell (J R Stat Soc Ser A (General) 120(3):253–290, 1957), researchers have developed several methods for measuring efficiency. Nowadays, the most prominent representatives are nonparametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA), both introduced in the late 1970s. Researchers have been attempting to develop a method which combines the virtues, both nonparametric and stochastic, of these "oldies". The recently introduced Stochastic non-smooth envelopment of data (StoNED) by Kuosmanen and Kortelainen (J Prod Anal 38(1):11–28, 2012) is such a promising method. This paper compares the StoNED method with the two "oldies" DEA and SFA and extends the initial Monte Carlo simulation of Kuosmanen and Kortelainen (J Prod Anal 38(1):11–28, 2012) in several directions. We show, among other things, that in scenarios without noise the rivalry is still between the "oldies", while in noisy scenarios the nonparametric StoNED PL now constitutes a promising alternative to the SFA ML.
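The DEA side of the comparison in this abstract reduces, for each decision-making unit, to a small linear program. Below is a hedged sketch of the standard input-oriented CRS (CCR) model using SciPy's `linprog`; it is a textbook formulation for illustration, not the simulation setup of the paper, and the function name is our own.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CRS (CCR) DEA sketch via linear programming.

    X: (n, m) inputs, Y: (n, s) outputs for n DMUs. For each DMU o solve
      min theta  s.t.  sum_j lam_j X_j <= theta * X_o,
                       sum_j lam_j Y_j >= Y_o,  lam >= 0.
    Returns the efficiency scores theta (1 = efficient).
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision vector: [theta, lam_1, ..., lam_n]
        c = np.r_[1.0, np.zeros(n)]
        A_inputs = np.hstack([-X[o:o + 1].T, X.T])       # lam'X - theta*X_o <= 0
        A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])  # -lam'Y <= -Y_o
        A_ub = np.vstack([A_inputs, A_outputs])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.fun)
    return np.array(scores)
```

Because the envelopment is deterministic, any noise in the data is absorbed into the efficiency scores, which is precisely the weakness the stochastic SFA and StoNED approaches address.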


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号