Similar Documents
20 similar documents found.
1.
Dealing with weighted additive models in Data Envelopment Analysis guarantees, among other desirable properties, that any projection of an inefficient unit belongs to the strongly efficient frontier. Recently, constant returns to scale (CRS) range-bounded models have been introduced for defining a new additive-type efficiency measure (see Cooper et al. in J Prod Anal 35(2):85–94, 2011). This paper extends that earlier work, considering a more general setting. In particular, we show that under free disposability of inputs and outputs, CRS bounded additive models require a double set of slacks. The second set of slacks allows us to properly characterize all the Pareto-efficient points associated with the bounded technology. We further introduce the CRS partially-bounded additive models.
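As a concrete illustration of the additive structure, the following sketch solves a plain CRS weighted additive model as a linear program with scipy. The data, slack weights, and evaluated unit are invented for illustration, and the range bounds of the bounded variants discussed above are omitted.

```python
# A minimal sketch of a CRS weighted additive DEA model as an LP.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])            # inputs, shape (m, n)
Y = np.array([[1.0, 3.0, 2.0, 4.0]])            # outputs, shape (s, n)
m, n = X.shape
s = Y.shape[0]
k = 0                                           # unit under evaluation
wx, wy = np.ones(m), np.ones(s)                 # illustrative slack weights

# Decision vector: [lambda (n), input slacks (m), output slacks (s)].
c = np.concatenate([np.zeros(n), -wx, -wy])     # linprog minimizes, so negate
A_eq = np.block([
    [X, np.eye(m), np.zeros((m, s))],           # X @ lam + s_x = x_k
    [Y, np.zeros((s, m)), -np.eye(s)],          # Y @ lam - s_y = y_k
])
b_eq = np.concatenate([X[:, k], Y[:, k]])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m + s))
print("optimal weighted slack sum:", -res.fun)  # zero iff unit k is Pareto-efficient
```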

2.
Variable selection for additive partially linear models with measurement error is considered. Using the backfitting technique, we first propose a variable selection procedure for the parametric components based on smoothly clipped absolute deviation (SCAD) penalization, and one-step sparse estimates for the parametric components are also presented. The resulting estimates are asymptotically normal and enjoy an oracle property. Two-stage backfitting estimators are then presented for the nonparametric components using the local linear method; the structures of the asymptotic biases and covariances of the proposed estimators are the same as those in the partially linear model with measurement error. The finite-sample performance of the proposed procedures is illustrated by simulation studies.
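For reference, the SCAD penalty underlying the selection procedure has a standard closed form; the sketch below implements it, with the usual illustrative tuning constants (a = 3.7), not values taken from the paper.

```python
# A minimal sketch of the SCAD penalty, evaluated elementwise.
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Smoothly clipped absolute deviation penalty."""
    b = np.abs(beta)
    small = b <= lam                                   # L1 region
    mid = (b > lam) & (b <= a * lam)                   # quadratic transition
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                    (a + 1) * lam**2 / 2))             # constant tail

print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
```

Unlike the lasso's linear penalty, the tail is flat, which is what yields the oracle property: large coefficients are not shrunk.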

3.
Non-parametric models for spatial efficiency
This research develops a nonconvex model for measuring the spatial efficiency of siting decisions and demonstrates the virtues of such measurements in comparison to those of convex approaches. Working with a case study from the public sector, we develop relative spatial efficiency (RSE) models which assess the efficiency of a location decision relative to a best-practice decision on the efficient (or most accessible) frontier. The paper also compares the results of the nonconvex methodology with those of the convex model and suggests the strengths and weaknesses of each in terms of the type of support they offer to decision-makers concerned with actual siting decisions. The authors express their gratitude to Professors Knox Lovell and Gerard Rushton, and to two anonymous referees for their helpful comments on an earlier draft.

4.
We analyse additive regression model fitting via the backfitting algorithm. We show that for a large class of curve estimators, including regressograms, simple step-by-step formulae can be given for the backfitting algorithm. The result of each cycle of the algorithm may be represented succinctly in terms of a sequence of d projections in n-dimensional space, where d is the number of design coordinates and n is the sample size. It follows from our formulae that the limit of the algorithm is simply the projection of the data onto the vector space orthogonal to the space of all n-vectors fixed by each of the projections. The formulae also provide the convergence rate of the algorithm, the variance of the backfitting estimator, consistency of the estimator, and the relationship of the estimator to that obtained by directly minimizing mean squared distance.
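A minimal sketch of such a backfitting cycle with regressogram smoothers, each cycle being exactly a sequence of d linear projections; all data here are simulated and the bin count is arbitrary.

```python
# Backfitting an additive model with regressogram (bin-average) smoothers.
import numpy as np

rng = np.random.default_rng(0)
n, d, n_bins = 200, 2, 10
X = rng.uniform(0, 1, (n, d))
y = np.sin(2 * np.pi * X[:, 0]) + (X[:, 1] - 0.5) ** 2 + rng.normal(0, 0.1, n)

def regressogram(x, r, n_bins):
    """Project r onto piecewise-constant functions of x (a linear projection)."""
    bins = np.clip((x * n_bins).astype(int), 0, n_bins - 1)
    means = np.array([r[bins == b].mean() if (bins == b).any() else 0.0
                      for b in range(n_bins)])
    return means[bins]

f = np.zeros((d, n))                      # current additive components
for _ in range(50):                       # backfitting cycles
    for j in range(d):
        partial = y - y.mean() - f[np.arange(d) != j].sum(axis=0)
        f[j] = regressogram(X[:, j], partial, n_bins)
        f[j] -= f[j].mean()               # identifiability: center each component
fit = y.mean() + f.sum(axis=0)
print("residual MSE:", np.mean((y - fit) ** 2))
```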

5.
In a seminal contribution, Ross (1976) showed that a static finite state-space market can be completed by supplementing the primitive securities with ordinary call and put options. Galvani (2009) extends this result to norm separable L^p-spaces, with 1 ≤ p < ∞. This study concludes that options maintain the same spanning power in the space of bounded payoffs topologized by the duality with the space of state price densities. In particular, under mild assumptions on the probability space, options written on a claim that is a.s. equal to an injective function complete the market.
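In the finite-state setting of Ross (1976), the spanning claim is easy to verify numerically: with an injective underlying claim, the bond, the claim, and calls at the intermediate payoff levels yield a payoff matrix of full rank. The payoff vector below is an invented example.

```python
# Call options on an injective claim complete a finite-state market.
import numpy as np

z = np.array([1.0, 2.0, 4.0, 7.0])                 # injective claim: distinct payoffs
strikes = np.sort(z)[:-1]                          # one call per interior payoff level
bond = np.ones_like(z)
calls = np.maximum(z[None, :] - strikes[:, None], 0.0)
payoffs = np.vstack([bond, z, calls])              # rows: traded securities

print("states:", z.size, " rank:", np.linalg.matrix_rank(payoffs))
# rank equals the number of states, so every contingent claim is replicable
```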

6.
This paper proposes a tail-truncated stochastic frontier model that allows for the truncation of technical efficiency from below. The truncation bound implies the inefficiency threshold for survival. Specifically, this paper assumes a uniform distribution of technical inefficiency and derives the likelihood function. Even though this distributional assumption imposes the strong restriction that technical inefficiency has a uniform probability density over [0, θ], where θ is the threshold parameter, the model has two advantages: (1) the reduction in the number of parameters compared with more complicated tail-truncated models allows better performance in numerical optimization; and (2) it is useful for empirical studies of the distribution of efficiency or productivity, particularly the truncation of the distribution. The Monte Carlo simulation results support the argument that this model approximates the distribution of inefficiency precisely, whether the data-generating process follows the uniform distribution or, provided the inefficiency threshold is small, the truncated half-normal distribution.
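Under the stated assumptions (v normal, inefficiency uniform on [0, θ]), convolving the two densities gives a closed-form marginal density for the composed error; the sketch below codes this derivation and fits it by ML on simulated data. It is our own reconstruction of the likelihood, not necessarily the paper's exact formulation.

```python
# ML for eps = v - u with v ~ N(0, sigma^2) and u ~ Uniform[0, theta]:
# f(e) = (1/theta) * [Phi((e + theta)/sigma) - Phi(e/sigma)]  (by convolution).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, sigma0, theta0 = 500, 0.3, 1.0
eps = rng.normal(0, sigma0, n) - rng.uniform(0, theta0, n)

def neg_loglik(params):
    sigma, theta = np.exp(params)                 # log-parameterized for positivity
    dens = (norm.cdf((eps + theta) / sigma) - norm.cdf(eps / sigma)) / theta
    return -np.sum(np.log(np.maximum(dens, 1e-300)))

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("sigma_hat, theta_hat:", np.exp(res.x))     # should be near (0.3, 1.0)
```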

7.
Generalised additive models (GAMs) are widely used in data analysis. In applications of the GAM, the link function involved is usually assumed to be a commonly used one without justification. Motivated by a real data example with binary response where the commonly used link function does not work, we propose a generalised additive model with unknown link function (GAMUL) for various types of data, including binary, continuous and ordinal. The proposed estimators are proved to be consistent and asymptotically normal. Semiparametric efficiency of the estimators is demonstrated in terms of their linear functionals. In addition, an iterative algorithm, in which all estimators can be expressed explicitly as linear functions of Y, is proposed to overcome the computational hurdle for GAM-type models. Extensive simulation studies show that the proposed estimation procedure works very well. The proposed GAMUL is finally used to analyze a real dataset about loan repayment in China, which leads to some interesting findings.

8.
In this paper a sample of UK mechanical engineering companies is ranked by a number of one-factor and two-factor efficiency measures. Of the one-factor measures, profitability proves to be most closely related to the more theoretically correct two-factor measures. Whilst this empirical finding supports a priori reasoning, care must be taken with its interpretation: when profitability differences between companies do not reflect efficiency differences, conventional two-factor efficiency measures will themselves be unreliable indicators of true efficiency.

9.
10.
11.
The conventional cost efficiency model assumes that all input prices are fixed and known exactly at each decision-making unit. In practice, however, exact knowledge of prices is difficult to obtain, and prices may be subject to variation over very short periods of time. In this paper, we develop a new DEA model, which can be transformed into a special case of a bi-level linear program, to calculate the lower bound of cost efficiency (CE) from the pessimistic viewpoint, addressing the shortcomings of existing approaches. As the input (price) cone of the pessimistic CE model tightens, the objective function converges to the traditional Farrell cost efficiency measure. Numerical examples are used to demonstrate the proposed approach and compare the results with those obtained with existing approaches.
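For orientation, the Farrell cost-efficiency measure that the pessimistic model converges to is itself a simple LP; the sketch below computes it for invented data and prices. The bi-level pessimistic version would additionally optimize over prices inside a cone, which is omitted here.

```python
# Farrell cost efficiency under a CRS technology, via scipy's linprog.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0],
              [3.0, 1.0, 2.0]])                 # inputs, shape (m, n)
Y = np.array([[1.0, 1.0, 1.0]])                 # outputs, shape (s, n)
p = np.array([1.0, 2.0])                        # input prices at the evaluated unit
k = 0                                           # unit under evaluation
m, n = X.shape
s = Y.shape[0]

# Variables: [x (m cost-minimizing input levels), lambda (n intensity weights)].
c = np.concatenate([p, np.zeros(n)])
A_ub = np.block([
    [-np.eye(m), X],                            # X @ lam <= x
    [np.zeros((s, m)), -Y],                     # Y @ lam >= y_k
])
b_ub = np.concatenate([np.zeros(m), -Y[:, k]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (m + n))

ce = res.fun / (p @ X[:, k])                    # minimal cost / observed cost
print("cost efficiency of unit", k, "=", round(ce, 3))
```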

12.
Estimation of technical efficiency is widely used in empirical research using both cross-sectional and panel data. Although several stochastic frontier models for panel data are available, only a few of them are normally applied in empirical research. In this article we chose a broad selection of such models based on different assumptions and specifications of heterogeneity, heteroskedasticity and technical inefficiency. We applied these models to a single dataset from Norwegian grain farmers for the period 2004–2008. We also introduced a new model that disentangles firm effects from persistent (time-invariant) and residual (time-varying) technical inefficiency. We found that efficiency results are quite sensitive to how inefficiency is modeled and interpreted. Consequently, we recommend that future empirical research should pay more attention to modeling and interpreting inefficiency as well as to the assumptions underlying each model when using panel data.

13.
Conclusions. In this paper we have proposed new techniques for simplifying the estimation of disequilibrium models by avoiding constrained maximum likelihood methods (which cannot avoid the numerous theoretical and practical difficulties mentioned above), including an unrealistic assumption of independence between the errors of the demand and supply equations. In the proposed first stage, one estimates the relative magnitude of the residuals from the demand and supply equations nonparametrically, even though they suffer from omitted-variables bias, because the coefficient of the omitted variable is known to be the same in both equations. The reason for using nonparametric methods is that they do not depend on parametric functional forms of the biased (bent inward) demand and supply equations. The first stage compares the absolute values of residuals from conditional expectations in order to classify the data points as belonging to the demand or the supply curve. At the second stage we estimate the economically meaningful scale elasticity and distribution parameters from the classified (separated) data. We extend nonparametric kernel estimation to the r = 4 case to improve the speed of convergence, as predicted by Singh's [1981] theory. In the first stage, r = 4 results give generally improved R² and |t| values in our study of the Dutch data used by many authors concerned with the estimation of floorspace productivity. We find that one can obtain reasonable results with our approximate but simpler two-stage methods. Detailed results are reported for four types of Dutch retail establishments. More research is needed to gain further experience and to extend the methodology to other disequilibrium models and other productivity estimation problems.
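A rough sketch of the first-stage classification on simulated short-side data: estimate the conditional expectation of quantity given the demand-side and supply-side regressors by Nadaraya–Watson kernel regression, then assign each point to the equation with the smaller absolute residual. The regressors, bandwidth, and data-generating process are all invented for illustration.

```python
# First-stage nonparametric classification of demand vs. supply regimes.
import numpy as np

def nw(x_grid, x, y, h):
    """Nadaraya-Watson estimate of E[y|x] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(4)
n = 300
xd = rng.uniform(0, 1, n)                      # demand-side regressor
xs = rng.uniform(0, 1, n)                      # supply-side regressor
d = 2.0 - xd + rng.normal(0, 0.05, n)
s = 1.0 + xs + rng.normal(0, 0.05, n)
q = np.minimum(d, s)                           # observed quantity (short side)

res_d = np.abs(q - nw(xd, xd, q, h=0.1))       # residual from E[q | demand regressors]
res_s = np.abs(q - nw(xs, xs, q, h=0.1))       # residual from E[q | supply regressors]
to_demand = res_d < res_s                      # classify by the smaller residual
print("agreement with true regime:", np.mean(to_demand == (d < s)).round(3))
```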

14.
Journal of Econometrics, 1999, 88(2): 341–363
Optimal estimation of missing values in ARMA models is typically performed by using the Kalman filter for likelihood evaluation, ‘skipping’ in the computations the missing observations, obtaining the maximum likelihood (ML) estimators of the model parameters, and using some smoothing algorithm. The same type of procedure has been extended to nonstationary ARIMA models in Gómez and Maravall (1994). An alternative procedure suggests filling in the holes in the series with arbitrary values and then performing ML estimation of the ARIMA model with additive outliers (AO). When the model parameters are not known the two methods differ, since the AO likelihood is affected by the arbitrary values. We develop the proper likelihood for the AO approach in the general non-stationary case and show the equivalence of this and the skipping method. Finally, the two methods are compared through simulation, and their relative advantages assessed; the comparison also includes the AO method with the uncorrected likelihood.
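The skipping method is available off the shelf: statsmodels' state-space ARIMA drops NaN observations from the Kalman-filter likelihood, and the fixed-interval smoother then interpolates them. The series below is simulated; reading the smoothed signal off the first state assumes the usual Harvey representation (observation equals the first state, with no measurement noise), which is how SARIMAX is set up.

```python
# Missing-value estimation in an ARMA(1,1) by Kalman-filter 'skipping'.
import numpy as np
import statsmodels.api as sm

np.random.seed(2)
y = sm.tsa.arma_generate_sample(ar=[1, -0.7], ma=[1, 0.4], nsample=200)
y[[30, 31, 120]] = np.nan                      # punch holes in the series

fit = sm.tsa.SARIMAX(y, order=(1, 0, 1)).fit(disp=False)
print("ML estimates (NaNs skipped in the likelihood):", fit.params.round(3))
print("smoothed values at the gaps:",
      fit.smoothed_state[0, [30, 31, 120]].round(3))
```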

15.
This paper is an empirical study of the uncertainty associated with technical efficiency estimates from stochastic frontier models. We show how to construct confidence intervals for estimates of technical efficiency levels under different sets of assumptions ranging from the very strong to the relatively weak. We demonstrate empirically how the degree of uncertainty associated with these estimates relates to the strength of the assumptions made and to various features of the data.
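Under the strongest set of assumptions (normal/half-normal composed error, known parameters), such intervals follow from the truncated-normal conditional distribution of u given the composed residual (Jondrow et al. 1982; Horrace and Schmidt 1996). The sketch below computes them with scipy's truncnorm for invented parameter values and residuals.

```python
# Pointwise 95% intervals for technical efficiency exp(-u) given e = v - u:
# u | e is N(mu_*, sigma_*^2) truncated at zero.
import numpy as np
from scipy.stats import truncnorm

sigma_u, sigma_v = 0.4, 0.2
e = np.array([-0.5, 0.0, 0.3])                 # illustrative composed residuals
s2 = sigma_u**2 + sigma_v**2
mu_star = -e * sigma_u**2 / s2
sigma_star = sigma_u * sigma_v / np.sqrt(s2)

a = (0.0 - mu_star) / sigma_star               # truncation point in standard units
lo = truncnorm.ppf(0.025, a, np.inf, loc=mu_star, scale=sigma_star)
hi = truncnorm.ppf(0.975, a, np.inf, loc=mu_star, scale=sigma_star)
for ei, l, h in zip(e, lo, hi):
    print(f"e={ei:+.2f}: 95% CI for exp(-u): [{np.exp(-h):.3f}, {np.exp(-l):.3f}]")
```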

16.
Summary. Recently, Bischoff and Fieger (1992) considered the classical problem of estimating a bounded normal mean when the loss is the p-th power of the error. They proved that for p ≥ 2 a two-point prior is least favourable provided the parameter interval is small enough. In the present paper it is shown that this result remains valid for p > 1. Moreover, the normal family is generalized to location parameter families. Finally, it is proved that no two-point prior is least favourable for absolute error loss, i.e., for p = 1.

17.
In studies on democracy and democratization, the Freedom House Index (FHI) is frequently used to measure the concept of polyarchy. This approach creates an often-noticed discrepancy between the conceptual and measurement levels: the concept of polyarchy is regarded as a minimalist definition of democracy, referring mainly to the procedural aspects of political systems, while the FHI reflects a maximalist definition of democracy. This article presents a proposal to improve conceptual validity when the FHI is used to measure polyarchy. The proposal adjusts the FHI in two respects. First, some sub-categories within the FHI are excluded based on their lack of relevance for the concept of polyarchy. Second, the principle of aggregation is changed from simple arithmetic addition to multiplication, which corresponds to the idea that all democratic institutions, according to the concept of polyarchy, are necessary for the democratic system. These two changes produce a revised index based on the FHI. As illustrated with empirical analyses, the revised index provides a quite different view of democratization at the global and state levels.
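The effect of the second adjustment is pure arithmetic and easy to see in a toy computation. The scores below are invented sub-category ratings rescaled to [0, 1], and the multiplicative rule is rendered as a geometric mean so the two indices share a scale; under multiplication, one absent institution drives the index to zero, matching the "necessary conditions" reading of polyarchy.

```python
# Additive vs. multiplicative aggregation of (invented) sub-category scores.
import numpy as np

countries = {
    "A": np.array([0.9, 0.8, 0.9, 0.85]),   # uniformly strong institutions
    "B": np.array([1.0, 1.0, 1.0, 0.0]),    # one institution absent
}
for name, scores in countries.items():
    additive = scores.mean()
    multiplicative = scores.prod() ** (1 / scores.size)  # geometric-mean variant
    print(f"{name}: additive={additive:.2f}, multiplicative={multiplicative:.2f}")
```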

18.
19.
Suppose that the econometrician is interested in comparing two misspecified moment restriction models, where the comparison is performed in terms of some chosen measure of fit. This paper is concerned with describing an optimal test of the Vuong (1989) and Rivers and Vuong (2002) type null hypothesis that the two models are equivalent under the given measure of fit (the ranking may vary for different measures). We adopt the generalized Neyman–Pearson optimality criterion, which focuses on the decay rates of the type I and II error probabilities under fixed non-local alternatives, and derive an optimal but practically infeasible test. Then, as an illustration, by considering the model comparison hypothesis defined by the weighted Euclidean norm of moment restrictions, we propose a feasible test statistic that approximates the optimal one and study its asymptotic properties. Local power properties, the one-sided test, and comparison under the generalized empirical likelihood-based measure of fit are also investigated. A simulation study illustrates that our approximate test is more powerful than the Rivers–Vuong test.
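For context, the classical Vuong statistic that the optimal test refines compares average log-likelihood contributions of the two models. The sketch below applies it to two non-nested parametric densities fitted to simulated data, using a likelihood-based measure of fit rather than the paper's weighted Euclidean norm of moment restrictions.

```python
# A classical Vuong-type comparison of two non-nested density models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=1.0, size=400)

# Model 1: exponential; Model 2: lognormal (both fitted by ML).
l1 = stats.expon.logpdf(x, scale=x.mean())
mu, sig = np.log(x).mean(), np.log(x).std()
l2 = stats.lognorm.logpdf(x, s=sig, scale=np.exp(mu))

d = l1 - l2
T = np.sqrt(len(d)) * d.mean() / d.std()       # asymptotically N(0,1) under H0
print(f"Vuong statistic: {T:.2f} "
      f"({'model 1' if T > 0 else 'model 2'} favored if |T| > 1.96)")
```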

20.