Similar Articles
 20 similar articles retrieved (search time: 388 ms).
1.
This paper examines the widespread practice where data envelopment analysis (DEA) efficiency estimates are regressed on some environmental variables in a second-stage analysis. In the literature, only two statistical models have been proposed in which second-stage regressions are well-defined and meaningful. In the model considered by Simar and Wilson (J Prod Anal 13:49–78, 2007), truncated regression provides consistent estimation in the second stage, whereas in the model proposed by Banker and Natarajan (Oper Res 56:48–58, 2008a), ordinary least squares (OLS) provides consistent estimation. This paper examines, compares, and contrasts the very different assumptions underlying these two models, and makes clear that second-stage OLS estimation is consistent only under very peculiar and unusual assumptions on the data-generating process that limit its applicability. In addition, we show that in either case, bootstrap methods provide the only feasible means for inference in the second stage. We also comment on ad hoc specifications of second-stage regression equations that ignore the part of the data-generating process that yields data used to obtain the initial DEA estimates.
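The second-stage logic lends itself to a compact illustration. Below is a minimal Python sketch in the spirit of Simar and Wilson's truncated-regression second stage with a parametric bootstrap (their Algorithm #1); the simulated DEA scores, covariates, truncation at 1 and all tuning constants are illustrative assumptions, and the code is not the authors' procedure.

```python
# A minimal sketch (not the authors' code) of a truncated-regression second
# stage with a parametric bootstrap, in the spirit of Simar-Wilson Algorithm #1.
# The DEA scores `dea_hat` (>= 1), the covariates Z and all settings are
# illustrative assumptions.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n, k = 200, 2
Z = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # intercept + covariates
beta_true, sigma_true = np.array([1.2, 0.3, -0.2]), 0.25
mu = Z @ beta_true
# simulate "DEA scores" from the truncated-normal DGP assumed by the model
dea_hat = stats.truncnorm.rvs((1 - mu) / sigma_true, np.inf,
                              loc=mu, scale=sigma_true, random_state=rng)

def negloglik(theta, y, Z):
    """Negative log-likelihood of a normal linear model left-truncated at 1."""
    beta, sigma = theta[:-1], np.exp(theta[-1])
    mu = Z @ beta
    return -np.sum(stats.norm.logpdf(y, mu, sigma)
                   - stats.norm.logsf(1.0, mu, sigma))

def fit(y, Z):
    start = np.append(np.linalg.lstsq(Z, y, rcond=None)[0], 0.0)
    res = optimize.minimize(negloglik, start, args=(y, Z), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])

beta_hat, sigma_hat = fit(dea_hat, Z)

# parametric bootstrap of the second-stage coefficients
B, boot = 200, []
for _ in range(B):
    mu_hat = Z @ beta_hat
    y_star = stats.truncnorm.rvs((1 - mu_hat) / sigma_hat, np.inf,
                                 loc=mu_hat, scale=sigma_hat, random_state=rng)
    boot.append(fit(y_star, Z)[0])
ci = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
print("beta_hat:", np.round(beta_hat, 3))
print("95% bootstrap CI per coefficient:\n", np.round(ci, 3))
```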

2.
Feenstra and Hanson [NBER Working Paper No. 6052 (1997)] propose a procedure to correct the standard errors in a two‐stage regression with generated dependent variables. Their method has subsequently been used in two‐stage mandated wage models [Feenstra and Hanson, Quarterly Journal of Economics (1999) Vol. 114, pp. 907–940; Haskel and Slaughter, The Economic Journal (2001) Vol. 111, pp. 163–187; Review of International Economics (2003) Vol. 11, pp. 630–650] and for the estimation of the sector bias of skill‐biased technological change [Haskel and Slaughter, European Economic Review (2002) Vol. 46, pp. 1757–1783]. Unfortunately, the proposed correction is negatively biased (sometimes even resulting in negative estimated variances) and therefore leads to overestimation of the inferred significance. We present an unbiased correction procedure and apply it to the models reported by Feenstra and Hanson (1999) and Haskel and Slaughter (2002).

3.
Recent studies have stressed the importance of privatization and openness to foreign competition for bank efficiency and economic growth. We study bank efficiency in Turkey, an emerging economy with great heterogeneity in bank types and ownership structures. Earlier studies of Turkish banking had three limitations: (i) excessive reliance on cost‐function frontier analyses, wherein volume of loans is a measure of banking output; (ii) pooling all banks or imposing ad hoc heterogeneity assumptions; and (iii) lack of a comprehensive panel data set for proper analysis of productivity and heterogeneity. We use an estimation–classification procedure to find likelihood‐driven classification of bank technologies in an 11‐year panel. In addition, we augment traditional cost‐frontier analysis with a labour‐efficiency analysis. We conclude that state banks are not particularly inefficient overall, but that they do utilize labour inefficiently. This partially supports recent calls for privatization. We also conclude that special finance houses (or Islamic banks) utilize the same technology as conventional domestic banks, and do so relatively efficiently. This suggests that they do not cause harm to the financial system. Finally, we conclude that foreign banks utilize a different technology from domestic ones. This suggests that one should not overstate their value to the financial sector. Copyright © 2005 John Wiley & Sons, Ltd.

4.
Time series data arise in many medical and biological imaging scenarios. In such images, a time series is obtained at each of a large number of spatially dependent data units. It is interesting to organize these data into model‐based clusters. A two‐stage procedure is proposed. In stage 1, a mixture of autoregressions (MoAR) model is used to marginally cluster the data. The MoAR model is fitted using maximum marginal likelihood (MMaL) estimation via a minorization–maximization (MM) algorithm. In stage 2, a Markov random field (MRF) model induces a spatial structure onto the stage 1 clustering. The MRF model is fitted using maximum pseudolikelihood (MPL) estimation via an MM algorithm. Both the MMaL and MPL estimators are proved to be consistent. Numerical properties are established for both MM algorithms. A simulation study demonstrates the performance of the two‐stage procedure. An application to the segmentation of a zebrafish brain calcium image is presented.
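To make the stage 1 idea concrete, here is a schematic Python sketch that clusters simulated series with a two-component mixture of Gaussian AR(1) models, fitted by a plain EM algorithm as a stand-in for the MM algorithm described in the abstract; stage 2 (the MRF spatial smoothing) and the calcium-imaging application are not reproduced, and all settings are illustrative assumptions.

```python
# Schematic stage-1 sketch: cluster many time series with a 2-component
# mixture of Gaussian AR(1) models via EM (a stand-in for the MM algorithm).
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n_series, T, K = 200, 50, 2
true_phi, true_sigma = [0.8, -0.3], [0.5, 0.5]
labels = rng.integers(K, size=n_series)
Y = np.zeros((n_series, T))
for i in range(n_series):
    for t in range(1, T):
        Y[i, t] = true_phi[labels[i]] * Y[i, t - 1] + rng.normal(scale=true_sigma[labels[i]])

def series_loglik(Y, phi, sigma):
    """Conditional log-likelihood of each series under one AR(1) component."""
    resid = Y[:, 1:] - phi * Y[:, :-1]
    return stats.norm.logpdf(resid, scale=sigma).sum(axis=1)

pi, phi, sigma = np.full(K, 1.0 / K), np.array([0.5, -0.5]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibilities of each component for each series
    log_r = np.stack([np.log(pi[k]) + series_loglik(Y, phi[k], sigma[k])
                      for k in range(K)], axis=1)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: mixing proportions and weighted AR(1) regressions
    pi = r.mean(axis=0)
    x, ynext = Y[:, :-1].ravel(), Y[:, 1:].ravel()
    for k in range(K):
        w = np.repeat(r[:, k], T - 1)
        phi[k] = np.sum(w * x * ynext) / np.sum(w * x * x)
        sigma[k] = np.sqrt(np.sum(w * (ynext - phi[k] * x) ** 2) / np.sum(w))

cluster = r.argmax(axis=1)
print("estimated phi:", np.round(phi, 2), " mixing:", np.round(pi, 2))
print("agreement with true labels (up to relabelling):",
      max(np.mean(cluster == labels), np.mean(cluster != labels)))
```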

5.
Despite the solid theoretical foundation on which the gravity model of bilateral trade is based, empirical implementation requires several assumptions which do not follow directly from the underlying theory. First, unobserved trade costs are assumed to be a (log‐)linear function of observables. Second, the effects of trade costs on trade flows are assumed to be constant across country pairs. Maintaining consistency with the underlying theory, but relaxing these assumptions, we estimate gravity models—in levels and logs—using two data sets via nonparametric methods. The results are striking. Despite the added flexibility of the nonparametric models, parametric models based on these assumptions offer equally or more reliable in‐sample predictions and out‐of‐sample forecasts in the majority of cases, particularly in the levels model. Moreover, formal statistical tests fail to reject either parametric functional form. Thus, concerns in the gravity literature over functional form appear unwarranted, and estimation of the gravity model in levels is recommended. Copyright © 2008 John Wiley & Sons, Ltd.

6.
This paper studies the efficient estimation of large‐dimensional factor models with both time and cross‐sectional dependence assuming (N,T) separability of the covariance matrix. The asymptotic distribution of the estimator of the factor and factor‐loading space under factor stationarity is derived and compared to that of the principal component (PC) estimator. The paper also considers the case when factors exhibit a unit root. We provide feasible estimators and show in a simulation study that they are more efficient than the PC estimator in finite samples. In an application, the estimation procedure is employed to estimate the Lee–Carter model and to forecast life expectancy. The Dutch gender gap is explored and the relationship between life expectancy and the level of economic development is examined in a cross‐country comparison.
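As a point of reference for the comparison mentioned above, the short Python sketch below implements only the baseline principal component (PC) estimator of an approximate factor model, with the usual F'F/T = I normalization; the dimensions, number of factors and simulated panel are illustrative assumptions, not the paper's efficient estimator.

```python
# Baseline PC estimator of an approximate factor model X = F L' + e,
# shown only as the benchmark the abstract refers to.
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 200, 100, 2
F = rng.normal(size=(T, r))                          # latent factors
L = rng.normal(size=(N, r))                          # factor loadings
X = F @ L.T + rng.normal(size=(T, N))                # observed panel

def pc_estimator(X, r):
    """PC estimator with the normalization F'F/T = I_r (Bai-Ng style)."""
    T = X.shape[0]
    eigval, eigvec = np.linalg.eigh(X @ X.T)         # T x T eigenproblem
    idx = np.argsort(eigval)[::-1][:r]               # r largest eigenvalues
    F_hat = np.sqrt(T) * eigvec[:, idx]
    L_hat = X.T @ F_hat / T
    return F_hat, L_hat

F_hat, L_hat = pc_estimator(X, r)
common_hat = F_hat @ L_hat.T
print("R^2 of fitted common component:",
      round(1 - np.var(X - common_hat) / np.var(X), 3))
```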

7.
Consider a linear regression model and suppose that our aim is to find a confidence interval for a specified linear combination of the regression parameters. In practice, it is common to perform a Durbin–Watson pretest of the null hypothesis of zero first‐order autocorrelation of the random errors against the alternative hypothesis of positive first‐order autocorrelation. If this null hypothesis is accepted then the confidence interval centered on the ordinary least squares estimator is used; otherwise the confidence interval centered on the feasible generalized least squares estimator is used. For any given design matrix and parameter of interest, we compare the confidence interval resulting from this two‐stage procedure and the confidence interval that is always centered on the feasible generalized least squares estimator, as follows. First, we compare the coverage probability functions of these confidence intervals. Second, we compute the scaled expected length of the confidence interval resulting from the two‐stage procedure, where the scaling is with respect to the expected length of the confidence interval centered on the feasible generalized least squares estimator, with the same minimum coverage probability. These comparisons are used to choose the better confidence interval, prior to any examination of the observed response vector.
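The two-stage procedure is easy to state in code. The sketch below (illustrative assumptions throughout) pretests with the Durbin–Watson statistic and then centres the confidence interval for the slope on OLS or on a feasible GLS estimator; the crude cut-off DW < 1.5 stands in for proper Durbin–Watson critical values, and nothing here reproduces the paper's coverage and expected-length comparisons.

```python
# Two-stage procedure: Durbin-Watson pretest, then an OLS- or FGLS-centred CI.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
n, rho = 100, 0.6
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):                       # AR(1) errors
    e[t] = rho * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()
dw = durbin_watson(ols.resid)

if dw >= 1.5:                               # pretest: no evidence of positive AR(1)
    ci = ols.conf_int(alpha=0.05)[1]        # slope interval centred on OLS
    method = "OLS"
else:                                       # pretest rejects: switch to feasible GLS
    fgls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=8)
    ci = fgls.conf_int(alpha=0.05)[1]
    method = "FGLS (GLSAR)"

print(f"DW = {dw:.2f}; slope CI from {method}: [{ci[0]:.3f}, {ci[1]:.3f}]")
```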

8.
Structural shocks in multivariate dynamic systems are hidden and often identified with reference to a priori economic reasoning. Based on a non‐Gaussian framework of independent shocks, this work provides an approach to discriminate between alternative identifying assumptions on the basis of dependence diagnostics. Relying on principles of Hodges–Lehmann estimation, we suggest a decomposition of reduced form covariance matrices that yields implied least dependent (structural) shocks. A Monte Carlo study underlines the discriminatory strength of the proposed identification strategy. Applying the approach to a stylized model of the Euro Area economy, independent shocks conform with features of demand, supply and monetary policy shocks.
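For intuition, the Python sketch below illustrates the general principle on a bivariate toy system: every orthogonal rotation of the Cholesky factor reproduces the reduced-form covariance, and the rotation whose implied shocks are least dependent (here, by a simple correlation-of-squares diagnostic) is selected. This is only a stand-in for the Hodges–Lehmann-based procedure in the abstract; the data-generating process and the diagnostic are illustrative assumptions.

```python
# Independence-based identification on a toy bivariate system: pick the
# rotation of the Cholesky factor whose implied shocks are least dependent.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)
T = 2000
eps = rng.laplace(size=(T, 2))                  # independent non-Gaussian shocks
B0 = np.array([[1.0, 0.5],
               [0.3, 1.0]])
u = eps @ B0.T                                  # reduced-form errors u = B0 eps

Sigma = np.cov(u, rowvar=False)
C = np.linalg.cholesky(Sigma)                   # one admissible decomposition

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def dependence(theta):
    """Dependence diagnostic of implied shocks for candidate B = C R(theta)."""
    B = C @ rotation(theta)
    e = u @ np.linalg.inv(B).T                  # implied structural shocks
    return abs(np.corrcoef(e[:, 0] ** 2, e[:, 1] ** 2)[0, 1])

res = optimize.minimize_scalar(dependence, bounds=(0.0, np.pi / 2), method="bounded")
B_hat = C @ rotation(res.x)
print("dependence at Cholesky:", round(dependence(0.0), 3))
print("dependence at least-dependent rotation:", round(dependence(res.x), 3))
print("estimated impact matrix (up to column order and sign):\n", np.round(B_hat, 2))
```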

9.
Structural vector autoregressive (SVAR) models have emerged as a dominant research strategy in empirical macroeconomics, but suffer from the large number of parameters employed and the resulting estimation uncertainty associated with their impulse responses. In this paper, we propose general‐to‐specific (Gets) model selection procedures to overcome these limitations. It is shown that single‐equation procedures are generally efficient for the reduction of recursive SVAR models. The small‐sample properties of the proposed reduction procedure (as implemented using PcGets) are evaluated in a realistic Monte Carlo experiment. The impulse responses generated by the selected SVAR are found to be more precise and accurate than those of the unrestricted VAR. The proposed reduction strategy is then applied to the US monetary system considered by Christiano, Eichenbaum and Evans (Review of Economics and Statistics, Vol. 78, pp. 16–34, 1996). The results are consistent with the Monte Carlo and question the validity of the impulse responses generated by the full system.

10.
This article deals with heterogeneity and spatial dependence in economic growth analysis by developing a two‐stage strategy that identifies clubs by a mapping analysis and estimates a club convergence model with spatial dependence. Since estimation of this class of convergence models in the presence of regional heterogeneity poses both identification and collinearity problems, we develop an entropy‐based estimation procedure that simultaneously takes account of ill‐posed and ill‐conditioned inference problems. The two‐step strategy is applied to assess the existence of club convergence and to estimate a two‐club spatial convergence model across Italian regions over the period 1970 to 2000.

11.
Studies of childhood development have suggested human capital is accumulated in complex and nonlinear ways. Nonetheless, empirical analyses of this process often impose a linear functional form. This paper investigates which technology assumptions matter in quantitative models of human capital production. I propose a general‐to‐restricted procedure to test the production technology, placing constraints on a modified McCarthy function, from which transcendental, constant elasticity of substitution, log‐linear and linear models are obtained as special cases. Applying the procedure to data on child height from the Young Lives surveys, as well as cognitive skills, I find that the technology of human capital production is neither log‐linear nor linear‐in‐parameters; rather, past and present inputs act as complements. I recommend that maintained hypotheses underlying functional form choices should be tested on a routine basis.

12.
Small area estimation is a widely used indirect estimation technique for micro‐level geographic profiling. Three unit level small area estimation techniques—the ELL or World Bank method, empirical best prediction (EBP) and M‐quantile (MQ)—can estimate micro‐level Foster, Greer, & Thorbecke (FGT) indicators: poverty incidence, gap and severity using both unit level survey and census data. However, they use different assumptions. The effects of using model‐based unit level census data reconstructed from cross‐tabulations and having no cluster level contextual variables for models are discussed, as are effects of small area and cluster level heterogeneity. A simulation‐based comparison of ELL, EBP and MQ uses a model‐based reconstruction of 2000/2001 data from Bangladesh and compares bias and mean square error. A three‐level ELL method is applied for comparison with the standard two‐level ELL that lacks a small area level component. An important finding is that the larger number of small areas for which ELL has been able to produce sufficiently accurate estimates in comparison with EBP and MQ has been driven more by the type of census data available or utilised than by the model per se.

13.
A simulation study was conducted to investigate the effect of non-normality and unequal variances on Type I error rates and test power of the classical factorial ANOVA F‐test and several alternatives for testing interaction effects, namely the rank transformation procedure (FR), the winsorized mean (FW), the modified mean (FM) and the permutation test (FP). Simulation results showed that as long as there is no significant deviation from normality and homogeneity of variances, all of the tests generally display similar results. However, when there is significant deviation from these assumptions, all tests except FR and FP are considerably affected. As a result, when the assumptions of the factorial ANOVA F‐test are not met, or when it has not been checked whether they are met, the FR and FP tests are more suitable than the classical factorial ANOVA F‐test.
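A miniature version of such a study is easy to script. The Python sketch below (illustrative assumptions: a 2 × 2 design, skewed heteroscedastic errors, no true interaction) estimates the empirical Type I error of the classical two-way ANOVA F-test for the interaction and of its rank-transformation counterpart FR; the FW, FM and FP procedures from the abstract are not reproduced here.

```python
# Toy Monte Carlo: Type I error of the interaction F-test (classical vs rank
# transformation) under skewed, heteroscedastic errors and no true interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
n_per_cell, n_sims, alpha = 10, 500, 0.05

def interaction_pvalue(df, response):
    model = smf.ols(f"{response} ~ C(A) * C(B)", data=df).fit()
    return anova_lm(model, typ=2).loc["C(A):C(B)", "PR(>F)"]

rej_f, rej_fr = 0, 0
for _ in range(n_sims):
    rows = []
    for a in (0, 1):
        for b in (0, 1):
            scale = 1.0 + 2.0 * b                              # unequal variances
            err = rng.exponential(scale, n_per_cell) - scale   # skewed, mean zero
            y = 1.0 + err                                      # no interaction
            rows += [{"A": a, "B": b, "y": v} for v in y]
    df = pd.DataFrame(rows)
    df["y_rank"] = df["y"].rank()
    rej_f += interaction_pvalue(df, "y") < alpha
    rej_fr += interaction_pvalue(df, "y_rank") < alpha

print("Type I error, classical F:", rej_f / n_sims)
print("Type I error, rank FR    :", rej_fr / n_sims)
```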

14.
Small area estimation typically requires model‐based methods that depend on isolating the contribution to overall population heterogeneity associated with group (i.e. small area) membership. One way of doing this is via random effects models with latent group effects. Alternatively, one can use an M‐quantile ensemble model that assigns indices to sampled individuals characterising their contribution to overall sample heterogeneity. These indices are then aggregated to form group effects. The aim of this article is to contrast these two approaches to characterising group effects and to illustrate them in the context of small area estimation. In doing so, we consider a range of different data types, including continuous data, count data and binary response data.

15.
During the last three decades, integer‐valued autoregressive processes of order p [INAR(p)] based on different operators have been proposed as natural, intuitive and possibly efficient models for integer‐valued time‐series data. However, this literature is surprisingly silent on the usefulness of the standard AR(p) process, which is otherwise meant for continuous‐valued time‐series data. In this paper, we attempt to explore the usefulness of the standard AR(p) model for obtaining coherent forecasts from integer‐valued time series. First, some advantages of this standard Box–Jenkins‐type AR(p) process are discussed. We then carry out simulation experiments, which show the adequacy of the proposed method over the available alternatives. Our simulation results indicate that even when samples are generated from an INAR(p) process, the Box–Jenkins model performs as well as the INAR(p) processes, especially with respect to the mean forecast. Two real data sets are employed to study the suitability of the standard AR(p) model for integer‐valued time‐series data.
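The comparison can be mimicked in a few lines. The sketch below (illustrative assumptions: an INAR(1) data-generating process with binomial thinning and Poisson innovations) fits a standard Gaussian AR(1) to the counts and rounds its mean forecasts to obtain coherent integer values, alongside the conditional-mean forecast of the true INAR(1); it is a toy version of the experiments described, not the authors' study.

```python
# Toy comparison: standard AR(1) mean forecasts (rounded to integers) versus
# the conditional-mean forecast of the true INAR(1) process.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
T, alpha, lam = 300, 0.5, 2.0

# simulate an INAR(1): y_t = alpha o y_{t-1} + eps_t (binomial thinning)
y = np.zeros(T, dtype=int)
y[0] = rng.poisson(lam / (1 - alpha))
for t in range(1, T):
    y[t] = rng.binomial(y[t - 1], alpha) + rng.poisson(lam)

train, test = y[:-10], y[-10:]

# standard Box-Jenkins AR(1) fitted to the count series
ar_fit = ARIMA(train, order=(1, 0, 0)).fit()
ar_forecast = np.round(ar_fit.forecast(steps=10))        # coherent integer forecast

# h-step conditional mean of the true INAR(1), for comparison
h = np.arange(1, 11)
inar_mean = alpha ** h * train[-1] + lam * (1 - alpha ** h) / (1 - alpha)

print("AR(1)   mean forecasts:", ar_forecast.astype(int))
print("INAR(1) mean forecasts:", np.round(inar_mean).astype(int))
print("observed test values  :", test)
```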

16.
Model averaging has become a popular method of estimation, following increasing evidence that model selection and estimation should be treated as one joint procedure. Weighted‐average least squares (WALS) is a recent model‐average approach, which takes an intermediate position between frequentist and Bayesian methods, allows a credible treatment of ignorance, and is extremely fast to compute. We review the theory of WALS and discuss extensions and applications.

17.
The familiar logit and probit models provide convenient settings for many binary response applications, but a larger class of link functions may occasionally be desirable. Two parametric families of link functions are investigated: the Gosset link based on the Student t latent variable model, with the degrees of freedom parameter controlling the tail behavior, and the Pregibon link based on the (generalized) Tukey λ family, with two shape parameters controlling skewness and tail behavior. Both Bayesian and maximum likelihood methods for estimation and inference are explored, compared and contrasted. In applications, like the propensity score matching problem discussed below, where it is critical to have accurate estimates of the conditional probabilities, we find that misspecification of the link function can create serious bias. Bayesian point estimation via MCMC performs quite competitively with MLE methods; however, nominal coverage of Bayes credible regions is somewhat more problematic.
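As a sketch of the maximum likelihood side only, the Python code below fits a binary response model with a "Gosset" link, i.e. a Student t CDF, by profiling the likelihood over a small grid of degrees of freedom, with an ordinary logit fit printed for comparison; the data, grid and starting values are illustrative assumptions, and the Pregibon link and the Bayesian MCMC treatment are not reproduced.

```python
# MLE sketch of a binary response model with a Student-t (Gosset) link,
# profiling the likelihood over the degrees-of-freedom parameter.
import numpy as np
from scipy import optimize, stats
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
X = sm.add_constant(rng.normal(size=(n, 1)))
beta_true, nu_true = np.array([0.2, 1.0]), 3.0
y = (X @ beta_true + stats.t.rvs(nu_true, size=n, random_state=rng) > 0).astype(int)

def negloglik(beta, nu):
    p = stats.t.cdf(X @ beta, df=nu)
    p = np.clip(p, 1e-10, 1 - 1e-10)          # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_gosset(nu):
    res = optimize.minimize(negloglik, np.zeros(X.shape[1]), args=(nu,), method="BFGS")
    return res.x, -res.fun

grid = [1, 2, 3, 5, 8, 15, 30]                # profile over degrees of freedom
fits = {nu: fit_gosset(nu) for nu in grid}
nu_hat = max(fits, key=lambda nu: fits[nu][1])
beta_hat = fits[nu_hat][0]

logit = sm.Logit(y, X).fit(disp=0)
print("Gosset link: nu_hat =", nu_hat, " beta_hat =", np.round(beta_hat, 2))
print("Logit      : beta_hat =", np.round(logit.params, 2))
```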

18.
In this paper, we introduce a Bayesian panel probit model with two flexible latent effects: first, unobserved individual heterogeneity that is allowed to vary in the population according to a nonparametric distribution; and second, a latent serially correlated common error component. In doing so, we extend the approach developed in Albert and Chib (Journal of the American Statistical Association 1993; 88: 669–679; in Bayesian Biostatistics, Berry DA, Stangl DK (eds), Marcel Dekker: New York, 1996), and in Chib and Carlin (Statistics and Computing 1999; 9: 17–26) by releasing restrictive parametric assumptions on the latent individual effect and eliminating potential spurious state dependence with latent time effects. The model is found to outperform more traditional approaches in an extensive series of Monte Carlo simulations. We then apply the model to the estimation of a patent equation using firm‐level data on research and development (R&D). We find a strong effect of technology spillovers on R&D but little evidence of product market spillovers, consistent with economic theory. The distribution of latent firm effects is found to have a multimodal structure featuring within‐industry firm clustering. Copyright © 2012 John Wiley & Sons, Ltd.
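For background, the data-augmentation step that this model builds on is short enough to show. The Python sketch below implements the basic Albert–Chib Gibbs sampler for a plain cross-sectional probit, which is the building block the abstract extends with nonparametric individual effects and a serially correlated common component; the prior, simulated data and number of draws are illustrative assumptions.

```python
# Albert-Chib data-augmentation Gibbs sampler for a plain probit model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, beta_true = 800, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

B0_inv = np.eye(2) / 100.0                     # N(0, 100 I) prior on beta
V = np.linalg.inv(X.T @ X + B0_inv)            # posterior covariance (unit-variance errors)
n_draws, beta = 2000, np.zeros(2)
draws = np.empty((n_draws, 2))
for s in range(n_draws):
    # 1. draw latent z_i ~ N(x_i'beta, 1) truncated by the observed y_i
    mu = X @ beta
    lower = np.where(y == 1, -mu, -np.inf)     # z > 0 when y = 1
    upper = np.where(y == 1, np.inf, -mu)      # z < 0 when y = 0
    z = mu + stats.truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2. draw beta from its Gaussian full conditional
    beta = rng.multivariate_normal(V @ (X.T @ z), V)
    draws[s] = beta

burn = draws[500:]
print("posterior means:", np.round(burn.mean(axis=0), 2))
print("true beta      :", beta_true)
```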

19.
We consider the recent novel two‐step estimator of Iaryczower and Shum (American Economic Review 2012; 102: 202–237), who analyze voting decisions of US Supreme Court justices. Motivated by the underlying theoretical voting model, we suggest that where the data under consideration display variation in the common prior, estimates of the structural parameters based on their methodology should generally benefit from including interaction terms between individual and time covariates in the first stage whenever there is individual heterogeneity in expertise. We show numerically, via simulation and re‐estimation of the US Supreme Court data, that the first‐order interaction effects that appear in the theoretical model can have an important empirical implication. Copyright © 2015 John Wiley & Sons, Ltd.

20.
Survey Estimates by Calibration on Complex Auxiliary Information
In the last decade, calibration estimation has developed into an important field of research in survey sampling. Calibration is now an important methodological instrument in the production of statistics. Several national statistical agencies have developed software designed to compute calibrated weights based on auxiliary information available in population registers and other sources. This paper reviews some recent progress and offers some new perspectives. Calibration estimation can be used to advantage in a range of different survey conditions. This paper examines several situations, including estimation for domains in one‐phase sampling, estimation for two‐phase sampling, and estimation for two‐stage sampling with integrated weighting. Typical of those situations is complex auxiliary information, a term that we use for information made up of several components. An example occurs when a two‐stage sample survey has information both for units and for clusters of units, or when estimation for domains relies on information from different parts of the population. Complex auxiliary information opens up more than one way of computing the final calibrated weights to be used in estimation. They may be computed in a single step or in two or more successive steps. Depending on the approach, the resulting estimates do differ to some degree. All significant parts of the total information should be reflected in the final weights. The effectiveness of the complex information is mirrored by the variance of the resulting calibration estimator. Its exact variance is not presentable in simple form. Close approximation is possible via the corresponding linearized statistic. We define and use automated linearization as a shortcut in finding the linearized statistic. Its variance is easy to state, to interpret and to estimate. The variance components are expressed in terms of residuals, similar to those of standard regression theory. Visual inspection of the residuals reveals how the different components of the complex auxiliary information interact and work together toward reducing the variance.
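In the simplest (chi-square distance, single-step) case the calibration weights have a closed form, which the short Python sketch below illustrates: w_k = d_k(1 + x_k'λ) with λ chosen so that the weighted auxiliary totals exactly match the known population totals t_x. The synthetic population, the simple random sample and the auxiliary variables are illustrative assumptions and do not reflect the complex multi-phase settings discussed in the paper.

```python
# Linear (GREG-type) calibration weighting: w_k = d_k (1 + x_k' lam), with
# lam = (sum d_k x_k x_k')^{-1} (t_x - sum d_k x_k).
import numpy as np

rng = np.random.default_rng(7)
N, n = 10_000, 500

# synthetic population with two auxiliary variables and a study variable y
x_pop = np.column_stack([np.ones(N), rng.gamma(2.0, 2.0, N)])
y_pop = 3.0 + 1.5 * x_pop[:, 1] + rng.normal(scale=2.0, size=N)
t_x = x_pop.sum(axis=0)                       # known auxiliary totals

sample = rng.choice(N, size=n, replace=False) # simple random sample
x, y = x_pop[sample], y_pop[sample]
d = np.full(n, N / n)                         # design weights under SRS

# solve for lam, then form the calibrated weights
M = (d[:, None] * x).T @ x
lam = np.linalg.solve(M, t_x - d @ x)
w = d * (1.0 + x @ lam)

print("auxiliary totals reproduced:", np.allclose(w @ x, t_x))
print("HT estimate of total y     :", round(d @ y))
print("calibration estimate of y  :", round(w @ y))
print("true population total y    :", round(y_pop.sum()))
```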
