20 similar records found
1.
Min-Hsiao Tsai 《Metrika》2009,70(3):355-367
Consider the problem of discriminating between two rival response surface models and estimating parameters in the identified model. To construct designs serving both model discrimination and parameter estimation, the Mγ-optimality criterion, which puts weight γ (0 ≤ γ ≤ 1) on model discrimination and 1 − γ on parameter estimation, is adopted. The corresponding Mγ-optimal product design is explicitly derived in terms of canonical moments. By applying the maximin principle to the Mγ-efficiency of any Mγ′-optimal product design, a criterion-robust optimal product design is proposed.
2.
This paper examines the wide-spread practice where data envelopment analysis (DEA) efficiency estimates are regressed on some
environmental variables in a second-stage analysis. In the literature, only two statistical models have been proposed in which
second-stage regressions are well-defined and meaningful. In the model considered by Simar and Wilson (J Prod Anal 13:49–78,
2007), truncated regression provides consistent estimation in the second stage, whereas in the model proposed by Banker and Natarajan
(Oper Res 56: 48–58, 2008a), ordinary least squares (OLS) provides consistent estimation. This paper examines, compares, and contrasts the very different
assumptions underlying these two models, and makes clear that second-stage OLS estimation is consistent only under very peculiar
and unusual assumptions on the data-generating process that limit its applicability. In addition, we show that in either case,
bootstrap methods provide the only feasible means for inference in the second stage. We also comment on ad hoc specifications
of second-stage regression equations that ignore the part of the data-generating process that yields data used to obtain the
initial DEA estimates.
3.
Liang and Ng (Metrika 68:83–98, 2008) proposed a componentwise conditional distribution method for Lp-uniform sampling on Lp-norm n-spheres. On the basis of properties of a special family of Lp-norm spherical distributions, we suggest a wide class of algorithms for sampling uniformly distributed points on n-spheres and n-balls in Lp spaces, generalizing the approach of Harman and Lacko (J Multivar Anal 101:2297–2304, 2010) and including the method of Liang and Ng as a special case. We also present results of a numerical study showing that the choice of the best algorithm from the class depends significantly on the value of p.
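To make the normalization idea concrete, here is a minimal Python/NumPy sketch of the gamma-based construction (my own illustration of the general approach described above, not the authors' code; the function names are hypothetical): coordinates are drawn with density proportional to exp(−|t|^p) via gamma variates and then scaled to unit Lp norm, following the normalization idea of Harman and Lacko cited in the abstract.

```python
import numpy as np

def sample_lp_sphere(n, p, size=1, rng=None):
    # Points on the unit L_p sphere in R^n (uniform w.r.t. the cone
    # measure): draw i.i.d. coordinates with density proportional to
    # exp(-|t|^p), then normalize by the L_p norm. If G ~ Gamma(1/p, 1),
    # then G^(1/p) with a random sign has exactly that density.
    rng = np.random.default_rng(rng)
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=(size, n))
    x = np.sign(rng.uniform(-1.0, 1.0, size=(size, n))) * g ** (1.0 / p)
    return x / np.linalg.norm(x, ord=p, axis=1, keepdims=True)

def sample_lp_ball(n, p, size=1, rng=None):
    # Fill the unit L_p ball by scaling sphere samples with U^(1/n),
    # the standard radial trick for uniform sampling in a ball.
    rng = np.random.default_rng(rng)
    s = sample_lp_sphere(n, p, size=size, rng=rng)
    r = rng.uniform(size=(size, 1)) ** (1.0 / n)
    return s * r
```

The gamma representation avoids any rejection step, so the cost per point is linear in n for every p > 0.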
4.
An important issue when conducting stochastic frontier analysis is how to choose a proper parametric model, which includes
choices of the functional form of the frontier function, distributions of the composite errors, and the exogenous variables.
In this paper, we extend the likelihood ratio test of Vuong (Econometrica 57(2):307–333, 1989) and Takeuchi's (Suri-Kagaku (Math Sci) 153:12–18, 1976) model selection criterion to stochastic frontier models. The most attractive feature of this test is that it can be used not only for testing non-nested models but remains applicable even when the general model is misspecified. Finally, we demonstrate how to apply this test to the Indian farm data used by Battese and Coelli (J Prod Anal 3:153–169, 1992; Empir Econ 20(2):325–332, 1995) and Alvarez et al. (J Prod Anal 25:201–212, 2006).
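The computational core of a Vuong-type comparison is simple once pointwise log-likelihood contributions of the two candidate models are in hand; the following is a generic sketch (my own illustration of the classical statistic, not the paper's stochastic-frontier-specific extension, and the function name is hypothetical):

```python
import numpy as np
from scipy.stats import norm

def vuong_statistic(loglik1, loglik2):
    # Vuong (1989) test for non-nested model selection: the scaled mean
    # of pointwise log-likelihood differences is asymptotically standard
    # normal under the null that the two models fit equally well.
    d = np.asarray(loglik1) - np.asarray(loglik2)
    z = np.sqrt(d.size) * d.mean() / d.std(ddof=1)
    return z, 2.0 * norm.sf(abs(z))  # statistic and two-sided p-value
```

A significantly positive z favors model 1, a significantly negative z favors model 2, and an insignificant z means the data cannot discriminate between them.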
5.
Vassilios D. Pyromalis George S. Vozikis 《The International Entrepreneurship and Management Journal》2009,5(4):439-460
The succession process in family firms has long been identified as the most critical phase in the family business life-cycle (e.g. Morris et al., Journal of Business Venturing 18:513–531, 1997; Wang et al. 2000) and characterized as the period in which most family firm fatalities occur (Handler and Kram, Family Business Review 1:361–381, 1988). This paper is an empirical study of Greek family firms that seeks to identify the critical success factors with a major impact on the outcome of a generational transition in the leadership of the family firm. Based on an integrated conceptual framework proposed by Pyromalis et al. (2006), we test the impact of five factors, namely the incumbent's propensity to step aside, the successor's willingness to take over, positive family relations and communication, succession planning, and the successor's appropriateness and preparation, on both stakeholders' satisfaction with the succession process and the effectiveness of the succession process per se. The results provide useful insight and confirm the importance of these factors by mapping a safe passage through the family business succession process, contributing not only to the family business literature but also generating strong arguments in favor of the family firm as an integral entrepreneurial element of a region's sustainable economic development.
6.
The original Data Envelopment Analysis (DEA) models developed by Charnes et al. (Eur J Oper Res 2:429–444, 1978) and Banker et al. (Manag Sci 30:1078–1092, 1984) were both radial models. These models and their varied extensions have remained the most popular DEA models in terms of
utilization. The benchmark targets they determined for inefficient units are primarily based on the notion of maintaining
the same input and output mixes originally employed by the evaluated unit (i.e. disregarding allocative considerations). This
paper presents a methodology to investigate allocative and overall efficiency in the absence of defined input and output prices.
The benchmarks determined from models based on this methodology will consider all possible input and/or output mixes. Application
of this methodology is illustrated on a model of the financial intermediary function of a bank branch network.
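For orientation, the radial building block that such benchmarking models extend can be written as a small linear program; the following is a minimal sketch of the standard input-oriented CCR/BCC score (my own illustration using SciPy, not the paper's allocative-efficiency models):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, unit, vrs=False):
    # Input-oriented radial DEA score for one unit.
    # X: (n, m) inputs, Y: (n, s) outputs. vrs=False gives the CCR
    # (constant returns) model; vrs=True adds the BCC convexity row.
    n, m = X.shape
    s = Y.shape[1]
    x0, y0 = X[unit], Y[unit]
    # Decision vector: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input rows:  sum_i lambda_i * x_ij - theta * x0_j <= 0
    A_in = np.hstack([-x0.reshape(-1, 1), X.T])
    # Output rows: -sum_i lambda_i * y_ik <= -y0_k
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -y0]
    A_eq, b_eq = None, None
    if vrs:
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # sum lambda = 1
        b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    if not res.success:
        raise RuntimeError(res.message)
    return res.fun
```

The optimal lambda weights identify the benchmark peers of the evaluated unit; the abstract's point is that this radial target keeps the unit's own input mix, whereas allocative analysis would search over all mixes.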
7.
This paper proposes a dual-level inefficiency model for analysing datasets with a sub-company structure, which permits firm
inefficiency to be decomposed into two parts: a component that varies across different sub-companies within a firm (internal
inefficiency); and a persistent component that applies across all sub-companies in the same firm (external inefficiency).
We adapt the models developed by Kumbhakar and Hjalmarsson (J Appl Econom 10:33–47, 1995) and Kumbhakar and Heshmati (Am J Agric Econ 77:660–674, 1995), making the same distinction between persistent and residual inefficiency, but in our case across sub-companies comprising
a firm, rather than over time. The proposed model is important in a regulatory context, where datasets with a sub-company
structure are commonplace, and regulators are interested in identifying and eliminating both persistent and sub-company varying
inefficiency. Further, as regulators often have to work with small cross-sections, the utilisation of sub-company data can
be seen as an additional means of expanding cross-sectional datasets for efficiency estimation. Using an international dataset
of rail infrastructure managers we demonstrate the possibility of separating firm inefficiency into its persistent and sub-company
varying components. The empirical illustration highlights the danger that failure to allow for the dual-level nature of inefficiency
may cause overall firm inefficiency to be underestimated.
8.
Non-discretionary or environmental variables are regarded as important in the evaluation of efficiency in Data Envelopment
Analysis (DEA), but there is no consensus on the correct treatment of these variables. This paper compares the performance
of the standard BCC model as a base case with two single-stage models: the Banker and Morey (1986a) model, which incorporates continuous environmental variables, and the Banker and Morey (1986b) model, which incorporates categorical environmental variables. Simulation analyses are conducted using a shifted Cobb-Douglas
function, with one output, one non-discretionary input, and two discretionary inputs. The production function is constructed
to separate environmental impact from managerial inefficiency, while providing measures of both for comparative purposes.
Tests are performed to evaluate the accuracy of each model. The distribution of the inputs, the sample size and the number
of categories for the categorical model are varied in the simulations to determine their impact on the performance of each
model. The results show that the Banker and Morey models should be used in preference to the standard BCC model when the environmental
impact is moderate to high. Both the continuous and categorical models perform equally well but the latter may be better suited
to some applications with larger sample sizes. Even when the environmental impact is slight, the use of a simple two-way split
of the sample data can produce significantly better results under the Categorical model in comparison to the BCC model.
9.
It is well-known that the naive bootstrap yields inconsistent inference in the context of data envelopment analysis (DEA)
or free disposal hull (FDH) estimators in nonparametric frontier models. For inference about efficiency of a single, fixed
point, drawing bootstrap pseudo-samples of size m < n provides consistent inference, although coverages are quite sensitive to the choice of subsample size m. We provide a probabilistic framework in which these methods are shown to be valid for statistics composed of functions of
DEA or FDH estimators. We examine a simple, data-based rule for selecting m suggested by Politis et al. (Stat Sin 11:1105–1124, 2001), and provide Monte Carlo evidence on the size and power of our tests. Our methods (i) allow for heterogeneity in the inefficiency
process, and unlike previous methods, (ii) do not require multivariate kernel smoothing, and (iii) avoid the need for solutions
of intermediate linear programs.
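As a concrete illustration of the setting, the following sketch computes an input-oriented FDH score and an m-out-of-n subsampling confidence interval (my own simplified illustration; the rescaling uses the standard FDH rate n^(1/(p+q)) for p inputs and q outputs and is not the paper's exact procedure):

```python
import numpy as np

def fdh_input_score(X, Y, x0, y0):
    # Input-oriented FDH efficiency of (x0, y0): the smallest radial
    # contraction of x0 still dominated by an observed unit whose
    # outputs weakly exceed y0.
    mask = np.all(Y >= y0, axis=1)
    if not mask.any():
        return np.inf
    return np.max(X[mask] / x0, axis=1).min()

def subsample_ci(X, Y, x0, y0, m, B=500, alpha=0.05, seed=0):
    # m-out-of-n bootstrap: recompute the score on pseudo-samples of
    # size m < n (naive n-out-of-n resampling is inconsistent here)
    # and rescale differences by the FDH convergence rate n^(1/(p+q)).
    rng = np.random.default_rng(seed)
    n, p = X.shape
    q = Y.shape[1]
    kappa = 1.0 / (p + q)
    theta = fdh_input_score(X, Y, x0, y0)
    d = np.empty(B)
    for b in range(B):
        idx = rng.integers(n, size=m)
        d[b] = m ** kappa * (fdh_input_score(X[idx], Y[idx], x0, y0) - theta)
    lo_q, hi_q = np.quantile(d, [alpha / 2, 1 - alpha / 2])
    return theta - hi_q / n ** kappa, theta - lo_q / n ** kappa
```

As the abstract stresses, coverage in practice depends heavily on the choice of m, which is why a data-based selection rule such as that of Politis et al. is needed.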
10.
A stochastic frontier model with correction for sample selection
William Greene 《Journal of Productivity Analysis》2010,34(1):15-24
Heckman’s (Ann Econ Soc Meas 4(5):475–492, 1976; Econometrica 47:153–161, 1979) sample selection model has been employed for three decades in applications of linear regression. This paper builds
on this framework to obtain a sample selection correction for the stochastic frontier model. We first show a surprisingly
simple way to estimate the familiar normal-half normal stochastic frontier model using maximum simulated likelihood. We then
extend the technique to a stochastic frontier model with sample selection. In an application that seems superficially obvious,
the method is used to revisit the World Health Organization data (WHO in The World Health Report, WHO, Geneva 2000; Tandon et al. in Measuring the overall health system performance for 191 countries, World Health Organization, 2000) where the sample partitioning is based on OECD membership. The original study pooled all 191 countries. The OECD members
appear to be discretely different from the rest of the sample. We examine the difference in a sample selection framework.
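The simulation idea can be illustrated on the basic normal-half normal model, whose density has a known closed form against which a simulated likelihood can be checked; here is a minimal sketch (my own illustration of the maximum simulated likelihood building block, not Greene's selection-corrected estimator; function names are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def frontier_logpdf_exact(eps, sigma_u, sigma_v):
    # Closed-form log-density of the composed error e = v - u with
    # v ~ N(0, sigma_v^2) and u ~ |N(0, sigma_u^2)| (normal-half normal):
    # f(e) = (2/sigma) * phi(e/sigma) * Phi(-e*lambda/sigma).
    sigma = np.hypot(sigma_u, sigma_v)
    lam = sigma_u / sigma_v
    return (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
            + norm.logcdf(-eps * lam / sigma))

def frontier_logpdf_simulated(eps, sigma_u, sigma_v, draws=20000, seed=0):
    # Simulated-likelihood version: integrate out u by averaging the
    # conditional density of v = e + u over half-normal draws
    # u = sigma_u * |z|, z ~ N(0, 1).
    z = np.abs(np.random.default_rng(seed).standard_normal(draws))
    u = sigma_u * z
    return np.log(np.mean(norm.pdf((eps + u) / sigma_v)) / sigma_v)
```

The closed form makes the simulation unnecessary in this basic model; the point of the MSL construction is that the same averaging device still works once selection terms are added and the closed form is lost.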
11.
Alireza Faraz, R. B. Kazemzadeh, M. B. Moghadam, Ahmad Parsian 《Quality and Quantity》2012,46(4):1323-1336
Faraz and Parsian (Stat Pap 47:569–593, 2006) have shown that the double warning lines (DWL) scheme detects process shifts more quickly than the other variable ratio
sampling schemes such as variable sample sizes (VSS), variable sampling intervals (VSI) and variable sample sizes and sampling
intervals (VSSVSI). In this paper, the DWL T² control chart for monitoring the process mean vector is economically designed. The cost model proposed by Costa and Rahim (J Appl Stat 28:875–885, 2001) is used here and is minimized through a genetic algorithm (GA) approach. The effects of the model parameters on the chart parameters and the resulting operating loss are then studied, and finally a comparison among all possible variable ratio sampling (VRS) schemes is made to choose the best option economically.
12.
We consider a semiparametric method to estimate logistic regression models with missing values in both the covariates and the outcome variable, and propose two new estimators. The first, based solely on the validation set, is an extension of the validation likelihood estimator of Breslow and Cain (Biometrika 75:11–20, 1988). The second is a joint conditional likelihood estimator based on the validation and non-validation data sets. Both estimators are semiparametric, as they require neither model assumptions about the missing-data mechanism nor specification of the conditional distribution of the missing covariates given the observed covariates. The asymptotic distribution theory is developed under the assumption that all covariates are categorical. The finite-sample properties of the proposed estimators are investigated through simulation studies, which show that the joint conditional likelihood estimator is the most efficient. A cable TV survey data set from Taiwan is used to illustrate the practical use of the proposed methodology.
13.
Brian A. Altman 《Employee Responsibilities and Rights Journal》2010,22(1):21-32
This paper applies Novak’s (1998) theory of learning to the problem of workplace bullying. Novak’s theory offers an understanding of how actions of bullying
and responses to bullying can be seen as deriving from individualized conceptualizations of workplace bullying by those involved.
Further, Novak’s theory suggests that training involving Ausubel’s concept of meaningful learning (Ausubel, Educational Theory 11(1):15–25, 1961; Ausubel et al. 1978), which attends to learners’ pre-existing knowledge and allows new meaning to be constructed regarding workplace bullying, can lead to new actions related to workplace bullying. Ideally, these new actions can involve both a reduction in workplace bullying prevalence and responses to workplace bullying that recognize and are informed by the negative consequences of this workplace dynamic.
14.
Ruey-Ching Hwang Jhao-Siang Siao Huimin Chung C. K. Chu 《Journal of Productivity Analysis》2011,36(3):263-273
We use a stochastic frontier model with firm-specific technical inefficiency effects in a panel framework (Battese and Coelli in Empir Econ 20:325–332, 1995) to assess two popular probability of bankruptcy (PB) measures, based on the Merton model (Merton in J Financ 29:449–470, 1974) and the discrete-time hazard model (DHM; Shumway in J Bus 74:101–124, 2001). Three important results are obtained from our empirical studies. First, a firm with a higher PB generally has lower technical efficiency. Second, for an ex-post bankrupt firm, its PB tends to increase and its technical efficiency of production tends to decrease as the time to its bankruptcy draws near. Finally, the information content about a firm’s technical inefficiency provided by PB based on the DHM is significantly greater than that based on the Merton model. By the last result, and the fact that economic-based efficiency measures are reasonable indicators of the long-term health and prospects of firms (Baek and Pagán in Q J Bus Econ 41:27–41, 2002), we conclude that PB based on the DHM is a better credit risk proxy for firms.
15.
Since Solow (Q J Econ 70:65–94, 1956), the economic literature has widely accepted innovation and technological progress as the central drivers of long-term economic
growth. From the microeconomic perspective, this has led to the idea that the growth effects on the macroeconomic level should
be reflected in greater competitiveness of the firms. Although innovation effort does not always translate into greater competitiveness,
it is recognized that innovation is, in an appropriate sense, unique and differs from other inputs like labor or capital.
Nonetheless, often this uniqueness is left unspecified. We analyze two arguments rendering innovation special, the first related
to partly non-discretionary innovation input levels and the second to the induced increase in the firm’s competitiveness on
the global market. Methodologically the analysis is based on restriction tests in non-parametric frontier models, where we
use and extend tests proposed by Simar and Wilson (Commun Stat Simul Comput 30(1):159–184, 2001; J Prod Anal, forthcoming, 2010). The empirical data is taken from the German Community Innovation Survey 2007 (CIS 2007), where we focus on mechanical engineering
firms. Our results are consistent with the explanation of the firms’ inability to freely choose the level of innovation inputs.
However, we do not find significant evidence that increased innovation activities correspond to an increase in the ability
to serve the global market.
16.
Frank Fullard 《The International Entrepreneurship and Management Journal》2007,3(3):263-276
This paper describes a model developed to measure customer satisfaction with enterprise training programmes. Based on developments
in customer satisfaction and quality measurement, it is proposed as an alternative to the training evaluation model developed
by Kirkpatrick (1959). A single indicator, a Customer Satisfaction Index (CSI), quantifies the level of satisfaction with each training programme.
The model also measures the individual parameters that contribute to the CSI, as well as their relative importance. It facilitates
a benchmarking process regarding these parameters and between training programmes. The development process of the model is
described, as is its use in practice.
17.
Stochastic FDH/DEA estimators for frontier analysis
In this paper we extend the work of Simar (J Prod Anal 28:183–201, 2007), introducing noise in nonparametric frontier models. We develop an approach that synthesizes the best features of the two
main methods in the estimation of production efficiency. Specifically, our approach first allows for statistical noise, similar to stochastic frontier analysis (even in a more flexible way), and second, it allows modelling multiple-input-multiple-output technologies without imposing parametric assumptions on the production relationship, as is done in non-parametric methods such as Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of
local maximum likelihood estimation and extends recent works of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating marginal effects on the inefficiency level jointly with the marginal effects of inputs. The approach is robust to heteroskedastic cases and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves DEA/FDH estimators by allowing them to be quite robust to statistical noise and especially to outliers, which were the main problems of the original DEA/FDH estimators. The procedure shows great performance for various simulated cases and is also illustrated on some real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the Kumbhakar et al. (J Econom 137(1):1–27, 2007) method by making the resulting frontier smoother, monotonic and, if we wish, concave.
18.
Holger Dette 《Metrika》1997,46(1):71-82
In his book, Pukelsheim [8] pointed out that designs supported at the arcsin points are very efficient for statistical inference in a polynomial regression model. In this note we determine the canonical moments of a class of distributions which have nearly equal weights at the arcsin points. The class contains the D-optimal arcsin support design and the D1-optimal design for a polynomial regression. The results allow explicit representations of the D- and D1-efficiencies of these designs in all polynomial models with a degree less than the number of support points of the design.
19.
Pooia Lalbakhsh 《Enterprise Information Systems》2017,11(5):758-785
This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. The algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO) at the core, which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaptation by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests confirm the advantage of TACD over the other prediction algorithms on both measures of AUC and accuracy.
20.
This paper sums up in a common analytical structure the main results, scattered in the economic literature, concerning the linearity between the rate of profit and the real wage in a simple Sraffa model. The paper is mainly based on previous results of one of the two authors and on results of Miyao (Int Econ Rev 18:151–162, 1977) and Schefold (Zeitschrift für angewandte Mathematik und Physik 27:873–875, 1976a; Zeitschrift für Nationalökonomie 36:21–48, 1976b).