Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
A rich theory of production and analysis of productive efficiency has developed since the pioneering work by Tjalling C. Koopmans and Gerard Debreu. Michael J. Farrell published the first empirical study, and it appeared in a statistical journal (Journal of the Royal Statistical Society), even though the article provided no statistical theory. The literature in econometrics, management sciences, operations research and mathematical statistics has since been enriched by hundreds of papers trying to develop or implement new tools for analysing productivity and efficiency of firms. Both parametric and non-parametric approaches have been proposed. The mathematical challenge is to derive estimators of production, cost, revenue or profit frontiers, which represent, in the case of production frontiers, the optimal loci of combinations of inputs (like labour, energy and capital) and outputs (the products or services produced by the firms). Optimality is defined in terms of various economic considerations. The efficiency of a particular unit is then measured by its distance to the estimated frontier. The statistical problem can be viewed as one of estimating the support of a multivariate random variable, subject to shape constraints, in multiple dimensions. These techniques are applied in thousands of papers in the economic and business literature. This 'guided tour' reviews the development of various non-parametric approaches since the early work of Farrell. Remaining challenges and open issues in this active field are also described. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
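To make the distance-to-frontier idea concrete, here is a minimal sketch of an output-oriented efficiency score against a free disposal hull (FDH) frontier, one of the simplest non-parametric estimators in this literature; the data and function names are illustrative, not taken from the review.

```python
import numpy as np

def fdh_output_efficiency(x0, y0, X, Y):
    """Output-oriented FDH efficiency of unit (x0, y0) relative to observed
    units (X[i], Y[i]): the largest factor by which y0 could be scaled while
    remaining dominated by some observed unit."""
    # Units that use no more of every input than x0
    dominating = np.all(X <= x0, axis=1)
    if not dominating.any():
        return 1.0  # no comparable unit: treat as on the frontier
    # For each dominating unit, the maximal proportional expansion of y0
    ratios = np.min(Y[dominating] / y0, axis=1)
    return float(ratios.max())  # > 1 means output could be expanded

# Hypothetical data: 100 firms, 2 inputs, 1 output
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(100, 2))
Y = (X.prod(axis=1, keepdims=True) ** 0.4) * rng.uniform(0.5, 1.0, size=(100, 1))
print(f"score: {fdh_output_efficiency(X[0], Y[0], X, Y):.3f}")  # 1.0 = frontier
```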

2.
In this article we are interested in the asymptotic comparison, at optimal levels, of a set of semi-parametric reduced-bias extreme value (EV) index estimators, valid for a wide class of heavy-tailed models underlying the available data. As in the classical case, no single estimator always dominates the alternatives, but interesting clear-cut patterns are found. Consequently, in practice, a suitable choice of a set of EV index estimators jointly enables better estimation of the EV index γ, the primary parameter of extreme events.
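For context, the classical semi-parametric estimator of a positive EV index, against which reduced-bias variants are usually benchmarked, is the Hill estimator. A minimal sketch, with a hypothetical Pareto sample:

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the extreme value index gamma > 0,
    based on the k largest order statistics."""
    x = np.sort(np.asarray(sample))[::-1]      # descending order
    logs = np.log(x[:k + 1])                   # k+1 top observations
    return float(np.mean(logs[:k] - logs[k]))  # mean log-excess over X_(n-k)

# Hypothetical heavy-tailed sample: Pareto with gamma = 0.5
rng = np.random.default_rng(1)
sample = rng.pareto(2.0, size=5000) + 1.0      # tail index 2 -> gamma = 0.5
print(hill_estimator(sample, k=200))           # should be near 0.5
```

The choice of k drives the bias-variance trade-off, which is exactly what the 'optimal levels' in the abstract refer to.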

3.
Verifying probabilistic forecasts for extreme events is a highly active research area because popular media and public opinion naturally focus on extreme events, and biased conclusions are readily made. In this context, classical verification methods tailored for extreme events, such as thresholded and weighted scoring rules, have undesirable properties that cannot be mitigated, and the well-known continuous ranked probability score (CRPS) is no exception. In this paper, we define a formal framework for assessing the behavior of forecast evaluation procedures with respect to extreme events, which we use to demonstrate that assessment based on the expectation of a proper score is not suitable for extremes. Instead, we propose studying the properties of the CRPS as a random variable, using extreme value theory, to address extreme event verification. An index is introduced to compare calibrated forecasts, summarizing the ability of probabilistic forecasts to predict extremes. The strengths and limitations of this method are discussed using both theoretical arguments and simulations.
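For readers unfamiliar with the score, the CRPS of a Gaussian predictive distribution has a well-known closed form, which makes it easy to see that even a calibrated forecast produces a heavy-tailed score distribution at extreme observations. A minimal sketch, with illustrative inputs:

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a N(mu, sigma^2) forecast at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# A perfectly calibrated forecast still yields occasional very large scores:
rng = np.random.default_rng(2)
obs = rng.normal(0.0, 1.0, size=100_000)
scores = crps_gaussian(obs, 0.0, 1.0)
print(scores.mean(), np.quantile(scores, 0.999))  # heavy right tail of the score
```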

4.
Growing scientific evidence suggests that more frequent and severe weather extremes such as heat waves, hurricanes, flooding and droughts will have an increasing impact on organizations, industries and entire economies. These findings call for the development of theoretical and practical frameworks to strengthen the capacity of organizations to respond to such impacts. Yet despite the need to understand what is required to build anticipatory adaptation and organizational resilience to expected impacts, the organizational theory literature offers only limited insights. This paper proposes a comprehensive conceptual framework of organizational adaptation and resilience to extreme weather events for addressing the effects of ecological discontinuities in organizational research and strategic decision-making. Implications and suggestions for future research are offered. Copyright © 2011 John Wiley & Sons, Ltd and ERP Environment.

5.
We apply extreme value analysis to US sectoral stock indices in order to assess whether tail risk measures like value-at-risk and extremal linkages were significantly altered by 9/11. We test whether semi-parametric quantile estimates of 'downside risk' and 'upward potential' have increased after 9/11. The same methodology allows one to estimate probabilities of joint booms and busts for pairs of sectoral indices or for a sectoral index and a market portfolio. The latter probabilities measure the sectoral response to macro shocks during periods of financial stress (so-called 'tail-βs'). Taking 9/11 as the sample midpoint, we find that tail-βs often increase in a statistically and economically significant way. This might be due to perceived risk of new terrorist attacks. Copyright © 2008 John Wiley & Sons, Ltd.
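As a rough illustration of what a tail-β measures, rather than the paper's semi-parametric estimator, a naive empirical version is the conditional probability that a sector crashes given that the market crashes; the return series below are hypothetical:

```python
import numpy as np

def empirical_tail_beta(sector, market, q=0.05):
    """P(sector below its q-quantile | market below its q-quantile),
    by simple counting; EVT-based versions extrapolate beyond the sample."""
    s_thr = np.quantile(sector, q)
    m_thr = np.quantile(market, q)
    market_crash = market < m_thr
    return float(np.mean(sector[market_crash] < s_thr))

# Hypothetical correlated daily returns
rng = np.random.default_rng(3)
z = rng.standard_normal((2, 1000))
sector = 0.6 * z[0] + 0.8 * z[1]
market = z[0]
print(empirical_tail_beta(sector, market))  # > 0.05 under positive dependence
```

Comparing this quantity on pre- and post-9/11 subsamples mimics the midpoint split described in the abstract.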

6.
Good statistical practice dictates that summaries in Monte Carlo studies should always be accompanied by standard errors. Those standard errors are easy to provide for summaries that are sample means over the replications of the Monte Carlo output: for example, bias estimates, power estimates for tests and mean squared error estimates. But often more complex summaries are of interest: medians (often displayed in boxplots), sample variances, ratios of sample variances and non-normality measures such as skewness and kurtosis. In principle, standard errors for most of these latter summaries may be derived from the Delta Method, but that extra step is often a barrier for standard errors to be provided. Here, we highlight the simplicity of using the jackknife and bootstrap to compute these standard errors, even when the summaries are somewhat complicated. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
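A minimal sketch of the point being made, with hypothetical Monte Carlo output: a bootstrap standard error for a summary such as the median of the replications takes only a few lines.

```python
import numpy as np

rng = np.random.default_rng(4)
mc_output = rng.standard_t(df=5, size=1000)  # hypothetical estimates, 1000 replications

def bootstrap_se(data, stat, n_boot=2000, rng=rng):
    """Bootstrap standard error of a summary statistic of Monte Carlo output."""
    n = len(data)
    reps = [stat(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))

print(np.median(mc_output), "+/-", bootstrap_se(mc_output, np.median))
```

The same call works unchanged for variances, variance ratios, skewness or kurtosis: only `stat` changes.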

7.
We estimate parametric and semi-parametric binary choice models of benefit take-up by British pensioners and use a revealed preference argument to infer the cash-equivalent value of disutility arising from stigma or complexity of the claims process. These implicit costs turn out to be relatively small, averaging about £3–4 per week across Income Support recipients. Using the Foster–Greer–Thorbecke measure of poverty among pensioners, we find that allowing for implicit claim costs incurred by benefit recipients raises the measured degree of poverty by no more than 13%. Copyright © 2007 John Wiley & Sons, Ltd.

8.
Consumption-based equivalence scales are estimated by applying the extended partially linear model (EPLM) to the 1998 German Income and Consumption Survey (EVS). In this model the equivalence scales are identified from nonlinearities in household demand. The econometric framework therefore should not impose strong restrictions on the functional forms of household expenditure shares. The chosen semi-parametric specification meets this requirement. It is flexible, it yields $\sqrt{n}$-consistent parameter estimates and it is consistent with consumer theory. Estimated equivalence scales are below or in the range of the expert equivalence scales of the German social benefits system. Copyright © 2006 John Wiley & Sons, Ltd.

9.
In the area of environmental analysis using hedonic price models, we investigate the performance of various nonparametric and semiparametric specifications. The proposed model specifications are made up of two parts: a linear component for house characteristics and a non-(semi)parametric component representing the nonlinear influence of environmental indicators on house prices. We adopt a general-to-specific search procedure, based on recent specification tests comparing the proposed specifications with a fully nonparametric benchmark model, to select the best model specification. An application of these semiparametric models to rural districts indicates that pollution resulting from intensive livestock farming has a significant nonlinear impact on house prices. Copyright © 2008 John Wiley & Sons, Ltd.

10.
Because the state of the equity market is latent, several methods have been proposed to identify past and current states of the market and forecast future ones. These methods encompass semi-parametric rule-based methods and parametric Markov switching models. We compare the mean-variance utilities that result when a risk-averse agent uses the predictions of the different methods in an investment decision. Our application of this framework to the S&P 500 shows that rule-based methods are preferable for (in-sample) identification of the state of the market, but Markov switching models for (out-of-sample) forecasting. In-sample, only the mean return of the market index matters, which rule-based methods capture exactly. Because Markov switching models use both the mean and the variance to infer the state, they produce superior forecasts and lead to significantly better out-of-sample performance than rule-based methods. We conclude that the variance is a crucial ingredient for forecasting the market state. Copyright © 2016 John Wiley & Sons, Ltd.
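As a hedged illustration of the rule-based side of this comparison, the sketch below implements a generic drawdown/run-up dating rule, not necessarily the rules studied in the paper; the threshold and price path are illustrative.

```python
import numpy as np

def rule_based_states(prices, threshold=0.20):
    """Label each period bull (+1) or bear (-1): switch to bear after a
    drawdown of `threshold` from the last peak, back to bull after an
    equally large run-up from the last trough."""
    states = np.ones(len(prices), dtype=int)
    state, extreme = 1, prices[0]          # start in a bull state
    for t, p in enumerate(prices):
        if state == 1:
            extreme = max(extreme, p)      # track running peak
            if p <= extreme * (1 - threshold):
                state, extreme = -1, p     # enter bear state
        else:
            extreme = min(extreme, p)      # track running trough
            if p >= extreme * (1 + threshold):
                state, extreme = 1, p      # enter bull state
        states[t] = state
    return states

# Hypothetical index path
rng = np.random.default_rng(5)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, size=2500)))
print(np.mean(rule_based_states(prices) == 1))  # fraction of bull periods
```

Note the rule uses only price levels, i.e. mean returns; a Markov switching model would additionally exploit the variance, which is the abstract's central point.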

11.
Univariate continuous distributions are one of the fundamental components on which statistical modelling, ancient and modern, frequentist and Bayesian, multi-dimensional and complex, is based. In this article, I review and compare some of the main general techniques for providing families of typically unimodal distributions on $\mathbb{R}$ with one or two, or possibly even three, shape parameters, controlling skewness and/or tailweight, in addition to their all-important location and scale parameters. One important and useful family comprises the 'skew-symmetric' distributions brought to prominence by Azzalini. As these are covered in considerable detail elsewhere in the literature, I focus more on their complements and competitors. Principal among these are distributions formed by transforming random variables, by what I call 'transformation of scale'—including two-piece distributions—and by probability integral transformation of non-uniform random variables. I also treat briefly the issues of multivariate extension, of distributions on subsets of $\mathbb{R}$ and of distributions on the circle. The review and comparison is not comprehensive, necessarily being selective and therefore somewhat personal. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
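As a small worked example of the two-piece construction mentioned above, the two-piece normal glues two half-normals with different scales at a common mode; the parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def two_piece_normal_pdf(x, mu=0.0, sigma1=1.0, sigma2=2.0):
    """Two-piece normal: different scales on either side of the mode mu,
    joined continuously; sigma2 > sigma1 gives a right-skewed density."""
    scale = np.where(x < mu, sigma1, sigma2)
    return (2.0 / (sigma1 + sigma2)) * norm.pdf((x - mu) / scale)

x = np.linspace(-5, 10, 7)
print(two_piece_normal_pdf(x))  # density integrates to 1 over the real line
```

The single extra parameter (the ratio sigma2/sigma1) controls skewness while location and scale retain their usual roles.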

12.
Second-order orientation methods provide a natural tool for the analysis of spatial point process data. In this paper, we extend the spatial point pair orientation distribution function to the spatiotemporal setting. The new space–time orientation distribution function is used to detect space–time anisotropic configurations. An edge-corrected estimator is defined and illustrated through a simulation study. We apply the resulting estimator to data on the spatiotemporal distribution of fire ignition events caused by humans in a square area of 30 × 30 km² over four years. Our results confirm that our approach is able to detect directional components at distinct spatiotemporal scales. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.

13.
Extreme value theory is concerned with the study of the asymptotic distribution of extreme events, that is, events that are rare in frequency and huge in magnitude relative to the majority of observations. Statistical methods derived from it have been employed increasingly in finance, especially for risk measurement. This paper surveys some of its main applications, namely for testing different distributional assumptions for the data, for Value-at-Risk and Expected Shortfall calculations, for asset allocation under safety-first type constraints, and for the study of contagion and dependence across markets under conditions of stress.
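As an illustration of the Value-at-Risk and Expected Shortfall applications surveyed here, a standard peaks-over-threshold sketch fits a generalized Pareto distribution to excesses over a high threshold; the loss data are hypothetical and the 95% threshold is an assumption (threshold choice is a modelling decision in its own right).

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)
losses = rng.standard_t(df=4, size=10_000)     # hypothetical heavy-tailed losses

u = np.quantile(losses, 0.95)                  # high threshold
excesses = losses[losses > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0)  # GPD shape and scale

def pot_var_es(alpha, n=len(losses), n_u=len(excesses)):
    """EVT-based (McNeil-Frey style) VaR and ES at confidence level alpha."""
    var = u + (beta / xi) * ((n / n_u * (1 - alpha)) ** (-xi) - 1)
    es = (var + beta - xi * u) / (1 - xi)      # valid for xi < 1
    return var, es

print(pot_var_es(0.99))
```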

14.
This paper presents estimators of distributional impacts of interventions when selection into the program is based on observable characteristics. Distributional impacts are calculated as differences in inequality measures of the marginal distributions of potential outcomes of receiving and not receiving the treatment. The estimation procedure involves, in a first step, non-parametric estimation of the propensity score. In the second step, weighted versions of inequality measures are computed using weights based on the estimated propensity score. Consistency, semi-parametric efficiency and validity of inference based on the percentile bootstrap are shown for the estimators. Results from Monte Carlo exercises show their good performance in small samples. Copyright © 2015 John Wiley & Sons, Ltd.
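A minimal sketch of the two-step idea under simplifying assumptions: the propensity score is fitted here by logistic regression rather than the paper's non-parametric first step, and the Gini coefficient stands in for the inequality measures; the data and names are simulated and illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_gini(y, w):
    """Gini coefficient of a weighted sample, via the Lorenz-curve trapezoid rule."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    p = w / w.sum()
    s = np.cumsum(p * y)
    s_prev = np.concatenate(([0.0], s[:-1]))
    return float(1 - np.sum(p * (s_prev + s)) / s[-1])

# Hypothetical data: selection on an observable covariate x
rng = np.random.default_rng(7)
n = 5000
x = rng.standard_normal(n)
d = rng.binomial(1, 1 / (1 + np.exp(-x)))              # treatment assignment
y = np.exp(0.5 * x + 0.3 * d + rng.normal(0, 0.5, n))  # positive outcomes

e = LogisticRegression().fit(x[:, None], d).predict_proba(x[:, None])[:, 1]
gini_treated = weighted_gini(y[d == 1], 1 / e[d == 1])        # Y(1) distribution
gini_control = weighted_gini(y[d == 0], 1 / (1 - e[d == 0]))  # Y(0) distribution
print("distributional impact on Gini:", gini_treated - gini_control)
```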

15.
We consider the estimation of nonlinear models with mismeasured explanatory variables, when information on the marginal distribution of the true values of these variables is available. We derive a semi-parametric MLE that is shown to be $\sqrt{n}$ consistent and asymptotically normally distributed. In a simulation experiment we find that the finite sample distribution of the estimator is close to the asymptotic approximation. The semi-parametric MLE is applied to a duration model for AFDC welfare spells with misreported welfare benefits. The marginal distribution of the correctly measured welfare benefits is obtained from an administrative source. Copyright © 2010 John Wiley & Sons, Ltd.

16.
We propose the use of instrumental variables and pairwise matching to identify the average treatment effect on variance in potential outcomes. We show that identifying and estimating program impact on dispersion of potential outcomes in an endogenous-switching model is possible, without using the identification-at-infinity argument, if we impose semi-parametric conditions or shape restrictions on the error structure. In the presence of a multi-valued or continuously distributed instrument, we recommend the pairwise-matching method under a set of symmetry conditions. Simulations and an empirical example show that the matching method is much more precise than the instrumental-variable approach. Copyright © 2013 John Wiley & Sons, Ltd.

17.
Bottazzi and Peri (Economic Journal 2007; 117: 486–511) show the existence of a cointegrating relationship between the domestic stock of knowledge, domestic R&D and the international knowledge stock for a panel of OECD countries, and interpret it as evidence supporting semi-endogenous over endogenous growth theory. We replicate the baseline specification of their study and show that the main results are robust to the use of a different estimation strategy (Bai et al., Journal of Econometrics 2009; 149: 82–99) that duly takes into account cross-sectional correlation; interestingly, in this case we also find a larger role for knowledge spillovers. Copyright © 2014 John Wiley & Sons, Ltd.

18.
Receiving fewer resources than a competitor may evoke envy, which can result in spiteful behavior. This paper applies evolutionary theory to understand envy and its outcomes. A theoretical framework is developed based on the cause–effect relationships between unequal outcomes, envy, defection from cooperation, and welfare loss. To test this framework, an experiment with 136 participants is run. The results confirm that receiving less than another can indeed lead to experiences of envy and defection from future cooperation, producing a welfare loss of one-sixth. Copyright © 2014 John Wiley & Sons, Ltd.

19.
This paper uses extreme value theory to study the implications of skewness risk for nominal loan contracts in a production economy. Productivity and inflation innovations are drawn from generalized extreme value distributions. The model is solved using a third-order perturbation and estimated by the simulated method of moments. Results show that the data reject the hypothesis that innovations are drawn from normal distributions and favor instead the alternative that they are drawn from asymmetric distributions. Estimates indicate that skewness risk accounts for 12% of the risk premia and reduces bond yields by approximately 55 basis points. For a bond that pays 1 dollar at maturity, the adjustment factor associated with skewness risk ranges from 0.15 cents for a 3-month bond to 2.05 cents for a 5-year bond. Copyright © 2016 John Wiley & Sons, Ltd.
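To see the key ingredient in miniature, the hedged sketch below draws innovations from a generalized extreme value distribution and checks that their skewness departs from the Gaussian benchmark; the shape parameter is illustrative, not an estimate from the paper.

```python
import numpy as np
from scipy.stats import genextreme, skew

rng = np.random.default_rng(8)
# scipy's shape c corresponds to -xi in the usual GEV parameterization,
# so c = -0.2 gives a heavy right tail (xi = 0.2)
innovations = genextreme.rvs(c=-0.2, loc=0.0, scale=1.0,
                             size=100_000, random_state=rng)
innovations -= innovations.mean()              # centre, as model innovations would be
print("sample skewness:", skew(innovations))   # clearly positive, unlike N(0, s^2)
```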

20.
Cause-related events are growing in frequency and popularity. These events enable corporates and not-for-profit organisations to collaborate for mutual benefit within the strategic framework of a social partnership. However, while anecdotal evidence indicates that millions of dollars are invested in events, less is known about how the strategic objectives of social partnerships are achieved via cause-related events. We present the findings of an ethnographic study of two social partnerships and contribute insights into how and why events help them achieve their strategic objectives. Case analysis reveals that the fit between events and partnerships; the people, teams, and relationships; and the collaboration of resources all contribute to generating competitive advantage and value. We discuss the managerial implications for those collaborating to organise a cause-related event. Copyright © 2016 John Wiley & Sons, Ltd.
