Similar Articles
20 similar articles found.
1.
Aspects of statistical analysis in DEA-type frontier models
In Grosskopf (1995) and Banker (1995), different approaches to, and problems of, statistical inference in DEA frontier models are presented. This paper focuses on the basic characteristics of DEA models from a statistical point of view; it arose from comments and discussions on both papers above. The framework of DEA models is deterministic (all the observed points lie on the same side of the frontier); nevertheless, a stochastic model can be constructed once a data generating process is defined. Statistical analysis may then be performed and sampling properties of DEA estimators can be established. However, practical statistical inference (such as hypothesis tests and confidence intervals) still needs devices like the bootstrap. A consistent bootstrap also relies on a clear definition of the data generating process and on a consistent estimator of it; the approach of Simar and Wilson (1995) is described. Finally, some avenues are proposed for introducing stochastic noise into DEA models, in the spirit of the Kneip–Simar (1995) approach.
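The bootstrap machinery the abstract refers to can be illustrated with a minimal, hypothetical sketch of the generic percentile bootstrap. Note that for DEA frontier estimators this naive resampling is not consistent, which is precisely why a carefully defined data generating process, as in Simar and Wilson (1995), is needed; all names below are illustrative.

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Generic percentile bootstrap confidence interval for stat(data).

    Resamples the data with replacement, recomputes the statistic on
    each resample, and reads off the empirical alpha/2 and 1-alpha/2
    quantiles of the bootstrap distribution.
    """
    rng = random.Random(seed)
    boots = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

For frontier estimators, the resampling step would have to draw from a consistent estimate of the data generating process rather than from the empirical distribution directly.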

2.
In-depth data analysis plus statistical modeling can produce inferential causal models. Their creation thus combines aspects of analysis by close inspection, that is, reason analysis and cross-tabular analysis, with statistical analysis procedures, especially those that are special cases of the generalized linear model (McCullagh and Nelder, 1989; Agresti, 1996; Lindsey, 1997). This paper explores some of the roots of this combined method and suggests some new directions. An exercise clarifies some limitations of classic reason analysis by showing how the cross tabulation of variables with controls for test factors may produce better inferences. Then, given the cross tabulation of several variables, by explicating Coleman effect parameters, logistic regressions, and Poisson log-linear models, it shows how generalized linear models provide appropriate measures of effects and tests of statistical significance. Finally, to address a weakness of reason analysis, a case-control design is proposed and an example is developed.

3.
In this text the authors assess the importance of statistics in the monetary policy of the Czech National Bank (CNB) over the course of the economic transformation process, with particular consideration of changing statistical needs and of the possibilities and limits of exploiting statistical data in monetary analyses. The importance of statistics lies both in the collection and processing of statistical information and in the use of statistical methods to analyse data. Since the start of the 1990s, the requirements placed on statistics have been significantly influenced by monetary policy. In the period 1990–1997, monetary targeting was the primary influence; since 1998, monetary policy has been shaped by inflation targeting. Statistical priorities switched from monetary data to economic and financial market data. Much progress has been made in the use of statistical methods for analysing data. The statistics available at present cover the CNB's standard monetary-policy requirements and are on par with those in developed countries. Their further development will reflect the standard changes taking place in the more advanced countries.

4.
L. Nie. Metrika, 2006, 63(2): 123–143
Generalized linear and nonlinear mixed-effects models are used extensively in biomedical, social, and agricultural sciences. The statistical analysis of these models is based on the asymptotic properties of the maximum likelihood estimator. However, it is usually assumed that the maximum likelihood estimator is consistent, without providing a proof. A rigorous proof of the consistency by verifying conditions from existing results can be very difficult due to the integrated likelihood. In this paper, we present some easily verifiable conditions for the strong consistency of the maximum likelihood estimator in generalized linear and nonlinear mixed-effects models. Based on this result, we prove that the maximum likelihood estimator is consistent for some frequently used models such as mixed-effects logistic regression models and growth curve models.

5.
Polytomous logistic regression
In this paper a review is given of some methods available for modelling relationships between categorical response variables and explanatory variables. These methods are all classed under the name polytomous logistic regression (PLR). Models for PLR are presented and compared; model parameters are tested and estimated by weighted least squares and by maximum likelihood. Software is usually needed for the computations, and available statistical software is reported.
As an illustration of the use of PLR, an industrial problem is solved to some extent. The paper concludes with a discussion of the various PLR methods, and some topics needing further study are mentioned.
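As a hedged sketch (not the software the paper reports), a maximum-likelihood fit of a polytomous logistic model can be written as plain gradient ascent on the multinomial log-likelihood; all names and settings here are illustrative.

```python
import numpy as np

def fit_softmax(X, y, n_classes, lr=0.1, n_iter=2000):
    """Polytomous (multinomial) logistic regression fitted by gradient
    ascent on the multinomial log-likelihood; returns a weight matrix W
    with one column of coefficients per response category."""
    Xb = np.hstack([np.ones((len(X), 1)), X])      # add an intercept column
    W = np.zeros((Xb.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                       # one-hot coded responses
    for _ in range(n_iter):
        Z = Xb @ W
        P = np.exp(Z - Z.max(axis=1, keepdims=True))   # stable softmax
        P /= P.sum(axis=1, keepdims=True)
        W += lr * Xb.T @ (Y - P) / len(X)          # likelihood gradient step
    return W

def predict(W, X):
    """Predicted category = argmax of the fitted linear predictors."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return np.argmax(Xb @ W, axis=1)
```

On well-separated data this recovers the training labels; real analyses would of course use established statistical software, as the abstract notes.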

6.
This paper comprises general comments on some current statistical issues, inspired in part by some papers that have been published in Statistica Neerlandica. Occasional glimpses of the future are hazarded.

7.
This paper aims to display a synthetic view of the historical development of, and current research on, causal relationships, starting from the Aristotelian doctrine of causes, following the main philosophical streams until the middle of the twentieth century, and commenting on the present intensive research work in the statistical domain. The philosophical survey dwells on various concepts of cause and on some attempts at picking out spurious causes. Concerning statistical modelling, factorial models and directed acyclic graphs are examined and compared. Special attention is devoted to randomization and pseudo-randomization (for observational studies) as means of avoiding the effect of possible confounders. An outline of the most common problems and pitfalls encountered in modelling empirical data closes the paper, with a warning to be very cautious in modelling and in inferring conditional independence between variables.

8.
Developments in the vast and growing literatures on nonparametric and semiparametric statistical estimation are reviewed. The emphasis is on useful methodology rather than statistical properties for their own sake. Some empirical applications to economic data are described. The paper deals separately with nonparametric density estimation, nonparametric regression estimation, and estimation of semiparametric models.

9.
Statistical theory aims to provide a foundation for studying the collection and interpretation of data, a foundation that does not depend on the particular details of the substantive field in which the data are being considered. This gives a systematic way to approach new problems, and a common language for summarising results; ideally, the foundations and common language ensure that statistical aspects of one study, or of several studies on closely related phenomena, can be broadly accessible. We discuss some principles of statistical inference, to outline how these are, or could be, used to inform the interpretation of results, and to provide a greater degree of coherence for the foundations of statistics.

10.
Ordinal measurements such as ratings, preference, and evaluation data are very common in applied disciplines, and their analysis requires a proper modelling approach for the interpretation, classification, and prediction of response patterns. This work offers a comparative discussion of two statistical frameworks that serve these goals: the established class of cumulative models, and a class of mixtures of discrete random variables, denoted CUB models, whose peculiar feature is the specification of an uncertainty component to deal with indecision and heterogeneity. After surveying their definitions and main features, we compare the performances of the selected paradigms by means of simulation experiments and selected case studies. The paper is tailored to enrich the understanding of the two approaches by running an extensive comparative analysis of results, relative advantages, and limitations, also at the graphical level. In conclusion, a summarising review of the key issues of the alternative strategies and some final remarks are given, aimed at supporting a unifying setting.
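For concreteness, the probability mass function of the simplest CUB model, a mixture of a shifted binomial (the "feeling" component) and a discrete uniform (the "uncertainty" component), can be sketched as follows. The parameterisation shown is one common convention, assumed here rather than taken from the paper.

```python
from math import comb

def cub_pmf(m, pi, xi):
    """pmf of a CUB model on the rating scale 1..m:
    pi weights a shifted binomial with feeling parameter xi,
    and (1 - pi) weights a discrete uniform capturing indecision."""
    return [
        pi * comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
        + (1 - pi) / m
        for r in range(1, m + 1)
    ]
```

Setting pi = 0 recovers the pure-uncertainty (uniform) case, while pi = 1 gives the pure shifted-binomial response pattern.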

11.
Forecasting earthquakes and earthquake risk
This paper reviews issues, models, and methodologies arising out of the problems of predicting earthquakes and forecasting earthquake risk. The emphasis is on statistical methods which attempt to quantify the probability of an earthquake occurring within specified time, space, and magnitude windows. One recurring theme is that such probabilities are best developed from models which specify a time-varying conditional intensity (conditional probability per unit time, area or volume, and magnitude interval) for every point in the region under study. The paper comprises three introductory sections and three substantive sections. The former outline the current state of earthquake prediction, earthquakes and their parameters, and the point process background. The latter cover the estimation of background risk, the estimation of time-varying risk, and some specific examples of models and prediction algorithms. The paper concludes with some brief comments on the links between forecasting earthquakes and other forecasting problems.
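The notion of a time-varying conditional intensity translates directly into window probabilities: for a Poisson-type model, the chance of at least one event in a window is one minus the exponential of the integrated intensity. A minimal numerical sketch (midpoint-rule integration, purely illustrative):

```python
from math import exp

def prob_event(intensity, t0, t1, n_steps=10_000):
    """P(at least one event in [t0, t1]) for a Poisson process with
    time-varying intensity lambda(t), via 1 - exp(-integral of lambda),
    with the integral approximated by the midpoint rule."""
    dt = (t1 - t0) / n_steps
    integral = sum(intensity(t0 + (i + 0.5) * dt) for i in range(n_steps)) * dt
    return 1 - exp(-integral)
```

A full conditional-intensity model would let `intensity` depend on the past history of events (as in self-exciting point processes), not only on clock time.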

12.
Books on linear models and multivariate analysis generally include a chapter on matrix algebra, quite rightly so, as matrix results are used in the discussion of statistical methods in these areas. In recent years a number of papers have appeared in which statistical results derived without the use of matrix theorems are used to prove matrix results, which in turn generate other statistical results. This may have some pedagogical value. It is not, however, suggested that prior knowledge of matrix theory is unnecessary for studying statistics. The intention is to show that a judicious use of statistical and matrix results can help provide elegant proofs of problems in both statistics and matrix algebra, and make the study of both subjects more interesting. Some basic notions of vector spaces and matrices are, however, necessary, and these are outlined in the introduction to this paper.

13.
A method is presented to improve the precision of timely data, which are published when final data are not yet available. Explicit statistical formulae, equivalent to Kalman filtering, are derived to combine historical with preliminary information. The application of these formulae is validated by the data, through a statistical test of compatibility between sources of information. A measure of the share of precision of each source of information is also derived. An empirical example with Mexican economic data serves to illustrate the procedure.
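The precision-weighted combination underlying such formulae can be sketched as a single Kalman-type update. The variable names and the two-source setup below are illustrative assumptions, not the paper's notation:

```python
def combine(x_prelim, var_prelim, x_hist, var_hist):
    """Precision-weighted combination of a preliminary figure with a
    historical (model-based) estimate; algebraically equivalent to one
    Kalman filter update. Returns the combined estimate, its variance,
    and the share of precision contributed by the preliminary source."""
    w = var_hist / (var_prelim + var_hist)              # weight on preliminary
    est = w * x_prelim + (1 - w) * x_hist
    var = var_prelim * var_hist / (var_prelim + var_hist)
    share_prelim = (1 / var_prelim) / (1 / var_prelim + 1 / var_hist)
    return est, var, share_prelim
```

The combined variance is always smaller than either input variance, which is the sense in which the timely figure's precision is improved.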

14.
This survey paper gives an impression of the main ways in which statistics is used in disciplines such as sociology and psychology. After an introductory Section 1, the negative image of social science research is discussed in Section 2. Section 3 is devoted to the enormous influence of modern computing facilities. The measurement of human behaviour has its specific problems (Section 4). The use of linear and log-linear models is the topic of Section 5. Latent variables are a basic concept for social and behavioural scientists, both in some linear models (Section 6) and in item response theory (Section 7). In the next section, multidimensional and optimal scaling techniques are mentioned, and a selection of other topics is the content of Section 9. Some general remarks on the generalizability claims of statistical methods constitute the final section. Because of space limitations and priority considerations, the author has decided to write a paper about topics rather than about individual research contributions. For this reason there is no list of references (it would take several pages) and no painful split of Dutch authors into those mentioned and those omitted. In general, the Dutch research community has made quite a few major contributions to the areas discussed in this paper.

15.
Statistical Inference in Nonparametric Frontier Models: The State of the Art
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
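In the single-input, single-output case, the FDH estimator is simple enough to state in a few lines. The following is an illustrative sketch of input-oriented FDH efficiency, computed by enveloping the data under free disposal; the function name and interface are assumptions:

```python
def fdh_input_efficiency(x, y):
    """Input-oriented FDH efficiency scores for single-input (x),
    single-output (y) data. For each unit i the score is
    min over dominating units j (those with y[j] >= y[i]) of x[j]/x[i];
    a score of 1 means the unit lies on the estimated frontier."""
    scores = []
    for i in range(len(x)):
        dominating = [x[j] for j in range(len(x)) if y[j] >= y[i]]
        scores.append(min(dominating) / x[i])
    return scores
```

Because every unit dominates itself, each score is well defined and lies in (0, 1], and all observations sit on one side of the estimated frontier, as the deterministic framework requires.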

16.
A class of linear models is defined which contains many of the usual mixed and random models and allows the construction of tests for a wide class of hypotheses in a general manner. Characterizations are given for this class of models, denoted "regular linear models". Problems of estimation are briefly touched upon, and some aids to practical application are given, followed by two examples.

17.
We deal with general mixtures of hierarchical models of the form m(x) = ∫ f(x|θ) g(θ) dθ, where g(θ) and m(x) are called the mixing and the mixed (or compound) densities, respectively, and θ is called the mixing parameter. The usual statistical application of these models emerges when we have data xi, i = 1, …, n, with densities f(xi|θi) for given θi, and the θi are independent with common density g(θ). For a certain well-known class of densities f(x|θ), we present a sample-based approach to reconstruct g(θ). We first provide theoretical results and then, in an empirical Bayes spirit, use the first four moments of the data to estimate the first four moments of g(θ). Using sampling techniques, we proceed in a fully Bayesian fashion to obtain any posterior summaries of interest. Simulations which investigate the operating characteristics of the proposed methodology are presented. We illustrate our approach using data from mixed Poisson and mixed exponential densities.
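In the mixed Poisson case, the moment-matching step has a closed form: since E[X] = E[θ] and Var(X) = E[θ] + Var(θ), the first two moments of g(θ) follow directly from the sample moments. A minimal sketch (first two moments only, unlike the four the paper uses):

```python
def mixing_moments(data):
    """Method-of-moments estimates of the mean and variance of the
    mixing density g(theta) under a mixed Poisson model, using
    E[X] = E[theta] and Var(X) = E[theta] + Var(theta)."""
    n = len(data)
    mean_x = sum(data) / n
    var_x = sum((x - mean_x) ** 2 for x in data) / (n - 1)
    return mean_x, var_x - mean_x       # (mean of theta, variance of theta)
```

A negative estimated variance of θ would signal underdispersion relative to the Poisson, i.e. data incompatible with a mixed Poisson model.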

18.
The Development of Inter-city Relationships in Western China: Current Situation and Policy Choices
李珍刚, 胡佳. 城市问题 (Urban Problems), 2007(8): 95–100
Cities are the core and carrier of regional development. Harmonious inter-city relationships contribute positively to the efficient allocation of resources, the improvement of urban competitiveness, and regional economic development, and these effects are of particular significance for western China. After years of development, inter-city relationships in the west have achieved some success and formed certain relationship patterns, but various difficulties remain. Further room for developing these relationships should therefore be sought in the cities' own development, cooperation between city governments, city alliances, and the growth of central cities.

19.
This paper is concerned with the statistical analysis of proportions involving extra-binomial variation. Extra-binomial variation is inherent in experimental situations where experimental units are subject to some source of variation, e.g. biological or environmental variation. A generalized linear model for proportions does not account for random variation between experimental units. In this paper an extended version of the generalized linear model is discussed, with special reference to experiments in agricultural research. In this model it is assumed that both treatment effects and random contributions of plots are part of the linear predictor. The methods are applied to results from two agricultural experiments.
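A standard way to represent extra-binomial variation (though not the specific model of the paper) is the beta-binomial, whose variance inflates the binomial variance by an overdispersion factor. A sketch, using the standard beta-binomial parameterisation as an assumption:

```python
def betabinom_variance(n, a, b):
    """Variance of a beta-binomial count with n trials and Beta(a, b)
    mixing: the binomial variance n*p*(1-p) inflated by the factor
    1 + (n - 1) * rho, where rho = 1 / (a + b + 1) captures the
    extra-binomial (between-unit) variation."""
    p = a / (a + b)
    rho = 1 / (a + b + 1)
    return n * p * (1 - p) * (1 + (n - 1) * rho)
```

As a + b grows, rho shrinks to zero and the ordinary binomial variance is recovered, i.e. no between-unit variation remains.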

20.
Gradual location set covering with service quality
Location set covering models were first described in the early 1970s. In their simplest form, they minimize the number of facilities necessary to completely cover a set of customers in some given space, where covering means providing service within a predetermined distance. This paper considers extensions of the basic model that soften the covered/not covered dichotomy and replace it with gradual covering. The models discussed in this work include the quality of service as a criterion. The models are formulated and compared with each other with respect to their size and features. A small series of computational tests concludes the paper.
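The basic (all-or-nothing) set covering problem is typically solved as an integer program, but a common illustrative heuristic is greedy selection. A hypothetical sketch, which does not capture the paper's gradual-covering or service-quality extensions:

```python
def greedy_set_cover(customers, coverage):
    """Greedy heuristic for the location set covering problem:
    repeatedly open the facility that covers the most still-uncovered
    customers. `coverage` maps each candidate facility to the set of
    customers it can serve within the predetermined distance."""
    uncovered = set(customers)
    opened = []
    while uncovered:
        best = max(coverage, key=lambda f: len(coverage[f] & uncovered))
        if not coverage[best] & uncovered:
            break                       # remaining customers are uncoverable
        opened.append(best)
        uncovered -= coverage[best]
    return opened
```

Gradual covering replaces the binary membership test in `coverage` with a coverage level that decays with distance, which changes the objective but not the overall greedy structure.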

