17 related records found.
1.
The authors consider the problem of estimating a conditional density by a conditional kernel density estimate when the error associated with the estimate is measured by the L1‐norm. On the basis of the combinatorial method of Devroye and Lugosi (1996), they propose a method for selecting the bandwidths adaptively and provide a theoretical justification of the approach. They use simulated data to illustrate the finite‐sample performance of their estimator.
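As background, the sketch below shows a generic Nadaraya-Watson-type conditional kernel density estimator with fixed bandwidths. It is not the authors' adaptive L1-based bandwidth selection procedure; the function names, bandwidth values and toy data are illustrative assumptions only.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def conditional_kde(y_grid, x0, X, Y, hx, hy):
    """Generic conditional density estimate f(y | x0): kernel weights in x
    reweight a kernel density estimate in y. hx and hy are fixed here,
    whereas the paper selects them adaptively."""
    wx = gaussian_kernel((x0 - X) / hx)
    wx = wx / (wx.sum() + 1e-12)                  # normalise the x-weights
    return np.array([np.sum(wx * gaussian_kernel((y - Y) / hy)) / hy
                     for y in y_grid])

# toy data: Y = X + noise, estimate f(y | x = 0)
rng = np.random.default_rng(0)
X = rng.normal(size=500)
Y = X + 0.5 * rng.normal(size=500)
y_grid = np.linspace(-3, 3, 61)
dens = conditional_kde(y_grid, x0=0.0, X=X, Y=Y, hx=0.3, hy=0.3)
print(dens.sum() * (y_grid[1] - y_grid[0]))       # integrates to roughly 1
```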
2.
Over the last decades, several methods for selecting the bandwidth have been introduced in kernel regression. They differ considerably, and although more selection methods already exist for kernel regression than for any other regression smoother, new ones continue to appear. Given the need for automatic, data‐driven bandwidth selectors in applied statistics, this review is intended to explain and, above all, compare these methods. About 20 different selection methods have been reviewed, implemented and compared in an extensive simulation study.
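One classical selector covered by such reviews is leave-one-out cross-validation. The sketch below, with assumed helper names and toy data, shows it for the Nadaraya-Watson estimator; it is one representative method, not the paper's full comparison.

```python
import numpy as np

def nw_loocv_score(h, X, Y):
    """Leave-one-out cross-validation score for the Nadaraya-Watson
    regression estimator with a Gaussian kernel and bandwidth h."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / h) ** 2)
    np.fill_diagonal(K, 0.0)                      # leave each point out
    W = K / (K.sum(axis=1, keepdims=True) + 1e-12)
    resid = Y - W @ Y
    return np.mean(resid ** 2)

def select_bandwidth(X, Y, grid):
    """Return the bandwidth in `grid` that minimises the LOO-CV score."""
    scores = [nw_loocv_score(h, X, Y) for h in grid]
    return grid[int(np.argmin(scores))]

# toy data: Y = sin(2X) + noise
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, 300)
Y = np.sin(2 * X) + 0.3 * rng.normal(size=300)
print("CV bandwidth:", select_bandwidth(X, Y, np.linspace(0.05, 1.0, 40)))
```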
3.
Giovanna Menardi 《Revue internationale de statistique》2016,84(3):413-433
In spite of the current availability of numerous methods of cluster analysis, evaluating a clustering configuration is questionable without the definition of a true population structure, representing the ideal partition that clustering methods should try to approximate. A precise statistical notion of cluster, not shared by most of the mainstream methods, is provided by the density‐based approach, which assumes that clusters are associated with specific features of the probability distribution underlying the data. The non‐parametric formulation of this approach, known as modal clustering, draws a correspondence between the groups and the modes of the density function. An appealing implication is that the ill‐specified task of cluster detection can be regarded as a more circumscribed problem of estimation, and the number of clusters is also conceptually well defined. In this work, modal clustering is critically reviewed from both conceptual and operational standpoints. The main directions of current research are outlined, along with some challenges and directions for further research.
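A standard operational route to modal clustering is the mean-shift algorithm, which moves each observation uphill on a kernel density estimate and groups observations that converge to the same mode. The sketch below is a minimal illustration under that assumption, with a Gaussian kernel and a hypothetical bandwidth and grouping tolerance; it is not the specific procedures reviewed in the paper.

```python
import numpy as np

def mean_shift(X, h, n_iter=200, tol=1e-6):
    """Mean-shift iteration: each point climbs the Gaussian kernel density
    estimate; points reaching the same mode form one cluster."""
    modes = X.copy()
    for _ in range(n_iter):
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-0.5 * d2 / h ** 2)
        new = (W[:, :, None] * X[None, :, :]).sum(1) / W.sum(1, keepdims=True)
        if np.max(np.abs(new - modes)) < tol:
            modes = new
            break
        modes = new
    # group points whose limiting modes coincide (up to a tolerance)
    labels, centers = np.zeros(len(X), dtype=int), []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < h / 2:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
labels, centers = mean_shift(X, h=0.5)
print("number of modes found:", len(centers))   # expected: 2
```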
4.
M. C. Jones 《Metrika》1992,39(1):335-340
Estimators of derivatives of a density function based on differences of the empirical distribution function (Maltz 1974) are identified as derivatives of kernel density estimators using particular kernel functions. Properties of this family of kernels are investigated.
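For context, a kernel estimator of the first derivative of a density can be written directly as the derivative of the kernel density estimate. The sketch below uses a Gaussian kernel and an assumed bandwidth; it is not the difference-based construction of Maltz (1974) discussed in the paper.

```python
import numpy as np

def density_derivative(x_grid, X, h):
    """Kernel estimate of f'(x): (1 / (n * h**2)) * sum_i K'((x - X_i) / h),
    where K'(u) = -u * phi(u) for the Gaussian kernel."""
    u = (x_grid[:, None] - X[None, :]) / h
    kprime = -u * np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return kprime.sum(axis=1) / (len(X) * h ** 2)

rng = np.random.default_rng(3)
X = rng.normal(size=2000)
x_grid = np.linspace(-3, 3, 7)
print(density_derivative(x_grid, X, h=0.4))   # close to -x * phi(x) for a standard normal
```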
5.
Starting from the pioneering works of Shannon and Wiener in 1948, a plethora of works on entropy has been reported in different directions. To the best of our knowledge, review work on entropy in the direction of statistical inference has not been reported so far. Here, we have tried to collect all possible works in this direction from the last seven decades, so that people interested in entropy, especially new researchers, can benefit.
6.
Regional Differences and Dynamic Evolution of the High-Quality Development of China's Digital Economy
The regional differences and dynamic evolution of the high-quality development of the digital economy form an important basis for ensuring its steady, long-term growth. This study constructs an evaluation system for the high-quality development of the digital economy with five dimensions (innovation, coordination, greenness, openness and sharing) and uses the Dagum Gini coefficient and kernel density estimation to examine the regional differences and dynamic evolution of that development. The findings show significant gaps across the eight comprehensive economic zones: the northern, eastern and southern coastal economic zones exhibit high levels of development; the northeastern, middle-Yangtze and northwestern comprehensive economic zones are at intermediate levels; and the middle-Yellow-River and southwestern comprehensive economic zones lag behind. In view of this, regional strategies for the high-quality development of the digital economy should be formulated promptly, and development should be promoted through the five systems of innovation, coordination, greenness, openness and sharing.
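The Dagum decomposition used in the study starts from the ordinary Gini coefficient of a development index across regions. The sketch below computes only that overall coefficient from hypothetical index values; the Dagum within-group and between-group terms and the kernel density step are not shown.

```python
import numpy as np

def gini(x):
    """Gini coefficient via the mean absolute difference: G = MAD / (2 * mean)."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()   # mean absolute difference
    return mad / (2 * x.mean())

# hypothetical regional values of a digital-economy development index
scores = np.array([0.62, 0.55, 0.48, 0.40, 0.35, 0.30])
print(round(gini(scores), 3))
```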
7.
A strong law of large numbers for a triangular array of strictly stationary associated random variables is proved. It is used to derive the pointwise strong consistency of a kernel-type density estimator of the one-dimensional marginal density function of a strictly stationary sequence of associated random variables, and to obtain an improved version of a result by Van Ryzin (1969) on the strong consistency of the density estimator for a sequence of independent and identically distributed random variables.
8.
《Spatial Economic Analysis》2013,8(4):451-471
This paper demonstrates an evaluation of welfare policies and of the regional allocation of public investment using recent developments in efficiency analysis and statistical inference. Specifically, the efficiency of the welfare policies of the Greek prefectures for the census years 1980, 1990 and 2000 is compared and analyzed. Using bootstrap techniques on unconditional and conditional full frontier applications, the paper indicates that there are major welfare inefficiencies among the prefectures over the three census years. The analysis reveals that the increase in population density over the years has a negative impact on the welfare efficiency levels of the Greek prefectures.
9.
In toxicity studies, model mis‐specification can lead to serious bias or faulty conclusions. As a prelude to subsequent statistical inference, model selection plays a key role in toxicological studies. It is well known that the Bayes factor and the cross‐validation method are useful tools for model selection. However, exact computation of the Bayes factor is usually difficult and sometimes impossible, which may hinder its application. In this paper, we recommend using the simple Schwarz criterion to approximate the Bayes factor for the sake of computational simplicity. To illustrate the importance of model selection in toxicity studies, we consider two real data sets. The first comes from a study of dietary fortification with carbonyl iron, in which the Bayes factor and cross‐validation are used to determine the number of sub‐populations in a normal mixture model. The second involves a developmental toxicity study in which the selection of dose–response functions in a beta‐binomial model is explored.
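The Schwarz approximation mentioned here replaces the marginal likelihoods with BIC values, giving BF_10 approximately exp((BIC_0 - BIC_1) / 2). The sketch below illustrates that arithmetic with hypothetical log-likelihoods and parameter counts; it is not tied to the paper's carbonyl-iron or developmental-toxicity data.

```python
import numpy as np

def bic(loglik, n_params, n_obs):
    """Schwarz (Bayesian) information criterion."""
    return n_params * np.log(n_obs) - 2.0 * loglik

def approx_bayes_factor(loglik0, k0, loglik1, k1, n):
    """Schwarz approximation of the Bayes factor of model 1 vs model 0:
    BF_10 ~ exp((BIC_0 - BIC_1) / 2)."""
    return np.exp((bic(loglik0, k0, n) - bic(loglik1, k1, n)) / 2.0)

# hypothetical fits: one-component vs two-component mixture, n = 120 observations
print(approx_bayes_factor(loglik0=-310.2, k0=2, loglik1=-301.7, k1=5, n=120))
```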
10.
Byeong U. Park, Enno Mammen, Young K. Lee, Eun Ryung Lee 《Revue internationale de statistique》2015,83(1):36-64
Varying coefficient regression models are known to be very useful tools for analysing the relation between a response and a group of covariates. Their structure and interpretability are similar to those of the traditional linear regression model, but they are more flexible because of the infinite dimensionality of the corresponding parameter spaces. The aims of this paper are to give an overview of the existing methodological and theoretical developments for varying coefficient models and to discuss their extensions with some new developments. The new developments enable us to use different amounts of smoothing for estimating different component functions in the models. They apply to a flexible form of varying coefficient models that requires smoothing across different covariates' spaces, and they are based on the smooth backfitting technique, which is recognized as a powerful tool for fitting structural regression models and is known to be free of the curse of dimensionality.
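To make the model concrete: in the simplest one-index case Y = X'beta(U) + error, the coefficient functions can be estimated at each point u by kernel-weighted least squares. The sketch below shows this local-constant fit with an assumed Gaussian kernel and bandwidth; it is a baseline estimator, not the smooth backfitting procedure discussed in the paper.

```python
import numpy as np

def varying_coef(u0, U, X, Y, h):
    """Local (kernel-weighted least squares) fit of a varying coefficient
    model Y = X' beta(U) + error, evaluated at the index value u0."""
    w = np.exp(-0.5 * ((U - u0) / h) ** 2)   # Gaussian kernel weights
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ Y)

# toy model: beta(u) = (u, sin(2*pi*u))
rng = np.random.default_rng(7)
n = 1000
U = rng.uniform(size=n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = U + np.sin(2 * np.pi * U) * X[:, 1] + 0.1 * rng.normal(size=n)
print(varying_coef(0.25, U, X, Y, h=0.05))   # roughly [0.25, 1.0]
```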
11.
Bayes factors that do not require prior distributions are proposed for testing one parametric model versus another. These Bayes factors are relatively simple to compute, relying only on maximum likelihood estimates, and are Bayes consistent at an exponential rate for nested models even when the smaller model is true. These desirable properties derive from the use of data splitting. Large sample properties, including consistency, of the Bayes factors are derived, and a simulation study explores practical concerns. The methodology is illustrated with civil engineering data involving compressive strength of concrete.
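The data-splitting idea can be illustrated in the simplest nested setting, a normal mean with known variance: the unrestricted MLE is computed on one half of the data and the likelihood ratio against the null is evaluated on the held-out half, so no prior is needed. The sketch below is such a toy illustration with assumed helper names; it is not the authors' exact construction.

```python
import numpy as np

def split_bayes_factor(x, rng):
    """Toy data-splitting Bayes factor for H0: mean = 0 vs H1: mean free,
    with unit variance assumed. The MLE is taken from the training half and
    the likelihood ratio is evaluated on the held-out half."""
    idx = rng.permutation(len(x))
    train, test = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]
    mu_hat = train.mean()                        # MLE on the training half
    loglik1 = -0.5 * np.sum((test - mu_hat) ** 2)
    loglik0 = -0.5 * np.sum(test ** 2)
    return np.exp(loglik1 - loglik0)             # evidence for H1 over H0

rng = np.random.default_rng(4)
print(split_bayes_factor(rng.normal(loc=0.5, size=200), rng))   # typically > 1
print(split_bayes_factor(rng.normal(loc=0.0, size=200), rng))   # typically near or below 1
```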
12.
We study parametric and non‐parametric approaches for assessing the accuracy and coverage of a population census based on dual system surveys. The two parametric approaches considered are post‐stratification and logistic regression, which have been or will be implemented for the US Census dual system surveys. We show that the parametric model‐based approaches are generally biased unless the model is correctly specified. We then study a local post‐stratification approach based on a non‐parametric kernel estimate of the census enumeration functions. We illustrate that the non‐parametric approach avoids the risk of model mis‐specification and is consistent under relatively weak conditions. The performance of these estimators is evaluated numerically via simulation studies and an empirical analysis based on the 2000 US Census post‐enumeration survey data.
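For readers unfamiliar with dual system estimation: within each post-stratum, the census count, the independent survey count and the number of matched individuals give a Lincoln-Petersen estimate of the true population, and the stratum estimates are summed. The sketch below uses hypothetical counts; the local, kernel-based post-stratification studied in the paper replaces the fixed strata with smooth weights.

```python
import numpy as np

def dual_system_estimate(census, survey, matched):
    """Post-stratified dual-system (Lincoln-Petersen) population estimate:
    within each stratum N_hat = census * survey / matched, then sum over strata."""
    census, survey, matched = map(np.asarray, (census, survey, matched))
    return np.sum(census * survey / matched)

# hypothetical counts for three post-strata
print(dual_system_estimate(census=[900, 1200, 700],
                           survey=[300, 380, 260],
                           matched=[270, 330, 220]))
```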
13.
Using data from 170 for‐profit U.S. firms with 100 or more employees, drawn from 27 North American Industry Classification System (NAICS) industry subsectors, we investigated firm‐level precursors of HR flexibility and industry‐level boundary conditions of the relationship between HR flexibility and firm financial performance. The findings indicate that a contingency perspective is warranted, in which consideration should be given to firm‐level factors such as a flexibility‐oriented business strategy and high‐performance work systems, which may play a key role in engendering HR flexibility, and to external factors such as industry dynamism and growth, which may serve as boundary conditions that influence the relevance and impact of HR flexibility. This study is an important extension of extant HR flexibility research and adds clarity regarding the roles and relevance of HR flexibility and the circumstances in which HR flexibility and/or its focal factors may augment (or diminish) firm competitiveness and performance.
14.
Irène Gijbels, Rezaul Karim, Anneleen Verhasselt 《Revue internationale de statistique》2019,87(3):471-504
In this paper, we provide a detailed study of a general family of asymmetric densities. In the general framework, we establish expressions for important characteristics of the distributions and discuss estimation of the parameters via the method of moments as well as maximum likelihood estimation. Asymptotic normality results for the estimators are provided. The results obtained under the general framework are then applied to some specific examples of asymmetric densities. The use of the asymmetric densities is illustrated in a real‐data analysis.
15.
《Revue internationale de statistique》2017,85(1):61-83
Functional data analysis is a field of growing importance in statistics. In particular, the functional linear model with scalar response is surely the model that has attracted the most attention in both theoretical and applied research. Two of the most important methodologies used to estimate the parameters of the functional linear model with scalar response are functional principal component regression and functional partial least‐squares regression. We provide an overview of estimation methods based on these methodologies and discuss their advantages and disadvantages. We emphasise that the functional principal components and functional partial least‐squares components used in estimation appear to play a very important role in estimating the functional slope of the model. A functional version of the best‐subset selection strategy commonly used in multiple linear regression is also analysed. Finally, we present an extensive simulation study comparing the performance of all the considered methodologies, which may help practitioners in the use of the functional linear model with scalar response.
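As a rough illustration of the first of these methodologies, the sketch below performs functional principal component regression on curves observed on a common dense grid: the curves are projected onto their leading principal components, the scalar response is regressed on the scores, and the slope function is rebuilt from the loadings. Grid-spacing constants are ignored, and the data, names and number of components are assumptions for illustration; the partial least-squares and best-subset variants compared in the paper are not shown.

```python
import numpy as np

def fpcr(curves, y, n_comp):
    """Functional principal component regression on densely observed curves.

    curves : (n, T) array, each row a curve on a common grid.
    y      : scalar responses of length n.
    Returns the intercept and the estimated slope function on the grid."""
    Xc = curves - curves.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_comp].T                                   # component scores
    design = np.column_stack([np.ones(len(y)), scores])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    beta_fun = Vt[:n_comp].T @ coef[1:]                           # slope rebuilt from loadings
    return coef[0], beta_fun

# toy functional regression with a known smooth slope
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)
curves = rng.normal(size=(200, 1)) * np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(200, 50))
y = curves @ np.cos(2 * np.pi * t) / 50 + 0.05 * rng.normal(size=200)
intercept, beta_hat = fpcr(curves, y, n_comp=3)
print(beta_hat.shape)      # slope function on the same grid as the curves
```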
16.
Recent development of intensity estimation for inhomogeneous spatial point processes with covariates suggests that kerneling in the covariate space is a competitive intensity estimation method for inhomogeneous Poisson processes. It is not known whether this advantageous performance remains valid when the points interact. In the simplest common case, this happens, for example, when the objects represented as points have a spatial extent. In this paper, kerneling in the covariate space is extended to Gibbs processes with covariate‐dependent chemical activity and inhibitive interactions, and the performance of the approach is studied through extensive simulation experiments. It is demonstrated that, under mild assumptions on the dependence of the intensity on covariates, this approach can provide better results than the classical nonparametric method based on local smoothing in the spatial domain. In comparison with parametric pseudo‐likelihood estimation, the nonparametric approach can be more accurate, particularly when the dependence on covariates is weak or when there is uncertainty about the model or about the range of interactions. An important supplementary task is the dimension reduction of the covariate space. It is shown that techniques based on inverse regression, previously applied to Cox processes, are useful even when interactions are present.
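For the Poisson baseline mentioned at the start of the abstract, kerneling in the covariate space estimates the intensity as a function of a covariate z by dividing a kernel sum over the observed points by the kernel-smoothed distribution of z over the observation window. The sketch below is a toy version with a one-dimensional covariate and a simulated thinned point pattern; the Gibbs extension and the inverse-regression dimension reduction from the paper are not shown.

```python
import numpy as np

def intensity_on_covariate(z_points, z_grid, area_per_cell, z_eval, h):
    """Kernel estimate of the intensity as a function of a covariate z:
        rho(z) = sum_i K_h(z - z(x_i)) / integral_W K_h(z - z(u)) du,
    with the integral approximated on a grid covering the window W."""
    def K(u):
        return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))
    num = K(z_eval[:, None] - z_points[None, :]).sum(axis=1)
    den = (K(z_eval[:, None] - z_grid[None, :]) * area_per_cell).sum(axis=1)
    return num / den

# toy pattern on the unit square: covariate z = x-coordinate,
# points thinned with probability z, so intensity is proportional to z
rng = np.random.default_rng(6)
pts = rng.uniform(size=(2000, 2))
keep = rng.uniform(size=2000) < pts[:, 0]
z_points = pts[keep, 0]
z_grid = np.linspace(0.005, 0.995, 100)     # strips of the window, area 0.01 each
rho = intensity_on_covariate(z_points, z_grid, area_per_cell=0.01,
                             z_eval=np.array([0.25, 0.75]), h=0.1)
print(rho)                                   # roughly proportional to z
```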
17.
David R. Bickel 《Revue internationale de statistique》2013,81(2):188-206
While the likelihood ratio measures statistical support for an alternative hypothesis about a single parameter value, it is undefined for an alternative hypothesis that is composite in the sense that it corresponds to multiple parameter values. Regarding the parameter of interest as a random variable enables measuring support for a composite alternative hypothesis without requiring the elicitation or estimation of a prior distribution, as described below. In this setting, in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference in the posterior and prior log‐odds. That ideal support may be replaced by any measure of support that, on a per‐observation basis, is asymptotically unbiased as a predictor of the ideal support. Such measures of support are easily interpreted and, if desired, can be combined with any specified or estimated prior probability of the null hypothesis. Two qualifying measures of support are minimax‐optimal. An application to proteomics data indicates that a modification of optimal support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins.
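The "ideal support" defined in the abstract is simply the posterior log-odds of the alternative minus its prior log-odds. The sketch below computes that quantity from hypothetical prior and posterior probabilities; the minimax-optimal approximations and the proteomics application are not reproduced.

```python
import numpy as np

def log_odds(p):
    """Log-odds of a probability p."""
    return np.log(p) - np.log(1.0 - p)

def ideal_support(prior_prob_alt, posterior_prob_alt):
    """Difference between posterior and prior log-odds of the alternative,
    the ideal support measure described in the abstract."""
    return log_odds(posterior_prob_alt) - log_odds(prior_prob_alt)

# hypothetical probabilities for a single comparison
print(round(ideal_support(prior_prob_alt=0.10, posterior_prob_alt=0.45), 3))
```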