1.
Both standard and robust methods are used here to estimate Engel curve models for three household commodities in Canada, namely food, transport, and tobacco and alcohol. The income elasticities of demand computed from the various methods differ significantly for transport and tobacco-alcohol consumption, where there are obvious outliers and zero-expenditure problems. The robust estimators point to lower income elasticities and perform better than the standard LS and Tobit estimators. These results are analyzed in the light of the information on finite-sample performance obtained in a previous Monte Carlo study. First version received: July 2000 / final version received: July 2001. I wish to thank Victoria Zinde-Walsh, John Galbraith, Clint Coakley, two anonymous referees and an associate editor for helpful comments. I would also like to thank Anastassia Khouri for kindly providing the 1992 Family Expenditure Survey of Canada data.
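A minimal sketch of the comparison described above, using simulated data rather than the Canadian survey, and a Huber M-estimator rather than the paper's exact robust estimators: the income elasticity is read off the slope of a double-log Engel curve, and a few contaminated high-income observations are enough to distort OLS while the robust fit is far less affected.

```python
# Simulated data, not the 1992 Family Expenditure Survey; Huber M-estimation,
# not the paper's exact estimators.  True income elasticity is 0.6.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
log_income = rng.normal(10.0, 0.5, n)
log_food = 2.0 + 0.6 * log_income + rng.normal(0.0, 0.3, n)

idx = np.argsort(log_income)[-10:]            # ten highest-income households
log_food[idx] = rng.normal(2.0, 0.3, 10)      # implausibly low reported spending

X = sm.add_constant(log_income)
ols = sm.OLS(log_food, X).fit()
rlm = sm.RLM(log_food, X, M=sm.robust.norms.HuberT()).fit()

print("OLS elasticity   :", ols.params[1])    # pulled away from 0.6 by the outliers
print("robust elasticity:", rlm.params[1])    # should stay much closer to 0.6
```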
2.
Systematic risk estimation in the presence of large and many outliers
It is well recognized that the effect of extreme points on systematic risk estimates is not adequately captured through least squares estimation. This article uses the reweighted least median squares (RWLMS) approach, first proposed by Rousseeuw (1984), which accurately detects the presence of outliers. Using a large sample of 1350 NYSE/AMEX firms, the article demonstrates that least squares does indeed mask several potentially influential points, that this masking is pervasive across the sample, and that it may persist even after conventional robust estimation techniques are applied. When these masked points are unmasked by RWLMS and assigned zero weights, the resulting RWLMS estimates of beta are on average 10%-15% smaller. However, a Bayesian treatment of such points (assigning a priori nonzero weights) is possible in both one- and two-factor market models.
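A rough sketch of the RWLMS idea for a one-factor market model, assuming simple random two-point subsampling to approximate the least-median-of-squares fit (not Rousseeuw's production algorithm, and without his small-sample scale correction): observations whose standardized LMS residuals exceed a cutoff get zero weight in a final least-squares refit.

```python
# RWLMS sketch for r_i = alpha + beta * r_m + e_i.  The 1.4826 scale constant and
# the 2.5 cutoff are the commonly used values, taken here as working assumptions.
import numpy as np

def rwlms_beta(r_asset, r_market, n_subsets=2000, cutoff=2.5, seed=0):
    rng = np.random.default_rng(seed)
    r_asset, r_market = np.asarray(r_asset), np.asarray(r_market)
    n = len(r_asset)
    best = (np.inf, 0.0, 0.0)                      # (median squared residual, alpha, beta)
    for _ in range(n_subsets):
        i, j = rng.choice(n, size=2, replace=False)
        if r_market[i] == r_market[j]:
            continue
        beta = (r_asset[i] - r_asset[j]) / (r_market[i] - r_market[j])
        alpha = r_asset[i] - beta * r_market[i]
        med = np.median((r_asset - alpha - beta * r_market) ** 2)
        if med < best[0]:
            best = (med, alpha, beta)
    med, alpha, beta = best
    resid = r_asset - alpha - beta * r_market
    scale = 1.4826 * np.sqrt(med)                  # robust residual scale from the LMS fit
    keep = np.abs(resid / scale) <= cutoff         # zero weight for flagged observations
    X = np.column_stack([np.ones(keep.sum()), r_market[keep]])
    coef, *_ = np.linalg.lstsq(X, r_asset[keep], rcond=None)
    return coef[1], keep                           # reweighted beta and the mask of kept points
```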
3.
This paper reflects on the problems that can arise in key sector analysis and industrial clustering due to the usual presence of outliers when using multidimensional data on the sectors of an input–output table. Multidimensional outliers are considered to be linked not only to the low number of clusters usually observed in this kind of study, but probably also to invalid results in much of the work involving multivariate statistical techniques such as cluster and factor analysis. Indeed, comparing the key sectors of the Spanish economy obtained in Díaz et al. (2006) (Díaz, B., Moniche, L. and Morillas, A., A fuzzy clustering approach to the key sectors of the Spanish economy, Economic Systems Research, 18: 299-318) with the ones obtained once the outlier problem is taken into account shows that outliers greatly distort the results. On the other hand, it is shown that identifying outliers can serve as a useful new procedure for selecting the most important sectors in an economy.
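The paper's own detection procedure is not reproduced here; as a generic illustration, a common way to flag multivariate outliers before any cluster or factor analysis is to compute robust Mahalanobis distances from a Minimum Covariance Determinant fit and compare them with a chi-square cutoff. The sector indicator matrix and the cutoff level below are purely illustrative.

```python
# Generic multivariate outlier screen for sector data, not the paper's procedure.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def flag_outlier_sectors(X, alpha=0.025, seed=0):
    """X: (n_sectors, n_indicators) array of linkage/centrality indicators."""
    mcd = MinCovDet(random_state=seed).fit(X)
    d2 = mcd.mahalanobis(X)                     # squared robust Mahalanobis distances
    cutoff = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > cutoff                          # True marks a candidate outlier sector
```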
4.
This paper examines the quality of data on household assets, liabilities and net worth in the South African National Income Dynamics Study (NIDS) Wave 2. The NIDS is the first nationally representative survey of household wealth in South Africa. The cross-sectionally weighted data are found to be fit for use in terms of the univariate distributions of net worth, assets and liabilities, but population totals are probably underestimated because missing wealth data in Phase 2 of Wave 2 are not taken into account in the weights. When compared with national accounts estimates of household net worth, there is an apparent inversion of the estimated totals of financial versus non-financial assets; further research is required into why this is so. We find that the NIDS wealth module is a suitable instrument for the analysis of household wealth.
5.
This paper develops a Pareto scale-inflated outlier model. The model is intended for use when data from some standard Pareto distribution of interest are suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three worked illustrative examples, two of which feature actual insurance claims data.
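To make the model concrete, here is a simulation sketch of the data-generating process it describes: with a small probability each claim comes from a Pareto with the same shape parameter but a scale inflated by a factor k. The Gibbs sampler itself is not reproduced, and every parameter value below is illustrative.

```python
# Simulation sketch of the Pareto scale-inflated outlier model.
import numpy as np

def simulate_claims(n, alpha=1.8, scale=1000.0, p=0.05, k=10.0, seed=0):
    rng = np.random.default_rng(seed)
    inflated = rng.random(n) < p                   # latent outlier indicators
    scales = np.where(inflated, k * scale, scale)
    u = rng.random(n)
    claims = scales * u ** (-1.0 / alpha)          # inverse-CDF draw from Pareto(alpha, scale)
    return claims, inflated

claims, inflated = simulate_claims(5000)
print("share of inflated-scale claims:", inflated.mean())
print("largest clean claim   :", claims[~inflated].max())
print("largest inflated claim:", claims[inflated].max())
```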
6.
Standard model-based small area estimates perform poorly in the presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method to handle outliers in unit-level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit-level error to model outliers and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Application of our method to a data set suspected to contain an outlier confirms this suspicion, correctly identifies the suspected outlier, and produces robust predictors and posterior standard deviations of the small area means. Evaluation of several procedures, including the M-quantile method of Chambers & Tzavidis (2006), via simulation shows that our proposed method is as good as the other procedures in terms of bias, variability and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, our method and the Sinha-Rao method perform similarly and improve over the other methods. This dual (Bayes and frequentist) dominance should make the procedure attractive to all practitioners of small area estimation, Bayesians and frequentists alike.
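A data-generating sketch of the extended model described above, not the authors' estimation procedure: a nested error regression in which the area effects are normal and the unit-level error is a two-component normal mixture whose wide component absorbs outliers. All names and parameter values are illustrative.

```python
# Nested error regression with a normal-mixture unit-level error (data generation only).
import numpy as np

def simulate_small_areas(n_areas=20, n_units=30, beta=(2.0, 1.5),
                         sigma_v=0.5, sigma_e=1.0, p_out=0.05, var_infl=9.0, seed=0):
    rng = np.random.default_rng(seed)
    area = np.repeat(np.arange(n_areas), n_units)          # area label for each unit
    x = rng.normal(0.0, 1.0, n_areas * n_units)
    v = rng.normal(0.0, sigma_v, n_areas)                  # area random effects
    is_out = rng.random(n_areas * n_units) < p_out         # units from the wide component
    e = rng.normal(0.0, np.where(is_out, np.sqrt(var_infl) * sigma_e, sigma_e))
    y = beta[0] + beta[1] * x + v[area] + e
    return area, x, y, is_out
```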
7.
The theory of robustness modelling is essentially based on heavy-tailed distributions, because longer tails are better able to accommodate aberrant information (such as outliers) thanks to the higher probability they place in the tails. There are many classes of distributions that can be regarded as heavy-tailed; some of them have interesting properties that have not yet been explored in statistics. In the present work, we propose a robustness modelling approach based on the O-regularly varying (ORV) class, a generalization of the regular variation family that allows more flexible tail behaviour and can improve the way in which outlying information is discarded by the model. We establish sufficient conditions on the location and scale parameter structures that allow conflicts of information to be resolved automatically. We also provide a procedure for generating new distributions within the ORV class.
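A small numerical illustration of the general heavy-tail mechanism (not the ORV machinery itself): with a flat prior on a location parameter, a Student-t likelihood largely ignores one remote observation, whereas a normal likelihood is dragged toward it. The data values and the grid are arbitrary.

```python
# Conflict resolution via heavy tails: posterior mean of a location parameter
# under a normal versus a Student-t likelihood, computed on a grid.
import numpy as np
from scipy.stats import norm, t

data = np.array([0.1, -0.3, 0.2, 0.0, 12.0])          # the last point conflicts with the rest
mu_grid = np.linspace(-5.0, 15.0, 4001)

def posterior_mean(logpdf):
    loglik = sum(logpdf(x - mu_grid) for x in data)   # flat prior, posterior proportional to likelihood
    w = np.exp(loglik - loglik.max())
    return np.sum(mu_grid * w) / np.sum(w)

print("posterior mean, normal likelihood   :", posterior_mean(norm.logpdf))
print("posterior mean, Student-t likelihood:", posterior_mean(lambda r: t.logpdf(r, df=2)))
```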
8.
D. R. Jensen, Metrika, 2000, 52(3): 213-223
Recent work by LaMotte (1999) uncovered redundancies and inconsistencies in the current practice of selected deletion diagnostics in regression. The present study extends that work to further diagnostics based on different methods. Benchmarks adjusted to the scale of each diagnostic are given to ensure consistency across diagnostics. Case studies illustrate anomalies in the use of these diagnostics as currently practiced. Alternative diagnostics are given to gauge the effects of single-case deletions on variances and biases in prediction and estimation. Received: November 1999
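For readers unfamiliar with the standard single-case deletion diagnostics this literature builds on, here is a brief sketch using statsmodels; the paper's scale-adjusted benchmarks and alternative diagnostics are not reproduced, and the planted influential case is purely illustrative.

```python
# Standard single-case deletion diagnostics in OLS via statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(100, 2)))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=100)
y[0] += 8.0                                     # plant one aberrant response

infl = sm.OLS(y, X).fit().get_influence()
cooks_d, _ = infl.cooks_distance                # change in fit from deleting each case
dffits, _ = infl.dffits                         # scaled change in the fitted value
leverage = infl.hat_matrix_diag                 # diagonal of the hat matrix

worst = int(np.argmax(cooks_d))
print("largest Cook's distance at case", worst,
      "| DFFITS:", dffits[worst], "| leverage:", leverage[worst])
```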
9.
In frontier analysis, most nonparametric approaches (DEA, FDH) are based on envelopment ideas which assume that, with probability one, all observed units belong to the attainable set. In these "deterministic" frontier models, statistical inference is now possible by using bootstrap procedures. In the presence of noise, however, envelopment estimators can behave very badly, since they are highly sensitive to extreme observations that might result only from noise; DEA/FDH techniques would provide estimators with an error of the order of the standard deviation of the noise. This paper adapts recent results on detecting change points [Hall P, Simar L (2002) J Am Stat Assoc 97:523-534] to improve the performance of the classical DEA/FDH estimators in the presence of noise. We show by simulated examples that the procedure works well, and better than the standard DEA/FDH estimators, when the noise is of moderate size in terms of the signal-to-noise ratio. It turns out that the procedure is also robust to outliers. The paper can be seen as a first attempt to formalize stochastic DEA/FDH estimators.
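For reference, a minimal sketch of the plain FDH estimator the paper starts from, with one input and one output: the frontier estimate at a point x is the largest observed output among units using no more input than x. The Hall-Simar change-point correction for noise is not implemented here.

```python
# Free disposal hull (FDH) frontier estimator, single input and single output.
import numpy as np

def fdh_frontier(x_obs, y_obs, x_eval):
    x_obs, y_obs = np.asarray(x_obs, float), np.asarray(y_obs, float)
    est = np.empty(len(x_eval))
    for k, x in enumerate(x_eval):
        dominated = x_obs <= x                  # units using no more input than x
        est[k] = y_obs[dominated].max() if dominated.any() else np.nan
    return est

# usage: fdh_frontier(inputs, outputs, np.linspace(inputs.min(), inputs.max(), 50))
```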
10.
This note shows the empirical dangers of the presence of large additive outliers when testing for unit roots using standard unit root statistics. Using recently proposed procedures applied to four Latin-American inflation series, I show that the unit root hypothesis cannot be rejected. JEL classification: C2, C3, C5. I want to thank Pierre Perron for useful comments on a preliminary version of this paper. Helpful comments from an anonymous referee and Yiagadeesen Samy are appreciated. I thank the Editor Baldev Raj for useful comments about the final structure of this paper. Finally, I also thank André Lucas for helpful suggestions concerning the use of his computer program Robust Inference Plus Estimation (RIPE). First revision received: August 2001 / final revision received: December 2002
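A quick empirical illustration of the danger described above, not the note's procedure: a single large additive outlier planted in a simulated random walk tends to push a standard augmented Dickey-Fuller test toward rejecting the unit root. Series length, outlier size and seed are arbitrary.

```python
# Effect of one additive outlier on a standard ADF unit root test.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
rw = np.cumsum(rng.normal(size=300))            # true unit-root process
contaminated = rw.copy()
contaminated[150] += 15.0                       # one large additive outlier

print("ADF p-value, clean series       :", adfuller(rw, autolag="AIC")[1])
print("ADF p-value, contaminated series:", adfuller(contaminated, autolag="AIC")[1])
```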