Similar Articles
20 similar articles found (search time: 828 ms)
1.
Several papers have estimated the parameters of Pareto distributions for city sizes in different countries, but only one has attempted to explain the differing magnitudes of these parameters with a set of country-specific explanatory variables. While it is reassuring that there has been some research which advances beyond simple “curve-fitting” to explore the determinants of city size distributions, the existing research uses a two-stage OLS method which yields invalid second-stage standard errors (and, consequently, questionable hypothesis tests). In this paper, we develop candidate one-stage structural models with normal and non-normal errors which accommodate truncated size distributions, potentially Pareto-like shapes, and city-level variables. In general, these new models are nonlinear in parameters. We illustrate with data on U.S. urban areas.  相似文献   
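As a rough illustration of the one-stage approach described above, the sketch below fits a lower-truncated Pareto model whose shape parameter depends on a city-level covariate by direct maximum likelihood. The covariate, the exponential link, and the simulated data are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated illustration: city sizes truncated below at s_min, with a Pareto
# shape alpha_i = exp(gamma0 + gamma1 * z_i) that varies with a city-level covariate z_i.
n, s_min = 500, 50_000.0
z = rng.normal(size=n)                                # hypothetical city-level covariate
alpha_true = np.exp(0.1 + 0.2 * z)
sizes = s_min * rng.uniform(size=n) ** (-1.0 / alpha_true)   # Pareto(s_min, alpha_i) draws

def neg_loglik(gamma):
    # Pareto density above s_min: f(s) = alpha * s_min**alpha / s**(alpha + 1)
    alpha = np.exp(gamma[0] + gamma[1] * z)
    return -np.sum(np.log(alpha) + alpha * np.log(s_min) - (alpha + 1.0) * np.log(sizes))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("one-stage estimates (gamma0, gamma1):", fit.x)
```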

2.
Analysis of covariance techniques have been developed primarily for normally distributed errors. We give solutions when the errors have non-normal distributions. We show that our solutions are efficient and robust. We provide a real-life example.  相似文献   

3.
We develop a Bayesian semi-parametric approach to the instrumental variable problem. We assume linear structural and reduced form equations, but model the error distributions non-parametrically. A Dirichlet process prior is used for the joint distribution of structural and instrumental variable equations errors. Our implementation of the Dirichlet process prior uses a normal distribution as a base model. It can therefore be interpreted as modeling the unknown joint distribution with a mixture of normal distributions with a variable number of mixture components. We demonstrate that this procedure is both feasible and sensible using actual and simulated data. Sampling experiments compare inferences from the non-parametric Bayesian procedure with those based on procedures from the recent literature on weak instrument asymptotics. When errors are non-normal, our procedure is more efficient than standard Bayesian or classical methods.  相似文献   

4.
Based on multifractal theory, this paper proposes a new asymmetry-measurement method that is better suited to real financial asset return data, the two-step asymmetry test (TAT), and uses Monte Carlo simulation to compare its asymmetry conclusions with those of the traditional skewness-coefficient test. The empirical results show that, overall, the proposed two-step asymmetry test reaches more accurate conclusions about the asymmetry of financial asset returns than the skewness-coefficient test at commonly used significance levels, and that the two-step test is better suited than the skewness-coefficient test to asymmetry testing of data that are neither independent nor normally distributed.  相似文献
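The TAT procedure itself is not specified in this abstract, so the sketch below only illustrates the traditional skewness-coefficient test it is compared against, applied to a hypothetical return series; the data and significance level are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical heavy-tailed return series standing in for real financial data.
returns = stats.t.rvs(df=4, size=2000, random_state=rng) * 0.01

# Classical skewness-coefficient test: under i.i.d. normality the sample
# skewness is asymptotically N(0, 6/n), so a large |z| suggests asymmetry.
n = returns.size
g1 = stats.skew(returns)
z = g1 / np.sqrt(6.0 / n)
p_value = 2.0 * stats.norm.sf(abs(z))
print(f"skewness = {g1:.4f}, z = {z:.2f}, p = {p_value:.3f}")
```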

5.
In likelihood-based approaches to robustifying state space models, Gaussian error distributions are replaced by non-normal alternatives with heavier tails. Robustified observation models are appropriate for time series with additive outliers, while state or transition equations with heavy-tailed error distributions lead to filters and smoothers that can cope with structural changes in trend or slope caused by innovations outliers. As a consequence, however, conditional filtering and smoothing densities become analytically intractable. Various attempts have been made to deal with this problem, ranging from approximate conditional-mean-type estimation to fully Bayesian analysis using MCMC simulation. In this article we consider penalized likelihood smoothers, that is, estimators which maximize penalized likelihoods or, equivalently, posterior densities. Filtering and smoothing for additive and innovations outlier models can be carried out by computationally efficient Fisher scoring steps or iterative Kalman-type filters. Special emphasis is on the Student family, for which EM-type algorithms to estimate unknown hyperparameters are developed. Operational behaviour is illustrated by simulation experiments and by real data applications.  相似文献

6.
In spite of Taguchi's robust parameter design (Introduction to quality engineering: designing quality into products and processes, 1986, Asian Productivity Organization, Tokyo), tolerance design is still important at the design stage of products and processes. Taguchi's proposal and related methods for tolerance design, however, do not efficiently use the information that can be obtained from the parameter design experiment. In this paper, we introduce a new method for tolerance design based on the response surface approach to parameter design. It is a flexible method because non-normal distributions of the noise factors and the quality characteristic are allowed. Moreover, it is unnecessary to perform a new physical experiment. Essentially, tolerances of noise factors are maximized, subject to constraints to ensure that the mean value of the quality characteristic remains on target and the fraction nonconforming is below a pre-specified maximum. Some aspects of model uncertainty are discussed and the method is illustrated by means of an example.  相似文献
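A minimal sketch of the constrained-optimization idea described above, assuming a hypothetical fitted response surface, uniformly distributed noise factors, and an approximately normal quality characteristic; the coefficients, specification limits, and objective below are illustrative, not the paper's example.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Hypothetical fitted response surface y(x1, x2) = b0 + b1*x1 + b2*x2 in two
# noise factors, centered at their nominal values (coefficients are made up).
b0, b1, b2 = 50.0, 2.0, -1.5          # b0 is assumed to be on target
lower, upper = 47.0, 53.0             # specification limits on y
p_max = 0.001                         # maximum allowed fraction nonconforming

def fraction_nonconforming(tol):
    t1, t2 = tol
    # Noise factors assumed uniform on +/- t_i, so Var(x_i) = t_i**2 / 3.
    var_y = (b1 ** 2) * t1 ** 2 / 3 + (b2 ** 2) * t2 ** 2 / 3
    sd_y = np.sqrt(var_y)
    mu_y = b0                         # mean stays on target at the nominal settings
    return stats.norm.cdf(lower, mu_y, sd_y) + stats.norm.sf(upper, mu_y, sd_y)

# Maximize the total tolerance (wider tolerances are cheaper) subject to the
# nonconformance constraint; the objective and bounds are illustrative choices.
res = minimize(lambda tol: -(tol[0] + tol[1]),
               x0=[0.1, 0.1],
               bounds=[(1e-4, 2.0), (1e-4, 2.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda tol: p_max - fraction_nonconforming(tol)}],
               method="SLSQP")
print("maximized tolerances:", res.x)
```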

7.
In structural equation modeling the statistician needs assumptions in order (1) to guarantee that the estimates are consistent for the parameters of interest, and (2) to evaluate the precision of the estimates and the significance level of test statistics. With respect to purpose (1), the typical types of analysis (ML and WLS) are robust against violation of distributional assumptions; i.e., estimates remain consistent for any type of WLS analysis and any distribution of z. (It should be noted, however, that (1) is sensitive to structural misspecification.) A typical assumption used for purpose (2) is that the vector z of observables follows a multivariate normal distribution. In relation to purpose (2), distributional misspecification may have consequences for efficiency as well as for the power of test statistics (see Satorra, 1989a); that is, some estimation methods may be more precise than others for a given distribution of z. For instance, ADF-WLS is asymptotically optimal under a variety of distributions of z, while the asymptotic optimality of NT-WLS may be lost when the data are non-normal.  相似文献

8.
Variables sampling plans based upon continuous distributions are well known. The usual assumption is that a measurable characteristic associated with a product has a normal distribution, a case which has been treated extensively in the literature. Other continuous distributions, particularly the exponential, have also been used as models. In this paper we discuss variables sampling plans for situations in which the measurable characteristic has either a Poisson or a binomial distribution.  相似文献   
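As a hedged sketch of a sampling plan for a Poisson-distributed characteristic, the code below searches for the smallest sample size and acceptance number that meet assumed producer's and consumer's risks; the quality levels and risks are made up for illustration and are not values from the paper.

```python
from scipy import stats

# Accept the lot if the total count over a sample of n units is <= c,
# where the per-unit count is Poisson.  The levels below are assumptions.
mu_good, mu_bad = 0.5, 1.5            # acceptable / rejectable mean count per unit
alpha, beta = 0.05, 0.10              # producer's and consumer's risks

def find_plan(max_n=100):
    for n in range(1, max_n + 1):
        # smallest acceptance number giving producer's risk <= alpha
        c = int(stats.poisson.ppf(1 - alpha, n * mu_good))
        if (stats.poisson.cdf(c, n * mu_good) >= 1 - alpha
                and stats.poisson.cdf(c, n * mu_bad) <= beta):
            return n, c
    return None

print("smallest (n, c) meeting both risks:", find_plan())
```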

9.
10.
Efficient semiparametric and parametric estimates are developed for a spatial autoregressive model, containing non-stochastic explanatory variables and innovations suspected to be non-normal. The main stress is on the case of distribution of unknown, nonparametric, form, where series nonparametric estimates of the score function are employed in adaptive estimates of parameters of interest. These estimates are as efficient as the ones based on a correct form, in particular they are more efficient than pseudo-Gaussian maximum likelihood estimates at non-Gaussian distributions. Two different adaptive estimates are considered, relying on somewhat different regularity conditions. A Monte Carlo study of finite sample performance is included.  相似文献   

11.
Pareto-Koopmans efficiency in Data Envelopment Analysis (DEA) is extended to stochastic inputs and outputs via probabilistic input-output vector comparisons in a given empirical production (possibility) set. In contrast to other approaches which have used Chance Constrained Programming formulations in DEA, the emphasis here is on joint chance constraints. An assumption of arbitrary but known probability distributions leads to the P-Model of chance constrained programming. A necessary condition for a DMU to be stochastically efficient and a sufficient condition for a DMU to be non-stochastically efficient are provided. Deterministic equivalents using the zero order decision rules of chance constrained programming and multivariate normal distributions take the form of an extended version of the additive model of DEA. Contacts are also maintained with all of the other presently available deterministic DEA models in the form of easily identified extensions which can be used to formalize the treatment of efficiency when stochastic elements are present.  相似文献   
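The deterministic additive model that the paper extends can be written as a small linear program; the sketch below solves it with scipy's linprog on a made-up input-output data set (the joint chance-constrained P-model itself is not reproduced here).

```python
import numpy as np
from scipy.optimize import linprog

# Additive DEA model with a convexity constraint; the data are illustrative.
X = np.array([[2.0, 3.0, 6.0, 5.0],      # m x n inputs  (rows: inputs, cols: DMUs)
              [4.0, 2.0, 7.0, 8.0]])
Y = np.array([[1.0, 1.0, 2.0, 1.5]])     # r x n outputs

m, n = X.shape
r = Y.shape[0]

def additive_score(o):
    # variables: lambda (n), s_minus (m), s_plus (r); maximize total slack
    c = np.concatenate([np.zeros(n), -np.ones(m), -np.ones(r)])
    A_eq = np.zeros((m + r + 1, n + m + r))
    b_eq = np.zeros(m + r + 1)
    A_eq[:m, :n] = X                      # X lambda + s_minus = x_o
    A_eq[:m, n:n + m] = np.eye(m)
    b_eq[:m] = X[:, o]
    A_eq[m:m + r, :n] = Y                 # Y lambda - s_plus = y_o
    A_eq[m:m + r, n + m:] = -np.eye(r)
    b_eq[m:m + r] = Y[:, o]
    A_eq[-1, :n] = 1.0                    # convexity constraint
    b_eq[-1] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m + r))
    return -res.fun                       # total slack; 0 means Pareto-Koopmans efficient

for o in range(n):
    print(f"DMU {o}: total slack = {additive_score(o):.3f}")
```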

12.
This study analyzes mean probability distributions reported by ASA-NBER forecasters on two macroeconomic variables, GNP and the GNP implicit price deflator. In the derivation of expectations, a critical assertion has been that the aggregate average expectation can be regarded as coming from a normal distribution. We find that, in fact, this assumption should be rejected in favor of distributions which are more peaked and skewed. For IPD, they are mostly positively skewed, and for nominal GNP the reverse is true. We then show that a non-central scaled t-distribution fits the empirical distributions remarkably well. The practice of using the degree of consensus across a group of predictions as a measure of a typical forecaster's uncertainty about the prediction is called into question.  相似文献
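A small sketch of the comparison described above: fit both a normal and a non-central scaled t distribution by maximum likelihood and compare the fits. The simulated data stand in for the reported probability distributions and are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Stand-in data: peaked and skewed draws, playing the role of the forecasters'
# aggregated probability assessments (purely illustrative).
data = stats.nct.rvs(df=5, nc=1.0, loc=0.0, scale=2.0, size=1000, random_state=rng)

# Fit a normal and a non-central scaled t, then compare maximized log-likelihoods.
norm_params = stats.norm.fit(data)
nct_params = stats.nct.fit(data)
ll_norm = np.sum(stats.norm.logpdf(data, *norm_params))
ll_nct = np.sum(stats.nct.logpdf(data, *nct_params))
print(f"log-likelihood normal: {ll_norm:.1f}, non-central scaled t: {ll_nct:.1f}")
print("sample skewness:", stats.skew(data), "excess kurtosis:", stats.kurtosis(data))
```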

13.
Usual inference methods for stable distributions are typically based on limit distributions. But asymptotic approximations can easily be unreliable in such cases, for standard regularity conditions may not apply or may hold only weakly. This paper proposes finite-sample tests and confidence sets for tail thickness and asymmetry parameters (α and β) of stable distributions. The confidence sets are built by inverting exact goodness-of-fit tests for hypotheses which assign specific values to these parameters. We propose extensions of the Kolmogorov–Smirnov, Shapiro–Wilk and Filliben criteria, as well as the quantile-based statistics proposed by McCulloch (1986) in order to better capture tail behavior. The suggested criteria compare empirical goodness-of-fit or quantile-based measures with their hypothesized values. Since the distributions involved are quite complex and non-standard, the relevant hypothetical measures are approximated by simulation, and p-values are obtained using Monte Carlo (MC) test techniques. The properties of the proposed procedures are investigated by simulation. In contrast with conventional wisdom, we find reliable results with sample sizes as small as 25. The proposed methodology is applied to daily electricity price data in the US over the period 2001–2006. The results show clearly that heavy kurtosis and asymmetry are prevalent in these series.  相似文献
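A minimal sketch of the Monte Carlo test idea, assuming a Kolmogorov–Smirnov criterion and hypothesized values α0 = 1.7, β0 = 0.3; the data, the implicit unit scale and zero location, and the number of replications are illustrative and much simpler than the paper's procedures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothesized stable parameters under test.
alpha0, beta0 = 1.7, 0.3
dist0 = stats.levy_stable(alpha0, beta0)

data = dist0.rvs(size=25, random_state=rng)   # small sample, as in the paper's experiments

def ks_stat(x):
    return stats.kstest(x, dist0.cdf).statistic

obs = ks_stat(data)
B = 99                                        # replications for the MC p-value
sims = np.array([ks_stat(dist0.rvs(size=data.size, random_state=rng)) for _ in range(B)])

# Monte Carlo p-value: rank of the observed statistic among the simulated ones.
p_mc = (1 + np.sum(sims >= obs)) / (B + 1)
print(f"KS statistic = {obs:.3f}, MC p-value = {p_mc:.3f}")
```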

14.
In frontier analysis, most of the nonparametric approaches (DEA, FDH) are based on envelopment ideas which suppose that with probability one, all the observed units belong to the attainable set. In these deterministic frontier models, statistical theory is now mostly available (Simar and Wilson, 2000a). In the presence of super-efficient outliers, envelopment estimators could behave dramatically since they are very sensitive to extreme observations. Some recent results from Cazals et al. (2002) on robust nonparametric frontier estimators may be used in order to detect outliers by defining a new DEA/FDH deterministic type estimator which does not envelop all the data points and so is more robust to extreme data points. In this paper, we summarize the main results of Cazals et al. (2002) and we show how this tool can be used for detecting outliers when using the classical DEA/FDH estimators or any parametric techniques. We propose a methodology implementing the tool and we illustrate through some numerical examples with simulated and real data. The method should be used in a first step, as an exploratory data analysis, before using any frontier estimation.  相似文献   
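A simplified, univariate sketch of the order-m idea from Cazals et al. (2002) alongside the classical FDH estimator; the data (including one super-efficient outlier), the value of m, and the number of Monte Carlo draws are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Single-input, single-output example with one super-efficient outlier.
x = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 0.5])   # inputs  (last unit is an outlier)
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 5.0])   # outputs

def fdh_input_eff(x0, y0):
    dominating = y >= y0                        # units producing at least y0
    return np.min(x[dominating] / x0)           # full envelopment of the data

def order_m_input_eff(x0, y0, m=3, B=2000):
    dominating = np.where(y >= y0)[0]
    draws = rng.choice(dominating, size=(B, m), replace=True)
    # expected minimal input ratio among m randomly drawn dominating units
    return np.mean(np.min(x[draws], axis=1) / x0)

for i in range(len(x)):
    print(f"unit {i}: FDH = {fdh_input_eff(x[i], y[i]):.2f}, "
          f"order-m = {order_m_input_eff(x[i], y[i]):.2f}")
```

Because the order-m frontier does not envelop all observations, units with order-m scores well above 1 stand out as potential super-efficient outliers, which is the diagnostic use described above.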

15.
Statistical Inference in Nonparametric Frontier Models: The State of the Art
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.  相似文献   

16.
For a normal distribution, a two-stage procedure has been proposed for constructing a fixed-width confidence interval for the mean when the variance is unknown. It has all the properties of Stein's two-stage procedure, and at the same time it is asymptotically efficient. We have also discussed the non-normal case as in Chow/Robbins [1965].  相似文献
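A minimal sketch of a Stein-type two-stage fixed-width interval for a normal mean with unknown variance; the pilot size, half-width, and data-generating distribution are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Two-stage fixed-width confidence interval: 95% confidence, half-width d.
alpha, d, n0 = 0.05, 0.5, 15                  # n0 = first-stage (pilot) sample size
population = lambda size: rng.normal(loc=10.0, scale=2.0, size=size)

# Stage 1: pilot sample to estimate the variance.
stage1 = population(n0)
s = stage1.std(ddof=1)
t = stats.t.ppf(1 - alpha / 2, df=n0 - 1)

# Total sample size so that the half-width is at most d.
N = max(n0, int(np.ceil((t * s / d) ** 2)))

# Stage 2: take the extra observations and form the fixed-width interval.
sample = np.concatenate([stage1, population(N - n0)]) if N > n0 else stage1
mean = sample.mean()
print(f"N = {N}, interval = ({mean - d:.3f}, {mean + d:.3f})")
```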

17.
The progressive Type-II hybrid censoring scheme introduced by Kundu and Joarder (Comput Stat Data Anal 50:2509–2528, 2006) has received some attention in the last few years. One major drawback of this censoring scheme is that very few observations (even none at all) may be observed by the end of the experiment. To overcome this problem, Cho et al. (Stat Methodol 23:18–34, 2015) recently introduced generalized progressive censoring, which ensures that a pre-specified number of failures is observed. In this paper we analyze generalized progressive censored data in the presence of competing risks. For brevity we consider only two competing causes of failure, and it is assumed that the lifetimes under the competing causes follow one-parameter exponential distributions with different scale parameters. We obtain the maximum likelihood estimators of the unknown parameters and also provide their exact distributions. Based on these exact distributions, exact confidence intervals can be obtained. Asymptotic and bootstrap confidence intervals are also provided for comparison purposes. We further consider Bayesian analysis of the unknown parameters under a very flexible beta–gamma prior, and provide the Bayes estimates and the associated credible intervals based on this prior. We present extensive simulation results to assess the effectiveness of the proposed method, and finally one real data set is analyzed for illustrative purposes.  相似文献
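The sketch below only illustrates the exponential competing-risks likelihood in the complete-sample case, where the MLE of each rate is the number of failures from that cause divided by the total time on test; the generalized progressive censoring scheme and the Bayesian analysis are not reproduced, and the rates and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two latent exponential failure times per item; the observed lifetime is their
# minimum and the cause is whichever fails first.
lam1, lam2, n = 0.5, 1.0, 200
t1 = rng.exponential(1 / lam1, size=n)
t2 = rng.exponential(1 / lam2, size=n)
lifetime = np.minimum(t1, t2)
cause = np.where(t1 <= t2, 1, 2)

# Complete-sample MLEs: lambda_j_hat = (# failures from cause j) / (total time on test).
total_time = lifetime.sum()
lam1_hat = np.sum(cause == 1) / total_time
lam2_hat = np.sum(cause == 2) / total_time
print(f"lambda1_hat = {lam1_hat:.3f}, lambda2_hat = {lam2_hat:.3f}")
```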

18.
Bayesian analysis of a Tobit quantile regression model
This paper develops a Bayesian framework for Tobit quantile regression. Our approach is organized around a likelihood function that is based on the asymmetric Laplace distribution, a choice that turns out to be natural in this context. We discuss families of prior distributions on the quantile regression vector that lead to proper posterior distributions with finite moments. We show how the posterior distribution can be sampled and summarized by Markov chain Monte Carlo methods. A method for comparing alternative quantile regression models is also developed and illustrated. The techniques are illustrated with both simulated and real data. In particular, in an empirical comparison, our approach out-performed two other common classical estimators.  相似文献   
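A minimal sketch of the asymmetric Laplace working likelihood underlying the approach: with a fixed unit scale, maximizing it is equivalent to minimizing the usual check loss, so it reproduces ordinary quantile regression. The Tobit censoring and the MCMC sampling are not reproduced here, and the simulated design is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulated (uncensored) data; the design and coefficients are made up.
n, p = 500, 0.75
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)

def check_loss(u, tau):
    return u * (tau - (u < 0))

def neg_ald_loglik(beta):
    # Up to an additive constant, the asymmetric Laplace log-likelihood with
    # unit scale is minus the sum of check losses, so maximizing it gives the
    # usual quantile regression estimate.
    resid = y - (beta[0] + beta[1] * x)
    return np.sum(check_loss(resid, p))

fit = minimize(neg_ald_loglik, x0=np.zeros(2), method="Nelder-Mead")
print(f"estimated {p}-quantile regression coefficients:", fit.x)
```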

19.
Principal components estimation and identification of static factors
It is known that the principal component estimates of the factors and the loadings are rotations of the underlying latent factors and loadings. We study conditions under which the latent factors can be estimated asymptotically without rotation. We derive the limiting distributions for the estimated factors and factor loadings when N and T are large and make precise how identification of the factors affects inference based on factor-augmented regressions. We also consider factor models with additive individual and time effects. The asymptotic analysis can be modified to analyze identification schemes not considered in this analysis.  相似文献
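A small sketch of the principal components estimator of a static factor model, using the common normalization in which the estimated factors are sqrt(T) times the leading eigenvectors of XX'; the dimensions and the single factor are illustrative, and the estimates are only identified up to rotation, as the abstract notes.

```python
import numpy as np

rng = np.random.default_rng(8)

# Static factor model X = F L' + e, with X a T x N panel (illustrative sizes).
T, N, r = 200, 50, 1
F = rng.normal(size=(T, r))
L = rng.normal(size=(N, r))
X = F @ L.T + 0.5 * rng.normal(size=(T, N))

# Standard normalization: F_hat = sqrt(T) * (top-r eigenvectors of X X'),
# L_hat = X' F_hat / T.  These recover F and L only up to a rotation.
eigval, eigvec = np.linalg.eigh(X @ X.T)
F_hat = np.sqrt(T) * eigvec[:, -r:][:, ::-1]
L_hat = X.T @ F_hat / T

# Up-to-rotation (here: up-to-sign) check on the single estimated factor.
print("corr(F, F_hat):", np.corrcoef(F[:, 0], F_hat[:, 0])[0, 1])
```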

20.
Summary (Statistical investigation of the distribution of data on the solids of bread, for loaves analysed in the Food Inspection Laboratory in Amsterdam)
The distributions of the data on the solids of bread analysed during the war years are investigated. The means and standard deviations are calculated, as well as χ², kurtosis and skewness, assuming the distributions to be normal. An example of the calculation is given in table I. Actual numbers for different years are given in tables II and III. The distributions were tested for normality because earlier investigations had shown that the distribution for loaves prepared under survey did not deviate significantly from the normal.
It is found that, in general, the investigated distributions cannot be regarded as normal. Though symmetric, they show leptokurtosis, and the χ²-test for goodness of fit to the normal curve gives values of P of about 0.01 (or a little more). Similar distributions were found by Clancey in his investigation of numbers of chemical analyses of industrial products (about 10% of the distributions showed this shape, and some 10% were truncated leptokurtic curves) and by us for the fat percentage of meals from the governmental eating-houses. The distributions are represented on probability paper; this way of presenting the results gives a clear view of the variations of the mean and the standard deviation over the years (fig. 1). Deviations of the shape from the normal straight line on probability paper due to special causes are investigated (figs. 3-6, to be compared with fig. 2). With this "spectrum" of possible deviations from the normal distribution in mind, the special cause of the leptokurtic shape in our particular case is discussed.  相似文献
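A hedged sketch of the kind of checks described above (mean, standard deviation, skewness, kurtosis, and a χ² goodness-of-fit test against a fitted normal); the stand-in data are simulated to be symmetric but leptokurtic and are not the bread-solids measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Stand-in data playing the role of the bread-solids determinations: symmetric
# but leptokurtic (a scaled t distribution), the shape reported above.
data = 45.0 + 1.5 * stats.t.rvs(df=5, size=400, random_state=rng)

mean, sd = data.mean(), data.std(ddof=1)
print(f"mean = {mean:.2f}, sd = {sd:.2f}, "
      f"skewness = {stats.skew(data):.2f}, excess kurtosis = {stats.kurtosis(data):.2f}")

# Chi-square goodness-of-fit test against the fitted normal, equal-probability bins.
k = 10
edges = stats.norm.ppf(np.linspace(0, 1, k + 1), loc=mean, scale=sd)
edges[0], edges[-1] = data.min() - 1, data.max() + 1   # replace the infinite outer edges
observed, _ = np.histogram(data, bins=edges)
expected = np.full(k, data.size / k)
chi2, _ = stats.chisquare(observed, expected)
# two parameters were estimated from the data, so two extra degrees of freedom are lost
p_value = stats.chi2.sf(chi2, df=k - 1 - 2)
print(f"chi-square = {chi2:.1f}, P = {p_value:.4f}")
```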

