Similar Documents
 20 similar documents were found (search time: 812 ms)
1.
This paper proposes a new quantile regression model to characterize the heterogeneity in the distributional effects of maternal smoking during pregnancy on infant birth weight across the mother's age. By imposing a parametric restriction on the quantile functions of the potential outcome distributions conditional on the mother's age, we estimate the quantile treatment effects of maternal smoking during pregnancy on the baby's birth weight across different age groups of mothers. The results strongly indicate that the quantile effects of maternal smoking on low infant birth weight are negative and substantially heterogeneous across ages.
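As a rough illustration of the kind of group-specific quantile regression described above (not the authors' model or data), here is a minimal Python sketch; the age groupings, variable names and simulated effect sizes are purely hypothetical.

```python
# A minimal sketch, assuming made-up data: low-quantile regression of birth weight
# on a smoking indicator, estimated separately within hypothetical age groups.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
age_group = rng.choice(["<25", "25-34", "35+"], size=n)        # hypothetical grouping
smoker = rng.integers(0, 2, size=n)
# Simulated birth weight in grams; the group-specific effects are illustrative only.
effect = {"<25": -150.0, "25-34": -200.0, "35+": -280.0}
bw = 3400 + rng.normal(0, 500, size=n) + smoker * np.array([effect[g] for g in age_group])
df = pd.DataFrame({"bw": bw, "smoker": smoker, "age_group": age_group})

# Smoking effect at the 10th percentile of birth weight within each age group.
for g, sub in df.groupby("age_group"):
    fit = smf.quantreg("bw ~ smoker", sub).fit(q=0.10)
    print(g, round(fit.params["smoker"], 1))
```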

2.
Sir Francis Galton introduced median regression and the use of the quantile function to describe distributions. Very early on, the tradition moved to mean regression and the universal use of the Normal distribution, either as the natural ‘error’ distribution or as one forced by transformation. Though the introduction of ‘quantile regression’ refocused attention on the shape of the variability about the line, it uses nonparametric approaches and so ignores the actual distribution of the ‘error’ term. This paper seeks to show how Galton's approach enables the complete regression model, deterministic and stochastic elements, to be modelled, fitted and investigated. The emphasis is on the range of models that can be used for the stochastic element. It is noted that, just as the deterministic terms can be built up from components, so too, using quantile functions, can the stochastic element. The model may thus be treated in both modelling and fitting as a unity. Some evidence is presented to justify the use of a much wider range of distributional models than is usually considered and to emphasize their flexibility in extending regression models.

3.
This paper extends the unit root tests based on quantile regression proposed by Koenker and Xiao [Koenker, R., Xiao, Z., 2004. Unit root quantile autoregression inference, Journal of the American Statistical Association 99, 775–787] to allow stationary covariates and a linear time trend. The limiting distribution of the test is a convex combination of Dickey–Fuller and standard normal distributions, with the weight determined by the correlation between the equation error and the regression covariates. A simulation experiment illustrates the finite sample performance of the unit root test for several types of distributions. The test based on quantile autoregression turns out to be especially advantageous when the innovations are heavy-tailed. An application to CPI-based real exchange rates for four countries suggests that real exchange rates are not constant unit root processes.
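The building block of such tests, a quantile autoregression with a linear trend, can be sketched as follows; this shows only the fitting step on simulated heavy-tailed data, not the Koenker–Xiao test statistic or its limiting distribution.

```python
# A hedged sketch: the tau-th quantile autoregression y_t = a(tau) + rho(tau)*y_{t-1} + b(tau)*t.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
T = 500
e = rng.standard_t(df=3, size=T)      # heavy-tailed innovations
y = np.cumsum(e)                      # a unit-root process, for illustration

ylag = y[:-1]
trend = np.arange(1, T)               # linear time trend
X = sm.add_constant(np.column_stack([ylag, trend]))
for tau in (0.1, 0.5, 0.9):
    rho = QuantReg(y[1:], X).fit(q=tau).params[1]
    print(f"tau={tau}: rho(tau) = {rho:.3f}")   # values near 1 are consistent with a unit root
```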

4.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
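The basic idea of extracting a conditional from a fitted joint mixture can be illustrated in two dimensions; this is a simplified frequentist stand-in using scikit-learn, not the paper's FMMN estimator or its MCMC procedure.

```python
# A minimal sketch, assuming a 2-D joint sample (y, x): fit a Gaussian mixture to the
# joint and read off p(y | x) analytically from the component-wise conditionals.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
y = np.sin(x) + 0.3 * rng.normal(size=2000)
gm = GaussianMixture(n_components=5, random_state=0).fit(np.column_stack([y, x]))

def conditional_density(y_grid, x0):
    """p(y | x = x0) implied by the joint mixture (coordinate 0 is y, coordinate 1 is x)."""
    dens = np.zeros_like(y_grid, dtype=float)
    total_weight = 0.0
    for pi_k, mu, cov in zip(gm.weights_, gm.means_, gm.covariances_):
        mu_y, mu_x = mu
        s_yy, s_yx, s_xx = cov[0, 0], cov[0, 1], cov[1, 1]
        w = pi_k * norm.pdf(x0, mu_x, np.sqrt(s_xx))        # component weight given x0
        cond_mean = mu_y + s_yx / s_xx * (x0 - mu_x)
        cond_var = s_yy - s_yx ** 2 / s_xx
        dens += w * norm.pdf(y_grid, cond_mean, np.sqrt(cond_var))
        total_weight += w
    return dens / total_weight

print(conditional_density(np.linspace(-2, 2, 5), x0=0.5))
```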

5.
This paper establishes the log-concavity property of several forms of univariate and multivariate skew-normal distributions. This property is then used to prove the monotonicity of the hazard as well as the reversed hazard functions. The log-concavity and monotonicity of the hazard and reversed hazard functions of series and parallel systems of components are then discussed. The corresponding results for the multivariate normal distribution follow readily as special cases.
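A quick numerical check (not a proof) of the univariate log-concavity claim can be run with SciPy's skew-normal density: the second differences of the log-density should be non-positive.

```python
# Illustrative only: discrete second differences of the skew-normal log-density.
import numpy as np
from scipy.stats import skewnorm

a = 4.0                               # skewness (shape) parameter
x = np.linspace(-5, 8, 2001)
logpdf = skewnorm.logpdf(x, a)
second_diff = np.diff(logpdf, n=2)    # discrete analogue of the second derivative
print("max second difference:", second_diff.max())   # should be <= 0 up to numerical error
```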

6.
According to the usual law of small numbers, a multivariate Poisson distribution is derived by defining an appropriate model for multivariate binomial distributions and examining their behaviour for large numbers of trials and small probabilities of marginal and simultaneous successes. The weak limit law is a generalization of Poisson's distribution to larger finite dimensions with an arbitrary dependence structure. Compounding this multivariate Poisson distribution with a Gamma distribution results in a multivariate Pascal distribution, which is again asymptotically multivariate Poisson. These Pascal distributions contain a class of multivariate geometric distributions. Finally, the bivariate binomial distribution is shown to be the limit law of appropriate bivariate hypergeometric distributions. Proving the limit theorems mentioned here, as well as understanding the corresponding limit distributions, becomes feasible by using probability generating functions.

7.
We propose a method to decompose the changes in the wage distribution over a period of time into several factors contributing to those changes. The method is based on the estimation of marginal wage distributions consistent with a conditional distribution estimated by quantile regression as well as with any hypothesized distribution for the covariates. Comparing the marginal distributions implied by different distributions for the covariates, one is then able to perform counterfactual exercises. The proposed methodology enables the identification of the sources of the increased wage inequality observed in most countries, discriminating between changes in the characteristics of the working population and changes in the returns to these characteristics. We apply this methodology to Portuguese data for the period 1986–1995 and find that the observed increase in educational levels contributed decisively towards greater wage inequality.
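A stripped-down sketch of the counterfactual mechanics follows: fit quantile regressions at many quantile levels and pair random quantile coefficients with covariate draws from a different hypothesized distribution. The single covariate, the data and the sample sizes are made up, and the sketch omits the paper's inference and full decomposition.

```python
# A hedged sketch of simulating a counterfactual marginal wage distribution.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)
n = 3000
educ = rng.integers(6, 18, size=n).astype(float)            # hypothetical schooling years
logw = 0.5 + 0.08 * educ + 0.1 * rng.standard_normal(n)     # simulated log wages
X = np.column_stack([np.ones(n), educ])

taus = np.linspace(0.02, 0.98, 49)
betas = np.array([QuantReg(logw, X).fit(q=t).params for t in taus])

# Counterfactual: returns (betas) from this sample, covariates from another "period".
educ_other = rng.integers(9, 20, size=n).astype(float)      # e.g. higher education levels
X_other = np.column_stack([np.ones(n), educ_other])
idx_x = rng.integers(0, n, size=5000)
idx_t = rng.integers(0, len(taus), size=5000)
counterfactual = np.einsum("ij,ij->i", X_other[idx_x], betas[idx_t])
print("counterfactual 10/50/90 percentiles of log wages:",
      np.percentile(counterfactual, [10, 50, 90]).round(2))
```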

8.
In this paper, we explore partial identification and inference for the quantile of treatment effects in randomized experiments. First, we propose nonparametric estimators of sharp bounds on the quantile of treatment effects and establish their asymptotic properties under general conditions. Second, we construct confidence intervals for the bounds and the true quantile by using the approach in Chernozhukov et al. (2009). Third, under additional conditions, we develop a new approach to constructing confidence intervals for the bounds and the true quantile, which we refer to as the order statistic approach. A simulation study investigates the finite sample performance of both approaches.

9.
Labour Economics, 2005, 12(4): 577–590
This paper proposes a semiparametric estimator of distribution functions in the presence of covariates. The method is based on the estimation of the conditional distribution by quantile regression. The conditional distribution is then integrated over the range of the covariates. Counterfactual distributions can be estimated, allowing the decomposition of changes in the distribution into three factors: coefficients, covariates and residuals. Sources of changes in wage inequality in the USA between 1973 and 1989 are examined. Unlike most of the literature, we find that residuals account for only 20% of the explosion of inequality in the 1980s.

10.
Bayesian analysis of a Tobit quantile regression model
This paper develops a Bayesian framework for Tobit quantile regression. Our approach is organized around a likelihood function based on the asymmetric Laplace distribution, a choice that turns out to be natural in this context. We discuss families of prior distributions on the quantile regression vector that lead to proper posterior distributions with finite moments. We show how the posterior distribution can be sampled and summarized by Markov chain Monte Carlo methods. A method for comparing alternative quantile regression models is also developed and illustrated. The techniques are illustrated with both simulated and real data. In particular, in an empirical comparison, our approach outperformed two other common classical estimators.
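The link between the asymmetric Laplace distribution and quantile regression can be made concrete with a short sketch of its log-density; the Tobit censoring, priors and MCMC sampler of the paper are not reproduced here.

```python
# A hedged sketch: the asymmetric Laplace log-density whose kernel is the
# quantile-regression check function, which is why this likelihood is "natural".
import numpy as np

def ald_loglik(y, mu, sigma, p):
    """Asymmetric Laplace log-density at quantile level p (vectorized over y)."""
    u = (y - mu) / sigma
    rho = u * (p - (u < 0))                       # check function rho_p(u)
    return np.log(p * (1 - p) / sigma) - rho

# Maximising sum(ald_loglik(y, X @ beta, sigma, p)) over beta reproduces the
# p-th quantile regression fit; a Bayesian treatment places priors on beta instead.
print(ald_loglik(np.array([-1.0, 0.0, 2.0]), mu=0.0, sigma=1.0, p=0.25).round(3))
```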

11.
Copulas provide an attractive approach to the construction of multivariate distributions with flexible marginal distributions and different forms of dependence. Of particular importance in many areas is the possibility of forecasting tail-dependences explicitly. Most of the available approaches can only estimate tail-dependences and correlations via nuisance parameters, and cannot be used for either interpretation or forecasting. We propose a general Bayesian approach for modeling and forecasting tail-dependences and correlations as explicit functions of covariates, with the aim of improving the copula forecasting performance. The proposed covariate-dependent copula model also allows for Bayesian variable selection from among the covariates of the marginal models, as well as the copula density. The copulas that we study include the Joe-Clayton copula, the Clayton copula, the Gumbel copula and the Student’s t-copula. Posterior inference is carried out using an efficient MCMC simulation method. Our approach is applied to both simulated data and the S&P 100 and S&P 600 stock indices. The forecasting performance of the proposed approach is compared with those of other modeling strategies based on log predictive scores. A value-at-risk evaluation is also performed for the model comparisons.
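To make the idea of an explicit tail-dependence coefficient concrete, here is a toy sketch for the Clayton copula, whose lower-tail dependence is 2^(-1/theta); the paper's covariate-dependent Bayesian model and variable-selection machinery are not reproduced.

```python
# Illustrative only: Clayton copula tail dependence and a conditional-inversion simulation.
import numpy as np

rng = np.random.default_rng(9)
theta = 2.0
print("implied lower-tail dependence:", round(2 ** (-1 / theta), 3))

# Simulate (U1, U2) from the Clayton copula by conditional inversion.
u1 = rng.uniform(size=100_000)
v = rng.uniform(size=100_000)
u2 = (u1 ** (-theta) * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)

# Empirical check: P(U2 <= q | U1 <= q) for small q approaches the coefficient above.
q = 0.01
emp = np.mean((u1 <= q) & (u2 <= q)) / np.mean(u1 <= q)
print("empirical P(U2<=q | U1<=q) at q=0.01:", round(emp, 3))
```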

12.
In this paper, we consider testing distributional assumptions in multivariate GARCH models based on empirical processes. Using the fact that the joint distribution carries the same amount of information as the marginal together with the conditional distributions, we first transform the multivariate data into univariate independent data based on the marginal and conditional cumulative distribution functions. We then apply Khmaladze's martingale transformation (K-transformation) to the empirical process in the presence of estimated parameters. The K-transformation eliminates the effect of parameter estimation, allowing a distribution-free test statistic to be constructed. We show that the K-transformation takes a very simple form for testing multivariate normal and multivariate t-distributions. The procedure is applied to a multivariate financial time series data set.
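The first step, mapping multivariate data to independent uniforms via the marginal and conditional CDFs, can be illustrated for a bivariate normal null with known correlation; the K-transformation itself is not shown.

```python
# A minimal sketch, assuming a bivariate-normal null with known correlation rho.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
rho = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)

u1 = norm.cdf(z[:, 0])                                              # marginal CDF of Z1
u2 = norm.cdf((z[:, 1] - rho * z[:, 0]) / np.sqrt(1 - rho ** 2))    # conditional CDF of Z2 | Z1
print("corr(u1, u2):", round(np.corrcoef(u1, u2)[0, 1], 3))         # near zero under the null
```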

13.
Forecasts of probability distributions are needed to support decision making in many applications. The accuracy of predictive distributions should be evaluated by maximising sharpness subject to calibration. Sharpness relates to the concentration of the predictive distributions, while calibration concerns their statistical consistency with the data. This paper focuses on calibration testing. It is important that a calibration test cannot be gamed by forecasts that have been strategically designed to pass the test. The widely used tests of probabilistic calibration for predictive distributions are based on the probability integral transform. Drawing on previous results for quantile prediction, we show that strategic distributional forecasting is a concern for these tests. To address this, we provide a simple extension of one of the tests. We illustrate the ideas using simulated data.
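For readers unfamiliar with the probability integral transform, a tiny illustration of the standard PIT check (not the paper's strategy-proof extension): if the predictive CDFs are correct, the PIT values are i.i.d. uniform on (0, 1).

```python
# Illustrative only: PIT values under a correct and an over-dispersed forecaster.
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(5)
y = rng.normal(loc=0.0, scale=1.0, size=1000)

pit_good = norm.cdf(y, loc=0.0, scale=1.0)    # correctly specified predictive CDF
pit_wide = norm.cdf(y, loc=0.0, scale=2.0)    # over-dispersed predictive CDF

print("KS p-value, correct forecasts:       ", round(kstest(pit_good, "uniform").pvalue, 3))
print("KS p-value, over-dispersed forecasts:", round(kstest(pit_wide, "uniform").pvalue, 3))
```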

14.
Statistical Inference in Nonparametric Frontier Models: The State of the Art
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
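As a concrete anchor, here is a minimal sketch of the FDH input-oriented efficiency score on made-up data; the DEA linear programs, asymptotic results and bootstrap corrections discussed above are not shown.

```python
# A hedged sketch of the FDH input-oriented efficiency score.
import numpy as np

def fdh_input_efficiency(X, Y):
    """X: (n, p) inputs, Y: (n, q) outputs; returns efficiency scores in (0, 1]."""
    n = X.shape[0]
    scores = np.empty(n)
    for i in range(n):
        dominating = np.all(Y >= Y[i], axis=1)           # units producing at least as much
        ratios = np.max(X[dominating] / X[i], axis=1)    # input contraction needed against each
        scores[i] = ratios.min()
    return scores

rng = np.random.default_rng(6)
X = rng.uniform(1, 10, size=(50, 2))                                  # two inputs
Y = np.sqrt(X.sum(axis=1, keepdims=True)) * rng.uniform(0.5, 1.0, size=(50, 1))  # one output
print(fdh_input_efficiency(X, Y)[:5].round(3))
```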

15.
Several recent papers use the quantile regression decomposition method of Machado and Mata [Machado, J.A.F. and Mata, J. (2005). Counterfactual decomposition of changes in wage distributions using quantile regression, Journal of Applied Econometrics, 20, 445–65.] to analyze the gender gap across log wage distributions. In this paper, we prove that this procedure yields consistent and asymptotically normal estimates of the quantiles of the counterfactual distribution that it is designed to simulate. Since employment rates often differ substantially by gender, sample selection is potentially a serious issue for such studies. To address this issue, we extend the Machado–Mata technique to account for selection. We illustrate our approach to adjusting for sample selection by analyzing the gender log wage gap for full-time workers in the Netherlands.

16.
Analytical bias reduction methods are developed for univariate rounded data for the first time. Extensions are given to rounding of multivariate data, and to smooth functionals of several distributions. As a by-product, we give for the first time the relation between rounded and unrounded multivariate cumulants. Estimators obtained by analytical bias reduction are compared with bootstrap and jackknife estimators by simulation.
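A familiar univariate special case of the rounded-versus-unrounded moment relations is Sheppard's classical correction, illustrated below; the paper's general multivariate cumulant relations and analytical bias-reduction estimators go well beyond this.

```python
# Illustrative only: rounding to a grid of width h inflates the variance by roughly h**2 / 12.
import numpy as np

rng = np.random.default_rng(7)
h = 1.0
x = rng.normal(0, 2, size=200_000)
x_rounded = np.round(x / h) * h

print("true variance:              ", round(x.var(), 4))
print("variance of rounded data:   ", round(x_rounded.var(), 4))
print("Sheppard-corrected estimate:", round(x_rounded.var() - h ** 2 / 12, 4))
```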

17.
Pareto-Koopmans efficiency in Data Envelopment Analysis (DEA) is extended to stochastic inputs and outputs via probabilistic input-output vector comparisons in a given empirical production (possibility) set. In contrast to other approaches that have used chance-constrained programming formulations in DEA, the emphasis here is on joint chance constraints. An assumption of arbitrary but known probability distributions leads to the P-Model of chance-constrained programming. A necessary condition for a DMU to be stochastically efficient and a sufficient condition for a DMU to be non-stochastically efficient are provided. Deterministic equivalents using the zero-order decision rules of chance-constrained programming and multivariate normal distributions take the form of an extended version of the additive model of DEA. Connections are also maintained with all of the other presently available deterministic DEA models in the form of easily identified extensions which can be used to formalize the treatment of efficiency when stochastic elements are present.

18.
Quantile aggregation (or ‘Vincentization’) is a simple and intuitive way of combining probability distributions, originally proposed by S.B. Vincent in 1912. In certain cases, such as under Gaussianity, the Vincentized distribution belongs to the same family as that of the individual distributions and it can be obtained by averaging the individual parameters. This article compares the properties of quantile aggregation with those of the forecast combination schemes normally adopted in the econometric forecasting literature, based on linear or logarithmic averages of the individual densities. Analytical results and Monte Carlo experiments indicate that the properties of quantile aggregation are between those of the linear and the logarithmic pool. Larger differences among the combination schemes occur when there are biases in the individual forecasts: in that case quantile aggregation seems preferable on the whole. The practical usefulness of Vincentization is illustrated empirically in the context of linear forecasting models for Italian GDP and quantile predictions of euro area inflation.
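A small numerical sketch of the two combination schemes for a pair of normal forecast densities (illustrative numbers only): quantile aggregation averages the quantile functions, while the linear pool averages the densities.

```python
# Illustrative only: Vincentization versus the linear opinion pool for N(-1,1) and N(1,1).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

p = np.linspace(0.01, 0.99, 99)
q1 = norm.ppf(p, loc=-1, scale=1)
q2 = norm.ppf(p, loc=1, scale=1)

# Vincentization: average the quantile functions. Under Gaussianity this equals N(0, 1),
# i.e. the distribution obtained by averaging the parameters.
vincent = 0.5 * q1 + 0.5 * q2
print("Vincentized 5%/50%/95% quantiles:", vincent[[4, 49, 94]].round(2))

# Linear pool: the mixture 0.5*N(-1,1) + 0.5*N(1,1); its quantiles are more spread out.
def mix_cdf(x):
    return 0.5 * norm.cdf(x, -1, 1) + 0.5 * norm.cdf(x, 1, 1)

lin = [brentq(lambda x, t=t: mix_cdf(x) - t, -10, 10) for t in (0.05, 0.5, 0.95)]
print("Linear-pool 5%/50%/95% quantiles:", np.round(lin, 2))
```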

19.
Multivariate frailty approaches are most commonly used to define distributions of random vectors representing lifetimes of individuals or components, and to compare them stochastically in terms of various multivariate orders. In this paper, we study a multivariate shared reversed frailty model and a general multivariate reversed frailty mixture model, and derive sufficient conditions for some of the stochastic orderings to hold among the random vectors. We also consider a particular case of the general multivariate mixture model in which the baseline distribution function is represented in terms of a copula, and study stochastic comparisons (stochastic and lower orthant order) between the two random vectors.

20.
Characterizing systems of distributions by quantile measures
Modelling an empirical distribution by means of a simple theoretical distribution is an interesting issue in applied statistics. A reasonable first step in this modelling process is to demand that measures of location, dispersion, skewness and kurtosis for the two distributions coincide. Up to now, the four measures used for this purpose have been based on moments.
In this paper, measures based on quantiles are considered instead. Of course, the four values of these quantile measures do not uniquely determine the modelling distribution. They do, however, within specific systems of distributions, such as Pearson's or Johnson's; they share this property with the four moment-based measures.
This opens the possibility of modelling an empirical distribution, within a specific system, by means of quantile measures. Since moment-based measures are sensitive to outliers, this approach may lead to a better fit. Further, tests of fit (e.g. a test for normality) may be constructed based on quantile measures. In view of the robustness property, these tests may achieve higher power than the classical moment-based tests.
For both applications the limiting joint distribution of the quantile measures is needed; it is derived here as well.
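As an illustration of what quantile-based measures can look like, and of their robustness to outliers, here is a small sketch using Bowley's skewness and Moors' kurtosis as common examples; the paper's specific measures and the derivation of their limiting joint distribution are not reproduced.

```python
# Illustrative only: quantile-based location, dispersion, skewness and kurtosis
# computed from octiles, contrasted with the outlier-sensitive moment skewness.
import numpy as np

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(size=1000), [50.0]])      # one gross outlier

e1, q1, e3, med, e5, q3, e7 = np.percentile(x, [12.5, 25, 37.5, 50, 62.5, 75, 87.5])

location = med
dispersion = q3 - q1                                      # interquartile range
skewness = (q3 + q1 - 2 * med) / (q3 - q1)                # Bowley's skewness
kurtosis = ((e7 - e5) + (e3 - e1)) / (q3 - q1)            # Moors' kurtosis
print(round(location, 3), round(dispersion, 3), round(skewness, 3), round(kurtosis, 3))

# The moment-based skewness, by contrast, is dominated by the single outlier:
print("moment skewness:", round(((x - x.mean()) ** 3).mean() / x.std() ** 3, 2))
```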
