Similar Articles
20 similar articles found.
1.
This article considers the problem of estimating the cell frequencies in a contingency table under inequality constraints. Algorithms are proposed for cell frequency estimation via minimizing the Kullback–Leibler distance subject to inequality constraints. The proposed algorithms are shown to be simple, easy to use, fast, and reliable. Theorems are derived to guarantee the convergence of the algorithms. Applications and extensions of the algorithms are provided for problems more general than contingency tables. The R programs that implement the proposed algorithms are presented in Appendix B.
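The paper's own iterative algorithms and R code are not reproduced here; as a generic sketch of the underlying optimization problem, a standard solver can minimize the Kullback–Leibler distance from the observed cell proportions subject to linear inequality constraints. The toy table and order restriction below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def kl_fit(observed, A, b):
    """Minimize KL(p || q) over probability vectors p subject to A @ p <= b,
    where q is the observed (normalized) cell-frequency vector."""
    q = observed / observed.sum()
    n = len(q)

    def kl(p):
        p = np.clip(p, 1e-12, None)
        return float(np.sum(p * np.log(p / q)))

    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},   # cells sum to one
        {"type": "ineq", "fun": lambda p: b - A @ p},     # A p <= b
    ]
    res = minimize(kl, q.copy(), bounds=[(1e-9, 1.0)] * n, constraints=cons)
    return res.x

# Toy 2x2 table (flattened) with the order restriction p[0] <= p[1]
obs = np.array([40.0, 30.0, 20.0, 10.0])
A = np.array([[1.0, -1.0, 0.0, 0.0]])   # encodes p0 - p1 <= 0
b = np.array([0.0])
p_hat = kl_fit(obs, A, b)
```

The constrained fit pools probability mass only where the observed proportions violate the restriction, which is the characteristic behavior of a KL (I-)projection.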

2.
Journal of Econometrics, 2002, 108(2): 317–342
This paper proposes the use of the bootstrap for the most commonly applied procedures in inequality, mobility and poverty measurement. In addition to simple inequality index estimation the scenarios considered are inequality difference tests for correlated data, decompositions by sub-group or income source, decompositions of inequality changes, and mobility index and poverty index estimation. Besides showing the consistency of the bootstrap for these scenarios, the paper also develops simple ways to deal with longitudinal correlation and panel attrition or non-response. In principle, all the proposed procedures can be handled by the δ-method, but Monte Carlo evidence suggests that the simplest possible bootstrap procedure should be the preferred method in practice, as it achieves the same accuracy as the δ-method and takes into account the stochastic dependencies in the data without explicitly having to deal with their covariance structure. If a variance estimate is available, then the studentized version of the bootstrap may lead to an improvement in accuracy, but substantially so only for relatively small sample sizes. All results incorporate the possibility that different observations have different sampling weights.
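As a minimal illustration of the simple (non-studentized) bootstrap for an inequality index, the sketch below resamples observations together with their sampling weights and reads off percentile confidence bounds for a weighted Gini coefficient; the Gini formulation, data, and all names are illustrative, not the paper's.

```python
import numpy as np

def gini(x, w=None):
    """Weighted Gini coefficient from the weighted Lorenz curve
    (one common plug-in formulation)."""
    x = np.asarray(x, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    order = np.argsort(x)
    x, w = x[order], w[order]
    p = np.concatenate([[0.0], np.cumsum(w) / w.sum()])
    lorenz = np.concatenate([[0.0], np.cumsum(x * w) / np.sum(x * w)])
    # trapezoidal area under the Lorenz curve
    area = np.sum((p[1:] - p[:-1]) * (lorenz[1:] + lorenz[:-1]) / 2.0)
    return 1.0 - 2.0 * area

def bootstrap_ci(x, w, stat, B=300, alpha=0.05, seed=0):
    """Simple percentile bootstrap: resample observations jointly
    with their sampling weights."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.array([stat(x[idx], w[idx])
                     for idx in rng.integers(0, n, size=(B, n))])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
inc = rng.exponential(1.0, 300)   # toy income sample (true Gini = 0.5)
wts = np.ones(300)
point = gini(inc, wts)
lo, hi = bootstrap_ci(inc, wts, gini)
```

Resampling whole observations (income and weight together) is what lets the bootstrap absorb the dependence structure without an explicit covariance estimate.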

3.
Bayesian model selection using encompassing priors
This paper deals with Bayesian selection of models that can be specified using inequality constraints among the model parameters. The concept of encompassing priors is introduced, that is, a prior distribution for an unconstrained model from which the prior distributions of the constrained models can be derived. It is shown that the Bayes factor for the encompassing and a constrained model has an appealing interpretation: it is the ratio of the proportions of the encompassing model's posterior and prior distributions that are in agreement with the constrained model. It is also shown that, for a specific class of models, selection based on encompassing priors will render a virtually objective selection procedure. The paper concludes with three illustrative examples: an analysis of variance with ordered means; a contingency table analysis with ordered odds-ratios; and a multilevel model with ordered slopes.
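That ratio interpretation lends itself to a direct Monte Carlo sketch: draw from the encompassing prior and from the corresponding posterior, then compare the proportions of draws satisfying the constraint. The toy two-means setup, prior scale, and seed below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two groups whose means are compared under the constraint mu1 < mu2
n = 50
y1 = rng.normal(0.2, 1.0, n)
y2 = rng.normal(0.8, 1.0, n)

# Encompassing (unconstrained) prior: mu_j ~ N(0, 10^2), independently
tau2 = 10.0 ** 2
prior = rng.normal(0.0, 10.0, size=(100_000, 2))

# Conjugate posterior for each mean (error variance known and equal to 1)
post_var = 1.0 / (1.0 / tau2 + n)
post_mean = post_var * np.array([y1.sum(), y2.sum()])
post = rng.normal(post_mean, np.sqrt(post_var), size=(100_000, 2))

# Bayes factor of the constrained vs. the encompassing model:
# proportion of posterior draws satisfying the constraint
# over the corresponding prior proportion
c_prior = np.mean(prior[:, 0] < prior[:, 1])   # ~0.5 by symmetry
c_post = np.mean(post[:, 0] < post[:, 1])
bf = c_post / c_prior
```

Because the constraint mu1 < mu2 splits the symmetric encompassing prior in half, the prior proportion is about 0.5, and data consistent with the ordering push the posterior proportion (and hence the Bayes factor) up.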

4.
An important aspect of applied research is the assessment of the goodness-of-fit of an estimated statistical model. In the analysis of contingency tables, this usually involves determining the discrepancy between observed and estimated frequencies using the likelihood-ratio statistic. In models with inequality constraints, however, the asymptotic distribution of this statistic depends on the unknown model parameters and, as a result, there no longer exists a unique p-value. Bootstrap p-values obtained by replacing the unknown parameters by their maximum likelihood estimates may also be inaccurate, especially if many of the imposed inequality constraints are violated in the available sample. We describe the various problems associated with the use of asymptotic and bootstrap p-values and propose the use of Bayesian posterior predictive checks as a better alternative for assessing the fit of log-linear models with inequality constraints.
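As a minimal illustration of a posterior predictive check (in a far simpler setting than the constrained log-linear models of the paper), one draws replicated data from the posterior predictive distribution and computes a tail proportion for a discrepancy statistic. The binomial toy model and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: y ~ Binomial(n, theta) with a flat Beta(1, 1) prior
y_obs, n = 7, 20
theta_post = rng.beta(1 + y_obs, 1 + n - y_obs, size=10_000)  # conjugate posterior
y_rep = rng.binomial(n, theta_post)   # posterior predictive draws
ppp = np.mean(y_rep >= y_obs)         # posterior predictive p-value for T(y) = y
```

Unlike the problematic p-values discussed in the abstract, the posterior predictive p-value averages over the posterior rather than plugging in a single parameter estimate; values near 0 or 1 signal misfit.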

5.
Matti Langel, Yves Tillé. Metrika, 2012, 75(8): 1093–1110
Zenga’s new inequality curve and index are two recent tools for measuring inequality. Proposed in 2007, they should not be mistaken for earlier measures suggested by the same author. This paper focuses on the new measures only, which are hereafter referred to simply as the Zenga curve and Zenga index. The Zenga curve Z(α) involves the ratio of the mean income of the 100α% poorest to that of the 100(1−α)% richest. The Zenga index can also be expressed by means of the Lorenz curve, and some of its properties make it an interesting alternative to the Gini index. Like most other inequality measures, inference on the Zenga index is not straightforward. Some research on its properties and on estimation has already been conducted, but inference in the sampling framework is still needed. In this paper, we propose an estimator and variance estimator for the Zenga index when estimated from a complex sampling design. The proposed variance estimator is based on linearization techniques and more specifically on the direct approach presented by Demnati and Rao. The quality of the resulting estimators is evaluated in Monte Carlo simulation studies on real sets of income data. Finally, the advantages of the Zenga index relative to the Gini index are discussed.
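A plug-in sketch of the Zenga curve for a simple random sample follows, ignoring the complex-design and linearization machinery of the paper; the discretization at the floor of αn is an arbitrary illustrative choice.

```python
import numpy as np

def zenga_curve(income, alpha):
    """Z(alpha) = 1 - (mean income of the poorest 100*alpha %)
                    / (mean income of the richest 100*(1-alpha) %)."""
    x = np.sort(np.asarray(income, dtype=float))
    k = max(1, int(np.floor(alpha * len(x))))
    lower_mean = x[:k].mean()
    upper_mean = x[k:].mean() if k < len(x) else x.mean()
    return 1.0 - lower_mean / upper_mean

z_equal = zenga_curve(np.ones(10), 0.5)                   # no inequality -> 0
z_skew = zenga_curve(np.array([1.0, 1.0, 1.0, 9.0]), 0.5)
```

With equal incomes the two means coincide and Z(α) = 0 at every α; as the top incomes pull away from the bottom, the ratio shrinks and Z(α) approaches 1.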

6.
Comparison of Sampling Schemes for Dynamic Linear Models
Hyperparameter estimation in dynamic linear models leads to inference that is not available analytically. The most common approach in recent work is MCMC approximation. A number of sampling schemes proposed in the literature, differing mainly in their blocking structure, are compared. In this paper, comparison between the most common schemes is performed in terms of different efficiency criteria, including efficiency ratio and processing time. A sample of time series was simulated to reflect different relevant features such as series length and system volatility.

7.
This paper presents estimation methods and asymptotic theory for the analysis of a nonparametrically specified conditional quantile process. Two estimators based on local linear regressions are proposed. The first estimator applies simple inequality constraints while the second uses rearrangement to maintain quantile monotonicity. The bandwidth parameter is allowed to vary across quantiles to adapt to data sparsity. For inference, the paper first establishes a uniform Bahadur representation and then shows that the two estimators converge weakly to the same limiting Gaussian process. As an empirical illustration, the paper considers a dataset from Project STAR and delivers two new findings.
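The rearrangement step has a one-line core: sorting the estimated quantiles in τ removes quantile crossing without changing the pointwise limit. A toy sketch follows; the crossing values are made up for illustration.

```python
import numpy as np

def rearrange(q_hat):
    """Monotonize estimated conditional quantiles by sorting them in tau."""
    return np.sort(np.asarray(q_hat, dtype=float))

taus = [0.1, 0.25, 0.5, 0.75, 0.9]
raw = np.array([1.2, 1.1, 1.5, 1.4, 2.0])   # crossings at tau = 0.25 and 0.75
mono = rearrange(raw)
```

Sorting is a contraction, so the rearranged curve is never further from the true quantile function than the raw estimate, which is why both estimators can share the same limiting Gaussian process.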

8.
Multivariate continuous time models are now widely used in economics and finance. Empirical applications typically rely on some process of discretization so that the system may be estimated with discrete data. This paper introduces a framework for discretizing linear multivariate continuous time systems that includes the commonly used Euler and trapezoidal approximations as special cases and leads to a general class of estimators for the mean reversion matrix. Asymptotic distributions and bias formulae are obtained for estimates of the mean reversion parameter. Explicit expressions are given for the discretization bias and its relationship to estimation bias in both multivariate and univariate settings. In the univariate context, we compare the performance of the two approximation methods relative to exact maximum likelihood (ML) in terms of bias and variance for the Vasicek process. The bias and the variance of the Euler method are found to be smaller than those of the trapezoidal method, which are in turn smaller than those of exact ML. Simulations suggest that when the mean reversion is slow, the approximation methods work better than ML, the bias formulae are accurate, and for scalar models the estimates obtained from the two approximate methods have smaller bias and variance than exact ML. For the square root process, the Euler method outperforms the Nowman method in terms of both bias and variance. Simulation evidence indicates that the Euler method has smaller bias and variance than exact ML, Nowman's method and the Milstein method.
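For the univariate Vasicek case, the Euler estimator of the mean-reversion parameter reduces to a transformation of an AR(1) slope: the Euler scheme implies a slope of 1 − κΔ, while the exact discretization implies e^{−κΔ}. A simulation sketch follows; parameter values and the seed are arbitrary choices, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exact simulation of the Vasicek process dX = kappa*(mu - X) dt + sigma dW
kappa, mu, sigma, d, T = 2.0, 0.05, 0.1, 1.0 / 52, 4000
e = np.exp(-kappa * d)
sd = sigma * np.sqrt((1.0 - e**2) / (2.0 * kappa))  # exact transition st. dev.
x = np.empty(T)
x[0] = mu
for t in range(T - 1):
    x[t + 1] = mu + (x[t] - mu) * e + sd * rng.standard_normal()

# AR(1) slope of X_{t+1} on X_t (demeaned OLS)
xc, yc = x[:-1] - x[:-1].mean(), x[1:] - x[1:].mean()
b = (xc @ yc) / (xc @ xc)

kappa_euler = (1.0 - b) / d    # Euler discretization: b ~ 1 - kappa*d
kappa_exact = -np.log(b) / d   # exact discretization: b ~ exp(-kappa*d)
```

Since −log b > 1 − b for any slope in (0, 1), the Euler estimate is always below the exact-discretization estimate at the same fitted slope, which is the source of the systematic discretization bias the paper characterizes.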

9.
Journal of Econometrics, 1987, 34(3): 355–359
Wales and Woodland (1983) have proposed an econometric model to deal with non-negativity constraints in systems of demand equations. This paper points out the relationship between the Wales-Woodland model and the simultaneous equation/limited dependent variable model of Amemiya (1974). This relationship is important because Amemiya has proposed a simple estimation procedure that can be utilized for some cases of the Wales-Woodland model. The issue of internal consistency (or coherency) for models of this type is discussed. I show that internal consistency for the Wales-Woodland model is equivalent to the second-order condition for systems of demand equations without binding quantity constraints.

10.
This paper derives limit distributions of empirical likelihood estimators for models in which inequality moment conditions provide overidentifying information. We show that the use of this information leads to a reduction of the asymptotic mean-squared estimation error and propose asymptotically uniformly valid tests and confidence sets for the parameters of interest. While inequality moment conditions arise in many important economic models, we use a dynamic macroeconomic model as a data generating process and illustrate our methods with instrumental variable estimators of monetary policy rules. The results obtained in this paper extend to conventional GMM estimators.

11.
Macro‐integration is the process of combining data from several sources at an aggregate level. We review a Bayesian approach to macro‐integration with special emphasis on the inclusion of inequality constraints. In particular, an approximate method of dealing with inequality constraints within the linear macro‐integration framework is proposed. This method is based on a normal approximation to the truncated multivariate normal distribution. The framework is then applied to the integration of international trade statistics and transport statistics. By combining these data sources, transit flows can be derived as differences between specific transport and trade flows. Two methods of imposing the inequality restrictions that transit flows must be non‐negative are compared. Moreover, the figures are improved by imposing the equality constraints that aggregates of incoming and outgoing transit flows must be equal.
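A toy version of the non-negativity restriction: reparameterize transit flows as s = transport − trade and fit the measurements by bounded least squares so that s ≥ 0 is imposed directly. This is a crude stand-in for the truncated-normal approximation developed in the paper, and all numbers below are invented.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Measured transport (t) and trade (d) flows for two routes
t_meas = np.array([10.0, 5.0])
d_meas = np.array([11.0, 3.0])

# Unknowns z = [s1, s2, d1, d2] with transit s = t - d, so t = d + s.
# Fit both measurement sets in least squares subject to s >= 0.
A = np.block([[np.eye(2), np.eye(2)],          # rows matching t_meas = s + d
              [np.zeros((2, 2)), np.eye(2)]])  # rows matching d_meas = d
y = np.concatenate([t_meas, d_meas])
lower = np.concatenate([np.zeros(2), -np.inf * np.ones(2)])
res = lsq_linear(A, y, bounds=(lower, np.inf))
s_hat, d_hat = res.x[:2], res.x[2:]
```

On the first route the raw difference is negative (10 − 11), so the constraint binds and the adjustment is split between the two measurements; on the second route the raw figures already satisfy the restriction and are left unchanged.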

12.
Several authors have proposed stochastic and non‐stochastic approximations to the maximum likelihood estimate (MLE) for Gibbs point processes in modelling spatial point patterns with pairwise interactions. The approximations are necessary because of the difficulty of evaluating the normalizing constant. In this paper, we first provide a review of methods which yield crude approximations to the MLE. We also review methods based on Markov chain Monte Carlo techniques for which exact MLE has become feasible. We then present a comparative simulation study of the performance of such methods of estimation based on two simulation techniques, the Gibbs sampler and the Metropolis‐Hastings algorithm, carried out for the Strauss model.

13.
In this paper we present a crisp-input/fuzzy-output regression model based on the rationale of the generalized maximum entropy (GME) method. The approach can be used in situations in which one has to handle particular problems, such as small samples, an ill-posed design matrix (e.g., due to multicollinearity), or estimation problems with inequality constraints. After describing the GME-fuzzy regression model, we consider an economic case study in which the features provided by the GME approach are evaluated. Moreover, we also perform a sensitivity analysis on the main results of the case study in order to better evaluate some features of the model. Finally, some critical points are discussed together with suggestions for further work.

14.
The article presents an algorithm for linear regression computations subject to linear parametric equality constraints, linear parametric inequality constraints, or a mixture of the two. No rank conditions are imposed on the regression specification or the constraint specification. The algorithm requires a full Moore-Penrose g-inverse which entails extra computational effort relative to other orthonormalization-type algorithms. In exchange, auxiliary statistical information is generated: feasibility of a set of constraints may be checked, estimability of a linear parametric function may be checked, and bias and variance may be decomposed by source.
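The article's g-inverse algorithm is not available as a library routine, but the special case of non-negativity constraints gives the flavor of inequality-constrained least squares. The design, coefficients, and seed below are made up for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Toy regression subject to the inequality constraints beta >= 0
X = rng.normal(size=(200, 3))
beta_true = np.array([1.5, 0.0, 0.7])
y = X @ beta_true + 0.1 * rng.standard_normal(200)

beta_hat, rnorm = nnls(X, y)   # least squares subject to beta >= 0
```

Coefficients whose unconstrained estimates would go negative are clamped to the boundary (here the middle one), while the active coefficients stay close to ordinary least squares.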

15.
This paper develops a new approach to the estimation of consumer demand models with unobserved heterogeneity subject to revealed preference inequality restrictions. Particular attention is given to nonseparable heterogeneity. The inequality restrictions are used to identify bounds on counterfactual demand. A nonparametric estimator for these bounds is developed and asymptotic properties are derived. An empirical application using data from the UK Family Expenditure Survey illustrates the usefulness of the methods.

16.
We consider estimation of nonparametric structural models under a functional coefficient representation for the regression function. Under this representation, models are linear in the endogenous components with coefficients given by unknown functions of the predetermined variables, a nonparametric generalization of random coefficient models. The functional coefficient restriction is an intermediate approach between fully nonparametric structural models, which are ill posed when endogenous variables are continuously distributed, and partially linear models, over which it has appreciable flexibility. We propose two-step estimators that use local linear approximations in both steps. The first step is to estimate a vector of reduced forms of regression models and the second step is local linear regression using the estimated reduced forms as regressors. Our large sample results include consistency and asymptotic normality of the proposed estimators. The practical power of the estimators is illustrated via both a Monte Carlo simulation study and an application to returns to education.

17.
This paper develops a new model for the analysis of stochastic volatility (SV) models. Since volatility is a latent variable in SV models, it is difficult to evaluate the exact likelihood. In this paper, a non-linear filter which yields the exact likelihood of SV models is employed. Solving a series of integrals in this filter by piecewise linear approximations with randomly chosen nodes produces the likelihood, which is maximized to obtain estimates of the SV parameters. A smoothing algorithm for volatility estimation is also constructed. Monte Carlo experiments show that the method performs well with respect to both parameter estimates and volatility estimates. We illustrate our model by analysing daily stock returns on the Tokyo Stock Exchange. Since the method can be applied to more general models, the SV model is extended so that several characteristics of daily stock returns are allowed, and this more general model is also estimated. Copyright © 1999 John Wiley & Sons, Ltd.

18.
PRE-TEST ESTIMATION AND TESTING IN ECONOMETRICS: RECENT DEVELOPMENTS
Abstract. This paper surveys a range of important developments in the area of preliminary-test inference in the context of econometric modelling. Both pre-test estimation and pre-test testing are discussed. Special attention is given to recent contributions and results. These include analyses of pre-test strategies under model mis-specification and generalised regression errors; exact sampling distribution results; and pre-testing inequality constraints on the model's parameters. In many cases, practical advice is given to assist applied econometricians in appraising the relative merits of pre-testing. It is shown that there are situations where pre-testing can be advantageous in practice.

19.
A new class of forecasting models is proposed that extends the realized GARCH class of models through the inclusion of option prices to forecast the variance of asset returns. The VIX is used to approximate option prices, resulting in a set of cross-equation restrictions on the model's parameters. The full model is characterized by a nonlinear system of three equations containing asset returns, the realized variance, and the VIX, with estimation of the parameters based on maximum likelihood methods. The forecasting properties of the new class of forecasting models, as well as a number of special cases, are investigated and applied to forecasting the daily S&P500 index realized variance using intra-day and daily data from September 2001 to November 2017. The forecasting results provide strong support for including the realized variance and the VIX to improve variance forecasts, with linear conditional variance models performing well for short-term one-day-ahead forecasts, whereas log-linear conditional variance models tend to perform better for intermediate five-day-ahead forecasts.

20.
This paper examines identification and estimation in recursive linear models. After developing the main result on identification of recursive models, the paper considers estimation in models subject to overidentifying constraints. A particularly simple, but quite general and efficient, approach to estimating constrained recursive models is developed.
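In a recursive (triangular) system with a diagonal error covariance, equation-by-equation OLS in causal order already identifies the coefficients; the sketch below illustrates this textbook special case. The system and parameter values are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Recursive system: y1 = 1 + 0.5 x + e1;  y2 = 2 + 0.8 y1 + e2
x = rng.normal(size=n)
e1, e2 = rng.normal(size=n), rng.normal(size=n)   # independent errors
y1 = 1 + 0.5 * x + e1
y2 = 2 + 0.8 * y1 + e2

def ols(X, y):
    """OLS with an intercept, returning [intercept, slope(s)]."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# Estimating the equations in causal order: independence of e1 and e2
# makes y1 a valid regressor in the second equation.
b1 = ols(x, y1)
b2 = ols(y1, y2)
```

With correlated errors across equations, the second regression would be inconsistent and instrumental variables or system methods would be needed; the diagonal-covariance assumption is what makes the recursive structure identifying.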
