Similar Articles
20 similar articles found (search time: 27 ms)
1.
This paper reviews methods for handling complex sampling schemes when analysing categorical survey data. It is generally assumed that the complex sampling scheme does not affect the specification of the parameters of interest, only the methodology for making inference about these parameters. The organisation of the paper is loosely chronological. Contingency table data are emphasised first before moving on to the analysis of unit-level data. Weighted least squares methods, introduced in the mid 1970s along with methods for two-way tables, receive early attention. They are followed by more general methods based on maximum likelihood, particularly pseudo maximum likelihood estimation. Point estimation methods typically involve the use of survey weights in some way. Variance estimation methods are described in broad terms. There is a particular emphasis on methods of testing. The main modelling methods considered are log-linear models, logit models, generalised linear models and latent variable models. There is no coverage of multilevel models.
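
To make the pseudo maximum likelihood idea concrete, here is a minimal sketch in which each unit's log-likelihood contribution to a logit model is multiplied by its survey weight. The simulated data, weights, and starting values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative data: binary outcome y, covariates X, survey weights w
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
w = rng.uniform(0.5, 3.0, size=n)  # inverse-inclusion-probability weights (assumed)

def neg_pseudo_loglik(beta):
    """Weighted (pseudo) log-likelihood for a logit model:
    -sum_i w_i * [y_i*eta_i - log(1 + exp(eta_i))]."""
    eta = X @ beta
    return -np.sum(w * (y * eta - np.logaddexp(0.0, eta)))

beta_hat = minimize(neg_pseudo_loglik, x0=np.zeros(2), method="BFGS").x
print("pseudo-MLE:", beta_hat)
```

Design-based variance estimation (e.g. linearisation or replication weights) would then replace the naive inverse Hessian; that step is omitted here.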

2.
In the context of allocation models with vector autoregressive errors we propose a convenient procedure, based on the Lagrange multiplier principle, for testing any possible combination of absence of serial correlation, homogeneity, and symmetry against any possible alternative which specifies autocorrelation of an arbitrary given order. We also derive generic expressions for the maximum likelihood estimation of the models under six possible combinations of constraints. The methodology is illustrated with the Rotterdam model and the differential AIDS model, both estimated from the same quarterly British data.
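
The Lagrange multiplier principle at work can be illustrated with the single-equation Breusch-Godfrey test for absence of serial correlation, a far simpler cousin of the multivariate allocation-system tests in the paper. The simulated data and lag order below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

def lm_serial_corr(y, X, q=4):
    """Breusch-Godfrey LM test for serial correlation of order q:
    regress OLS residuals on X and their own lags; LM = n * R^2 ~ chi2(q)."""
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]          # OLS residuals
    lags = np.column_stack([np.r_[np.zeros(j), u[:-j]] for j in range(1, q + 1)])
    Z = np.column_stack([X, lags])                            # augmented regressors
    fit = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    r2 = 1.0 - np.sum((u - fit) ** 2) / np.sum((u - u.mean()) ** 2)
    return u.size * r2

print("LM statistic:", lm_serial_corr(y, X))
```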

3.
We consider efficient methods for likelihood inference applied to structural models. In particular, we introduce a particle filter method which concentrates upon disturbances in the Markov state of the approximating solution to the structural model. A particular feature of such models is that the conditional distribution of interest for the disturbances is often multimodal. We provide a fast and effective method for approximating such distributions. We estimate a neoclassical growth model using this approach. An asset pricing model with persistent habits is also considered. The methodology we employ allows many fewer particles to be used than alternative procedures for a given precision.
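
For readers unfamiliar with particle filters, a bootstrap particle filter for a toy linear-Gaussian state space model shows the generic structure (propagate, weight, resample); the paper's disturbance-based filter and its handling of multimodal proposals are refinements of this. All model parameters and the particle initialisation below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state space model:  x_t = phi*x_{t-1} + eta_t, eta_t ~ N(0, q)
#                         y_t = x_t + eps_t,         eps_t ~ N(0, r)
phi, q, r, T, N = 0.9, 0.5, 1.0, 100, 2000

x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), size=T)

def pf_loglik(phi, q, r):
    """Bootstrap particle filter estimate of the log-likelihood."""
    particles = rng.normal(0.0, 1.0, size=N)   # crude initialisation (assumed)
    loglik = 0.0
    for t in range(T):
        particles = phi * particles + rng.normal(0.0, np.sqrt(q), size=N)  # propagate
        logw = -0.5 * (np.log(2 * np.pi * r) + (y[t] - particles) ** 2 / r)  # weight
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())          # log p(y_t | y_{1:t-1}) estimate
        particles = rng.choice(particles, size=N, p=w / w.sum())             # resample
    return loglik

print("log-likelihood estimate:", pf_loglik(phi, q, r))
```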

4.
We consider a semiparametric method to estimate logistic regression models in which both covariates and the outcome variable may be missing, and propose two new estimators. The first, which is based solely on the validation set, is an extension of the validation likelihood estimator of Breslow and Cain (Biometrika 75:11–20, 1988). The second is a joint conditional likelihood estimator based on the validation and non-validation data sets. Both estimators are semiparametric as they require neither model assumptions about the missing data mechanism nor specification of the conditional distribution of the missing covariates given the observed covariates. The asymptotic distribution theory is developed under the assumption that all covariate variables are categorical. The finite-sample properties of the proposed estimators are investigated through simulation studies, which show that the joint conditional likelihood estimator is the most efficient. A cable TV survey data set from Taiwan is used to illustrate the practical use of the proposed methodology.

5.
This paper considers estimation of censored panel-data models with individual-specific slope heterogeneity. The slope heterogeneity may be random (random slopes model) or related to covariates (correlated random slopes model). Maximum likelihood and censored least-absolute deviations estimators are proposed for both models. The estimators are simple to implement and, in the case of maximum likelihood, lead to straightforward estimation of partial effects. The rescaled bootstrap suggested by Andrews (Econometrica 2000; 68: 399–405) is used to deal with the possibility of variance parameters being equal to zero. The methodology is applied to an empirical study of Dutch household portfolio choice, where the outcome variable (portfolio share in safe assets) has corner solutions at zero and one. As predicted by economic theory, there is strong evidence of correlated random slopes for the age profiles, indicating a heterogeneous age profile of portfolio adjustment that varies significantly with other household characteristics.

6.
We consider the dynamic factor model and show how smoothness restrictions can be imposed on factor loadings by using cubic spline functions. We develop statistical procedures based on Wald, Lagrange multiplier and likelihood ratio tests for this purpose. The methodology is illustrated by analyzing a newly updated monthly time series panel of US term structure of interest rates. Dynamic factor models with and without smooth loadings are compared with dynamic models based on Nelson–Siegel and cubic spline yield curves. We conclude that smoothness restrictions on factor loadings are supported by the interest rate data and can lead to more accurate forecasts.
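
One way to impose smoothness on factor loadings, in the spirit described here, is to write the loading vector as a cubic B-spline basis matrix times a low-dimensional coefficient vector. The maturities, knots, and coefficients below are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative maturities (months) at which yields are observed
maturities = np.array([3, 6, 12, 24, 36, 60, 120], dtype=float)

k = 3                                # cubic splines
interior = np.array([12.0, 48.0])    # assumed interior knots
t = np.concatenate([[maturities[0]] * (k + 1), interior,
                    [maturities[-1]] * (k + 1)])
n_basis = len(t) - k - 1             # 6 basis functions here

# Design matrix B: column j is the j-th cubic B-spline evaluated at the maturities
B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(maturities)
                     for j in range(n_basis)])

# Smooth loadings: rather than 7 free loadings per factor, estimate the
# n_basis spline coefficients gamma and set  lambda = B @ gamma.
gamma = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1])  # illustrative values
lam = B @ gamma
print(B.shape, lam.round(3))
```

Wald, Lagrange multiplier, or likelihood ratio tests then compare the restricted loadings B @ gamma against unrestricted ones.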

7.
This paper derives an analytic closed-form formula for the cumulative distribution function (cdf) of the composite error of the stochastic frontier analysis (SFA) model. Since cdfs appear frequently in likelihood-based analyses with limited-dependent and qualitative variables, as elegantly shown in the classic book of Maddala (Limited-Dependent and Qualitative Variables in Econometrics, Cambridge University Press, Cambridge, 1983), the proposed formula is useful within the stochastic frontier framework. We apply the formula to the maximum likelihood estimation of SFA models with a censored dependent variable. Simulations show that the finite sample performance of the maximum likelihood estimator of the censored SFA model is very promising. A simple empirical example, modeling reservation wages in Taiwan, illustrates a potential application of the censored SFA.
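
For context, the standard normal/half-normal composite-error density of Aigner, Lovell and Schmidt underlies such likelihoods; the sketch below codes its log-density and a frontier log-likelihood. It deliberately does not reproduce the paper's closed-form cdf, which is the actual contribution, and the log-variance parameterisation is an assumption made for unconstrained optimisation.

```python
import numpy as np
from scipy.stats import norm

def sfa_logpdf(eps, sigma_v, sigma_u):
    """Log-density of the composite error eps = v - u with v ~ N(0, sigma_v^2)
    and u half-normal(sigma_u^2):
    f(eps) = (2/sigma) * phi(eps/sigma) * Phi(-eps*lam/sigma),
    where sigma^2 = sigma_v^2 + sigma_u^2 and lam = sigma_u / sigma_v."""
    sigma = np.hypot(sigma_v, sigma_u)
    lam = sigma_u / sigma_v
    return (np.log(2.0) - np.log(sigma)
            + norm.logpdf(eps / sigma)
            + norm.logcdf(-eps * lam / sigma))

def neg_loglik(params, y, X):
    """Production-frontier log-likelihood for y = X @ beta + v - u."""
    beta, lsv, lsu = params[:-2], params[-2], params[-1]
    eps = y - X @ beta
    return -np.sum(sfa_logpdf(eps, np.exp(lsv), np.exp(lsu)))
```

Censoring the dependent variable replaces the density with the cdf for the censored observations, which is exactly where the paper's formula comes in.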

8.
The majority of nonprofit organisations rely heavily on contributions to fund their mission-critical activities. Because building relationships with donors is critical to the success of nonprofits, organisations must be able to transform their data on prospective donors into an action plan that will optimise the yield of their fundraising efforts. This paper offers a methodology for targeting the individuals most likely to make a charitable contribution, by building custom response models using the rich donor data maintained by many nonprofit organisations as well as overlaid demographic information. The methodology can efficiently handle a large number of variables while remaining flexible enough to use high-cardinality categorical variables, such as postal code, and to capture non-linear relationships between the independent variables and the likelihood of giving.

9.
Bayes factors that do not require prior distributions are proposed for testing one parametric model versus another. These Bayes factors are relatively simple to compute, relying only on maximum likelihood estimates, and are Bayes consistent at an exponential rate for nested models even when the smaller model is true. These desirable properties derive from the use of data splitting. Large sample properties, including consistency, of the Bayes factors are derived, and a simulation study explores practical concerns. The methodology is illustrated with civil engineering data involving compressive strength of concrete.
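
A minimal sketch of the data-splitting idea: fit each model by maximum likelihood on one half of the data, then compare held-out likelihoods as a prior-free "Bayes factor". The models, split, and data below are illustrative assumptions; the paper's exact construction and its consistency theory go beyond this.

```python
import numpy as np
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(2)
x = rng.standard_t(df=5, size=400)   # illustrative data

# Data splitting: estimate on one half, evaluate on the other
train, test = x[:200], x[200:]

# Model 1: normal; Model 2: Student-t (both fitted on the training half)
mu, sd = train.mean(), train.std(ddof=1)
df_hat, loc_hat, scale_hat = student_t.fit(train)

logL1 = norm.logpdf(test, mu, sd).sum()
logL2 = student_t.logpdf(test, df_hat, loc=loc_hat, scale=scale_hat).sum()
print("held-out log 'Bayes factor' (t vs normal):", logL2 - logL1)
```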

10.
This paper uses free-knot and fixed-knot regression splines in a Bayesian context to develop methods for the nonparametric estimation of functions subject to shape constraints in models with log-concave likelihood functions. The shape constraints we consider include monotonicity, convexity and functions with a single minimum. A computationally efficient MCMC sampling algorithm is developed that converges faster than previous methods for non-Gaussian models. Simulation results indicate the monotonically constrained function estimates have good small sample properties relative to (i) unconstrained function estimates, and (ii) function estimates obtained from other constrained estimation methods when such methods exist. Also, asymptotic results show the methodology provides consistent estimates for a large class of smooth functions. Two detailed illustrations exemplify the ideas.

11.
L. Nie, Metrika, 2006, 63(2): 123–143
Generalized linear and nonlinear mixed-effects models are used extensively in the biomedical, social, and agricultural sciences. The statistical analysis of these models is based on the asymptotic properties of the maximum likelihood estimator. However, the consistency of the maximum likelihood estimator is usually assumed without proof. A rigorous proof of consistency obtained by verifying the conditions of existing results can be very difficult because of the integrated likelihood. In this paper, we present some easily verifiable conditions for the strong consistency of the maximum likelihood estimator in generalized linear and nonlinear mixed-effects models. Based on this result, we prove that the maximum likelihood estimator is consistent for some frequently used models such as mixed-effects logistic regression models and growth curve models.

12.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood or several marginal likelihoods for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset models must be computed. The proposed estimators use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require knowledge of the specific structure or form of the sampling algorithm used to generate the MCMC sample. The theoretical properties of the proposed method are examined in detail. The applicability and usefulness of the proposed method are demonstrated via ordinal data probit regression models. A real dataset involving ordinal outcomes is used to further illustrate the proposed methodology.

13.
The construction of an importance density for partially non-Gaussian state space models is crucial when simulation methods are used for likelihood evaluation, signal extraction, and forecasting. The method of efficient importance sampling is successful in this respect, but we show that it can be implemented in a computationally more efficient manner using standard Kalman filter and smoothing methods. Efficient importance sampling is generally applicable for a wide range of models, but it is typically a custom-built procedure. For the class of partially non-Gaussian state space models, we present a general method for efficient importance sampling. Our novel method makes the efficient importance sampling methodology more accessible because it does not require the computation of a (possibly) complicated density kernel that needs to be tracked for each time period. The new method is illustrated for a stochastic volatility model with a Student's t distribution.
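
The core mechanic (build a Gaussian importance density, then average likelihood ratios) can be shown in a one-period toy model; the paper's method constructs such densities for whole state sequences via Kalman filtering and smoothing, which this sketch does not attempt. The model and all tuning choices below are assumptions, and the mode-plus-curvature proposal is a Laplace-style stand-in for the efficient importance sampling construction.

```python
import numpy as np
from scipy.stats import norm, poisson
from scipy.optimize import minimize_scalar

# Toy one-period non-Gaussian model (illustrative):
#   alpha ~ N(0, 1),   y | alpha ~ Poisson(exp(alpha))
y_obs = 4

def neg_log_joint(a):
    return -(norm.logpdf(a) + poisson.logpmf(y_obs, np.exp(a)))

# Gaussian importance density from the mode and curvature of the log joint
a_hat = minimize_scalar(neg_log_joint).x
h = 1e-4
curv = (neg_log_joint(a_hat + h) - 2 * neg_log_joint(a_hat)
        + neg_log_joint(a_hat - h)) / h**2
g = norm(loc=a_hat, scale=1.0 / np.sqrt(curv))

# Importance sampling estimate:  p(y) = E_g[ p(y, alpha) / g(alpha) ]
rng = np.random.default_rng(3)
draws = g.rvs(size=20_000, random_state=rng)
logw = -neg_log_joint(draws) - g.logpdf(draws)
m = logw.max()
print("log p(y) estimate:", m + np.log(np.mean(np.exp(logw - m))))
```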

14.
An econometric methodology is developed for nonparametric estimation of concave production technologies. The methodology, based on the principle of maximum likelihood, uses entropic distance and convex programming techniques to estimate production functions. Empirical applications are presented to demonstrate the feasibility of the methodology in small and large datasets.
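
A minimal sketch of estimating a shape-constrained function by convex programming: least-squares fitted values with non-positive second differences enforce concavity on an equally spaced grid. This swaps the paper's entropic-distance maximum likelihood criterion for a plain least-squares one, so it only illustrates the convex-programming ingredient; the data are simulated assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulated data on an equally spaced grid; the true function is concave
x = np.linspace(0.0, 1.0, 30)
y = np.sqrt(x) + rng.normal(0.0, 0.05, size=x.size)

# Least-squares fitted values theta subject to concavity:
# second differences theta[i+1] - 2*theta[i] + theta[i-1] <= 0
cons = {"type": "ineq",
        "fun": lambda t: -(t[2:] - 2.0 * t[1:-1] + t[:-2])}
res = minimize(lambda t: np.sum((y - t) ** 2), x0=y,
               constraints=[cons], method="SLSQP")
theta_hat = res.x  # concavity-constrained fit
print(theta_hat.round(3))
```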

15.
This paper proposes a new method for estimating a structural model of labour supply in which hours of work depend on (log) wages and the wage rate is considered endogenous. The main innovation with respect to other related estimation procedures is that a nonparametric additive structure in the hours of work equation is permitted. Though the focus of the paper is on this particular application, a three-step methodology for estimating models in the presence of the above econometric problems is described. In the first step the reduced form parameters of the participation equation are estimated by a maximum likelihood procedure adapted for estimation of an additive nonparametric function. In the second step the structural parameters of the wage equation are estimated after obtaining the selection-corrected conditional mean function. Finally, in the third step the structural parameters of the labour supply equation are estimated using local maximum likelihood estimation techniques. The paper concludes with an application to illustrate the feasibility, performance and possible gain of using this method.

16.
In recent years, we have seen an increased interest in the penalized likelihood methodology, which can be used efficiently for shrinkage and selection purposes. This strategy can also result in unbiased, sparse, and continuous estimators. However, the performance of the penalized likelihood approach depends on the proper choice of the regularization parameter, so it is important to select it appropriately. To this end, the generalized cross-validation method is commonly used. In this article, we first propose new estimates of the norm of the error in the generalized linear models framework, through the use of Kantorovich inequalities. These estimates are then used to derive a tuning parameter selector for penalized generalized linear models. Unlike the standard methods, the proposed method does not depend on resampling and therefore yields a considerable gain in computational time while producing improved results. A thorough simulation study supports the theoretical findings, and the penalized methods are compared using the L1, hard thresholding, and smoothly clipped absolute deviation penalty functions, for penalized logistic regression and penalized Poisson regression. A real data example is analyzed, and a discussion follows.
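
For reference, the generalized cross-validation baseline that the article improves upon can be sketched for ridge regression, where GCV(lambda) = n * RSS / (n - df)^2 and df is the trace of the hat matrix. The data and grid are illustrative assumptions; the article's Kantorovich-based selector is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.concatenate([np.ones(3), np.zeros(p - 3)])
y = X @ beta + rng.normal(0.0, 1.0, size=n)

def gcv(lam):
    """Generalized cross-validation score for ridge regression."""
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
    resid = y - H @ y
    df = np.trace(H)                                          # effective dof
    return n * np.sum(resid ** 2) / (n - df) ** 2

grid = np.logspace(-3, 3, 50)
lam_star = grid[np.argmin([gcv(l) for l in grid])]
print("GCV-selected lambda:", lam_star)
```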

17.
This paper considers panel growth regressions in the presence of model uncertainty and reverse causality concerns. For this purpose, my econometric framework combines Bayesian model averaging with a suitable likelihood function for dynamic panel models with weakly exogenous regressors and fixed effects. An application of this econometric methodology to a panel of countries over the 1960–2000 period highlights the difficulties in identifying the sources of economic growth by means of cross-country regressions. In particular, none of the nine candidate regressors considered can be labeled as a robust determinant of economic growth. Moreover, the estimated rate of conditional convergence is indistinguishable from zero.
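
The bookkeeping behind Bayesian model averaging can be sketched with a BIC approximation to the marginal likelihood in a simple cross-sectional regression; the paper's actual contribution, a likelihood for dynamic panels with weakly exogenous regressors and fixed effects, is not attempted here. The simulated data and the flat prior over models are assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
n, p = 200, 4
X = rng.normal(size=(n, p))
y = X[:, 0] * 1.0 + rng.normal(size=n)

def bic(cols):
    """BIC of an OLS model with intercept and the given columns."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return n * np.log(np.mean(resid ** 2)) + Z.shape[1] * np.log(n)

# Enumerate all subsets; BIC approximates -2 log marginal likelihood,
# so exp(-BIC/2) gives unnormalized posterior model weights (flat prior).
models = [c for r in range(p + 1) for c in combinations(range(p), r)]
bics = np.array([bic(m) for m in models])
w = np.exp(-(bics - bics.min()) / 2.0)
w /= w.sum()

# Posterior inclusion probability of each regressor
pip = [sum(wt for wt, m in zip(w, models) if j in m) for j in range(p)]
print("posterior inclusion probabilities:", np.round(pip, 3))
```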

18.
The class of p2 models is suitable for modeling binary relation data in social network analysis. A p2 model is essentially a regression model for bivariate binary responses, featuring within-dyad dependence and correlated crossed random effects to represent heterogeneity of actors. Despite some desirable properties, these models are used less frequently in empirical applications than other models for network data. One possible reason is the model's limited capacity to account for (and explicitly model) structural dependence beyond the dyad, as can be done in exponential random graph models. Another reason, however, may lie in the computational difficulty of estimating such models with the methods proposed in the literature, such as joint maximization methods and Bayesian methods. The aim of this article is to investigate maximum likelihood estimation based on the Laplace approximation approach, which can be refined by importance sampling. Practical implementation of such methods can be performed in an efficient manner, and the article provides details on a software implementation using R. Numerical examples and simulation studies illustrate the methodology.
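
The Laplace approximation itself is easy to demonstrate on a random-intercept logistic model, a much simpler cousin of the p2 model with its crossed random effects: each cluster's intractable integral over the random effect is replaced by a Gaussian approximation around the mode of the log joint. Everything below (model, data, parameterisation) is an illustrative assumption, written in plain Python rather than the article's R implementation.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(8)

# Toy random-intercept logistic model:
#   b_i ~ N(0, tau^2),   logit P(y_ij = 1) = mu + b_i
m, n_i, mu_true, tau_true = 50, 10, -0.3, 1.0
b = rng.normal(0.0, tau_true, size=m)
p = 1.0 / (1.0 + np.exp(-(mu_true + b)))
y = rng.binomial(1, p[:, None], size=(m, n_i))

def neg_marginal_loglik(params):
    """Laplace-approximated marginal log-likelihood, cluster by cluster."""
    mu, tau = params[0], np.exp(params[1])
    total = 0.0
    for i in range(m):
        def nlj(bi):  # negative log joint density of (y_i, b_i)
            eta = mu + bi
            return -(np.sum(y[i] * eta - np.logaddexp(0.0, eta))
                     - 0.5 * np.log(2 * np.pi) - np.log(tau)
                     - 0.5 * bi ** 2 / tau ** 2)
        b_hat = minimize_scalar(nlj).x
        h = 1e-4
        curv = (nlj(b_hat + h) - 2 * nlj(b_hat) + nlj(b_hat - h)) / h ** 2
        # log integral ~ log joint at mode + 0.5*log(2*pi) - 0.5*log(curvature)
        total += -nlj(b_hat) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(curv)
    return -total

res = minimize(neg_marginal_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("mu, tau estimates:", res.x[0], np.exp(res.x[1]))
```

Importance sampling would refine this by reweighting draws from each cluster's Gaussian approximation instead of trusting the approximation outright.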

19.
This paper proposes four estimators for factor GARCH models: two-stage univariate GARCH (2SUE), two-stage quasi-maximum likelihood (2SML), quasi-maximum likelihood with known factor weights (RMLE), quasi-maximum likelihood with unknown factor weights (MLE). A Monte-Carlo study is designed for bivariate one-factor GARCH models to examine the finite sample properties. Results are presented for biases, ratios of standard errors to standard deviations, ratios of variances, coverage of confidence intervals, effects of misspecified factor weights, and finite sample properties of the 2SUE for factor GARCH-in-mean models.
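
The quasi-maximum likelihood building block shared by these estimators can be sketched for a univariate GARCH(1,1); in the two-stage procedures such a univariate fit would be applied to estimated factors, a step not shown here. The simulation settings and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulate a GARCH(1,1):  sigma2_t = w + a*e_{t-1}^2 + b*sigma2_{t-1}
w0, a0, b0, T = 0.1, 0.1, 0.8, 2000
e = np.zeros(T)
s2 = np.full(T, w0 / (1 - a0 - b0))   # start at the stationary variance
for t in range(1, T):
    s2[t] = w0 + a0 * e[t - 1] ** 2 + b0 * s2[t - 1]
    e[t] = np.sqrt(s2[t]) * rng.normal()

def neg_qmle(params):
    """Gaussian quasi-log-likelihood of a GARCH(1,1) (constant ignored)."""
    w, a, b = params
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return np.inf                  # keep the recursion stationary
    s2 = np.empty(T)
    s2[0] = e.var()
    for t in range(1, T):
        s2[t] = w + a * e[t - 1] ** 2 + b * s2[t - 1]
    return 0.5 * np.sum(np.log(s2) + e ** 2 / s2)

res = minimize(neg_qmle, x0=[0.05, 0.05, 0.9], method="Nelder-Mead")
print("QMLE (omega, alpha, beta):", res.x)
```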

20.
It is shown empirically that mixed autoregressive moving average regression models with generalized autoregressive conditional heteroskedasticity (Reg-ARMA-GARCH models) can have multimodality in the likelihood that is caused by a dummy variable in the conditional mean. Maximum likelihood estimates at the local and global modes are investigated and turn out to be qualitatively different, leading to different model-based forecast intervals. In the simpler GARCH(p,q) regression model, we derive analytical conditions for bimodality of the corresponding likelihood. In that case, the likelihood is symmetrical around a local minimum. We propose a solution to avoid this bimodality.
