Similar Articles (20 results)
1.
Lichun Wang, Peng Lai & Heng Lian, Metrika (2013) 76(8): 1083-1103
Generalized varying coefficient partially linear models are a flexible class of semiparametric models that deal with data with different types of responses. In this paper, we focus on the polynomial spline estimator as a computationally easier alternative to the more commonly used local polynomial regression approach, since one can directly take advantage of many existing implementations for generalized linear models. Furthermore, motivated by the high dimensionality of many modern data sets, we investigate its asymptotic properties when both the number of nonparametric and the number of parametric components grow with, but remain smaller than, the sample size. Simulations and a real data example are used to illustrate our proposal.
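As a purely illustrative sketch of how the spline approach can reuse existing generalized linear model software (this is not the authors' code; the simulated data, the truncated-power basis, and the logistic link are our own assumptions), the varying coefficient is expanded in a spline basis whose columns, multiplied by the associated covariate, enter an ordinary GLM fit:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
u = rng.uniform(0, 1, n)           # index variable of the varying coefficient
x = rng.normal(size=n)             # covariate whose effect alpha(u) varies with u
z = rng.normal(size=(n, 2))        # parametric (linear) covariates

# Illustrative truth: logit P(y = 1) = sin(2*pi*u) * x + z @ [0.5, -0.5]
eta = np.sin(2 * np.pi * u) * x + z @ np.array([0.5, -0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Truncated-power cubic spline basis in u with a few interior knots
knots = np.quantile(u, [0.2, 0.4, 0.6, 0.8])
B = np.column_stack([u**d for d in range(4)] +
                    [np.clip(u - k, 0, None)**3 for k in knots])

# Varying-coefficient block: each basis column times x; append the linear part z
design = np.column_stack([B * x[:, None], z])
fit = sm.GLM(y, design, family=sm.families.Binomial()).fit()

# alpha_hat(u0) = spline basis at u0 times the first block of coefficients
u0 = np.linspace(0.1, 0.9, 5)
B0 = np.column_stack([u0**d for d in range(4)] +
                     [np.clip(u0 - k, 0, None)**3 for k in knots])
print(np.round(B0 @ fit.params[:B.shape[1]], 2))
```

Because the estimation step is just a GLM on an expanded design matrix, any off-the-shelf GLM routine can be used, which is the computational advantage emphasized in the abstract.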

2.
Qihua Wang & Lili Yao, Metrika (2006) 64(3): 271-288
In this paper, varying coefficient proportional hazard regression models are considered. The model is an important extension of the Cox model, and arises naturally if the coefficients change over different groups characterized by certain covariates in practice. Under random censorship, weighted partial likelihood estimators are defined for the varying coefficients by maximizing weighted partial likelihoods. It is shown that the proposed estimators are consistent and asymptotically normal.
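For concreteness, a varying coefficient proportional hazards model of the kind described can be written (in our notation, which need not match the paper's) as

$$\lambda(t \mid Z, W) = \lambda_0(t)\, \exp\{\beta(W)^{\top} Z\},$$

where $\lambda_0$ is an unspecified baseline hazard and the coefficient vector $\beta(\cdot)$ changes with the grouping covariate $W$; the weighted partial likelihood localizes the usual Cox partial likelihood around each value of $W$ with kernel weights before maximizing.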

3.
We consider the issue of performing residual and local influence analyses in beta regression models with varying dispersion, which are useful for modelling random variables that assume values in the standard unit interval. In such models, both the mean and the dispersion depend upon independent variables. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. An application using real data is presented and discussed.
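In the usual mean-precision formulation of such models (our notation; the paper may use a different link or indexing), each parameter gets its own linear predictor:

$$y_i \sim \mathrm{Beta}\big(\mu_i \phi_i,\; (1 - \mu_i)\phi_i\big), \qquad g_1(\mu_i) = x_i^{\top}\beta, \qquad g_2(\phi_i) = z_i^{\top}\gamma,$$

so that $E(y_i) = \mu_i$ and $\mathrm{Var}(y_i) = \mu_i(1 - \mu_i)/(1 + \phi_i)$, with the dispersion submodel allowing the precision $\phi_i$ to depend on covariates.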

4.
The past forty years have seen a great deal of research into the construction and properties of nonparametric estimates of smooth functions. This research has focused primarily on two sides of the smoothing problem: nonparametric regression and density estimation. Theoretical results for these two situations are similar, and multivariate density estimation was an early justification for the Nadaraya-Watson kernel regression estimator.
A third, less well-explored, strand of applications of smoothing is to the estimation of probabilities in categorical data. In this paper the position of categorical data smoothing as a bridge between nonparametric regression and density estimation is explored. Nonparametric regression provides a paradigm for the construction of effective categorical smoothing estimates, and use of an appropriate likelihood function yields cell probability estimates with many desirable properties. Such estimates can be used to construct regression estimates when one or more of the categorical variables are viewed as response variables. They also lead naturally to the construction of well-behaved density estimates using local or penalized likelihood estimation, which can then be used in a regression context. Several real data sets are used to illustrate these points.
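A minimal sketch of the categorical smoothing idea for ordered categories (purely illustrative, with invented data; the estimators discussed in the paper are likelihood-based and more refined): raw cell proportions are smoothed with a discrete kernel over neighbouring cells and renormalized.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 10                                    # number of ordered categories
true_p = np.exp(-0.5 * ((np.arange(K) - 4) / 2.0) ** 2)
true_p /= true_p.sum()
counts = rng.multinomial(60, true_p)      # sparse table: many near-empty cells

raw = counts / counts.sum()               # unsmoothed cell proportions

def smooth_cells(p_raw, h=1.0):
    """Kernel-smoothed cell probabilities using a Gaussian kernel on the cell index."""
    idx = np.arange(len(p_raw))
    w = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)     # each row of weights sums to one
    p = w @ p_raw
    return p / p.sum()                    # renormalize to a probability vector

print(np.round(raw, 3))
print(np.round(smooth_cells(raw), 3))
```

The smoothed estimates borrow strength from neighbouring cells, which is exactly the variance-for-bias trade familiar from nonparametric regression.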

5.
Within models for nonnegative time series, it is common to encounter deterministic components (trends, seasonalities) which can be specified in a flexible form. This work proposes the use of shrinkage type estimation for the parameters of such components. The amount of smoothing to be imposed on the estimates can be chosen using different methodologies: Cross-Validation for dependent data or the recently proposed Focused Information Criterion. We illustrate such a methodology using a semiparametric autoregressive conditional duration model that decomposes the conditional expectations of durations into their dynamic (parametric) and diurnal (flexible) components. We use a shrinkage estimator that jointly estimates the parameters of the two components and controls the smoothness of the estimated flexible component. The results show that, from the forecasting perspective, an appropriate shrinkage strategy can significantly improve on the baseline maximum likelihood estimation.
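Schematically, and in our notation rather than the authors', the shrinkage estimation of the flexible component amounts to maximizing a penalized criterion of the form

$$(\hat\theta, \hat g) = \arg\max_{\theta,\, g}\; \ell(\theta, g) \;-\; \lambda \int \{g''(s)\}^2\, ds,$$

where $\ell$ is the (quasi-)log-likelihood of the autoregressive conditional duration model, $\theta$ collects the dynamic parameters, $g$ is the diurnal component, and the smoothing parameter $\lambda$ is selected by cross-validation for dependent data or by the Focused Information Criterion.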

6.
This article proposes a new technique for estimating trend and multiplicative seasonality in time series data. The technique is computationally quite straightforward and gives better forecasts (in a sense described below) than other commonly used methods. Like many other methods, the one presented here is basically a decomposition technique, that is, it attempts to isolate and estimate the several subcomponents in the time series. It draws primarily on regression analysis for its power and has some of the computational advantages of exponential smoothing. In particular, old estimates of base, trend, and seasonality may be smoothed with new data as they occur. The basic technique was developed originally as a way to generate initial parameter values for a Winters exponential smoothing model [4], but it proved to be a useful forecasting method in itself.

The objective in all decomposition methods is to separate somehow the effects of trend and seasonality in the data, so that the two may be estimated independently. When seasonality is modeled with an additive form (Datum = Base + Trend + Seasonal Factor), techniques such as regression analysis with dummy variables or ratio-to-moving-average techniques accomplish this task well. It is more common, however, to model seasonality as a multiplicative form (as in the Winters model, for example, where Datum = [Base + Trend] * Seasonal Factor). In this case, it can be shown that neither of the techniques above achieves a proper separation of the trend and seasonal effects, and in some instances may give highly misleading results. The technique described in this article attempts to deal properly with multiplicative seasonality, while remaining computationally tractable.

The technique is built on a set of simple regression models, one for each period in the seasonal cycle. These models are used to estimate individual seasonal effects and then pooled to estimate the base and trend. As new data occur, they are smoothed into the least-squares formulas with computations that are quite similar to those used in ordinary exponential smoothing. Thus, the full least-squares computations are done only once, when the forecasting process is first initiated. Although the technique is demonstrated here under the assumption that trend is linear, the trend may, in fact, assume any form for which the curve-fitting tools are available (exponential, polynomial, etc.).

The method has proved to be easy to program and execute, and computational experience has been quite favorable. It is faster than the RTMA method or regression with dummy variables (which requires a multiple regression routine), and it is competitive with, although a bit slower than, ordinary triple exponential smoothing.
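A stripped-down sketch of the general idea (our own toy example with a linear trend, monthly seasonality, and invented data; the article's method additionally smooths the least-squares formulas recursively as new observations arrive): fit one simple regression per season, then pool the per-season fits into base, trend, and multiplicative seasonal factors.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 12                                    # length of the seasonal cycle (months)
t = np.arange(5 * L)                      # five years of monthly data
season = t % L
true_S = 1 + 0.3 * np.sin(2 * np.pi * season / L)
y = (100 + 2.0 * t) * true_S * rng.normal(1, 0.02, t.size)

# One simple regression per season: y ~ a_s + b_s * t on that season's data.
# Under Datum = (Base + Trend*t) * S_s we have a_s ~ Base*S_s and b_s ~ Trend*S_s,
# so intercept and slope both carry the seasonal factor multiplicatively.
a = np.empty(L)
b = np.empty(L)
for s in range(L):
    b[s], a[s] = np.polyfit(t[season == s], y[season == s], 1)

# Pooling: with seasonal factors normalized to average one, the mean slope
# estimates Trend, and the deseasonalized intercepts estimate Base.
S_hat = b / b.mean()
trend_hat = b.mean()
base_hat = (a / S_hat).mean()
print(np.round(S_hat, 3), round(trend_hat, 2), round(base_hat, 1))
```

This shows how the per-period regressions achieve a clean separation of multiplicative seasonality from trend, which a purely additive device such as dummy-variable regression cannot.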

7.
Single-index models are popular regression models that are more flexible than linear models while still maintaining more structure than purely nonparametric models. We consider the problem of estimating the regression parameters under a monotonicity constraint on the unknown link function. In contrast to the standard approach of using smoothing techniques, we review different "non-smooth" estimators that avoid the difficult smoothing parameter selection. For about 30 years it has been conjectured that the profile least squares estimator is a $\sqrt{n}$-consistent estimator of the regression parameter, but the only non-smooth argmin/argmax estimators actually known to achieve this $\sqrt{n}$-rate are not based on the nonparametric least squares estimator of the link function. However, solving a score equation corresponding to the least squares approach results in $\sqrt{n}$-consistent estimators. We illustrate the good behavior of the score approach via simulations. The connection with the binary choice and current status linear regression models is also discussed.
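A minimal illustration of the profile least squares idea for a two-dimensional single-index model (our own toy setup, not the paper's estimator or simulation design): for each candidate direction, the monotone link is profiled out by isotonic regression of the responses on the index, and the direction minimizing the residual sum of squares is retained. The criterion is non-smooth, so in practice a grid search is a safe alternative to the scalar optimizer used here.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 2))
beta_true = np.array([np.cos(0.7), np.sin(0.7)])         # unit-norm index direction
y = np.tanh(X @ beta_true) + 0.2 * rng.normal(size=n)    # increasing link + noise

def profile_sse(angle):
    """Residual sum of squares after profiling out the link by isotonic regression."""
    beta = np.array([np.cos(angle), np.sin(angle)])
    index = X @ beta
    link = IsotonicRegression(out_of_bounds="clip").fit(index, y)
    resid = y - link.predict(index)
    return resid @ resid

res = minimize_scalar(profile_sse, bounds=(0.0, np.pi), method="bounded")
beta_hat = np.array([np.cos(res.x), np.sin(res.x)])
print(np.round(beta_hat, 3), np.round(beta_true, 3))
```

The paper's point is that the $\sqrt{n}$-rate of this argmin estimator is only conjectured, whereas estimators defined through the corresponding score equation are known to attain it.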

8.
This paper studies functional coefficient regression models with nonstationary time series data, allowing also for stationary covariates. A local linear fitting scheme is developed to estimate the coefficient functions. The asymptotic distributions of the estimators are obtained, showing different convergence rates for the stationary and nonstationary covariates. A two-stage approach is proposed to achieve estimation optimality in the sense of minimizing the asymptotic mean squared error. When the coefficient function is a function of a nonstationary variable, the new findings are that the asymptotic bias of its nonparametric estimator is the same as in the stationary covariate case but the convergence rate differs, and further, that the asymptotic distribution is mixed normal, associated with the local time of a standard Brownian motion. The asymptotic behavior at the boundaries is also investigated.
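For reference, a bare-bones local linear fit of a single coefficient function $\beta(z)$ in $y_t = \beta(z_t) x_t + \text{error}$ (our illustrative notation with simulated stationary data; the paper's scheme covers nonstationary covariates and vectors of coefficient functions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 600
z = rng.uniform(-1, 1, n)                       # smoothing variable
x = rng.normal(size=n)                          # regressor
y = (1 + z**2) * x + 0.3 * rng.normal(size=n)   # true coefficient beta(z) = 1 + z^2

def local_linear_beta(z0, h=0.15):
    """Local linear estimate of beta(z0): weighted LS of y on [x, x*(z - z0)]."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)      # Gaussian kernel weights
    D = np.column_stack([x, x * (z - z0)])
    WD = D * w[:, None]
    coef = np.linalg.solve(D.T @ WD, WD.T @ y)  # normal equations (D'WD) c = D'Wy
    return coef[0]                              # level term of the local expansion

print([round(local_linear_beta(z0), 3) for z0 in np.linspace(-0.8, 0.8, 5)])
```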

9.
The time-series distributed lag techniques of econometrics can be usefully applied to cross-sectional, spatial, and cross-section time-series situations. The application is perfectly natural in cross-section, time-series models when regression coefficients evolve systematically as the cross-section grouping variable changes. The evolution of such coefficients lends itself to polynomial approximation or more general smoothing restrictions. These ideas are not new, Gersovitz and McKinnon (1978) and Trivedi and Lee (1981) providing two of the earliest applications of cross-equation smoothing techniques. However, their applications were in the context of coefficient variation due to seasonal changes, and this may account for the non-diffusion of these techniques. The approach here is illustrated in the context of age-specific household formation equations based on census data, using Almon polynomials when the regression coefficients vary systematically by age group. A second application, using spatial data, explains the incidence of crime by region, using polynomial and geometric smoothing to model distance-declining regional effects.
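A compact sketch of the Almon-type restriction on coefficients that vary by group (our own toy data with hypothetical age-group coefficients, not the article's household formation equations): the G group-specific coefficients are constrained to lie on a low-order polynomial in the group index, so the regression collapses to a handful of polynomial parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
n, G = 800, 8                              # observations, age groups
group = rng.integers(0, G, n)
x = rng.normal(size=n)
grid = np.arange(G)
beta_true = 0.2 + 0.5 * grid - 0.05 * grid**2        # smooth variation across groups
y = beta_true[group] * x + rng.normal(scale=0.5, size=n)

# Unrestricted design: one x-column per group (a separate coefficient per group)
X_full = np.zeros((n, G))
X_full[np.arange(n), group] = x

# Almon restriction: beta_g = a0 + a1*g + a2*g^2, i.e. beta = A @ a
A = np.column_stack([grid**p for p in range(3)])     # G x 3 polynomial basis
X_almon = X_full @ A                                  # restricted n x 3 design

a_hat, *_ = np.linalg.lstsq(X_almon, y, rcond=None)
beta_hat = A @ a_hat                                  # smoothed group coefficients
print(np.round(beta_hat, 2))
print(np.round(beta_true, 2))
```

Geometric smoothing of distance-declining regional effects, mentioned at the end of the abstract, works the same way with a geometric rather than polynomial basis.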

10.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution for modelling time-to-event data. It accommodates both monotone and nonmonotone hazard shapes and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving the maximum likelihood estimators in terms of asymptotic efficiency. For example, there has recently been considerable attention on applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.
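For reference, in one common notation (the paper's parameterization may differ) the exponentiated Weibull distribution function is

$$F(t) = \left[1 - \exp\{-(t/\sigma)^{\alpha}\}\right]^{\theta}, \qquad t > 0,\ \ \alpha, \theta, \sigma > 0,$$

which reduces to the Weibull distribution for $\theta = 1$ and, depending on $(\alpha, \theta)$, produces increasing, decreasing, bathtub-shaped, or unimodal hazards, covering the monotone and nonmonotone shapes mentioned above.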

11.
In a seminal paper, Mak (Journal of the Royal Statistical Society B, 55, 1993, 945) derived an efficient algorithm for solving non-linear unbiased estimation equations. In this paper, we show that when Mak's algorithm is applied to biased estimation equations, it results in the estimates that would come from solving a bias-corrected estimation equation, making it a consistent estimator if regularity conditions hold. In addition, the properties that Mak established for his algorithm also apply in the case of biased estimation equations, but for estimates from the bias-corrected equations. The marginal likelihood estimator is obtained when the approach is applied to both maximum likelihood and least squares estimation of the covariance matrix parameters in the general linear regression model. The new approach results in two new estimators when applied to the profile and marginal likelihood functions for estimating the lagged dependent variable coefficient in the dynamic linear regression model. Monte Carlo simulation results show that the new approach leads to a better estimator when applied to the standard profile likelihood. It is therefore recommended for situations in which standard estimators are known to be biased.

12.
The construction of an importance density for partially non-Gaussian state space models is crucial when simulation methods are used for likelihood evaluation, signal extraction, and forecasting. The method of efficient importance sampling is successful in this respect, but we show that it can be implemented in a computationally more efficient manner using standard Kalman filter and smoothing methods. Efficient importance sampling is generally applicable for a wide range of models, but it is typically a custom-built procedure. For the class of partially non-Gaussian state space models, we present a general method for efficient importance sampling. Our novel method makes the efficient importance sampling methodology more accessible because it does not require the computation of a (possibly) complicated density kernel that needs to be tracked for each time period. The new method is illustrated for a stochastic volatility model with a Student's t distribution.

13.
We consider the problem of estimating a varying coefficient regression model when the regressors include a time trend. We show that the commonly used local constant kernel estimation method leads to an inconsistent estimator, while a local polynomial estimator yields a consistent one. We establish the asymptotic normality result for the proposed estimator. We also provide an asymptotic analysis of the data-driven (least squares cross-validation) method of selecting the smoothing parameters. In addition, we consider a partially linear time trend model and establish the asymptotic distribution of our proposed estimator. Two test statistics are proposed to test the null hypotheses of a linear and of a partially linear time trend model, respectively. Simulations are reported to examine the finite sample performance of the proposed estimators and the test statistics.
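In shorthand that is ours rather than the authors', the two estimators contrasted above for a coefficient vector $\beta(\cdot)$ evaluated at a point $\tau_0$ of the (rescaled) time trend solve

$$\min_{a}\ \sum_t \{y_t - a^{\top} x_t\}^2 K_h(\tau_t - \tau_0) \qquad \text{(local constant)}$$

$$\min_{a,\, b}\ \sum_t \{y_t - a^{\top} x_t - (\tau_t - \tau_0)\, b^{\top} x_t\}^2 K_h(\tau_t - \tau_0) \qquad \text{(local linear)},$$

and the paper's result is that only the second criterion delivers a consistent estimator when the smoothing variable is a deterministic time trend.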

14.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood or several marginal likelihoods for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset variable models need to be computed. The proposed estimators use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require knowledge of the specific structure or form of the MCMC sampling algorithm used to generate the sample. The theoretical properties of the proposed method are examined in detail. The applicability and usefulness of the proposed method are demonstrated via ordinal data probit regression models. A real dataset involving ordinal outcomes is used to further illustrate the proposed methodology.

15.
Recent development of intensity estimation for inhomogeneous spatial point processes with covariates suggests that kerneling in the covariate space is a competitive intensity estimation method for inhomogeneous Poisson processes. It is not known whether this advantageous performance remains valid when the points interact, which happens, in the simplest common case, when the objects represented as points have a spatial dimension. In this paper, kerneling in the covariate space is extended to Gibbs processes with covariate-dependent chemical activity and inhibitive interactions, and the performance of the approach is studied through extensive simulation experiments. It is demonstrated that, under mild assumptions on the dependence of the intensity on covariates, this approach can provide better results than the classical nonparametric method based on local smoothing in the spatial domain. In comparison with parametric pseudo-likelihood estimation, the nonparametric approach can be more accurate, particularly when the dependence on covariates is weak or if there is uncertainty about the model or about the range of interactions. An important supplementary task is the dimension reduction of the covariate space. It is shown that techniques based on inverse regression, previously applied to Cox processes, are useful even when interactions are present.
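One standard form of a covariate-space kernel intensity estimator of the kind referred to here is, in our notation and ignoring refinements such as edge correction,

$$\hat\lambda(u) = \hat\rho\{Z(u)\}, \qquad \hat\rho(z) = \frac{\sum_{x_i \in X} \kappa_h\{z - Z(x_i)\}}{\int_W \kappa_h\{z - Z(v)\}\, dv},$$

where $Z(\cdot)$ is the (possibly dimension-reduced) covariate map, $\kappa_h$ a kernel with bandwidth $h$, and $W$ the observation window; the paper studies how well such estimators behave when Gibbs interactions make the points inhibit one another.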

16.
L. Nie, Metrika (2006) 63(2): 123-143
Generalized linear and nonlinear mixed-effects models are used extensively in biomedical, social, and agricultural sciences. The statistical analysis of these models is based on the asymptotic properties of the maximum likelihood estimator. However, it is usually assumed that the maximum likelihood estimator is consistent, without providing a proof. A rigorous proof of the consistency by verifying conditions from existing results can be very difficult due to the integrated likelihood. In this paper, we present some easily verifiable conditions for the strong consistency of the maximum likelihood estimator in generalized linear and nonlinear mixed-effects models. Based on this result, we prove that the maximum likelihood estimator is consistent for some frequently used models such as mixed-effects logistic regression models and growth curve models.

17.
In this article, we propose a mean linear regression model where the response variable is inverse gamma distributed, using a new parameterization of this distribution that is indexed by mean and precision parameters. The main advantage of our new parameterization is the straightforward interpretation of the regression coefficients in terms of the expectation of the positive response variable, as usual in the context of generalized linear models. The variance function of the proposed model has a quadratic form. The inverse gamma distribution is a member of the exponential family of distributions and has as special cases some distributions commonly used for parametric models in survival analysis. We compare the proposed model to several alternatives and illustrate its advantages and usefulness. With a generalized linear model approach that takes advantage of exponential family properties, we discuss model estimation (by maximum likelihood), further inferential quantities, and diagnostic tools. A Monte Carlo experiment is conducted to evaluate the performance of these estimators in finite samples, with a discussion of the obtained results. A real application using a minerals data set collected by the Department of Mines of the University of Atacama, Chile, is considered to demonstrate the practical potential of the proposed model.
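One natural mean-precision indexing consistent with this description (the authors' exact parameterization may differ): starting from the inverse gamma density with shape $\alpha$ and scale $\beta$ and setting $\alpha = \phi + 2$, $\beta = \mu(\phi + 1)$ gives

$$f(y; \mu, \phi) = \frac{\{\mu(\phi + 1)\}^{\phi + 2}}{\Gamma(\phi + 2)}\, y^{-(\phi + 3)} \exp\!\left\{-\frac{\mu(\phi + 1)}{y}\right\}, \qquad E(y) = \mu, \qquad \mathrm{Var}(y) = \frac{\mu^2}{\phi},$$

which exhibits the quadratic variance function mentioned above; a regression model follows by linking the mean to covariates, for example $\log \mu_i = x_i^{\top}\beta$.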

18.
The management and monitoring of very large portfolios of financial assets are routine for many individuals and organizations. The two most widely used models of conditional covariances and correlations in the class of multivariate GARCH models are BEKK and dynamic conditional correlation (DCC). It is well known that BEKK suffers from the archetypal 'curse of dimensionality', whereas DCC does not. It is argued in this paper that this is a misleading interpretation of the suitability of the two models for use in practice. The primary purpose of this paper is to analyse the similarities and dissimilarities between BEKK and DCC, both with and without targeting, on the basis of the structural derivation of the models, the availability of analytical forms for the sufficient conditions for existence of moments, sufficient conditions for consistency and asymptotic normality of the appropriate estimators and computational tractability for ultra large numbers of financial assets. Based on theoretical considerations, the paper sheds light on how to discriminate between BEKK and DCC in practical applications.
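For readers less familiar with the two models, the scalar DCC correlation recursion with targeting takes the standard form (generic notation, not specific to this paper)

$$Q_t = (1 - a - b)\,\bar{Q} + a\, \epsilon_{t-1}\epsilon_{t-1}^{\top} + b\, Q_{t-1}, \qquad R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2},$$

where $\epsilon_t$ are standardized residuals and $\bar{Q}$ their unconditional correlation target, whereas BEKK parameterizes the full conditional covariance matrix $H_t$ directly; the contrast between these two constructions is what drives the dimensionality arguments examined in the paper.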

19.
We introduce a new class of stochastic volatility models with autoregressive moving average (ARMA) innovations. The conditional mean process has a flexible form that can accommodate both a state space representation and a conventional dynamic regression. The ARMA component introduces serial dependence, which results in standard Kalman filter techniques not being directly applicable. To overcome this hurdle, we develop an efficient posterior simulator that builds on recently developed precision-based algorithms. We assess the usefulness of these new models in an inflation forecasting exercise across all G7 economies. We find that the new models generally provide competitive point and density forecasts compared to standard benchmarks, and are especially useful for Canada, France, Italy, and the U.S.
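A schematic member of this class, written in our notation purely to fix ideas (the paper's exact specification may differ), combines a dynamic regression mean with ARMA errors whose innovations carry stochastic volatility:

$$y_t = x_t^{\top}\beta + u_t, \qquad u_t = \sum_{i=1}^{p}\phi_i u_{t-i} + \varepsilon_t + \sum_{j=1}^{q}\psi_j \varepsilon_{t-j}, \qquad \varepsilon_t \sim N(0, e^{h_t}),$$

$$h_t = \mu_h + \rho\,(h_{t-1} - \mu_h) + \eta_t, \qquad \eta_t \sim N(0, \sigma_h^2);$$

the serial dependence in $u_t$ is what prevents a direct application of standard Kalman filter machinery and motivates the precision-based posterior simulator.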

20.
Spatial interaction modeling of interregional commodity flows
Drawing from both the spatial price equilibrium theoretical framework and the empirical literature on spatial interaction modeling, this paper expands models of interregional commodity flows (CFs) by incorporating new variables and using a flexible Box-Cox functional form. The 1993 US commodity flows survey provides the empirical basis for estimating state-to-state flow models for 16 commodity groups over the 48 continental US states. The optimized Box-Cox specification proves to be superior to the multiplicative one in all cases, while selected variables provide new insights into the determinants of state-to-state CFs.
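Schematically (our notation, not the paper's estimated equations), the flexible functional form applies the Box-Cox transform

$$y^{(\lambda)} = \begin{cases} (y^{\lambda} - 1)/\lambda, & \lambda \neq 0,\\ \log y, & \lambda = 0,\end{cases}$$

to the flow and its determinants, so that a flow equation such as $F_{ij}^{(\lambda_0)} = \alpha + \sum_k \beta_k\, x_{k,ij}^{(\lambda_k)} + u_{ij}$ nests the purely multiplicative (log-log) gravity specification as the limiting case in which the transformation parameters tend to zero, with the $\lambda$'s chosen to optimize the fit.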
