Similar Literature
Retrieved 20 similar documents (search time: 46 ms)
1.
Minggen Lu, Metrika 2018, 81(1): 1–17
We consider spline-based quasi-likelihood estimation for mixed Poisson regression with single-index models. The unknown smooth function is approximated by B-splines, and a modified Fisher scoring algorithm is employed to compute the estimates. The spline estimate of the nonparametric component is shown to achieve the optimal rate of convergence, and the asymptotic normality of the regression parameter estimates remains valid even if the variance function is misspecified. Semiparametric efficiency is attained if the variance function is correctly specified. The variance of the regression parameter estimates can be consistently estimated by a simple procedure based on least-squares estimation. The proposed method is evaluated via an extensive Monte Carlo study, and the methodology is illustrated on an air pollution study.

2.
Shanbhag (J Appl Probab 9:580–587, 1972; Theory Probab Appl 24:430–433, 1979) showed that diagonality of the Bhattacharyya matrix characterizes the set of normal, Poisson, binomial, negative binomial, gamma or Meixner hypergeometric distributions. In this note, using the techniques of Shanbhag (1972, 1979) and Pommeret (J Multivar Anal 63:105–118, 1997), we derive the general form of the 5 × 5 Bhattacharyya matrix in the natural exponential family satisfying $f(x\mid\theta)=\frac{\exp\{x g(\theta)\}}{\beta(g(\theta))}\,\psi(x)$ with cubic variance function (NEF-CVF) of θ. Unlike the quadratic-variance-function case, the matrix is not diagonal and has non-zero off-diagonal elements. In addition, we calculate the 5 × 5 Bhattacharyya matrix for the inverse Gaussian distribution and evaluate different Bhattacharyya bounds for the variance of estimators of the failure rate, coefficient of variation, mode and moment generating function of the inverse Gaussian distribution.

3.
Automobile insurance is an example of a market where multi-period contracts are observed. This form of contract can be justified by asymmetrical information between the insurer and the insured. Insurers use risk classification together with bonus-malus systems. In this paper we show that the current methodology for integrating these two approaches can lead to inconsistencies. We develop a statistical model that adequately integrates risk classification and experience rating. For this purpose we present Poisson and negative binomial models with a regression component in order to use all available information in the estimation of the accident distribution. A bonus-malus system that integrates a priori and a posteriori information on an individual basis is proposed, and insurance premium tables are derived as a function of time, past accidents and the significant variables in the regression. Statistical results were obtained from a sample of 19,013 drivers.
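For concreteness, the a priori/a posteriori integration described in this entry admits a closed-form premium update in the gamma–Poisson (negative binomial) case: the posterior expected claim frequency shrinks the observed claim history toward the frequency implied by the rating variables. The sketch below is a minimal illustration of that credibility formula, not the authors' exact premium tables; the a priori frequency `lam_i`, the gamma shape `a` and the claim history are hypothetical values.

```python
import numpy as np

def bonus_malus_premium(lam_i, a, claims, years):
    """Posterior expected claim frequency under a gamma-Poisson
    (negative binomial) model: a priori frequency lam_i from the
    rating variables, multiplicative gamma heterogeneity with
    shape a (mean 1, variance 1/a).

    Sketch only: the premium tables in the paper also depend on
    the regression covariates retained at each period.
    """
    total_claims = np.sum(claims[:years])
    # Credibility-weighted update: shrinks the observed claim
    # frequency toward the a priori frequency.
    return lam_i * (a + total_claims) / (a + years * lam_i)

# Hypothetical driver: a priori frequency 0.10 claims/year,
# heterogeneity shape a = 1.5, one claim over three observed years.
print(bonus_malus_premium(lam_i=0.10, a=1.5,
                          claims=np.array([0, 1, 0]), years=3))
```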

4.
R. Göb, Metrika 1992, 39(1): 139–153
Investigations on acceptance sampling have paid rather little attention to defects inspection. As to sampling models and the corresponding operating characteristic (OC) function of single defects sampling plans, generally only type B (the process model) has been considered: sampling from a production process, where the OC is conceived as a function of the sample size n, the acceptance number c, and the process average number of defects per item λ. For modern quality control, both the steadily increasing complexity of products and the need for differentiated cost calculation create a clear demand for defects sampling in its practically most relevant form: lot-by-lot single sampling plans, where the OC (type A, the lot OC) is considered as a function of the lot size N, the sample size n, the acceptance number c, and the number of defects in the lot K. The present paper develops two lot OC functions from suitable assumptions on the arrangement of the total number K of defects over the N elements of the lot. Limiting theorems on these OC functions are used to justify, to some extent, the customary assumption that the Poisson process OC can serve as an approximation for type A. The dependence of the OC functions on the sample size n, the acceptance number c, and the total number of defects in the lot K is described by simple formulae.
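As a point of reference for the type-B (process-model) OC mentioned in this entry: under the Poisson assumption the acceptance probability of a single sampling plan is just a Poisson tail sum in n, c and λ. The sketch below computes that curve; it is not the paper's type-A lot OC, which additionally depends on how the K defects are arranged over the N items, and the plan parameters n = 50, c = 2 are hypothetical.

```python
from scipy.stats import poisson

def oc_type_b(n, c, lam):
    """Type-B OC: acceptance probability of a single sampling plan
    (sample size n, acceptance number c) when defects per item
    follow a Poisson process with mean lam defects per item."""
    # Number of defects in the sample ~ Poisson(n * lam);
    # the lot is accepted iff this count does not exceed c.
    return poisson.cdf(c, n * lam)

# Hypothetical plan: n = 50, c = 2.  The Poisson curve is the usual
# approximation to the type-A (lot) OC with lam taken near K / N.
for lam in (0.01, 0.02, 0.05, 0.10):
    print(f"lambda = {lam:.2f}: P(accept) = {oc_type_b(50, 2, lam):.3f}")
```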

5.
T. Shiraishi, Metrika 1990, 37(1): 189–197
Summary  For testing homogeneity in the multivariate k-sample model, robust tests based on M-estimators are proposed and their asymptotic χ²-distributions are investigated. Furthermore, M-tests in multivariate regression models are discussed.

6.
The generalized linear mixed model (GLMM) extends classical regression analysis to non-normal, correlated response data. Because inference for GLMMs can be computationally difficult, simplifying distributional assumptions are often made. We focus on the robustness of estimators when a main component of the model, the random effects distribution, is misspecified. Results for the maximum likelihood estimators of the Poisson inverse Gaussian model are presented.

7.
Estimation with longitudinal Y having nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y,X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity based on estimating equations constructed using an instrument Z, which is part of X related to Y but unrelated to the dropout propensity conditioned on Y and other covariates. Population means and other parameters in the nonparametric distribution of Y can be estimated based on inverse propensity weighting with estimated propensity. To improve efficiency, we derive a model‐assisted regression estimator making use of information provided by the covariates and previously observed Y‐values in the longitudinal setting. The model‐assisted regression estimator is protected from model misspecification and is asymptotically normal and more efficient when the working models are correct and some other conditions are satisfied. The finite‐sample performance of the estimators is studied through simulation, and an application to the HIV‐CD4 data set is also presented as illustration.
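To make the inverse propensity weighting step concrete, the sketch below shows a Hájek-type weighted mean with a user-supplied dropout propensity. It is only a schematic: the paper's estimator first estimates the propensity parameters by the generalised method of moments using the instrument Z, which is not reproduced here, and the simulated outcome-dependent dropout model is hypothetical.

```python
import numpy as np

def ipw_mean(y, observed, propensity):
    """Hajek-type inverse-propensity-weighted mean of Y.

    y          : outcome values
    observed   : 1 if Y is observed (no dropout), 0 otherwise
    propensity : estimated P(observed | Y, X), e.g. from a
                 parametric dropout model fitted by GMM
    """
    w = observed / propensity            # weight only the respondents
    return np.sum(w * np.where(observed == 1, y, 0.0)) / np.sum(w)

# Hypothetical toy data: dropout probability depends on the outcome.
rng = np.random.default_rng(0)
y = rng.normal(size=1000)
prop = 1 / (1 + np.exp(-(0.5 + 0.8 * y)))   # outcome-dependent dropout
obs = rng.binomial(1, prop)
print(ipw_mean(y, obs, prop))                # close to the true mean 0
```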

8.
Least Squares Support Vector Machines (LS-SVM) are the state of the art in kernel methods for regression. These models have been successfully applied for time series modelling and prediction. A critical issue for the performance of these models is the choice of the kernel parameters and the hyperparameters which define the function to be minimized. In this paper a heuristic method for setting both the σ parameter of the Gaussian kernel and the regularization hyperparameter based on information extracted from the time series to be modelled is presented and evaluated.
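For readers unfamiliar with LS-SVM regression: the model referred to in this entry reduces to a single linear system in the dual variables, and σ and the regularization parameter γ are exactly the quantities the paper's heuristic sets. The sketch below solves that system with NumPy; the median-distance rule for σ and the value γ = 10 are generic stand-ins, not the data-driven heuristic proposed in the paper.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, sigma, gamma):
    """Solve the LS-SVM dual system
        [0   1^T          ] [b    ]   [0]
        [1   K + I / gamma] [alpha] = [y]
    """
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]           # bias b, dual weights alpha

def lssvm_predict(X_new, X, alpha, b, sigma):
    return gaussian_kernel(X_new, X, sigma) @ alpha + b

# Hypothetical time-series-style example: recover y = sin(t) from noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 6, 80)[:, None]
y = np.sin(t[:, 0]) + 0.1 * rng.normal(size=80)
sigma = np.median(np.abs(t - t.T))   # generic median-distance heuristic
b, alpha = lssvm_fit(t, y, sigma=sigma, gamma=10.0)
print(lssvm_predict(np.array([[3.0]]), t, alpha, b, sigma))
```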

9.
The elliptical laws are a class of symmetric probability models that include both lighter- and heavier-tailed distributions. These models may adapt well to the data even when outliers exist, and they have other good theoretical properties and application perspectives. In this article we present a new class of models, which is generated from symmetric distributions on the real line and generalizes the well-known inverse Gaussian distribution. Specifically, the density, distribution function, properties, transformations and moments of this new model are obtained. A graphical analysis of the density is also provided. Furthermore, we estimate parameters, propose asymptotic inference and discuss influence diagnostics using likelihood methods for the new distribution. In particular, we show that the maximum likelihood estimates of the parameters of the new model under the t kernel down-weight outliers: smaller weights are attributed to outlying observations, which produces robust parameter estimates. Finally, an illustrative example with real data shows that the new distribution fits the data better than some other well-known probabilistic models.

10.
The t regression models provide a useful extension of the normal regression models for datasets involving errors with longer-than-normal tails. Homogeneity of variances (if they exist) is a standard assumption in t regression models; however, this assumption is not necessarily appropriate. This paper is devoted to tests for heteroscedasticity in general t linear regression models. The asymptotic properties of the score tests, including their asymptotic chi-square distributions and approximate powers under local alternatives, are studied. Based on the modified profile likelihood (Cox and Reid in J R Stat Soc Ser B 49(1):1–39, 1987), an adjusted score test for heteroscedasticity is developed. The properties of the score test and its adjustment are investigated through Monte Carlo simulations. The test methods are illustrated with land rent data (Weisberg in Applied Linear Regression. Wiley, New York, 1985). The project was supported by NSFC 10671032, China, and a grant (HKBU2030/07P) from the Grant Council of Hong Kong, Hong Kong, China.

11.
We propose two new types of nonparametric tests for investigating multivariate regression functions. The tests are based on cumulative sums coupled with either minimum volume sets or inverse regression ideas, and involve no multivariate nonparametric regression estimation. The proposed methods facilitate investigating features such as whether a multivariate regression function is (i) constant, (ii) of a bathtub shape, or (iii) in a given parametric form. The inference based on these tests may be further enhanced through associated diagnostic plots. Although the potential use of these ideas is much wider, we focus in this paper on inference for multivariate volatility functions, i.e. we test for (i) heteroscedasticity, (ii) the so-called ‘smiling effect’, and (iii) some parametric volatility models. The asymptotic behavior of the proposed tests is investigated, and practical feasibility is shown via simulation studies. We further illustrate our methods with real financial data.

12.
13.
This study uses GARCH-EVT-copula and ARMA-GARCH-EVT-copula models to perform out-of-sample forecasts and simulate one-day-ahead returns for ten stock indexes. We construct optimal portfolios based on the global minimum variance (GMV), minimum conditional value-at-risk (Min-CVaR) and certainty equivalence tangency (CET) criteria, and model the dependence structure between stock market returns by employing elliptical (Student-t and Gaussian) and Archimedean (Clayton, Frank and Gumbel) copulas. We analyze the performances of 288 risk modeling portfolio strategies using out-of-sample back-testing. Our main finding is that the CET portfolio, based on ARMA-GARCH-EVT-copula forecasts, outperforms the benchmark portfolio based on historical returns. The regression analyses show that GARCH-EVT forecasting models, which use Gaussian or Student-t copulas, are best at reducing the portfolio risk.
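Of the three criteria in this entry, the global minimum variance (GMV) weights have a closed form once a forecast covariance matrix is available, which is the role the simulated one-day-ahead returns play. The sketch below shows only that final allocation step; the simulated return matrix is a hypothetical stand-in for the ARMA-GARCH-EVT-copula scenarios.

```python
import numpy as np

def gmv_weights(returns):
    """Global minimum variance weights w = S^{-1} 1 / (1' S^{-1} 1),
    computed from simulated one-day-ahead returns (rows = scenarios,
    columns = assets).  Short positions are allowed in this sketch."""
    cov = np.cov(returns, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical stand-in for copula-GARCH-EVT simulated returns:
# 10,000 scenarios for 10 indexes with an arbitrary correlation level.
rng = np.random.default_rng(42)
corr = 0.4 * np.ones((10, 10)) + 0.6 * np.eye(10)
sims = rng.multivariate_normal(np.zeros(10), 1e-4 * corr, size=10_000)
w = gmv_weights(sims)
print(np.round(w, 3), w.sum())
```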

14.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model to analyse lifetime data. In this article we consider a semi‐parametric modelling approach for TR and contribute implementational and theoretical details for model fitting and statistical inferences. Extensive simulations are carried out to examine the finite sample performance of the parametric and non‐parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.

15.
In this work a new family of statistics based on the K_φ-divergence (Burbea and Rao (1982), On the convexity of divergence measures based on entropy function, IEEE Trans Inf Theory 28, 489–495) is obtained by either replacing both distributions involved in the argument by their sample estimates or replacing one distribution and considering the other as given. Asymptotic distributions of these statistics are obtained, and tests for goodness-of-fit and for homogeneity with a known distribution can be constructed on the basis of these results.

16.
In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing distribution is discrete, resulting in a finite mixture model formulation. An EM algorithm for estimation is described, and the algorithm is applied to data on customer purchases of books offered through direct mail. Our model is compared empirically to a number of other approaches that deal with heterogeneity in Poisson regression models.
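A minimal EM cycle for a finite mixture of Poisson regressions in the spirit of this entry is sketched below, with the number of components fixed in advance. The weighted M-step uses a generic numerical optimizer rather than the authors' implementation, and the two-segment purchase-count data are simulated, hypothetical values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def em_mixture_poisson(X, y, n_comp, n_iter=50, seed=0):
    """EM for a finite mixture of Poisson regressions:
    y_i | component k ~ Poisson(exp(x_i' beta_k)), P(component k) = pi_k.
    X should include an intercept column."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    betas = rng.normal(scale=0.1, size=(n_comp, p))
    pis = np.full(n_comp, 1.0 / n_comp)

    def neg_wll(beta, w):
        eta = X @ beta
        # weighted Poisson log-likelihood (constant term dropped)
        return -np.sum(w * (y * eta - np.exp(eta)))

    for _ in range(n_iter):
        # E-step: responsibilities r[i, k]
        mu = np.exp(X @ betas.T)                       # (n, K)
        like = poisson.pmf(y[:, None], mu) * pis
        r = like / like.sum(axis=1, keepdims=True)
        # M-step: mixing proportions and weighted Poisson regressions
        pis = r.mean(axis=0)
        for k in range(n_comp):
            res = minimize(neg_wll, betas[k], args=(r[:, k],), method="BFGS")
            betas[k] = res.x
    return pis, betas

# Hypothetical purchase-count data with two latent customer segments.
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
z = rng.binomial(1, 0.4, size=n)
true_b = np.array([[0.2, 0.5], [1.5, -0.3]])
y = rng.poisson(np.exp((X * true_b[z]).sum(axis=1)))
print(em_mixture_poisson(X, y, n_comp=2))
```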

17.
The presence of excess zeros in ordinal data is pervasive in areas such as the medical and social sciences. Unfortunately, the analysis of such data has so far hardly been looked into, perhaps because the underlying model that fits such data is not a generalized linear model. Some methodological developments and intensive computations are clearly required. The current investigation is concerned with the selection of variables in such models. On many occasions where the number of predictors is quite large and some of them are not useful, the maximum likelihood approach is not the automatic choice, since, apart from the messy calculations involved, it fails to provide efficient estimates of the underlying parameters. The proposed penalized approach includes the ℓ1 penalty (LASSO) and the mixture of ℓ1 and ℓ2 penalties (elastic net). We propose a coordinate descent algorithm to fit a wide class of ordinal regression models and to select useful variables appearing in both the ordinal regression and the logistic-regression-based mixing component. A rigorous discussion of the selection of predictors is given through a simulation study. The proposed method is illustrated by analyzing the severity of driver injury in road accidents from Michigan's Upper Peninsula.
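The coordinate descent idea in this entry is easiest to see on the plain lasso with squared-error loss; the paper applies the same coordinate-wise soft-thresholding cycle to (a quadratic approximation of) the penalized ordinal/logistic likelihood. The sketch below is that simplified stand-in, assuming standardized predictors and a hypothetical penalty level lam = 0.1.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1,
    assuming the columns of X are standardized (mean 0, variance 1)."""
    n, p = X.shape
    b = np.zeros(p)
    resid = y - X @ b
    for _ in range(n_sweeps):
        for j in range(p):
            # univariate least-squares coefficient on the partial residual
            rho = X[:, j] @ resid / n + b[j]
            b_new = soft_threshold(rho, lam)
            resid += X[:, j] * (b[j] - b_new)   # keep residual up to date
            b[j] = b_new
    return b

# Hypothetical sparse problem: 3 of 20 predictors are active.
rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)
print(np.round(lasso_cd(X, y - y.mean(), lam=0.1), 2))
```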

18.
R. Göb, Metrika 1992, 39(1): 269–316
Investigations on acceptance sampling have paid rather little attention to defects inspection. For modern quality control, both the steadily increasing complexity of products and the need for differentiated cost calculation create a clear demand for economic defects sampling in its practically most relevant form: lot-by-lot single sampling plans, where the OC (lot OC) is considered as a function of the lot size N, the sample size n, the acceptance number c, and the number of defects in the lot K. Starting from an appropriate lot OC function, the present paper develops an economic cost and control model and an economic objective function for single defects sampling plans by adapting the α-optimal sampling scheme, introduced by E. von Collani for defectives inspection, to the purposes of defects inspection. A simple and accurate approximation of α-optimal defects plans is derived by means of a Poisson approximation of the lot OC function.

19.
Statistical methodology for spatio‐temporal point processes is in its infancy. We consider second‐order analysis based on pair correlation functions and K‐functions for general inhomogeneous spatio‐temporal point processes and for inhomogeneous spatio‐temporal Cox processes. Assuming spatio‐temporal separability of the intensity function, we clarify different meanings of second‐order spatio‐temporal separability. One is second‐order spatio‐temporal independence and relates to log‐Gaussian Cox processes with an additive covariance structure of the underlying spatio‐temporal Gaussian process. Another concerns shot‐noise Cox processes with a separable spatio‐temporal covariance density. We propose diagnostic procedures for checking hypotheses of second‐order spatio‐temporal separability, which we apply on simulated and real data.

20.
Lynn Roy LaMotte, Metrika 1999, 50(2): 109–119
Deleted-case diagnostic statistics in regression analysis are based on changes in estimates due to deleting one or more cases. Bounds on these statistics, suggested in the literature for identifying influential cases, are widely used. In a linear regression model for Y in terms of X and Z, the model is "collapsible" with respect to Z if the Y–X relation is unchanged by deleting Z from the model. Deleted-case diagnostic statistics can be viewed as test statistics for collapsibility hypotheses in the mean shift outlier model. It follows that, for any given case, all deleted-case statistics test the same hypothesis and hence have the same p-value, while the published bounds correspond to different levels of significance among the several statistics. Furthermore, the bound for any particular deleted-case statistic gives widely varying levels of significance over the cases in the data set.
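The equivalence stated in this entry, namely that all single-case deletion statistics test the same mean-shift-outlier hypothesis, can be checked numerically: the externally studentized residual is the t statistic of that hypothesis, and DFFITS and Cook's distance are monotone functions of it given the leverage. The sketch below verifies this on a small simulated dataset with hypothetical coefficients.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

H = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix
h = np.diag(H)
e = y - H @ y                               # ordinary residuals
s2 = e @ e / (n - p)

# Externally studentized (deleted) residuals: the t statistic of the
# mean-shift outlier model for each case, ~ t_{n-p-1} under H0.
s2_i = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)
t_i = e / np.sqrt(s2_i * (1 - h))
p_values = 2 * stats.t.sf(np.abs(t_i), df=n - p - 1)

# Two popular deleted-case statistics; both are functions of t_i and h.
dffits = t_i * np.sqrt(h / (1 - h))
cooks_d = (e**2 / (p * s2)) * (h / (1 - h) ** 2)

i = np.argmax(np.abs(t_i))
print(f"case {i}: t = {t_i[i]:.2f}, p = {p_values[i]:.3f}, "
      f"DFFITS = {dffits[i]:.2f}, Cook's D = {cooks_d[i]:.2f}")
```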
