Similar Documents
20 similar records found (search time: 15 ms)
1.
This paper deals with the issue of testing hypotheses in symmetric and log‐symmetric linear regression models in small and moderate‐sized samples. We focus on four tests, namely, the Wald, likelihood ratio, score, and gradient tests. These tests rely on asymptotic results and are unreliable when the sample size is not large enough to guarantee a good agreement between the exact distribution of the test statistic and the corresponding chi‐squared asymptotic distribution. Bartlett and Bartlett‐type corrections typically attenuate the size distortion of the tests. Such corrections are available in the literature for the likelihood ratio and score tests in symmetric linear regression models. Here, we derive a Bartlett‐type correction for the gradient test. We show that the corrections are also valid for log‐symmetric linear regression models. We numerically compare the various tests and their bootstrapped counterparts through simulations. Our results suggest that the corrected and bootstrapped tests exhibit a type I error probability closer to the chosen nominal level with virtually no power loss. The analytically corrected tests, including the Bartlett‐corrected gradient test derived in this paper, perform comparably to the bootstrapped tests while not requiring computationally intensive calculations. We present a real data application to illustrate the usefulness of the modified tests.
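The bootstrap calibration that the abstract compares against can be sketched in a much simpler setting. This is a minimal illustration of the idea only: the statistic (a studentized mean), the null hypothesis, and the centred-resampling scheme are assumptions for the sketch, not the authors' symmetric-model tests.

```python
import math
import random
import statistics

def t_stat(sample):
    # Studentized mean: statistic for H0: mean = 0
    n = len(sample)
    return statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))

def bootstrap_pvalue(sample, n_boot=499, seed=0):
    """Bootstrap p-value: resample data centred to satisfy H0 and compare
    the observed statistic with its resampling distribution."""
    rng = random.Random(seed)
    obs = abs(t_stat(sample))
    centre = statistics.mean(sample)
    centred = [x - centre for x in sample]   # impose the null
    exceed = sum(
        1
        for _ in range(n_boot)
        if abs(t_stat([rng.choice(centred) for _ in sample])) >= obs
    )
    return (exceed + 1) / (n_boot + 1)       # never exactly zero
```

A bootstrapped test rejects when this p-value falls below the nominal level, which is what keeps its size close to nominal in small samples.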

2.
Single‐index models are popular regression models that are more flexible than linear models while still maintaining more structure than purely nonparametric models. We consider the problem of estimating the regression parameters under a monotonicity constraint on the unknown link function. In contrast to the standard approach of using smoothing techniques, we review different “non‐smooth” estimators that avoid the difficult smoothing parameter selection. For about 30 years, it has been conjectured that the profile least squares estimator is a √n‐consistent estimator of the regression parameter, but the only non‐smooth argmin/argmax estimators actually known to achieve this √n‐rate are not based on the nonparametric least squares estimator of the link function. However, solving a score equation corresponding to the least squares approach results in √n‐consistent estimators. We illustrate the good behavior of the score approach via simulations. The connection with the binary choice and current status linear regression models is also discussed.
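Given a candidate index, the nonparametric least squares estimate of a monotone link reduces to an isotonic regression of the responses ordered by the index, computable by the pool-adjacent-violators algorithm (PAVA). A minimal unweighted sketch (the function name and setting are illustrative, not the paper's estimator):

```python
def pava(y):
    """Pool Adjacent Violators: least squares fit of a non-decreasing
    sequence to y.  Maintains blocks as [sum, count]; merges any adjacent
    pair of blocks whose means violate monotonicity."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # merge backwards while the previous block mean exceeds this one's
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)   # each block is fitted at its mean
    return out
```

For example, `pava([3, 1, 2])` pools all three values into a single block at their mean 2, while an already monotone input is returned unchanged.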

3.
Economic Systems, 2022, 46(3), 100998
CDM (Crépon, Duguet and Mairesse, 1998) is a workhorse model in the economics of innovation: it explains productivity in a three-stage procedure driven initially by R&D, leading to patents and then to productivity improvements. Based on the logic of this model, an increasing number of papers apply it to emerging economies but modify the original model without being explicit about the nature and implications of the modification. We argue in this paper that, in its original form, CDM does not capture stylized facts about the determinants of productivity in emerging economies and that we need alternative models. Accordingly, we are critical of papers that try to maintain the validity of the model but actually change it. For that purpose, we test the original CDM model against two alternatives: investment-driven and production capability–driven models. Our research is based on a large sample of firms in Central and Eastern Europe, the former Soviet republics and Turkey, and we show that the alternative models are much closer to the stylized facts of innovation activities and technology upgrading in these and other emerging economies. Our conclusions have important policy implications, which we discuss.

4.
Score test statistics for testing zero inflation and the covariance parameter are proposed in the bivariate zero‐inflated Poisson (BZIP) regression model. Monte Carlo studies show that the score test and the likelihood ratio test for zero inflation underestimate the nominal significance level, while the score test for the covariance parameter keeps the significance level close to the nominal one. To overcome this underestimation, we propose a bootstrap version of the score test for zero inflation. An empirical example with covariates is provided to illustrate the results. In addition, a score test for zero inflation is also proposed in a BZIP model that allows a flexible dependence structure via a copula.
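For intuition, the univariate analogue of such a score test (ordinary Poisson against zero-inflated Poisson) has a closed form; the statistic below follows the van den Broek (1995) construction and is offered as an illustrative sketch, not the bivariate test of the paper.

```python
import math

def score_test_zero_inflation(counts):
    """Score test of Poisson vs. zero-inflated Poisson for iid counts.
    Under H0 the Poisson mean MLE is the sample mean; the statistic
    compares observed zeros with the zeros a Poisson fit predicts and
    is asymptotically chi-squared with 1 degree of freedom."""
    n = len(counts)
    lam = sum(counts) / n              # MLE of the Poisson mean under H0
    p0 = math.exp(-lam)                # P(Y = 0) under the fitted Poisson
    n0 = sum(1 for y in counts if y == 0)
    denom = n * p0 * (1 - p0) - n * lam * p0 ** 2
    return (n0 - n * p0) ** 2 / denom
```

A sample with far more zeros than a Poisson fit predicts yields a statistic well above the 5% critical value 3.84.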

5.
Incomplete correlated 2 × 2 tables are common in some infectious disease studies and two‐step treatment studies in which one of the comparative measures of interest is the risk ratio (RR). This paper investigates two‐stage tests of whether K RRs are homogeneous and whether the common RR equals a given constant. On the assumption that the K RRs are equal, this paper proposes four asymptotic test statistics (the Wald‐type, the logarithmic‐transformation‐based, the score‐type and the likelihood ratio statistics) to test whether the common RR equals a prespecified value. Sample size formulae based on the hypothesis testing method and the confidence interval method are proposed for the second stage of testing. Simulation results show that sample sizes based on the score‐type test and the logarithmic‐transformation‐based test achieve the predesigned power more accurately than those based on the Wald‐type test. The score‐type test performs best of the four tests in terms of type I error rate. A real example is used to illustrate the proposed methods.

6.
Despite certain advances in non‐randomized response (NRR) techniques over the past six years, the existing non‐randomized crosswise and triangular models have several limitations in practice. In this paper, I propose a new NRR model, called the parallel model, with a wider range of application. Asymptotic properties of the maximum likelihood estimator (and its modified version) for the proportion of interest are explored. Theoretical comparisons with the crosswise and triangular models show that the parallel model is more efficient than the two existing NRR models for most of the possible parameter ranges. Bayesian methods for analyzing survey data from the parallel model are developed. A case study on college students' premarital sexual behavior in Wuhan and a case study on plagiarism at the University of Hong Kong are conducted to illustrate the proposed methods. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.

7.
In missing data problems, it is often the case that there is a natural test statistic for testing a statistical hypothesis had all the data been observed. A fuzzy p-value approach to hypothesis testing has recently been proposed which is implemented by imputing the missing values in the "complete data" test statistic with values simulated from the conditional null distribution given the observed data. We argue that imputing data in this way will inevitably lead to loss of power. For the case of a scalar parameter, we show that the asymptotic efficiency of the score test based on the imputed "complete data" relative to the score test based on the observed data is given by the ratio of the observed data information to the complete data information. Three examples, involving probit regression, a normal random effects model, and unidentified paired data, are used for illustration. For testing linkage disequilibrium based on pooled genotype data, simulation results show that the imputed Neyman–Pearson and Fisher exact tests are less powerful than a Wald-type test based on the observed data maximum likelihood estimator. In conclusion, we caution against the routine use of the fuzzy p-value approach in latent variable or missing data problems and suggest some viable alternatives.

8.
Mann–Whitney‐type causal effects are generally applicable to outcome variables with a natural ordering, have been recommended for clinical trials because of their clinical relevance and interpretability, and are particularly useful in analysing an ordinal composite outcome that combines an original primary outcome with death and possibly treatment discontinuation. In this article, we consider robust and efficient estimation of such causal effects in observational studies and clinical trials. For observational studies, we propose and compare several estimators: regression estimators based on an outcome regression (OR) model or a generalised probabilistic index (GPI) model, an inverse probability weighted estimator based on a propensity score model, and two doubly robust (DR), locally efficient estimators. One of the DR estimators involves a propensity score model and an OR model; it is consistent and asymptotically normal under the union of the two models and attains the semiparametric information bound when both models are correct. The other DR estimator has the same properties with the OR model replaced by a GPI model. For clinical trials, we extend an existing augmented estimator based on a GPI model and propose a new one based on an OR model. The methods are evaluated and compared in simulation experiments and applied to a clinical trial in cardiology and an observational study in obstetrics.
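The Mann–Whitney-type effect itself has a simple plug-in form in the randomized, fully observed two-arm case; a minimal sketch (covariate adjustment, propensity weighting, and the paper's doubly robust constructions are omitted):

```python
def mann_whitney_effect(y1, y0):
    """Plug-in estimate of P(Y1 > Y0) + 0.5 * P(Y1 = Y0): the probability
    that a randomly chosen treated outcome beats a randomly chosen control
    outcome, with ties counting one half."""
    wins = sum(
        1.0 if a > b else 0.5 if a == b else 0.0
        for a in y1
        for b in y0
    )
    return wins / (len(y1) * len(y0))
```

A value of 0.5 indicates no treatment effect on this scale; values near 1 indicate that treated outcomes almost always rank higher.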

9.
Many phenomena in the life sciences can be analyzed using a fixed design regression model with a regression function m that exhibits a crossing‐point in the following sense: the regression function runs below or above its mean level according as the input variable lies to the left or to the right of that crossing‐point, or vice versa. We propose a non‐parametric estimator and show weak and strong consistency as long as the crossing‐point is unique. The estimator is defined as the maximizing point (argmax) of a certain marked empirical process. For testing the hypothesis H0 that the regression function m is actually constant (no crossing‐point), a decision rule is designed for the specific alternative H1 that m possesses a crossing‐point. The pertaining test statistic is the ratio max/argmax of the maximum value to the maximizing point of the marked empirical process. Under the hypothesis, the ratio converges in distribution to the corresponding ratio of a reflected Brownian bridge, for which we derive the distribution function. The test is consistent on the whole alternative and superior to the corresponding Kolmogorov–Smirnov test, which is based only on the maximal value max. Some practical examples of possible applications are given, and a study about dental phobia is discussed in more detail.
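The max/argmax idea can be sketched with the simplest possible marked process, the cumulative sum of centred responses; the paper's actual marks, weak-convergence argument, and reflected-Brownian-bridge limit are beyond this illustrative sketch.

```python
def crossing_point(y):
    """Cumulative-sum process M(k) = sum_{i<=k} (y_i - ybar) for responses
    at ordered design points.  Its maximizing index estimates the
    crossing-point; the pair (argmax, max) feeds the abstract's ratio."""
    n = len(y)
    ybar = sum(y) / n
    cum, process = 0.0, []
    for v in y:
        cum += v - ybar
        process.append(cum)
    k = max(range(n), key=lambda i: abs(process[i]))
    return k + 1, abs(process[k])   # 1-based index of argmax, max value
```

For a step function that jumps at the midpoint, the process peaks exactly at the jump.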

10.
The t regression models provide a useful extension of normal regression models for datasets involving errors with longer-than-normal tails. Homogeneity of variances (if they exist) is a standard assumption in t regression models, but this assumption is not necessarily appropriate. This paper is devoted to tests for heteroscedasticity in general t linear regression models. The asymptotic properties of the score tests, including their asymptotic chi-squared distribution and approximate powers under local alternatives, are studied. Based on the modified profile likelihood (Cox and Reid in J R Stat Soc Ser B 49(1):1–39, 1987), an adjusted score test for heteroscedasticity is developed. The properties of the score test and its adjustment are investigated through Monte Carlo simulations. The test methods are illustrated with land rent data (Weisberg in Applied Linear Regression, Wiley, New York, 1985). The project was supported by NSFC grant 10671032, China, and a grant (HKBU2030/07P) from the Research Grants Council of Hong Kong, China.
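In the normal-errors special case, the score (LM) test for heteroscedasticity is the familiar Breusch–Pagan statistic; a minimal simple-regression sketch in Koenker's studentized n·R² form (the paper's t-model score test and its modified-profile-likelihood adjustment are not reproduced here):

```python
def ols_simple(x, y):
    # Least squares fit of y = a + b * x; returns (a, b)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - b * mx, b

def lm_het_test(x, y):
    """Score/LM test for heteroscedasticity (Breusch-Pagan, Koenker form):
    n * R^2 from regressing squared OLS residuals on x; asymptotically
    chi-squared with 1 df under homoscedasticity."""
    a, b = ols_simple(x, y)
    e2 = [(yi - a - b * xi) ** 2 for xi, yi in zip(x, y)]
    a2, b2 = ols_simple(x, e2)
    m = sum(e2) / len(e2)
    ss_tot = sum((v - m) ** 2 for v in e2)
    ss_res = sum((v - a2 - b2 * xi) ** 2 for xi, v in zip(x, e2))
    return len(x) * (1 - ss_res / ss_tot)
```

When the error spread grows with x, the squared residuals trend upward in x and the statistic exceeds the 5% critical value 3.84.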

11.
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images and so on are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorising the basic model types as linear, non‐linear and non‐parametric. We discuss publicly available software packages and illustrate some of the procedures by application to a functional magnetic resonance imaging data set.

12.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model for analysing lifetime data. In this article we consider a semi‐parametric modelling approach for TR and contribute implementational and theoretical details for model fitting and statistical inference. Extensive simulations are carried out to examine the finite sample performance of the parametric and non‐parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.

13.
A single outlier in a regression model can be detected by the effect of its deletion on the residual sum of squares. An equivalent procedure is the simple intervention in which an extra parameter is added for the mean of the observation in question. Similarly, for unobserved components or structural time-series models, the effect of elaborations of the model on inferences can be investigated by the use of interventions involving a single parameter, such as trend or level changes. Because such time-series models contain more than one variance, the effect of the intervention is measured by the change in individual variances. We examine the effect on the estimated parameters of moving various kinds of intervention along the series. The horrendous computational problems involved are overcome by the use of score statistics combined with recent developments in filtering and smoothing. Interpretation of the resulting time-series plots of diagnostics is aided by simulation envelopes. Our procedures, illustrated with four examples, permit keen insights into the fragility of inferences to specific shocks, such as outliers and level breaks. Although the emphasis is mostly on parameter estimation, forecasts are also considered. Possible extensions include seasonal adjustment and detrending of series.
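The deletion idea in the first sentence can be sketched for the simplest possible fit, a mean-only model: the drop in the residual sum of squares when observation i is removed flags a single outlier. (The structural time-series interventions and score-based filtering of the paper are of course far more involved.)

```python
def rss_deletion_drops(y):
    """Drop in the residual sum of squares when each observation is
    deleted from a mean-only fit; the largest drop points at the
    most outlying observation."""
    n = len(y)
    m = sum(y) / n
    rss = sum((v - m) ** 2 for v in y)
    drops = []
    for i in range(n):
        rest = y[:i] + y[i + 1:]
        mr = sum(rest) / (n - 1)             # refit without observation i
        drops.append(rss - sum((v - mr) ** 2 for v in rest))
    return drops
```

Each drop is non-negative, and deleting the outlier produces by far the largest one.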

14.
The adoption of cleaner technology (CT) has the potential to play an important role in tackling the impacts of business on climate change. It is therefore important to understand the factors motivating the adoption of CT in business. Using a technology–firm–stakeholder framework, this study proposes a perception‐based model of the adoption of CT for climate proactivity that is tested against data collected from 106 firms in India. Six factors are tested using logistic regression and five are found to be significant in distinguishing adopter firms from non‐adopter firms. The results suggest that the perception‐based model using a technology–firm–stakeholder framework is a useful approach for examining factors affecting the adoption decision. While techno‐economic benefits are perceived to be higher by adopter firms than by non‐adopter firms, other benefits are not perceived differently by the two groups. In addition, adopter firms perceive lower financial costs and higher technical capability than non‐adopter firms do. Also, adopter firms perceive higher regulatory pressure but lower stakeholder pressure than non‐adopter firms do. Implications of the findings and future research areas are discussed. Copyright © 2010 John Wiley & Sons, Ltd and ERP Environment.

15.
In this paper, we use Monte Carlo (MC) testing techniques for testing linearity against smooth transition models. The MC approach allows us to introduce a new test that differs in two respects from the tests existing in the literature. First, the test is exact in the sense that the probability of rejecting the null when it is true is always less than or equal to the nominal size of the test. Secondly, the test is not based on an auxiliary regression obtained by replacing the model under the alternative with approximations based on a Taylor expansion. We also apply MC testing methods to size-correct the test proposed by Luukkonen, Saikkonen and Teräsvirta (Biometrika, Vol. 75, 1988, p. 491). The results show that the power loss implied by the auxiliary regression‐based test is non‐existent compared with a supremum‐based test but is more substantial when compared with the three other tests under consideration.

16.
17.
Estimation with a longitudinal response Y subject to nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y, X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity, based on estimating equations constructed using an instrument Z: a part of X that is related to Y but unrelated to the dropout propensity conditional on Y and the other covariates. Population means and other parameters in the nonparametric distribution of Y can then be estimated by inverse propensity weighting with the estimated propensity. To improve efficiency, we derive a model‐assisted regression estimator that makes use of information provided by the covariates and previously observed Y‐values in the longitudinal setting. The model‐assisted regression estimator is protected against model misspecification and is asymptotically normal and more efficient when the working models are correct and some other conditions are satisfied. The finite‐sample performance of the estimators is studied through simulation, and an application to the HIV‐CD4 data set is also presented as illustration.
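The inverse propensity weighting step can be sketched in its simplest, Horvitz–Thompson form, with known rather than GMM-estimated propensities (the paper's instrument-based estimation and model-assisted augmentation are not shown):

```python
def ipw_mean(y, observed, propensity):
    """Horvitz-Thompson estimate of E[Y] under missing data: each observed
    value is inflated by the inverse of its response probability, so units
    that are rarely observed stand in for their missing counterparts."""
    total = sum(
        yi / pi for yi, ri, pi in zip(y, observed, propensity) if ri
    )
    return total / len(y)
```

With full observation and unit propensities this reduces to the sample mean; with 50% propensities each observed value counts twice.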

18.
We propose a score statistic to test the vector of odds ratio parameters under the logistic regression model based on case–control data. The proposed score test is based on the semiparametric profile log-likelihood function under a two-sample semiparametric model, which is equivalent to the assumed logistic regression model. The proposed score statistic has an asymptotic chi-squared distribution under the null hypothesis and an asymptotic noncentral chi-squared distribution under local alternatives to the null hypothesis. Moreover, we show that the proposed score test is asymptotically equivalent to the Wald test under the logistic regression model based on case–control data. In addition, we demonstrate that the proposed score statistic and its asymptotic distribution may be obtained by fitting the prospective logistic regression model to case–control data. We present simulation results and analyses of two real datasets.
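For a single covariate, the score test of H0: β = 0 in prospective logistic regression has a closed form, because under the null the fitted success probability is simply the sample proportion ȳ. A minimal sketch of that prospective statistic (the paper's two-sample semiparametric profile-likelihood version is not reproduced):

```python
def score_test_logistic(x, y):
    """Score test of H0: beta = 0 in logit P(Y=1|x) = a + b*x.
    Profiling out the intercept under H0 gives score U and information V
    in closed form; U^2 / V is asymptotically chi-squared with 1 df."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    u = sum((xi - xbar) * yi for xi, yi in zip(x, y))       # score for b
    v = ybar * (1 - ybar) * sum((xi - xbar) ** 2 for xi in x)
    return u * u / v
```

Perfectly associated binary data give a large statistic, while a covariate balanced across outcomes gives exactly zero.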

19.
The present paper introduces a methodology for the semiparametric or non‐parametric two‐sample equivalence problem when the effects are specified by statistical functionals. The mean relative risk functional of two populations is given by the average of the time‐dependent risk. This functional is a meaningful non‐parametric quantity that is invariant under strictly monotone transformations of the data. In the case of proportional hazard models, the functional determines exactly the proportional hazard risk factor. It is shown that an equivalence test of the two‐sample Savage rank test type is appropriate for this functional. Under proportional hazards, this test can be carried out as an exact level α test. It also works quite well under other semiparametric models. Similar results are presented for a Wilcoxon rank‐sum test for equivalence based on the Mann–Whitney functional, given by the relative treatment effect.

20.
Standard model‐based small area estimates perform poorly in the presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method to handle outliers in unit‐level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit‐level error to model outliers and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Application of our method to a data set suspected to contain an outlier confirms this suspicion, correctly identifies the suspected outlier, and produces robust predictors and posterior standard deviations of the small area means. Evaluation of several procedures, including the M‐quantile method of Chambers & Tzavidis (2006), via simulations shows that our proposed method is as good as the other procedures in terms of bias, variability, and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, our method and the Sinha–Rao method perform similarly, and both improve over the other methods. This superior performance shows the procedure's dual (Bayes and frequentist) dominance, which should make it attractive to all practitioners of small area estimation, Bayesian and frequentist alike.

