Similar Articles
20 similar articles found (search time: 125 ms)
1.
This article is concerned with inference on seemingly unrelated non-parametric regression models with serially correlated errors. Based on an initial estimator of the mean functions, we first construct an efficient estimator of the autoregressive parameters of the errors. Then, by applying an undersmoothing technique and taking both the contemporaneous correlation among equations and the serial correlation into account, we propose an efficient two-stage local polynomial estimator of the unknown mean functions. The resulting estimator is shown to have the same bias as, but smaller asymptotic variance than, estimators that neglect the contemporaneous and/or serial correlation. Its asymptotic normality is also established. In addition, we develop a wild block bootstrap test for the goodness-of-fit of the models. The finite-sample performance of our procedures is investigated in a simulation study, with strongly supportive results, and a real data set is analysed to illustrate their usefulness.

2.
《Statistica Neerlandica》2018,72(2):126-156
In this paper, we study the application of Le Cam's one-step method to parameter estimation in ordinary differential equation models. This computationally simple technique can serve as an alternative to numerical evaluation of the popular non-linear least squares estimator, which typically requires a multistep iterative algorithm and repeated numerical integration of the ordinary differential equation system. The one-step method starts from a preliminary √n-consistent estimator of the parameter of interest and turns it into an asymptotic (as the sample size n → ∞) equivalent of the least squares estimator through a numerically straightforward procedure. We demonstrate the performance of the one-step estimator via extensive simulations and real data examples. The method enables the researcher to obtain both point and interval estimates. The preliminary √n-consistent estimator that we use depends on non-parametric smoothing, and we provide a data-driven, theoretically supported methodology for choosing its tuning parameter. An easy implementation scheme of the one-step method for practical use is pointed out.
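The mechanics of the one-step update can be sketched in a few lines. The example below is an illustration of the general idea, not the paper's procedure: it fits the scalar ODE y' = -θy (solution y0·e^{-θt}), obtains a crude preliminary estimate from a log-linear fit, and then applies a single Gauss-Newton step of the least-squares criterion. The model, noise level, and preliminary estimator are all illustrative choices.

```python
import numpy as np

def one_step(theta0, t, y, y0=1.0):
    """One Gauss-Newton step of the least-squares criterion for the
    ODE model y' = -theta * y, whose solution is y0 * exp(-theta * t).
    theta0 is a preliminary root-n-consistent estimate."""
    f = y0 * np.exp(-theta0 * t)   # model prediction at theta0
    J = -t * f                     # derivative d f / d theta
    r = y - f                      # residuals
    # Gauss-Newton update: theta1 = theta0 + (J'J)^{-1} J'r
    return theta0 + (J @ r) / (J @ J)

rng = np.random.default_rng(0)
theta_true = 0.7
t = np.linspace(0.1, 2.0, 200)
y = np.exp(-theta_true * t) + 0.01 * rng.standard_normal(t.size)

# Preliminary estimator: negative slope of log(y) on t (crude but
# adequate for small noise); the one-step update then sharpens it.
theta_prelim = -np.polyfit(t, np.log(np.clip(y, 1e-6, None)), 1)[0]
theta_hat = one_step(theta_prelim, t, y)
```

Note that the expensive part of full non-linear least squares (repeated integration of the ODE inside an iterative optimizer) is replaced by a single explicit update from the preliminary value.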

3.
This paper is concerned with statistical inference on seemingly unrelated varying coefficient partially linear models. By combining the local polynomial and profile least squares techniques, and estimating the contemporaneous correlation, we propose a class of weighted profile least squares estimators (WPLSEs) for the parametric components. It is shown that the WPLSEs achieve the semiparametric efficiency bound and are asymptotically normal. For the non-parametric components, by applying an undersmoothing technique and taking the contemporaneous correlation into account, we propose an efficient local polynomial estimation. The resulting estimators are shown to have smaller mean squared errors than estimators that neglect the contemporaneous correlation. In addition, a class of variable selection procedures is developed for simultaneously selecting significant variables and estimating unknown parameters, based on non-concave penalized and weighted profile least squares techniques. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures perform as efficiently as if the true submodels were known. The proposed methods are evaluated in extensive simulation studies and applied to a set of real data.

4.
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images and so on are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorising the basic model types as linear, non‐linear and non‐parametric. We discuss publicly available software packages and illustrate some of the procedures by application to a functional magnetic resonance imaging data set.

5.
This paper uses non-parametric kernel methods to construct observation-specific elasticities of substitution for a balanced panel of 73 developed and developing countries in order to examine the capital–skill complementarity hypothesis. The exercise shows some support for capital–skill complementarity, but the strength of the evidence depends upon the definition of skilled labour and the elasticity of substitution measure being used. The added flexibility of the non-parametric procedure also uncovers variation in the elasticities of substitution across countries, groups of countries and time periods.

6.
The present paper introduces a methodology for the semiparametric or non-parametric two-sample equivalence problem when the effects are specified by statistical functionals. The mean relative risk functional of two populations is given by the average of the time-dependent risk. This functional is a meaningful non-parametric quantity, which is invariant under strictly monotone transformations of the data. In the case of proportional hazard models, the functional determines exactly the proportional hazard risk factor. It is shown that an equivalence test of the two-sample Savage rank type is appropriate for this functional. Under proportional hazards, this test can be carried out as an exact level-α test, and it also works quite well under other semiparametric models. Similar results are presented for a Wilcoxon rank-sum test for equivalence based on the Mann–Whitney functional given by the relative treatment effect.

7.
Multiple event data are frequently encountered in medical follow-up, engineering and other applications when the multiple events are considered as the major outcomes. They may be repetitions of the same event (recurrent events) or events of different natures. Times between successive events (gap times) are often of direct interest in these applications. The stochastic-ordering structure and within-subject dependence of multiple events generate statistical challenges for analysing such data, including induced dependent censoring and non-identifiability of marginal distributions. This paper provides an overview of a class of existing non-parametric estimation methods for gap time distributions for various types of multiple event data, where sampling bias from induced dependent censoring is effectively adjusted for. We discuss the statistical issues in gap time analysis, describe the estimation procedures and illustrate the methods with a comparative simulation study and a real application to an AIDS clinical trial. A comprehensive understanding of the challenges and available methods for non-parametric analysis is useful because there is no standard approach to identifying an appropriate gap time method for a given research question. The methods discussed in this review allow practitioners to effectively handle a variety of real-world multiple event data.

8.
This survey reviews the existing literature on the most relevant Bayesian inference methods for univariate and multivariate GARCH models. The advantages and drawbacks of each procedure are outlined, as are the advantages of the Bayesian approach over classical procedures. The paper emphasizes recent Bayesian non-parametric approaches for GARCH models that avoid imposing arbitrary parametric distributional assumptions. These novel approaches implicitly assume an infinite mixture of Gaussian distributions on the standardized returns, which has been shown to be more flexible and to better describe the uncertainty about future volatilities. Finally, the survey presents an illustration using real data to show the flexibility and usefulness of the non-parametric approach.

9.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model to analyse lifetime data. In this article we consider a semi‐parametric modelling approach for TR and contribute implementational and theoretical details for model fitting and statistical inferences. Extensive simulations are carried out to examine the finite sample performance of the parametric and non‐parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.

10.
We propose a strategy for assessing structural stability in time‐series frameworks when potential change dates are unknown. Existing stability tests are effective in detecting structural change, but procedures for identifying timing are imprecise, especially in assessing the stability of variance parameters. We present a likelihood‐based procedure for assigning conditional probabilities to the occurrence of structural breaks at alternative dates. The procedure is effective in improving the precision with which inferences regarding timing can be made. We illustrate parametric and non‐parametric implementations of the procedure through Monte Carlo experiments, and an assessment of the volatility reduction in the growth rate of US GDP. Copyright © 2006 John Wiley & Sons, Ltd.
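As a sketch of the general idea (not the paper's exact procedure), the snippet below assigns conditional probabilities to candidate variance-break dates in a zero-mean Gaussian series: for each date it profiles out the two segment variances and then normalises the resulting likelihoods across dates. The trimming and data-generating choices are illustrative.

```python
import numpy as np

def break_date_probs(y):
    """Conditional probabilities of a variance-break date in a zero-mean
    Gaussian series: for each candidate date k, plug in the segment
    variance MLEs and weight dates by the resulting profile likelihood."""
    n = len(y)
    loglik = np.full(n, -np.inf)
    for k in range(5, n - 5):            # keep both segments non-trivial
        s1 = np.mean(y[:k] ** 2)         # MLE variance, first segment
        s2 = np.mean(y[k:] ** 2)         # MLE variance, second segment
        loglik[k] = -0.5 * (k * np.log(s1) + (n - k) * np.log(s2))
    w = np.exp(loglik - loglik.max())    # normalise on the probability scale
    return w / w.sum()

rng = np.random.default_rng(1)
# volatility drops at t = 100: sd falls from 2.0 to 0.5
y = np.concatenate([rng.normal(0, 2.0, 100), rng.normal(0, 0.5, 100)])
p = break_date_probs(y)
```

The vector `p` concentrates around the true change date, which is exactly the kind of sharpened timing inference the abstract describes.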

11.
In this paper we discuss the analysis of data from population‐based case‐control studies when there is appreciable non‐response. We develop a class of estimating equations that are relatively easy to implement. For some important special cases, we also provide efficient semi‐parametric maximum‐likelihood methods. We compare the methods in a simulation study based on data from the Women's Cardiovascular Health Study discussed in Arbogast et al. (Estimating incidence rates from population‐based case‐control studies in the presence of non‐respondents, Biometrical Journal 44, 227–239, 2002).

12.
A rich theory of production and analysis of productive efficiency has developed since the pioneering work by Tjalling C. Koopmans and Gerard Debreu. Michael J. Farrell published the first empirical study, and it appeared in a statistical journal (Journal of the Royal Statistical Society), even though the article provided no statistical theory. The literature in econometrics, management sciences, operations research and mathematical statistics has since been enriched by hundreds of papers trying to develop or implement new tools for analysing productivity and efficiency of firms. Both parametric and non‐parametric approaches have been proposed. The mathematical challenge is to derive estimators of production, cost, revenue or profit frontiers, which represent, in the case of production frontiers, the optimal loci of combinations of inputs (like labour, energy and capital) and outputs (the products or services produced by the firms). Optimality is defined in terms of various economic considerations. Then the efficiency of a particular unit is measured by its distance to the estimated frontier. The statistical problem can be viewed as the problem of estimating the support of a multivariate random variable, subject to some shape constraints, in multiple dimensions. These techniques are applied in thousands of papers in the economic and business literature. This ‘guided tour’ reviews the development of various non‐parametric approaches since the early work of Farrell. Remaining challenges and open issues in this challenging arena are also described. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
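One of the simplest non-parametric frontier estimators in this tradition, the free disposal hull (FDH), can be sketched directly. The toy example below (illustrative data; input-oriented Farrell efficiency) measures a unit's efficiency as the largest proportional input contraction such that some observed peer producing at least as much output still dominates it:

```python
import numpy as np

def fdh_input_efficiency(X, Y, i):
    """Farrell input-oriented efficiency of unit i under the free
    disposal hull (FDH): the smallest factor by which unit i could
    radially scale its inputs and still be dominated by an observed
    unit with at least its outputs.  A score of 1 is frontier-efficient."""
    dominating = np.all(Y >= Y[i], axis=1)        # peers producing at least Y[i]
    ratios = np.max(X[dominating] / X[i], axis=1) # radial contraction per peer
    return ratios.min()

# toy data: 4 firms, 2 inputs, 1 (identical) output
X = np.array([[2.0, 4.0], [4.0, 2.0], [4.0, 4.0], [8.0, 8.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
scores = [fdh_input_efficiency(X, Y, i) for i in range(4)]
```

Firm 4 uses twice the inputs of firm 3 for the same output, so its efficiency score is 0.5; the unit's distance to the estimated frontier is exactly the quantity the review describes.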

13.
In this paper, we propose several finite‐sample specification tests for multivariate linear regressions (MLR). We focus on tests for serial dependence and ARCH effects with possibly non‐Gaussian errors. The tests are based on properly standardized multivariate residuals to ensure invariance to error covariances. The procedures proposed provide: (i) exact variants of standard multivariate portmanteau tests for serial correlation as well as ARCH effects, and (ii) exact versions of the diagnostics presented by Shanken (1990) which are based on combining univariate specification tests. Specifically, we combine tests across equations using a Monte Carlo (MC) test method so that Bonferroni‐type bounds can be avoided. The procedures considered are evaluated in a simulation experiment: the latter shows that standard asymptotic procedures suffer from serious size problems, while the MC tests suggested display excellent size and power properties, even when the sample size is small relative to the number of equations, with normal or Student‐t errors. The tests proposed are applied to the Fama–French three‐factor model. Our findings suggest that the i.i.d. error assumption provides an acceptable working framework once we allow for non‐Gaussian errors within 5‐year sub‐periods, whereas temporal instabilities clearly plague the full‐sample dataset. Copyright © 2009 John Wiley & Sons, Ltd.
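The Monte Carlo test construction underlying such procedures can be sketched generically (this is an illustration of the MC p-value idea, not the paper's multivariate diagnostics): simulate the statistic under the null and use the observed value's rank, which gives a finite-sample-exact p-value whenever the statistic is pivotal and α(N+1) is an integer. The toy statistic here is an absolute lag-1 autocorrelation.

```python
import numpy as np

def mc_test_pvalue(stat_obs, stat_fn, simulate_null, n_rep=99, seed=0):
    """Monte Carlo test p-value: simulate the statistic under the null
    n_rep times and rank the observed value.  With alpha * (n_rep + 1)
    an integer and a pivotal statistic, the test is exact in finite
    samples -- no asymptotic approximation is needed."""
    rng = np.random.default_rng(seed)
    sims = np.array([stat_fn(simulate_null(rng)) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

def lag1_stat(x):
    """Absolute lag-1 sample autocorrelation (toy portmanteau-type statistic)."""
    x = x - x.mean()
    return abs(np.sum(x[1:] * x[:-1]) / np.sum(x * x))

n = 50
rng = np.random.default_rng(42)
x_null = rng.standard_normal(n)                 # white noise: should not reject
p_null = mc_test_pvalue(lag1_stat(x_null), lag1_stat,
                        lambda r: r.standard_normal(n))

ar = np.empty(n)                                # strongly autocorrelated series
ar[0] = rng.standard_normal()
for t in range(1, n):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()
p_alt = mc_test_pvalue(lag1_stat(ar), lag1_stat,
                       lambda r: r.standard_normal(n))
```

With 99 replications, rejecting when p ≤ 0.05 is an exact 5% test; combining several such statistics across equations is what replaces the Bonferroni-type bounds mentioned in the abstract.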

14.
Copulas are distributions with uniform marginals. Non-parametric copula estimates may violate the uniformity condition in finite samples. We examine whether it is possible to obtain valid piecewise linear copula densities by triangulation. The copula property imposes strict constraints on design points, making an equi-spaced grid a natural starting point. However, the mixed-integer nature of the problem makes a pure triangulation approach impractical on fine grids. As an alternative, we study ways of approximating copula densities with triangular functions that guarantee the estimator is a valid copula density. The family of resulting estimators can be viewed as a non-parametric MLE of B-spline coefficients on possibly non-equally spaced grids under simple linear constraints. As such, it can be easily solved using standard convex optimization tools and allows for a degree of localization. A simulation study shows attractive performance of the estimator in small samples and compares it with some of the leading alternatives. We demonstrate the empirical relevance of our approach using three applications. In the first, we investigate how the body mass index of children depends on that of their parents. In the second, we construct a bivariate copula underlying the Gibson paradox from macroeconomics. In the third, we show the benefit of using our approach in testing the null of independence against the alternative of an arbitrary dependence pattern.
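To make the uniformity constraint concrete, the sketch below builds a checkerboard (histogram) copula estimate and enforces uniform marginals by Sinkhorn-style rescaling. This is a cruder stand-in for the paper's constrained B-spline MLE, used only to illustrate why raw non-parametric estimates violate the copula property and how a marginal constraint restores it; the grid size, smoothing, and data are illustrative choices.

```python
import numpy as np

def pseudo_obs(x):
    """Rank-transform a sample to (0, 1): the usual pseudo-observations."""
    r = np.empty(x.size)
    r[np.argsort(x)] = np.arange(1, x.size + 1)
    return r / (x.size + 1)

def checkerboard_copula(u, v, m=8, n_iter=500):
    """Histogram ('checkerboard') copula density on an m x m equi-spaced
    grid.  The raw histogram of (u, v) need not have uniform marginals in
    finite samples; Sinkhorn-style rescaling forces every row and column
    of cell masses to sum to 1/m, which makes the piecewise-constant
    density a valid copula density."""
    h, _, _ = np.histogram2d(u, v, bins=m, range=[[0, 1], [0, 1]])
    h = h + 0.5                                        # keep all cells positive
    for _ in range(n_iter):
        h *= (1.0 / m) / h.sum(axis=1, keepdims=True)  # fix row masses
        h *= (1.0 / m) / h.sum(axis=0, keepdims=True)  # fix column masses
    return h * m * m                                   # cell mass -> density

# correlated Gaussian sample -> pseudo-observations -> copula density
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=2000)
c = checkerboard_copula(pseudo_obs(z[:, 0]), pseudo_obs(z[:, 1]))
```

After rescaling, the piecewise-constant density integrates to one and both marginals are (numerically) uniform, which is exactly the validity property the paper's estimator guarantees by construction.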

15.
Assuming that two-step monotone missing data are drawn from a multivariate normal population, this paper derives a Bartlett-type correction to the likelihood ratio test for missing completely at random (MCAR), which plays an important role in the statistical analysis of incomplete datasets. The advantages of our approach are confirmed in Monte Carlo simulations. Our correction drastically improves the accuracy of the type I error in Little's (1988, Journal of the American Statistical Association, 83, 1198–1202) test for MCAR and performs well even for moderate sample sizes.

16.
This paper presents a design for compensation systems for green strategy implementation based on parametric and non‐parametric approaches. The purpose of the analysis is to use formal modeling to explain the issues that arise with the multi‐task problem of implementing an environmental strategy in addition to an already existing profit‐oriented strategy. For the first class of compensation systems (parametric), a multi‐task model is used as a basis. For the second class of compensation systems (non‐parametric), data envelopment analysis is applied. Copyright © 2003 John Wiley & Sons, Ltd. and ERP Environment

17.
In this paper, we propose a new first-order non-negative integer-valued autoregressive [INAR(1)] process with Poisson–geometric marginals based on binomial thinning for modeling integer-valued time series with overdispersion. The new process contains, as particular cases, the Poisson INAR(1) and geometric INAR(1) processes. The main properties of the model are derived, such as the probability generating function, moments, conditional distribution, higher-order moments, and jumps. Estimators for the parameters of the process are proposed, and their asymptotic properties are established. Some numerical results on the estimators are presented and discussed. Applications to two real data sets are given to show the potential of the new process.
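Binomial thinning is easy to make concrete: α∘X replaces each of the X counts with an independent survival indicator of probability α. The sketch below simulates the Poisson INAR(1) special case mentioned in the abstract (the paper's Poisson–geometric innovations would be substituted at the marked line); parameter values are illustrative.

```python
import numpy as np

def simulate_inar1(n, alpha, lam, rng):
    """Simulate an INAR(1) process X_t = alpha ∘ X_{t-1} + eps_t, where
    '∘' is binomial thinning (each of the X_{t-1} counts survives
    independently with probability alpha).  Here eps_t ~ Poisson(lam),
    i.e. the Poisson INAR(1) special case; swap in Poisson-geometric
    innovations at the marked line for the paper's model."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))        # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # alpha ∘ X_{t-1}
        x[t] = survivors + rng.poisson(lam)        # <- innovation distribution
    return x

rng = np.random.default_rng(0)
x = simulate_inar1(20000, alpha=0.5, lam=2.0, rng=rng)
# stationary mean of the Poisson INAR(1): lam / (1 - alpha) = 4
```

Because both thinning and innovations are integer-valued, the sample path stays on the non-negative integers, unlike a Gaussian AR(1) applied to count data.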

18.
Collusion and heterogeneity across firms may introduce asymmetry in bidding games. A major difficulty in asymmetric auctions is that the Bayesian Nash equilibrium strategies are solutions of an intractable system of differential equations. We propose a simple method for estimating asymmetric first‐price auctions with affiliated private values. Considering two types of bidders, we show that these differential equations can be rewritten using the observed bid distribution. We establish the identification of the model, characterize its theoretical restrictions, and propose a two‐step non‐parametric estimation procedure for estimating the private value distributions. An empirical analysis of joint bidding in OCS auctions is provided. Copyright © 2003 John Wiley & Sons, Ltd.

19.
In this paper, we investigate certain operational and inferential aspects of the invariant post-randomization method (PRAM) as a tool for disclosure limitation of categorical data. Invariant PRAM preserves unbiasedness of certain estimators, but inflates their variances and distorts other attributes. We introduce the concept of strongly invariant PRAM, which does not affect data utility or the properties of any statistical method; however, the procedure seems feasible only in limited situations. We review methods for constructing invariant PRAM matrices and prove that a conditional approach, which can preserve the original data on any subset of variables, yields invariant PRAM. For multinomial sampling, we derive expressions for the variance inflation inflicted by invariant PRAM and for the variances of certain estimators of the cell probabilities, together with their tight upper bounds. We discuss estimation of these quantities and thereby the assessment of the statistical efficiency loss from applying invariant PRAM. We find a connection between invariant PRAM and the creation of partially synthetic data using a non-parametric approach, and compare estimation variance under the two approaches. Finally, we discuss some aspects of invariant PRAM in a general survey context.
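A minimal illustration of the invariance property: the sketch below applies PRAM with the textbook invariant transition matrix P = (1-d)·I + d·1π', for which π'P = π', so expected category frequencies are preserved while individual records are randomly perturbed. This simple construction is illustrative only and is not the conditional approach studied in the paper.

```python
import numpy as np

def invariant_pram(codes, d, rng):
    """Apply PRAM with the invariant matrix P = (1-d) I + d 1 pi',
    where pi holds the sample category proportions.  Then pi' P = pi',
    so expected category frequencies are preserved (unbiasedness of
    frequency estimators), while each record keeps its true category
    only with probability (1-d) + d * pi_k -- the disclosure protection."""
    k = codes.max() + 1
    pi = np.bincount(codes, minlength=k) / codes.size
    P = (1 - d) * np.eye(k) + d * np.outer(np.ones(k), pi)
    out = np.array([rng.choice(k, p=P[c]) for c in codes])
    return out, P

rng = np.random.default_rng(0)
codes = rng.choice(3, size=5000, p=[0.5, 0.3, 0.2])   # original categories
pram_codes, P = invariant_pram(codes, d=0.4, rng=rng)
```

The perturbed frequencies are unbiased for the original ones but noisier, which is precisely the variance inflation the abstract quantifies.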

20.
The effective use of spatial information in a regression-based approach to small area estimation is an important practical issue. One approach to accounting for geographic information is to extend the linear mixed model to allow for spatially correlated random area effects. An alternative is to include the spatial information via a non-parametric mixed model. Another option is geographically weighted regression, where the model coefficients vary spatially across the geography of interest. Although these approaches are useful for estimating small area means efficiently under strict parametric assumptions, they can be sensitive to outliers. In this paper, we propose robust extensions of the geographically weighted empirical best linear unbiased predictor. In particular, we introduce robust projective and predictive estimators under spatial non-stationarity. Mean squared error estimation is performed by two analytic approaches that account for the spatial structure in the data. Model-based simulations show that the proposed methodology often leads to more efficient estimators. Furthermore, the analytic mean squared error estimators introduced have appealing properties in terms of stability and bias. Finally, we demonstrate in the application that the new methodology is a good choice for producing estimates of average apartment rents in urban planning areas in Berlin.
