Similar Articles
20 similar articles retrieved.
1.
Multivariate continuous time models are now widely used in economics and finance. Empirical applications typically rely on some process of discretization so that the system may be estimated with discrete data. This paper introduces a framework for discretizing linear multivariate continuous time systems that includes the commonly used Euler and trapezoidal approximations as special cases and leads to a general class of estimators for the mean reversion matrix. Asymptotic distributions and bias formulae are obtained for estimates of the mean reversion parameter. Explicit expressions are given for the discretization bias and its relationship to estimation bias in both multivariate and univariate settings. In the univariate context, we compare the performance of the two approximation methods with that of exact maximum likelihood (ML), in terms of bias and variance, for the Vasicek process. The bias and variance of the Euler method are found to be smaller than those of the trapezoidal method, which are in turn smaller than those of exact ML. Simulations suggest that when mean reversion is slow, the approximation methods work better than ML and the bias formulae are accurate. For the square root process, simulation evidence indicates that the Euler method has smaller bias and variance than exact ML, Nowman's method and the Milstein method.
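
As a quick illustration of the univariate comparison, here is a minimal sketch (not the paper's implementation) contrasting the Euler and exact-ML estimates of the mean-reversion parameter of a Vasicek process; all parameter values are illustrative assumptions.

```python
# Minimal sketch: Euler vs. exact-ML estimates of the mean-reversion
# parameter kappa in a Vasicek process dX = kappa*(mu - X) dt + sigma dW.
# Illustrative values only; not the paper's estimator or simulation design.
import numpy as np

rng = np.random.default_rng(0)
kappa, mu, sigma, dt, n = 0.5, 0.0, 0.2, 1 / 12, 600   # assumed toy values

# Simulate from the exact Gaussian transition of the Vasicek process.
a = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1 - a**2) / (2 * kappa))
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + (x[t - 1] - mu) * a + sd * rng.standard_normal()

# Both estimators reduce to the OLS slope b of X_{t+1} on X_t (demeaned).
y, z = x[1:] - mu, x[:-1] - mu
b = (z @ y) / (z @ z)
print("Euler:   ", (1 - b) / dt)     # Euler discretization implies b ~ 1 - kappa*dt
print("exact ML:", -np.log(b) / dt)  # exact transition implies b = exp(-kappa*dt)
```

Because -log(b) > 1 - b for b in (0, 1), the exact-ML estimate always exceeds the Euler estimate, so the downward Euler discretization bias partially offsets the usual upward estimation bias in the mean-reversion parameter.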

2.
Wei Yu, Cuizhen Niu, Wangli Xu, Metrika (2014) 77(5): 675-693
In this paper, we use the empirical likelihood method to make inference about the coefficient difference in a two-sample linear regression model with missing response data. The commonly used empirical likelihood ratio is not concave for this problem, so we append a natural, easily interpreted condition to the likelihood function and propose three types of restricted empirical likelihood ratios for constructing the confidence region of the parameter in question. All three empirical likelihood ratios are shown to have asymptotic chi-squared distributions. Simulation studies demonstrate the effectiveness of the proposed approaches in terms of coverage probability and interval length, and a real data set is analysed as an example.
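
For readers unfamiliar with the machinery, the sketch below computes a one-sample empirical likelihood ratio for a mean, a far simpler problem than the paper's two-sample regression with missing responses; the data and the bisection tolerance are assumptions.

```python
# Sketch of a one-sample empirical likelihood (EL) ratio for a mean mu0.
# -2 log R(mu0) is asymptotically chi-squared(1), the same kind of limit
# the paper establishes for its restricted EL ratios.
import numpy as np

def neg2_log_el(x, mu0, tol=1e-10):
    z = x - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu0 outside the convex hull: EL ratio is zero
    # g(lam) = sum z_i / (1 + lam z_i) is strictly decreasing on the interval
    # where all weights 1 + lam z_i stay positive; bisect for its root.
    lo = (-1 + 1e-8) / z.max()
    hi = (-1 + 1e-8) / z.min()
    g = lambda lam: np.sum(z / (1 + lam * z))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=100)
print(neg2_log_el(x, mu0=0.0), "vs chi2(1) 95% critical value 3.84")
```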

3.
Properties of the most familiar optimality criteria, for example A-, D- and E-optimality, are well known, but the distance optimality criterion has not drawn much attention to date. In this paper properties of the distance optimality criterion for the parameter vector of the classical linear model under normally distributed errors are investigated. DS-optimal designs are derived for first-order polynomial fit models. The matter of how the distance optimality criterion is related to traditional D- and E-optimality criteria is also addressed.

4.
We analyze optimality properties of maximum likelihood (ML) and other estimators when the problem does not necessarily fall within the locally asymptotically normal (LAN) class, thereby covering cases excluded from conventional LAN theory, such as unit root nonstationary time series. The classical Hájek–Le Cam optimality theory is adapted to cover this situation. We show that the expectations of certain monotone "bowl-shaped" functions of the squared estimation error are minimized by the ML estimator in locally asymptotically quadratic situations, which often occur in nonstationary time series analysis when the LAN property fails. Moreover, we demonstrate a direct connection between asymptotic normality of the posterior (a Bayesian property) and the classical optimality properties of ML estimators.

5.
Logit-based parameter estimation in the Rasch model
The similarities between the logistic regression model and the Rasch model (used in psychometric item response theory) are exploited to derive several logit-based methods that produce parameter estimates for the Rasch model. A result of Le Cam and Dzhaparidze is used by which an initial consistent estimate is transformed, through a single scoring-method iteration, into an estimate with the same asymptotic efficiency as the conditional maximum likelihood (CML) estimate of the item parameters. Indirect evidence about the bias of this CML estimator is obtained by studying the more easily derived bias of the estimator based on unweighted logits. Finally, some simple weighted least squares logit-based estimates are presented and their performance is assessed. On the whole, the computationally simpler logit-based estimates give a fairly good approximation to the CML estimates.

6.
In this paper, we discuss the statistical inference of the lifetime distribution of components based on observing the system lifetimes when the system structure is known. A general proportional hazard rate model for the lifetime of the components is considered, which includes some commonly used lifetime distributions. Different estimation methods—method of moments, maximum likelihood method and least squares method—for the proportionality parameter are discussed. The conditions for existence and uniqueness of method of moments and maximum likelihood estimators are presented. Then, we focus on a special case when the lifetime distributions of the components are exponential. Computational formulas for point and interval estimations of the unknown mean lifetime of the components are provided. A Monte Carlo simulation study is used to compare the performance of these estimation methods and recommendations are made based on these results. Finally, an example is provided to illustrate the methods proposed in this paper.
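
In the exponential special case the formulas become explicit. A hedged sketch for a series system (one possible known structure; the component count and data below are made up):

```python
# Sketch: point and interval estimation of the mean component lifetime from
# observed series-system lifetimes, exponential case. For a series system of
# n iid Exp(rate lam) components, T_sys ~ Exp(n*lam), so the ML estimate of
# the component mean 1/lam is n * mean(T_sys), with an exact chi-squared
# interval. All numbers are illustrative assumptions.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
n, k, mean_true = 5, 30, 100.0                    # components, systems, true mean
t_sys = rng.exponential(mean_true / n, size=k)    # observed system lifetimes

mu_hat = n * t_sys.mean()                         # ML (= method of moments here)
total = t_sys.sum()
lo = 2 * n * total / chi2.ppf(0.975, 2 * k)       # exact 95% CI from
hi = 2 * n * total / chi2.ppf(0.025, 2 * k)       # 2*n*lam*sum(T) ~ chi2(2k)
print(mu_hat, (lo, hi))
```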

7.
The approximate theory of optimal linear regression design leads to specific convex extremum problems for numerical solution. A conceptual algorithm is stated, whose concrete versions lead from steepest-descent-type algorithms to improved gradient methods, and finally to second-order methods with excellent convergence behaviour. Applications are given to symmetric multiple polynomial models of degree three or less, where invariance structures are utilized. A final section is devoted to the construction of efficient exact designs of size N from the optimal approximate designs. For the multifactor cubic model and some of the most popular optimality criteria (D-, A-, and I-criteria), fairly efficient exact designs are obtained, even for small sample size N.
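
The simplest concrete version of such an algorithm is the multiplicative weight update, shown below for a D-optimal approximate design for a cubic fit on [-1, 1]; this is a sketch of the general idea, not the paper's improved gradient or second-order methods.

```python
# Multiplicative algorithm sketch for a D-optimal approximate design:
# cubic polynomial fit on [-1, 1] over a finite candidate grid. This is the
# most basic member of the algorithm class, shown for orientation only.
import numpy as np

x = np.linspace(-1, 1, 201)                    # candidate design points
F = np.vander(x, 4, increasing=True)           # rows f(x) = (1, x, x^2, x^3)
w = np.full(len(x), 1 / len(x))                # start from the uniform design

def variance_function(w):
    M = F.T @ (w[:, None] * F)                 # information matrix M(w)
    return np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)

for _ in range(500):
    w *= variance_function(w) / F.shape[1]     # update: w_i <- w_i * d_i / p

# General equivalence theorem check: max_i d_i should approach p = 4,
# with mass concentrating near -1, -1/sqrt(5), 1/sqrt(5), 1.
d = variance_function(w)
print(round(d.max(), 3), x[w > 1e-2])
```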

8.
We study one aspect of applying Edgeworth expansions to linear rank statistics. Since the use of such expansions is often recommended even for moderate sample sizes, we investigate, for this case, the gain in accuracy in the level of significance of some linear rank tests when their critical values are derived from an Edgeworth expansion instead of from a normal approximation. We verify Does' (1983) conditions for the validity of the expansions for four rank statistics of general interest and show by a numerical study that an Edgeworth expansion does not outperform the normal approximation in all situations. A considerable improvement shows up, however, for the Klotz test at the 5% level.
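
For context, the one-term Edgeworth correction to the normal approximation has the following generic form (a skewness adjustment; this is not the specific expansion for rank statistics whose validity the paper verifies):

```python
# One-term Edgeworth expansion for the CDF of a standardized statistic with
# skewness kappa3: F(x) ~ Phi(x) - phi(x) * (kappa3 / 6) * (x^2 - 1).
# Generic illustration only; kappa3 = 0.3 is an assumed value.
from math import erf, exp, pi, sqrt

def phi(x):   # standard normal density
    return exp(-x * x / 2) / sqrt(2 * pi)

def Phi(x):   # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

def edgeworth_cdf(x, kappa3):
    return Phi(x) - phi(x) * kappa3 / 6 * (x * x - 1)

# At the nominal one-sided 5% point, skewness shifts the attained level:
print(edgeworth_cdf(1.645, 0.3), "vs normal:", Phi(1.645))
```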

9.
In recent years, we have seen increased interest in the penalized likelihood methodology, which can be used efficiently for shrinkage and selection purposes and can yield unbiased, sparse, and continuous estimators. However, the performance of the penalized likelihood approach depends on the proper choice of the regularization parameter, so it is important to select it appropriately; the generalized cross-validation method is commonly used to this end. In this article, we first propose new estimates of the norm of the error in the generalized linear models framework, through the use of Kantorovich inequalities. These estimates are then used to derive a tuning parameter selector for penalized generalized linear models. Unlike the standard methods, the proposed method does not depend on resampling and therefore yields a considerable gain in computational time while producing improved results. A thorough simulation study is conducted to support the theoretical findings, and the penalized methods are compared under the L1, hard thresholding, and smoothly clipped absolute deviation penalty functions for penalized logistic regression and penalized Poisson regression. A real data example is analyzed, and a discussion follows.
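
As a reference point, the standard resampling-free GCV selector for a ridge-penalized linear model looks as follows; the article's selector for penalized GLMs refines this idea via Kantorovich inequalities, which is not reproduced here. The data are simulated assumptions.

```python
# Sketch of the standard generalized cross-validation (GCV) selector for a
# ridge-penalized linear model: GCV(lam) = n * RSS / (n - tr(H))^2, where
# H is the hat matrix of the penalized fit. Illustrative data only.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta = np.concatenate([np.ones(3), np.zeros(p - 3)])   # sparse truth (assumed)
y = X @ beta + rng.standard_normal(n)

def gcv(lam):
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

grid = np.logspace(-3, 3, 61)
lam_hat = grid[np.argmin([gcv(l) for l in grid])]
print("GCV-selected ridge penalty:", lam_hat)
```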

10.
This paper is concerned with the statistical analysis of proportions involving extra-binomial variation. Extra-binomial variation is inherent to experimental situations where experimental units are subject to some source of variation, e.g. biological or environmental variation. A generalized linear model for proportions does not account for random variation between experimental units. In this paper an extended version of the generalized linear model is discussed with special reference to experiments in agricultural research. In this model it is assumed that both treatment effects and random contributions of plots are part of the linear predictor. The methods are applied to results from two agricultural experiments.
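
A minimal illustration of extra-binomial variation, with a crude overdispersion estimate (Pearson statistic divided by degrees of freedom); the plot-level random effects and sample sizes are made-up values, not the paper's experiments.

```python
# Sketch: proportions with extra-binomial variation. Plot-level random
# effects inflate the variance beyond binomial; Pearson X^2 / df gives a
# simple overdispersion estimate. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_plots, m = 40, 25                       # plots, seeds sown per plot
p_plot = rng.beta(8, 12, n_plots)         # plot-level germination rates (mean 0.4)
y = rng.binomial(m, p_plot)               # germinated seeds per plot

p_hat = y.sum() / (n_plots * m)           # pooled proportion under a binomial model
pearson = np.sum((y - m * p_hat) ** 2 / (m * p_hat * (1 - p_hat)))
print("overdispersion factor:", pearson / (n_plots - 1))  # > 1: extra-binomial
```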

11.
L. Nie, Metrika (2006) 63(2): 123-143
Generalized linear and nonlinear mixed-effects models are used extensively in the biomedical, social, and agricultural sciences. The statistical analysis of these models is based on the asymptotic properties of the maximum likelihood estimator, yet the estimator's consistency is usually assumed rather than proved; a rigorous proof of consistency by verifying the conditions of existing results can be very difficult because of the integrated likelihood. In this paper, we present some easily verifiable conditions for the strong consistency of the maximum likelihood estimator in generalized linear and nonlinear mixed-effects models. Based on this result, we prove that the maximum likelihood estimator is consistent for some frequently used models, such as mixed-effects logistic regression models and growth curve models.

12.
The problem of scoring ordered classifications prior to further statistical analysis is discussed, and a review of some methods of scoring is provided. This includes linear transformations of integer scores, with previous applications to two-way classifications introduced. Also reviewed are scores based on canonical correlations, maximum likelihood scores under assumed logistic distributions for the variables, ridits, and conditional mean scoring functions. The latter are shown to satisfy a reasonable set of postulates, and it is demonstrated that some earlier attempts to do this were incomplete. Examples of the conditional mean scoring function under different distributional assumptions are given. Methods based on compounded functions of proportions for categorical data are applied to many of the scores reviewed and introduced, and appropriate algorithms for these methods are presented and exemplified. Using a range of existing data sets, the sensitivity of the results to differing scoring systems applied to two-way classifications is examined. It is seen that, apart from data arising from highly skewed distributions, little is lost by using simple integer scores.
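
Of the scoring systems reviewed, ridits are perhaps the quickest to compute: the ridit of a category is the proportion of the reference distribution below it plus half the proportion within it. A sketch with made-up counts:

```python
# Ridit scores for an ordered classification: ridit(j) = P(below j) + P(j)/2.
# The category counts are made up, e.g. severity graded "none" .. "severe".
import numpy as np

counts = np.array([10, 25, 40, 20, 5])
props = counts / counts.sum()
cum_below = np.concatenate([[0.0], np.cumsum(props)[:-1]])
ridits = cum_below + props / 2
print(ridits)   # [0.05, 0.225, 0.55, 0.85, 0.975]
```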

13.
This paper proposes a new approach to handling nonparametric stochastic frontier (SF) models, based on local maximum likelihood techniques. The model is presented as encompassing an anchorage parametric model in a nonparametric way. First, we derive asymptotic properties of the estimator for the general case (local linear approximations). The results are then tailored to an SF model in which the convoluted error term (inefficiency plus noise) is the sum of a half-normal and a normal random variable. The parametric anchorage model is a linear production function with a homoscedastic error term, and the local approximation is linear for both the production function and the parameters of the error terms. The performance of our estimator is established in finite samples using simulated data sets as well as cross-sectional data on US commercial banks. The methods appear to be robust, numerically stable and particularly useful for investigating a production process and the derived efficiency scores.

14.
Different change point models for AR(1) processes are reviewed. For some models, the change is in the distribution conditional on earlier observations; for others, it is in the unconditional distribution. Some models include an observation before the first possible change time, while others do not. Earlier and new CUSUM-type methods are given, and minimax optimality is examined. For the conditional model with an observation before the possible change, sharp optimality results exist in the literature. The unconditional model with a possible change at (or before) the first observation is of interest for applications. We examine this case and derive new variants of four earlier suggestions. Numerical methods and Monte Carlo simulations demonstrate that the new variants dominate the original ones; however, none of the methods is uniformly minimax optimal.
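
The generic one-sided CUSUM recursion underlying the methods compared is sketched below for an upward mean shift; this ignores the AR(1) dependence and is not one of the paper's new variants. The reference value and threshold are assumed.

```python
# Basic one-sided CUSUM for an upward mean shift: S_t = max(0, S_{t-1} + y_t - k),
# alarm when S_t > h. Generic scheme only; reference value k and threshold h
# are assumed illustrative choices, and serial dependence is ignored.
import numpy as np

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])  # shift at t = 50

k, h = 0.5, 4.0          # half the target shift, and the alarm threshold
s, alarm = 0.0, None
for t, yt in enumerate(y):
    s = max(0.0, s + yt - k)
    if s > h:
        alarm = t
        break
print("alarm at observation:", alarm)
```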

15.
Alexander Zaigraev, Metrika (2002) 56(3): 259-273
Within the framework of the classical linear regression model, optimal design criteria of a stochastic nature are considered, with particular attention paid to the shape criterion. Its limiting behaviour is also established, generalizing that of the distance stochastic optimality criterion. Examples of the limit maximin criterion are considered, and optimal designs for the line fit model are found.

16.
Two commonly used clustering methods based on maximum likelihood are considered in the context of the classification problem where observations of unknown origin belong to one of two possible populations. The basic assumptions and associated properties of the two methods are contrasted and illustrated by their application to some medical data. Also, the problem of updating an allocation procedure is considered.

17.
In this paper, a method is introduced for approximating the likelihood of the unknown parameters of a state space model. The approximation converges to the true likelihood as the simulation size goes to infinity, and the approximating likelihood is continuous as a function of the unknown parameters under rather general conditions. The approach advocated is fast and robust, and it avoids many of the pitfalls associated with current techniques based upon importance sampling. We assess the performance of the method on a linear state space model, comparing the results with those of the Kalman filter, which delivers the true likelihood. We also apply the method to a non-Gaussian state space model, the stochastic volatility model, finding the approach efficient and effective. Applications to continuous time finance models and latent panel data models are considered, and two different multivariate approaches are proposed. The neoclassical growth model serves as a final application.
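
The linear-Gaussian benchmark mentioned here is the Kalman filter, whose prediction-error decomposition delivers the exact log-likelihood. A minimal sketch for the local level model (toy variances and a diffuse-style initialization are assumptions):

```python
# Exact log-likelihood of a local level model via the Kalman filter, the
# benchmark against which a simulated likelihood can be checked in the
# linear case. Model: y_t = a_t + e_t, a_{t+1} = a_t + n_t.
import numpy as np

def kalman_loglik(y, sig2_eps, sig2_eta, a0=0.0, p0=1e7):
    a, P, ll = a0, p0, 0.0           # diffuse-style initialization (assumed)
    for yt in y:
        F = P + sig2_eps             # prediction-error variance
        v = yt - a                   # one-step prediction error
        ll += -0.5 * (np.log(2 * np.pi * F) + v * v / F)
        K = P / F                    # Kalman gain
        a = a + K * v                # filtered (= next predicted) state
        P = P * (1 - K) + sig2_eta   # predicted variance for t + 1
    return ll

rng = np.random.default_rng(4)
alpha = np.cumsum(rng.normal(0, 0.5, 200))    # random-walk state
y = alpha + rng.normal(0, 1.0, 200)           # noisy observations
print(kalman_loglik(y, sig2_eps=1.0, sig2_eta=0.25))
```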

18.
This paper studies the Hodges and Lehmann (1956) optimality of tests in a general setup. The tests are compared by the exponential rates at which their power functions approach one at a fixed alternative, while the asymptotic sizes are kept bounded by some constant. We present two sets of sufficient conditions for a test to be Hodges–Lehmann optimal. These new conditions extend the scope of the Hodges–Lehmann optimality analysis to setups that cannot be covered by other conditions in the literature. The general result is illustrated by our applications of interest: testing for moment conditions and overidentifying restrictions. In particular, we show that (i) the empirical likelihood test does not necessarily satisfy existing conditions for optimality but does satisfy our new conditions; and (ii) the generalized method of moments (GMM) test and the generalized empirical likelihood (GEL) tests are Hodges–Lehmann optimal under mild primitive conditions. These results support the view that Hodges–Lehmann optimality is a weak asymptotic requirement.

19.
Information criteria (IC) are often used to decide between forecasting models. Commonly used criteria include Akaike's IC and Schwarz's Bayesian IC. They involve the sum of two terms: the model's log likelihood and a penalty for the number of model parameters, with the likelihood calculated giving equal weight to all observations. We propose that greater weight should be placed on more recent observations, so that the criterion reflects accuracy on recent data. This seems particularly pertinent when selecting among exponential smoothing methods, as they are themselves based on an exponential weighting principle. In this paper, we use exponential weighting within the calculation of the log likelihood for the IC. Our empirical analysis uses supermarket sales and call centre arrivals data. The results show that basing model selection on the new exponentially weighted IC can outperform both individual models and selection based on the standard IC.
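
One way to implement the idea, sketched under stated assumptions (a simple-exponential-smoothing model, a Gaussian likelihood, an illustrative decay constant, and weights normalized to sum to the sample size; the paper's exact weighting may differ):

```python
# Sketch of an exponentially weighted information criterion for simple
# exponential smoothing (SES): recent one-step errors get more weight in
# the Gaussian log likelihood before the usual AIC penalty is added.
import numpy as np

def weighted_aic_ses(y, alpha, beta=0.95):    # beta: assumed decay constant
    level, errors = y[0], []
    for yt in y[1:]:
        errors.append(yt - level)             # one-step-ahead error
        level += alpha * (yt - level)         # SES level update
    e = np.array(errors)
    w = beta ** np.arange(len(e) - 1, -1, -1) # newest error gets weight 1
    w *= len(e) / w.sum()                     # normalize weights to sum to n
    sig2 = np.sum(w * e**2) / len(e)          # weighted error variance
    ll = -0.5 * np.sum(w * (np.log(2 * np.pi * sig2) + e**2 / sig2))
    return -2 * ll + 2 * 2                    # penalty: alpha and sig2

y = np.cumsum(np.random.default_rng(5).normal(0, 1, 120)) + 50.0
print(min((weighted_aic_ses(y, a), a) for a in (0.1, 0.3, 0.5, 0.9)))
```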

20.
This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models, extending the formulation of the optimal design problem as an SDP from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare our results with those in the literature. Additionally, we investigate how the optimal design is affected by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
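
To show the quadrature step concretely, the sketch below evaluates a Bayesian D-criterion for a simple logistic model with a Gauss–Hermite rule over a normal prior on the slope; the fixed intercept, prior moments and candidate designs are illustrative assumptions, not the paper's power-logistic example, and the SDP optimization itself is omitted.

```python
# Sketch: Bayesian D-criterion E_b[ log det M(xi, b) ] for a logistic model
# p(x) = 1 / (1 + exp(-(a + b x))), with a 10-point Gauss-Hermite rule for a
# normal prior b ~ N(mb, sb^2) and the intercept a held fixed (assumptions).
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(10)

def info_matrix(xs, ws, a, b):
    F = np.column_stack([np.ones(len(xs)), np.asarray(xs, float)])
    p = 1 / (1 + np.exp(-(a + b * F[:, 1])))
    return F.T @ ((ws * p * (1 - p))[:, None] * F)   # M = sum_i w_i h_i f_i f_i'

def bayes_D(xs, ws, a=0.0, mb=1.0, sb=0.5):
    # E[g(b)] ~ (1/sqrt(pi)) * sum_j weights_j * g(mb + sqrt(2) * sb * nodes_j)
    vals = [np.linalg.slogdet(info_matrix(xs, ws, a, mb + np.sqrt(2) * sb * t))[1]
            for t in nodes]
    return np.dot(weights, vals) / np.sqrt(np.pi)

# Compare two candidate equal-weight two-point designs.
half = np.array([0.5, 0.5])
print(bayes_D([-1.5, 1.5], half), bayes_D([-3.0, 3.0], half))
```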
