Similar Literature
20 similar documents found (search time: 172 ms)
1.
Although there are many sophisticated models for estimation of failure rate based on censored data in continuous distributions, not much work has been done in the discrete case. We introduce a discrete model for life lengths and consider its properties. For this model, we derive the corresponding maximum likelihood estimators of the parameters under Type I and Type II right-censoring. Received May 2000
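As an illustrative sketch (the abstract does not specify the paper's discrete model, so the geometric distribution is assumed here as the simplest discrete lifetime model), the ML estimator of the geometric parameter under right-censoring has a closed form: an uncensored lifetime t contributes p(1-p)^(t-1) to the likelihood, a censoring time c contributes (1-p)^c, and solving the score equation gives p_hat = (number of failures) / (total observed time).

```python
import numpy as np

def geometric_mle_type1(times, censored):
    """ML estimate of p for geometric lifetimes (support 1, 2, ...)
    under right-censoring. Uncensored t contributes p(1-p)^(t-1);
    a censoring time c contributes (1-p)^c. The score equation
    d/p - (T - d)/(1 - p) = 0, with d failures and total time T,
    yields p_hat = d / T."""
    times = np.asarray(times, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    d = np.count_nonzero(~censored)  # number of observed failures
    total_time = times.sum()         # failures + censoring times
    return d / total_time

# Three observed failures at 1, 2, 3 and one unit censored at 5:
p_hat = geometric_mle_type1([1, 2, 3, 5], [False, False, False, True])
```

Under Type I censoring all censoring times equal the fixed study length; under Type II they equal the largest observed order statistic, but the estimator takes the same form in both cases.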

2.
We present examples based on actual and synthetic datasets to illustrate how simulation methods can mask identification problems in the estimation of discrete choice models such as mixed logit. Simulation methods approximate an integral (without a closed form) by taking draws from the underlying distribution of the random variable of integration. Our examples reveal how a low number of draws can generate estimates that appear identified but are, in fact, either not theoretically identified by the model or not empirically identified by the data. For the particular case of maximum simulated likelihood estimation, we investigate the underlying source of the problem by focusing on the shape of the simulated log-likelihood function under different conditions.
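A minimal sketch of the mechanism this abstract describes, assuming a toy two-alternative mixed logit with a single normally distributed coefficient (the model, names, and numbers below are illustrative, not from the paper): the choice probability is an integral over the coefficient distribution with no closed form, so it is approximated by averaging logit probabilities over Monte Carlo draws, and a low draw count makes that average noisy.

```python
import numpy as np

def simulated_choice_prob(x_diff, mu, sigma, n_draws, seed=0):
    """Approximate P(choose alternative 1) in a two-alternative mixed
    logit with random coefficient b ~ N(mu, sigma^2). The exact
    probability E[1 / (1 + exp(-b * x_diff))] has no closed form,
    so it is estimated by averaging over draws of b."""
    rng = np.random.default_rng(seed)
    b = rng.normal(mu, sigma, size=n_draws)
    return np.mean(1.0 / (1.0 + np.exp(-b * x_diff)))

# With few draws the simulated probability is noisy, which is what
# can make weakly identified parameters look identified.
low = simulated_choice_prob(1.0, 0.5, 1.0, n_draws=10)
high = simulated_choice_prob(1.0, 0.5, 1.0, n_draws=100_000)
```

Refitting with an increasing number of draws, as the abstract suggests, is a simple diagnostic: estimates that drift as draws increase were never well identified.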

3.
We consider Bayesian inference techniques for agent-based (AB) models, as an alternative to simulated minimum distance (SMD). Three computationally heavy steps are involved: (i) simulating the model, (ii) estimating the likelihood and (iii) sampling from the posterior distribution of the parameters. Computational complexity of AB models implies that efficient techniques have to be used with respect to points (ii) and (iii), possibly involving approximations. We first discuss non-parametric (kernel density) estimation of the likelihood, coupled with Markov chain Monte Carlo sampling schemes. We then turn to parametric approximations of the likelihood, which can be derived by observing the distribution of the simulation outcomes around the statistical equilibria, or by assuming a specific form for the distribution of external deviations in the data. Finally, we introduce Approximate Bayesian Computation techniques for likelihood-free estimation. These allow embedding SMD methods in a Bayesian framework, and are particularly suited when robust estimation is needed. These techniques are first tested in a simple price discovery model with one parameter, and then employed to estimate the behavioural macroeconomic model of De Grauwe (2012), with nine unknown parameters.
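A minimal sketch of rejection ABC, the simplest of the likelihood-free techniques mentioned, on a toy one-parameter model (the toy normal-mean model, summary statistic, and tolerance below are illustrative assumptions, not the paper's price discovery model): draw a parameter from the prior, simulate data, and keep the draw only when a summary statistic of the simulated data lands close to the observed one.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, n_sims, tol, seed=0):
    """Likelihood-free rejection ABC: accepted parameter draws form
    an approximate posterior sample, no likelihood evaluation needed."""
    rng = np.random.default_rng(seed)
    obs_stat = np.mean(observed)          # summary statistic
    kept = []
    for _ in range(n_sims):
        theta = prior_draw(rng)
        sim = simulate(theta, rng)
        if abs(np.mean(sim) - obs_stat) < tol:
            kept.append(theta)
    return np.array(kept)

# Toy one-parameter model: data ~ N(theta, 1), true theta = 2.0.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=200)
posterior = abc_rejection(
    data,
    simulate=lambda th, r: r.normal(th, 1.0, size=200),
    prior_draw=lambda r: r.uniform(-5.0, 5.0),
    n_sims=5000,
    tol=0.2,
)
```

The connection to SMD is visible in the acceptance rule: the distance between simulated and observed summary statistics is exactly an SMD criterion, here thresholded rather than minimised.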

4.
The assumption behind discrete hours labour supply modelling is that utility-maximising individuals choose from a relatively small number of hours levels, rather than being able to vary hours worked continuously. Such models are becoming widely used in view of their substantial advantages over a continuous hours approach in estimation, and of their role in tax policy microsimulation. This paper provides an introduction to the basic analytics of discrete hours labour supply modelling. Special attention is given to model specification, maximum likelihood estimation and microsimulation of tax reforms. The analysis is illustrated at each stage with numerical examples. Finally, an empirical example of a hypothetical change to the social security system illustrates the role of discrete hours microsimulation in the analysis of tax and transfer policy changes.

5.
We investigate peer group effects in laboratory experiments based on Milgrom and Roberts' (1982, Econometrica 50: 443–459) entry limit pricing game. We generalize Heckman's (1981, in Structural Analysis of Discrete Data with Econometric Applications. MIT Press: Cambridge, MA) dynamic discrete-choice panel data models by introducing time-lagged social interactions, using the unbiased GHK simulator to implement the computationally cumbersome maximum likelihood estimation. We find that subjects' decisions are significantly influenced by past decisions of peers on several dimensions, including potential entrants' choices and strategic play of like-type monopolists. The proposed model and estimation method may be applicable to other experiments where peer group effects are likely to play an important role.

6.
Birgit Gaschler, Metrika 1996, 43(1): 69–90
In this paper we prove the weak consistency and asymptotic normality of maximum likelihood estimation based on discrete observations of n independent Gaussian Markov processes. The Ornstein–Uhlenbeck process is a special Gaussian Markov process. As an application, we derive asymptotic simultaneous confidence regions for the parameters of the Ornstein–Uhlenbeck process.

7.
Journal of Econometrics 1999, 88(2): 341–363
Optimal estimation of missing values in ARMA models is typically performed by using the Kalman filter for likelihood evaluation, 'skipping' the missing observations in the computations, obtaining the maximum likelihood (ML) estimators of the model parameters, and using some smoothing algorithm. The same type of procedure has been extended to nonstationary ARIMA models in Gómez and Maravall (1994). An alternative procedure suggests filling in the holes in the series with arbitrary values and then performing ML estimation of the ARIMA model with additive outliers (AO). When the model parameters are not known the two methods differ, since the AO likelihood is affected by the arbitrary values. We develop the proper likelihood for the AO approach in the general non-stationary case and show the equivalence of this and the skipping method. Finally, the two methods are compared through simulation, and their relative advantages assessed; the comparison also includes the AO method with the uncorrected likelihood.
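The 'skipping' approach can be sketched in the simplest state space case, a local-level model (this toy model and all names below are illustrative assumptions; the paper treats general nonstationary ARIMA models): at a missing observation the Kalman filter performs only the time update, and the observation contributes nothing to the likelihood.

```python
import numpy as np

def local_level_loglik(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Gaussian log-likelihood of the local-level model
    y_t = a_t + eps_t,  a_t = a_{t-1} + eta_t,
    evaluated with the Kalman filter. Missing observations (np.nan)
    are 'skipped': prediction step only, no measurement update, and
    no likelihood contribution."""
    a, p = a0, p0                         # diffuse-ish prior
    loglik = 0.0
    for yt in y:
        if np.isnan(yt):                  # skip: time update only
            p = p + sigma_eta2
            continue
        f = p + sigma_eps2                # prediction error variance
        v = yt - a                        # one-step prediction error
        loglik += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                         # Kalman gain
        a = a + k * v                     # measurement update
        p = p * (1 - k) + sigma_eta2      # filtered variance + time update
    return loglik

y = np.array([1.0, np.nan, 1.2, 0.9, np.nan, 1.1])
ll = local_level_loglik(y, sigma_eps2=0.5, sigma_eta2=0.1)
```

The equivalence result in the abstract says that maximising this skipping likelihood matches the properly corrected additive-outlier likelihood obtained after filling the holes with arbitrary values.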

8.
I propose a quasi-maximum likelihood framework for estimating nonlinear models with continuous or discrete endogenous explanatory variables. Joint and two-step estimation procedures are considered. The joint procedure is a quasi-limited information maximum likelihood procedure, as one or both of the log likelihoods may be misspecified. The two-step control function approach is computationally simple and leads to straightforward tests of endogeneity. In the case of discrete endogenous explanatory variables, I argue that the control function approach can be applied with generalized residuals to obtain average partial effects. I show how the results apply to nonlinear models for fractional and nonnegative responses.

9.
Conventionally the parameters of a linear state space model are estimated by maximizing a Gaussian likelihood function, even when the input errors are not Gaussian. In this paper we propose estimation by estimating functions fulfilling Godambe's optimality criterion. We discuss the issue of an unknown starting state vector, and we also develop recursive relations for the third- and fourth-order moments of the state predictors required for the calculations. We conclude with a simulation study demonstrating the proposed procedure on the estimation of a stochastic volatility model. The results suggest that the new estimators outperform those based on the Gaussian likelihood.

10.
A method is presented for the estimation of the parameters in the dynamic simultaneous equations model with vector autoregressive moving average disturbances. The estimation procedure is derived from the full information maximum likelihood approach and is based on Newton-Raphson techniques applied to the likelihood equations. The resulting two-step Newton-Raphson procedure involves only generalized instrumental variables estimation in the second step. This procedure also serves as the basis for an iterative scheme to solve the normal equations and obtain the maximum likelihood estimates of the conditional likelihood function. A nine-equation variant of the quarterly forecasting model of the US economy developed by Fair is then used as a realistic example to illustrate the estimation procedure described in the paper.

11.
We consider estimation of the regression coefficients in the multivariate positive stable frailty model with Weibull marginals introduced by Hougaard (1986). For general cluster sizes and various covariate patterns, we study the efficiency, relative to full maximum likelihood, of estimation from the independence working model and from the fixed effects model.

12.
We examine the conditions under which each individual series that is generated by a vector autoregressive model can be represented as an autoregressive model that is augmented with the lags of a few linear combinations of all the variables in the system. We call this multivariate index-augmented autoregression (MIAAR) modelling. We show that the parameters of the MIAAR can be estimated by a switching algorithm that increases the Gaussian likelihood at each iteration. Since maximum likelihood estimation may perform poorly when the number of parameters increases, we propose a regularized version of our algorithm for handling a medium–large number of time series. We illustrate the usefulness of the MIAAR modelling by both empirical applications and simulations.

13.
We consider nonlinear heteroscedastic single-index models where the mean function is a parametric nonlinear model and the variance function depends on a single-index structure. We develop an efficient estimation method for the parameters in the mean function by using weighted least squares estimation, and we propose a "delete-one-component" estimator for the single-index in the variance function based on absolute residuals. Asymptotic results of the estimators are also investigated. Estimation methods for the error distribution based on the classical empirical distribution function and an empirical likelihood method are discussed. The empirical likelihood method allows for incorporation of the assumptions on the error distribution into the estimation. Simulations illustrate the results, and a real chemical data set is analyzed to demonstrate the performance of the proposed estimators.

14.
This paper considers identification and estimation of structural interaction effects in a social interaction model. The model allows unobservables in the group structure, which may be correlated with included regressors. We show that both the endogenous and exogenous interaction effects can be identified if there are sufficient variations in group sizes. We consider the estimation of the model by the conditional maximum likelihood and instrumental variables methods. For the case with large group sizes, the possible identification can be weak in the sense that the estimates converge in distribution at low rates.

15.
We propose an easy-to-implement simulated maximum likelihood estimator for dynamic models where no closed-form representation of the likelihood function is available. Our method can handle any simulable model without latent dynamics. Using simulated observations, we nonparametrically estimate the unknown density by kernel methods, and then construct a likelihood function that can be maximized. We prove that this nonparametric simulated maximum likelihood (NPSML) estimator is consistent and asymptotically efficient. The higher-order impact of simulations and kernel smoothing on the resulting estimator is also analyzed; in particular, it is shown that the NPSML does not suffer from the usual curse of dimensionality associated with kernel estimators. A simulation study shows good performance of the method when employed in the estimation of jump-diffusion models.
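A hedged sketch of the NPSML idea on a toy model (the toy Gaussian model, bandwidth, and grid search below are illustrative assumptions; the paper targets jump-diffusions and proves the formal properties): for each observation, simulate draws from the model at a candidate parameter, estimate the density at that observation with a Gaussian kernel, and sum the log density estimates.

```python
import numpy as np

def npsml_loglik(y, theta, simulate, n_sims, bandwidth, seed=0):
    """Nonparametric simulated log-likelihood: kernel density
    estimate of the model density at each observed y_t, built from
    simulated draws at parameter theta. Re-seeding per call gives
    common random numbers across theta, smoothing the objective."""
    rng = np.random.default_rng(seed)
    loglik = 0.0
    for yt in y:
        draws = simulate(theta, n_sims, rng)
        kern = np.exp(-0.5 * ((yt - draws) / bandwidth) ** 2)
        dens = kern.mean() / (bandwidth * np.sqrt(2 * np.pi))
        loglik += np.log(dens + 1e-300)   # guard against log(0)
    return loglik

# Toy model treated as simulation-only: y ~ N(theta, 1).
sim = lambda th, n, r: r.normal(th, 1.0, size=n)
rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, size=100)
grid = np.linspace(0.0, 2.0, 21)
lls = [npsml_loglik(y, th, sim, n_sims=500, bandwidth=0.3) for th in grid]
theta_hat = grid[int(np.argmax(lls))]
```

In practice a proper optimiser would replace the grid search, and the bandwidth would shrink with the number of simulations as the asymptotic theory requires.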

16.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution for modelling time-to-event data. It accommodates both monotone and nonmonotone hazard shapes and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving the maximum likelihood estimators in terms of asymptotic efficiency. For example, considerable attention has recently been given to applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.

17.
In this paper we develop models of the incidence and extent of external financing crises of developing countries, which lead to multiperiod multinomial discrete choice and discrete/continuous econometric specifications with flexible correlation structures in the unobservables. We show that estimation of these models based on simulation methods has attractive statistical properties and is computationally tractable. Three such simulation estimation methods are exposited, analysed theoretically, and used in practice: a method of smoothly simulated maximum likelihood (SSML) based on a smooth recursive conditioning simulator (SRC), a method of simulated scores (MSS) based on a Gibbs sampling simulator (GSS), and an MSS estimator based on the SRC simulator. The data set used in this study comprises 93 developing countries observed through the 1970–88 period and contains information on external financing responses that was not available to investigators in the past. Moreover, previous studies of external debt problems had to rely on restrictive correlation structures in the unobservables to overcome otherwise intractable computational difficulties. The findings show that being able for the first time to allow for flexible correlation patterns in the unobservables through estimation by simulation has a substantial impact on the parameter estimates obtained from such models. This suggests that past empirical results in this literature require a substantial re-evaluation.

18.
The transformed-data maximum likelihood estimation (MLE) method for structural credit risk models developed by Duan [Duan, J.-C., 1994. Maximum likelihood estimation using price data of the derivative contract. Mathematical Finance 4, 155–167] is extended to account for the fact that observed equity prices may have been contaminated by trading noises. With the presence of trading noises, the likelihood function based on the observed equity prices can only be evaluated via some nonlinear filtering scheme. We devise a particle filtering algorithm that is practical for conducting ML estimation of the structural credit risk model of Merton [Merton, R.C., 1974. On the pricing of corporate debt: The risk structure of interest rates. Journal of Finance 29, 449–470]. We implement the method on the Dow Jones 30 firms and on 100 randomly selected firms, and find that ignoring trading noises can lead to significantly overestimating the firm's asset volatility. The estimated magnitude of trading noise is in line with the direction that a firm's liquidity will predict based on three common liquidity proxies. A simulation study is then conducted to ascertain the performance of the estimation method.

19.
This paper develops a new model for the analysis of stochastic volatility (SV) models. Since volatility is a latent variable in SV models, it is difficult to evaluate the exact likelihood. In this paper, a non-linear filter which yields the exact likelihood of SV models is employed. Solving a series of integrals in this filter by piecewise linear approximations with randomly chosen nodes produces the likelihood, which is maximized to obtain estimates of the SV parameters. A smoothing algorithm for volatility estimation is also constructed. Monte Carlo experiments show that the method performs well with respect to both parameter estimates and volatility estimates. We illustrate our model by analysing daily stock returns on the Tokyo Stock Exchange. Since the method can be applied to more general models, the SV model is extended so that several characteristics of daily stock returns are allowed, and this more general model is also estimated.

20.
VaR and ES Measurement Based on Extreme Value Theory (cited by: 4; self-citations: 0; other citations: 4)
This paper applies extreme value theory to estimate the tails of financial return series, measuring market risk through the Value-at-Risk (VaR) and Expected Shortfall (ES) of the return series. A GARCH model estimated by quasi-maximum likelihood is fitted to the return data, and the generalized Pareto distribution (GPD) from extreme value theory is used to model the tail of the innovation distribution, yielding VaR and ES values for the return series based on the tail estimates. Using daily log returns on the Shanghai Composite Index as the sample, both conditional and unconditional extreme-value VaR and ES measures are obtained. The empirical study shows that at very high confidence levels (e.g., 99%), the extreme value approach measures risk better, while at the 95% level combining other methods with the extreme value approach works well. Measuring risk with ES reveals the likely magnitude of losses when adverse events occur.
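The GPD tail estimator behind such VaR/ES calculations has a standard closed form: with threshold u, fitted GPD scale beta and shape xi, and an empirical exceedance probability n_u/n, VaR_q = u + (beta/xi)·[((n/n_u)(1-q))^(-xi) - 1] and ES_q = VaR_q/(1-xi) + (beta - xi·u)/(1-xi). The function below is an illustrative sketch of that formula, not the paper's code; the fitted values would come from a separate GPD fit to the exceedances.

```python
import numpy as np

def gpd_var_es(losses, u, beta, xi, q):
    """VaR and ES at confidence level q from a GPD fitted to
    exceedances over threshold u. The loss sample is used only for
    the empirical tail probability n_u / n; beta and xi are the
    fitted GPD scale and shape (0 < xi < 1 required for ES)."""
    losses = np.asarray(losses)
    n = losses.size
    n_u = np.count_nonzero(losses > u)
    var_q = u + (beta / xi) * ((n / n_u * (1.0 - q)) ** (-xi) - 1.0)
    es_q = var_q / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)
    return var_q, es_q

# Toy sample with exactly 5% of 1000 losses above u = 2.0, and
# assumed (not fitted) GPD parameters beta = 0.5, xi = 0.2:
losses = np.concatenate([np.full(950, 1.0), np.full(50, 3.0)])
var99, es99 = gpd_var_es(losses, u=2.0, beta=0.5, xi=0.2, q=0.99)
```

For conditional measures, as in the abstract, the same formula is applied to standardized GARCH innovations and the result is scaled by the conditional volatility forecast.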
