Similar Documents
1.
A frequently occurring problem is to find the maximum likelihood estimate (MLE) of p subject to p ∈ C, where C is a subset of the probability vectors in R^k. The problem has been discussed by many authors, who mainly focused on the case where p is restricted by linear or log-linear constraints. In this paper, we establish the relationship between the maximum likelihood estimation of p restricted by p ∈ C and the EM algorithm, and demonstrate that the maximum likelihood estimator can be computed through the EM algorithm (Dempster et al. in J R Stat Soc Ser B 39:1–38, 1977). Several examples are analyzed by the proposed method.
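As a minimal sketch of the EM idea for multinomial MLE (not taken from the paper), the classic genetic-linkage data of Rao can be used: a constrained multinomial whose first cell is split into two latent parts, making each M-step closed-form.

```python
# EM for the classic genetic-linkage multinomial (Rao's data):
# observed counts n = (125, 18, 20, 34) with cell probabilities
# (1/2 + t/4, (1-t)/4, (1-t)/4, t/4). Splitting the first cell into
# latent parts 1/2 and t/4 makes the complete-data MLE closed-form.
n1, n2, n3, n4 = 125, 18, 20, 34
t = 0.5  # initial guess
for _ in range(200):
    # E-step: expected count falling in the latent t/4 part of cell 1
    x = n1 * (t / 4) / (0.5 + t / 4)
    # M-step: complete-data MLE of t
    t = (x + n4) / (x + n2 + n3 + n4)
print(round(t, 4))  # converges to the known MLE, about 0.6268
```

Each iteration increases the observed-data likelihood, which is what guarantees convergence of the scheme described in the abstract.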

2.
The assumption behind discrete hours labour supply modelling is that utility-maximising individuals choose from a relatively small number of hours levels, rather than being able to vary hours worked continuously. Such models are becoming widely used in view of their substantial advantages in estimation, compared with a continuous hours approach, and their role in tax policy microsimulation. This paper provides an introduction to the basic analytics of discrete hours labour supply modelling. Special attention is given to model specification, maximum likelihood estimation and microsimulation of tax reforms. The analysis is illustrated at each stage with numerical examples. Finally, an empirical example of a hypothetical change to the social security system illustrates the role of discrete hours microsimulation in the analysis of tax and transfer policy changes.
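The core mechanics can be sketched as follows (an illustrative toy, not the paper's model: utility function, wage, grid and parameters are all assumed): utility is evaluated only at a few hours points and choice probabilities come from a multinomial logit over that grid.

```python
import math

# Hypothetical discrete hours choice: utility from consumption and leisure,
# evaluated only at a small grid of hours levels (the discrete-choice idea).
HOURS = [0, 20, 40]          # discrete hours grid
wage, unearned = 15.0, 50.0  # illustrative weekly wage and unearned income

def utility(hours, a_c=1.0, a_l=0.8):
    consumption = unearned + wage * hours
    leisure = 80 - hours     # weekly time endowment of 80 hours (assumed)
    return a_c * math.log(consumption) + a_l * math.log(leisure)

# Multinomial-logit choice probabilities over the hours grid
u = [utility(h) for h in HOURS]
m = max(u)
w = [math.exp(v - m) for v in u]
probs = [x / sum(w) for x in w]
print([round(p, 3) for p in probs])  # the probabilities sum to one
```

Maximum likelihood estimation then consists of choosing the utility parameters so the observed hours choices get high probability under this logit.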

3.
The modulated power law process is used to analyze duration dependence in US business cycles. The model makes less restrictive assumptions than traditional models do and measures both the local and global performance of business cycles. The results indicate evidence of positive duration dependence in US business cycles. Structural change after WWII in both the expansion and contraction phases of business cycles is also documented. Hypothesis tests confirm that the model fits US business cycles.
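To see what positive duration dependence means in a power law process, a one-line intensity function suffices (parameter values are illustrative, not estimates from the paper): with shape parameter beta > 1 the hazard of a phase ending rises with its elapsed duration.

```python
# Power law process intensity: lambda(t) = (beta/theta) * (t/theta)**(beta-1).
# Positive duration dependence corresponds to beta > 1: the longer a phase
# has lasted, the more likely it is to terminate.
def plp_intensity(t, beta=1.5, theta=10.0):
    return (beta / theta) * (t / theta) ** (beta - 1)

print(plp_intensity(6.0) < plp_intensity(12.0))  # True when beta > 1
```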

4.
In this paper, we propose a state-varying endogenous regime switching model (the SERS model), which includes the endogenous regime switching model of Chang et al., the CCP model, as a special case. To estimate the unknown parameters in the SERS model, we propose a maximum likelihood estimation method. Monte Carlo simulation results show that in the absence of state-varying endogeneity the SERS and CCP models perform similarly, while in its presence the SERS model performs much better. Finally, we use the SERS model to analyze Chinese stock market returns, and our empirical results show strong state-varying endogeneity in volatility switching for the Shanghai Composite Index returns. Moreover, the SERS model produces a much more realistic assessment of the regime switching process than the CCP model.

5.
Bayesian approaches to the estimation of DSGE models are becoming increasingly popular. Prior knowledge is normally formalized either directly on the values of the deep parameters (a 'microprior') or indirectly on macroeconomic indicators, e.g. moments of observable variables (a 'macroprior'). We introduce a non-parametric macroprior elicited from impulse response functions and assess its performance in shaping posterior estimates. We find that using a macroprior can lead to substantially different posterior estimates. We probe into the details of this result, showing that model misspecification is likely to be responsible for it. In addition, we assess to what extent the use of macropriors is impaired by the need to calibrate some hyperparameters.

6.
Many applied researchers have to deal with spatially autocorrelated residuals (SAR). Available tests that identify spatial spillovers, as captured by a significant SAR parameter, are based either on maximum likelihood (MLE) or generalized method of moments (GMM) estimates. This paper illustrates the properties of various tests for the null hypothesis of a zero SAR parameter in a comprehensive Monte Carlo study. The main finding is that Wald tests generally perform well in terms of both size and power, even in small samples. The GMM-based Wald test is correctly sized even for non-normally distributed disturbances and small samples, and it exhibits power similar to its MLE-based counterpart. Hence, the GMM Wald test can be recommended to the applied researcher, because it is easy to implement.
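The notion of a test being "correctly sized" can be illustrated with a stripped-down Monte Carlo (purely schematic, not the paper's design): under the null a Wald statistic is approximately chi-squared with one degree of freedom, so the 5% critical value should be exceeded about 5% of the time.

```python
import random

# Toy Monte Carlo on test size: under H0 the Wald statistic z**2 is chi^2(1),
# so rejecting when z**2 > 3.8415 should occur roughly 5% of the time.
random.seed(0)
reps, rejections = 20000, 0
for _ in range(reps):
    z = random.gauss(0.0, 1.0)   # stand-in for estimate / std. error under H0
    if z * z > 3.8415:           # 5% critical value of chi^2(1)
        rejections += 1
print(rejections / reps)         # empirical size, close to 0.05
```

In the study itself, z is replaced by the MLE- or GMM-based SAR estimate over its standard error, computed on simulated spatial samples.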

7.
In this short article, I will attempt to provide some highlights of my chancy life as a statistician, in chronological order, spanning over 60 years, from 1954 to the present.

8.
This paper assesses the effects of autocorrelation on parameter estimates of affine term structure models (ATSM) when principal components analysis is used to extract factors. In contrast to recent studies, we design and run a Monte Carlo experiment whose simulation design is consistent with the data rather than with theory, and find that parameter estimation of ATSM is precise in the presence of serial correlation in the measurement error term. Our findings show that parameter estimation of ATSM with principal-component-based factors is robust to autocorrelation misspecification.

9.
On the analysis of multivariate growth curves
Growth curve data arise when repeated measurements are observed on a number of individuals, with an ordered dimension for occasions. Such data appear frequently in almost all fields in which statistical models are used, for instance medicine, agriculture and engineering. In medicine, for example, more than one variable is often measured on each occasion, yet analyses are usually based on repeated measurements of only one variable, so the information contained in the between-variables correlation structure is discarded. In this study we propose a multivariate model based on the random coefficient regression model for the analysis of growth curve data. Closed-form expressions for the model parameters are derived under both the maximum likelihood (ML) and the restricted maximum likelihood (REML) frameworks. It is shown that in certain situations the estimated variances of growth curve parameters are greater under REML. A method is also proposed for testing general linear hypotheses. A numerical example illustrates the methods discussed. Received: 22 February 1999
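The simplest instance of the ML/REML contrast mentioned in the abstract (a textbook special case, not the paper's model) is the i.i.d. normal sample: ML divides the sum of squares by n, REML by n - 1, so the REML variance estimate is never smaller.

```python
# Simplest case of the ML/REML contrast: for an i.i.d. normal sample the ML
# variance estimate divides by n, REML by n - 1, so REML is never smaller.
data = [4.1, 3.7, 5.2, 4.8, 4.4, 3.9]   # illustrative measurements
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)
var_ml, var_reml = ss / n, ss / (n - 1)
print(var_ml < var_reml)  # True
```

REML's larger value reflects the degrees of freedom lost in estimating the mean, which generalizes to the fixed-effect part of the growth curve model.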

10.
This paper proposes an econometric framework for joint estimation of a technology and the technology choice/adoption decision. The procedure takes into account the endogeneity of technology choice, which is likely to depend on inefficiency; similarly, output from each technology depends on inefficiency. The effect of this dual role of inefficiency is estimated using a single-step maximum likelihood method. The proposed model is applied to a sample of conventional and organic dairy farms in Finland. The main findings are: the conventional technology is more productive, ceteris paribus; organic farms are, on average, less technically efficient than conventional farms; and both efficiency and subsidies are driving forces behind the adoption of organic technology.

11.
Maximum likelihood estimation with missing data in microeconometric analysis
Missing data are common in microeconometric analysis. The traditional remedies are to delete observations with missing values on the variables of interest, or to replace missing values with the variable mean; both often yield biased samples. Maximum likelihood estimation can handle and estimate missing data effectively. This paper first introduces the maximum likelihood approach to missing data, then applies it to the missing data in an actual survey dataset, and compares and evaluates the results against those of the traditional methods.
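The bias from deletion can be demonstrated with a small simulation (an illustrative construction, not the survey data of the paper): when y is missing-at-random given x, the complete-case mean of y is biased, while the ML estimate under MAR, which for a bivariate normal equals the mean of regression predictions fitted on complete cases, stays close to the truth.

```python
import math, random

# MAR missingness in a bivariate normal: y is missing whenever x is large, so
# the complete-case mean of y is biased, while the ML estimate under MAR (the
# mean of regression predictions fitted on complete cases) remains consistent.
random.seed(1)
n, rho = 20000, 0.8
xs, ys, obs = [], [], []
for _ in range(n):
    x = random.gauss(0, 1)
    y = rho * x + math.sqrt(1 - rho * rho) * random.gauss(0, 1)  # true mean 0
    xs.append(x); ys.append(y); obs.append(x < 0.0)  # y observed only if x < 0

cc_x = [x for x, o in zip(xs, obs) if o]
cc_y = [y for y, o in zip(ys, obs) if o]
mean_cc = sum(cc_y) / len(cc_y)      # complete-case (deletion) mean: biased
# OLS of y on x on complete cases, then predict for everyone (ML under MAR)
mx, my = sum(cc_x) / len(cc_x), mean_cc
b = sum((u - mx) * (v - my) for u, v in zip(cc_x, cc_y)) / sum((u - mx) ** 2 for u in cc_x)
a = my - b * mx
mean_ml = sum(a + b * x for x in xs) / n   # close to the true mean of 0
print(abs(mean_ml) < abs(mean_cc))         # True
```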

12.
Zhang Lili, Value Engineering, 2011, 30(30): 211.
Through two concrete worked examples, this paper shows that when the possible values of a continuous population are not the whole of (-∞, +∞), solving point estimation problems with the first definition of the likelihood function lets students not only master the method of maximum likelihood easily, but also deepens their understanding of the statistical idea that the sample comes from the population and that estimation cannot be separated from the sample.
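A standard example of this situation (a textbook case consistent with the abstract, not necessarily one of the paper's two examples) is Uniform(0, θ): the likelihood is θ^(-n) for θ ≥ max(x_i) and 0 otherwise, so it is maximised at the sample maximum, a point missed if one differentiates blindly instead of applying the definition of the likelihood.

```python
# For X ~ Uniform(0, theta) the likelihood is theta**(-n) on theta >= max(x_i)
# and 0 below it, so the MLE is the sample maximum; calculus alone (setting a
# derivative to zero) never finds this boundary solution.
def uniform_mle(sample):
    return max(sample)  # likelihood is decreasing in theta on [max(x), inf)

print(uniform_mle([0.9, 2.3, 1.4, 3.1]))  # 3.1
```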

13.
In this paper we develop new Markov chain Monte Carlo schemes for the estimation of Bayesian models. One key feature of our method, which we call the tailored randomized block Metropolis–Hastings (TaRB-MH) method, is the random clustering of the parameters at every iteration into an arbitrary number of blocks; each block is then sequentially updated through an M–H step. Another feature is that the proposal density for each block is tailored to the location and curvature of the target density based on the output of simulated annealing, following Chib and Ergashev (in press). We also provide an extended version of our method for sampling multi-modal distributions, in which at a pre-specified mode-jumping iteration a single-block proposal is generated from one of the modal regions using a mixture proposal density, and this proposal is then accepted according to an M–H probability of move; at the non-mode-jumping iterations, the draws are obtained by applying the TaRB-MH algorithm. We also discuss how the approaches of Chib (1995) and Chib and Jeliazkov (2001) can be adapted to these sampling schemes for estimating the model marginal likelihood. The methods are illustrated in several problems. In the DSGE model of Smets and Wouters (2007), for example, which involves a 36-dimensional posterior distribution, we show that the autocorrelations of the sampled draws from the TaRB-MH algorithm decay to zero within 30–40 lags for most parameters. In contrast, the sampled draws from the random-walk M–H method, the algorithm that has been used to date in the context of DSGE models, exhibit significant autocorrelations even at lags of 2500 and beyond; moreover, the RW-MH method does not explore the same high-density regions of the posterior distribution as the TaRB-MH algorithm. Another example concerns the model of An and Schorfheide (2007), where the posterior distribution is multi-modal. While the RW-MH algorithm is unable to jump between the low and high modal regions, the extended TaRB-MH method explores the posterior distribution globally in an efficient manner.
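The random-blocking component alone can be sketched in a few lines (a deliberately stripped-down illustration: the tailoring via simulated annealing is omitted, the target is an independent standard normal, and all tuning constants are assumptions).

```python
import math, random

# Sketch of the *random blocking* idea in TaRB-MH: at every iteration the
# parameter indices are shuffled and split into a random number of blocks,
# and each block gets its own Metropolis-Hastings update. The tailored
# (simulated-annealing-based) proposals of the actual method are replaced
# here by plain random-walk proposals on a 5-dimensional standard normal.
random.seed(2)
dim, iters, scale = 5, 4000, 1.0
theta = [0.0] * dim

def log_target(v):
    return -0.5 * sum(x * x for x in v)   # independent N(0,1) target

draws = []
for _ in range(iters):
    idx = list(range(dim))
    random.shuffle(idx)
    nblocks = random.randint(1, dim)            # random number of blocks
    blocks = [idx[i::nblocks] for i in range(nblocks)]
    for block in blocks:                        # sequential M-H step per block
        prop = theta[:]
        for j in block:
            prop[j] = theta[j] + random.gauss(0, scale)
        if math.log(random.random()) < log_target(prop) - log_target(theta):
            theta = prop
    draws.append(theta[0])

print(round(sum(draws) / iters, 3))  # sample mean of first coordinate, near 0
```

Re-randomizing the blocks each iteration is what prevents any fixed, unfortunate grouping of correlated parameters from slowing the chain permanently.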

14.
Robust tests and estimators based on non-normal quasi-likelihood functions are developed for autoregressive models with a near unit root. Asymptotic power functions and power envelopes are derived for point-optimal tests of a unit root when the likelihood is correctly specified. The shapes of these power functions are found to be sensitive to the extent of non-normality in the innovations. The power loss from using least-squares unit-root tests in the presence of thick-tailed innovations appears to be greater than in stationary models.

15.
A smooth and detailed distribution is fitted to coarsely grouped frequency data by a nonparametric approach based on penalized maximum likelihood. The estimated distribution conserves the mean and variance of the data. The numerical solution is described and a compact, simplified algorithm is given. The procedure is applied to two empirical datasets.

16.
On the Application of Conditional Independence to Ordinal Data
A special log-linear parameterization is described for contingency tables which exploits prior knowledge that the variables are on an ordinal scale. It is helpful, in particular, in guiding the possible merging of adjacent levels of variables, and it may simplify interpretation when higher-order interactions are present. Several sets of data are discussed to illustrate the types of interpretation that can be achieved. The simple structure of the maximum likelihood estimates is derived by use of Lagrange multipliers.

17.
The past forty years have seen a great deal of research into the construction and properties of nonparametric estimates of smooth functions. This research has focused primarily on two sides of the smoothing problem: nonparametric regression and density estimation. Theoretical results for these two situations are similar, and multivariate density estimation was an early justification for the Nadaraya-Watson kernel regression estimator.
A third, less well-explored, strand of applications of smoothing is the estimation of probabilities in categorical data. In this paper the position of categorical data smoothing as a bridge between nonparametric regression and density estimation is explored. Nonparametric regression provides a paradigm for the construction of effective categorical smoothing estimates, and use of an appropriate likelihood function yields cell probability estimates with many desirable properties. Such estimates can be used to construct regression estimates when one or more of the categorical variables are viewed as response variables. They also lead naturally to the construction of well-behaved density estimates using local or penalized likelihood estimation, which can then be used in a regression context. Several real data sets are used to illustrate these points.
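A minimal example of what categorical smoothing does (an illustrative discrete-kernel construction with made-up counts, not the paper's estimator): each raw cell proportion borrows strength from its ordered neighbours, so empty cells receive positive probability and the estimate still sums to one.

```python
# Kernel smoothing of cell probabilities for an ordered categorical variable:
# each raw proportion is averaged with its immediate neighbours through a
# discrete kernel, then the vector is renormalised to sum to one.
counts = [1, 0, 3, 9, 14, 7, 2, 0, 1]   # sparse grouped frequencies (made up)
n = sum(counts)
raw = [c / n for c in counts]
lam = 0.5                                # neighbour weight (bandwidth-like)

smooth = []
for j in range(len(raw)):
    num, den = raw[j], 1.0
    for k in (j - 1, j + 1):
        if 0 <= k < len(raw):
            num += lam * raw[k]
            den += lam
    smooth.append(num / den)
total = sum(smooth)
smooth = [p / total for p in smooth]     # renormalise to a probability vector
print(all(p > 0 for p in smooth), round(sum(smooth), 6))  # True 1.0
```

Note how the two empty cells end up with small positive probabilities, which is exactly the "well-behaved" property the abstract refers to.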

18.
Estimation of spatial autoregressive panel data models with fixed effects
This paper establishes asymptotic properties of quasi-maximum likelihood estimators for SAR panel data models with fixed effects and SAR disturbances. A direct approach is to estimate all the parameters, including the fixed effects. Because of the incidental parameter problem, some parameter estimators may be inconsistent or their distributions may not be properly centered. We propose an alternative estimation method based on transformation, which yields consistent estimators with properly centered distributions. For the model with individual effects only, the direct approach does not yield a consistent estimator of the variance parameter unless T is large, but the estimators of the other common parameters are the same as those of the transformation approach. We also consider estimation of the model with both individual and time effects.
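The simplest transformation of this kind (shown here on a two-unit toy panel, not the paper's full transformation for SAR models) is within-demeaning: subtracting each unit's time mean removes its fixed effect exactly, so the effect never has to be estimated.

```python
# Within (demeaning) transformation: subtracting each individual's time mean
# removes the fixed effect exactly, sidestepping the incidental parameter
# problem of estimating one effect per unit.
panel = {                      # hypothetical balanced panel: unit -> y_it, T = 3
    "A": [2.0, 3.0, 4.0],
    "B": [10.0, 9.0, 11.0],
}
demeaned = {}
for unit, series in panel.items():
    m = sum(series) / len(series)
    demeaned[unit] = [y - m for y in series]

# every transformed series now has mean zero: the fixed effect is gone
print(all(abs(sum(s)) < 1e-12 for s in demeaned.values()))  # True
```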

19.
In this paper, transforms are used with exponential smoothing in the quest for better forecasts. Two types of transforms are explored: those applied directly to a time series, and those applied indirectly, to the prediction errors. The various transforms are tested on a large number of time series from the M3 competition, and ANOVA is applied to the results. We find that the non-transformed series performs significantly worse than some of the transforms on the monthly data, and on a distribution-based performance measure for both the annual and quarterly data.
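The first (direct) type of transform can be sketched in a few lines (series values and the smoothing constant are illustrative): smooth the logged series with simple exponential smoothing, then back-transform the one-step forecast.

```python
import math

# Direct transform + exponential smoothing: apply simple exponential smoothing
# on the log scale, then exponentiate the final level to forecast.
series = [112.0, 118.0, 132.0, 129.0, 121.0, 135.0]  # illustrative data
alpha = 0.3
level = math.log(series[0])          # initialise on the log scale
for y in series[1:]:
    level = alpha * math.log(y) + (1 - alpha) * level
forecast = math.exp(level)           # back-transform the one-step forecast
print(100.0 < forecast < 140.0)      # True: forecast stays on the data's scale
```

The indirect type instead smooths the raw series and applies the transform to the resulting prediction errors.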

20.
The transformed-data maximum likelihood estimation (MLE) method for structural credit risk models developed by Duan [Duan, J.-C., 1994. Maximum likelihood estimation using price data of the derivative contract. Mathematical Finance 4, 155–167] is extended to account for the fact that observed equity prices may be contaminated by trading noises. In the presence of trading noises, the likelihood function based on the observed equity prices can only be evaluated via some nonlinear filtering scheme. We devise a particle filtering algorithm that is practical for maximum likelihood estimation of the structural credit risk model of Merton [Merton, R.C., 1974. On the pricing of corporate debt: The risk structure of interest rates. Journal of Finance 29, 449–470]. We implement the method on the Dow Jones 30 firms and on 100 randomly selected firms, and find that ignoring trading noises can lead to significant over-estimation of the firm's asset volatility. The estimated magnitude of trading noise is in line with the predictions of three common liquidity proxies. A simulation study is then conducted to ascertain the performance of the estimation method.
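The structural link that the transformed-data MLE inverts is standard in Merton's model: equity is a European call on firm assets with strike equal to the face value of debt. A sketch with illustrative inputs (all numbers are assumptions, not estimates from the paper):

```python
import math

# Merton model: equity value = Black-Scholes call on firm assets V with strike
# equal to the face value of debt D, maturity T, risk-free rate r and asset
# volatility sigma. The MLE recovers V and sigma from observed equity prices.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity(V, D, r, sigma, T):
    d1 = (math.log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return V * norm_cdf(d1) - D * math.exp(-r * T) * norm_cdf(d2)

E = merton_equity(V=120.0, D=100.0, r=0.03, sigma=0.25, T=1.0)
print(max(120.0 - 100.0 * math.exp(-0.03), 0.0) < E < 120.0)  # no-arbitrage bounds hold: True
```

With trading noise, the observed equity price is this value perturbed by an error term, which is why the likelihood must be evaluated by the particle filter instead of by direct inversion.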
