Similar Documents
20 similar documents retrieved.
1.
In a seminal paper, Mak (Journal of the Royal Statistical Society B, 55, 1993, 945) derived an efficient algorithm for solving non-linear unbiased estimation equations. In this paper, we show that when Mak's algorithm is applied to biased estimation equations, it yields the estimates that would be obtained by solving a bias-corrected estimation equation, so that the resulting estimator is consistent under the usual regularity conditions. Moreover, the properties that Mak established for his algorithm carry over to the biased case, now applying to the estimates from the bias-corrected equations. When the approach is applied to maximum likelihood and least squares estimation of the covariance matrix parameters in the general linear regression model, it reproduces the marginal likelihood estimator. Applied to the profile and marginal likelihood functions for estimating the lagged dependent variable coefficient in the dynamic linear regression model, it yields two new estimators. Monte Carlo simulation results show that the new approach leads to a better estimator when applied to the standard profile likelihood. It is therefore recommended for situations in which standard estimators are known to be biased.
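A toy sketch of the mechanism described above, assuming Mak's iteration takes the form: given the current estimate theta_k, solve E_t[g(Y; theta_k)] = g(y_obs; theta_k) for t. Applied to the biased maximum likelihood estimating equation for a normal variance, g = SS/n - sigma2, a single step lands on SS/(n-1), the bias-corrected (marginal likelihood/REML) answer, which mirrors the paper's point that the algorithm implicitly solves a bias-corrected equation. The data and starting value are illustrative.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(10)
y = rng.normal(loc=5.0, scale=2.0, size=25)
n = len(y)
SS = ((y - y.mean()) ** 2).sum()

def g_obs(sigma2):
    # ML estimating equation for the variance; biased because E[SS] = (n-1)*t
    return SS / n - sigma2

def g_expected(t, sigma2):
    # E_t[g(Y; sigma2)] when the true variance is t
    return (n - 1) * t / n - sigma2

sigma2 = SS / n                    # start from the (biased) ML estimate
for _ in range(5):                 # Mak-style update: match g to its expectation
    sigma2 = optimize.brentq(
        lambda t: g_expected(t, sigma2) - g_obs(sigma2), 1e-8, 10 * SS
    )                              # here the sigma2 terms cancel, so one step suffices
print(f"iterated estimate = {sigma2:.4f}, SS/(n-1) = {SS / (n - 1):.4f}")
```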

2.
We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and of the associated goodness-of-fit chi-squared statistics. If the sample size is not large, for instance smaller than about 1000, asymptotic distribution-free estimation methods are not applicable either. This paper assumes that the observed variables are transformed to normally distributed variables, the non-normally distributed variables being transformed with a Box–Cox function. Estimation of the model parameters and the transformation parameters is done by the maximum likelihood method. Furthermore, the standard errors of these parameters are derived, which makes it possible to assess the importance of the transformations. Finally, an empirical example is presented.
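A minimal sketch of the transformation step described above: estimating a Box–Cox parameter by maximum likelihood so that a skewed observed variable becomes approximately normal. The paper estimates the transformation and structural parameters jointly; the snippet below only illustrates the Box–Cox ML step on a single simulated variable.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.8, size=500)  # positive, right-skewed data

# scipy profiles the Box-Cox log-likelihood over lambda and returns the MLE
y, lam = stats.boxcox(x)
print(f"estimated lambda = {lam:.3f}")
print(f"skewness before = {stats.skew(x):.2f}, after = {stats.skew(y):.2f}")
```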

3.
Previous work on characterising the distribution of forecast errors in time series models by statistics such as the asymptotic mean square error has assumed that observations used in estimating parameters are statistically independent of those used to construct the forecasts themselves. This assumption is quite unrealistic in practical situations and the present paper is intended to tackle the question of how the statistical dependence between the parameter estimates and the final period observations used to generate forecasts affects the sampling distribution of the forecast errors. We concentrate on the first-order autoregression and, for this model, show that the conditional distribution of forecast errors given the final period observation is skewed towards the origin and that this skewness is accentuated in the majority of cases by the statistical dependence between the parameter estimates and the final period observation.
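A Monte Carlo sketch of the setting above: one-step forecast errors from a first-order autoregression in which the coefficient is estimated from the same realization used to forecast. Grouping the errors by the size of the final observation gives a feel for the conditional behaviour the paper characterizes analytically; the design below (sample size, coefficient) is purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
phi, T, reps = 0.7, 50, 10000
err, last = [], []
for _ in range(reps):
    y = np.zeros(T + 1)
    eps = rng.standard_normal(T + 1)
    for t in range(1, T + 1):
        y[t] = phi * y[t - 1] + eps[t]
    resp, lag = y[1:T], y[0:T - 1]          # estimation sample: y_0 .. y_{T-1}
    phi_hat = lag @ resp / (lag @ lag)      # least-squares estimate of phi
    err.append(y[T] - phi_hat * y[T - 1])   # error of the final-period forecast
    last.append(y[T - 1])
err, last = np.asarray(err), np.asarray(last)
big = last > np.quantile(last, 0.75)        # condition on a large final observation
print(f"skewness of all errors:         {stats.skew(err):+.3f}")
print(f"skewness given large last obs.: {stats.skew(err[big]):+.3f}")
```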

4.
This paper is concerned with the large sample efficiency of the asymptotic least-squares (ALS) estimators introduced by Gouriéroux, Monfort, and Trognon (1982, 1985) and Chamberlain (1982, 1984). We show how the efficiency of these estimators is affected when additional information is incorporated into the estimation procedure. The relationship between ALS and maximum likelihood is discussed. It is shown that ALS can be used to obtain asymptotically efficient estimates for a large range of econometric models. Many results from the literature on estimation are special cases of the framework adopted in this paper. An application of ALS to a dynamic rational expectations factor demand model for the manufacturing sector in the Netherlands demonstrates the potential of the method for estimating the parameters of models subject to nonlinear cross-equation restrictions.

5.
We discuss a regression model in which the regressors are dummy variables. The basic idea is that the observation units can be assigned to some well-defined combination of treatments, corresponding to the dummy variables. This assignment cannot be made without error, i.e. misclassification can play a role, a situation analogous to regression with errors in variables. It is well known that in such situations identification of the parameters is a prominent problem. We first show that, in our case, the parameters are not identified by the first two moments but can be identified by the likelihood. We then analyze two estimators: a moment estimator involving moments up to the third order, and a maximum likelihood estimator calculated with the help of the EM algorithm. Both estimators are evaluated in a small Monte Carlo experiment.

6.
This paper incorporates vintage differences and forecasts into the Markov switching models described by Hamilton (1994). The vintage differences and forecasts induce parameter breaks close to the end of the sample, too close for standard maximum likelihood techniques to produce precise parameter estimates. A supplementary procedure estimates the statistical properties of the end-of-sample observations that behave differently from the rest, allowing inferred probabilities to reflect the breaks. Empirical results using real-time data show that these techniques improve the ability of a Markov switching model based on GDP and GDI to recognize the start of the 2001 recession.
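A minimal sketch of the baseline model the paper builds on: a two-regime Markov switching mean estimated by maximum likelihood, as in Hamilton (1994). The vintage differences, forecasts, and end-of-sample break adjustments that are the paper's contribution are not reproduced here; statsmodels' MarkovRegression is used as a stand-in, and the data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
t = np.arange(200)
mu = np.where((t > 120) & (t < 150), -1.0, 0.8)   # a low-mean "recession" spell
y = mu + 0.5 * rng.standard_normal(200)

mod = sm.tsa.MarkovRegression(y, k_regimes=2)     # switching intercept
res = mod.fit()
print(res.params)  # transition probabilities, regime means, variance
# res.smoothed_marginal_probabilities holds the inferred per-date regime
# probabilities used to date recessions.
```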

7.
Maximum likelihood procedures for estimating sum-constrained models such as demand systems and brand choice models break down, or produce very unstable estimates, when the number of categories (n) is large compared with the number of observations (T). In applied research, this problem is usually resolved by postulating that the contemporaneous covariance matrix of the dependent variables is known up to a constant of proportionality. In this paper we develop a maximum likelihood procedure for sum-constrained models with large numbers of categories which does not require many observations, yet allows n covariance parameters to be estimated freely.

8.
The past forty years have seen a great deal of research into the construction and properties of nonparametric estimates of smooth functions. This research has focused primarily on two sides of the smoothing problem: nonparametric regression and density estimation. Theoretical results for these two situations are similar, and multivariate density estimation was an early justification for the Nadaraya-Watson kernel regression estimator.
A third, less well-explored, strand of applications of smoothing is to the estimation of probabilities in categorical data. In this paper the position of categorical data smoothing as a bridge between nonparametric regression and density estimation is explored. Nonparametric regression provides a paradigm for the construction of effective categorical smoothing estimates, and use of an appropriate likelihood function yields cell probability estimates with many desirable properties. Such estimates can be used to construct regression estimates when one or more of the categorical variables are viewed as response variables. They also lead naturally to the construction of well-behaved density estimates using local or penalized likelihood estimation, which can then be used in a regression context. Several real data sets are used to illustrate these points.
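A toy sketch of the idea above: raw cell proportions for an ordered categorical variable are shrunk towards their neighbours with a discrete kernel, trading a little bias for a large variance reduction in sparse cells. The paper's local- and penalized-likelihood estimators are more refined; the kernel, bandwidth, and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
true_p = np.array([0.02, 0.08, 0.20, 0.30, 0.20, 0.12, 0.05, 0.03])
counts = rng.multinomial(60, true_p)           # a sparse sample over 8 ordered cells
raw = counts / counts.sum()

cells = np.arange(len(raw))
h = 1.0                                        # bandwidth of the discrete kernel
K = np.exp(-0.5 * ((cells[:, None] - cells[None, :]) / h) ** 2)
K /= K.sum(axis=1, keepdims=True)              # rows sum to one
smooth = K @ raw                               # kernel-weighted cell probabilities

print("raw    :", np.round(raw, 3))
print("smooth :", np.round(smooth, 3))
print("SSE raw = %.4f, SSE smooth = %.4f"
      % (((raw - true_p) ** 2).sum(), ((smooth - true_p) ** 2).sum()))
```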

9.
I propose a quasi-maximum likelihood framework for estimating nonlinear models with continuous or discrete endogenous explanatory variables. Joint and two-step estimation procedures are considered. The joint procedure is a quasi-limited information maximum likelihood procedure, as one or both of the log likelihoods may be misspecified. The two-step control function approach is computationally simple and leads to straightforward tests of endogeneity. In the case of discrete endogenous explanatory variables, I argue that the control function approach can be applied with generalized residuals to obtain average partial effects. I show how the results apply to nonlinear models for fractional and nonnegative responses.
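A minimal sketch of the two-step control function procedure described above, for a probit with one continuous endogenous explanatory variable: regress the endogenous regressor on the instrument, then include the first-stage residuals as an extra regressor in the probit; the t-statistic on that term is a simple test of endogeneity. Data and coefficients are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
z = rng.standard_normal(n)                        # instrument
u = rng.standard_normal(n)                        # common unobservable
x = 0.8 * z + 0.6 * u + rng.standard_normal(n)    # endogenous regressor
y = (1.0 + 0.5 * x + u + rng.standard_normal(n) > 0).astype(float)

# Step 1: first-stage OLS residuals (the control function)
v_hat = sm.OLS(x, sm.add_constant(z)).fit().resid

# Step 2: probit including the control function
X2 = sm.add_constant(np.column_stack([x, v_hat]))
res = sm.Probit(y, X2).fit(disp=0)
print(res.params)      # a nonzero coefficient on v_hat signals endogeneity
print(res.tvalues[2])  # t-test on the control function term
```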

10.
While jackknife and bootstrap estimates of the variance of a statistic are well-known, the author extends these nonparametric maximum likelihood techniques to the estimation of skewness and kurtosis. In addition to the usual negative jackknife, the positive jackknife proposed by Beran (1984) is also considered. The performance of the methods is investigated by a Monte Carlo study for Kendall's tau in various situations likely to occur in practice. Possible applications of these developments are discussed.
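A hedged sketch of the idea above, using leave-one-out replicates of Kendall's tau: the classical jackknife variance, plus the sample skewness of the Tukey pseudo-values as a crude stand-in for the skewness estimators studied in the paper (whose exact form is not reproduced here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 40
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)           # a dependent pair, nonzero tau

tau = stats.kendalltau(x, y)[0]
loo = np.array([stats.kendalltau(np.delete(x, i), np.delete(y, i))[0]
                for i in range(n)])             # leave-one-out replicates
pseudo = n * tau - (n - 1) * loo                # Tukey pseudo-values

var_jack = pseudo.var(ddof=1) / n               # classical jackknife variance
skew_jack = stats.skew(pseudo)                  # crude skewness estimate
print(f"tau = {tau:.3f}, jackknife variance = {var_jack:.4f}, "
      f"pseudo-value skewness = {skew_jack:+.3f}")
```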

11.
Robustness issues in multilevel regression analysis
A multilevel problem concerns a population with a hierarchical structure. A sample from such a population can be described as a multistage sample. First, a sample of higher level units is drawn (e.g. schools or organizations), and next a sample of the sub‐units from the available units (e.g. pupils in schools or employees in organizations). In such samples, the individual observations are in general not completely independent. Multilevel analysis software accounts for this dependence and in recent years these programs have been widely accepted. Two problems that occur in the practice of multilevel modeling will be discussed. The first problem is the choice of the sample sizes at the different levels. What are sufficient sample sizes for accurate estimation? The second problem is the normality assumption of the level‐2 error distribution. When one wants to conduct tests of significance, the errors need to be normally distributed. What happens when this is not the case? In this paper, simulation studies are used to answer both questions. With respect to the first question, the results show that a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second‐level standard errors. The answer to the second question is that only the standard errors for the random effects at the second level are highly inaccurate if the distributional assumptions concerning the level‐2 errors are not fulfilled. Robust standard errors turn out to be more reliable than the asymptotic standard errors based on maximum likelihood.
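A single replication in the spirit of the simulations above: two-level data whose level-2 (random intercept) errors are deliberately non-normal, fitted by maximum likelihood with statsmodels' MixedLM. Repeating this over many draws, and over different numbers of groups, is how one would reproduce the kind of evidence the paper reports; the sizes and coefficients below are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_groups, group_size = 30, 20                     # a smallish level-2 sample
u = (rng.chisquare(df=1, size=n_groups) - 1) / np.sqrt(2)  # skewed, mean 0, var 1
groups = np.repeat(np.arange(n_groups), group_size)
x = rng.standard_normal(n_groups * group_size)
y = 1.0 + 0.5 * x + u[groups] + rng.standard_normal(len(x))

res = sm.MixedLM(y, sm.add_constant(x), groups=groups).fit(reml=False)
print(res.summary())   # fixed effects, level-2 ("Group") variance, std. errors
```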

12.
A method is presented for the estimation of the parameters in the dynamic simultaneous equations model with vector autoregressive moving average disturbances. The estimation procedure is derived from the full information maximum likelihood approach and is based on Newton-Raphson techniques applied to the likelihood equations. The resulting two-step Newton-Raphson procedure involves only generalized instrumental variables estimation in the second step. This procedure also serves as the basis for an iterative scheme to solve the normal equations and obtain the maximum likelihood estimates of the conditional likelihood function. A nine-equation variant of the quarterly forecasting model of the US economy developed by Fair is then used as a realistic example to illustrate the estimation procedure described in the paper.

13.
This paper uses a k-th order nonparametric Granger causality test to analyze whether firm-level, economic policy and macroeconomic uncertainty indicators predict movements in real stock returns and their volatility. Linear Granger causality tests show that, whilst economic policy and macroeconomic uncertainty indices can predict stock returns, firm-level uncertainty measures possess no predictability. However, given the existence of structural breaks and inherent nonlinearities in the series, linear models are likely to be misspecified and their results unreliable, so we employ a nonparametric causality methodology. The nonparametric test reveals that in fact no predictability can be observed for any of the uncertainty measures (firm-level, macroeconomic, or economic policy) vis-à-vis real stock returns. In turn, pronounced causal predictability is demonstrated for the volatility series, with the exception of firm-level uncertainty. Overall, our results not only emphasize the role of economic and firm-level uncertainty measures in predicting the volatility of stock returns, but also caution against using linear models, which are likely to suffer from misspecification in the presence of parameter instability and nonlinear spillover effects.
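A sketch of the linear benchmark step above: a standard (linear) Granger causality test of whether a simulated "uncertainty" series predicts returns, via statsmodels. The paper's k-th order nonparametric test, which is what handles the breaks and nonlinearities, is not implemented here; the data-generating process below is made deliberately nonlinear to show where a linear test can mislead.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
T = 500
uncert = 0.1 * rng.standard_normal(T).cumsum()     # a persistent "uncertainty" proxy
ret = np.zeros(T)
for t in range(1, T):
    # returns respond nonlinearly (through the absolute value) to lagged uncertainty
    ret[t] = -0.2 * abs(uncert[t - 1]) + 0.5 * rng.standard_normal()

data = np.column_stack([ret, uncert])              # tests: column 2 -> column 1
out = grangercausalitytests(data, maxlag=2)
for lag, res in out.items():
    f_stat, p_val = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_val:.3f}")
```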

14.
This paper develops an exact maximum likelihood technique for estimating linear models with second-order autoregressive errors, which utilizes the full set of observations and explicitly constrains the estimates of the error process to satisfy a priori stationarity conditions. A non-linear solution technique which is new to econometrics and works very efficiently is put forward as part of the estimating procedure. Empirical results are presented which emphasize the importance of utilizing the full set of observations and the associated stationarity restrictions.
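A modern stand-in for the estimation problem above: a regression with AR(2) errors fitted by exact maximum likelihood over all observations, with stationarity enforced, using statsmodels' SARIMAX (regression with ARMA errors via the Kalman filter) rather than the paper's own solution technique. Simulated data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
T = 300
x = rng.standard_normal(T)
e = np.zeros(T)
for t in range(2, T):                     # AR(2) disturbances
    e[t] = 0.6 * e[t - 1] - 0.3 * e[t - 2] + rng.standard_normal()
y = 2.0 + 1.5 * x + e

mod = sm.tsa.statespace.SARIMAX(y, exog=sm.add_constant(x), order=(2, 0, 0),
                                enforce_stationarity=True)
res = mod.fit(disp=False)
print(res.params)   # intercept, slope, ar.L1, ar.L2, sigma2
```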

15.
Statistica Neerlandica, 1965, 19(2-3): 81-91
A comparison is made between two different methods of estimating the probability that a normally distributed observation is less than a certain value. One method is based on the binomial distribution, the other on Hald's maximum likelihood estimates of the parameters of a censored normal distribution. For large sample sizes a graph of the relative efficiency of these two estimates is constructed. A sampling experiment was performed to investigate, for one particular situation, the possible bias of Hald's maximum likelihood estimate, which is only asymptotically unbiased.
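A sketch of the two estimators being compared above, for P(X < c) with normal X: (1) the binomial estimate, i.e. the sample fraction below c; (2) the plug-in estimate Phi((c - mu)/sigma) with (mu, sigma) obtained by maximum likelihood from a sample right-censored at c, in the spirit of Hald's estimator. The censoring scheme and sample size are assumptions for illustration.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(8)
c, n = 1.0, 200
x = rng.normal(loc=0.0, scale=1.0, size=n)

# (1) binomial estimate
p_binom = np.mean(x < c)

# (2) ML with right-censoring at c: values above c are only counted, not observed
obs = x[x < c]
n_cens = np.sum(x >= c)

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                        # keeps sigma positive
    ll = stats.norm.logpdf(obs, mu, sigma).sum()
    ll += n_cens * stats.norm.logsf(c, mu, sigma)    # censored contribution
    return -ll

mu_hat, log_s_hat = optimize.minimize(neg_loglik, x0=[0.0, 0.0]).x
p_ml = stats.norm.cdf(c, mu_hat, np.exp(log_s_hat))
print(f"true = {stats.norm.cdf(c):.3f}, binomial = {p_binom:.3f}, ML = {p_ml:.3f}")
```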

16.
There are many situations in life testing experiments where an item fails instantaneously, so that the observed lifetime is reported as zero; items that fail prematurely in this way are called early failures. We propose a modified Weibull distribution as a suitable model for such situations: a mixture of a singular distribution at zero and a two-parameter Weibull distribution. We obtain the maximum likelihood estimates of the parameters and their asymptotic distributions. The methods are illustrated on the drying of wood under different experiments and schedules reported by Vanmann (Research Report, 1991:2).
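A minimal sketch of the modified Weibull model described above: a point mass at zero for instantaneous ("early") failures mixed with a two-parameter Weibull for the rest. The ML estimates separate cleanly: the mixing weight is the observed fraction of zeros, and the Weibull shape and scale are fitted from the positive lifetimes. Data simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, p_zero, shape, scale = 300, 0.15, 1.8, 10.0
is_zero = rng.random(n) < p_zero
t = np.where(is_zero, 0.0, scale * rng.weibull(shape, size=n))

p_hat = np.mean(t == 0)                        # MLE of the zero-failure mass
pos = t[t > 0]
shape_hat, _, scale_hat = stats.weibull_min.fit(pos, floc=0)  # Weibull MLE
print(f"p = {p_hat:.3f} (true {p_zero}), shape = {shape_hat:.2f} "
      f"(true {shape}), scale = {scale_hat:.2f} (true {scale})")
```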

17.
This paper estimates a hedonic housing model based on flats sold in the city of Paris over the period 1990–2003. This is done using maximum likelihood estimation, taking into account the nested structure of the data. Paris is historically divided into 20 arrondissements, each divided into four quartiers (quarters), which in turn contain between 15 and 169 blocks (îlots) per quartier. This is an unbalanced pseudo-panel dataset containing 156,896 transactions. Despite the richness of the data, many neighborhood characteristics are not observed, and we attempt to capture these neighborhood spillover effects using a spatial lag model. Using likelihood ratio tests, we find significant spatial lag effects as well as significant nested random error effects. The empirical results show that the hedonic housing estimates and the corresponding marginal effects are affected by taking into account the nested structure of the Paris housing data as well as the spatial neighborhood effects.

18.
In this paper we review statistical methods which combine hidden Markov models (HMMs) and random effects models in a longitudinal setting, leading to the class of so‐called mixed HMMs. This class of models has several interesting features. It deals with the dependence of a response variable on covariates, serial dependence, and unobserved heterogeneity in an HMM framework. It exploits the properties of HMMs, such as the relatively simple dependence structure and the efficient computational procedure, and allows one to handle a variety of real‐world time‐dependent data. We give details of the Expectation‐Maximization algorithm for computing the maximum likelihood estimates of model parameters and we illustrate the method with two real applications describing the relationship between patent counts and research and development expenditures, and between stock and market returns via the Capital Asset Pricing Model.

19.
Journal of Econometrics, 2005, 128(2): 301-323
Gauss–Hermite quadrature is often used to evaluate and maximize the likelihood for random component probit models. Unfortunately, the estimates are biased for large cluster sizes and/or intraclass correlations. We show that adaptive quadrature largely overcomes these problems. We then extend the adaptive quadrature approach to general random coefficient models with limited and discrete dependent variables. The models can include several nested random effects (intercepts and coefficients) representing unobserved heterogeneity at different levels of a hierarchical dataset. The required multivariate integrals are evaluated efficiently using spherical quadrature rules. Simulations show that adaptive quadrature performs well in a wide range of situations.
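A sketch of the integral that these quadrature rules target: the likelihood contribution of one cluster in a random-intercept probit, i.e. an integral of a product of probit probabilities over a standard normal random effect, here evaluated with ordinary Gauss–Hermite nodes. When clusters are large or the intraclass correlation is high the integrand becomes a sharp spike that fixed nodes can miss, which is what motivates the adaptive version (recentring and rescaling the nodes around the integrand's mode, not shown). All numbers are illustrative.

```python
import numpy as np
from scipy import stats

nodes, weights = np.polynomial.hermite.hermgauss(15)  # 15-point Gauss-Hermite rule

def cluster_loglik(y, beta0, sigma_u):
    """Log of the integral over u ~ N(0,1) of prod_j Phi(q_j (beta0 + sigma_u u))."""
    u = np.sqrt(2.0) * nodes                 # change of variables to an N(0,1) weight
    q = 2.0 * y[:, None] - 1.0               # +1/-1 coding of the binary responses
    probs = stats.norm.cdf(q * (beta0 + sigma_u * u[None, :]))
    vals = probs.prod(axis=0)                # integrand evaluated at each node
    return np.log((weights * vals).sum() / np.sqrt(np.pi))

y = np.array([1, 1, 0, 1, 1, 1, 1, 0])       # one cluster's binary outcomes
print(cluster_loglik(y, beta0=0.3, sigma_u=1.0))
```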

20.
In the maximum likelihood estimation of distributed lag models, it makes no difference asymptotically whether one drops or estimates the 'truncation remainder' terms. In small samples, however, the decision to drop or estimate these terms does affect tests of hypotheses. This paper reports the results of a Monte Carlo experiment performed to see which treatment of these terms leads to the most accurate tests of hypotheses. The results are mixed, but generally it seems better to estimate the truncation remainder than to drop it.
