Similar Documents
20 similar documents found (search time: 15 ms)
1.
Joint two-step estimation procedures which have the same asymptotic properties as the maximum likelihood (ML) estimator are developed for the final equation, transfer function and structural form of a multivariate dynamic model with normally distributed vector-moving average errors. The ML estimator under fixed and known initial values is obtained by iterating the procedure until convergence. The asymptotic distribution of the two-step estimators is used to construct large sample testing procedures for the different forms of the model.

2.
3.
This paper develops the maximum likelihood procedure to estimate the appropriate functional form in regression models with heteroskedastic errors. The analysis then enables us to separate the influence of non-linearity in an estimate of the transformation parameter from the influence of stabilizing the error variance. Illustrative examples are used to show that estimation and tests for functional form and heteroskedasticity can and should be considered jointly.
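The joint estimation described above can be sketched by maximizing a Gaussian log-likelihood over the transformation parameter and the variance parameters simultaneously. The sketch below assumes a Box-Cox transformation and a simple multiplicative variance model; the data-generating process and all parameter values are hypothetical, chosen only to make the example self-contained.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1.0, 5.0, n)
# heteroskedastic errors whose standard deviation grows with x
w = 2.0 + 1.5 * x + rng.normal(0.0, 1.0, n) * np.exp(0.5 * (-1.0 + 0.3 * x))
y = (0.5 * w + 1.0) ** 2          # true transformation parameter: lambda = 0.5

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def negloglik(params):
    lam, b0, b1, c0, c1 = params
    z = boxcox(y, lam)
    resid = z - b0 - b1 * x
    var = np.exp(c0 + c1 * x)     # multiplicative heteroskedasticity
    # Gaussian log-likelihood plus the Box-Cox Jacobian term
    ll = (-0.5 * np.sum(np.log(2.0 * np.pi * var) + resid ** 2 / var)
          + (lam - 1.0) * np.sum(np.log(y)))
    return -ll

res = minimize(negloglik, x0=[1.0, 0.0, 1.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000})
lam_hat = res.x[0]
```

Estimating the transformation and variance parameters jointly, as the abstract argues, keeps the estimate of the transformation parameter from absorbing the variance-stabilizing role.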

4.
This paper considers a two-way error component model with no lagged dependent variable and investigates the performance of various testing and estimation procedures applied to this model by means of Monte Carlo experiments. The following results were found: (1) The Chow-test performed poorly in testing the stability of cross-section regressions over time and in testing the stability of time-series regression across regions. (2) The Roy-Zellner test performed well and is recommended for testing the poolability of the data. (3) The Hausman specification test, employed to test the orthogonality assumption, gave a low frequency of Type I errors. (4) The Lagrange multiplier test, employed to test for zero variance components, did well except in cases where it was badly needed. (5) The problem of negative estimates of the variance components was found to be more serious in the two-way model than in the one-way model. However, replacing the negative variance estimates by zero did not have a serious effect on the performance of the second-round GLS estimates of the regression coefficients. (6) As in the one-way model, all the two-stage estimation methods performed reasonably well. (7) Better estimates of the variance components did not necessarily lead to better second-round GLS estimates of the regression coefficients.
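Point (5) above — negative variance-component estimates replaced by zero before second-round GLS — can be illustrated in the simpler one-way case. The sketch below uses a standard ANOVA-type decomposition; the data-generating values are hypothetical, with deliberately small individual effects so that a negative estimate is plausible.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 5                        # cross-section units, time periods
mu = rng.normal(0.0, 0.1, N)        # small individual effects: negative estimates likely
y = mu[:, None] + rng.normal(0.0, 1.0, (N, T))

ybar_i = y.mean(axis=1)
# one-way ANOVA decomposition: within (residual) and between mean squares
s2_within = ((y - ybar_i[:, None]) ** 2).sum() / (N * (T - 1))
s2_between = T * ((ybar_i - y.mean()) ** 2).sum() / (N - 1)

sigma2_mu = (s2_between - s2_within) / T   # can come out negative in finite samples
sigma2_mu = max(sigma2_mu, 0.0)            # replace a negative estimate by zero,
# then feed the truncated estimate into the second-round GLS quasi-demeaning weight
theta = 1.0 - np.sqrt(s2_within / (s2_within + T * sigma2_mu))
```

When the truncation binds (`theta == 0`), feasible GLS collapses to pooled OLS, which is consistent with the abstract's finding that the truncation does little harm to the second-round coefficient estimates.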

5.
An important problem in statistics is to study the effect of one or two factors on a dependent variable. This type of problem can be formulated as a regression problem (using dummy (0,1) variables to represent the levels of the factors), and the standard least squares (LS) analysis is well known. The least absolute value (LAV) analysis is less well known, but is certainly becoming more widely used, especially in exploratory data analysis. The purpose of this report is to present a didactic treatment of visual display methods useful in exploratory data analysis. These visual display techniques (stem-and-leaf, box-and-whisker, and two-way plots) are presented for both the least squares and the least absolute value analyses of a two-way classification model.
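The LS and LAV analyses of a two-way classification contrasted above can be sketched with a mean decomposition versus Tukey's median polish, the classic LAV-flavoured exploratory counterpart. The table values below are hypothetical, with one outlying cell to show where the two fits differ.

```python
import numpy as np

# hypothetical two-way layout: rows = factor A levels, cols = factor B levels
table = np.array([[14.0, 16.0, 11.0, 13.0],
                  [18.0, 21.0, 15.0, 17.0],
                  [10.0, 12.0,  9.0, 30.0]])   # one outlier at cell (2, 3)

def mean_polish(x):
    """Least-squares decomposition: grand mean + row effects + column effects."""
    overall = x.mean()
    row = x.mean(axis=1) - overall
    col = x.mean(axis=0) - overall
    resid = x - overall - row[:, None] - col[None, :]
    return overall, row, col, resid

def median_polish(x, iters=10):
    """LAV-flavoured analogue (Tukey): sweep out medians instead of means."""
    x = x.copy()
    overall, row, col = 0.0, np.zeros(x.shape[0]), np.zeros(x.shape[1])
    for _ in range(iters):
        r = np.median(x, axis=1); x -= r[:, None]; row += r
        m = np.median(row); row -= m; overall += m
        c = np.median(x, axis=0); x -= c[None, :]; col += c
        m = np.median(col); col -= m; overall += m
    return overall, row, col, x   # x now holds the residuals

o_ls, r_ls, c_ls, e_ls = mean_polish(table)
o_md, r_md, c_md, e_md = median_polish(table)
```

The residuals from either decomposition are what the report's stem-and-leaf and box-and-whisker displays would be drawn from; the median-polish residuals isolate the outlier instead of smearing it across row and column effects.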

6.
This paper considers alternative methods of testing cointegration in fractionally integrated processes, using the bootstrap. The investigation focuses on (a) choice of statistic, (b) use of bias correction techniques, and (c) designing the simulation of the null hypothesis. Three residual-based tests are considered, two of the null hypothesis of non-cointegration, the third of the null hypothesis that cointegration exists. The tests are compared in Monte Carlo experiments to throw light on the relative roles of issues (a)–(c) in test performance.
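A minimal sketch of one residual-based test of the non-cointegration null, with the null simulated by bootstrap, is given below. It uses an Engle-Granger-style Dickey-Fuller statistic and an iid resampling of first differences to build independent random walks under the null; this is a simplification of the paper's fractional setting (a sieve step would add autoregressive structure), and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
# hypothetical cointegrated pair sharing a common stochastic trend
trend = np.cumsum(rng.normal(size=T))
x = trend + rng.normal(size=T)
y = 1.0 + 0.5 * trend + rng.normal(size=T)

def df_stat(u):
    """Dickey-Fuller t-statistic (no constant) on a residual series."""
    du, lag = np.diff(u), u[:-1]
    rho = (lag @ du) / (lag @ lag)
    resid = du - rho * lag
    se = np.sqrt((resid @ resid) / (len(du) - 1) / (lag @ lag))
    return rho / se

def eg_stat(y, x):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return df_stat(y - X @ beta)

stat = eg_stat(y, x)
# simulate the non-cointegration null: independent random walks built from
# resampled first differences (iid bootstrap; design choice (c) in the abstract)
B, null_stats = 199, []
dx, dy = np.diff(x), np.diff(y)
for _ in range(B):
    xb = np.cumsum(rng.choice(dx, size=T, replace=True))
    yb = np.cumsum(rng.choice(dy, size=T, replace=True))
    null_stats.append(eg_stat(yb, xb))
pval = float(np.mean(np.array(null_stats) <= stat))   # left-tail test
```

A small bootstrap p-value rejects non-cointegration; with a genuinely cointegrated pair, the observed statistic sits far in the left tail of the simulated null distribution.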

7.
A variety of asymptotically valid tests for orthogonality, serial correlation, predictive failure, and coefficient restrictions are presented, and their rejection probabilities are assessed in linear structural models with lagged dependent and (possibly) jointly dependent variables by Monte Carlo methods. For all test procedures the small-sample distribution under the null usually deviates substantially from the asymptotic distribution; this impedes their use in a reliable model selection strategy for econometric time-series analysis. Despite the harassing dependence of type I errors on factors generally unknown to the practitioner, inconsistencies originating from specification errors or from disregarded simultaneity may be detected by particular tests in particular situations. From this study some clues emerge on how to interpret (in)significant values of the various test statistics.

8.
This study examined the performance of two alternative estimation approaches in structural equation modeling for ordinal data under different levels of model misspecification, score skewness, sample size, and model size. Both approaches involve analyzing a polychoric correlation matrix as well as adjusting standard error estimates and the model chi-square statistic, but one estimates model parameters with maximum likelihood and the other with robust weighted least squares. Relative bias in parameter estimates and standard error estimates, Type I error rate, and empirical power of the model test, where appropriate, were evaluated through Monte Carlo simulations. These alternative approaches generally provided unbiased parameter estimates when the model was correctly specified. They also provided unbiased standard error estimates and adequate Type I error control in general unless sample size was small and the measured variables were moderately skewed. Differences between the methods in convergence problems and the evaluation criteria, especially under small sample and skewed variable conditions, are discussed.

9.
Ordered data arise naturally in many fields of statistical practice. Often some sample values are unknown or disregarded due to various reasons. On the basis of some sample quantiles from the Rayleigh distribution, the problems of estimating the Rayleigh parameter, hazard rate and reliability function, and predicting future observations are addressed using a Bayesian perspective. The construction of β-content and β-expectation Bayes tolerance limits is also tackled. Under squared-error loss, Bayes estimators and predictors are deduced analytically. Exact tolerance limits are derived by solving simple nonlinear equations. Highest posterior density estimators and credibility intervals, as well as Bayes estimators and predictors under linear loss, can easily be computed iteratively.
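The squared-error-loss Bayes estimation mentioned above can be sketched for the complete-sample case (the paper works from sample quantiles; the simplification is for self-containment). With θ = σ² as the Rayleigh parameter, an inverse-gamma prior is conjugate, so the Bayes estimator under squared error is the posterior mean in closed form; the hyperparameters and reliability horizon below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
x = rng.rayleigh(scale=sigma, size=500)   # hypothetical complete Rayleigh sample

# inverse-gamma prior IG(a, b) on theta = sigma^2 is conjugate:
# posterior is IG(a + n, b + sum(x^2)/2)
a, b = 2.0, 1.0                           # hypothetical hyperparameters
n, s = len(x), 0.5 * np.sum(x ** 2)
a_post, b_post = a + n, b + s
theta_bayes = b_post / (a_post - 1)       # posterior mean = Bayes estimate, squared-error loss

# reliability R(t) = P(X > t) = exp(-t^2 / (2*theta)); posterior mean by Monte Carlo
draws = b_post / rng.gamma(a_post, size=20000)   # inverse-gamma posterior draws
t0 = 1.5
R_bayes = float(np.mean(np.exp(-t0 ** 2 / (2.0 * draws))))
```

The same posterior draws can be reused for hazard rates, predictive quantities, and the tolerance limits the abstract mentions, by evaluating the relevant functional on each draw.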

10.
Bayesian and empirical Bayesian estimation methods are reviewed and proposed for the row and column parameters in two-way contingency tables without interaction. Rasch's multiplicative Poisson model for misreadings is discussed in an example. The case is treated where assumptions of exchangeability are reasonable a priori for the unknown parameters. Two different types of prior distributions are compared. It appears that gamma priors yield more tractable results than lognormal priors.
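The tractability of gamma priors noted above comes from conjugacy: under a multiplicative Poisson model with cell means α_i·β_j, the full conditionals for the row and column parameters are again gamma, so a Gibbs sampler is a few lines. The counts and hyperparameters below are hypothetical, and the row/column scales are only identified through their product.

```python
import numpy as np

rng = np.random.default_rng(4)
n = np.array([[12, 7, 3],
              [ 8, 5, 2],
              [20, 11, 6]])              # hypothetical two-way counts, no interaction
I, J = n.shape
a, b = 1.0, 1.0                          # hypothetical gamma prior hyperparameters
alpha, beta = np.ones(I), np.ones(J)
keep = []
for it in range(2000):
    # conjugate full conditionals under n_ij ~ Poisson(alpha_i * beta_j):
    # alpha_i | beta ~ Gamma(a + n_i., b + sum_j beta_j), and symmetrically for beta
    alpha = rng.gamma(a + n.sum(axis=1), 1.0 / (b + beta.sum()))
    beta = rng.gamma(a + n.sum(axis=0), 1.0 / (b + alpha.sum()))
    if it >= 500:                        # discard burn-in draws
        keep.append(np.outer(alpha, beta))
fitted = np.mean(keep, axis=0)           # posterior mean of the cell intensities
```

Only the products α_i·β_j are reported, since those are the identified quantities; with lognormal priors the conditionals lose this closed form, which is the tractability contrast the abstract draws.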

11.
Drawing on a large body of test data, this paper discusses the influence of specimen size on the measured shear strength parameters of expansive soil, and on that basis identifies a reasonable specimen diameter for shear strength testing of expansive soils, together with a method for determining the values of C and Φ.

12.
Novel transition-based misspecification tests of semiparametric and fully parametric univariate diffusion models based on the estimators developed in [Kristensen, D., 2010. Pseudo-maximum likelihood estimation in two classes of semiparametric diffusion models. Journal of Econometrics 156, 239-259] are proposed. It is demonstrated that transition-based tests in general lack power in detecting certain departures from the null since they integrate out local features of the drift and volatility. As a solution to this, tests that directly compare drift and volatility estimators under the relevant null and alternative are also developed which exhibit better power against local alternatives.

13.
I propose a quasi-maximum likelihood framework for estimating nonlinear models with continuous or discrete endogenous explanatory variables. Joint and two-step estimation procedures are considered. The joint procedure is a quasi-limited information maximum likelihood procedure, as one or both of the log likelihoods may be misspecified. The two-step control function approach is computationally simple and leads to straightforward tests of endogeneity. In the case of discrete endogenous explanatory variables, I argue that the control function approach can be applied with generalized residuals to obtain average partial effects. I show how the results apply to nonlinear models for fractional and nonnegative responses.
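The two-step control function approach for a continuous endogenous regressor can be sketched as follows: a first-stage OLS regression on the instrument produces residuals, and including those residuals in a second-stage probit both corrects for endogeneity and (via the residual coefficient) yields a simple endogeneity test. All data and coefficient values below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 2000
z = rng.normal(size=n)                            # instrument
v = rng.normal(size=n)                            # first-stage error
u = 0.5 * v + np.sqrt(0.75) * rng.normal(size=n)  # structural error, corr(u, v) != 0
y2 = 1.0 + 0.7 * z + v                            # continuous endogenous regressor
y1 = (0.5 + 1.0 * y2 + u > 0).astype(float)       # binary outcome

# step 1: first-stage OLS; the residuals are the control function
Z = np.column_stack([np.ones(n), z])
pi_hat, *_ = np.linalg.lstsq(Z, y2, rcond=None)
vhat = y2 - Z @ pi_hat

# step 2: probit of y1 on (1, y2, vhat); a significant vhat coefficient
# signals endogeneity of y2
X = np.column_stack([np.ones(n), y2, vhat])

def probit_nll(b):
    p = norm.cdf(X @ b).clip(1e-10, 1 - 1e-10)
    return -np.sum(y1 * np.log(p) + (1.0 - y1) * np.log(1.0 - p))

res = minimize(probit_nll, np.zeros(3), method="BFGS")
b_hat = res.x
```

For a discrete first stage, the abstract's generalized-residuals variant replaces `vhat` with the generalized residual from a first-stage probit; the second step is otherwise unchanged.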

14.
In a sample-selection model with the ‘selection’ variable Q and the ‘outcome’ variable Y, Y is observed only when Q=1. For a treatment D affecting both Q and Y, three effects are of interest: the ‘participation’ (i.e., selection) effect of D on Q, the ‘visible performance’ (i.e., observed outcome) effect of D on Y* ≡ QY, and the ‘invisible performance’ (i.e., latent outcome) effect of D on Y. This paper shows the conditions under which the three effects are identified, respectively, by the three corresponding mean differences of Q, Y*, and Y|Q=1 across the control (D=0) and treatment (D=1) groups. Our nonparametric estimators for those effects adopt a two-sample framework and have several advantages over the usual matching methods. First, there is no need to select the number of matched observations. Second, the asymptotic distribution is easily obtained. Third, over-sampling the control/treatment group is allowed. Fourth, there is a built-in mechanism that takes into account the ‘non-overlapping support problem’, which the usual matching deals with by choosing a ‘caliper’. Fifth, a sensitivity analysis to gauge the presence of unobserved confounders is available. A simulation study is conducted to compare the proposed methods with matching methods, and a real data illustration is provided.
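The three mean differences just described are straightforward to compute once the selection structure is simulated. The sketch below uses a hypothetical randomized treatment so that all three contrasts are simple group-mean differences; the data-generating values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
D = rng.integers(0, 2, n)                                   # randomized treatment
Q = (0.2 + 0.5 * D + rng.normal(size=n) > 0).astype(int)    # selection ('participation')
Y_latent = 1.0 + 0.8 * D + rng.normal(size=n)               # latent outcome Y
Ystar = Q * Y_latent                                        # observed outcome Y* = Q*Y

participation = Q[D == 1].mean() - Q[D == 0].mean()         # mean difference of Q
visible = Ystar[D == 1].mean() - Ystar[D == 0].mean()       # mean difference of Y* = QY
selected_diff = (Y_latent[(D == 1) & (Q == 1)].mean()
                 - Y_latent[(D == 0) & (Q == 1)].mean())    # difference of Y given Q = 1
```

The third contrast conditions on the selected subsample, which is exactly where the paper's identification conditions matter: without them, `selected_diff` mixes the treatment effect with selection into `Q = 1`.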

15.
First difference maximum likelihood (FDML) seems an attractive estimation methodology in dynamic panel data modeling because differencing eliminates fixed effects and, in the case of a unit root, differencing transforms the data to stationarity, thereby addressing both incidental parameter problems and the possible effects of nonstationarity. This paper draws attention to certain pathologies that arise in the use of FDML that have gone unnoticed in the literature and that affect both finite sample performance and asymptotics. FDML uses the Gaussian likelihood function for first differenced data and parameter estimation is based on the whole domain over which the log-likelihood is defined. However, extending the domain of the likelihood beyond the stationary region has certain consequences that have a major effect on finite sample and asymptotic performance. First, the extended likelihood is not the true likelihood even in the Gaussian case and it has a finite upper bound of definition. Second, it is often bimodal, and one of its peaks can be so peculiar that numerical maximization of the extended likelihood frequently fails to locate the global maximum. As a result of these pathologies, the FDML estimator is a restricted estimator, numerical implementation is not straightforward and asymptotics are hard to derive in cases where the peculiarity occurs with non-negligible probabilities. The peculiarities in the likelihood are found to be particularly marked in time series with a unit root. In this case, the asymptotic distribution of the FDMLE has bounded support and its density is infinite at the upper bound when the time series sample size T→∞. As the panel width n→∞ the pathology is removed and the limit theory is normal. This result applies even for T fixed and we present an expression for the asymptotic distribution which does not depend on the time dimension. We also show how this limit theory depends on the form of the extended likelihood.

16.
This paper concerns estimating parameters in a high-dimensional dynamic factor model by the method of maximum likelihood. To accommodate missing data in the analysis, we propose a new model representation for the dynamic factor model. It allows the Kalman filter and related smoothing methods to evaluate the likelihood function and to produce optimal factor estimates in a computationally efficient way when missing data is present. The implementation details of our methods for signal extraction and maximum likelihood estimation are discussed. The computational gains of the new devices are presented based on simulated data sets with varying numbers of missing entries.
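The key mechanism — the Kalman filter evaluating the likelihood while skipping updates at missing observations — can be sketched in the simplest state-space setting, a univariate local-level model (the paper's model is high-dimensional; this scalar version is only illustrative, and all parameter values are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(7)
T = 300
state = np.cumsum(rng.normal(0.0, 0.5, T))           # random-walk state
y = state + rng.normal(0.0, 1.0, T)                  # noisy observations
y[rng.choice(T, size=60, replace=False)] = np.nan    # 20% missing at random

def kalman_loglik(y, q=0.25, r=1.0):
    """Local-level Kalman filter; missing observations skip the update step."""
    a, p, ll = 0.0, 1e7, 0.0          # diffuse-ish initialization
    for yt in y:
        p = p + q                     # prediction step (state variance grows by q)
        if not np.isnan(yt):
            f = p + r                 # innovation variance
            v = yt - a                # innovation
            ll += -0.5 * (np.log(2.0 * np.pi * f) + v * v / f)
            k = p / f                 # Kalman gain
            a, p = a + k * v, p * (1.0 - k)
        # if yt is missing: the prediction carries forward and the
        # likelihood simply receives no contribution from this period
    return ll

ll = kalman_loglik(y)
```

Because missing periods contribute nothing to the likelihood and require no extra machinery, the same filter pass handles any pattern of missing entries, which is the computational point the abstract makes.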

17.
This paper considers the semiparametric estimation of binary choice sample selection models under a joint symmetry assumption. Our approaches overcome various drawbacks associated with existing estimators. In particular, our method provides root-n consistent estimators for both the intercept and slope parameters of the outcome equation in a heteroscedastic framework, without the usual cross equation exclusion restriction or parametric specification for the error distribution and/or the form of heteroscedasticity. Our two-step estimators are shown to be consistent and asymptotically normal. A Monte Carlo simulation study indicates the usefulness of our approaches.

18.
This work is concerned with asymptotic properties of the bivariate survival function estimator using the functional relationship between marginal survival functions and a class of copulas for the dependence structure. Specifically, we study consistency and weak convergence of the bivariate survival function estimator obtained by considering a two-step estimation procedure. The results follow from a key decomposition of the bivariate survival function into quantities that can be studied separately. In particular, we use results relating to almost sure and weak convergence of estimators, almost sure convergence of uniformly equicontinuous functions, and the delta method for functionals.
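The two-step construction described above — marginal survival functions estimated first, then combined through a copula — can be sketched with empirical marginals and a Clayton copula taken as known (the paper treats a general copula class; the copula choice, parameter, and simulated data here are all hypothetical).

```python
import numpy as np

rng = np.random.default_rng(9)
n, theta = 1000, 2.0
# Clayton-dependent pair with unit-exponential margins (gamma frailty construction)
e = rng.exponential(size=(n, 2))
g = rng.gamma(1.0 / theta, size=n)           # shared gamma frailty
u = (1.0 + e[:, 0] / g) ** (-1.0 / theta)    # uniform marginals, Clayton dependence
v = (1.0 + e[:, 1] / g) ** (-1.0 / theta)
x, y = -np.log(u), -np.log(v)

# step 1: marginal survival functions, estimated empirically
def surv_emp(t, sample):
    return float(np.mean(sample > t))

# step 2: plug the marginal estimates into the (assumed known) copula
def clayton_joint_surv(s1, s2, theta=2.0):
    return (s1 ** (-theta) + s2 ** (-theta) - 1.0) ** (-1.0 / theta)

t1, t2 = 0.5, 0.5
S_twostep = clayton_joint_surv(surv_emp(t1, x), surv_emp(t2, y))
S_direct = float(np.mean((x > t1) & (y > t2)))   # direct empirical joint survival
```

The paper's decomposition separates the error in the marginal estimates (step 1) from the smooth copula mapping (step 2), which is where the delta method for functionals enters the asymptotic analysis.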

19.
We consider in this paper a parallel system consisting of \(\eta \) identical components. Each component works independently of the others and has a Weibull distributed inter-failure time. When the system fails, we assume that the repair maintenance is imperfect, following the Arithmetic Reduction of Age models (\(ARA_{m}\)) proposed by Doyen and Gaudoin. The purpose of this paper is to generate simulated failure data for the whole system in order to forecast the behavior of the failure process. In addition, we estimate the maintenance efficiency and the reliability parameters of an imperfect repair following \(ARA_{m}\) models using the maximum likelihood estimation method. Our method is tested on several data sets available from related sources. The real data set corresponds to the times between failures of a compressor, which are assessed with a likelihood ratio (LR) test. The importance and the effect of the memory order of the imperfect repair classes (\(ARA_{m}\)) are also discussed using the LR test.
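Simulating failure data under an imperfect-repair model of this family can be sketched for the memory-one case, \(ARA_1\), where each repair reduces the system's virtual age by a fixed factor. Inter-failure times are drawn by inverting the conditional Weibull survival function given the current virtual age; the shape, scale, and repair-efficiency values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)
beta_sh, eta = 2.0, 100.0   # hypothetical Weibull shape and scale
rho = 0.4                   # repair efficiency in (0, 1); ARA_1 reduces the age by rho

def simulate_ara1(n_fail):
    """Simulate n_fail successive failure times under ARA_1 imperfect repair."""
    times, age, t = [], 0.0, 0.0
    for _ in range(n_fail):
        u = rng.uniform()
        # inter-failure time x solves H(age + x) - H(age) = -log(u),
        # with cumulative hazard H(t) = (t / eta) ** beta_sh
        x = eta * ((age / eta) ** beta_sh - np.log(u)) ** (1.0 / beta_sh) - age
        t += x
        times.append(t)
        age = (1.0 - rho) * (age + x)   # ARA_1: repair rejuvenates the virtual age
    return np.array(times)

failures = simulate_ara1(50)
```

Setting `rho = 1` recovers perfect repair (a renewal process) and `rho = 0` gives minimal repair, so the same simulator spans the repair-efficiency range the maximum likelihood step must estimate.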

20.
A Bayesian approach to the joint estimation of population proportion and sensitivity level of a stigmatizing attribute is proposed by adopting a two-stage randomized response procedure. In the first stage the direct question method is carried out for each respondent, while in the second stage the randomization is exclusively carried out for those individuals declaring their membership in the non-sensitive group. The randomization is implemented on the basis of Franklin’s procedure. The proposed Bayesian method avoids the drawbacks usually connected with the use of maximum-likelihood or moment estimation.
