Similar Articles
 20 similar articles found (search time: 15 ms)
1.
This paper considers nonparametric identification of nonlinear dynamic models for panel data with unobserved covariates. Including such unobserved covariates may control for both the individual-specific unobserved heterogeneity and the endogeneity of the explanatory variables. Without specifying the distribution of the initial condition with the unobserved variables, we show that the models are nonparametrically identified from two periods of the dependent variable Yit and three periods of the covariate Xit. The main identifying assumptions include high-level injectivity restrictions and require that the evolution of the observed covariates depends on the unobserved covariates but not on the lagged dependent variable. We also propose a sieve maximum likelihood estimator (MLE) and focus on two classes of nonlinear dynamic panel data models, i.e., dynamic discrete choice models and dynamic censored models. We present the asymptotic properties of the sieve MLE and investigate the finite sample properties of these sieve-based estimators through a Monte Carlo study. An intertemporal female labor force participation model is estimated as an empirical illustration using a sample from the Panel Study of Income Dynamics (PSID).

2.
This paper proposes a Bayesian nonparametric modeling approach for the return distribution in multivariate GARCH models. In contrast to the parametric literature, the return distribution can display general forms of asymmetry and thick tails. An infinite mixture of multivariate normals is given a flexible Dirichlet process prior. The GARCH functional form enters into each of the components of this mixture. We discuss conjugate methods that allow for scale mixtures and nonconjugate methods which provide mixing over both the location and scale of the normal components. MCMC methods are introduced for posterior simulation and computation of the predictive density. Bayes factors and density forecasts with comparisons to GARCH models with Student-t innovations demonstrate the gains from our flexible modeling approach.

3.
First difference maximum likelihood (FDML) seems an attractive estimation methodology in dynamic panel data modeling because differencing eliminates fixed effects and, in the case of a unit root, differencing transforms the data to stationarity, thereby addressing both incidental parameter problems and the possible effects of nonstationarity. This paper draws attention to certain pathologies that arise in the use of FDML that have gone unnoticed in the literature and that affect both finite sample performance and asymptotics. FDML uses the Gaussian likelihood function for first differenced data and parameter estimation is based on the whole domain over which the log-likelihood is defined. However, extending the domain of the likelihood beyond the stationary region has certain consequences that have a major effect on finite sample and asymptotic performance. First, the extended likelihood is not the true likelihood even in the Gaussian case and it has a finite upper bound of definition. Second, it is often bimodal, and one of its peaks can be so peculiar that numerical maximization of the extended likelihood frequently fails to locate the global maximum. As a result of these pathologies, the FDML estimator is a restricted estimator, numerical implementation is not straightforward and asymptotics are hard to derive in cases where the peculiarity occurs with non-negligible probabilities. The peculiarities in the likelihood are found to be particularly marked in time series with a unit root. In this case, the asymptotic distribution of the FDMLE has bounded support and its density is infinite at the upper bound when the time series sample size T→∞. As the panel width n→∞ the pathology is removed and the limit theory is normal. This result applies even for T fixed and we present an expression for the asymptotic distribution which does not depend on the time dimension. We also show how this limit theory depends on the form of the extended likelihood.
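The "extended likelihood with a finite domain of definition" can be illustrated numerically. Below is a minimal numpy sketch (not the paper's estimator; parameter values are hypothetical): the Gaussian log-likelihood of first-differenced AR(1) data is built from the stationary-region autocovariances of Δy and evaluated wherever the implied Toeplitz covariance matrix remains positive definite, returning -inf outside that domain.

```python
import numpy as np

def fd_loglik(dy, rho, sigma2=1.0):
    """Gaussian log-likelihood of first-differenced AR(1) data.

    Uses the stationary-region autocovariance formulas for Δy,
    'extended' over the whole region where the implied covariance
    matrix stays positive definite (illustrative sketch only).
    """
    T = len(dy)
    gamma = np.empty(T)
    gamma[0] = 2.0 * sigma2 / (1.0 + rho)           # Var(Δy)
    for k in range(1, T):
        gamma[k] = -sigma2 * (1.0 - rho) * rho ** (k - 1) / (1.0 + rho)
    V = np.array([[gamma[abs(i - j)] for j in range(T)] for i in range(T)])
    try:
        L = np.linalg.cholesky(V)                   # fails off the domain
    except np.linalg.LinAlgError:
        return -np.inf
    logdet = 2.0 * np.log(np.diag(L)).sum()
    quad = dy @ np.linalg.solve(V, dy)
    return -0.5 * (T * np.log(2 * np.pi) + logdet + quad)

# simulate one AR(1) series and profile the extended likelihood over rho
rng = np.random.default_rng(0)
T, rho0 = 50, 0.5
y = np.zeros(T + 1)
for t in range(1, T + 1):
    y[t] = rho0 * y[t - 1] + rng.standard_normal()
dy = np.diff(y)
grid = np.linspace(-0.9, 0.99, 40)
rho_hat = grid[int(np.argmax([fd_loglik(dy, r) for r in grid]))]
```

Note that at rho = 2 the autocovariances violate |γ(k)| ≤ γ(0), so the Cholesky factorization fails and the extended likelihood is undefined, consistent with the finite upper bound of definition discussed above.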

4.
We consider estimation of the regression function in a semiparametric binary regression model defined through an appropriate link function (with emphasis on the logistic link) using likelihood-ratio based inversion. The dichotomous response variable Δ is influenced by a set of covariates that can be partitioned as (X, Z) where Z (real valued) is the covariate of primary interest and X (vector valued) denotes a set of control variables. For any fixed X, the conditional probability of the event of interest (Δ = 1) is assumed to be a non-decreasing function of Z. The effect of the control variables is captured by a regression parameter β. We show that the baseline conditional probability function (corresponding to X = 0) can be estimated by isotonic regression procedures and develop a likelihood ratio based method for constructing asymptotic confidence intervals for the conditional probability function (the regression function) that avoids the need to estimate nuisance parameters. Interestingly enough, the calibration of the likelihood ratio based confidence sets for the regression function no longer involves the usual χ2 quantiles, but those of the distribution of a new random variable that can be characterized as a functional of convex minorants of Brownian motion with quadratic drift. Confidence sets for the regression parameter β can however be constructed using asymptotically χ2 likelihood ratio statistics. The finite sample performance of the methods is assessed via a simulation study. The techniques of the paper are applied to data sets on primary school attendance among children belonging to different socio-economic groups in rural India.
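The isotonic-regression step above can be sketched with the classical pool-adjacent-violators algorithm (PAVA): sorting observations by Z and pooling adjacent averages of the binary responses yields a monotone estimate of P(Δ = 1 | Z). This is a generic PAVA sketch on simulated data, not the paper's full likelihood-ratio inversion.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = np.sort(rng.uniform(-3, 3, n))                # covariate of interest, sorted
p_true = 1.0 / (1.0 + np.exp(-z))                 # true monotone probability
delta = (rng.uniform(size=n) < p_true).astype(float)  # binary response

def pava(y, w=None):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    means, weights, sizes = [], [], []
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); sizes.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(means) > 1 and means[-2] > means[-1]:
            m, ww, s = means.pop(), weights.pop(), sizes.pop()
            tot = weights[-1] + ww
            means[-1] = (weights[-1] * means[-1] + ww * m) / tot
            weights[-1] = tot
            sizes[-1] += s
    return np.repeat(means, sizes)

p_hat = pava(delta)   # isotonic estimate of P(delta = 1 | z)
```

Because the fit pools averages of 0/1 responses, the estimate automatically stays in [0, 1] and is non-decreasing in z.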

5.
In the paper, we propose residual based tests for cointegration in general panels with cross-sectional dependency, endogeneity and various heterogeneities. The residuals are obtained from the usual least squares estimation of the postulated cointegrating relationships from each individual unit, and the nonlinear IV panel unit root testing procedure is applied to the panels of the fitted residuals using as instruments the nonlinear transformations of the adaptively fitted lagged residuals. The t-ratio, based on the nonlinear IV estimator, is then constructed to test for unit root in the fitted residuals for each cross-section. We show that such nonlinear IV t-ratios are asymptotically normal and cross-sectionally independent under the null hypothesis of no cointegration. The average or the minimum of the IV t-ratios can, therefore, be used to test for the null of a fully non-cointegrated panel against the alternative of a mixed panel, i.e., a panel with only some cointegrated units. We also consider the maximum of the IV t-ratios to test for a mixed panel against a fully cointegrated panel. The critical values of the minimum, maximum as well as the average tests are easily obtained from the standard normal distribution function. Our simulation results indicate that the residual based tests for cointegration perform quite well in finite samples.
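For a single cross-section, the nonlinear IV t-ratio idea can be sketched as follows: regress the differenced residual on its lag, instrumenting the lag with a nonlinear (integrable) transformation of itself. The instrument and the scale constant c here are hypothetical illustrations, not the paper's specific choices.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
c = 3.0 / np.sqrt(n)                    # hypothetical instrument scale
e = np.cumsum(rng.standard_normal(n))   # fitted residuals with a unit root (H0)
de, lag = np.diff(e), e[:-1]

# nonlinear IV: instrument the lagged residual by an integrable transform
F = lag * np.exp(-c * np.abs(lag))
rho_iv = (F @ de) / (F @ lag)           # IV estimate of the AR coefficient on the lag
s2 = np.var(de - rho_iv * lag)
t_iv = (F @ de) / np.sqrt(s2 * (F @ F)) # nonlinear IV t-ratio
```

Per the abstract, t-ratios of this type are asymptotically standard normal under the null of no cointegration, so panel tests based on their average, minimum, or maximum use standard normal critical values.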

6.
We show that statistical inference on the risk premia in linear factor models that is based on the Fama–MacBeth (FM) and generalized least squares (GLS) two-pass risk premia estimators is misleading when the β’s are small and/or the number of assets is large. We propose novel statistics, based on the maximum likelihood estimator of Gibbons [Gibbons, M., 1982. Multivariate tests of financial models: A new approach. Journal of Financial Economics 10, 3–27], which remain trustworthy in these cases. The inadequacy of the FM and GLS two-pass t/Wald statistics is highlighted in a power and size comparison using quarterly portfolio returns from Lettau and Ludvigson [Lettau, M., Ludvigson, S., 2001. Resurrecting the (C)CAPM: A cross-sectional test when risk premia are time-varying. Journal of Political Economy 109, 1238–1287]. The power and size comparison shows that the FM and GLS two-pass t/Wald statistics can be severely size distorted. The 95% confidence sets for the risk premia in the above-cited work that result from the novel statistics differ substantially from those that result from the FM and GLS two-pass t-statistics. They show support for the human capital asset pricing model although the 95% confidence set for the risk premia on labor income growth is unbounded. The 95% confidence sets show no support for the (scaled) consumption asset pricing model, since the 95% confidence set of the risk premia on the scaled consumption growth consists of the whole real line, but do not reject it either.
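For reference, the Fama–MacBeth two-pass procedure the abstract criticizes works as follows: time-series regressions give each asset's β, then period-by-period cross-sectional regressions of returns on those β's give a series of premium estimates whose mean and standard error form the FM t-statistic. A minimal one-factor numpy sketch on simulated data (all parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, lam0 = 240, 25, 0.5                 # periods, assets, true risk premium
f = rng.standard_normal(T)                # factor realisations (mean ~ 0)
beta0 = rng.uniform(0.5, 1.5, N)
R = lam0 * beta0 + np.outer(f, beta0) + 0.5 * rng.standard_normal((T, N))

# Pass 1: time-series OLS of each asset's returns on the factor -> betas
X = np.column_stack([np.ones(T), f])
beta_hat = np.linalg.lstsq(X, R, rcond=None)[0][1]

# Pass 2: cross-sectional OLS each period -> lambda_t, then average
Z = np.column_stack([np.ones(N), beta_hat])
lam_t = np.linalg.lstsq(Z, R.T, rcond=None)[0][1]   # length-T premia series
lam_hat = lam_t.mean()
t_fm = lam_hat / (lam_t.std(ddof=1) / np.sqrt(T))   # FM t-statistic
```

The abstract's point is that when the β's are small (weak cross-sectional spread) or N is large, inference from t_fm can be severely size distorted, motivating the likelihood-based alternatives.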

7.
This paper analyzes the properties of a class of estimators, tests, and confidence sets (CSs) when the parameters are not identified in parts of the parameter space. Specifically, we consider estimator criterion functions that are sample averages and are smooth functions of a parameter θ. This includes log likelihood, quasi-log likelihood, and least squares criterion functions.

8.
This article studies density and parameter estimation problems for nonlinear parametric models with conditional heteroscedasticity. We propose a simple density estimate that is particularly useful for studying the stationary density of nonlinear time series models. Under a general dependence structure, we establish the root-n consistency of the proposed density estimate. For parameter estimation, a Bahadur type representation is obtained for the conditional maximum likelihood estimate. The parameter estimate is shown to be asymptotically efficient in the sense that its limiting variance attains the Cramér–Rao lower bound. The performance of our density estimate is studied by simulations.

9.
10.
This paper introduces the concept of risk parameter in conditional volatility models of the form εt = σt(θ0)ηt and develops statistical procedures to estimate this parameter. For a given risk measure r, the risk parameter is expressed as a function of the volatility coefficients θ0 and the risk, r(ηt), of the innovation process. A two-step method is proposed to successively estimate these quantities. An alternative one-step approach, relying on a reparameterization of the model and the use of a non Gaussian QML, is proposed. Asymptotic results are established for smooth risk measures, as well as for the Value-at-Risk (VaR). Asymptotic comparisons of the two approaches for VaR estimation suggest a superiority of the one-step method when the innovations are heavy-tailed. For standard GARCH models, the comparison only depends on characteristics of the innovations distribution, not on the volatility parameters. Monte-Carlo experiments and an empirical study illustrate the superiority of the one-step approach for financial series.
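The two-step method can be sketched in a simplified setting: step 1 estimates the volatility coefficients by Gaussian QML (here an ARCH(1) model fitted by grid search, a deliberate simplification of the paper's GARCH setting), and step 2 estimates the risk of the innovations from the empirical quantile of the standardized residuals; the conditional VaR is their product. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n, omega0, alpha0 = 4000, 0.1, 0.3
eta = rng.standard_normal(n)                     # innovations
eps = np.zeros(n)
sig2 = np.full(n, omega0 / (1 - alpha0))
for t in range(1, n):
    sig2[t] = omega0 + alpha0 * eps[t - 1] ** 2
    eps[t] = np.sqrt(sig2[t]) * eta[t]

def qml_loss(omega, alpha):
    """Gaussian QML objective for ARCH(1) (minimized by grid search here)."""
    s2 = np.empty(n)
    s2[0] = np.var(eps)
    s2[1:] = omega + alpha * eps[:-1] ** 2
    return np.mean(np.log(s2) + eps ** 2 / s2)

# Step 1: volatility coefficients by QML
grid_w = np.linspace(0.05, 0.2, 16)
grid_a = np.linspace(0.1, 0.5, 17)
_, w_hat, a_hat = min((qml_loss(w, a), w, a) for w in grid_w for a in grid_a)

# Step 2: risk of the innovations = empirical quantile of residuals
s2 = np.empty(n)
s2[0] = np.var(eps)
s2[1:] = w_hat + a_hat * eps[:-1] ** 2
resid = eps / np.sqrt(s2)
var_eta = -np.quantile(resid, 0.05)              # 5% VaR of the innovation
var_next = np.sqrt(w_hat + a_hat * eps[-1] ** 2) * var_eta  # conditional VaR
```

The one-step alternative in the paper instead reparameterizes the model so the VaR appears directly as a coefficient estimated by non-Gaussian QML, which is what drives its advantage under heavy-tailed innovations.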

11.
This paper determines coverage probability errors of both delta method and parametric bootstrap confidence intervals (CIs) for the covariance parameters of stationary long-memory Gaussian time series. CIs for the long-memory parameter d0 are included. The results establish that the bootstrap provides higher-order improvements over the delta method. Analogous results are given for tests. The CIs and tests are based on one or other of two approximate maximum likelihood estimators. The first estimator solves the first-order conditions with respect to the covariance parameters of a “plug-in” log-likelihood function that has the unknown mean replaced by the sample mean. The second estimator does likewise for a plug-in Whittle log-likelihood.

12.
We study Neyman–Pearson testing and Bayesian decision making based on observations of the price dynamics (Xt : t ∈ [0, T]) of a financial asset, when the hypothesis is the classical geometric Brownian motion with a given constant growth rate and the alternative is a different random diffusion process with a given, possibly price-dependent, growth rate. Examples of asset price observations are introduced and used throughout the paper to demonstrate the applicability of the theory. By a rigorous mathematical approach, we obtain exact formulae and bounds for the most common statistical characteristics of testing and decision making, such as the power of test (type II error probability), the Bayes factor and its moments (power divergences), and the Bayes risk or Bayes error. These bounds can be much more easily evaluated than the exact formulae themselves and, consequently, they are useful for practical applications. An important theoretical conclusion of this paper is that for the class of alternatives considered neither the risk nor the errors converge to zero faster than exponentially in the observation time T. We illustrate in concrete decision situations that the actual rate of convergence is well approximated by the bounds given in the paper.

13.
Given a random sample from a continuous and positive density f, the logistic transformation is applied and a log density estimate is provided by using a basis functions approach. The number of basis functions acts as the smoothing parameter and it is estimated by minimizing a penalized proxy of the Kullback–Leibler distance which includes as particular cases the AIC and BIC criteria. We prove that this estimator is consistent.

14.
Many finance questions require the predictive distribution of returns. We propose a bivariate model of returns and realized volatility (RV), and explore which features of that time-series model contribute to superior density forecasts over horizons of 1 to 60 days out of sample. This term structure of density forecasts is used to investigate the importance of: the intraday information embodied in the daily RV estimates; the functional form for log(RV) dynamics; the timing of information availability; and the assumed distributions of both return and log(RV) innovations. We find that a joint model of returns and volatility that features two components for log(RV) provides a good fit to S&P 500 and IBM data, and is a significant improvement over an EGARCH model estimated from daily returns.

15.
16.
17.
The size properties of a two-stage test in a panel data model are investigated where in the first stage a Hausman (1978) specification test is used as a pretest of the random effects specification and in the second stage, a simple hypothesis about a component of the parameter vector is tested, using a t-statistic that is based on either the random effects or the fixed effects estimator depending on the outcome of the Hausman pretest. It is shown that the asymptotic size of the two-stage test equals 1 for empirically relevant specifications of the parameter space. The size distortion is caused mainly by the poor power properties of the pretest. Given these results, we recommend using a t-statistic based on the fixed effects estimator instead of the two-stage procedure.
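The two-stage procedure under study can be sketched for a one-regressor panel: compute the within (FE) and quasi-demeaned GLS (RE) estimators, form the Hausman statistic from their difference, and base the second-stage t-statistic on whichever estimator the pretest selects. This is a minimal numpy sketch on simulated data where the RE specification holds (all values hypothetical).

```python
import numpy as np

rng = np.random.default_rng(5)
N, T, beta0 = 200, 5, 1.0
alpha = rng.standard_normal(N)                      # random effects
x = rng.standard_normal((N, T))                     # exogenous regressor
y = beta0 * x + alpha[:, None] + rng.standard_normal((N, T))

# Fixed effects (within) estimator
xw, yw = x - x.mean(1, keepdims=True), y - y.mean(1, keepdims=True)
b_fe = (xw * yw).sum() / (xw ** 2).sum()
s2_e = ((yw - b_fe * xw) ** 2).sum() / (N * (T - 1) - 1)
var_fe = s2_e / (xw ** 2).sum()

# Random effects (quasi-demeaned GLS) estimator
s2_between = np.var(y.mean(1) - b_fe * x.mean(1), ddof=1)
s2_a = max(s2_between - s2_e / T, 0.0)              # variance of alpha_i
theta = 1.0 - np.sqrt(s2_e / (s2_e + T * s2_a))
xr = x - theta * x.mean(1, keepdims=True)
yr = y - theta * y.mean(1, keepdims=True)
b_re = (xr * yr).sum() / (xr ** 2).sum()
var_re = s2_e / (xr ** 2).sum()

# Stage 1: Hausman pretest; Stage 2: t-statistic from the selected estimator
H = (b_fe - b_re) ** 2 / max(var_fe - var_re, 1e-12)
use_fe = H > 3.841                                   # chi2(1) 5% critical value
b, v = (b_fe, var_fe) if use_fe else (b_re, var_re)
t_stat = (b - beta0) / np.sqrt(v)
```

The paper's result is that the size of this data-dependent two-stage t-test can approach 1 over relevant parts of the parameter space, which is why the authors recommend skipping the pretest and always using the FE-based t-statistic.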

18.
This paper reconsiders a block bootstrap procedure for Quasi Maximum Likelihood estimation of GARCH models, based on the resampling of the likelihood function, as proposed by Gonçalves and White [2004. Maximum likelihood and the bootstrap for nonlinear dynamic models. Journal of Econometrics 119, 199–219]. First, we provide necessary conditions and sufficient conditions, in terms of moments of the innovation process, for the existence of the Edgeworth expansion of the GARCH(1,1) estimator, up to the k-th term. Second, we provide sufficient conditions for higher order refinements for equally tailed and symmetric test statistics. In particular, the bootstrap estimator based on resampling the likelihood has the same higher order improvements in terms of error in the rejection probabilities as those in Andrews [2002. Higher-order improvements of a computationally attractive k-step bootstrap for extremum estimators. Econometrica 70, 119–162].
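The moving-blocks resampling that underlies such procedures can be sketched in a toy setting: for dependent data, whole blocks (rather than single observations) are resampled so that short-range dependence is preserved, and the bootstrap distribution of the estimator yields an equally tailed interval. The mean of an AR(1) series stands in here for the GARCH QML estimator (a deliberate simplification; block length and sample size are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(6)
n, block = 500, 25
# AR(1) data: dependent, so blocks must be resampled, not single draws
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()

def block_resample(data, block_len, rng):
    """Moving-blocks resample preserving short-range dependence."""
    m = len(data)
    starts = rng.integers(0, m - block_len + 1, size=m // block_len)
    return np.concatenate([data[s:s + block_len] for s in starts])

# bootstrap distribution of the estimator (the sample mean here)
boot = np.array([block_resample(x, block, rng).mean() for _ in range(1000)])
q = np.quantile(boot - x.mean(), [0.025, 0.975])
ci_lo, ci_hi = x.mean() - q[1], x.mean() - q[0]   # equally tailed interval
```

In the paper's setting the objects being resampled are blocks of likelihood contributions for the GARCH model, and the point is that this delivers the same higher-order refinements as the k-step bootstrap of Andrews (2002).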

19.
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

20.
It is well-known that size adjustments based on bootstrapping the t-statistic perform poorly when instruments are weakly correlated with the endogenous explanatory variable. In this paper, we provide a theoretical proof that guarantees the validity of the bootstrap for the score statistic. This theory does not follow from standard results, since the score statistic is not a smooth function of sample means and some parameters are not consistently estimable when the instruments are uncorrelated with the explanatory variable.
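The score statistic in question evaluates the restricted residuals at the hypothesized coefficient, so it does not require a consistent estimate of the first-stage strength. A minimal sketch for a single weak instrument, with a recentered nonparametric bootstrap of the statistic under the null (the design and constants are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(8)
n, pi, beta0 = 400, 0.05, 1.0                    # pi small: weak instrument
z = rng.standard_normal(n)
v = rng.standard_normal(n)
u = 0.8 * v + 0.6 * rng.standard_normal(n)       # endogeneity: corr(u, v) > 0
x = pi * z + v
y = beta0 * x + u

def score_stat(y, x, z, b0):
    """LM/score statistic for H0: beta = b0 (no first-stage estimate needed)."""
    e = y - b0 * x
    return (z @ e) ** 2 / (np.var(e) * (z @ z))

S = score_stat(y, x, z, beta0)

# recentered pairs bootstrap of the score statistic under H0
e0 = y - beta0 * x                               # restricted residuals
w = z * e0 - np.mean(z * e0)                     # recentred scores
reps = 999
boot = np.empty(reps)
for b in range(reps):
    idx = rng.integers(0, n, n)
    boot[b] = w[idx].sum() ** 2 / (np.var(e0[idx]) * (z[idx] @ z[idx]))
reject = S > np.quantile(boot, 0.95)
```

Because the statistic is quadratic in a non-smooth functional of sample means, the bootstrap's validity here is exactly what requires the non-standard argument the abstract refers to.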


Copyright©北京勤云科技发展有限公司  京ICP备09084417号