Similar documents
20 similar documents found (search time: 78 ms)
1.
We consider residuals for the linear model with a general covariance structure. In contrast to the situation where observations are independent, there are several alternative definitions. We draw attention to three quite distinct types of residuals: the marginal residuals, the model‐specified residuals and the full‐conditional residuals. We adopt a very broad perspective including linear mixed models, time series and smoothers, as well as models for spatial and multivariate data. We concentrate on defining these different residual types and discussing their interrelationships. The full‐conditional residuals are seen to play several important roles.
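The distinction between marginal and full‐conditional residuals can be made concrete in a toy random‐intercept model. The sketch below is a generic illustration under assumed known variance components and a simple shrinkage (BLUP) formula, not the paper's general‐covariance treatment:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy random-intercept model y_ij = b * x_ij + u_i + e_ij. Marginal
# residuals subtract only the fixed-effect fit; full-conditional
# residuals also subtract the predicted random intercepts.
g, per = 30, 10                          # groups, observations per group
n = g * per
grp = np.repeat(np.arange(g), per)
x = rng.normal(size=n)
u = rng.normal(scale=1.0, size=g)        # random intercepts
y = 2.0 * x + u[grp] + rng.normal(scale=0.5, size=n)

b_hat = np.sum(x * y) / np.sum(x * x)    # OLS slope (consistent here)
resid_marg = y - b_hat * x               # marginal residuals

# BLUPs of the random intercepts with variance components taken as
# known: shrink each group's mean marginal residual toward zero.
sigma_u2, sigma_e2 = 1.0, 0.25
shrink = sigma_u2 / (sigma_u2 + sigma_e2 / per)
u_hat = shrink * np.array([resid_marg[grp == i].mean() for i in range(g)])
resid_cond = y - b_hat * x - u_hat[grp]  # full-conditional residuals
```

The full‐conditional residuals have much smaller spread here because the group‐level variation has been predicted out.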

2.
In this article, we propose a mean linear regression model where the response variable is inverse gamma distributed, using a new parameterization of this distribution that is indexed by mean and precision parameters. The main advantage of the new parameterization is the straightforward interpretation of the regression coefficients in terms of the expectation of the positive response variable, as usual in the context of generalized linear models. The variance function of the proposed model has a quadratic form. The inverse gamma distribution is a member of the exponential family of distributions and has some distributions commonly used for parametric models in survival analysis as special cases. We compare the proposed model to several alternatives and illustrate its advantages and usefulness. With a generalized linear model approach that takes advantage of exponential family properties, we discuss model estimation (by maximum likelihood), further inferential quantities and diagnostic tools. A Monte Carlo experiment is conducted to evaluate the performance of these estimators in finite samples, with a discussion of the obtained results. A real application using a minerals data set collected by the Department of Mines of the University of Atacama, Chile, demonstrates the practical potential of the proposed model.
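One mean–precision parameterization consistent with the quadratic variance function described here sets shape a = φ + 2 and scale b = μ(φ + 1), so that E[Y] = μ and Var[Y] = μ²/φ. The sketch below fits such a model with a log link by maximum likelihood; the exact parameterization and the simulated data are illustrative assumptions, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import invgamma

rng = np.random.default_rng(1)

# Simulate an inverse gamma response with log-link mean mu_i.
# Assumed parameterization: shape a = phi + 2, scale b = mu * (phi + 1),
# which gives E[Y] = mu and Var[Y] = mu**2 / phi (quadratic variance).
n, beta_true, phi_true = 500, np.array([0.5, 1.2]), 4.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = np.exp(X @ beta_true)
y = invgamma.rvs(a=phi_true + 2, scale=mu * (phi_true + 1), random_state=rng)

def negloglik(theta):
    beta, phi = theta[:-1], np.exp(theta[-1])  # log(phi) keeps phi > 0
    m = np.exp(X @ beta)
    return -invgamma.logpdf(y, a=phi + 2, scale=m * (phi + 1)).sum()

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
beta_hat, phi_hat = fit.x[:-1], np.exp(fit.x[-1])
```

Because the mean enters through a log link, each coefficient has the usual multiplicative interpretation on E[Y], as the abstract emphasizes.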

3.
In this paper, we consider a nonlinear model based on neural networks, as well as linear models, to forecast the daily volatility of the S&P 500 and FTSE 100 futures. As a proxy for daily volatility, we consider a consistent and unbiased estimator of the integrated volatility that is computed from high‐frequency intraday returns. We also consider a simple algorithm based on bagging (bootstrap aggregation) in order to specify the models analysed in this paper.
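The volatility proxy referred to here is realized variance: the sum of squared high‐frequency intraday returns, which estimates the day's integrated variance. A minimal sketch on simulated data (the interval count and variance level are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# One trading day of 5-minute log returns (78 intervals) with a known
# daily variance. Realized variance -- the sum of squared intraday
# returns -- is a consistent estimator of the integrated variance.
n_intraday, daily_var = 78, 1e-4
r = rng.normal(0.0, np.sqrt(daily_var / n_intraday), size=n_intraday)

realized_variance = np.sum(r ** 2)
realized_volatility = np.sqrt(realized_variance)
```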

4.
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster‐specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow‐up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster‐specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow‐up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within‐cluster homogeneity in outcomes. We illustrate the application of these methods using data on patients hospitalised with a heart attack, implemented in three statistical programming languages (R, SAS and Stata).
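The equivalence between the piecewise exponential model and Poisson regression can be sketched directly: expand each subject into one row per interval at risk, then fit a Poisson model to the event indicator with log(exposure time) as an offset. The simulation below (two intervals, one covariate, no random effects) is a minimal illustration of that device, not the article's full multilevel analysis:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Exponential survival times with hazard h_i = exp(b0 + b1 * x_i),
# censored at t = 2, with one cut point at t = 1.
n, b0, b1 = 400, -1.0, 0.7
x = rng.normal(size=n)
t = rng.exponential(1.0 / np.exp(b0 + b1 * x))
event = (t < 2.0).astype(float)
t = np.minimum(t, 2.0)

rows = []  # person-period expansion: (x, interval, exposure, event)
for xi, ti, di in zip(x, t, event):
    for k, (lo, hi) in enumerate([(0.0, 1.0), (1.0, 2.0)]):
        if ti <= lo:
            break
        exp_time = min(ti, hi) - lo
        d = di if ti <= hi else 0.0
        rows.append((xi, k, exp_time, d))
rows = np.array(rows)
X = np.column_stack([rows[:, 1] == 0, rows[:, 1] == 1, rows[:, 0]]).astype(float)
offset, d = np.log(rows[:, 2]), rows[:, 3]

def negloglik(beta):
    eta = X @ beta + offset          # log expected events per row
    return np.sum(np.exp(eta)) - np.sum(d * eta)

beta_hat = minimize(negloglik, np.zeros(3), method="BFGS").x
```

Since the true hazard is constant, the two interval‐specific intercepts both estimate b0, and the covariate coefficient estimates b1.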

5.
In this paper we review statistical methods which combine hidden Markov models (HMMs) and random effects models in a longitudinal setting, leading to the class of so‐called mixed HMMs. This class of models has several interesting features. It deals with the dependence of a response variable on covariates, serial dependence, and unobserved heterogeneity in an HMM framework. It exploits the properties of HMMs, such as the relatively simple dependence structure and the efficient computational procedure, and allows one to handle a variety of real‐world time‐dependent data. We give details of the Expectation‐Maximization algorithm for computing the maximum likelihood estimates of model parameters and we illustrate the method with two real applications describing the relationship between patent counts and research and development expenditures, and between stock and market returns via the Capital Asset Pricing Model.

6.
This paper surveys the state of the art in the econometrics of regression models with many instruments or many regressors, based on alternative (namely, dimension) asymptotics. We list critical results of dimension asymptotics that lead to better approximations of the properties of familiar and alternative estimators and tests when the instruments and/or regressors are numerous. Then, we consider the problem of estimation and inference in the basic linear instrumental variables regression setup with many strong instruments. We describe the failures of conventional estimation and inference, as well as alternative tools that restore consistency and validity. We then add various other features to the basic model, such as heteroskedasticity and instrument weakness, in each case providing a review of the existing tools for proper estimation and inference. Subsequently, we consider the related but different problem of estimation and testing in a linear mean regression with many regressors. We also describe various extensions and connections to other settings, such as panel data models, spatial models and time series models. Finally, we provide practical guidance on which tools are most suitable to use in various situations when many instruments and/or regressors turn out to be an issue.

7.
There has been considerable and controversial research over the past two decades into how successfully random effects misspecification in mixed models (i.e. assuming normality for the random effects when the true distribution is non‐normal) can be diagnosed and what its impacts are on estimation and inference. However, much of this research has focused on fixed effects inference in generalised linear mixed models. In this article, motivated by the increasing number of applications of mixed models where interest is on the variance components, we study the effects of random effects misspecification on random effects inference in linear mixed models, for which there is considerably less literature. Our findings are surprising and contrary to general belief: for point estimation, maximum likelihood estimation of the variance components under misspecification is consistent, although in finite samples both the bias and the mean squared error can be substantial. For inference, we show through theory and simulation that, under misspecification, standard likelihood ratio tests of truly non‐zero variance components can suffer from severely inflated type I errors, and confidence intervals for the variance components can exhibit considerable undercoverage. Furthermore, neither of these problems vanishes asymptotically as the number of clusters or the cluster size increases. These results have major implications for random effects inference, especially if the true random effects distribution is heavier tailed than the normal. Fortunately, simple graphical and goodness‐of‐fit measures of the random effects predictions appear to have reasonable power at detecting misspecification. We apply linear mixed models to a survey of more than 4,000 high school students within 100 schools and analyse how mathematics achievement scores vary with student attributes and across different schools. The application demonstrates the sensitivity of mixed model inference to the true but unknown random effects distribution.

8.
How to measure and model volatility is an important issue in finance. Recent research uses high‐frequency intraday data to construct ex post measures of daily volatility. This paper uses a Bayesian model‐averaging approach to forecast realized volatility. Candidate models include autoregressive and heterogeneous autoregressive specifications based on the logarithm of realized volatility, realized power variation, realized bipower variation, a jump and an asymmetric term. Applied to equity and exchange rate volatility over several forecast horizons, Bayesian model averaging provides very competitive density forecasts and modest improvements in point forecasts compared to benchmark models. We discuss the reasons for this, including the importance of using realized power variation as a predictor. Bayesian model averaging provides further improvements to density forecasts when we move away from linear models and average over specifications that allow for GARCH effects in the innovations to log‐volatility. Copyright © 2009 John Wiley & Sons, Ltd.
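One of the candidate specifications mentioned here, the heterogeneous autoregressive (HAR) model, regresses tomorrow's log realized volatility on today's value and its weekly (5‐day) and monthly (22‐day) averages. A minimal sketch on simulated data (the persistence and noise levels are illustrative, and no model averaging is performed):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated persistent log realized volatility series.
T = 1000
lrv = np.zeros(T)
for t in range(1, T):
    lrv[t] = 0.9 * lrv[t - 1] + rng.normal(scale=0.3)

def movavg(z, w):
    return np.convolve(z, np.ones(w) / w, mode="valid")

# HAR regressors: daily lag plus 5-day and 22-day averages ending at t-1.
y = lrv[22:]                      # target: log RV at t
d = lrv[21:-1]                    # daily lag
wk = movavg(lrv, 5)[17:-1]        # weekly average
mo = movavg(lrv, 22)[:-1]         # monthly average
X = np.column_stack([np.ones_like(d), d, wk, mo])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The three slope coefficients capture volatility components at different horizons; Bayesian model averaging would then weight this specification against the others listed in the abstract.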

9.
We propose the construction of copulas through the inversion of nonlinear state space models. These copulas allow for new time series models that have the same serial dependence structure as a state space model, but with an arbitrary marginal distribution, and flexible density forecasts. We examine the time series properties of the copulas, outline serial dependence measures, and estimate the models using likelihood-based methods. Copulas constructed from three example state space models are considered: a stochastic volatility model with an unobserved component, a Markov switching autoregression, and a Gaussian linear unobserved component model. We show that all three inversion copulas with flexible margins improve the fit and density forecasts of quarterly U.S. broad inflation and electricity inflation.

10.
We compare alternative forecast pooling methods and 58 forecasts from linear, time‐varying and non‐linear models, using a very large dataset of about 500 macroeconomic variables for the countries in the European Monetary Union. On average, combination methods work well but single non‐linear models can outperform them for several series. The performance of pooled forecasts, and of non‐linear models, improves when focusing on a subset of unstable series, but the gains are minor. Finally, on average over the EMU countries, the pooled forecasts behave well for industrial production growth, unemployment and inflation, but they are often beaten by non‐linear models for each country and variable.
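Two standard pooling schemes of the kind compared in such studies are the equal‐weight mean and inverse‐MSE weighting estimated on a training window. The sketch below is a generic illustration of forecast combination, not the paper's exact methods or data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated target and two competing forecasts with different accuracy.
T = 300
y = rng.normal(size=T)
f1 = y + rng.normal(scale=0.5, size=T)   # a good model
f2 = y + rng.normal(scale=1.0, size=T)   # a worse model
F = np.column_stack([f1, f2])

train, test = slice(0, 200), slice(200, T)
mse = np.mean((F[train] - y[train, None]) ** 2, axis=0)
w = (1 / mse) / np.sum(1 / mse)          # inverse-MSE weights

pooled_eq = F[test].mean(axis=1)         # equal-weight pool
pooled_w = F[test] @ w                   # performance-weighted pool
```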

11.
Bayesian priors are often used to restrain the otherwise highly over‐parametrized vector autoregressive (VAR) models. The currently available Bayesian VAR methodology does not allow the user to specify prior beliefs about the unconditional mean, or steady state, of the system. This is unfortunate as the steady state is something that economists usually claim to know relatively well. This paper develops easily implemented methods for analyzing both stationary and cointegrated VARs, in reduced or structural form, with an informative prior on the steady state. We document that prior information on the steady state leads to substantial gains in forecasting accuracy on Swedish macro data. A second example illustrates the use of informative steady‐state priors in a cointegration model of the consumption‐wealth relationship in the USA. Copyright © 2009 John Wiley & Sons, Ltd.

12.
In this article, we merge two strands of the recent econometric literature. The first is factor models based on large sets of macroeconomic variables, which have generally proven useful for forecasting; however, there is some disagreement in the literature as to the appropriate estimation method. The second is forecast methods based on mixed‐frequency data sampling (MIDAS). This regression technique can take into account unbalanced datasets that emerge from publication lags of high‐ and low‐frequency indicators, a problem practitioners have to cope with in real time. In this article, we introduce Factor MIDAS, an approach for nowcasting and forecasting low‐frequency variables like gross domestic product (GDP) exploiting information in a large set of higher‐frequency indicators. We consider three alternative MIDAS approaches (basic, smoothed and unrestricted) that provide harmonized projection methods and allow for a comparison of the alternative factor estimation methods with respect to nowcasting and forecasting. Common to all the factor estimation methods employed here is that they can handle unbalanced datasets, as typically faced in real‐time forecast applications owing to publication lags. In particular, we focus on variants of static and dynamic principal components as well as Kalman filter estimates in state‐space factor models. As an empirical illustration of the technique, we use a large monthly dataset of the German economy to nowcast and forecast quarterly GDP growth. We find that the factor estimation methods do not differ substantially, whereas the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the Factor MIDAS models, which confirms the usefulness of the mixed‐frequency techniques that can exploit timely information from business cycle indicators.
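The unrestricted MIDAS variant mentioned here amounts to regressing the quarterly target directly on the individual monthly observations within each quarter. A minimal sketch with one simulated monthly indicator standing in for the estimated factors (all loadings and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)

# Unrestricted MIDAS (U-MIDAS): one row per quarter, with that quarter's
# three monthly readings as separate regressors.
Q = 200
monthly = rng.normal(size=3 * Q)
Xq = monthly.reshape(Q, 3)                       # one row per quarter
y = Xq @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.2, size=Q)

X = np.column_stack([np.ones(Q), Xq])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Nowcast a new quarter once its three monthly readings have arrived.
new_months = np.array([0.1, -0.2, 0.4])
nowcast = coef[0] + new_months @ coef[1:]
```

The basic and smoothed MIDAS variants instead restrict the lag coefficients to a parametric weighting function, which is what makes them more parsimonious.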

13.
Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high‐dimensional covariates by low‐dimensional linear combinations is performed with minimal assumptions on the specific regression form, it enjoys attractive advantages as well as encounters unique challenges in comparison with the variable selection approach. We review the current literature on dimension reduction with an emphasis on the two most popular models, where the dimension reduction affects the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures at different levels of detail, with the intention of focusing on the ideas underlying them rather than on technicalities. We also discuss some unsolved problems in this area for potential future research.
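Sliced inverse regression (SIR) is one classical estimator for the conditional‐distribution model referred to above: standardize the covariates, slice on the response, and take the leading eigenvectors of the weighted covariance of the within‐slice means. A minimal sketch on simulated data (the single‐index model and slice count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# Single-index data: y depends on X only through one linear combination.
n, p = 2000, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = (X @ beta) ** 3 + rng.normal(scale=0.5, size=n)

# Standardize X, slice on y, average the standardized X within slices.
mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
L = np.linalg.cholesky(np.linalg.inv(cov))
Z = (X - mu) @ L
order = np.argsort(y)
H = 10
parts = np.array_split(order, H)
slice_means = np.array([Z[idx].mean(axis=0) for idx in parts])
weights = np.array([len(idx) for idx in parts]) / n
M = (slice_means * weights[:, None]).T @ slice_means

# Leading eigenvector of M, mapped back to the original X scale.
vals, vecs = np.linalg.eigh(M)
b_hat = L @ vecs[:, -1]
b_hat /= np.linalg.norm(b_hat)
```

Up to sign, b_hat estimates the direction spanning the central subspace, with no assumption on the link function beyond the usual linearity condition on the covariates.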

14.
Motivated by the common finding that linear autoregressive models often forecast better than models that incorporate additional information, this paper presents analytical, Monte Carlo and empirical evidence on the effectiveness of combining forecasts from nested models. In our analytics, the unrestricted model is true, but a subset of the coefficients is treated as being local‐to‐zero. This approach captures the practical reality that the predictive content of variables of interest is often low. We derive mean square error‐minimizing weights for combining the restricted and unrestricted forecasts. Monte Carlo and empirical analyses verify the practical effectiveness of our combination approach.
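The basic device can be sketched directly: a combined forecast puts weight λ on the unrestricted model and 1 − λ on the restricted one, and minimizing the past sum of squared combined errors over λ has a closed form. This is a generic illustration of the idea, not the paper's local‐to‐zero analytics:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated forecast errors from a restricted and an unrestricted model.
T = 400
e_r = rng.normal(scale=0.8, size=T)   # restricted-model errors
e_u = rng.normal(scale=0.6, size=T)   # unrestricted-model errors

# Combined error is (1 - lam) * e_r + lam * e_u; minimizing its sum of
# squares over lam gives the closed form below.
lam = np.sum(e_r * (e_r - e_u)) / np.sum((e_r - e_u) ** 2)
e_c = (1 - lam) * e_r + lam * e_u
```

By construction the in‐sample MSE of the combined forecast is no worse than either endpoint, since λ = 0 and λ = 1 recover the individual models.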

15.
We propose a density combination approach featuring combination weights that depend on the past forecast performance of the individual models entering the combination through a utility‐based objective function. We apply this model combination scheme to forecast stock returns, both at the aggregate level and by industry, and investigate its forecasting performance relative to a host of existing combination methods, within both the class of linear models and that of time‐varying coefficient, stochastic volatility models. Overall, we find that our combination scheme produces markedly more accurate predictions than the existing alternatives, in terms of both statistical and economic measures of out‐of‐sample predictability. Copyright © 2016 John Wiley & Sons, Ltd.

16.
Nonlinear time series models have become fashionable tools to describe and forecast a variety of economic time series. A closer look at reported empirical studies, however, reveals that these models apparently fit well in‐sample, but rarely show a substantial improvement in out‐of‐sample forecasts, at least over linear models. One of the many possible reasons for this finding is the use of inappropriate model selection criteria and forecast evaluation criteria. In this paper we therefore propose a novel criterion, which we believe does more justice to the very nature of nonlinear models. Simulations show that this criterion outperforms those criteria currently in use, in the sense that the true nonlinear model is more often found to perform better in out‐of‐sample forecasting than a benchmark linear model. An empirical illustration for US GDP emphasizes its relevance.

17.
For Poisson inverse Gaussian regression models, it is very complicated to obtain influence measures based on the traditional method, because the associated likelihood function involves intractable expressions, such as the modified Bessel function. In this paper, the EM algorithm is employed as a basis for deriving diagnostic measures for the models by treating them as a mixed Poisson regression with weights from the inverse Gaussian distribution. Several diagnostic measures are obtained in both the case-deletion model and the local influence analysis, based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. Two numerical examples are given to illustrate the results.

18.
This article introduces two parametric robust diagnostic methods for detecting influential observations in the setting of generalized linear models with continuous responses. The legitimacy of the two proposed methods requires no knowledge of the true underlying distributions so long as their second moments exist. The performance of the two proposed influence diagnostic tools is investigated through limited simulation studies and the analysis of an illustrative example.

19.
In this paper, we survey the most recent advances in supervised machine learning (ML) and high-dimensional models for time-series forecasting. We consider both linear and nonlinear alternatives. Among the linear methods, we pay special attention to penalized regressions and ensembles of models. The nonlinear methods considered in the paper include shallow and deep neural networks, in their feedforward and recurrent versions, and tree-based methods, such as random forests and boosted trees. We also consider ensemble and hybrid models that combine ingredients from different alternatives. Tests for superior predictive ability are briefly reviewed. Finally, we discuss the application of ML in economics and finance and provide an illustration with high-frequency financial data.

20.
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images and so on are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorising the basic model types as linear, non‐linear and non‐parametric. We discuss publicly available software packages and illustrate some of the procedures by application to a functional magnetic resonance imaging data set.
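The simplest of the model types reviewed is the scalar‐on‐function linear model, y_i = a + ∫ x_i(t) β(t) dt + ε_i. Discretizing the integral and expanding β(t) on a small basis turns it into ordinary least squares on basis scores. A minimal sketch (the Fourier basis, curve-generating process and noise level are illustrative choices, not from the review):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated functional predictors (random-walk curves on a grid) and a
# scalar response driven by a known coefficient function beta(t).
n, m = 300, 100
t = np.linspace(0, 1, m)
X = np.array([np.cumsum(rng.normal(size=m)) / np.sqrt(m) for _ in range(n)])
beta_t = np.sin(2 * np.pi * t)                     # true beta(t)
y = X @ beta_t / m + rng.normal(scale=0.05, size=n)

# Expand beta(t) on a small Fourier basis and fit OLS on the scores.
basis = np.column_stack([np.ones(m)] +
                        [f(2 * np.pi * k * t) for k in (1, 2)
                         for f in (np.sin, np.cos)])
S = X @ basis / m                                   # basis scores per curve
coef, *_ = np.linalg.lstsq(S, y, rcond=None)
beta_hat = basis @ coef                             # estimated beta(t)
```

Penalized or functional principal component versions of this fit are what the dedicated software packages mentioned in the review provide.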

