Similar Articles
20 similar articles found (search time: 46 ms)
1.
We propose a new conditionally heteroskedastic factor model, the GICA-GARCH model, which combines independent component analysis (ICA) and multivariate GARCH (MGARCH) models. This model assumes that the data are generated by a set of underlying independent components (ICs) that capture the co-movements among the observations and are assumed to be conditionally heteroskedastic. The GICA-GARCH model separates the estimation of the ICs from fitting each of them with a univariate ARMA-GARCH model. Here, we use two ICA approaches to find the ICs: the first estimates the components by maximizing their non-Gaussianity, while the second exploits the temporal structure of the data. After estimating and identifying the common ICs, we fit a univariate GARCH model to each of them in order to estimate their univariate conditional variances. The GICA-GARCH model thus provides a new framework for modelling multivariate conditional heteroskedasticity in which the conditional covariances of the observations can be explained and forecast by modelling the univariate conditional variances of a few common ICs. We report simulation experiments that show the ability of ICA to discover leading factors in a multivariate vector of financial data. Finally, we present an empirical application to the Madrid stock market, where we evaluate the forecasting performance of the GICA-GARCH model and two other factor GARCH models: the orthogonal GARCH and the conditionally uncorrelated components GARCH.
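A minimal sketch of the two-stage idea, assuming scikit-learn's FastICA for the non-Gaussianity-based first stage and the arch package for the univariate GARCH fits; the toy return matrix and the choice of three components are ours, and the paper's own estimators differ in detail:

```python
import numpy as np
from sklearn.decomposition import FastICA
from arch import arch_model

# Toy stand-in for a T x N matrix of asset returns (heavy-tailed noise)
rng = np.random.default_rng(0)
returns = rng.standard_t(5, size=(500, 5))

# Stage 1: estimate independent components by maximizing non-Gaussianity
ica = FastICA(n_components=3, random_state=0)
ics = ica.fit_transform(returns)      # T x 3 matrix of estimated ICs
mixing = ica.mixing_                  # N x 3 loading matrix

# Stage 2: fit a univariate GARCH(1,1) to each common component
cond_var = np.column_stack([
    arch_model(ics[:, k], p=1, q=1).fit(disp="off").conditional_volatility ** 2
    for k in range(ics.shape[1])
])

# Conditional covariance of the observations at the last date:
# Sigma_t = A diag(h_t) A', with A the mixing matrix, h_t the IC variances
sigma_T = mixing @ np.diag(cond_var[-1]) @ mixing.T
```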

2.
Standard model-based small area estimates perform poorly in the presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method for handling outliers in unit-level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit-level error to model outliers, and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Applying our method to a data set suspected to contain an outlier confirms this suspicion, correctly identifies the outlier, and produces robust predictors and posterior standard deviations of the small area means. A simulation-based evaluation of several procedures, including the M-quantile method of Chambers & Tzavidis (2006), shows that our proposed method is as good as the others in terms of bias, variability, and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, our method and the Sinha–Rao method perform similarly, and both improve on the other methods. This superior performance shows the procedure's dual (Bayes and frequentist) dominance, which should make it attractive to all practitioners of small area estimation, Bayesian and frequentist alike.
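In outline (our notation, not necessarily the paper's), the extension replaces the normal unit-level error of the nested error regression model with a two-component normal mixture:

$$
y_{ij} = \mathbf{x}_{ij}'\boldsymbol\beta + v_i + e_{ij}, \qquad
v_i \sim N(0,\sigma_v^2), \qquad
e_{ij} \sim (1-p)\,N(0,\sigma_1^2) + p\,N(0,\sigma_2^2),
$$

with $\sigma_2^2 \gg \sigma_1^2$, so that the second component absorbs outlying units; setting $p=0$ recovers the standard normal model underlying Datta & Ghosh (1991).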

3.
This paper reports on an exploration of student wellbeing in secondary school. A wellbeing questionnaire was administered four times to the same students. Multilevel models were applied in which measurements are nested within students, who are in turn nested within schools. Differences between students are large, but there are only minor differences between schools in wellbeing. Two methods for analysing the longitudinal data are compared: a multilevel multivariate approach and a multilevel growth curve analysis. It is shown that estimating individual growth curves is an elegant and parsimonious way of modelling. The multivariate approach, on the other hand, is a more modest model. The assumptions, advantages and disadvantages of both perspectives are listed.
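As a sketch of the growth-curve formulation (notation ours): with measurement occasion $t$ nested in student $i$ nested in school $s$, a linear growth model is

$$
y_{tis} = \beta_{0is} + \beta_{1is}\,t + \varepsilon_{tis}, \qquad
\beta_{0is} = \gamma_0 + u_{0s} + r_{0is}, \qquad
\beta_{1is} = \gamma_1 + u_{1s} + r_{1is},
$$

so each student carries an individual intercept and slope, and only a handful of random-effect variances must be estimated rather than a full covariance matrix over the four measurement occasions, which is what makes the formulation parsimonious.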

4.
We demonstrate that many current approaches to marginal modelling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models presented herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed-effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and a set of model relationships that substantially elucidate the complex area of marginalised random-intercept models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate the concepts.
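A minimal instance of the latent threshold construction (our sketch, assuming a Gaussian copula with exchangeable correlation): for the binary outcomes of cluster $i$,

$$
Y_{ij} = \mathbb{1}\{Z_{ij} \le \mathbf{x}_{ij}'\boldsymbol\beta\}, \qquad
(Z_{i1},\dots,Z_{in_i})' \sim N(\mathbf{0}, \mathbf{R}), \qquad
\mathbf{R} = (1-\rho)\mathbf{I} + \rho\,\mathbf{1}\mathbf{1}',
$$

so each margin satisfies $P(Y_{ij}=1) = \Phi(\mathbf{x}_{ij}'\boldsymbol\beta)$, giving the fixed effects their marginal, population-averaged interpretation, while $\rho$ induces the exchangeable within-cluster association.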

5.
6.
Appropriate modelling of Likert-type items should account for the scale level and for the specific role of the neutral middle category, which is present in most Likert-type items in common use. Powerful hierarchical models that account for both aspects are proposed. To avoid biased estimates, the models separate out the neutral category when modelling the effects of explanatory variables on the outcome. The main model advocated uses binary response models as building blocks in a hierarchical way. It has the advantage that it can easily be extended to include response style effects and non-linear smooth effects of explanatory variables. By a simple transformation of the data, available software for binary response variables can be used to fit the model. The proposed hierarchical model can be used to investigate the effects of covariates on single Likert-type items and also to analyse a combination of items. Estimation tools are provided for both cases. The usefulness of the approach is illustrated by applying the methodology to a large data set.
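A stylised version of the hierarchical binary decomposition (our notation): for an item with ordered categories $1 < \dots < m$ and neutral middle category $c$, one first models whether the neutral category is chosen and then, conditionally, the direction of the response,

$$
P(Y = c \mid \mathbf{x}) = F(\mathbf{x}'\boldsymbol\alpha), \qquad
P(Y > c \mid Y \ne c, \mathbf{x}) = F(\mathbf{x}'\boldsymbol\gamma),
$$

with further binary splits handling the remaining ordered categories within each direction; because every split is an ordinary binary regression, standard software fits the whole model after the data transformation.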

7.
We revisit the cointegration relation among output, physical capital, human capital, public capital and labour for 17 Spanish regions observed over the period 1964–2011. Our approach is based on the estimation of a panel data model in which cross-section dependence is allowed among the members of the panel. The paper emphasizes that common factors capturing, for instance, total factor productivity should be accounted for when estimating the parameters. We use several proposed estimators of the long-run relation among these variables, which yield consistent and efficient estimates of the parameters.
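One widely used estimator of this kind (our illustration; the paper may apply several) is the common correlated effects approach: write the regional long-run relation with unobserved common factors and approximate the factors by cross-section averages,

$$
y_{it} = \mathbf{x}_{it}'\boldsymbol\beta + \boldsymbol\lambda_i'\mathbf{F}_t + \varepsilon_{it},
\qquad \mathbf{F}_t \approx \operatorname{span}\{\bar{y}_t, \bar{\mathbf{x}}_t\},
$$

so augmenting the regression of $y_{it}$ on $\mathbf{x}_{it}$ with the averages $\bar{y}_t$ and $\bar{\mathbf{x}}_t$ controls for strong cross-section dependence such as common movements in total factor productivity.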

8.
This article discusses modelling strategies for repeated measurements of multiple response variables. Such data arise in the context of categorical variables where one can select more than one category as the response. We treat each of the multiple responses as a binary outcome and use a marginal (or population-averaged) modelling approach to analyse their means. Generalized estimating equations are used to account for different correlation structures, both over time and between items. We also discuss an alternative approach using a generalized linear mixed model with conditional interpretations. We illustrate the methods using data from an Australian panel study, the Household, Income and Labour Dynamics in Australia (HILDA) Survey.
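A minimal sketch of the marginal approach using the GEE implementation in statsmodels; the data frame here is invented toy data, not the HILDA survey:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.genmod.families import Binomial
from statsmodels.genmod.cov_struct import Exchangeable

# Toy long-format data: a binary response y for each (person, item, wave)
rng = np.random.default_rng(42)
n_people, n_items, n_waves = 200, 3, 4
df = pd.DataFrame({
    "pid":  np.repeat(np.arange(n_people), n_items * n_waves),
    "item": np.tile(np.repeat(np.arange(n_items), n_waves), n_people),
    "wave": np.tile(np.arange(n_waves), n_people * n_items),
})
df["y"] = rng.integers(0, 2, size=len(df))

# Population-averaged model; the exchangeable working correlation pools
# the within-person association across items and waves
model = smf.gee("y ~ C(item) + C(wave)", groups="pid", data=df,
                family=Binomial(), cov_struct=Exchangeable())
print(model.fit().summary())
```

Richer working correlations (for example, separate structures over time and between items) replace the Exchangeable object, and the robust sandwich standard errors remain valid even when the working correlation is misspecified.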

9.
Deep and persistent disadvantage is an important, but statistically rare, phenomenon in the population, and sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. Survey samples are typically designed to produce estimates of population characteristics for planned areas, with sample sizes calculated so that the survey estimator for each planned area achieves a desired level of precision. In many instances, however, estimators are required for areas for which the survey was not planned. For such areas with small sample sizes, direct estimation based only on the data available from the particular area tends to be unreliable. This has led to the development of a class of indirect estimators that use information from related areas through modelling: a model links similar areas to enhance estimation for unplanned areas; in other words, the estimators borrow strength from the other areas. Doing so improves the precision of estimated characteristics, especially in areas with smaller sample sizes. Social science researchers have increasingly employed small area estimation to provide localised estimates of population characteristics from surveys. We explore how to extend this approach in the context of deep and persistent disadvantage in Australia. We find that, because of the unique circumstances of the Australian population distribution, direct estimates of disadvantage show substantial variation, but applying small area estimation yields significant improvements in the precision of the estimates.
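The borrowing-strength mechanism can be illustrated with the canonical area-level (Fay–Herriot) model, the standard template for this kind of indirect estimation (our illustration, not necessarily the exact model used here):

$$
\hat\theta_i^{\text{dir}} = \theta_i + e_i, \qquad
\theta_i = \mathbf{x}_i'\boldsymbol\beta + v_i, \qquad
\tilde\theta_i = \gamma_i\,\hat\theta_i^{\text{dir}} + (1-\gamma_i)\,\mathbf{x}_i'\hat{\boldsymbol\beta},
\qquad \gamma_i = \frac{\sigma_v^2}{\sigma_v^2 + D_i},
$$

where $D_i$ is the sampling variance of the direct estimator: areas with small samples have large $D_i$, hence small $\gamma_i$, and are shrunk towards the synthetic regression estimate, which is exactly where the precision gains arise.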

10.
Nonlinear regression models have been widely used in practice for a variety of time series and cross-section datasets. For analyzing univariate and multivariate time series data in particular, smooth transition regression (STR) models have proved very useful for representing and capturing asymmetric behavior. Most STR models have been applied to univariate processes under a variety of assumptions, including stationary or cointegrated processes; uncorrelated, homoskedastic or conditionally heteroskedastic errors; and weakly exogenous regressors. Under the assumption of exogeneity, the standard method of estimation is nonlinear least squares. The primary purpose of this paper is to relax the assumption of weakly exogenous regressors and to discuss moment-based methods for estimating STR models. The paper analyzes the properties of the STR model with endogenous variables by providing a diagnostic test of linearity of the underlying process under endogeneity, developing an estimation procedure and a misspecification test for the STR model, presenting Monte Carlo simulation results that show the usefulness of the model and estimation method, and providing an empirical application to inflation targeting in Brazil. We show that STR models with endogenous variables can be specified and estimated by a straightforward application of existing results in the literature.
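For reference, the logistic STR model has the generic form (standard in this literature):

$$
y_t = \mathbf{x}_t'\boldsymbol\beta + (\mathbf{x}_t'\boldsymbol\theta)\,G(s_t;\gamma,c) + u_t,
\qquad
G(s_t;\gamma,c) = \bigl(1 + \exp\{-\gamma(s_t - c)\}\bigr)^{-1},
$$

so the coefficients move smoothly from $\boldsymbol\beta$ to $\boldsymbol\beta+\boldsymbol\theta$ as the transition variable $s_t$ crosses the location $c$. When regressors are endogenous, $E[\mathbf{x}_t u_t] \ne 0$, nonlinear least squares is inconsistent, and estimation instead exploits moment conditions $E[\mathbf{z}_t u_t] = 0$ built from instruments $\mathbf{z}_t$.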

11.
Small area estimation is a widely used indirect estimation technique for micro-level geographic profiling. Three unit-level small area estimation techniques, the ELL or World Bank method, empirical best prediction (EBP) and M-quantile (MQ), can estimate micro-level Foster–Greer–Thorbecke (FGT) indicators (poverty incidence, gap and severity) using both unit-level survey and census data; however, they rest on different assumptions. We discuss the effects of using model-based unit-level census data reconstructed from cross-tabulations, of having no cluster-level contextual variables in the models, and of small area and cluster-level heterogeneity. A simulation-based comparison of ELL, EBP and MQ uses a model-based reconstruction of 2000/2001 data from Bangladesh and compares bias and mean square error. A three-level ELL method is also applied for comparison with the standard two-level ELL, which lacks a small area level component. An important finding is that the larger number of small areas for which ELL has been able to produce sufficiently accurate estimates, in comparison with EBP and MQ, has been driven more by the type of census data available or utilised than by the model per se.
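The FGT indicators form a single family indexed by $\alpha$ (this definition is standard, not specific to the paper):

$$
\text{FGT}_\alpha = \frac{1}{N}\sum_{i:\,y_i < z}\left(\frac{z - y_i}{z}\right)^{\alpha},
$$

where $z$ is the poverty line and $y_i$ the welfare measure; $\alpha = 0, 1, 2$ give poverty incidence (the headcount ratio), the poverty gap and poverty severity, the three quantities each of the small area methods must predict.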

12.
This paper considers factor estimation from heterogeneous data, where some of the variables (the relevant ones) are informative for estimating the factors and others (the irrelevant ones) are not. We estimate the factor model within a Bayesian framework, specifying a sparse prior distribution for the factor loadings. Based on the identified posterior factor loading estimates, we provide alternative methods to distinguish relevant from irrelevant variables. Simulations show that both types of variables are identified quite accurately. Empirical estimates for a large multi-country GDP dataset and a disaggregated inflation dataset for the USA show that a considerable share of variables is irrelevant for factor estimation.
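One common way to make the loading prior sparse, consistent with the description here although the paper's exact specification may differ, is a spike-and-slab mixture on each loading:

$$
\lambda_{ij} \mid \pi \sim (1-\pi)\,\delta_0 + \pi\,N(0,\tau^2),
$$

so a variable whose loadings sit in the point-mass spike $\delta_0$ with high posterior probability carries no information about the factors and is flagged as irrelevant.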

13.
Macroeconomic forecasting using structural factor analysis
The use of a small number of underlying factors to summarize the information from a much larger set of information variables is one of the new frontiers in forecasting. In prior work, the estimated factors have not usually had a structural interpretation, and the factors have not been chosen on a theoretical basis. In this paper we propose several variants of a general structural factor forecasting model and use them to forecast certain key macroeconomic variables. We make the choice of factors more structurally meaningful by estimating factors from subsets of the information variables, where the variables are assigned to subsets on the basis of economic theory. We compare the forecasting performance of the structural factor forecasting model with that of a univariate AR model, a standard VAR model, and some non-structural factor forecasting models. The results suggest that our structural factor forecasting model performs significantly better in forecasting real activity variables, especially at short horizons.
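A diffusion-index style forecasting equation makes the mechanics concrete (our sketch; the paper's structural variants differ in how the factors are grouped and estimated):

$$
\hat f_t^{(g)} = \mathrm{PC}_1\bigl(\{x_{it} : i \in g\}\bigr), \qquad
\hat y_{t+h} = \hat\alpha + \sum_g \hat\beta_g\,\hat f_t^{(g)} + \hat\rho\,y_t,
$$

where each subset $g$ of information variables is formed on economic grounds (price variables, real activity variables, and so on) and $\mathrm{PC}_1$ denotes the first principal component, so each estimated factor inherits an economic label.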

14.
This paper provides a feasible approach to the estimation and forecasting of multiple structural breaks for vector autoregressions and other multivariate models. Owing to conjugate prior assumptions, we obtain a very efficient sampler for the regime allocation variable. A new hierarchical prior is introduced to allow for learning across different structural breaks. The model is extended to allow independent breaks in the regression coefficients and the volatility parameters. Two empirical applications show the model's improvements over benchmarks. In a macro application with seven variables, we demonstrate the benefits of moving from a multivariate structural break model to a set of univariate structural break models in order to account for heterogeneous break patterns across data series.
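In stylised form (our notation), a multiple structural break VAR allocates each observation to one of $K+1$ regimes through a non-decreasing regime indicator $s_t$, with a hierarchical prior tying the regime parameters together:

$$
y_t = c_{s_t} + \sum_{j=1}^{p} A_{s_t,j}\,y_{t-j} + \varepsilon_t, \quad
\varepsilon_t \sim N(0,\Sigma_{s_t}), \qquad
\theta_k = (c_k, A_{k,1},\dots,A_{k,p}) \sim N(\bar\theta, V),
$$

where the common hyperparameters $(\bar\theta, V)$ are themselves estimated, so experience with past regimes shapes beliefs about the parameters that apply after a new break; letting coefficients and volatilities break independently amounts to giving the two blocks separate regime indicators.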

15.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution; the conditional distributions can then be assessed analytically or through simulation. Discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data-generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables, and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
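The computational convenience is that conditionals of a finite mixture of multivariate normals are available in closed form: partitioning each component's mean and covariance over $(y, x)$,

$$
p(y \mid x) = \sum_{k=1}^{K} w_k(x)\,
N\!\bigl(y;\ \mu_{y,k} + \Sigma_{yx,k}\Sigma_{xx,k}^{-1}(x-\mu_{x,k}),\
\Sigma_{yy,k} - \Sigma_{yx,k}\Sigma_{xx,k}^{-1}\Sigma_{xy,k}\bigr),
\qquad
w_k(x) \propto w_k\,N(x;\ \mu_{x,k}, \Sigma_{xx,k}),
$$

so each conditional is itself a mixture of normals whose weights depend on $x$, which is what lets the estimated joint distribution act as a flexible regression.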

16.
This paper is concerned with the Bayesian estimation and comparison of flexible, high-dimensional multivariate time series models with time-varying correlations. The model proposed here combines features of the classical factor model with those of the heavy-tailed univariate stochastic volatility model. A unified analysis of the model, and of its special cases, is developed that encompasses estimation, filtering and model choice. The centerpieces of the estimation algorithm, which relies on MCMC methods, are (1) a reduced blocking scheme for sampling the free elements of the loading matrix and the factors, and (2) a special method for sampling the parameters of the univariate SV process. The resulting algorithm is scalable in the number of series and factors and is simulation-efficient. Methods for estimating the log-likelihood function and the filtered values of the time-varying volatilities and correlations are also provided. The performance and effectiveness of the inferential methods are extensively tested using simulated data, where models of up to 50 dimensions and 688 parameters are fit and studied. The performance of our model relative to various multivariate GARCH models is also evaluated using a real data set of weekly returns on 10 international stock indices. We consider performance along two dimensions: the ability to correctly estimate the conditional covariance matrix of future returns, and the unconditional and conditional coverage of the 5% and 1% value-at-risk (VaR) measures of four pre-defined portfolios.
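The backbone of such a factor stochastic volatility model is, schematically and in our notation,

$$
\mathbf{y}_t = \mathbf{B}\,\mathbf{f}_t + \boldsymbol\varepsilon_t, \qquad
f_{jt} = \exp(h_{jt}/2)\,\eta_{jt}, \qquad
h_{jt} = \mu_j + \phi_j\,(h_{j,t-1} - \mu_j) + \sigma_j\,\zeta_{jt},
$$

with heavy-tailed innovations; time-varying correlations among the series arise because the implied conditional covariance matrix moves with the latent log-volatilities $h_{jt}$, while the number of parameters grows only linearly in the number of series.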

17.
We introduce the matrix exponential as a way of modelling spatially dependent data. The matrix exponential spatial specification (MESS) simplifies the log-likelihood, allowing a closed-form solution to the problem of maximum-likelihood estimation, and greatly simplifies Bayesian estimation of the model. The MESS can produce estimates and inferences similar to those from conventional spatial autoregressive models, but has analytical, computational and interpretive advantages. We present maximum likelihood and Bayesian approaches to the estimation of this spatial specification, along with methods for model comparison across different explanatory variables and spatial specifications.
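The source of the simplification is the Jacobian. Writing the MESS as

$$
e^{\alpha\mathbf{W}}\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon, \qquad
e^{\alpha\mathbf{W}} = \sum_{i=0}^{\infty}\frac{\alpha^i\mathbf{W}^i}{i!},
$$

and using $\det(e^{\alpha\mathbf{W}}) = e^{\alpha\,\mathrm{tr}(\mathbf{W})} = 1$ for a spatial weight matrix $\mathbf{W}$ with zero diagonal, the log-determinant term that burdens likelihood evaluation in conventional spatial autoregressive models vanishes identically, so maximum likelihood reduces to least squares on the transformed variable $e^{\alpha\mathbf{W}}\mathbf{y}$.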

18.
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known, long-standing idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, the last a relatively new idea that allows individual assessment of predictions.
The integrated framework comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process in which non-parametric methods such as decision trees and generalized additive models are used to identify important variables and their relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches.
The paper is motivated by a medical problem in which interest focuses on developing a risk stratification system for the morbidity of 1,710 cardiac patients, given a suite of demographic, clinical and preoperative variables. Although the methods are applied specifically to this case study, they can be applied in any field, irrespective of the type of response.
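A toy version of the screen-then-predict idea in scikit-learn (illustrative only: the data are simulated stand-ins, and the case study itself used decision trees, generalized additive models and Bayesian fits rather than this exact pipeline):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for the patient data: binary morbidity outcome, 20 candidate predictors
X, y = make_classification(n_samples=1710, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: non-parametric screen, ranking variables by tree-ensemble importance
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
keep = np.argsort(forest.feature_importances_)[-5:]   # retain the top 5 variables

# Step 2: an interpretable final predictive model on the retained variables
final = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
print("held-out AUC:", roc_auc_score(y_te, final.predict_proba(X_te[:, keep])[:, 1]))
```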

19.
Asset Pricing with Observable Stochastic Discount Factors
The stochastic discount factor (SDF) model provides a general framework for pricing assets: by specifying the discount factor suitably, it encompasses most of the theories currently in use, including the CAPM and the consumption CAPM. The SDF model has been based on the use of single and multiple factors, and on latent and observed factors. In most situations, and especially for the term structure, single-factor models are inappropriate, whilst latent variables require the somewhat arbitrary specification of generating processes and are difficult to interpret. In this paper we survey the principal implementations of the SDF model for bonds, equity and FOREX, and propose a new approach based on multiple observable factors, modelling the joint distribution of excess returns and the factors using a multivariate GARCH-in-mean process. We argue that single-equation and VAR models, although widely used in empirical finance, are in general inappropriate as they do not satisfy the no-arbitrage condition. Since risk premia arise from conditional covariation between returns and the factors, both a multivariate context and the presence of conditional covariances in the conditional mean process are essential. We explain how apparent exceptions, such as the CIR and Vasicek models, in fact meet this requirement, but at a price. We explain our new approach, discuss how it might be implemented, and present some empirical evidence, mainly from our own research. Partly to enable comparisons to be made, the survey also includes evidence from recent empirical work using more traditional approaches.
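The no-arbitrage restriction the authors emphasise follows directly from the pricing equation: for any excess return $r^e_{t+1}$ and discount factor $m_{t+1}$,

$$
E_t\!\left[m_{t+1}\,r^e_{t+1}\right] = 0
\;\Longrightarrow\;
E_t\!\left[r^e_{t+1}\right] = -\,\frac{\operatorname{Cov}_t\!\left(m_{t+1},\,r^e_{t+1}\right)}{E_t\!\left[m_{t+1}\right]},
$$

so expected excess returns are pinned down by conditional covariances with the discount factor. A specification whose conditional mean omits these covariance terms, as single-equation and standard VAR models do, cannot satisfy the restriction, which is why a multivariate GARCH-in-mean process is the natural vehicle.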

20.
The envelope model was first introduced as a parsimonious version of multivariate linear regression. It uses dimension reduction techniques to remove immaterial variation in the data and has the potential to gain efficiency in estimation and improve prediction. Many advances have taken place since its introduction, and the envelope model has been applied to many contexts in multivariate analysis, including partial least squares, generalised linear models, Bayesian analysis, variable selection and quantile regression, among others. This article serves as a review of the envelope model and its developments for those who are new to the area.
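In its original multivariate linear regression form (standard notation from the envelope literature), the model is

$$
\mathbf{Y} = \boldsymbol\alpha + \boldsymbol\beta\,\mathbf{X} + \boldsymbol\varepsilon, \qquad
\boldsymbol\beta = \boldsymbol\Gamma\boldsymbol\eta, \qquad
\boldsymbol\Sigma = \boldsymbol\Gamma\boldsymbol\Omega\boldsymbol\Gamma' + \boldsymbol\Gamma_0\boldsymbol\Omega_0\boldsymbol\Gamma_0',
$$

where the columns of the semi-orthogonal matrix $\boldsymbol\Gamma$ span the envelope, the smallest reducing subspace of $\boldsymbol\Sigma$ containing $\operatorname{span}(\boldsymbol\beta)$; the complement $\boldsymbol\Gamma_0$ carries the immaterial variation, and discarding it is what yields the efficiency gains when $\boldsymbol\Gamma_0\boldsymbol\Omega_0\boldsymbol\Gamma_0'$ is large.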

