Similar Documents
20 similar documents retrieved (search time: 78 ms)
1.
We provide a new test for equality of two symmetric positive-definite matrices that leads to a convenient mechanism for testing specification using the information matrix equality or the sandwich asymptotic covariance matrix of the GMM estimator. The test relies on a new characterization of equality between two k-dimensional symmetric positive-definite matrices A and B: the traces of AB^{-1} and BA^{-1} are equal to k if and only if A=B. Using this simple criterion, we introduce a class of omnibus test statistics for equality and examine their null and local alternative approximations under some mild regularity conditions. A preferred test in the class with good omni-directional power is recommended for practical work. Monte Carlo experiments are conducted to explore performance characteristics under the null and local as well as fixed alternatives. The test is applicable in many settings, including GMM estimation, SVAR models and high-dimensional variance matrix settings.
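The characterization invites a direct numerical check. Below is a minimal sketch (an illustration only, not the paper's omnibus statistic, whose scaling and limit theory are omitted): for symmetric positive-definite matrices, tr(AB^{-1}) + tr(BA^{-1}) - 2k is non-negative and zero exactly when A = B, since each eigenvalue λ of AB^{-1} is positive and satisfies λ + 1/λ ≥ 2.

    import numpy as np

    def trace_gap(A, B):
        # tr(A B^{-1}) + tr(B A^{-1}) - 2k: >= 0 for SPD A, B; == 0 iff A == B
        k = A.shape[0]
        return np.trace(A @ np.linalg.inv(B)) + np.trace(B @ np.linalg.inv(A)) - 2 * k

    rng = np.random.default_rng(0)
    M = rng.normal(size=(4, 4))
    A = M @ M.T + 4 * np.eye(4)        # a generic SPD matrix
    print(trace_gap(A, A))             # ~0 up to rounding error
    print(trace_gap(A, np.eye(4)))     # strictly positive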

3.
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model Xt=γZt+Yt, where Zt belongs to a large class of deterministic regressors and Yt is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression, and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ2-distributed.  相似文献   
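To fix ideas, here is a minimal simulation of the additive model X_t = γZ_t + Y_t with Z_t = (1, t)', a constant and a linear trend (parameter values are hypothetical, and the reduced-rank estimation of the extended model is not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)
    T, p = 200, 2
    alpha = np.array([[-0.3], [0.1]])              # adjustment coefficients
    beta = np.array([[1.0], [-1.0]])               # cointegrating vector (rank 1)
    gamma = np.array([[0.5, 0.02], [1.0, -0.01]])  # loadings on Z_t = (1, t)'

    Y = np.zeros((T, p))                           # zero-mean CVAR component
    for t in range(1, T):
        Y[t] = Y[t-1] + alpha @ (beta.T @ Y[t-1]) + rng.normal(0, 0.1, p)

    Z = np.column_stack([np.ones(T), np.arange(1.0, T + 1)])
    X = Z @ gamma.T + Y                            # observed series: X_t = γZ_t + Y_t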

4.
This study uses GARCH-EVT-copula and ARMA-GARCH-EVT-copula models to perform out-of-sample forecasts and simulate one-day-ahead returns for ten stock indexes. We construct optimal portfolios based on the global minimum variance (GMV), minimum conditional value-at-risk (Min-CVaR) and certainty equivalence tangency (CET) criteria, and model the dependence structure between stock market returns by employing elliptical (Student-t and Gaussian) and Archimedean (Clayton, Frank and Gumbel) copulas. We analyze the performance of 288 risk-modeling portfolio strategies using out-of-sample back-testing. Our main finding is that the CET portfolio, based on ARMA-GARCH-EVT-copula forecasts, outperforms the benchmark portfolio based on historical returns. The regression analyses show that GARCH-EVT forecasting models, which use Gaussian or Student-t copulas, are best at reducing the portfolio risk.
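For reference, the GMV leg of such a back-test reduces to a closed-form weight vector once a covariance forecast is in hand. A minimal sketch (the covariance here is a placeholder sample estimate; in the paper it would come from the simulated one-day-ahead copula returns):

    import numpy as np

    def gmv_weights(cov):
        # Global minimum variance: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
        w = np.linalg.solve(cov, np.ones(cov.shape[0]))
        return w / w.sum()

    rng = np.random.default_rng(0)
    simulated_returns = rng.normal(0, 0.01, size=(10000, 10))  # stand-in for copula draws
    w = gmv_weights(np.cov(simulated_returns, rowvar=False))
    print(w.round(3), w.sum())                                 # weights sum to one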

5.
This paper proposes a first-order zero-drift GARCH (ZD-GARCH(1, 1)) model to study conditional heteroscedasticity and unconditional heteroscedasticity together. Unlike the classical GARCH model, the ZD-GARCH(1, 1) model is always non-stationary regardless of the sign of the Lyapunov exponent γ_0, but interestingly it is stable, with its sample path oscillating randomly between zero and infinity over time, when γ_0 = 0. Furthermore, this paper studies the generalized quasi-maximum likelihood estimator (GQMLE) of the ZD-GARCH(1, 1) model, and establishes its strong consistency and asymptotic normality. Based on the GQMLE, an estimator for γ_0, a t-test for stability, a unit root test for the absence of the drift term, and a portmanteau test for model checking are all constructed. Simulation studies are carried out to assess the finite-sample performance of the proposed estimators and tests. Applications demonstrate that a stable ZD-GARCH(1, 1) model is more appropriate than a non-stationary GARCH(1, 1) model for fitting the KV-A stock returns in Francq and Zakoïan (2012).
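A minimal simulation sketch of the zero-drift recursion σ_t² = α y_{t-1}² + β σ_{t-1}² (parameter values hypothetical; the GQMLE and the tests are not reproduced). With E[β + αη²] = 1 and Gaussian η, Jensen's inequality gives γ_0 = E log(β + αη²) < 0, so the sample path here drifts toward zero:

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta, T = 0.1, 0.9, 5000
    eta = rng.standard_normal(T)

    gamma0 = np.log(beta + alpha * eta**2).mean()   # Monte Carlo estimate of the Lyapunov exponent

    y, s2 = np.zeros(T), np.ones(T)
    for t in range(1, T):
        s2[t] = alpha * y[t-1]**2 + beta * s2[t-1]  # zero drift: no intercept term
        y[t] = np.sqrt(s2[t]) * eta[t]

    print(gamma0)   # negative for these parameters, so sigma_t^2 -> 0 almost surely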

6.
We explore the validity of the 2-stage least squares estimator with l1-regularization in both stages, for linear triangular models where the numbers of endogenous regressors in the main equation and instruments in the first-stage equations can exceed the sample size, and the regression coefficients are sufficiently sparse. For this l1-regularized 2-stage least squares estimator, we first establish finite-sample performance bounds and then provide a simple practical method (with asymptotic guarantees) for choosing the regularization parameter. We also sketch an inference strategy built upon this practical method.
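A schematic of the estimator using scikit-learn's Lasso (the penalty levels lam1 and lam2 are placeholders; the paper supplies a data-driven choice with asymptotic guarantees):

    import numpy as np
    from sklearn.linear_model import Lasso

    def two_stage_lasso(y, X, Z, lam1=0.1, lam2=0.1):
        # Stage 1: Lasso of each endogenous regressor on the instruments Z
        X_hat = np.column_stack([
            Lasso(alpha=lam1).fit(Z, X[:, j]).predict(Z) for j in range(X.shape[1])
        ])
        # Stage 2: Lasso of the outcome on the first-stage fitted values
        return Lasso(alpha=lam2).fit(X_hat, y).coef_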

7.
The classical stochastic frontier panel data models provide no mechanism to disentangle individual time-invariant unobserved heterogeneity from inefficiency. Greene (2005a, b) proposed the so-called “true” fixed-effects specification that distinguishes these two latent components. However, due to the incidental parameters problem, his maximum likelihood estimator may lead to biased variance estimates. We propose two alternative estimators that are consistent as n→∞ with T fixed. Furthermore, we extend the results of Chen et al. (2014), providing a feasible estimator when the inefficiency is heteroskedastic and follows a first-order autoregressive process. We investigate the behavior of the proposed estimators through Monte Carlo simulations, which show good finite-sample properties, especially in small samples. An application to hospitals’ technical efficiency illustrates the usefulness of the new approach.
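The latent structure at issue can be made concrete with a small data-generating sketch (hypothetical parameter values; the proposed consistent estimators themselves are not reproduced): the time-invariant effect α_i and the one-sided inefficiency u_it enter separately and must be disentangled.

    import numpy as np

    rng = np.random.default_rng(0)
    n, T, b = 100, 5, 0.5
    alpha = rng.normal(0, 1, n)              # time-invariant unobserved heterogeneity
    x = rng.normal(0, 1, (n, T))
    v = rng.normal(0, 0.2, (n, T))           # symmetric noise
    u = np.abs(rng.normal(0, 0.3, (n, T)))   # half-normal inefficiency
    y = alpha[:, None] + b * x + v - u       # "true" fixed-effects production frontier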

8.
A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using “big data” (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, and Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether “big data” are useful for modelling low frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garrote. Our approach is to carry out a forecasting “horse-race” using prediction models that are constructed based on a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables that are relevant to monetary policy assessment. In many instances, we find that several of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called “combination” models). We find that forecast combination methods are mean square forecast error (MSFE) “best” for only three variables out of 11 for a forecast horizon of h=1, and for four variables when h=3 or 12. In addition, non-PCA type factor estimation methods yield MSFE-best predictions for nine variables out of 11 for h=1, although PCA dominates at longer horizons. Interestingly, we also find evidence of the usefulness of combination models for approximately half of our variables when h>1. Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing “big data” for macroeconometric forecasting.
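The PCA branch of such a horse race follows the familiar diffusion-index recipe: extract factors from the standardized “big data” panel, then forecast with a factor-augmented regression. A minimal sketch (variable names are illustrative; the paper's full specification search, windowing, and shrinkage layers are omitted):

    import numpy as np

    def pca_factors(panel, r):
        # Principal-component factors from a standardized T x N panel
        Z = (panel - panel.mean(0)) / panel.std(0)
        U, S, _ = np.linalg.svd(Z, full_matrices=False)
        return U[:, :r] * S[:r]

    def diffusion_index_forecast(y, F, h=1):
        # Regress y_{t+h} on a constant, current factors, and y_t; forecast from the last obs
        T = len(y) - h
        W = np.column_stack([np.ones(T), F[:T], y[:T]])
        coef, *_ = np.linalg.lstsq(W, y[h:], rcond=None)
        return np.concatenate(([1.0], F[-1], [y[-1]])) @ coef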

9.
10.
In this paper, we suggest how to handle the issue of the heteroskedasticity of measurement errors when specifying dynamic models for the conditional expectation of realized variance. We show that either adding a GARCH correction within an asymmetric extension of the HAR class (AHAR-GARCH), or working within the class of asymmetric multiplicative error models (AMEM), greatly reduces the need for quarticity/quadratic terms to capture attenuation bias. This feature in AMEM can be strengthened by considering regime-specific dynamics. Model Confidence Sets confirm this robustness both in- and out-of-sample for a panel of 28 big caps and the S&P500 index.
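The baseline HAR regression underlying these extensions takes daily, weekly, and monthly realized-variance averages as regressors. A minimal sketch of that baseline (the asymmetric terms, the GARCH correction, and the MEM variant are not reproduced; the RV series here is simulated):

    import numpy as np

    def har_design(rv):
        # RV_t regressed on [1, RV_{t-1}, weekly mean, monthly mean]
        X = np.array([[1.0, rv[t-1], rv[t-5:t].mean(), rv[t-22:t].mean()]
                      for t in range(22, len(rv))])
        return X, rv[22:]

    rv = np.abs(np.random.default_rng(0).standard_normal(500))  # stand-in RV series
    X, y = har_design(rv)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)   # intercept and daily/weekly/monthly coefficients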

11.
We develop methods for inference in nonparametric time-varying fixed effects panel data models that allow for locally stationary regressors and for both the time-series length T and the cross-section size N to be large. We first develop a pooled nonparametric profile least squares dummy variable approach to estimate the nonparametric function, and establish the optimal convergence rate and asymptotic normality of the resulting estimator. We then propose a test statistic to check whether the bivariate nonparametric function is time-varying or the time effect is separable, and derive the asymptotic distribution of the proposed test statistic. We present several simulated examples and two real data analyses to illustrate the finite-sample performance of the proposed methods.
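As a simple stand-in for the profile least squares dummy variable estimator, a pooled Nadaraya-Watson smoother over rescaled time t/T and the regressor conveys the shape of the target m(τ, x) (bandwidths are placeholders; the paper's estimator uses local linear weights and joint profiling of the fixed effects):

    import numpy as np

    def nw_estimate(tau, x, time_scaled, X, Y, h_t=0.1, h_x=0.5):
        # Pooled bivariate Nadaraya-Watson estimate of m(tau, x);
        # time_scaled, X, Y: observations pooled over (i, t), with the
        # fixed effects removed first (e.g., by within-demeaning Y)
        K = np.exp(-0.5 * ((time_scaled - tau) / h_t) ** 2) \
            * np.exp(-0.5 * ((X - x) / h_x) ** 2)
        return (K * Y).sum() / K.sum()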

19.
We study the house allocation problem with existing tenants: n houses (standing for “indivisible objects”) are to be allocated to n agents; each agent needs exactly one house and has strict preferences; k houses are initially unowned; k agents initially do not own houses; the remaining n−k agents (the so-called “existing tenants”) initially own the remaining n−k houses (each owns one). In this setting, we consider various randomized allocation rules under which voluntary participation of existing tenants is assured and the randomization procedure either treats agents equally or discriminates against some (or all) of the existing tenants. We obtain two equivalence results, which generalize the equivalence results in Abdulkadiroğlu and Sönmez (1999) and Sönmez and Ünver (2005).
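One of the mechanisms behind the cited equivalence results, the “you request my house - I get your turn” rule of Abdulkadiroğlu and Sönmez (1999), admits a compact sketch (an illustrative implementation; prefs are assumed to rank all n houses, and the randomization enters through the draw of order):

    def yrmh_igyt(prefs, owner, order):
        # prefs: agent -> list of houses, best first; owner: house -> existing
        # tenant (absent if vacant); order: serial order, e.g. a uniform draw
        assignment, taken = {}, set()
        queue = list(order)
        while queue:
            chain = [queue.pop(0)]          # chain of requesters; last entry acts
            while chain:
                a = chain[-1]
                h = next(x for x in prefs[a] if x not in taken)
                t = owner.get(h)
                if t is not None and t not in assignment and t not in chain:
                    queue.remove(t)         # tenant t jumps to the front of the line
                    chain.append(t)
                else:                       # vacant, freed by a served tenant, or cycle closed
                    assignment[a] = h
                    taken.add(h)
                    chain.pop()
        return assignment

    prefs = {1: ['a', 'b', 'c'], 2: ['a', 'c', 'b'], 3: ['b', 'a', 'c']}
    owner = {'a': 3}                        # agent 3 is an existing tenant owning house a
    print(yrmh_igyt(prefs, owner, order=[1, 2, 3]))  # {3: 'b', 1: 'a', 2: 'c'}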

20.
Within a continuous-time life cycle model of consumption and savings, I study the properties of the most general class of additive intertemporal utility functionals. They are not necessarily stationary, and do not necessarily multiplicatively separate a discount factor from “per-period utility”. I prove rigorously that time consistency holds if and only if the per-period felicity function is multiplicatively separable in t, the date of decision, and in s, the date of consumption, or equivalently, if the Fisherian instantaneous subjective discount rate does not depend on t. The model can explain “anomalies in intertemporal choice”, as well as various empirical regularities, even when agents are time consistent. On the other hand, the model makes it possible to characterize mathematically the “effective consumption profile” of naive, time-inconsistent agents.
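In symbols (notation assumed here: a finite horizon T and felicity u(t, s, c) evaluated at decision date t for consumption at date s), the main characterization reads:

\[
U_t = \int_t^T u\bigl(t, s, c(s)\bigr)\, ds,
\qquad
\text{time consistency} \iff u(t, s, c) = \varphi(t)\, v(s, c).
\]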
