Similar documents
20 similar documents found; search took 15 ms
1.
This paper defines the notion of a local equilibrium of quality (r, s), with 0 ≤ r, s, in a discrete exchange economy: a partial allocation and item prices that guarantee certain stability properties parametrized by the numbers r and s. The quality (r, s) measures the fit between the allocation and the prices: the larger r and s, the closer the fit. For r, s ≤ 1 this notion provides a graceful degradation of the conditional equilibria of Fu, Kleinberg and Lavi (2012), which are exactly the local equilibria of quality (1, 1). For 1 < r, s the local equilibria of quality (r, s) are more stable than conditional equilibria. Any local equilibrium of quality (r, s) provides, without any assumption on the type of the agents’ valuations, an allocation whose value is at least rs/(1 + rs) times the value of the optimal fractional allocation. In any economy in which all agents’ valuations are α-submodular, i.e., exhibit complementarity bounded by α ≥ 1, there is a local equilibrium of quality (1/α, 1/α). In such an economy any greedy allocation provides a local equilibrium of quality (1, 1/α). Walrasian equilibria are not amenable to such graceful degradation.
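The welfare guarantee rs/(1 + rs) can be checked with a few lines of Python; the function name `welfare_bound` is ours, introduced only for illustration, and exact rational arithmetic makes the bound easy to inspect:

```python
from fractions import Fraction

def welfare_bound(r, s):
    """Lower bound rs/(1+rs) on the fraction of the optimal fractional
    allocation value guaranteed by a local equilibrium of quality (r, s)."""
    r, s = Fraction(r), Fraction(s)
    return (r * s) / (1 + r * s)

# Quality (1, 1) -- a conditional equilibrium -- guarantees half of the
# optimal fractional value; higher quality pushes the guarantee toward 1.
print(welfare_bound(1, 1))  # 1/2
print(welfare_bound(2, 2))  # 4/5
```

As the abstract notes, the guarantee improves monotonically in r and s, which the rational values make plain.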

2.
3.
4.
5.
6.
This paper proposes a first-order zero-drift GARCH (ZD-GARCH(1, 1)) model to study conditional heteroscedasticity and heteroscedasticity together. Unlike the classical GARCH model, the ZD-GARCH(1, 1) model is always non-stationary regardless of the sign of the Lyapunov exponent γ0, but interestingly it is stable with its sample path oscillating randomly between zero and infinity over time when γ0 = 0. Furthermore, this paper studies the generalized quasi-maximum likelihood estimator (GQMLE) of the ZD-GARCH(1, 1) model, and establishes its strong consistency and asymptotic normality. Based on the GQMLE, an estimator for γ0, a t-test for stability, a unit root test for the absence of the drift term, and a portmanteau test for model checking are all constructed. Simulation studies are carried out to assess the finite sample performance of the proposed estimators and tests. Applications demonstrate that a stable ZD-GARCH(1, 1) model is more appropriate than a non-stationary GARCH(1, 1) model in fitting the KV-A stock returns in Francq and Zakoïan (2012).
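A minimal simulation sketch of a zero-drift GARCH(1, 1) with Gaussian innovations, assuming the standard recursion σ_t² = α₀ y_{t−1}² + β₀ σ_{t−1}² with no intercept; the parameter names α₀, β₀ and the Monte Carlo estimator of γ0 = E[log(β₀ + α₀ η²)] follow common GARCH conventions, not necessarily the paper's exact notation:

```python
import numpy as np

def simulate_zd_garch(n, alpha0, beta0, seed=0):
    """Simulate y_t = eta_t * sigma_t with the zero-drift recursion
    sigma_t^2 = alpha0 * y_{t-1}^2 + beta0 * sigma_{t-1}^2 (omega = 0)."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(n)
    sigma2 = np.empty(n)
    y = np.empty(n)
    sigma2[0] = 1.0
    y[0] = eta[0]
    for t in range(1, n):
        sigma2[t] = alpha0 * y[t - 1] ** 2 + beta0 * sigma2[t - 1]
        y[t] = eta[t] * np.sqrt(sigma2[t])
    return y, sigma2

def lyapunov_exponent(alpha0, beta0, n_draws=100_000, seed=1):
    """Monte Carlo estimate of gamma_0 = E[log(beta0 + alpha0 * eta^2)]."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(n_draws)
    return np.log(beta0 + alpha0 * eta ** 2).mean()
```

The sign of γ0 governs the path behavior described in the abstract: σ_t² collapses to zero when γ0 < 0, explodes when γ0 > 0, and oscillates between the two regimes in the stable boundary case γ0 = 0.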

7.
8.
This paper proposes a methodology based on a system of dynamic multiple linear equations that incorporates hourly, daily and annual seasonal characteristics for predicting hourly PM2.5 pollution concentrations at 11 meteorological stations in Santiago, Chile. It is demonstrated that the proposed model has the potential to match or even surpass the accuracy of competing nonlinear forecasting models in terms of both fit and predictive ability. In addition, the model successfully predicts various categories of high-concentration events: on average across all stations, between 53% and 76% of mid-range events and around 90% of extreme-range events. This forecasting model is considered a useful tool for helping government authorities to anticipate critical episodes of poor air quality so as to avoid the detrimental economic and health impacts of extreme pollution levels.
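One common way to encode the hourly/daily/annual seasonality the abstract mentions is with harmonic regressors fitted by least squares; the sketch below is a simplified stand-in, not the authors' actual system of equations, and all parameter values are illustrative:

```python
import numpy as np

def seasonal_design(hours):
    """Design matrix with intercept plus sine/cosine pairs at daily,
    weekly and annual periods (in hours) -- a simple proxy for the
    hourly, daily and annual seasonal terms."""
    t = np.asarray(hours, dtype=float)
    cols = [np.ones_like(t)]
    for period in (24.0, 24.0 * 7, 24.0 * 365):
        w = 2 * np.pi * t / period
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

# Fit one station's (synthetic) hourly series by least squares; a
# multi-station system would stack one such equation per station.
rng = np.random.default_rng(0)
t = np.arange(24 * 60)                  # 60 days of hourly observations
X = seasonal_design(t)
beta_true = np.array([30.0, 8.0, 3.0, 4.0, 1.0, 0.5, 0.2])
y = X @ beta_true + rng.normal(0.0, 1.0, size=t.size)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Lagged concentration and meteorological regressors would be appended as extra columns to make the equations dynamic.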

9.
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in the additive model X_t = γZ_t + Y_t, where Z_t belongs to a large class of deterministic regressors and Y_t is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression, and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed.
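A minimal simulation sketch of the additive structure X_t = γZ_t + Y_t, with Y_t a zero-mean bivariate CVAR(1) and Z_t a linear trend; the adjustment and cointegration vectors α, β and the loading γ are illustrative values, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Zero-mean bivariate CVAR(1): dY_t = alpha * (beta' Y_{t-1}) + eps_t
alpha = np.array([[-0.2], [0.1]])       # adjustment coefficients
beta = np.array([[1.0], [-1.0]])        # cointegrating vector
Y = np.zeros((n, 2))
for t in range(1, n):
    ect = beta.T @ Y[t - 1][:, None]    # error-correction term
    Y[t] = Y[t - 1] + (alpha @ ect).ravel() + rng.normal(0.0, 1.0, 2)

# Additive deterministic term: linear trend Z_t = t with loading gamma
gamma = np.array([[0.05], [0.02]])
Z = np.arange(n, dtype=float)
X = Y + (gamma * Z).T                   # X_t = gamma Z_t + Y_t
```

By construction X − Y is purely deterministic, which is what separates the additive formulation from models that place the deterministic terms inside the error-correction term.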

10.
11.
12.
13.
We provide a new test for equality of two symmetric positive-definite matrices that leads to a convenient mechanism for testing specification using the information matrix equality or the sandwich asymptotic covariance matrix of the GMM estimator. The test relies on a new characterization of equality between two k-dimensional symmetric positive-definite matrices A and B: the traces of AB⁻¹ and BA⁻¹ are equal to k if and only if A = B. Using this simple criterion, we introduce a class of omnibus test statistics for equality and examine their null and local alternative approximations under some mild regularity conditions. A preferred test in the class with good omni-directional power is recommended for practical work. Monte Carlo experiments are conducted to explore performance characteristics under the null and local as well as fixed alternatives. The test is applicable in many settings, including GMM estimation, SVAR models and high dimensional variance matrix settings.

14.
15.
16.
17.
18.
19.
Detecting and modeling structural changes in time series models have attracted great attention. However, relatively little attention has been paid to the testing of structural changes in panel data models, despite their increasing importance in economics and finance. In this paper, we propose a new approach to testing structural changes in panel data models. Unlike the bulk of the literature on structural changes, which focuses on detection of abrupt structural changes, we consider smooth structural changes for which model parameters are unknown deterministic smooth functions of time except for a finite number of time points. We use a nonparametric local smoothing method to consistently estimate the smooth changing parameters and develop two consistent tests for smooth structural changes in panel data models. The first test checks whether all model parameters are stable over time. The second test checks for potential time-varying interaction while allowing for a common trend. Both tests have an asymptotic N(0,1) distribution under the null hypothesis of parameter constancy and are consistent against a vast class of smooth structural changes, as well as against abrupt structural breaks with possibly unknown break points. Simulation studies show that the tests provide reliable inference in finite samples, and two empirical examples, a cross-country growth model and a capital structure model, are discussed.
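The local smoothing idea can be sketched with a kernel-weighted least-squares estimate of a coefficient that drifts smoothly over rescaled time; this is a generic local-constant estimator under a Gaussian kernel, not the paper's exact estimator, and the bandwidth h = 0.05 is an illustrative choice:

```python
import numpy as np

def local_constant_beta(y, x, times, t0, h):
    """Kernel-weighted least-squares estimate of a smoothly time-varying
    coefficient beta(t0) in the model y_t = beta(t/n) * x_t + e_t."""
    u = (times - t0) / h
    w = np.exp(-0.5 * u ** 2)           # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x ** 2)

rng = np.random.default_rng(0)
n = 2000
tt = np.arange(n) / n                   # rescaled time in [0, 1]
beta = 1.0 + np.sin(np.pi * tt)         # smooth structural change
x = rng.standard_normal(n)
y = beta * x + rng.normal(0.0, 0.2, n)

est = local_constant_beta(y, x, tt, t0=0.5, h=0.05)
# beta(0.5) = 1 + sin(pi/2) = 2; est should be close to 2
```

Comparing such local estimates against a full-sample constant fit is the intuition behind the stability tests: under parameter constancy the local and global estimates agree up to sampling noise.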

20.
A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using “big data” (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, & Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether “big data” are useful for modelling low frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garotte. Our approach is to carry out a forecasting “horse-race” using prediction models that are constructed based on a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables that are relevant to monetary policy assessment. In many instances, we find that various of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called “combination” models). We find that forecast combination methods are mean square forecast error (MSFE) “best” for only three variables out of 11 for a forecast horizon of h=1, and for four variables when h=3 or 12. In addition, non-PCA type factor estimation methods yield MSFE-best predictions for nine variables out of 11 for h=1, although PCA dominates at longer horizons. 
Interestingly, we also find evidence of the usefulness of combination models for approximately half of our variables when h>1. Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing “big data” for macroeconometric forecasting.
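A minimal sketch of the factor-based prediction pipeline described above: extract principal-component factors from a large panel, then form a diffusion-index forecast by regressing the target on its own lag and the factors. This is a generic Stock-Watson-style construction, not the authors' exact specification, and the toy data are purely illustrative:

```python
import numpy as np

def pca_factors(X, r):
    """Extract r principal-component factors from a T x N panel X
    via SVD of the standardized data."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:r].T                     # T x r estimated factors

def factor_forecast(y, F, h=1):
    """h-step diffusion-index forecast: regress y_{t+h} on (1, y_t, F_t)
    by least squares, then predict from the last observation."""
    T = len(y) - h
    Z = np.column_stack([np.ones(T), y[:T], F[:T]])
    coef, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
    z_last = np.concatenate([[1.0, y[-1]], F[-1]])
    return float(z_last @ coef)

# Toy panel driven by a single common factor that leads the target.
rng = np.random.default_rng(0)
T, N = 200, 30
f = rng.standard_normal(T)
loadings = rng.standard_normal(N)
panel = np.outer(f, loadings) + 0.3 * rng.standard_normal((T, N))
y = 0.8 * np.r_[0.0, f[:-1]] + 0.1 * rng.standard_normal(T)

F = pca_factors(panel, r=1)
fc = factor_forecast(y, F, h=1)
```

The shrinkage and machine-learning variants surveyed in the abstract would replace the least-squares step with ridge, boosting, elastic net, and so on, while keeping the factor-extraction step.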


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号