Similar Articles
20 similar articles were found (search time: 31 ms).
1.
In this paper, we show that the sequential logit (SL) model, in which a choice process is characterized as a sequence of independent multinomial logit models, is a limiting case of the nested logit (NL) model. For testing the SL model against the NL model, we propose Wald, likelihood ratio (LR) and Lagrange multiplier (LM) tests after suitably reparameterizing the NL model. It is found that when the NL model parameters are “weakly identified”, the Wald test severely underrejects the true model, whereas the sizes of the LR and LM tests are not significantly affected.
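To make the limiting relationship concrete, here is a sketch using one common (non-normalized) two-level parameterization of the NL model; the notation is illustrative and need not match the reparameterization used in the paper. With alternatives j grouped into nests k,
\[
P(j\mid k)=\frac{e^{V_j}}{\sum_{i\in k}e^{V_i}},\qquad
P(k)=\frac{e^{W_k+\theta_k I_k}}{\sum_{m}e^{W_m+\theta_m I_m}},\qquad
I_k=\log\sum_{i\in k}e^{V_i},
\]
so as every inclusive-value coefficient \(\theta_k\) tends to zero the upper-level choice no longer depends on the lower-level utilities, and the NL model collapses to a sequence of independent multinomial logits, that is, to the SL model.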

2.
This paper considers three-dimensional panel data models with unobservable multiple interactive effects, which are potentially correlated with the covariates. We propose an iterated OLS (IOLS) approach for the estimation of the static model and an iterated GMM (IGMM) method for the dynamic model. Monte Carlo simulations illustrate that our methods perform well in finite samples.

3.
Excess payoff dynamics and other well-behaved evolutionary dynamics
We consider a model of evolution in games in which agents occasionally receive opportunities to switch strategies, choosing between them using a probabilistic rule. Both the rate at which revision opportunities arrive and the probabilities with which each strategy is chosen are functions of current normalized payoffs. We call the aggregate dynamics induced by this model excess payoff dynamics. We show that every excess payoff dynamic is well-behaved: regardless of the underlying game, each excess payoff dynamic admits unique solution trajectories that vary continuously with the initial state, identifies rest points with Nash equilibria, and respects a basic payoff monotonicity property. We show how excess payoff dynamics can be used to construct well-behaved modifications of imitative dynamics, and relate them to two other well-behaved dynamics based on projections.
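For concreteness, one familiar member of this family is the Brown-von Neumann-Nash (BNN) dynamic, sketched here in standard notation rather than taken from the paper. With population state x and payoff vector F(x), the excess payoff to strategy i is \(\hat F_i(x)=F_i(x)-x^{\top}F(x)\), and the BNN dynamic is
\[
\dot{x}_i=\big[\hat F_i(x)\big]_{+}-x_i\sum_{j}\big[\hat F_j(x)\big]_{+}.
\]
At a rest point no strategy earns a positive excess payoff, which is the sense in which rest points correspond to Nash equilibria.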

4.
We find that the CAPM fails to explain the small-firm effect even when a non-parametric form is used that allows for time-varying risk and non-linearity in the pricing function. Furthermore, the linearity of the CAPM can be rejected; thus the widely used risk and performance measures, beta and alpha, are biased and inconsistent. We derive semi-parametric measures which are non-constant under extreme market conditions in a single-factor setting; on the other hand, they are not significantly different from the linear estimates of the Fama-French three-factor model. If we extend the single-factor model with the Fama-French factors, the simple linear model is able to explain US stock returns correctly.

5.
This paper presents a capital asset pricing model-based threshold quantile regression model with a generalized autoregressive conditional heteroscedastic specification to examine relations between excess stock returns and “abnormal trading volume”. We employ an adaptive Bayesian Markov chain Monte Carlo method with the asymmetric Laplace distribution to study six daily Dow Jones Industrial stocks. The proposed model captures asymmetric risk through market beta and volume coefficients, which change discretely between regimes and are driven by market information and the quantile level. This study finds that abnormal volume has significantly negative effects on excess stock returns at low quantile levels, but significantly positive effects at high quantile levels. The evidence indicates that each market beta varies with the quantile level, capturing different states of market conditions.
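A stylized version of such a specification (purely illustrative; the exact threshold variable, regressors and notation in the paper may differ) is
\[
Q_{r_t}(\tau\mid\mathcal{F}_{t-1})=
\begin{cases}
\alpha_1(\tau)+\beta_1(\tau)\,r_{m,t}+\gamma_1(\tau)\,v_{t-1}, & q_{t-1}\le c,\\
\alpha_2(\tau)+\beta_2(\tau)\,r_{m,t}+\gamma_2(\tau)\,v_{t-1}, & q_{t-1}>c,
\end{cases}
\]
where \(r_{m,t}\) is the excess market return, \(v_{t-1}\) is abnormal volume, \(q_{t-1}\) is a threshold variable with threshold c, and the error scale follows a GARCH recursion; the asymmetric Laplace working likelihood is the usual device for Bayesian estimation of the \(\tau\)-th conditional quantile.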

6.
This paper explains theoretically why bias corrections appear in two statistics recently developed by Baltagi et al. (2011, 2012), which are designed to test, respectively, the sphericity and the cross-sectional dependence of the errors in the fixed effects panel model. Our explanation shows that the bias correction is in fact avoidable, which we demonstrate with two corresponding statistics newly constructed in this paper. Simulations suggest that our statistics perform as well as the two in Baltagi et al. (2011, 2012). In addition, based on the theory underlying our explanation, we extend a new sphericity test proposed by Fisher et al. (2010) to the fixed effects model. Simulations find that this test behaves well only if both the cross-sectional and the time-series dimensions are large.

7.
This note presents a simple generalization of the adaptive expectations mechanism in which the learning parameter is time-varying. Expectations generated in this way minimize mean squared forecast errors for any linear state space model.
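A minimal sketch of the mechanism, written in standard adaptive-expectations notation: the expectation of \(y_t\) formed at time t-1 is updated as
\[
y^{e}_{t}=y^{e}_{t-1}+\lambda_t\left(y_{t-1}-y^{e}_{t-1}\right),\qquad 0\le\lambda_t\le 1,
\]
which reduces to the textbook rule when \(\lambda_t\) is constant. Reading the note's optimality result, the natural choice of \(\lambda_t\) is the Kalman gain of the underlying linear state-space model, since the Kalman filter is the minimum mean-squared-error linear forecaster in that class; this identification is an interpretation, not a quotation from the note.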

8.
Motivating individuals to engage actively in physical activity, given its beneficial health effects, has been an integral part of Scotland's health policy agenda. The current Scottish guidelines recommend that individuals participate in physical activity of moderate vigour for 30 minutes at least five times per week. For an individual contemplating the recommendation, decisions have to be made regarding participation, intensity, duration and multiplicity. For the policy maker, understanding the determinants of each decision will assist in designing an intervention to effect the recommended policy. Using secondary data sourced from the 2003 Scottish Health Survey (SHeS), we statistically model the combined decision process, employing a copula approach to model specification. In taking this approach the model flexibly accounts for any statistical associations that may exist between the component decisions. Thus, we model the endogenous relationship between the decision of individuals to participate in sporting activities and, among those who participate, the duration of time spent undertaking their chosen activities. The main focus is to establish whether dependence exists between these two random variables, assuming the vigour with which sporting activity is performed to be independent of the participation and duration decisions. We allow for a variety of controls, including demographic factors such as age and gender, economic factors such as income and educational attainment, and lifestyle factors such as smoking, alcohol consumption, healthy eating and medical history. We use the model to compare the effects of interventions designed to increase the vigour with which individuals undertake their sport, relating it to obesity as a health outcome.

9.
The commonly used version of the double-hurdle model rests on a rather restrictive set of statistical assumptions which are very seldom tested by practitioners, mainly because of the lack of a standard procedure for doing so, even though violation of these assumptions can lead to serious modelling flaws. We propose here a bootstrap-corrected conditional moment portmanteau test which is simple to implement and has good size and power properties.

10.
This paper explores some properties of periodically collapsing bubbles, a very popular model in the bubbles literature. We first demonstrate that these complicated nonlinear bubbles can be represented as a time-varying-parameter linear model of order 1. We then show that the bubbles are explosive and nonstationary, derive conditions under which they are strictly stationary, and show, by deriving the tail indices of the strictly stationary distribution, that they cannot be weakly stationary.
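To see how these properties can coexist, consider the standard theory of random-coefficient autoregressions (a sketch consistent with, but not quoted from, the paper). A first-order time-varying-parameter representation takes the form
\[
B_t=\rho_t\,B_{t-1}+\varepsilon_t,
\]
with \((\rho_t,\varepsilon_t)\) i.i.d. over time. If \(E[\rho_t]>1\) the process is explosive in conditional expectation, yet a strictly stationary solution exists whenever \(E[\log|\rho_t|]<0\) (plus a mild moment condition on \(\varepsilon_t\)), and Kesten-type results give the stationary distribution a Pareto tail with index \(\kappa\) solving \(E[|\rho_t|^{\kappa}]=1\); when \(\kappa<2\) the variance is infinite, so the process cannot be weakly stationary.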

11.
We describe LossCalc™ version 2.0: the Moody's KMV model for predicting loss given default (LGD), the equivalent of (1 − recovery rate). LossCalc is a statistical model that applies multiple predictive factors at different information levels (collateral, instrument, firm, industry, country and the macroeconomy) to predict LGD. We find that distance-to-default measures (from the Moody's KMV structural model of default likelihood), compiled at both the industry and firm levels, are predictive of LGD. We find that recovery rates worldwide are predictable within a common statistical framework, which suggests that the estimation of economic firm value (which is then available to allocate to claimants according to each country's bankruptcy laws) is a dominant step in LGD determination. LossCalc is built on a global dataset of 3,026 recovery observations for loans, bonds and preferred stock from 1981 to 2004. This dataset includes 1,424 defaults of both public and private firms, covering both rated and unrated instruments in all industries. We demonstrate out-of-sample and out-of-time validation of the LGD model. The model significantly improves on the use of historical recovery averages to predict LGD.

12.
Block factor methods offer an attractive approach to forecasting with many predictors. They compress the information in the predictors into factors that reflect different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which has forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
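As a rough sketch of the weight recursion behind dynamic model averaging (the forgetting-factor form popularized by Raftery et al. and by Koop and Korobilis; the function and its inputs below are illustrative, not code from the paper):

```python
import numpy as np

def dma_update(prev_probs, pred_liks, alpha=0.99):
    """One step of the dynamic model averaging weight recursion.

    prev_probs : filtered model probabilities at time t-1 (one per model)
    pred_liks  : predictive likelihoods p_k(y_t | data up to t-1), one per model
    alpha      : forgetting factor; alpha = 1 recovers standard Bayesian model averaging
    """
    prev_probs = np.asarray(prev_probs, dtype=float)
    pred_liks = np.asarray(pred_liks, dtype=float)
    # "Prediction" step: raise past probabilities to the power alpha,
    # which gently flattens them so that the favoured model can change over time.
    pred_probs = prev_probs ** alpha
    pred_probs /= pred_probs.sum()
    # "Update" step: reweight each model by how well it predicted y_t.
    post = pred_probs * pred_liks
    return post / post.sum()

# Example with three candidate models; model 2 predicted today's value best.
weights = dma_update([0.5, 0.3, 0.2], [0.8, 2.1, 0.4])
print(weights)            # DMA weights used for the period t+1 forecast
print(weights.argmax())   # dynamic model selection would pick this single model
```

Dynamic model selection simply replaces the averaging step by choosing the model with the largest weight at each date.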

13.
The comparative static predictions of the Baron and Ferejohn model [Baron, D.P., Ferejohn, J.A., 1989. Bargaining in legislatures. American Political Science Review 83(4), 1181-1206] organize behavior in legislative bargaining experiments better than Gamson's Law does. Regressions similar to those employed on field data produce results seemingly in support of Gamson's Law (even when using data generated by simulated agents who behave according to the Baron-Ferejohn model), but this result is driven by the selection protocol, which recognizes voting blocks in proportion to the number of votes they control. Proposer power is not nearly as strong as predicted in the closed-rule Baron and Ferejohn model, as coalition partners refuse to accept the small shares implied by the continuation value of the game. Discounting pushes behavior in the direction predicted by Baron and Ferejohn but has a much smaller effect than predicted.
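For reference, the textbook closed-rule calculation (stated as an illustration, not taken from the paper): with n players (n odd), equal recognition probabilities, majority rule and common discount factor \(\delta\), each player's stationary continuation value is \(\delta/n\), so the proposer offers \(\delta/n\) to each of \((n-1)/2\) coalition partners and keeps
\[
1-\frac{\delta\,(n-1)}{2n},
\]
e.g. two thirds of the pie when n = 3 and \(\delta = 1\); it is this strong proposer share that experimental subjects fall well short of.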

14.
In this paper, we propose a temporal disaggregation model with regime switches to disaggregate U.S. quarterly GDP into monthly figures. In contrast to the existing literature, our model is able to capture the nonlinear behavior of both aggregated and disaggregated output series as well as the asymmetric nature of business cycle phases. To demonstrate the applicability of the proposed model, we apply the model with a Markov trend component to U.S. quarterly real GDP. The results suggest that combining temporal disaggregation with Markov switching leads to a more successful representation of the data than in the existing literature. Also, the inferred probabilities of the unobserved states are clearly in close agreement with the NBER reference cycle on a monthly basis, which highlights the importance of nonlinearities in the business cycle.

15.
Missing data is a common problem in economic studies. We propose using Mallows model averaging (MMA) to deal with this problem; MMA has an important advantage over its competitors in that it asymptotically achieves the lowest possible squared error. A simulation study comparing it with existing methods strongly favors the MMA estimator.
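For context, the standard Mallows criterion of Hansen (2007), written in generic notation (how it is adapted to missing data is specific to this paper):
\[
C_n(w)=\Big\lVert y-\sum_{m=1}^{M}w_m\,\hat{\mu}_m\Big\rVert^{2}
+2\,\hat{\sigma}^{2}\sum_{m=1}^{M}w_m k_m,
\qquad w_m\ge 0,\ \sum_{m=1}^{M}w_m=1,
\]
where \(\hat{\mu}_m\) is the vector of fitted values from candidate model m with \(k_m\) parameters; the MMA weights minimize \(C_n(w)\) over the simplex, which is what underlies the asymptotic optimality in squared error.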

16.
Self-tuning experience weighted attraction learning in games
Self-tuning experience weighted attraction (EWA) is a one-parameter theory of learning in games. It addresses the criticism that an earlier model (EWA) has too many parameters by fixing some parameters at plausible values and replacing others with functions of experience, so that they no longer need to be estimated. Consequently, it is econometrically simpler than the popular weighted fictitious play and reinforcement learning models. The functions of experience which replace free parameters “self-tune” over time, adjusting in a way that selects a sensible learning rule to capture subjects’ choice dynamics. For instance, the self-tuning EWA model can turn from weighted fictitious play into averaging reinforcement learning as subjects equilibrate and learn to ignore inferior foregone payoffs. The theory is tested on seven different games and compared to the earlier parametric EWA model and a one-parameter stochastic equilibrium theory (QRE). Self-tuning EWA does as well as EWA in predicting behavior in new games, even though it has fewer parameters, and fits reliably better than the QRE equilibrium benchmark.
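For orientation, the parametric EWA updating rule of Camerer and Ho, reproduced here from memory and therefore to be checked against the original (self-tuning EWA replaces \(\phi\) and \(\delta\) with deterministic functions of experience):
\[
N(t)=\rho\,N(t-1)+1,\qquad
A_i^{j}(t)=\frac{\phi\,N(t-1)\,A_i^{j}(t-1)
+\bigl[\delta+(1-\delta)\,\mathbf{1}\{s_i^{j}=s_i(t)\}\bigr]\,\pi_i\!\bigl(s_i^{j},s_{-i}(t)\bigr)}{N(t)},
\]
with logit choice probabilities \(P_i^{j}(t+1)\propto\exp\{\lambda A_i^{j}(t)\}\); the response sensitivity \(\lambda\) is the single parameter left to estimate in the self-tuning version.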

17.
In this paper we develop an open economy model explaining the joint determination of output, inflation, interest rates, unemployment and the exchange rate in a multi-country framework. Our model, the Halle Economic Projection Model (HEPM), is closely related to studies published by Carabenciov et al. (2008a,b,c). Our main contribution is that we model the Euro area countries separately. In doing so, we consider Germany, France and Italy, which together represent about 70% of Euro area GDP. The model combines core equations of the standard New-Keynesian DSGE model with empirically useful ad-hoc equations. We estimate this model using Bayesian techniques and evaluate its forecasting properties. Additionally, we provide an impulse response analysis and a historical shock decomposition.

18.
This paper compares alternative time-varying volatility models for daily stock returns using data from the Spanish equity index IBEX-35. Specifically, we estimate a parametric family of generalized autoregressive conditional heteroskedasticity models (which nests the most popular symmetric and asymmetric GARCH models), a semiparametric GARCH model, the generalized quadratic ARCH model, the stochastic volatility model, the Poisson jump diffusion model and, finally, a nonparametric model. The models that work with the conditional standard deviation (specifically, the TGARCH and AGARCH models) produce better fits than all other GARCH models. We also compare the within-sample predictive power of all models using a standard efficiency test. Our results show that asymmetric responses are a statistically significant characteristic of these data. Moreover, we observe that specifications with a distribution which allows for fatter tails than the normal do not necessarily outperform specifications with a normal distribution.
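As an illustration of what modelling the conditional standard deviation means here, a Zakoian-style TGARCH recursion is (generic notation; the paper's exact specification may differ)
\[
\sigma_t=\omega+\alpha^{+}\varepsilon_{t-1}^{+}-\alpha^{-}\varepsilon_{t-1}^{-}+\beta\,\sigma_{t-1},
\]
with \(\varepsilon_{t-1}^{+}=\max(\varepsilon_{t-1},0)\) and \(\varepsilon_{t-1}^{-}=\min(\varepsilon_{t-1},0)\), so positive and negative shocks of the same size move \(\sigma_t\) by different amounts; related absolute-value specifications such as AGARCH likewise build the recursion in \(\sigma_t\) rather than \(\sigma_t^2\).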

19.
Most studies of housing price dynamics are concerned only with the conditional mean and variance, and overlook higher-order conditional moments and the structural changes inherent in housing prices. To take these two issues into account, this study uses a generalized Markov switching GARCH model to explore house price dynamics and the conditional distribution for the US market over 1975Q1–2007Q4. Housing returns follow two distinct dynamics, a bust regime and a boom regime, and the volatility pattern differs between them. In addition, the conditional densities derived from the regime-switching model change dramatically over time and are significantly different from the normal distribution. More importantly, the regime-switching model can detect in advance a weak US housing market, such as the one that occurred in the middle of 2007. The in-sample fit of the regime-switching model, which incorporates higher-order moments, improves significantly on the single-regime AR and AR-GARCH models. In out-of-sample Value-at-Risk forecasting, the regime-switching AR-GARCH model produces better one-step-ahead density forecasts than the single-regime AR-GARCH model.
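A generic two-regime Markov-switching GARCH skeleton (illustrative notation only; the generalized specification used in the study allows further regime dependence):
\[
r_t=\mu_{S_t}+\varepsilon_t,\qquad
\varepsilon_t=\sigma_t\eta_t,\qquad
\sigma_t^{2}=\omega_{S_t}+\alpha_{S_t}\varepsilon_{t-1}^{2}+\beta_{S_t}\sigma_{t-1}^{2},
\]
where \(S_t\in\{\text{bust},\text{boom}\}\) follows a first-order Markov chain with transition probabilities \(p_{ij}=\Pr(S_t=j\mid S_{t-1}=i)\), so both the level and the persistence of volatility can differ across regimes.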

20.
We model a situation in which two players bargain over two pies, one of which can only be consumed starting at a future date. The players value the pies asymmetrically: one player values the existing pie more than the future one, while the other player has the opposite valuation. We show that the players may consume only a fraction of the existing pie in the first period and then consume the remainder of it, along with the second pie, at the date at which the second pie becomes available. Thus, our model features a special form of bargaining delay, in which agreements take place in multiple stages. Such partial agreements arise when players are patient enough, when they expect the second pie to become available soon, and when the asymmetry in their valuations is large enough.

