1.
Aart Kraay 《Journal of Applied Econometrics》2012,27(1):108-128
The identification of structural parameters in the linear instrumental variables (IV) model is typically achieved by imposing the prior identifying assumption that the error term in the structural equation of interest is orthogonal to the instruments. Since this exclusion restriction is fundamentally untestable, there are often legitimate doubts about the extent to which the exclusion restriction holds. In this paper I illustrate the effects of such prior uncertainty about the validity of the exclusion restriction on inferences based on linear IV models. Using a Bayesian approach, I provide a mapping from prior uncertainty about the exclusion restriction into increased uncertainty about parameters of interest. Moderate prior uncertainty about exclusion restrictions can lead to a substantial loss of precision in estimates of structural parameters. This loss of precision is relatively more important in situations where IV estimates appear to be more precise, for example in larger samples or with stronger instruments. I illustrate these points using several prominent recent empirical papers that use linear IV models. An accompanying electronic table allows users to readily explore the robustness of inferences to uncertainty about the exclusion restriction in their particular applications. Copyright © 2010 John Wiley & Sons, Ltd.
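The mapping the abstract describes can be illustrated with a minimal simulation sketch. This is not Kraay's actual procedure; the simple estimand formula and all numbers below are illustrative assumptions. If the instrument has a direct effect gamma on the outcome (violating the exclusion restriction) and the first-stage coefficient is pi, the IV estimand is beta + gamma/pi, so a prior gamma ~ N(0, omega^2) adds dispersion (omega/pi)^2 to the uncertainty about beta:

```python
import numpy as np

rng = np.random.default_rng(1)
beta_iv_hat, se_iv = 0.50, 0.05   # assumed IV point estimate and standard error
pi_hat = 0.40                     # assumed first-stage coefficient
omega = 0.02                      # prior std. dev. of the direct effect gamma

# Draws combining conventional sampling uncertainty with prior uncertainty
# about the exclusion restriction: beta = beta_IV - gamma / pi.
n_draws = 100_000
beta_draws = rng.normal(beta_iv_hat, se_iv, n_draws) \
             - rng.normal(0.0, omega, n_draws) / pi_hat

# The resulting spread exceeds the conventional IV standard error.
print(round(beta_draws.std(), 3), "vs se_iv =", se_iv)
```

Note how the sketch reproduces the abstract's point about strong instruments in reverse: as se_iv shrinks, the omega/pi term dominates, so doubt about the exclusion restriction matters relatively more exactly where IV estimates look most precise.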
2.
In this paper we propose a multivariate extension of the partial adjustment model of financial ratios. To that end, we use a dynamic factor model which assumes that financial ratios measuring, essentially, the same economic–financial dimension of the firm evolve in a similar way, reflecting the evolution of the common factor. The proposed model is hierarchical, with three levels. The first describes the relationship between each ratio and its common factor; the second describes the evolution of the common factors over time by means of Lev's (1969) partial adjustment model; and the third analyzes the similarity of firms' adjustment coefficients, taking into account their characteristics. The methodology is applied to the analysis of a set of financial ratios related to the business and financial structure of the firm. Copyright © 2008 John Wiley & Sons, Ltd.
3.
Johannes Sauer 《Journal of Productivity Analysis》2010,34(3):213-237
Milk quota trading rules differ across EU member countries. In Denmark a biannual milk quota exchange was set up in 1997 to promote a more efficient reallocation of milk quotas and to reduce the transaction costs of searching for and matching sellers and buyers. Using two comprehensive unbalanced panel data sets on organic and conventional milk farms, this study attempts to disentangle the effects of the introduction of quota transferability on the production structure of those farms as well as on the probability of market entry/exit. Bayesian estimation techniques are used to estimate an input-oriented generalized Leontief distance function as well as a curvature-constrained specification. The results suggest that the deregulation of the quota allocation mechanism increased the allocative efficiency of both organic and conventional milk production and shifted the PPF in relative terms in favor of the production of organic milk.
4.
How to measure and model volatility is an important issue in finance. Recent research uses high-frequency intraday data to construct ex post measures of daily volatility. This paper uses a Bayesian model-averaging approach to forecast realized volatility. Candidate models include autoregressive and heterogeneous autoregressive specifications based on the logarithm of realized volatility, realized power variation, realized bipower variation, a jump and an asymmetric term. Applied to equity and exchange rate volatility over several forecast horizons, Bayesian model averaging provides very competitive density forecasts and modest improvements in point forecasts compared to benchmark models. We discuss the reasons for this, including the importance of using realized power variation as a predictor. Bayesian model averaging provides further improvements to density forecasts when we move away from linear models and average over specifications that allow for GARCH effects in the innovations to log-volatility. Copyright © 2009 John Wiley & Sons, Ltd.
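A minimal sketch of the averaging step, assuming BIC-approximated posterior model probabilities over generic linear models of log realized volatility. The function name and interface are hypothetical, not the paper's, and this covers only point-forecast combination, not the density forecasts the abstract emphasizes:

```python
import numpy as np

def bma_forecast(y, models):
    """Combine one-step forecasts by Bayesian model averaging, with posterior
    model probabilities approximated via exp(-BIC/2), a common large-sample
    shortcut. `models` maps a name to (X, x_next): the in-sample regressor
    matrix and the next-period regressor vector, both without an intercept."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    names, fcasts, bics = [], [], []
    for name, (X, x_next) in models.items():
        X1 = np.column_stack([np.ones(n), np.asarray(X, dtype=float)])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        rss = float(((y - X1 @ beta) ** 2).sum())
        k = X1.shape[1]
        bics.append(n * np.log(rss / n) + k * np.log(n))   # Gaussian-regression BIC
        fcasts.append(float(np.r_[1.0, x_next] @ beta))    # prepend the intercept
        names.append(name)
    bics = np.asarray(bics)
    w = np.exp(-0.5 * (bics - bics.min()))  # exp(-BIC/2), proportional to post. prob.
    w = w / w.sum()
    return float(w @ np.asarray(fcasts)), dict(zip(names, w))
```

The weights automatically favor the specification (e.g. one including realized power variation) whose in-sample fit justifies its parameter count.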
5.
We propose a kernel-based Bayesian framework for the analysis of stochastic frontiers and efficiency measurement. The primary feature of this framework is that the unknown distribution of inefficiency is approximated by a transformed Rosenblatt-Parzen kernel density estimator. To justify the kernel-based model, we conduct a Monte Carlo study and also apply the model to a panel of U.S. large banks. Simulation results show that the kernel-based model is capable of providing more precise estimation and prediction results than the commonly-used exponential stochastic frontier model. The Bayes factor also favors the kernel-based model over the exponential model in the empirical application.
6.
This paper develops a Bayesian method for quantile regression for dichotomous response data. The frequentist approach to this type of regression has proven problematic in both optimizing the objective function and making inferences on the parameters. By accepting additional distributional assumptions on the error terms, the Bayesian method proposed sets the problem in a parametric framework in which these problems are avoided. To test the applicability of the method, we ran two Monte Carlo experiments and applied it to Horowitz's (1993) often-studied work-trip mode choice dataset. Compared to previous estimates for the latter dataset, the method proposed leads to a different economic interpretation. Copyright © 2010 John Wiley & Sons, Ltd.
7.
We employ a Bayesian approach to analyze financial markets experimental data. We estimate a structural model of sequential trading in which trading decisions are classified in five types: private-information based, noise, herd, contrarian and irresolute. Through Monte Carlo simulation, we estimate the posterior distributions of the structural parameters. This technique allows us to compare several non-nested models of trade arrival. We find that the model best fitting the data is that in which a proportion of trades stems from subjects who do not rely only on their private information once the difference between the number of previous buy and sell decisions is at least two. In this model, the majority of trades stem from subjects following their private information. There is also a large proportion of noise trading activity, which is biased towards buying the asset. We observe little herding and contrarianism, as theory suggests. Finally, we observe a significant proportion of (irresolute) subjects who follow their own private information when it agrees with public information, but abstain from trading when it does not.
8.
Real and monetary macro models have, in parallel, exploited the potential of various preferences to account for empirical facts. This paper brings the two literatures together by estimating time non-separable preferences with habit formation in consumption that nest several commonly used preferences. In the absence of wealth effects and external habits, these preferences fail to generate the observed inflation inertia and output persistence after a monetary policy shock. Furthermore, the data strongly reject these preferences in favor of preferences with external habits. An alternative solution is to include habit-adjusted intermediate wealth effect preferences, which are able to simultaneously generate sluggish responses of the variables to a monetary policy shock and fit the data better.
9.
Domenico Giannone Michele Lenza Daphne Momferatou Luca Onorante 《International Journal of Forecasting》2014
In this paper we construct a large Bayesian vector autoregressive (BVAR) model for the euro area that captures the complex dynamic inter-relationships between the main components of the Harmonized Index of Consumer Prices (HICP) and their determinants. The model generates accurate conditional and unconditional forecasts in real time. We find a significant pass-through effect of oil-price shocks on core inflation and a strong Phillips curve during the Great Recession.
10.
《Journal of econometrics》2005,126(2):355-384
In this paper, we propose simulation-based Bayesian inference procedures in a cost system that includes the cost function and the cost share equations augmented to accommodate technical and allocative inefficiency. Markov chain Monte Carlo techniques are proposed and implemented for Bayesian inferences on costs of technical and allocative inefficiency, input price distortions and over- (under-) use of inputs. We show how to estimate a well-specified translog system (in which the error terms in the cost and cost share equations are internally consistent) in a random effects framework. The new methods are illustrated using panel data on U.S. commercial banks.
11.
《International Journal of Forecasting》2019,35(2):458-473
This paper considers Bayesian estimation of the threshold vector error correction (TVECM) model in moderate to large dimensions. Using the lagged cointegrating error as a threshold variable gives rise to additional difficulties that are typically solved by utilizing large-sample approximations. By relying on Markov chain Monte Carlo methods, we are able to circumvent these issues and avoid computationally prohibitive estimation strategies such as the grid search. Due to the proliferation of parameters, we use novel global-local shrinkage priors in the spirit of Griffin and Brown (2010). We illustrate the merits of our approach in an application to five exchange rates vis-à-vis the US dollar by means of a forecasting comparison. Our findings indicate that adopting a non-linear modeling approach improves the predictive accuracy for most currencies relative to a set of simpler benchmark models and the random walk.
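The global-local idea can be sketched as a generic normal-gamma hierarchy in the spirit of Griffin and Brown (2010); the hyperparameter names below are illustrative, not necessarily the paper's exact prior:

```latex
% Generic normal-gamma global-local shrinkage prior for a coefficient beta_j:
% a global scale tau^2 pulls all coefficients toward zero, while a local
% scale psi_j lets individual coefficients escape the shrinkage.
\beta_j \mid \psi_j, \tau^2 \sim \mathcal{N}\bigl(0, \tau^2 \psi_j\bigr),
\qquad
\psi_j \sim \mathcal{G}(\vartheta, \vartheta),
\qquad
\tau^2 \sim \mathcal{G}(c_0, c_1)
```

With hundreds of TVECM coefficients per regime, this hierarchy shrinks most of them aggressively while leaving the few important ones nearly untouched.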
12.
Trends and cycles in economic time series: A Bayesian approach
Trends and cyclical components in economic time series are modeled in a Bayesian framework. This enables prior notions about the duration of cycles to be used, while the generalized class of stochastic cycles employed allows the possibility of relatively smooth cycles being extracted. The posterior distributions of such underlying cycles can be very informative for policy makers, particularly with regard to the size and direction of the output gap and potential turning points. From the technical point of view a contribution is made in investigating the most appropriate prior distributions for the parameters in the cyclical components and in developing Markov chain Monte Carlo methods for both univariate and multivariate models. Applications to US macroeconomic series are presented.
13.
Jesús Crespo Cuaresma Martin Feldkircher Florian Huber 《Journal of Applied Econometrics》2016,31(7):1371-1391
This paper develops a Bayesian variant of global vector autoregressive (B-GVAR) models to forecast an international set of macroeconomic and financial variables. We propose a set of hierarchical priors and compare the predictive performance of B-GVAR models in terms of point and density forecasts for one-quarter-ahead and four-quarter-ahead forecast horizons. We find that forecasts can be improved by employing a global framework and hierarchical priors which induce country-specific degrees of shrinkage on the coefficients of the GVAR model. Forecasts from various B-GVAR specifications tend to outperform forecasts from a naive univariate model, a global model without shrinkage on the parameters and country-specific vector autoregressions. Copyright © 2016 John Wiley & Sons, Ltd.
14.
This paper undertakes a Bayesian analysis of optimal monetary policy for the U.K. We estimate a suite of monetary-policy models that include both forward- and backward-looking representations as well as large- and small-scale models. We find an optimal simple Taylor-type rule that accounts for both model and parameter uncertainty. For the most part, backward-looking models are highly fault tolerant with respect to policies optimized for forward-looking representations, while forward-looking models have low fault tolerance with respect to policies optimized for backward-looking representations. In addition, backward-looking models often have lower posterior probabilities than forward-looking models. Bayesian policies therefore have characteristics suitable for inflation and output stabilization in forward-looking models.
15.
We study contagion between Real Estate Investment Trusts (REITs) and the equity market in the U.S. over four sub-samples covering January 2003 to December 2017, using Bayesian nonparametric quantile-on-quantile (QQ) regressions with heteroskedasticity. We find that the spillovers from REITs onto the equity market have varied over time and across the quantiles defining the states of these two markets in the four sub-samples, providing evidence of shift-contagion. Further, contagion from REITs to the stock market increased in particular during the global financial crisis, and also over the period corresponding to the European sovereign debt crisis, relative to the pre-crisis period. Our main findings are robust to alternative specifications of the benchmark Bayesian QQ model, especially when we control for omitted-variable bias using the heteroskedastic error structure. Our results have important implications for various agents in the economy, namely academics, investors and policymakers.
16.
《Socio》2023
Sustainable economic development in the future is driven by public policy on regional, national and global levels. Therefore a comprehensive policy analysis is needed that provides consistent and effective policy support. However, a general problem facing classical policy analysis is model uncertainty. All actors, those involved in the policy choice and those in the policy analysis, are fundamentally uncertain which of the different models corresponds to the true generative mechanism that represents the natural, economic, or social phenomena on which policy analysis is focused. In this paper, we propose a general framework that explicitly incorporates model uncertainty into the derivation of a policy choice. Incorporating model uncertainty into the analysis is limited by the very high required computational effort. In this regard, we apply metamodeling techniques as a way to reduce computational complexity. We demonstrate the effect of different metamodel types using a reduced model for the case of CAADP in Senegal. Furthermore, we explicitly show that ignoring model uncertainty leads to inefficient policy choices and results in a large waste of public resources.
17.
Bayesian Hypothesis Testing: A Reference Approach
For any probability model M = {p(x|θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 = {θ = θ0} is best considered as a formal decision problem on whether to use (a0), or not to use (a1), the simpler probability model (or null model) M0 = {p(x|θ0, ω), ω ∈ Ω}, where the loss difference L(a0, θ, ω) − L(a1, θ, ω) is proportional to the amount of information δ(θ0, θ, ω) which would be lost if the simplified model M0 were used as a proxy for the assumed model M. For any prior distribution π(θ, ω), the appropriate normative solution is obtained by rejecting the null model M0 whenever the corresponding posterior expectation ∫∫ δ(θ0, θ, ω) π(θ, ω|x) dθ dω is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication. Information theory may be used to specify a prior, the reference prior, which depends only on the assumed model M and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ0, x) = ∫ δ π(δ|x) dδ, of the amount of information δ(θ0, θ, ω) which could be lost if the null model were used provides an attractive nonnegative test function, the intrinsic statistic, which is invariant under reparametrization. The intrinsic statistic d(θ0, x) is measured in units of information and is easily calibrated (for any sample size and any dimensionality) in terms of some average log-likelihood ratios. The corresponding Bayes decision rule, the Bayesian reference criterion (BRC), indicates that the null model M0 should be rejected only if the posterior expected loss of information from using the simplified model M0 is too large or, equivalently, if the associated expected average log-likelihood ratio is large enough.
The BRC criterion provides a general reference Bayesian solution to hypothesis testing which does not assume a probability mass concentrated on M0 and, hence, is immune to Lindley's paradox. The theory is illustrated within the context of multivariate normal data, where it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate frequentist hypothesis testing.
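In the abstract's own notation, the reference test can be written compactly (d* is the calibration threshold, expressed in the same information units, e.g. as an average log-likelihood ratio):

```latex
% Intrinsic statistic: reference posterior expected information loss from
% using the null model M_0 as a proxy for the assumed model M.
d(\theta_0, x) \;=\;
  \int_{\Theta}\!\int_{\Omega}
  \delta(\theta_0, \theta, \omega)\,
  \pi(\theta, \omega \mid x)\,
  \mathrm{d}\theta\, \mathrm{d}\omega
% Bayesian reference criterion (BRC):
\text{reject } M_0
  \;\Longleftrightarrow\;
  d(\theta_0, x) > d^{*}
```

Because d(θ0, x) is a posterior expectation rather than a posterior probability of H0, no prior mass needs to be concentrated on the null, which is what makes the rule immune to Lindley's paradox.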
18.
Pairwise majority voting over alternative nonlinear income tax schedules is considered when there is a continuum of individuals who differ in their labor productivities, which is private information, but share the same quasilinear-in-consumption preferences for labor and consumption. Voting is restricted to those schedules that are selfishly optimal for some individual. The analysis extends that of Brett and Weymark (2016) by adding a minimum-utility constraint to their incentive-compatibility and government budget constraints. It also extends the analysis of Röell (2012) and Bohn and Stuart (2013) by providing a complete characterization of the selfishly optimal tax schedules. It is shown that individuals have single-peaked preferences over the set of selfishly optimal tax schedules, and so the schedule proposed by the median skill type is a Condorcet winner.
19.
This paper addresses the design for supporting the optimal decision on tree cutting in a Portuguese eucalyptus production forest. Trees are usually cut at the biological rotation age, i.e. the age which maximizes the yearly volume production. Here we aim to maximize the long-term yearly volume yield net of harvest costs. We consider different growth curves, with a known prior distribution, that can occur in each rotation. The optimal cutting time at each rotation depends both on the current growth curve and on the prior distribution. Different priors and strategies are compared with respect to long-term production. Optimizing the cutting time allows an improvement of 16% in the long-term volume production. We conclude that the use of optimal designs can be beneficial for tree cutting in modern production forests.
20.
Summary  Many experimental situations with controllable, independent design variables have an associated null hypothesis H0 of zero dependence of the observations on the design variables. The necessity of randomization of the design variables for asymptotic discriminability between H0 and its complement is considered in terms of likelihood ratios. Applications are made to stochastic processes and comparative experiments.