Similar Documents
20 similar documents found.
1.
Nonlinear time series models have become fashionable tools to describe and forecast a variety of economic time series. A closer look at reported empirical studies, however, reveals that these models apparently fit well in‐sample, but rarely show a substantial improvement in out‐of‐sample forecasts, at least over linear models. One of the many possible reasons for this finding is the use of inappropriate model selection criteria and forecast evaluation criteria. In this paper we therefore propose a novel criterion, which we believe does more justice to the very nature of nonlinear models. Simulations show that this criterion outperforms those criteria currently in use, in the sense that the true nonlinear model is more often found to perform better in out‐of‐sample forecasting than a benchmark linear model. An empirical illustration for US GDP emphasizes its relevance.
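As a rough illustration of the kind of out‐of‐sample comparison the abstract refers to, the sketch below simulates a threshold autoregression, fits a linear AR(1) and a two‐regime threshold AR(1) on a training sample, and compares hold‐out mean squared forecast errors. The simulated DGP, the known threshold at zero, and the squared‐error loss are illustrative assumptions, not the criterion proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-regime threshold AR(1): the "true" nonlinear DGP (illustrative choice).
T = 500
y = np.zeros(T)
for t in range(1, T):
    phi = 0.8 if y[t - 1] <= 0 else 0.2
    y[t] = phi * y[t - 1] + rng.normal()

split = 400                      # estimation sample / hold-out split
x, target = y[:-1], y[1:]        # lagged regressor and dependent variable

def ols_slope(xs, ys):
    """Least-squares slope of ys on xs (no intercept, for brevity)."""
    return xs @ ys / (xs @ xs)

# Linear AR(1) benchmark, estimated on the training sample.
beta_lin = ols_slope(x[:split], target[:split])

# Threshold AR(1): separate slopes below/above the (assumed known) threshold 0.
lo, hi = x[:split] <= 0, x[:split] > 0
beta_lo = ols_slope(x[:split][lo], target[:split][lo])
beta_hi = ols_slope(x[:split][hi], target[:split][hi])

# One-step-ahead out-of-sample forecasts on the hold-out sample.
x_out, y_out = x[split:], target[split:]
f_lin = beta_lin * x_out
f_setar = np.where(x_out <= 0, beta_lo * x_out, beta_hi * x_out)

mse = lambda f: np.mean((y_out - f) ** 2)
print(f"out-of-sample MSE, linear AR(1): {mse(f_lin):.4f}")
print(f"out-of-sample MSE, threshold AR: {mse(f_setar):.4f}")
```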

2.
This paper examines the recent empirical literature in Islamic banking and finance, highlights the main findings and provides a guide for future research. Early studies focus on the efficiency, production technology and general performance features of Islamic versus conventional banks, whereas more recent work looks at profit‐sharing and loss‐bearing behaviour, competition, risks as well as other dimensions such as small business lending and financial inclusion. Apart from key exceptions, the empirical literature suggests no major differences between Islamic and conventional banks in terms of their efficiency, competition and risk features (although small Islamic banks are found to be less risky than their conventional counterparts). There is some evidence that Islamic finance aids inclusion and financial sector development. Results from the empirical finance literature, dominated by studies that focus on the risk/return features of mutual funds, find that Islamic funds perform as well as, if not better than, conventional funds – there is little evidence that they perform worse than standard industry benchmarks.

3.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small‐scale semistructural models at the national and Euro area level, institutional forecasts (Organization for Economic Co‐operation and Development), and pooling. Our small‐scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the relevant fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance using the mean square and mean absolute error criteria at different horizons. Overall, simple time‐series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward‐biased forecasts for the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and to the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic changes. A bootstrap experiment highlights that, even when the data are generated using the estimated small‐scale multi‐country model, simple time‐series models can produce more accurate forecasts, because of their parsimonious specification.

4.
This paper proposes new error correction‐based cointegration tests for panel data. The limiting distributions of the tests are derived and critical values provided. Our simulation results suggest that the tests have good small‐sample properties with small size distortions and high power relative to other popular residual‐based panel cointegration tests. In our empirical application, we present evidence suggesting that international healthcare expenditures and GDP are cointegrated once the possibility of an invalid common factor restriction has been accounted for.

5.
We establish the consistency of the selection procedures embodied in PcGets, and compare their performance with other model selection criteria in linear regressions. The significance levels embedded in the PcGets Liberal and Conservative algorithms coincide in very large samples with those implicit in the Hannan–Quinn (HQ) and Schwarz information criteria (SIC), respectively. Thus, both PcGets rules are consistent under the same conditions as HQ and SIC. However, PcGets has a rather different finite‐sample behaviour. Pre‐selecting to remove many of the candidate variables is confirmed as enhancing the performance of SIC.
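For reference, the sketch below computes the AIC, SIC and HQ criteria for a set of nested candidate regressions, since these are the benchmarks against which the PcGets rules are compared. The simulated data, the nested candidate set, and the log‐variance form of the criteria are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a regression in which only the first 2 of 6 candidate regressors matter.
n, k_all = 200, 6
X = rng.normal(size=(n, k_all))
y = 1.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)

def info_criteria(y, X):
    """Return (AIC, SIC, HQ) for an OLS fit with intercept, based on log sigma^2."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / n
    k = Z.shape[1]
    aic = np.log(sigma2) + 2 * k / n
    sic = np.log(sigma2) + k * np.log(n) / n
    hq = np.log(sigma2) + 2 * k * np.log(np.log(n)) / n
    return aic, sic, hq

# Evaluate nested candidate models using the first j regressors, j = 1..6;
# each criterion is minimized over the candidate set.
for j in range(1, k_all + 1):
    aic, sic, hq = info_criteria(y, X[:, :j])
    print(f"regressors 1..{j}:  AIC={aic:.3f}  SIC={sic:.3f}  HQ={hq:.3f}")
```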

6.
This paper compares the forecasting performance of models that have been proposed for forecasting in the presence of structural breaks. They differ in their treatment of the break process, the model applied in each regime and the out‐of‐sample probability of a break. In an extensive empirical evaluation, we demonstrate the presence of breaks and their importance for forecasting. We find no single model that consistently works best in the presence of breaks. In many cases, the formal modeling of the break process is important in achieving a good forecast performance. However, there are also many cases where rolling window forecasts perform well. Copyright © 2014 John Wiley & Sons, Ltd.
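The rolling‐window forecasts mentioned above can be sketched as follows: an AR(1) is re‐estimated on a short moving window and compared with an expanding‐window fit on data containing a break. The AR(1) model, the window length, and the simulated break in the autoregressive coefficient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(1) whose coefficient breaks halfway through the sample.
T, break_point = 400, 200
y = np.zeros(T)
for t in range(1, T):
    phi = 0.9 if t < break_point else 0.3
    y[t] = phi * y[t - 1] + rng.normal()

def ar1_forecast(history):
    """One-step-ahead AR(1) forecast from an OLS fit on the supplied history."""
    x, z = history[:-1], history[1:]
    phi_hat = x @ z / (x @ x)
    return phi_hat * history[-1]

window = 60                      # rolling-window length (illustrative)
errors_roll, errors_exp = [], []
for t in range(window, T - 1):
    f_roll = ar1_forecast(y[t - window:t + 1])   # rolling window: recent data only
    f_exp = ar1_forecast(y[:t + 1])              # expanding window: all past data
    errors_roll.append(y[t + 1] - f_roll)
    errors_exp.append(y[t + 1] - f_exp)

print("RMSE, rolling window  :", np.sqrt(np.mean(np.square(errors_roll))))
print("RMSE, expanding window:", np.sqrt(np.mean(np.square(errors_exp))))
```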

7.
Despite 40 years of research on the relationship between corporate environmental performance (CEP) and corporate financial performance (CFP), there is no generally accepted theoretical framework that explains the contradictory results that have emerged. This unsatisfactory status may be attributed to the fact that linear models dominate the research. Based on an international sample of 2361 firm‐years from 2008 to 2012, we find empirical evidence of a non‐linear, specifically a U‐shaped, relationship between carbon performance and profitability as well as between waste intensity and profitability. The same result holds for the relationship between carbon performance and stock market performance, but solely for manufacturing industries. Our empirical findings provide evidence for the theoretical framework of a ‘too‐little‐of‐a‐good‐thing’ (TLGT) effect, which indicates that the type of relationship (positive, negative) depends on the level of CEP. More precisely, there is a negative CEP–CFP relationship for companies with low CEP and a positive association for high CEP. Copyright © 2015 John Wiley & Sons, Ltd and ERP Environment
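A U‐shaped CEP–CFP relationship of this kind is typically tested by adding a squared term to the performance regression and checking the sign pattern and turning point. The sketch below, using simulated data and statsmodels, is a stylized illustration of that idea rather than the authors' specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated firm-level data: profitability is a U-shaped function of
# environmental performance (CEP), plus noise. Purely illustrative numbers.
n = 2000
cep = rng.uniform(0, 1, n)
profit = 2.0 - 3.0 * cep + 2.5 * cep ** 2 + rng.normal(scale=0.3, size=n)

# Quadratic specification: a negative linear term together with a positive,
# significant squared term is the usual evidence for a U shape.
X = sm.add_constant(np.column_stack([cep, cep ** 2]))
fit = sm.OLS(profit, X).fit()
print(fit.summary(xname=["const", "CEP", "CEP^2"]))

# Turning point of the estimated parabola, -b1/(2*b2); a U shape also requires
# this point to lie inside the observed range of CEP.
b0, b1, b2 = fit.params
print("estimated turning point:", -b1 / (2 * b2))
```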

8.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub‐sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]‐type statistics, our proposed tests are based on the corresponding functions of sub‐sample implementations of the well‐known maximal recursive‐estimates and re‐scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite‐sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.

9.
The evidence from the literature on forecast combination shows that combinations generally perform well. We discuss here how the accuracy and diversity of the methods being combined and the robustness of the combination rule can influence performance, and illustrate this by showing that a simple, robust combination of a subset of the nine methods used in the M4 competition’s best combination performs almost as well as that forecast, and is easier to implement. We screened out methods with low accuracy or highly correlated errors and combined the remaining methods using a trimmed mean. We also investigated the accuracy risk (the risk of a bad forecast), proposing two new accuracy measures for this purpose. Our trimmed mean and the trimmed mean of all nine methods both had lower accuracy risk than either the best combination in the M4 competition or the simple mean of the nine methods.
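A minimal sketch of the screen‐then‐trim combination described above: methods with poor accuracy or highly correlated errors are dropped and the survivors are combined with a trimmed mean. The accuracy and correlation thresholds, the trimming fraction, and the simulated forecasts are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(4)

# Simulated hold-out data: actuals plus forecasts from 9 methods with different
# accuracy and error correlation (illustrative stand-ins for the M4 methods).
T, n_methods = 200, 9
actual = rng.normal(size=T)
noise = rng.normal(size=(T, n_methods))
noise[:, 7] = noise[:, 6] + 0.1 * rng.normal(size=T)    # methods 7 and 8 are near-duplicates
scale = np.array([1, 1, 1, 1, 1, 1, 1, 1, 3.0])         # method 9 is inaccurate
forecasts = actual[:, None] + scale * noise

errors = forecasts - actual[:, None]
rmse = np.sqrt(np.mean(errors ** 2, axis=0))

# Screen 1: drop methods whose RMSE is far above the best method (threshold assumed).
keep = rmse <= 1.5 * rmse.min()

# Screen 2: among the survivors, drop one of any pair with highly correlated errors.
corr = np.corrcoef(errors.T)
for i in range(n_methods):
    for j in range(i + 1, n_methods):
        if keep[i] and keep[j] and corr[i, j] > 0.95:
            keep[j] = False        # keep the first of the pair, drop the other

# Combine the remaining methods with a 20% trimmed mean across methods.
combo = trim_mean(forecasts[:, keep], proportiontocut=0.2, axis=1)
print("methods kept:", np.where(keep)[0])
print("combination RMSE:", np.sqrt(np.mean((combo - actual) ** 2)))
```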

10.
Drawing from resource‐based theory, we argue that family firm franchisors behave and perform differently compared to non‐family firm franchisors. Our theorizing suggests that compared to a non‐family firm franchisor, a family firm franchisor cultivates stronger relationships with franchisees and provides them with more training. Yet, we predict that a family firm franchisor achieves lower performance than a non‐family firm franchisor. We argue, however, that this performance relationship reverses itself when family firm franchisors are older and larger. We test our hypotheses with a longitudinal dataset including a matched‐pair sample of private U.S. family and non‐family firm franchisors.

11.
This paper develops a very simple test for the null hypothesis of no cointegration in panel data. The test is general enough to allow for heteroskedastic and serially correlated errors, unit‐specific time trends, cross‐sectional dependence and unknown structural breaks in both the intercept and slope of the cointegrated regression, which may be located at different dates for different units. The limiting distribution of the test is derived, and is found to be normal and free of nuisance parameters under the null. A small simulation study is also conducted to investigate the small‐sample properties of the test. In our empirical application, we provide new evidence concerning the purchasing power parity hypothesis.

12.
This article presents a formal explanation of the forecast combination puzzle, that simple combinations of point forecasts are repeatedly found to outperform sophisticated weighted combinations in empirical applications. The explanation lies in the effect of finite‐sample error in estimating the combining weights. A small Monte Carlo study and a reappraisal of an empirical study by Stock and Watson [Federal Reserve Bank of Richmond Economic Quarterly (2003) Vol. 89/3, pp. 71–90] support this explanation. The Monte Carlo evidence, together with a large‐sample approximation to the variance of the combining weight, also supports the popular recommendation to ignore forecast error covariances in estimating the weight.
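The finite‐sample mechanism can be illustrated with a small Monte Carlo along the following lines: the variance‐minimizing weight for combining two unbiased forecasts is estimated from a short history, and the estimation error can make the estimated‐weight combination less accurate than the simple average. The error variances, their correlation, and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two unbiased forecasts with correlated errors; forecast 1 is slightly better.
sigma1, sigma2, rho = 1.0, 1.2, 0.5
cov = np.array([[sigma1**2, rho * sigma1 * sigma2],
                [rho * sigma1 * sigma2, sigma2**2]])

n_train, n_test, n_rep = 30, 200, 2000
mse_simple, mse_estimated = [], []

for _ in range(n_rep):
    e_train = rng.multivariate_normal([0, 0], cov, size=n_train)
    e_test = rng.multivariate_normal([0, 0], cov, size=n_test)

    # Estimate the variance-minimizing weight on forecast 1 from the training errors.
    v1, v2 = e_train.var(axis=0)
    c12 = np.cov(e_train.T)[0, 1]
    w_hat = (v2 - c12) / (v1 + v2 - 2 * c12)

    # Combined forecast errors on the test sample under each weighting rule.
    err_simple = 0.5 * e_test[:, 0] + 0.5 * e_test[:, 1]
    err_est = w_hat * e_test[:, 0] + (1 - w_hat) * e_test[:, 1]
    mse_simple.append(np.mean(err_simple ** 2))
    mse_estimated.append(np.mean(err_est ** 2))

print("average MSE, equal weights    :", np.mean(mse_simple))
print("average MSE, estimated weights:", np.mean(mse_estimated))
```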

13.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small‐ and medium‐sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite‐sample distribution of the long‐run variance estimator used in the KPSS test, while the second source of the size distortions is the serial correlation not captured by the long‐run variance estimator because of a too narrow choice of truncation lag parameter. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite‐sample distribution of the KPSS test statistic, conditional on the time‐series dimension and the truncation lag parameter, is used. Hence, finite‐sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is a lower raw power against a non‐stationary alternative hypothesis.
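Finite‐sample critical values of the kind discussed above can be tabulated by simulation conditional on the sample size and the truncation lag. A minimal sketch using the KPSS implementation in statsmodels follows; the i.i.d. Gaussian null, the replication count and the (T, lag) grid are illustrative choices, not the article's design.

```python
import warnings
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(6)

def finite_sample_cv(T, nlags, n_rep=2000, level=0.95):
    """Simulate the KPSS (level-stationarity) statistic under an i.i.d. Gaussian
    null for sample size T and truncation lag nlags, and return the critical
    value at the given level."""
    stats = np.empty(n_rep)
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")        # kpss warns when the p-value is off-table
        for r in range(n_rep):
            y = rng.normal(size=T)
            stats[r], *_ = kpss(y, regression="c", nlags=nlags)
    return np.quantile(stats, level)

# Finite-sample 5% critical values for a few (T, nlags) pairs, compared with the
# asymptotic value of 0.463 used by the standard level-stationarity KPSS test.
for T in (25, 50, 100):
    for nlags in (2, 4):
        cv = finite_sample_cv(T, nlags)
        print(f"T={T:4d}  nlags={nlags}:  5% critical value ~ {cv:.3f}")
print("asymptotic 5% critical value: 0.463")
```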

14.
We propose new information criteria for impulse response function matching estimators (IRFMEs). These estimators yield sampling distributions of the structural parameters of dynamic stochastic general equilibrium (DSGE) models by minimizing the distance between sample and theoretical impulse responses. First, we propose an information criterion to select only the responses that produce consistent estimates of the true but unknown structural parameters: the Valid Impulse Response Selection Criterion (VIRSC). The criterion is especially useful for mis-specified models. Second, we propose a criterion to select the impulse responses that are most informative about DSGE model parameters: the Relevant Impulse Response Selection Criterion (RIRSC). These criteria can be used in combination to select the subset of valid impulse response functions with minimal dimension that yields asymptotically efficient estimators. The criteria are general enough to apply to impulse responses estimated by VARs, local projections, and simulation methods. We show that the use of our criteria significantly affects estimates and inference about key parameters of two well-known new Keynesian DSGE models. Monte Carlo evidence indicates that the criteria yield gains in terms of finite sample bias as well as offering test statistics whose behavior is better approximated by the first-order asymptotic theory. Thus, our criteria improve existing methods used to implement IRFMEs.

15.
In this article, we investigate the behaviour of a number of methods for estimating the co‐integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), with the commonly used sequential approach of Johansen [Likelihood‐based Inference in Cointegrated Vector Autoregressive Models (1996)] based around the use of either asymptotic or wild bootstrap‐based likelihood ratio type tests. Complementing recent work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite‐sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC‐based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct co‐integration rank across different values of the co‐integration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall as it avoids a significant tendency seen in the BIC‐based method to over‐estimate the co‐integration rank in relatively small sample sizes.
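For orientation, the sketch below runs the asymptotic trace‐test version of the Johansen sequential procedure on a simulated system, using the coint_johansen routine in statsmodels; it does not implement the wild bootstrap or information‐criteria variants studied in the article, and the three‐variable DGP with rank one is an illustrative assumption.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(10)

# Simulate a 3-variable system with co-integration rank 1: two independent random
# walks plus one series that is a stationary combination of the first (illustrative DGP).
T = 400
rw = np.cumsum(rng.normal(size=(T, 2)), axis=0)
y3 = rw[:, 0] + rng.normal(scale=0.5, size=T)        # co-integrated with the first walk
data = np.column_stack([rw, y3])

# Johansen sequential trace-test procedure: starting from r = 0, select the first
# rank whose trace statistic falls below the 5% asymptotic critical value.
res = coint_johansen(data, det_order=0, k_ar_diff=1)
rank = data.shape[1]
for r in range(data.shape[1]):
    trace_stat = res.lr1[r]
    cv_5pct = res.cvt[r, 1]            # cvt columns hold the 90/95/99% critical values
    print(f"H0: rank <= {r}:  trace = {trace_stat:.2f},  5% cv = {cv_5pct:.2f}")
    if trace_stat < cv_5pct:
        rank = r
        break
print("selected co-integration rank:", rank)
```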

16.
In this paper, a new model to analyze the comovements in the volatilities of a portfolio is proposed. The Pure Variance Common Features model is a factor model for the conditional variances of a portfolio of assets, designed to isolate a small number of variance features that drive all assets’ volatilities. It decomposes the conditional variance into a short-run idiosyncratic component (a low-order ARCH process) and a long-run component (the variance factors). An empirical example provides evidence that models with very few variance features perform well in capturing the long-run common volatilities of the equity components of the Dow Jones.

17.
We extend the analytical results for reduced-form realized volatility-based forecasting in ABM (2004) to allow for market microstructure frictions in the observed high-frequency returns. Our results build on the eigenfunction representation of the general stochastic volatility class of models developed by Meddahi (2001). In addition to traditional realized volatility measures and the role of the underlying sampling frequencies, we also explore the forecasting performance of several alternative volatility measures designed to mitigate the impact of the microstructure noise. Our analysis is facilitated by a simple unified quadratic form representation for all these estimators. Our results suggest that the detrimental impact of the noise on forecast accuracy can be substantial. Moreover, the linear forecasts based on a simple-to-implement ‘average’ (or ‘subsampled’) estimator obtained by averaging standard sparsely sampled realized volatility measures generally perform on par with the best alternative robust measures.
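The ‘average’ (subsampled) estimator mentioned above can be sketched as follows: sparse realized variances are computed on several offset grids and averaged, which damps the upward bias that microstructure noise imparts to the all-data estimator. The simulated price process, the noise level and the sampling frequencies are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# One trading day of "true" (efficient) log-prices observed every second,
# contaminated with i.i.d. microstructure noise. Purely illustrative calibration.
n_sec = 23400                      # 6.5 hours of one-second observations
true_vol = 0.01                    # daily return volatility of the efficient price
efficient = np.cumsum(rng.normal(scale=true_vol / np.sqrt(n_sec), size=n_sec))
observed = efficient + rng.normal(scale=2e-4, size=n_sec)   # noisy log-prices

def realized_variance(prices, step, offset=0):
    """Sum of squared returns sampled every `step` observations, starting at `offset`."""
    sampled = prices[offset::step]
    returns = np.diff(sampled)
    return np.sum(returns ** 2)

step = 300                         # sparse sampling: 5-minute returns
rv_sparse = realized_variance(observed, step)

# Subsampled ("average") estimator: average the sparse RV over all offset grids.
rv_avg = np.mean([realized_variance(observed, step, o) for o in range(step)])

print("true integrated variance :", true_vol ** 2)
print("all-data (1-second) RV   :", realized_variance(observed, 1))
print("sparse 5-minute RV       :", rv_sparse)
print("subsampled/average RV    :", rv_avg)
```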

18.
This study investigates the small‐sample performance of meta‐regression methods for detecting and estimating genuine empirical effects in research literatures tainted by publication selection. Publication selection exists when editors, reviewers or researchers have a preference for statistically significant results. Meta‐regression methods are found to be robust against publication selection. Even if a literature is dominated by large and unknown misspecification biases, precision‐effect testing and joint precision‐effect and meta‐significance testing can provide viable strategies for detecting genuine empirical effects. Publication biases are greatly reduced by combining two biased estimates, the estimated meta‐regression coefficient on precision (1/Se) and the unadjusted‐average effect.
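Precision‐effect testing of the kind described here amounts to a meta‐regression of estimated effects on their standard errors (equivalently, of t‐statistics on precision). The sketch below simulates a publication‐selected literature and fits that regression by weighted least squares; the selection rule, effect size and standard‐error distribution are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)

# Simulate a literature: each study estimates a true effect of 0.2 with its own
# standard error, but only "significant" estimates (|t| > 1.96) get published.
true_effect, n_studies = 0.2, 5000
se = rng.uniform(0.05, 0.5, n_studies)
estimate = true_effect + rng.normal(scale=se)
published = np.abs(estimate / se) > 1.96          # crude publication-selection rule
est, se = estimate[published], se[published]

# FAT-PET meta-regression: estimate_i = b0 + b1 * se_i + u_i, estimated by WLS
# with weights 1/se^2. b1 captures publication bias (funnel-asymmetry test);
# b0 is the precision-effect estimate of the genuine underlying effect.
X = sm.add_constant(se)
fit = sm.WLS(est, X, weights=1.0 / se ** 2).fit()
print(fit.summary(xname=["precision-effect (b0)", "bias term on SE (b1)"]))
print("naive unadjusted average effect:", est.mean())
print("bias-corrected effect (b0)     :", fit.params[0])
```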

19.
We consider the role of imperfect competition in explaining the relative price of non‐traded to traded goods within the Balassa–Samuelson framework. Under imperfect competition in these two sectors, relative prices depend on both productivity and mark‐up differentials. We test this hypothesis using a panel of sectors for 12 OECD countries. The empirical evidence suggests that relative price movements are well explained by productivity and mark‐up differentials.

20.
Parametric mixture models are commonly used in applied work, especially empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the inference question on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated due to (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where there is lack of point identification. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inference and projection methods.
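A minimal sketch of the parametric-bootstrap idea for the mixing probability, here in a two-component Gaussian mixture fitted with scikit-learn: resample from the fitted mixture, refit, and form a percentile interval for the mixing weight. The data-generating values, the Gaussian component family, the pooled bootstrap variance, and the use of GaussianMixture are illustrative assumptions rather than the authors' model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)

def simulate(n, pi, mu0, mu1, sigma=1.0):
    """Draw n observations from pi*N(mu1, sigma^2) + (1-pi)*N(mu0, sigma^2)."""
    z = rng.random(n) < pi
    return np.where(z, rng.normal(mu1, sigma, n), rng.normal(mu0, sigma, n))

def fit_mixing_prob(x):
    """EM estimate of the mixing probability of the higher-mean component."""
    gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(x.reshape(-1, 1))
    return gm.weights_[np.argmax(gm.means_.ravel())], gm

# Observed sample; true mixing probability 0.1 (close to the boundary).
x = simulate(n=500, pi=0.1, mu0=0.0, mu1=2.0)
pi_hat, gm_hat = fit_mixing_prob(x)

# Parametric bootstrap: resample from the *fitted* mixture and refit, then read
# off a percentile confidence interval for the mixing probability. For brevity,
# the bootstrap draws use the average of the two fitted component variances.
B, boot = 200, []
means = gm_hat.means_.ravel()
hi = np.argmax(means)
for _ in range(B):
    xb = simulate(n=len(x), pi=gm_hat.weights_[hi],
                  mu0=means[1 - hi], mu1=means[hi],
                  sigma=np.sqrt(gm_hat.covariances_.ravel().mean()))
    boot.append(fit_mixing_prob(xb)[0])

lo, up = np.percentile(boot, [2.5, 97.5])
print(f"pi_hat = {pi_hat:.3f},  95% parametric-bootstrap CI = [{lo:.3f}, {up:.3f}]")
```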
