Similar documents (20 results)
1.
The introduction of the Basel II Accord has had a huge impact on financial institutions, allowing them to build credit risk models for three key risk parameters: PD (probability of default), LGD (loss given default) and EAD (exposure at default). Until recently, credit risk research has focused largely on the estimation and validation of the PD parameter, and much less on LGD modeling. In this first large-scale LGD benchmarking study, various regression techniques for modeling and predicting LGD are investigated. These include one-stage models, such as those built by ordinary least squares regression, beta regression, robust regression, ridge regression, regression splines, neural networks, support vector machines and regression trees, as well as two-stage models which combine multiple techniques. A total of 24 techniques are compared using six real-life loss datasets from major international banks. It is found that much of the variance in LGD remains unexplained, as the average prediction performance of the models in terms of R² ranges from 4% to 43%. Nonetheless, there is a clear trend that non-linear techniques, and in particular support vector machines and neural networks, perform significantly better than more traditional linear techniques. Also, two-stage models built by a combination of linear and non-linear techniques are shown to have a similarly good predictive power, with the added advantage of having a comprehensible linear model component.
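The benchmark described above can be miniaturised: fit a one-stage linear model and a crude non-linear alternative, then score each by R². The sketch below is illustrative only; the logistic loss curve, the collateral-quality driver and the binned "model" are hypothetical stand-ins for the banks' datasets and the 24 techniques compared in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical single-driver LGD data: loss severity falls non-linearly
# (here via a logistic curve) in a collateral-quality score x.
x = rng.uniform(0, 1, n)
y = 0.8 / (1 + np.exp(8 * (x - 0.5))) + rng.normal(0, 0.1, n)

def r_squared(target, fitted):
    ss_res = np.sum((target - fitted) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1 - ss_res / ss_tot

# One-stage linear model: OLS with an intercept.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2_lin = r_squared(y, X @ beta)

# Crude non-linear alternative: a piecewise-constant fit over ten bins,
# a stand-in for the trees / SVMs / networks benchmarked in the study.
bins = np.digitize(x, np.linspace(0, 1, 11))
bin_means = {b: y[bins == b].mean() for b in np.unique(bins)}
r2_nonlin = r_squared(y, np.array([bin_means[b] for b in bins]))
print(round(r2_lin, 2), round(r2_nonlin, 2))
```

On this toy data the non-linear fit attains the higher R², mirroring the paper's qualitative finding, while both leave the noise component unexplained.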

2.
Since the bubble of the late 1990s the dividend yield appears non-stationary, indicating the breakdown of the equilibrium relationship between prices and dividends. Two lines of research have developed in order to explain this apparent breakdown: first, that the dividend yield is better characterised as a non-linear process, and second, that it is subject to mean level shifts. This paper jointly models both of these characteristics by allowing non-linear reversion to a changing mean level. Results support stationarity of this model for eight international dividend yield series. The model is then applied to forecasting monthly stock returns. Evidence supports our time-varying non-linear model over linear alternatives, particularly on the basis of an out-of-sample R-squared measure and a trading rule exercise. More detailed examination of the trading rule measure suggests that investors could obtain positive returns, as the model forecasts do not imply excessive trading, so costs would not outweigh returns. Finally, the superior performance of the non-linear model largely arises from its ability to forecast negative returns, which linear models are unable to do.

3.
It is often suggested that non-linear models are needed to capture business cycle features. In this paper, we subject this view to some critical analysis. We examine two types of non-linear models designed to capture the bounce-back effect in US expansions. These non-linear models produce an improved explanation of the shape of expansions over that provided by linear models, but at the expense of making expansions last much longer than they do in reality. Interestingly, the fitted models seem to be influenced by a single point in 1958, when a large negative growth rate in GDP was followed by good positive growth in the next quarter. This seems to have become embedded as a population characteristic and results in overly long and strong expansions. That feature is likely to be a problem for forecasting if another large negative growth rate were observed.

4.
This paper analyses a model of non-linear exchange rate adjustment that extends the literature by allowing asymmetric responses to over- and under-valuations. Applying the model to Greece and Turkey, we find that adjustment is asymmetric and that exchange rates depend on the sign as well as the magnitude of deviations, being more responsive to over-valuations than to under-valuations. Our findings support and extend the argument that non-linear models of exchange rate adjustment can help to overcome anomalies in exchange rate behaviour. They also suggest that exchange rate adjustment is non-linear in economies where fundamentals models work well.

5.
This study uses an artificial neural network model to forecast quarterly accounting earnings for a sample of 296 corporations trading on the New York Stock Exchange. The resulting forecast errors are shown to be significantly larger (smaller) than those generated by the parsimonious Brown-Rozeff and Griffin-Watts (Foster) linear time series models, calling into question the potential usefulness of neural network models in forecasting quarterly accounting earnings. This study confirms the conjecture by Chatfield and by Hill et al. that neural network models are context sensitive. In particular, this study shows that neural network models are not necessarily superior to linear time series models even when the data are financial, seasonal and non-linear.

6.
This research utilises a non-linear Smooth Transition Regression (STR) approach to modelling and forecasting the exchange rate, based on the Taylor rule model of exchange rate determination. The separate literatures on exchange rate models and the Taylor rule have already shown that a non-linear specification can outperform the equivalent linear one. In addition, the Taylor-rule-based exchange rate model used here has been augmented with a wealth effect to reflect the increasing importance of the asset markets in monetary policy. Using STR models, the results offer evidence of non-linearity in the variables used and show that the interest rate differential is the most appropriate transition variable. We conduct the conventional out-of-sample forecasting performance test, which indicates that the non-linear models outperform their linear equivalents as well as the non-linear UIP model and the random walk.

7.
Since the advent of the horseshoe priors for regularisation, global–local shrinkage methods have proved to be a fertile ground for the development of Bayesian methodology in machine learning, specifically for high-dimensional regression and classification problems. They have achieved remarkable success in computation and enjoy strong theoretical support. Most of the existing literature has focused on the linear Gaussian case, for which systematic surveys are available. The purpose of the current article is to demonstrate that horseshoe regularisation is useful far more broadly, by reviewing both methodological and computational developments in complex models that are more relevant to machine learning applications. Specifically, we focus on methodological challenges of horseshoe regularisation in non-linear and non-Gaussian models, multivariate models and deep neural networks. We also outline recent computational developments in horseshoe shrinkage for complex models, along with a list of available software implementations that allows one to venture out beyond the comfort zone of the canonical linear regression problems.
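The shrinkage behaviour this review surveys can be seen in the simplest normal-means setting. The sketch below approximates the horseshoe posterior mean under y ~ N(theta, 1), theta ~ N(0, lambda²), lambda half-Cauchy, with the global scale fixed at 1, by Monte Carlo over the local scales; the sample size and test points are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Half-Cauchy draws for the local scales (global scale tau fixed at 1).
lam = np.abs(rng.standard_cauchy(200_000))

def hs_posterior_mean(y):
    """Monte Carlo posterior mean under y ~ N(theta, 1), theta ~ N(0, lam^2).

    Conditional on lam the posterior mean is (lam^2 / (1 + lam^2)) * y;
    we average it over the half-Cauchy draws, weighted by the marginal
    likelihood N(y; 0, 1 + lam^2) of each draw.
    """
    var = 1.0 + lam ** 2
    weight = np.exp(-0.5 * y ** 2 / var) / np.sqrt(var)
    shrink = lam ** 2 / var
    return float(np.sum(weight * shrink * y) / np.sum(weight))

small, big = hs_posterior_mean(0.5), hs_posterior_mean(6.0)
print(small, big)  # a weak observation is shrunk hard; a strong one barely at all
```

This is the signature global–local trade-off: aggressive shrinkage of noise-level observations together with near-zero shrinkage of clear signals.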

8.
Sparse generalised additive models (GAMs) are an extension of sparse generalised linear models that allow a model's prediction to vary non-linearly with an input variable. This enables the data analyst to build more accurate models, especially when the linearity assumption is known to be a poor approximation of reality. Motivated by reluctant interaction modelling, we propose a multi-stage algorithm, called reluctant generalised additive modelling (RGAM), that can fit sparse GAMs at scale. It is guided by the principle that, if all else is equal, one should prefer a linear feature over a non-linear feature. Unlike existing methods for sparse GAMs, RGAM can be extended easily to binary, count and survival data. We demonstrate the method's effectiveness on real and simulated examples.
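The guiding principle, prefer a linear feature unless a non-linear one earns its keep, can be caricatured in a few lines. This is not the RGAM algorithm itself, just a toy two-stage selection on simulated data, with an invented basis (sine and square terms) and an invented admission threshold.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(0, 1, (n, 3))
# Hypothetical data: x0 enters linearly, x1 non-linearly, x2 is pure noise.
y = X[:, 0] + np.sin(2 * X[:, 1]) + rng.normal(0, 0.3, n)

def ols_fitted(Z, target):
    Z1 = np.column_stack([np.ones(len(target)), Z])
    beta, *_ = np.linalg.lstsq(Z1, target, rcond=None)
    return Z1 @ beta

# Stage 1: the "reluctant" default, a purely linear fit.
resid = y - ols_fitted(X, y)

# Stage 2: admit a non-linear basis for a feature only if it explains a
# non-trivial share of the stage-1 residual (an invented 5% threshold;
# the actual RGAM stages and bases are richer than this).
def resid_r2(basis):
    fit = ols_fitted(basis, resid)
    return 1 - np.sum((resid - fit) ** 2) / np.sum((resid - resid.mean()) ** 2)

scores = [resid_r2(np.column_stack([np.sin(2 * X[:, j]), X[:, j] ** 2]))
          for j in range(3)]
selected = [j for j, s in enumerate(scores) if s > 0.05]
print(scores, selected)
```

Only the genuinely non-linear feature clears the threshold; the linear and the noise features stay linear, which is exactly the reluctance the abstract describes.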

9.
In causal analysis, path models are an appropriate tool for studying relationships between social phenomena. However, they assume linear linkages between variables, and hence they are not always suitable for describing the complexity and richness of relationships in social phenomena. The aim of this work is to propose an exploratory graphical method to evaluate whether the phenomena under analysis are actually characterized by non-linear linkages. In particular, the method is well suited to discovering interactions between the observed variables in path models. The proposed approach, which does not depend on any hypothesis on the error distribution, is based on a series of plots that can be easily interpreted and drawn using standard statistical software. As an additional feature, the plots – which we call joint effect plots – support qualitative interpretation of the non-linear linkages after the path model has been specified. Finally, the proposed method is applied within a case study: non-linearities are explored in a causal model aiming to find the determinants of remittances of a group of Tunisian migrants in Italy.

10.
Noisy rational expectations models, in which agents have dispersed private information and extract information from an endogenous asset price, are widely used in finance. However, these linear partial equilibrium models do not fit well in modern macroeconomics, which is based on non-linear dynamic general equilibrium models. We develop a method for solving a DSGE model with portfolio choice and dispersed private information. We combine and extend existing local approximation methods applied to public information DSGE settings with methods for solving noisy rational expectations models in finance with dispersed private information.

11.
The most popular econometric models in the panel data literature are the class of linear panel data models with unobserved individual- and/or time-specific effects. The consistency of parameter estimators and the validity of their economic interpretations as marginal effects depend crucially on the correct functional form specification of the linear panel data model. In this paper, a new class of residual-based tests is proposed for checking the validity of dynamic panel data models with both large cross-sectional and time series dimensions. The individual and time effects can be fixed or random, and panel data can be balanced or unbalanced. The tests can detect a wide range of model misspecifications in the conditional mean of a dynamic panel data model, including functional form and lag misspecification. They check a large number of lags so that they can capture misspecification at any lag order asymptotically. No common alternative is assumed, thus allowing for heterogeneity in the degrees and directions of functional form misspecification across individuals. Thanks to the use of panel data with large N and T, the proposed nonparametric tests have an asymptotic normal distribution under the null hypothesis without requiring the smoothing parameters to grow with the sample sizes. This suggests better nonparametric asymptotic approximation for panel data than for time series or cross-sectional data, which is confirmed in a simulation study. We apply the new tests to the linear specification of cross-country growth equations and find significant nonlinearities in mean in the OECD countries’ growth equation for annual and quintannual panel data.

12.
This paper extends the links between the non-parametric data envelopment analysis (DEA) models for efficiency analysis, duality theory and multi-criteria decision making (MCDM) models for the linear and non-linear case. By drawing on the properties of a partial Lagrangean relaxation, a correspondence is shown between the CCR, BCC and free disposable hull (FDH) models in DEA and the MCDM model. One of the implications is a characterization that verifies the sufficiency of the weighted scalarizing function, even for the non-convex case FDH. A linearization of FDH is presented along with dual interpretations. Thus, an input/output-oriented model is shown to be equivalent to a maximization of the weighted input/output, subject to production space feasibility. The discussion extends to recent developments: the free replicability hull (FRH), the new elementary replicability hull (ERH) and the non-convex models of Petersen (1990). FRH is shown to be a true mixed integer program, whereas the latter can be characterized as the CCR and BCC models.

13.
We present and estimate models of an asymmetric relationship between CRSP stock index returns and the U.S. unemployment rate. Based on the Akaike Information Criterion, conventional linear time series models are improved by allowing asymmetric responses. Our results show that negative stock returns are quickly followed by sharp increases in unemployment, while more gradual unemployment declines follow positive stock returns. According to our forecasting model, the unemployment rate rises by 1.12 percentage points during the 12 months after a 10 percent stock decline. Because macroeconomic forecasters have been unable to reliably predict downturns, these findings may provide a useful contribution.

14.
There has been considerable and controversial research over the past two decades into how successfully random effects misspecification in mixed models (i.e. assuming normality for the random effects when the true distribution is non-normal) can be diagnosed and what its impacts are on estimation and inference. However, much of this research has focused on fixed effects inference in generalised linear mixed models. In this article, motivated by the increasing number of applications of mixed models where interest is on the variance components, we study the effects of random effects misspecification on random effects inference in linear mixed models, for which there is considerably less literature. Our findings are surprising and contrary to general belief: for point estimation, maximum likelihood estimation of the variance components under misspecification is consistent, although in finite samples both the bias and the mean squared error can be substantial. For inference, we show through theory and simulation that, under misspecification, standard likelihood ratio tests of truly non-zero variance components can suffer from severely inflated type I errors, and confidence intervals for the variance components can exhibit considerable undercoverage. Furthermore, neither of these problems vanishes asymptotically as the number of clusters or the cluster size increases. These results have major implications for random effects inference, especially if the true random effects distribution is heavier-tailed than the normal. Fortunately, simple graphical and goodness-of-fit measures of the random effects predictions appear to have reasonable power at detecting misspecification. We apply linear mixed models to a survey of more than 4,000 high school students within 100 schools and analyse how mathematics achievement scores vary with student attributes and across different schools. The application demonstrates the sensitivity of mixed model inference to the true but unknown random effects distribution.

15.
This paper revises the concept of the "output trap" and proposes the concept of the "consumption trap", proving that the consumption trap moves in the same direction as social welfare; the existence of a consumption trap is therefore a necessary condition for implementing non-linear pricing. Taking the 全球通 (GoTone) and 神州行 (Easyown) billing plans as the overall price schedule facing consumers, the paper defines the marginal price polyline and, on that basis, second-order and third-order non-linear pricing. Building on the consumption trap and the marginal price polyline, it then defines boundary points and boundary lines, and constructs a linear optimal pricing model, an improved second-order non-linear optimal pricing model and an improved third-order non-linear optimal pricing model.
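The marginal price polyline arises because a consumer choosing between two tariffs effectively faces the lower envelope of their bills. The sketch below uses made-up fees and per-minute prices as stand-ins for the 全球通 and 神州行 plans; the kink of the envelope is the boundary point between the two pricing regimes.

```python
# Hypothetical tariffs standing in for the two billing plans:
# plan A charges a monthly fee with a low per-minute price,
# plan B has no fee but a higher per-minute price.
fee_a, p_a = 50.0, 0.25
fee_b, p_b = 0.0, 0.75

def best_bill(minutes):
    """Bill under the lower envelope of the two affine tariffs.

    The kink of this envelope traces out the marginal price polyline:
    the marginal price is p_b below the boundary point and p_a above it.
    """
    return min(fee_a + p_a * minutes, fee_b + p_b * minutes)

# Boundary point between the two pricing regimes (where the bills are equal).
crossover = fee_a / (p_b - p_a)
print(crossover, best_bill(80), best_bill(200))
```

Light users self-select into the no-fee plan and heavy users into the flat-fee plan, so the envelope itself is a non-linear (second-order) price schedule of the kind the paper optimises.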

16.
In this paper, the problem of simultaneous linear estimation of fixed and random effects in the mixed linear model is considered. Necessary and sufficient conditions are given for a linear estimator of a linear function of fixed and random effects in balanced nested and crossed classification models to be admissible.

17.
ON ERROR CORRECTION MODELS: SPECIFICATION, INTERPRETATION, ESTIMATION
Abstract. Error Correction Models (ECMs) have proved a popular organising principle in applied econometrics, despite the lack of consensus as to exactly what constitutes their defining characteristic, and the rather limited role that has been given to economic theory by their proponents. This paper uses a historical survey of the evolution of ECMs to explain the alternative specifications and interpretations and proceeds to examine their implications for estimation. The various approaches are illustrated for wage equations by application to UK labour market data for 1855–1987. We demonstrate that error correction models impose strong and testable non-linear restrictions on dynamic econometric equations, and that they do not obviate the need for modelling the process of expectations formation. With the exception of a few special cases, both the non-linear restrictions and the modelling of expectations have been ignored by those who have treated ECMs as merely reparameterisations of dynamic linear regression models or vector autoregressions.
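The "strong and testable non-linear restrictions" can be made concrete with a small numerical check: an ADL(1,1) regression and its ECM reparameterisation are algebraically identical, with the long-run coefficient entering as a non-linear function of the ADL coefficients. The DGP below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 500
# Invented DGP: an ADL(1,1) in y with a random-walk regressor x.
x = np.cumsum(rng.normal(0, 1, T))
a, b, c0, c1 = 0.0, 0.5, 0.3, 0.1
y = np.zeros(T)
for t in range(1, T):
    y[t] = a + b * y[t - 1] + c0 * x[t] + c1 * x[t - 1] + rng.normal(0, 0.5)

# Unrestricted ADL(1,1) fitted by OLS.
Z = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
(ah, bh, c0h, c1h), *_ = np.linalg.lstsq(Z, y[1:], rcond=None)

# ECM reparameterisation:
#   dy_t = ah + c0h * dx_t + (bh - 1) * (y_{t-1} - theta * x_{t-1}),
# where theta = (c0h + c1h) / (1 - bh) is the long-run coefficient,
# a non-linear function of the ADL coefficients.
theta = (c0h + c1h) / (1 - bh)
dy_hat_ecm = ah + c0h * np.diff(x) + (bh - 1) * (y[:-1] - theta * x[:-1])
dy_hat_adl = Z @ np.array([ah, bh, c0h, c1h]) - y[:-1]
gap = np.max(np.abs(dy_hat_adl - dy_hat_ecm))
print(gap, theta)  # gap is numerically zero; theta is near the true 0.8
```

Imposing a restriction on theta (for example a unit long-run elasticity) is what turns the reparameterisation into a testable non-linear cross-coefficient constraint.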

18.
In this paper we investigate the multi-period forecast performance of a number of empirical self-exciting threshold autoregressive (SETAR) models that have been proposed in the literature for modelling exchange rates and GNP, among other variables. We take each of the empirical SETAR models in turn as the DGP to ensure that the ‘non-linearity’ characterizes the future, and compare the forecast performance of SETAR and linear autoregressive models on a number of quantitative and qualitative criteria. Our results indicate that non-linear models have an edge in certain states of nature but not in others, and that this can be highlighted by evaluating forecasts conditional upon the regime. Copyright © 1999 John Wiley & Sons, Ltd.
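A minimal version of the exercise: take a SETAR(2; 1, 1) process as the DGP, forecast one step ahead with the true threshold rule and with a fitted linear AR(1), and compare mean squared errors both unconditionally and conditional on the regime. All coefficients are invented, and the SETAR forecasts use the true DGP parameters rather than estimates, to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 3000
y = np.zeros(T)
# Invented SETAR(2; 1, 1) DGP: the AR coefficient switches at threshold 0.
for t in range(1, T):
    phi = 0.8 if y[t - 1] <= 0.0 else -0.4
    y[t] = phi * y[t - 1] + rng.normal(0, 1)

prev, curr = y[1:-1], y[2:]

# One-step SETAR forecasts using the true DGP coefficients.
setar_fc = np.where(prev <= 0.0, 0.8 * prev, -0.4 * prev)

# Linear AR(1) benchmark fitted by OLS through the origin.
phi_hat = (prev @ curr) / (prev @ prev)
ar_fc = phi_hat * prev

def mse(err):
    return float(np.mean(err ** 2))

mse_setar, mse_ar = mse(curr - setar_fc), mse(curr - ar_fc)
# Conditioning on the regime highlights where the non-linear edge comes from.
low = prev <= 0.0
mse_setar_low = mse(curr[low] - setar_fc[low])
mse_ar_low = mse(curr[low] - ar_fc[low])
print(mse_setar, mse_ar, mse_setar_low, mse_ar_low)
```

The unconditional gap is modest, but the regime-conditional comparison makes the SETAR advantage visible, which is the evaluation point the abstract emphasises.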

19.
In this paper, we assess whether using non-linear dimension reduction techniques pays off for forecasting inflation in real time. Several recent methods from the machine learning literature are adopted to map a large-dimensional dataset into a lower-dimensional set of latent factors. We model the relationship between inflation and the latent factors using constant and time-varying parameter (TVP) regressions with shrinkage priors. Our models are then used to forecast monthly US inflation in real time. The results suggest that sophisticated dimension reduction methods yield inflation forecasts that are highly competitive with linear approaches based on principal components. Among the techniques considered, the Autoencoder and squared principal components yield factors that have high predictive power for one-month- and one-quarter-ahead inflation. Zooming into model performance over time reveals that controlling for non-linear relations in the data is of particular importance during recessionary episodes of the business cycle or the current COVID-19 pandemic.
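The "squared principal components" idea is easy to demonstrate: when the target loads on the square of a latent factor, a linear PC regression finds almost nothing, while adding the squared PC recovers the signal. The factor model, loadings and noise levels below are simulated, not the paper's real-time inflation data.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 400, 40
# Simulated factor model: the panel X loads linearly on a latent factor f,
# but the target loads on f squared (a stylised non-linear relation).
f = rng.normal(0, 1, T)
X = np.outer(f, rng.normal(1.0, 0.2, N)) + rng.normal(0, 0.5, (T, N))
y = 0.5 * f ** 2 + rng.normal(0, 0.3, T)

# First principal component of the (centred) panel as the factor estimate.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc = Xc @ Vt[0]

def in_sample_r2(Z):
    Z1 = np.column_stack([np.ones(T), Z])
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    resid = y - Z1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_linear = in_sample_r2(pc)                               # plain PC regression
r2_squared = in_sample_r2(np.column_stack([pc, pc ** 2]))  # add the squared PC
print(round(r2_linear, 2), round(r2_squared, 2))
```

An autoencoder plays a similar role in the paper by learning such non-linear factor transformations directly rather than imposing the square.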

20.
Structural equation models with mean structure and non-linear constraints are the most frequent choice for estimating interaction effects when measurement errors are present. This article proposes eliminating the mean structure and all the constraints but one, which leads to a more easily handled model that is more robust to non-normality and more general, as it can accommodate endogenous interactions and thus indirect effects. Our approach is compared to other approaches found in the literature with a Monte Carlo simulation and is found to be equally efficient under normality and less biased under non-normality. An empirical illustration is included.

