Similar Literature
20 similar documents found (search time: 78 ms)
1.
In this paper we present an exact maximum likelihood treatment for the estimation of a Stochastic Volatility in Mean (SVM) model based on Monte Carlo simulation methods. The SVM model incorporates the unobserved volatility as an explanatory variable in the mean equation. The same extension has been developed elsewhere for Autoregressive Conditional Heteroscedastic (ARCH) models, where it is known as the ARCH in Mean (ARCH-M) model. The estimation of ARCH models is relatively easy compared with that of the Stochastic Volatility (SV) model, but efficient Monte Carlo simulation methods have been developed to overcome the estimation difficulties of SV models. The details of the modifications required for estimating the volatility-in-mean effect are presented in this paper, together with a Monte Carlo study investigating the finite sample properties of the SVM estimators. Given these developments in estimation methods, we regard SV and SVM models as practical alternatives to their ARCH counterparts, and it is therefore of interest to study and compare the two classes of volatility models. We present an empirical study of the intertemporal relationship between stock index returns and their volatility for the United Kingdom, the United States and Japan. This phenomenon has been discussed in the financial economics literature but has proved hard to find empirically. We provide evidence of a negative but weak relationship between returns and contemporaneous volatility, which is indirect evidence of a positive relation between the expected components of the return and the volatility process. Copyright © 2002 John Wiley & Sons, Ltd.
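For orientation, a standard SVM specification of the kind described is sketched below; the notation and parameterization are ours, not taken from the abstract.

```latex
% Stochastic Volatility in Mean (SVM): sketch of the usual specification.
% d captures the volatility-in-mean effect; h_t = log(sigma_t^2) is latent.
\begin{align*}
y_t &= \mu + d\,\sigma_t^2 + \sigma_t \varepsilon_t, & \varepsilon_t &\sim \mathcal{N}(0,1),\\
h_{t+1} &= \gamma + \phi\, h_t + \sigma_\eta \eta_t, & \eta_t &\sim \mathcal{N}(0,1), \qquad h_t = \log \sigma_t^2 .
\end{align*}
```

Because the volatility path is latent, the likelihood is a T-dimensional integral over (h_1, ..., h_T), which is why simulation methods such as importance sampling are needed for exact maximum likelihood.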

2.
Dynamic factor models have been the main "big data" tool used by empirical macroeconomists during the last 30 years. In this context, Kalman filter and smoothing (KFS) procedures can cope with missing data, mixed-frequency data, time-varying parameters, non-linearities, non-stationarity, and many other characteristics often observed in real systems of economic variables. The main contribution of this paper is to provide a comprehensive updated summary of the literature on latent common factors extracted using KFS procedures in the context of dynamic factor models, pointing out their potential limitations. Signal extraction and parameter estimation issues are analyzed separately. Identification issues are also tackled in both stationary and non-stationary models. Finally, empirical applications are surveyed in both cases. This survey is relevant to researchers and practitioners interested not only in the theory of KFS procedures for factor extraction in dynamic factor models but also in their empirical application in macroeconomics and finance.
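As a hedged illustration of KFS-based factor extraction, the sketch below runs a Kalman filter for a one-factor model x_t = Λ f_t + e_t, f_t = φ f_{t-1} + η_t, with parameters treated as known; a real application would estimate Λ, φ and the variances (e.g., by maximum likelihood or EM). All names are illustrative.

```python
import numpy as np

def kalman_filter_factor(X, Lam, phi, sig_e, sig_eta):
    """Filtered estimates of a single latent factor f_t in
    x_t = Lam * f_t + e_t,   f_t = phi * f_{t-1} + eta_t.
    NaNs in X are treated as missing observations (skipped in the update).
    Parameters are taken as known; in practice they are estimated by ML/EM."""
    T, N = X.shape
    f = 0.0
    P = sig_eta**2 / (1.0 - phi**2)       # stationary initialization (|phi| < 1)
    R = np.eye(N) * sig_e**2              # idiosyncratic (measurement) covariance
    out = np.empty(T)
    for t in range(T):
        f_pred = phi * f                  # prediction step
        P_pred = phi**2 * P + sig_eta**2
        obs = ~np.isnan(X[t])             # which series are observed at t
        if obs.any():
            Z = Lam[obs]                  # loadings of the observed series
            v = X[t, obs] - Z * f_pred    # innovation
            F = P_pred * np.outer(Z, Z) + R[np.ix_(obs, obs)]
            K = P_pred * np.linalg.solve(F, Z)   # Kalman gain (F symmetric)
            f = f_pred + K @ v            # update step
            P = P_pred * (1.0 - K @ Z)
        else:                             # nothing observed: carry the prediction
            f, P = f_pred, P_pred
        out[t] = f
    return out
```

Smoothed estimates would add a backward (Rauch-Tung-Striebel) pass; the point here is only that missing or mixed-frequency observations are handled naturally by skipping them in the update.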

3.
Nonlinear regression models have been widely used in practice for a variety of time series and cross-section datasets. For analyzing univariate and multivariate time series data in particular, smooth transition regression (STR) models have been shown to be very useful for representing and capturing asymmetric behavior. Most applications of STR models have been to univariate processes, under a variety of assumptions: stationary or cointegrated processes; uncorrelated, homoskedastic or conditionally heteroskedastic errors; and weakly exogenous regressors. Under the assumption of exogeneity, the standard method of estimation is nonlinear least squares. The primary purpose of this paper is to relax the assumption of weakly exogenous regressors and to discuss moment-based methods for estimating STR models. The paper analyzes the properties of the STR model with endogenous variables by providing a diagnostic test of linearity of the underlying process under endogeneity, developing an estimation procedure and a misspecification test for the STR model, presenting Monte Carlo simulations that show the usefulness of the model and estimation method, and providing an empirical application to inflation rate targeting in Brazil. We show that STR models with endogenous variables can be specified and estimated by a straightforward application of existing results in the literature.
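For concreteness, a common logistic STR (LSTR) specification, in illustrative notation that is not necessarily the paper's, is:

```latex
% Logistic smooth transition regression (LSTR): a sketch.
y_t = \beta_1' x_t \bigl(1 - G(s_t;\gamma,c)\bigr) + \beta_2' x_t\, G(s_t;\gamma,c) + u_t,
\qquad
G(s_t;\gamma,c) = \frac{1}{1+\exp\{-\gamma (s_t - c)\}} .
```

Here γ controls the smoothness of the transition and c its location; endogeneity of x_t or s_t is what motivates the moment-based (GMM-type) estimation discussed in the abstract.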

4.
Necmi K. Avkiran. Socio-Economic Planning Sciences, 2006, 40(4): 275-296
We develop foreign bank technical, cost and profit efficiency models for application with data envelopment analysis (DEA). Key motivations for the paper are (a) the often-observed practice of choosing inputs and outputs through a selection process that is poorly explained and whose linkages to theory are unclear, and (b) foreign bank productivity analysis, which has been neglected in the DEA banking literature. The main aim is to demonstrate a process, grounded in finance and banking theories, for developing bank efficiency models that can bring comparability and direction to empirical productivity studies. We expect this paper to foster empirical bank productivity studies.
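As background, the input-oriented CCR technical efficiency score of a bank (indexed 0) solves a linear program of the following form; this is a textbook sketch, not the paper's specific model:

```latex
% Input-oriented CCR DEA: technical efficiency of unit 0 among units j = 1..J.
% x_j, y_j are input and output vectors; inequalities hold elementwise.
\min_{\theta,\,\lambda}\ \theta
\quad \text{s.t.} \quad
\sum_{j=1}^{J} \lambda_j x_j \le \theta x_0, \qquad
\sum_{j=1}^{J} \lambda_j y_j \ge y_0, \qquad
\lambda_j \ge 0 .
```

The paper's contribution concerns how to choose those inputs and outputs from banking and finance theory, not the LP itself.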

5.
Policy makers must base their decisions on preliminary and partially revised data of varying reliability. Realistic modeling of data revisions is required to guide decision makers in their assessment of current and future conditions. This paper provides a new framework with which to model data revisions.

Recent empirical work suggests that measurement errors typically have much more complex dynamics than existing models of data revisions allow. This paper describes a state-space model that allows for richer dynamics in these measurement errors, including the noise, news and spillover effects documented in this literature. We also show how to relax the common assumption that "true" values are observed after a few revisions.

The result is a unified and flexible framework that allows for more realistic data revision properties, and allows the use of standard methods for optimal real-time estimation of trends and cycles. We illustrate the application of this framework with real-time data on US real output growth.
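A minimal version of such a measurement-error model, in our notation, writes the v-th published vintage of the period-t value as:

```latex
% One vintage equation per release v of period-t data (illustrative sketch).
y_t^{(v)} = y_t + \varepsilon_t^{(v)}, \qquad v = 1, 2, \dots
```

A "noise" revision has ε uncorrelated with the truth y_t, while a "news" revision has ε uncorrelated with the published figure y_t^{(v)}; stacking the truth and the errors in a state vector lets the Kalman filter deliver real-time estimates, and richer error dynamics (serial correlation, spillovers across vintages) simply enlarge the state.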

6.
Some Recent Developments in Futures Hedging
The use of futures contracts as a hedging instrument has been the focus of much research. At the theoretical level, an optimal hedge strategy is traditionally based on the expected-utility maximization paradigm. A simplification of this paradigm leads to the minimum-variance criterion. Although this paradigm is quite well accepted, alternative approaches have been sought. At the empirical level, research on futures hedging has benefited from recent developments in the econometrics literature. Much research has been done on improving the estimation of the optimal hedge ratio, and as more becomes known about the statistical properties of financial time series, more sophisticated estimation methods are proposed. In this survey we review some recent developments in futures hedging, delineating the theoretical underpinnings of the various methods and discussing their econometric implementation.
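The minimum-variance criterion mentioned here gives the classic optimal hedge ratio:

```latex
h^{*} = \frac{\operatorname{Cov}(r_s, r_f)}{\operatorname{Var}(r_f)} ,
```

where r_s and r_f are spot and futures returns. The econometric developments surveyed (e.g., GARCH-type models) mainly concern making the covariance and variance time-varying, so that the conditional hedge ratio h*_t is re-estimated each period.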

7.
This paper proposes new approximate long-memory VaR models that incorporate intra-day price ranges. These models use the lagged intra-day range, with range components calculated over different time horizons. We also investigate the impact of the market overnight return on the VaR forecasts, which has not previously been considered together with the range in VaR estimation. Model estimation is performed using linear quantile regression. An empirical analysis is conducted on 18 market indices. In spite of the simplicity of the proposed methods, the empirical results show that they successfully capture the main features of financial returns and are competitive with established benchmark methods. The results also show that several of the proposed range-based VaR models, utilizing both the intra-day range and the overnight return, are able to outperform GARCH-based methods and CAViaR models.
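A hedged sketch of the linear-quantile-regression step: regressing the return's α-quantile on lagged range components over several horizons plus the overnight return, using statsmodels. The column names, horizons and the 5% level are assumptions for illustration, not the paper's exact design.

```python
import pandas as pd
import statsmodels.api as sm

def range_var_forecast(df, alpha=0.05):
    """In-sample VaR(alpha) line from lagged intra-day ranges and the
    overnight return, via linear quantile regression. Expects columns
    'ret', 'range', 'overnight'; all names are illustrative assumptions."""
    feats = pd.DataFrame({
        "range_1":   df["range"].shift(1),                    # previous day
        "range_5":   df["range"].rolling(5).mean().shift(1),  # weekly component
        "range_22":  df["range"].rolling(22).mean().shift(1), # monthly component
        "overnight": df["overnight"],                         # close-to-open return
    })
    data = pd.concat([df["ret"], feats], axis=1).dropna()
    y = data["ret"]
    X = sm.add_constant(data.drop(columns="ret"))
    res = sm.QuantReg(y, X).fit(q=alpha)   # alpha-quantile regression
    return res.params, res.predict(X)      # coefficients and fitted VaR line
```

The daily/weekly/monthly lagged averages mimic the "range components over different horizons" idea; genuine out-of-sample forecasts would refit on a rolling window.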

8.
This paper develops a new method for the analysis of stochastic volatility (SV) models. Since volatility is a latent variable in SV models, it is difficult to evaluate the exact likelihood. In this paper, a non-linear filter which yields the exact likelihood of SV models is employed. Solving the series of integrals in this filter by piecewise-linear approximations with randomly chosen nodes produces the likelihood, which is maximized to obtain estimates of the SV parameters. A smoothing algorithm for volatility estimation is also constructed. Monte Carlo experiments show that the method performs well with respect to both parameter estimates and volatility estimates. We illustrate the method by analysing daily stock returns on the Tokyo Stock Exchange. Since the method can be applied to more general models, the SV model is extended to allow for several characteristics of daily stock returns, and this more general model is also estimated. Copyright © 1999 John Wiley & Sons, Ltd.
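The "series of integrals" referred to is the standard prediction-update filtering recursion for the latent log-volatility h_t, written here in generic notation of our own:

```latex
% Prediction and update steps of the nonlinear filter (generic form):
f(h_t \mid y_{1:t-1}) = \int f(h_t \mid h_{t-1})\, f(h_{t-1} \mid y_{1:t-1})\, dh_{t-1},
\qquad
f(h_t \mid y_{1:t}) \propto f(y_t \mid h_t)\, f(h_t \mid y_{1:t-1}) .
```

The likelihood is the product of the one-step predictive densities, each an integral of f(y_t | h_t) against the prediction density; the paper's device is to evaluate these integrals by piecewise-linear approximation on randomly chosen nodes.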

9.
Recently, there has been considerable work on stochastic time-varying coefficient models as vehicles for modelling structural change in the macroeconomy, with a focus on the estimation of the unobserved paths of random coefficient processes. The dominant estimation methods in this context are based on various filters, such as the Kalman filter, that are applicable when the models are cast in state space representations. This paper introduces a new class of autoregressive bounded processes that decompose a time series into a persistent random attractor, a time-varying autoregressive component, and martingale difference errors. The paper rigorously examines alternative kernel-based, nonparametric estimation approaches for such models and derives their basic properties. These estimators have long been studied in the context of deterministic structural change, but their use in the presence of stochastic time variation is novel. The proposed inference methods have desirable properties such as consistency and asymptotic normality and allow a tractable studentization. In extensive Monte Carlo and empirical studies, we find that the methods exhibit very good small sample properties and can shed light on important empirical issues such as the evolution of inflation persistence and the purchasing power parity (PPP) hypothesis.
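A representative kernel-based estimator of a time-varying coefficient path β_t in y_t = x_t'β_t + u_t (a generic sketch, not necessarily the paper's exact statistic) is the local-constant least-squares estimator:

```latex
\hat{\beta}_t
= \left( \sum_{s=1}^{T} K\!\left(\frac{s-t}{Tb}\right) x_s x_s' \right)^{-1}
  \sum_{s=1}^{T} K\!\left(\frac{s-t}{Tb}\right) x_s y_s ,
```

where K is a kernel and b a bandwidth. The paper's contribution is to show that such estimators remain valid, with a tractable studentization, when β_t is stochastic rather than deterministic.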

10.
This paper investigates the properties of a linearized stochastic volatility (SV) model, originally from Harvey et al. (Rev Econ Stud 61:247-264, 1994), under an extended flexible specification (discrete mixtures of normals). General closed-form expressions for the moment conditions are derived. We show that the proposed model captures various kinds of tail behavior in a more flexible way than the Gaussian SV model, and that it can accommodate a certain correlation structure between the two innovations. Rather than using likelihood-based estimation methods via MCMC, we use an alternative procedure based on the characteristic function (CF). We derive analytical expressions for the joint CF and present our estimator as the minimizer of the weighted integrated mean-squared distance between the joint CF and its empirical counterpart (ECF). We conclude with an empirical application of the model to three stock indices: the S&P 500, the Dow Jones Industrial Average and the Nasdaq Composite. The proposed model captures the dynamics of the absolute returns well and provides consistent, supportive evidence for the Taylor effect and the Machina effect.
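In illustrative notation, the linearization squares and logs the return equation, and the CF-based estimator minimizes an integrated weighted distance between model and empirical characteristic functions:

```latex
% Linearization of y_t = sigma_t * eps_t with h_t = log(sigma_t^2):
\log y_t^2 = h_t + \log \varepsilon_t^2 .
% CF-based estimation: c_theta is the model joint CF, c_n-hat its empirical
% counterpart, w a weighting function:
\hat{\theta} = \arg\min_{\theta} \int \bigl| c_{\theta}(r) - \hat{c}_n(r) \bigr|^2 \, w(r)\, dr .
```

Mixtures of normals for ε make log ε² flexible enough to match heavy tails while keeping the CF analytically tractable.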

11.
Recently, single-equation estimation by the generalized method of moments (GMM) has become popular in the monetary economics literature for estimating forward-looking models with rational expectations. We discuss a method for analysing the empirical identification of such models that exploits their dynamic structure and the assumption of rational expectations. This allows us to judge the reliability of the resulting GMM estimation and inference and reveals the potential sources of weak identification. With reference to the New Keynesian Phillips curve of Galí and Gertler [Journal of Monetary Economics (1999) Vol. 44, 195] and the forward-looking Taylor rules of Clarida, Galí and Gertler [Quarterly Journal of Economics (2000) Vol. 115, 147], we demonstrate that the usual 'weak instruments' problem can arise naturally when the predictable variation in inflation is small relative to unpredictable future shocks (news). Hence, we conclude that these models are less reliably estimated over periods when inflation has been under effective policy control.
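For reference, the hybrid New Keynesian Phillips curve of Galí and Gertler and the associated GMM moment condition take the form (standard notation, ours):

```latex
\pi_t = \gamma_f\, E_t[\pi_{t+1}] + \gamma_b\, \pi_{t-1} + \lambda\, mc_t + \varepsilon_t,
\qquad
E\bigl[(\pi_t - \gamma_f \pi_{t+1} - \gamma_b \pi_{t-1} - \lambda\, mc_t)\, z_t\bigr] = 0 ,
```

where z_t is a vector of instruments dated t or earlier. The weak-instruments point is that if expected inflation barely varies beyond unforecastable news, the instruments carry little identifying information about γ_f.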

12.
This paper provides consistent information criteria for the selection of forecasting models that use a subset of both the idiosyncratic and common factor components of a big dataset. This hybrid model approach has been explored by recent empirical studies to relax the strictness of pure factor-augmented model approximations, but no formal model selection procedures have been developed for it. The main difference from previous factor-augmented model selection procedures is that we must account for estimation error in the idiosyncratic component as well as in the factors. Our main contribution is to establish the conditions required for selection consistency of a class of information criteria that reflect this additional source of estimation error. We show that existing factor-augmented model selection criteria are inconsistent in circumstances where N is of larger order than , where N and T are the cross-section and time series dimensions of the dataset respectively, and that the standard Bayesian information criterion is inconsistent regardless of the relationship between N and T. We therefore propose a new set of information criteria that guarantee selection consistency in the presence of estimated idiosyncratic components. The properties of these new criteria are explored through a Monte Carlo simulation study. The paper concludes with an empirical application to long-horizon exchange rate forecasting, using a recently proposed model with country-specific idiosyncratic components from a panel of global exchange rates.

13.
14.
We compare four different estimation methods for the coefficients of a linear structural equation with instrumental variables. As the classical methods we consider the limited information maximum likelihood (LIML) estimator and the two-stage least squares (TSLS) estimator, and as the semi-parametric estimation methods we consider the maximum empirical likelihood (MEL) estimator and the generalized method of moments (GMM) (or estimating equation) estimator. Tables and figures of the distribution functions of the four estimators are given for enough values of the parameters to cover most linear models of interest, including some heteroscedastic and nonlinear cases. We find that the LIML estimator performs well in terms of bounded loss functions and probabilities when the number of instruments is large, that is, in micro-econometric models with "many instruments" in the terminology of the recent econometric literature.
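As a reminder of the two classical estimators compared, both belong to the k-class family (standard formulas; notation ours):

```latex
% k-class estimators for y = X*beta + u with instrument matrix Z:
\hat{\beta}(k) = \bigl(X'(I - k\, M_Z) X\bigr)^{-1} X'(I - k\, M_Z)\, y,
\qquad M_Z = I - Z(Z'Z)^{-1}Z' ,
```

where k = 0 gives OLS, k = 1 gives TSLS, and k equal to the smallest root of the LIML characteristic-determinant equation gives LIML. The many-instruments point is that these choices of k behave very differently as the number of columns of Z grows.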

15.
Technical efficiency analysis is a fundamental tool for measuring the performance of production activity. Recently, increasing interest in the state-contingent approach has emerged in the literature, although this interest has not yet been matched by an increase in empirical applications. This is largely because empirical models with state-contingent production frontiers are usually ill-posed. In this work, we discuss the role of the generalized cross-entropy estimator within the state-contingent production framework. To the best of the authors' knowledge, the example provided in this work is the first real-world empirical application of technical efficiency analysis under the state-contingent approach using the generalized cross-entropy estimator.

16.
In the knowledge society, universities have assumed new missions and relations in order to contribute to economic and social development while preserving their own sustainability. This article explores the scientific literature on innovation and entrepreneurship in the academic setting, describing how the field is organized, its main terms and definitions, theoretical frameworks, and empirical models, in order to direct future research. A systematic literature review was conducted, in which articles indexed in the Web of Science were first submitted to a bibliometric analysis; the content of the set of articles best fitting the objectives of the study was then analyzed. The bibliometric analysis shows a growing literature, with publications spanning more than 40 years. There are studies from many disciplines, with those in business and economics prevailing, mainly related to management and originating from the USA and Europe. The content analysis shows a fragmented literature, with definitions that neither establish a clear relationship between innovation and entrepreneurship nor use these terms within universities in ways consistent with their traditional definitions. Both the theoretical frameworks and the empirical models are very heterogeneous, but four groups of studies were identified based on their theoretical frameworks, and likewise based on their empirical models. With only a few exceptions, empirical models do not share many components and variables, and there are no clear boundaries between the different models. Despite the growing literature, the field remains fragmented and undertheorized, requiring more systematic and holistic studies that consider both the economic and the social aspects of innovation and entrepreneurship within universities.

17.
Vector autoregressive (VAR) models have become popular in the marketing literature for analyzing the behavior of competitive marketing systems. One drawback of these models is that the number of parameters can become very large, potentially leading to estimation problems. Pooling data for multiple cross-sectional units (stores) can partly alleviate these problems. An important issue in such models is how heterogeneity among cross-sectional units is accounted for. We investigate the performance of several pooling approaches that accommodate different levels of cross-sectional heterogeneity in a simulation study and in an empirical application. Our results show that the random-coefficients modeling approach is a good overall choice when the estimated VAR model is used only for out-of-sample forecasting. When the estimated model is used to compute impulse response functions, one should instead select a modeling approach that matches the level of heterogeneity in the data.
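The pooling approaches differ in how the store-level VAR coefficients are related. In the random-coefficients approach (a sketch in our notation), each cross-sectional unit i follows:

```latex
y_{i,t} = c_i + \Phi_i\, y_{i,t-1} + \varepsilon_{i,t},
\qquad
\operatorname{vec}(\Phi_i) \sim \mathcal{N}\bigl(\operatorname{vec}(\bar{\Phi}),\, \Sigma_{\Phi}\bigr) .
```

Full pooling sets Φ_i = Φ̄ for all stores, while fully unpooled estimation leaves each Φ_i free; the random-coefficients model sits between these extremes, shrinking store-level estimates toward the common mean.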

18.
19.
There has been rapid development of theoretical models to characterise and measure preferences for environmental alternatives, but this development does not seem to have been matched by the development of empirical methodologies to implement these models. The objective of this paper is to develop, apply, and test such an empirical method. The method consists of deriving indifference maps to characterize choices between recreational opportunities; demand curves are then derived from the maps. The policy context of the study required the estimation of consumers' surplus values for one of the recreation activities. These values were read off the demand curves and validated in a partial prediction test.

20.
This paper discusses the estimation of a class of nonlinear state space models, including nonlinear panel data models with autoregressive error components. A health economics example illustrates the usefulness of such models. For the approximation of the likelihood function, nonlinear filtering algorithms developed in the time-series literature are considered. Because of the relatively simple structure of these models, a straightforward algorithm based on sequential Gaussian quadrature is suggested. It performs very well both in the empirical application and in a Monte Carlo study for ordered logit and binary probit models with an AR(1) error component. Copyright © 2008 John Wiley & Sons, Ltd.
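A hedged sketch of the quadrature idea for a simpler special case: a binary probit with a time-constant normal random effect, whose likelihood contribution per unit is a one-dimensional integral approximated by Gauss-Hermite nodes. The paper's algorithm applies the quadrature sequentially to handle an AR(1) component; this static version only illustrates the building block, and all names are ours.

```python
import numpy as np
from scipy.stats import norm

def probit_re_loglik(beta, sigma, y, X, n_nodes=12):
    """Log-likelihood of a binary probit with one normal random effect a_i,
    integrated out by Gauss-Hermite quadrature. y: (n, T) array in {0,1};
    X: (n, T, k) regressors. Illustrative sketch only."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    a = np.sqrt(2.0) * sigma * nodes      # change of variables for N(0, sigma^2)
    w = weights / np.sqrt(np.pi)          # quadrature weights after the change
    ll = 0.0
    for yi, Xi in zip(y, X):
        xb = Xi @ beta                    # linear index, shape (T,)
        s = 2.0 * yi - 1.0                # map {0,1} -> {-1,+1}
        # P(y_it | a_m) at each node m, then weight-average over nodes
        probs = norm.cdf(s[:, None] * (xb[:, None] + a[None, :]))  # (T, M)
        ll += np.log(w @ np.prod(probs, axis=0))
    return ll
```

With an AR(1) error component the integral becomes T-dimensional, which is exactly why the paper evaluates it sequentially, one period at a time, inside a filtering recursion rather than all at once.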
