Similar Documents
20 similar documents found.
1.
2.
This paper treats estimation in a new class of nonlinear threshold autoregressive models with both a stationary and a unit-root regime. The existing literature on nonstationary threshold models has largely focused on models where the nonstationarity can be removed by differencing and/or where the threshold variable is stationary. Neither holds for the process we consider, and nonstandard estimation problems result.
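Editor's illustration, not the authors' estimator: the sketch below simulates a two-regime threshold autoregression whose threshold variable is the lagged level of the series itself, so differencing does not remove the nonstationarity; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tar(T, rho=0.5, tau=0.0, sigma=1.0):
    """Two-regime TAR: stationary AR(1) below the threshold tau,
    a unit root above it (hypothetical parameter values)."""
    y = np.zeros(T)
    eps = rng.normal(0.0, sigma, size=T)
    for t in range(1, T):
        if y[t - 1] <= tau:
            y[t] = rho * y[t - 1] + eps[t]   # stationary regime
        else:
            y[t] = y[t - 1] + eps[t]         # unit-root regime
    return y

y = simulate_tar(500)
# naive least squares for rho using only stationary-regime observations
lag, cur = y[:-1], y[1:]
mask = lag <= 0.0
rho_hat = (lag[mask] @ cur[mask]) / (lag[mask] @ lag[mask])
```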

3.
This paper proposes a class of locally stationary diffusion processes. The model has a time-varying but locally linear drift and a volatility coefficient that is allowed to vary over both time and space. The model is semiparametric: these functions are left unknown, while the innovation process is parametrically specified, indeed completely known. We propose estimators of all the unknown quantities based on long-span data. Our estimation method exploits the property of local stationarity. We establish asymptotic theory for the proposed estimators as the time span increases, so we do not rely on infill asymptotics. We apply the method to interest rate data to illustrate the validity of our model, and we present a simulation study assessing the finite-sample performance of the proposed estimators.
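For intuition, a minimal Euler-scheme simulation of one such process, with hypothetical coefficient functions drifting slowly in rescaled time u = t/n; this is a caricature of the model class, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_locally_stationary_diffusion(n=5000, dt=1 / 252):
    """Euler scheme for dX = kappa(u)(alpha(u) - X) dt + sigma(u, X) dW,
    where u = t/n is rescaled time (all coefficient functions hypothetical)."""
    x = np.empty(n)
    x[0] = 0.05
    for i in range(1, n):
        u = i / n
        kappa = 0.5 + 0.3 * u                        # slowly varying mean reversion
        alpha = 0.04 + 0.02 * np.sin(2 * np.pi * u)  # slowly varying level
        sigma = (0.03 + 0.02 * u) * np.sqrt(max(x[i - 1], 1e-8))  # varies in time and space
        x[i] = x[i - 1] + kappa * (alpha - x[i - 1]) * dt \
               + sigma * np.sqrt(dt) * rng.normal()
    return x

x = simulate_locally_stationary_diffusion()
```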

4.
In the present paper, we show how a consistent estimator can be derived for the asymptotic covariance matrix of stationary 0–1-valued vector fields in R^d, whose supports are jointly stationary random closed sets. As an example, which is of particular interest for statistical applications, we consider jointly stationary random closed sets associated with the Boolean model in R^d such that the components indicate the frequency of coverage by the single grains of the Boolean model. For this model, a representation formula for the entries of the covariance matrix is obtained.

5.
Motivated by the need for a positive-semidefinite estimator of multivariate realized covariance matrices, we model noisy and asynchronous ultra-high-frequency asset prices in a state-space framework with missing data. We then estimate the covariance matrix of the latent states through a Kalman smoother and expectation maximization (KEM) algorithm. Iterating between the two EM steps, we obtain a covariance matrix estimate which is robust to both asynchronicity and microstructure noise, and positive-semidefinite by construction. We show the performance of the KEM estimator using extensive Monte Carlo simulations that mimic the liquidity and market microstructure characteristics of the S&P 500 universe, as well as in a high-dimensional application on US stocks. KEM provides very accurate covariance matrix estimates and significantly outperforms alternative approaches recently introduced in the literature.
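A heavily simplified, univariate sketch of the E-step/M-step loop, assuming a random-walk-plus-noise model with NaNs marking missing observations. The M-step below ignores the lag-one smoothed cross-covariance that a full EM update would include, and none of this is the paper's multivariate implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def kalman_smooth(y, q, r):
    """Kalman filter + RTS smoother for y_t = x_t + eps_t, x_t = x_{t-1} + eta_t,
    with NaNs marking missing (asynchronous) observations."""
    n = len(y)
    xf, pf = np.zeros(n), np.zeros(n)    # filtered mean / variance
    xp, pp = np.zeros(n), np.zeros(n)    # predicted mean / variance
    x, p = 0.0, 1e4                      # vague initialization
    for t in range(n):
        xp[t], pp[t] = x, p + q
        if np.isnan(y[t]):
            x, p = xp[t], pp[t]          # skip the update when missing
        else:
            k = pp[t] / (pp[t] + r)
            x = xp[t] + k * (y[t] - xp[t])
            p = (1.0 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs, ps = xf.copy(), pf.copy()
    for t in range(n - 2, -1, -1):       # backward (RTS) pass
        j = pf[t] / pp[t + 1]
        xs[t] = xf[t] + j * (xs[t + 1] - xp[t + 1])
        ps[t] = pf[t] + j**2 * (ps[t + 1] - pp[t + 1])
    return xs, ps

def kem(y, q=1.0, r=1.0, iters=50):
    """EM loop: E-step smooths the latent price; M-step re-estimates the
    variances (the q update ignores the lag-one smoothed cross-covariance,
    so it is only approximate)."""
    for _ in range(iters):
        xs, ps = kalman_smooth(y, q, r)
        q = np.mean(np.diff(xs) ** 2 + ps[1:] + ps[:-1])
        obs = ~np.isnan(y)
        r = np.mean((y[obs] - xs[obs]) ** 2 + ps[obs])
    return q, r

# usage: a random-walk "price" observed with noise, 30% of ticks missing
x_true = np.cumsum(rng.normal(size=500))
y = x_true + rng.normal(scale=0.5, size=500)
y[rng.random(500) < 0.3] = np.nan
q_hat, r_hat = kem(y)
```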

6.
This paper surveys the conditions under which a continuous preference ordering can be represented by a utility function. We start with a historical perspective on the notions of utility and preference, continue by defining the mathematical concepts employed in this literature, and then list several key contributions on representability, concerning both the preference orderings and the spaces on which they are defined. For any continuous preference ordering, we show the necessity of separability and the sufficiency of connectedness plus separability, or of second countability, of the underlying space. We emphasize the need for separability by showing that on any nonseparable metric space there are continuous preference orderings without utility representation. By strengthening connectedness, however, we show that countable boundedness of the preference ordering is necessary and sufficient for the existence of a (continuous) utility representation. Finally, we discuss the special case of strictly monotonic preferences.
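The classical result at the core of this literature can be stated compactly; the formulation below is an editorial paraphrase of the Debreu-type representation theorem, not a quotation from the paper.

```latex
% Debreu-type representation theorem (paraphrase):
% Let $X$ be a connected and separable topological space and let
% $\succsim$ be a continuous preference ordering on $X$. Then there
% exists a continuous function $u : X \to \mathbb{R}$ such that
\[
  x \succsim y \iff u(x) \ge u(y) \qquad \text{for all } x, y \in X .
\]
```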

7.
Because of the increased availability of large panel data sets, common factor models have become very popular. The workhorse of the literature is the principal components (PC) method, which is based on an eigen-analysis of the sample covariance matrix of the data. It is used, among other things, to estimate the factors and their loadings, to determine the number of factors, and to conduct inference when estimated factors enter panel regression models. The bulk of the theory justifying these uses assumes that both the number of time periods, T, and the number of cross-section units, N, tend to infinity. This is a drawback because in practice T and N are always finite, so the asymptotic approximation can be poor, as ample simulation evidence confirms. In the current paper, we focus on the typical micro panel where only N is large and T is finite and potentially very small, a scenario that has not received much attention in the PC literature. A version of PC is proposed, henceforth referred to as cross-section average-based PC (CPC), whereby the eigen-analysis is performed on the covariance matrix of the cross-section averaged data rather than on the covariance matrix of the raw data as in original PC. The averaging attenuates the idiosyncratic noise, which is why T can be fixed in CPC. Mirroring the development in the PC literature, the new method is used to estimate the factors and their average loadings, to determine the number of factors, and to estimate factor-augmented regressions, leading to a complete CPC-based toolbox. The relevant theory is established and evaluated using Monte Carlo simulations.
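A minimal numpy sketch of one plausible reading of the CPC step: partition the N units into m groups, average within groups, and eigen-analyze the resulting T × T covariance matrix. The group count, scaling, and identification conventions are editorial assumptions, not the paper's.

```python
import numpy as np

def cpc_factors(X, m, k):
    """Cross-section average-based PC (one plausible reading of the abstract):
    X is T x N; average the N units within m groups, then eigen-analyze the
    T x T covariance matrix of the m averaged series (T fixed, N large)."""
    T, N = X.shape
    groups = np.array_split(np.arange(N), m)
    Xbar = np.column_stack([X[:, g].mean(axis=1) for g in groups])  # T x m
    S = Xbar @ Xbar.T / m                      # T x T second-moment matrix
    vals, vecs = np.linalg.eigh(S)             # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    F = vecs[:, order[:k]] * np.sqrt(T)        # factor estimates, up to rotation
    return F, vals[order]

# usage on a one-factor panel with T small and N large
rng = np.random.default_rng(3)
T, N = 10, 2000
f = rng.normal(size=T)
lam = rng.normal(size=N)
X = np.outer(f, lam) + rng.normal(size=(T, N))
F, eigvals = cpc_factors(X, m=50, k=1)
```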

8.
This paper discusses the estimation of a class of nonlinear state space models, including nonlinear panel data models with autoregressive error components. A health economics example illustrates the usefulness of such models. For the approximation of the likelihood function, nonlinear filtering algorithms developed in the time-series literature are considered. Because of the relatively simple structure of these models, a straightforward algorithm based on sequential Gaussian quadrature is suggested. It performs very well both in the empirical application and in a Monte Carlo study for ordered logit and binary probit models with an AR(1) error component.
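To show the flavor of Gaussian quadrature in this setting, here is a sketch of a panel probit likelihood with a static Gaussian random effect integrated out by Gauss-Hermite quadrature. The paper's algorithm applies quadrature sequentially to handle the AR(1) error component, which this simplification deliberately omits.

```python
import numpy as np
from scipy.stats import norm

def re_probit_loglik(beta, sigma_a, y, X, n_nodes=12):
    """Log-likelihood of a panel probit with a static Gaussian random effect,
    integrated out by Gauss-Hermite quadrature (a simplified stand-in for the
    paper's sequential quadrature over an AR(1) error component)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    ll = 0.0
    for yi, Xi in zip(y, X):                    # loop over individuals
        xb = Xi @ beta                          # index for each period
        # P(y_it | a) at each node a; quadrature weight function is exp(-a^2/2)
        p = norm.cdf((2 * yi[:, None] - 1) * (xb[:, None] + sigma_a * nodes))
        ll += np.log(weights @ np.prod(p, axis=0) / np.sqrt(2 * np.pi))
    return ll

# usage on a tiny simulated panel (hypothetical design)
rng = np.random.default_rng(4)
n, T = 200, 5
X = [rng.normal(size=(T, 2)) for _ in range(n)]
a = rng.normal(size=n)
y = [(Xi @ np.array([0.5, -0.3]) + ai + rng.normal(size=T) > 0).astype(float)
     for Xi, ai in zip(X, a)]
print(re_probit_loglik(np.array([0.5, -0.3]), 1.0, y, X))
```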

9.
This paper gives an overview of space–time variability in agriculture and the environment. The analysis is based upon geostatistics within the framework of linear regression theory. Common procedures for spatial data are extended to the space–time domain. Several practical studies are presented. The first describes the use of REML to estimate semi-variogram models in the presence of trends. The second describes the use of probability maps in an environmental statistical study. The third describes the use of geostatistics in modelling the development of a disease in cabbage. The studies are evaluated using quadratic scoring rules.

10.
This paper is a survey of estimation techniques for stationary and ergodic diffusion processes observed at discrete points in time. The reader is introduced to the following techniques: (i) estimating functions, with special emphasis on martingale estimating functions and so-called simple estimating functions; (ii) analytical and numerical approximations of the likelihood function, which can in principle be made arbitrarily accurate; (iii) Bayesian analysis and MCMC methods; and (iv) indirect inference and EMM, which both introduce auxiliary (but misspecified) models and correct for the implied bias by simulation.
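As a concrete instance of category (ii), a sketch of the crudest likelihood approximation, the Euler pseudo-likelihood, applied to a simulated Ornstein–Uhlenbeck process; the model and parameter values are editorial choices.

```python
import numpy as np
from scipy.optimize import minimize

def ou_euler_negloglik(theta, x, dt):
    """Euler pseudo-likelihood for dX = kappa(alpha - X) dt + sigma dW:
    each increment is treated as Gaussian with the Euler mean and variance."""
    kappa, alpha, sigma = theta
    if kappa <= 0 or sigma <= 0:
        return np.inf
    mean = x[:-1] + kappa * (alpha - x[:-1]) * dt
    var = sigma**2 * dt
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# simulate an OU path, then maximize the approximate likelihood
rng = np.random.default_rng(5)
dt, n = 1 / 252, 2000
x = np.empty(n)
x[0] = 0.05
for i in range(1, n):
    x[i] = x[i - 1] + 1.0 * (0.05 - x[i - 1]) * dt + 0.02 * np.sqrt(dt) * rng.normal()
fit = minimize(ou_euler_negloglik, x0=np.array([0.5, 0.04, 0.05]),
               args=(x, dt), method="Nelder-Mead")
print(fit.x)   # rough estimates of (kappa, alpha, sigma)
```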

11.
Nonlinear regression models have been widely used in practice for a variety of time series and cross-section datasets. For analyzing univariate and multivariate time series data in particular, smooth transition regression (STR) models have proven very useful for representing and capturing asymmetric behavior. Most STR models have been applied to univariate processes under a variety of assumptions, including stationary or cointegrated processes; uncorrelated, homoskedastic or conditionally heteroskedastic errors; and weakly exogenous regressors. Under the assumption of exogeneity, the standard method of estimation is nonlinear least squares. The primary purpose of this paper is to relax the assumption of weakly exogenous regressors and to discuss moment-based methods for estimating STR models. We analyze the properties of the STR model with endogenous variables by providing a diagnostic test of linearity of the underlying process under endogeneity, developing an estimation procedure and a misspecification test for the STR model, presenting Monte Carlo simulation results to show the usefulness of the model and estimation method, and providing an empirical application to inflation rate targeting in Brazil. We show that STR models with endogenous variables can be specified and estimated by a straightforward application of existing results in the literature.
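For reference, a common STR specification uses a logistic transition function; the snippet below defines it and simulates a simple first-order LSTR process. The functional form is standard in this literature; the parameter values are hypothetical.

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Smooth transition function G(s; gamma, c) in a logistic STR model:
    y_t = x_t' beta + (x_t' theta) * G(s_t; gamma, c) + eps_t."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

# simulate a simple LSTR(1) process (hypothetical parameter values)
rng = np.random.default_rng(6)
T, gamma, c = 500, 5.0, 0.0
y = np.zeros(T)
for t in range(1, T):
    G = logistic_transition(y[t - 1], gamma, c)
    y[t] = 0.2 * y[t - 1] + 0.6 * y[t - 1] * G + rng.normal()
```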

12.
Prudent statistical analysis of correlated data requires accounting for the correlation among the measurements. Specifying a form for the covariance matrix of the data could reduce the high number of parameters of the covariance and increase efficiency of the inferences about the regression parameters. Motivated by the success of ordinary, partial and inverse correlograms in identifying parsimonious models for stationary time series, we introduce generalizations of these plots for nonstationary data. Their roles in detecting heterogeneity and correlation of the data and identifying parsimonious models for the covariance matrix are illuminated using a longitudinal dataset. Decomposition of a covariance matrix into "variance" and "dependence" components provides the necessary ingredients for the proposed graphs. This amounts to replacing a 3-D correlation plot by a pair of 2-D plots, providing complementary information about dependence and heterogeneity. Models identified and fitted using the variance-correlation decomposition of a covariance matrix are not guaranteed to be positive definite, but those using the modified Cholesky decomposition are. Limitations of our graphical diagnostics for general multivariate data where the measurements are not (time-) ordered are discussed.
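The modified Cholesky decomposition mentioned at the end is easy to state in code: write T Σ T′ = D with T unit lower triangular and D diagonal, so that any unconstrained choice of (T, D > 0) maps back to a positive-definite Σ. A minimal numpy sketch of this construction via the standard Cholesky factor; this is an editorial illustration, not code from the paper.

```python
import numpy as np

def modified_cholesky(sigma):
    """Decompose a covariance matrix as T Sigma T' = D, with T unit lower
    triangular and D diagonal; Sigma = T^{-1} D T^{-T} is positive definite
    for any D > 0, which is why models built on this decomposition are
    positive definite by construction."""
    L = np.linalg.cholesky(sigma)          # Sigma = L L'
    d = np.diag(L) ** 2                    # innovation variances
    T = np.linalg.inv(L / np.diag(L))      # unit lower triangular
    return T, np.diag(d)

sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.5],
                  [0.3, 0.5, 1.0]])
T, D = modified_cholesky(sigma)
assert np.allclose(T @ sigma @ T.T, D)
```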

13.
This paper considers measurement error from a new perspective. In surveys, response errors are often caused by the fact that respondents recall past events and quantities imperfectly. We explore the consequences of limited recall for the identification of marginal effects. Our identification approach is entirely nonparametric, using Matzkin-type nonseparable models that nest a large class of potential structural models. We show that measurement error due to limited recall will generally exhibit nonstandard behavior, in particular be nonclassical and differential, even for left-hand side variables in linear models. We establish that information reduction by individuals is the critical issue for the severity of recall measurement error. In order to detect information reduction, we propose a nonparametric test statistic. Finally, we propose bounds to address identification problems resulting from recall errors. We illustrate our theoretical findings using real-world data on food consumption.

14.
We propose a measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several US macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several non-parametric extensions of our approach.
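Under squared-error loss and an AR(1) data-generating process, the measure reduces to simple arithmetic; the sketch below uses the common one-minus-the-loss-ratio normalization (the normalization and parameter values are editorial choices).

```python
import numpy as np

def ar1_predictability(phi, j, k, sigma2=1.0):
    """P(j,k) = 1 - E[e_{t+j}^2] / E[e_{t+k}^2] for an AR(1) under
    squared-error loss; the h-step forecast-error variance is
    sigma2 * sum_{i=0}^{h-1} phi^(2i). (One common normalization.)"""
    def fev(h):
        return sigma2 * np.sum(phi ** (2 * np.arange(h)))
    return 1.0 - fev(j) / fev(k)

print(ar1_predictability(0.9, j=1, k=40))   # persistent: predictable at short horizons
print(ar1_predictability(0.1, j=1, k=40))   # nearly unpredictable beyond one step
```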

15.
Factor modelling of a large time series panel has proven widely useful for reducing its cross-sectional dimensionality. Common co-movements in the panel are explained through a small number of common components, up to some idiosyncratic behaviour of each individual series. To capture serial correlation in the common components, a dynamic structure is used, as in traditional (uni- or multivariate) time series analysis of second-order structure, i.e. allowing for infinite-length filtering of the factors via dynamic loadings. In this paper, motivated by economic data observed over long time periods that show smooth transitions in their covariance structure, we allow the dynamic structure of the factor model to be non-stationary over time by proposing a deterministic time variation of its loadings. In this respect we generalize both recent work on static factor models with time-varying loadings and the classical, i.e. stationary, dynamic approximate factor model. Motivated by the stationary case, we estimate the common components of our dynamic factor model by the eigenvectors of a consistent estimator of the now time-varying spectral density matrix of the underlying data-generating process. This can be seen as a time-varying principal components approach in the frequency domain. We derive consistency of this estimator in a "double-asymptotic" framework in which both the cross-section and the time dimension tend to infinity. The performance of the estimators is illustrated by a simulation study and an application to a macroeconomic data set.
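A crude sketch of the main computational step as we read it: estimate the spectral density matrix on a local window by smoothing the cross-periodogram across frequencies, then eigen-decompose it frequency by frequency. The window, taper-free periodogram, and rectangular smoother are all editorial simplifications.

```python
import numpy as np

def local_spectral_eigenvalues(X, t0, h, bw=5):
    """Eigen-analysis of a locally estimated spectral density matrix.
    X: T x N panel; window of length 2h centred at t0; bw: half-width of
    the rectangular smoother applied across Fourier frequencies."""
    seg = X[t0 - h:t0 + h]
    seg = seg - seg.mean(axis=0)
    n = seg.shape[0]
    d = np.fft.rfft(seg, axis=0)                                # DFT of each series
    I = np.einsum('fi,fj->fij', d, d.conj()) / (2 * np.pi * n)  # cross-periodogram
    S = np.empty_like(I)
    for f in range(I.shape[0]):                                 # rectangular smoothing
        lo, hi = max(0, f - bw), min(I.shape[0], f + bw + 1)
        S[f] = I[lo:hi].mean(axis=0)
    vals = np.linalg.eigvalsh(S)                                # Hermitian, per frequency
    return vals[:, ::-1]                                        # descending eigenvalues
```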

16.
The Wold theorem plays a fundamental role in the decomposition of weakly stationary time series. It provides a moving average representation of the process under consideration in terms of uncorrelated innovations, whatever the nature of the process. Empirically, this result makes it possible to identify orthogonal shocks, for instance in macroeconomic and financial time series. More theoretically, the decomposition of weakly stationary stochastic processes can be seen as a special case of the abstract Wold theorem, which decomposes Hilbert spaces using isometric operators. In this work we explain this link in detail, employing the Hilbert space spanned by a weakly stationary time series and the lag operator as the isometry. In particular, we characterize the innovation subspace by exploiting the adjoint operator. We also show that the isometry of the lag operator is equivalent to weak stationarity. Our methodology, based entirely on operator theory, provides novel tools for discovering new Wold-type decompositions of stochastic processes in which the isometry involved is no longer the lag operator. In such decompositions the orthogonality of the innovations is ensured by construction, since they are derived from the abstract Wold theorem.
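For reference, the classical decomposition this specializes to can be written as follows (standard textbook form, an editorial transcription):

```latex
% Wold decomposition of a weakly stationary process:
\[
  X_t \;=\; \sum_{j=0}^{\infty} \psi_j\, \varepsilon_{t-j} \;+\; \mu_t ,
  \qquad \psi_0 = 1, \quad \sum_{j=0}^{\infty} \psi_j^2 < \infty ,
\]
% where $(\varepsilon_t)$ is white noise (the innovations) and $(\mu_t)$
% is the deterministic component, orthogonal to the innovations at all
% leads and lags.
```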

17.
Differencing is a very popular transformation for rendering series with stochastic trends stationary. When the differenced series is heteroscedastic, it is commonly modelled with an ARMA-GARCH model, and the corresponding ARIMA-GARCH model is then used to forecast future values of the original series. However, the heteroscedasticity observed in the stationary transformation must be generated by the transitory and/or the long-run component of the original data. In the former case, the shocks to the variance are transitory and the prediction intervals should converge to homoscedastic intervals as the prediction horizon grows. We show that, in this case, the prediction intervals constructed from ARIMA-GARCH models can be inadequate because they never converge to homoscedastic intervals. All of the results are illustrated using simulated and real time series with stochastic levels.
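A back-of-the-envelope illustration of the mechanism, assuming GARCH(1,1) increments to a random walk (parameter values hypothetical): the per-step conditional variances mean-revert, but their cumulative sum, which drives the level intervals, retains the initial variance shock indefinitely.

```python
import numpy as np

def level_forecast_variance(h_max, sigma2_1, omega, alpha, beta):
    """Conditional variances for a random walk with GARCH(1,1) increments:
    per-step variance sigma2_{t+h|t} = sbar2 + (alpha+beta)^(h-1) *
    (sigma2_{t+1|t} - sbar2) mean-reverts, while the level variance is
    the cumulative sum of the per-step variances."""
    sbar2 = omega / (1 - alpha - beta)           # unconditional variance
    h = np.arange(1, h_max + 1)
    sig2_h = sbar2 + (alpha + beta) ** (h - 1) * (sigma2_1 - sbar2)
    return np.cumsum(sig2_h)

calm = level_forecast_variance(200, 0.5, 0.05, 0.05, 0.90)
tense = level_forecast_variance(200, 3.0, 0.05, 0.05, 0.90)
# the gap tends to (3.0 - 0.5) / (1 - 0.95) = 50, not to zero
print((tense - calm)[[0, 49, 199]])
```

The persistent gap in level-forecast variance between a calm and a volatile starting point is one way to see why intervals built this way never become homoscedastic.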

18.
This paper proposes two types of stochastic correlation structures for Multivariate Stochastic Volatility (MSV) models, namely the constant correlation (CC) MSV and dynamic correlation (DC) MSV models, from which the stochastic covariance structures can easily be obtained. Both structures can be used to determine optimal portfolio and risk management strategies through the use of correlation matrices, and to calculate Value-at-Risk (VaR) forecasts and optimal capital charges under the Basel Accord through the use of covariance matrices. A technique is developed to estimate the DC MSV model using a Markov Chain Monte Carlo (MCMC) procedure, and simulated data show that the estimation method works well. Various multivariate conditional volatility and MSV models are compared via simulation, including an evaluation of alternative VaR estimators. The DC MSV model is also estimated using three sets of empirical data, namely Nikkei 225 Index, Hang Seng Index and Straits Times Index returns, and significant dynamic correlations are found. The Dynamic Conditional Correlation (DCC) model is also estimated and is found to be far less sensitive to the covariation in the shocks to the indexes. The correlation process for the DCC model also appears to have a unit root, implying constant conditional correlations in the long run. In contrast, the estimates arising from the DC MSV model indicate that the dynamic correlation process is stationary.
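To fix ideas, a simulation sketch of the simpler CC-MSV structure: each asset's log-volatility follows an AR(1), and the return shocks share a constant correlation matrix. MCMC estimation, as in the paper, is not attempted here, and the parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_cc_msv(T, mu, phi, sigma_eta, R):
    """Constant-correlation MSV: log-volatilities follow independent AR(1)
    processes; return shocks share the constant correlation matrix R
    (hypothetical parameter values throughout)."""
    k = len(mu)
    L = np.linalg.cholesky(R)
    h = np.tile(mu, (T, 1)).astype(float)
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal(size=k)
    eps = rng.normal(size=(T, k)) @ L.T          # correlated N(0, R) shocks
    return np.exp(h / 2) * eps, h

R = np.array([[1.0, 0.6], [0.6, 1.0]])
returns, logvol = simulate_cc_msv(1000,
                                  mu=np.array([-9.0, -8.5]),
                                  phi=np.array([0.95, 0.90]),
                                  sigma_eta=np.array([0.2, 0.25]),
                                  R=R)
```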

19.
This paper develops an approach to parametric inference for instantaneously transformed stationary processes. It discusses the asymptotics of the Whittle estimator of the parameters involved and provides an explicit expression for the asymptotic covariance matrix that does not require Gaussian innovations. As a specific instantaneous transformation, the paper introduces a new version of the Box–Cox transformation and investigates in detail the vector ARMA processes implied by that transformation, proposing a computation-intensive procedure for parametric estimation and testing. As a computationally feasible test that relies neither on the explicit analytic form of the asymptotic covariance matrix nor on the information equality, the paper proposes a Monte Carlo Wald test, with illustrative simulation and real-data examples.
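In the simplest stationary case, the Whittle estimator minimizes a frequency-domain approximation to the Gaussian likelihood. A sketch for an AR(1), with the innovation variance concentrated out; this is an editorial simplification, not the paper's vector procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_ar1(x):
    """Whittle estimate of the AR(1) coefficient: minimize
    sum_j [ log f(w_j; phi) + I(w_j) / f(w_j; phi) ] over Fourier
    frequencies, with f(w; phi) = s2 / (2 pi |1 - phi e^{-iw}|^2)."""
    n = len(x)
    x = x - x.mean()
    I = np.abs(np.fft.rfft(x)) ** 2 / (2 * np.pi * n)   # periodogram
    w = 2 * np.pi * np.arange(len(I)) / n
    I, w = I[1:], w[1:]                                 # drop frequency zero

    def obj(phi):
        g = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * w)) ** 2)
        s2 = np.mean(I / g)                             # concentrated variance
        return np.sum(np.log(s2 * g) + I / (s2 * g))

    return minimize_scalar(obj, bounds=(-0.99, 0.99), method="bounded").x

rng = np.random.default_rng(8)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = 0.7 * x[t - 1] + rng.normal()
print(whittle_ar1(x))   # should be close to 0.7
```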

20.
A nonparametric, residual-based stationary bootstrap procedure is proposed for unit root testing in a time series. The procedure generates a pseudoseries which mimics the original, but ensures the presence of a unit root. Unlike many others in the literature, the proposed test is valid for a wide class of weakly dependent processes and is not based on parametric assumptions on the data-generating process. Large sample theory is developed and asymptotic validity is shown via a bootstrap functional central limit theorem. The case of a least squares statistic is discussed in detail, including simulations to investigate the procedure's finite sample performance.
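A compact sketch of the procedure's skeleton as described: resample Dickey–Fuller regression residuals with the Politis–Romano stationary bootstrap and re-integrate them so the pseudoseries has a unit root by construction. The coefficient-form statistic and the omission of deterministic terms are editorial simplifications.

```python
import numpy as np

rng = np.random.default_rng(9)

def stationary_bootstrap(u, p):
    """Politis-Romano stationary bootstrap: circular blocks with
    geometric block lengths (mean 1/p)."""
    n = len(u)
    out = np.empty(n)
    i = rng.integers(n)
    for t in range(n):
        out[t] = u[i]
        i = rng.integers(n) if rng.random() < p else (i + 1) % n
    return out

def bootstrap_unit_root_pvalue(y, p=0.1, B=999):
    """Residual-based bootstrap unit root test: fit the DF regression,
    resample its residuals, and cumulate them so the pseudoseries has a
    unit root; compare DF coefficient statistics."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)          # DF regression slope
    stat = len(dy) * rho
    u = dy - rho * ylag                        # regression residuals
    u = u - u.mean()
    stats = np.empty(B)
    for b in range(B):
        ystar = np.cumsum(stationary_bootstrap(u, p))   # unit root imposed
        dys, ylags = np.diff(ystar), ystar[:-1]
        stats[b] = len(dys) * (ylags @ dys) / (ylags @ ylags)
    return np.mean(stats <= stat)              # left-tail p-value

y = np.cumsum(rng.normal(size=300))            # true unit root
print(bootstrap_unit_root_pvalue(y))           # should not reject
```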
