Similar Literature
20 similar documents retrieved.
1.
Although attention has been given to obtaining reliable standard errors for the plug-in estimator of the Gini index, all standard errors proposed to date are either complicated or quite unreliable. An approximation is derived for the estimator by which it is expressed as a sum of IID random variables. This approximation allows us to develop a reliable standard error that is simple to compute. A simple but effective bias correction is also derived. The quality of inference based on the approximation is checked in a number of simulation experiments, and is found to be very good unless the tail of the underlying distribution is heavy. Bootstrap methods are presented which alleviate this problem except in cases in which the variance is very large or fails to exist. Similar methods can be used to find reliable standard errors of other indices which are not simply linear functionals of the distribution function, such as Sen's poverty index and its modification known as the Sen–Shorrocks–Thon index.
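As an illustration (not taken from the paper), the sketch below computes the plug-in Gini index in its sorted-data form and a bootstrap standard error; the bootstrap is a generic stand-in for the paper's closed-form standard error based on the IID-sum approximation, and the lognormal data are invented.

```python
import numpy as np

def gini_plugin(y):
    # Plug-in Gini index: G = 2 * sum(i * y_(i)) / (n * sum(y)) - (n + 1) / n,
    # where y_(1) <= ... <= y_(n) are the sorted observations.
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * y) / (n * y.sum()) - (n + 1) / n

def gini_bootstrap_se(y, reps=999, seed=0):
    # Bootstrap standard error; a generic stand-in for the paper's
    # simple closed-form SE derived from the IID-sum approximation.
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    draws = [gini_plugin(rng.choice(y, size=y.size, replace=True))
             for _ in range(reps)]
    return np.std(draws, ddof=1)

# Invented example: lognormal incomes (moderately heavy right tail).
y = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=2_000)
print(gini_plugin(y), gini_bootstrap_se(y))
```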

2.
This paper derives an approximation of the mean square error (MSE) of the GMM estimator in dynamic panel data models. The approximation is based on higher-order asymptotic theory under double asymptotics. While first-order theory under double asymptotics captures the bias of the estimator, it is not sufficiently informative about its variance; higher-order theory supplies that information. From this result, a procedure for choosing the number of instruments is proposed. Simulations confirm that the proposed procedure improves the precision of the estimator.

3.
We develop a unit-root test based on a simple variant of Gallant's (1981) flexible Fourier form. The test relies on the fact that a series with several smooth structural breaks can often be approximated using the low-frequency components of a Fourier expansion. Hence, it is possible to test for a unit root without having to model the precise form of the break. Our unit-root test employing a Fourier approximation has good size and power for the types of breaks often used in economic analysis. The appropriate use of the test is illustrated using several interest rate spreads.
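A minimal sketch of the idea, with my own variable names and an invented example (the t-statistic's null distribution is non-standard, so it must be compared with the authors' tabulated or simulated critical values):

```python
import numpy as np

def fourier_df_tstat(y, k=1, trend=False):
    # Dickey-Fuller-style regression augmented with Fourier frequency k:
    #   dy_t = a + rho*y_{t-1} + b*sin(2*pi*k*t/T) + c*cos(2*pi*k*t/T) [+ d*t] + e_t
    y = np.asarray(y, dtype=float)
    T = y.size
    t = np.arange(2, T + 1)                  # time index for dy_2 ... dy_T
    dy = np.diff(y)
    cols = [np.ones(T - 1), y[:-1],
            np.sin(2 * np.pi * k * t / T), np.cos(2 * np.pi * k * t / T)]
    if trend:
        cols.append(t.astype(float))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    u = dy - X @ beta
    s2 = u @ u / (dy.size - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se                      # t-statistic on the lagged level

# Invented example: a pure random walk, which should not reject the null.
rng = np.random.default_rng(0)
rw = np.cumsum(rng.standard_normal(200))
print(fourier_df_tstat(rw, k=1))
```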

4.
We show that, given a value function approximation V of a strongly concave stochastic dynamic programming problem, the associated policy function approximation is Hölder continuous in V.

5.
This paper provides closed-form likelihood approximations for multivariate jump-diffusion processes widely used in finance. For a fixed order of approximation, the maximum-likelihood estimator (MLE) computed from this approximate likelihood achieves the asymptotic efficiency of the true yet uncomputable MLE as the sampling interval shrinks. This method is used to uncover the realignment probability of the Chinese Yuan. Since February 2002, the market-implied realignment intensity has increased fivefold. The term structure of the forward realignment rate, which completely characterizes future realignment probabilities, is hump-shaped and peaks at mid-2004. The realignment probability responds quickly to economic news releases and government interventions.

6.
We demonstrate that, when testing for stochastic dominance of order three and above, using a weighted version of the Kolmogorov–Smirnov-type statistic proposed by McFadden [1989. In: Fomby, T.B., Seo, T.K. (Eds.), Studies in the Economics of Uncertainty. Springer, New York, pp. 113–134] is necessary for obtaining a non-degenerate asymptotic distribution. Since the asymptotic distribution is complex, we discuss a bootstrap approximation to it in the context of a real application.
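To make the tested object concrete, the sketch below computes the empirical order-s integrated CDFs and an unweighted KS-type contrast between two invented samples; the paper's weighting (the ingredient needed for a non-degenerate limit when s >= 3) and the bootstrap critical values are deliberately omitted here.

```python
import numpy as np
from math import factorial

def d_s(sample, s, grid):
    # Empirical integrated CDF of order s:
    #   D_s(x) = (1 / (n * (s-1)!)) * sum_i max(x - X_i, 0)^(s-1),
    # with D_1 the empirical CDF itself.
    sample = np.asarray(sample, dtype=float)
    gaps = np.maximum(grid[:, None] - sample[None, :], 0.0)
    return (gaps ** (s - 1)).sum(axis=1) / (sample.size * factorial(s - 1))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=500)
b = rng.normal(0.1, 1.5, size=500)
grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), 400)
# Unweighted KS-type contrast for third-order dominance of a over b:
stat = np.sqrt(a.size) * np.max(d_s(a, 3, grid) - d_s(b, 3, grid))
print(stat)
```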

7.
Riesz estimators     
We consider properties of estimators that can be written as vector lattice (Riesz space) operations. Using techniques widely used in economic theory and functional analysis, we study the approximation properties of these estimators, paying special attention to additive models. We also provide two algorithms, RIESZVAR(i) and RIESZVAR(ii), for the consistent parametric estimation of continuous multivariate piecewise linear functions.

8.
In this paper, a new model to analyze the comovements in the volatilities of a portfolio is proposed. The Pure Variance Common Features model is a factor model for the conditional variances of a portfolio of assets, designed to isolate a small number of variance features that drive all assets' volatilities. It decomposes the conditional variance into a short-run idiosyncratic component (a low-order ARCH process) and a long-run component (the variance factors). An empirical example provides evidence that models with very few variance features perform well in capturing the long-run common volatilities of the equity components of the Dow Jones.

9.
Equilibrium business cycle models typically have fewer shocks than variables. As pointed out by Altug (1989, International Economic Review 30(4), 889–920) and Sargent (1989, The Journal of Political Economy 97(2), 251–287), if variables are measured with error, this characteristic implies that the model solution for measured variables has a factor structure. This paper compares the estimation performance for impulse response coefficients of a VAR approximation to this class of models with that of an estimation method that explicitly takes into account the restrictions implied by the factor structure. Bias and mean-squared error for both factor- and VAR-based estimates of impulse response functions are quantified using, as the data-generating process, a calibrated standard equilibrium business cycle model. We show that, at short horizons, VAR estimates of impulse response functions are less accurate than factor estimates, while the two methods perform similarly at medium and long horizons.

10.
This paper shows that the asymptotic normal approximation is often insufficiently accurate for volatility estimators based on high-frequency data. To remedy this, we derive Edgeworth expansions for such estimators. The expansions are developed in the framework of small-noise asymptotics. The results apply to Cornish–Fisher inversion and help set intervals more accurately than those relying on the normal distribution.

11.
This paper addresses the issue of testing the ‘hybrid’ New Keynesian Phillips curve (NKPC) through vector autoregressive (VAR) systems and likelihood methods, giving special emphasis to the case where the variables are non-stationary. The idea is to use a VAR for both the inflation rate and the explanatory variable(s) to approximate the dynamics of the system and derive testable restrictions. Attention is focused on the ‘inexact’ formulation of the NKPC. Empirical results over the period 1971–98 show that the NKPC is far from providing a ‘good first approximation’ of inflation dynamics in the Euro area.

12.
The paper examines a Lagrange multiplier-type test for the constancy of the parameter in general models with dependent data, without imposing any artificial choice of the possible location of the break. In order to derive the asymptotic behaviour of the test, we extend a strong approximation result for partial sums of a sequence of random variables. We also present a Monte Carlo experiment to examine the finite-sample performance of the test and how it compares with tests that assume some knowledge of the possible location of the break.

13.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method uses sequences of importance-weighted Expectation Maximization steps to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which only a kernel is required), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is updated as new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach, useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which approximates the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.
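A drastically simplified, single-component sketch of the adaptive idea (the actual method fits a full Student-t mixture via importance-weighted EM steps; the bimodal target kernel and all tuning constants below are invented):

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

def log_kernel(x):
    # Invented bimodal target kernel (unnormalized log posterior).
    return logsumexp(np.stack([stats.norm.logpdf(x, -2.0, 0.5) + np.log(0.3),
                               stats.norm.logpdf(x, 2.0, 1.0) + np.log(0.7)]),
                     axis=0)

rng = np.random.default_rng(0)
nu, mu, scale = 5.0, 0.0, 3.0                    # initial Student-t candidate
for _ in range(3):                               # a few adaptation steps
    x = mu + scale * rng.standard_t(nu, size=20_000)
    logw = log_kernel(x) - stats.t.logpdf(x, nu, loc=mu, scale=scale)
    w = np.exp(logw - logw.max()); w /= w.sum()
    mu = np.sum(w * x)                           # IS-weighted moment updates
    scale = np.sqrt(np.sum(w * (x - mu) ** 2) * (nu - 2) / nu)

# Final importance-sampling estimate with the adapted candidate:
x = mu + scale * rng.standard_t(nu, size=20_000)
logw = log_kernel(x) - stats.t.logpdf(x, nu, loc=mu, scale=scale)
w = np.exp(logw - logw.max()); w /= w.sum()
print(mu, scale, np.sum(w * x))                  # IS estimate of the posterior mean
```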

14.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare the “true” joint distribution with those generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov-type testing, and other work on the evaluation of DSGEs aimed at comparing the second-order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given in which it is shown that alternative versions of a standard DSGE model, in which calibrated parameters are allowed to vary slightly, perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned implausible variances and/or distributional assumptions.

15.
In two recent papers, Enders and Lee (2009) and Becker, Enders and Lee (2006) provide Lagrange multiplier and ordinary least squares de-trended unit root tests, and stationarity tests, respectively, which incorporate a Fourier approximation element in the deterministic component. Such an approach can prove useful in providing robustness against a variety of breaks, of unknown form and number, in the deterministic trend function. In this article, we generalize the unit root testing procedure based on local generalized least squares (GLS) de-trending proposed by Elliott, Rothenberg and Stock (1996) to allow for a Fourier approximation to the unknown deterministic component in the same way. We show that the resulting unit root tests possess good finite-sample size and power properties and that the test statistics have stable non-standard distributions, despite the curious result that their limiting null distributions exhibit asymptotic rank deficiency.

16.
We examine the higher-order properties of the wild bootstrap (Wu, 1986) in a linear regression model with stochastic regressors. We find that the ability of the wild bootstrap to provide a higher-order refinement is contingent upon whether the errors are mean-independent of the regressors or merely uncorrelated with them. In the latter case, the wild bootstrap may fail to match some of the terms in an Edgeworth expansion of the full-sample test statistic. Nonetheless, we show that, in shrinking neighborhoods of properly specified models, the wild bootstrap still has lower maximal asymptotic risk as an estimator of the true distribution than a normal approximation. To assess the practical implications of this result, we conduct a Monte Carlo study contrasting the performance of the wild bootstrap with a normal approximation and the traditional nonparametric bootstrap.
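For reference, a textbook wild bootstrap of a regression t-statistic with Rademacher weights looks roughly as follows (this is the generic algorithm, not the paper's Edgeworth analysis; the heteroskedastic example is invented):

```python
import numpy as np

def wild_bootstrap_pvalue(y, X, j, reps=999, seed=0):
    # Wild bootstrap p-value for H0: beta_j = 0, resampling residuals as
    # u_i* = v_i * u_i with Rademacher weights v_i in {-1, +1}.
    rng = np.random.default_rng(seed)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)

    def fit(yy):
        bb = XtX_inv @ X.T @ yy
        uu = yy - X @ bb
        se = np.sqrt(uu @ uu / (n - k) * XtX_inv[j, j])
        return bb, uu, se

    b, u, se = fit(y)
    t0 = b[j] / se
    t_star = np.empty(reps)
    for r in range(reps):
        v = rng.choice([-1.0, 1.0], size=n)          # Rademacher weights
        bb, _, se_r = fit(X @ b + u * v)
        t_star[r] = (bb[j] - b[j]) / se_r            # recentred statistic
    return np.mean(np.abs(t_star) >= np.abs(t0))

# Invented example with heteroskedastic errors, where the wild bootstrap shines.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = X @ np.array([1.0, 0.0]) + rng.standard_normal(100) * (1 + 0.5 * np.abs(X[:, 1]))
print(wild_bootstrap_pvalue(y, X, j=1))
```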

17.
This article presents a formal explanation of the forecast combination puzzle: that simple combinations of point forecasts are repeatedly found to outperform sophisticated weighted combinations in empirical applications. The explanation lies in the effect of finite-sample error in estimating the combining weights. A small Monte Carlo study and a reappraisal of an empirical study by Stock and Watson [Federal Reserve Bank of Richmond Economic Quarterly (2003) Vol. 89/3, pp. 71–90] support this explanation. The Monte Carlo evidence, together with a large-sample approximation to the variance of the combining weight, also supports the popular recommendation to ignore forecast error covariances in estimating the weight.
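A small invented simulation illustrates the mechanism: with a short window for estimating the combining weight, the equal-weight average typically attains lower out-of-sample MSE than the estimated "optimal" weight, even though the population-optimal weight differs from 0.5. The covariance matrix and sample sizes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.array([[1.0, 0.6], [0.6, 1.2]])      # invented forecast-error covariance
L = np.linalg.cholesky(S)
# Population-optimal weight on forecast 1: (s22 - s12) / (s11 + s22 - 2*s12)
w_star = (S[1, 1] - S[0, 1]) / (S[0, 0] + S[1, 1] - 2 * S[0, 1])

def mse(w, e):                               # MSE of the combination w*f1 + (1-w)*f2
    c = w * e[:, 0] + (1.0 - w) * e[:, 1]
    return np.mean(c ** 2)

n_est, n_eval, reps = 30, 5_000, 1_000       # short estimation window
loss_hat = loss_eq = 0.0
for _ in range(reps):
    e_est = rng.standard_normal((n_est, 2)) @ L.T
    e_out = rng.standard_normal((n_eval, 2)) @ L.T
    C = np.cov(e_est.T, ddof=1)              # estimated error covariance
    w_hat = (C[1, 1] - C[0, 1]) / (C[0, 0] + C[1, 1] - 2 * C[0, 1])
    loss_hat += mse(w_hat, e_out) / reps
    loss_eq += mse(0.5, e_out) / reps
print(w_star, loss_hat, loss_eq)             # estimated weight usually loses
```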

18.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods such as importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be ‘close’ to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models. The results indicate the possible usefulness of the neural network approach.

19.
This paper presents an inference approach for dependent data in time series, spatial, and panel data applications. The method involves constructing t and Wald statistics using a cluster covariance matrix estimator (CCE). We use an approximation that takes the number of clusters/groups as fixed and the number of observations per group to be large. The resulting limiting distributions of the t and Wald statistics are standard t and F distributions, where the number of groups plays the role of the sample size. Using a small number of groups is analogous to the ‘fixed-b’ asymptotics of Kiefer and Vogelsang (2002, 2005) (KV) for heteroskedasticity and autocorrelation consistent inference. We provide simulation evidence demonstrating that the procedure substantially outperforms conventional inference procedures.
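A minimal sketch of the procedure (function names and the example are mine): compute OLS, build the cluster covariance estimator from within-group score vectors, and refer the t-statistic to a t distribution with G - 1 degrees of freedom.

```python
import numpy as np
from scipy import stats

def cluster_t_test(y, X, groups, j, level=0.05):
    # OLS with the cluster covariance estimator (CCE); with G fixed,
    # the t-statistic is referred to the t_{G-1} distribution.
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    u = y - X @ b
    meat = np.zeros((k, k))
    labels = np.unique(groups)
    for g in labels:
        s = X[groups == g].T @ u[groups == g]      # within-group score vector
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv
    t = b[j] / np.sqrt(V[j, j])
    crit = stats.t.ppf(1 - level / 2, df=len(labels) - 1)
    return t, crit, abs(t) > crit

# Invented example: 10 clusters of 50 observations with a common cluster effect.
rng = np.random.default_rng(0)
G, m = 10, 50
groups = np.repeat(np.arange(G), m)
X = np.column_stack([np.ones(G * m), rng.standard_normal(G * m)])
u = rng.standard_normal(G * m) + np.repeat(rng.standard_normal(G), m)
y = X @ np.array([1.0, 0.0]) + u
print(cluster_t_test(y, X, groups, j=1))
```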

20.
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed, in which the transitional density is a multivariate Gaussian density whose first and second moments approximate the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density; for nonlinear drift and diffusion functions the approximation is extremely good, and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
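As a concrete instance where the Gaussian approximation is exact, consider the scalar Ornstein-Uhlenbeck process, whose drift is affine and diffusion constant; the sketch below uses its exact conditional mean and variance in a Gaussian quasi-likelihood. All parameter values and starting points are invented.

```python
import numpy as np
from scipy.optimize import minimize

def ou_neg_loglik(params, x, dt):
    # Exact Gaussian transition of dX = kappa*(theta - X) dt + sigma dW:
    #   mean = x_t * e^{-kappa dt} + theta * (1 - e^{-kappa dt})
    #   var  = sigma^2 * (1 - e^{-2 kappa dt}) / (2 kappa)
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    e = np.exp(-kappa * dt)
    m = x[:-1] * e + theta * (1 - e)
    v = sigma**2 * (1 - e**2) / (2 * kappa)
    z = x[1:] - m
    return 0.5 * np.sum(np.log(2 * np.pi * v) + z**2 / v)

# Simulate a path with the exact transition and recover the parameters:
rng = np.random.default_rng(0)
dt, n, kappa, theta, sigma = 1 / 12, 3_000, 2.0, 1.0, 0.3
e = np.exp(-kappa * dt)
sd = np.sqrt(sigma**2 * (1 - e**2) / (2 * kappa))
x = np.empty(n); x[0] = theta
for t in range(n - 1):
    x[t + 1] = x[t] * e + theta * (1 - e) + sd * rng.standard_normal()
fit = minimize(ou_neg_loglik, x0=[1.0, 0.5, 0.5], args=(x, dt), method="Nelder-Mead")
print(fit.x)   # should be close to (2.0, 1.0, 0.3)
```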
