Similar Literature
20 similar records retrieved.
1.
Bootstrapping Financial Time Series (cited by 2: 0 self-citations, 2 by others)
It is well known that time series of returns are characterized by volatility clustering and excess kurtosis. Therefore, when modelling the dynamic behavior of returns, inference and prediction methods based on the assumption of independent and/or Gaussian observations may be inadequate. As bootstrap methods are not, in general, based on any particular assumption about the distribution of the data, they are well suited to the analysis of returns. This paper reviews the application of bootstrap procedures to inference and prediction for financial time series. In relation to inference, bootstrap techniques have been applied to obtain the sampling distribution of statistics for testing, for example, autoregressive dynamics in the conditional mean and variance, unit roots in the mean, fractional integration in volatility, and the predictive ability of technical trading rules. On the other hand, bootstrap procedures have been used to estimate the distribution of returns, which is of interest, for example, for Value at Risk (VaR) models or for prediction purposes. Although the application of bootstrap techniques to the empirical analysis of financial time series is very broad, there are few analytical results on the statistical properties of these techniques when applied to heteroscedastic time series. Furthermore, there are quite a few papers in which the bootstrap procedures used are not adequate.
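As a concrete illustration of the prediction-oriented use of the bootstrap described above, the sketch below resamples returns to approximate the sampling distribution of a Value at Risk estimate. This is a minimal illustration in Python/NumPy; the function name and the IID resampling scheme are choices made here, and, as the survey itself cautions, IID resampling ignores volatility clustering and so may be inadequate for real return series.

```python
import numpy as np

def bootstrap_var(returns, alpha=0.05, n_boot=1000, seed=0):
    """IID bootstrap of the alpha-level Value at Risk (reported as a
    positive loss). Resamples returns with replacement, so it ignores
    volatility clustering -- exactly the caveat raised in the survey."""
    rng = np.random.default_rng(seed)
    n = len(returns)
    var_boot = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(returns, size=n, replace=True)
        var_boot[b] = -np.quantile(sample, alpha)  # VaR as a positive number
    # Point estimate and a percentile interval for the VaR estimator
    return var_boot.mean(), np.quantile(var_boot, [0.025, 0.975])
```

For standard-normal returns the 5% VaR is about 1.645, and the percentile interval gives a rough measure of the estimator's sampling variability.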

2.
The bootstrap discrepancy measures the difference in rejection probabilities between a bootstrap test and one based on the true distribution. The order of magnitude of the bootstrap discrepancy is the same under the null hypothesis and under non-null processes described by Pitman drift. If the test statistic is not an exact pivot, critical values depend on which data-generating process (DGP) is used to determine the null distribution. We propose using the DGP which minimizes the bootstrap discrepancy. We also show that, under an asymptotic independence condition, the power of both bootstrap and asymptotic tests can be estimated cheaply by simulation.

3.
We examine the statistical performance of inequality indices in the presence of extreme values in the data and show that these indices are very sensitive to the properties of the income distribution. Estimation and inference can be dramatically affected, especially when the tail of the income distribution is heavy, even when standard bootstrap methods are employed. However, use of appropriate semiparametric methods for modelling the upper tail can greatly improve the performance of even those inequality indices that are normally considered particularly sensitive to extreme values.

4.
In this article, we investigate the validity of the univariate autoregressive sieve bootstrap applied to time series panels characterized by general forms of cross‐sectional dependence, including but not restricted to cointegration. Using the final equations approach we show that while it is possible to write such a panel as a collection of infinite order autoregressive equations, the innovations of these equations are not vector white noise. This causes the univariate autoregressive sieve bootstrap to be invalid in such panels. We illustrate this result with a small numerical example using a simple DGP for which the sieve bootstrap is invalid, and show that the extent of the invalidity depends on the value of specific parameters. We also show that Monte Carlo simulations in small samples can be misleading about the validity of the univariate autoregressive sieve bootstrap. The results in this article serve as a warning about the practical use of the autoregressive sieve bootstrap in panels where cross‐sectional dependence of general form may be present.
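To make the univariate autoregressive sieve bootstrap concrete, here is a minimal single-series sketch in Python/NumPy. The function name and the least-squares AR(p) fit are illustrative choices; the article's point is precisely that applying this equation by equation to a panel with general cross-sectional dependence is invalid, because the equation-level innovations are not vector white noise.

```python
import numpy as np

def ar_sieve_bootstrap(x, p, seed=0):
    """Generate one pseudo-series from a univariate AR(p) sieve bootstrap:
    fit AR(p) by least squares, resample centred residuals with
    replacement, and regenerate the series recursively."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Lag matrix: row t holds x[t-1], ..., x[t-p] for t = p, ..., n-1
    X = np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    resid = resid - resid.mean()           # centre the residuals
    eps = rng.choice(resid, size=n, replace=True)
    x_star = np.empty(n)
    x_star[:p] = x[:p]                     # initialize with observed values
    for t in range(p, n):
        # x_star[t-1], ..., x_star[t-p] dotted with the AR coefficients
        x_star[t] = x_star[t - p:t][::-1] @ coef + eps[t]
    return x_star
```

A bootstrap pseudo-series generated this way inherits the fitted univariate dynamics, but nothing in the construction preserves dependence across panel units.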

5.
A random sample drawn from a population would appear to offer an ideal opportunity to use the bootstrap in order to perform accurate inference, since the observations of the sample are IID. In this paper, Monte Carlo results suggest that bootstrapping a commonly used index of inequality leads to inference that is not accurate even in very large samples, although inference with poverty indices is satisfactory. We find that the major cause is the extreme sensitivity of many inequality indices to the exact nature of the upper tail of the income distribution. This leads us to study two non-standard bootstraps, the m out of n bootstrap, which is valid in some situations where the standard bootstrap fails, and a bootstrap in which the upper tail is modelled parametrically. Monte Carlo results suggest that accurate inference can be achieved with this last method in moderately large samples.
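A minimal sketch of the m out of n bootstrap applied to an inequality index, in Python/NumPy. The helper names and the plain percentile interval are illustrative choices made here, not the paper's exact procedure (which also studies a bootstrap with a parametrically modelled upper tail).

```python
import numpy as np

def gini(x):
    """Gini coefficient of a nonnegative sample (sorted-weights formula)."""
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def m_out_of_n_ci(incomes, m, n_boot=999, level=0.95, seed=0):
    """Percentile interval for the Gini index via the m out of n bootstrap:
    each pseudo-sample has size m < n, which can restore validity when a
    heavy upper tail makes the standard (m = n) bootstrap unreliable."""
    rng = np.random.default_rng(seed)
    stats = np.array([gini(rng.choice(incomes, size=m, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```

For exponential incomes the population Gini is 0.5, which gives a quick sanity check on both the point estimate and the interval.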

6.
This paper uses both Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH) models to determine different performance levels in a sample of 353 foreign equities operating in the Greek manufacturing sector. In particular, convex and non-convex models are used alongside bootstrap techniques to determine the effect of foreign ownership on SMEs’ performance. The study illustrates how recent developments in efficiency analysis and statistical inference can be applied when evaluating performance issues. The analysis of the foreign equities indicates that the level of foreign ownership has a positive effect on SMEs’ performance.

7.
We employ bootstrap techniques in a production frontier framework to provide statistical inference for each component in the decomposition of labor productivity growth, which has essentially been ignored in this literature. We show that only two of the four components (efficiency changes and human capital accumulation) have significantly contributed to growth in Africa. Although physical capital accumulation is the largest force, it is not statistically significant on average. Thus, ignoring statistical significance would falsely identify physical capital accumulation as a major driver of growth in Africa when it is not.

8.
Parametric mixture models are commonly used in applied work, especially in empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines inference on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularity where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations may lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inference and projection methods.

9.
Through Monte Carlo experiments the effects of a feedback mechanism on the accuracy in finite samples of ordinary and bootstrap inference procedures are examined in stable first- and second-order autoregressive distributed-lag models with non-stationary weakly exogenous regressors. The Monte Carlo is designed to mimic situations that are relevant when a weakly exogenous policy variable affects (and is affected by) the outcome of agents’ behaviour. In the parameterizations we consider, it is found that small-sample problems undermine ordinary first-order asymptotic inference procedures irrespective of the presence and importance of a feedback mechanism. We examine several residual-based bootstrap procedures, each of them designed to reduce one or several specific types of bootstrap approximation error. Surprisingly, the bootstrap procedure which only incorporates the conditional model overcomes the small sample problems reasonably well. Often (but not always) better results are obtained if the bootstrap also resamples the marginal model for the policymakers’ behaviour.

10.
Aspects of statistical analysis in DEA-type frontier models (cited by 2: 2 self-citations, 2 by others)
In Grosskopf (1995) and Banker (1995), different approaches to and problems of statistical inference in DEA frontier models are presented. This paper focuses on the basic characteristics of DEA models from a statistical point of view; it arose from comments on and discussions of both papers above. The framework of DEA models is deterministic (all the observed points lie on the same side of the frontier); nevertheless, a stochastic model can be constructed once a data-generating process is defined. Statistical analysis may then be performed and sampling properties of DEA estimators can be established. However, practical statistical inference (such as hypothesis tests and confidence intervals) still requires devices like the bootstrap. A consistent bootstrap also relies on a clear definition of the data-generating process and on a consistent estimator of it; the approach of Simar and Wilson (1995) is described. Finally, some directions are proposed for introducing stochastic noise into DEA models, in the spirit of the Kneip-Simar (1995) approach.

11.
The wild bootstrap is studied in the context of regression models with heteroskedastic disturbances. We show that, in one very specific case, perfect bootstrap inference is possible, and that a substantial reduction in the error in the rejection probability of a bootstrap test is available much more generally. However, the version of the wild bootstrap with this desirable property lacks the skewness correction afforded by the currently most popular version of the wild bootstrap. Simulation experiments show that this does not prevent the preferred version from having the smallest error in rejection probability in small and medium-sized samples.
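A minimal sketch of a wild bootstrap test with Rademacher (+/-1) weights, i.e. a version without a skewness correction of the kind discussed in the abstract. This is written in Python/NumPy; the HC0 variance estimator and the restricted-residual resampling scheme are standard choices assumed here for illustration, not taken verbatim from the paper.

```python
import numpy as np

def wild_bootstrap_t(y, X, j=1, n_boot=999, seed=0):
    """Wild bootstrap p-value for H0: beta_j = 0 in y = X beta + u with
    heteroskedastic errors, using Rademacher weights and residuals from
    the model estimated under the null."""
    rng = np.random.default_rng(seed)
    # Restricted estimate under H0: drop regressor j
    Xr = np.delete(X, j, axis=1)
    br = np.linalg.lstsq(Xr, y, rcond=None)[0]
    ur = y - Xr @ br

    def t_stat(yy):
        b = np.linalg.lstsq(X, yy, rcond=None)[0]
        e = yy - X @ b
        XtX_inv = np.linalg.inv(X.T @ X)
        # Heteroskedasticity-consistent (HC0) covariance matrix
        V = XtX_inv @ (X.T * e**2) @ X @ XtX_inv
        return b[j] / np.sqrt(V[j, j])

    t0 = t_stat(y)
    count = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(y))  # Rademacher weights
        y_star = Xr @ br + ur * w                 # DGP imposing the null
        if abs(t_stat(y_star)) >= abs(t0):
            count += 1
    return (count + 1) / (n_boot + 1)
```

Multiplying each restricted residual by an independent +/-1 draw preserves its magnitude, so the bootstrap DGP mimics the unknown heteroskedasticity pattern while imposing the null.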

12.
We conduct a two-stage (DEA and regression) analysis of the efficiency of New Zealand secondary schools. Unlike previous applications of two-stage semi-parametric modelling of the school “production process”, we use Simar and Wilson’s double bootstrap procedure, which permits valid inference in the presence of unknown serial correlation in the efficiency scores. We are therefore able to draw robust conclusions about a system that has undergone extensive reforms with respect to ideas high on the educational agenda such as decentralised school management and parental choice. Most importantly, we find that school type affects school efficiency and so too does teacher quality.

13.
This paper addresses the issue of optimal inference for parameters that are partially identified in models with moment inequalities. There currently exists a variety of inferential methods for use in this setting. However, the question of choosing optimally among contending procedures is unresolved. In this paper, I first consider a canonical large deviations criterion for optimality and show that inference based on the empirical likelihood ratio statistic is optimal. Second, I introduce a new empirical likelihood bootstrap that provides a valid resampling method for moment inequality models and overcomes the implementation challenges that arise as a result of non-pivotal limit distributions. Lastly, I analyze the finite sample properties of the proposed framework using Monte Carlo simulations. The simulation results are encouraging.

14.
This article studies inference in multivariate trend models when the volatility process is nonstationary. Within a quite general framework we analyze four classes of tests based on least squares estimation, one of which is robust to both weak serial correlation and nonstationary volatility. The existing multivariate trend tests, which either use non-robust standard errors or rely on non-standard distribution theory, are generally non-pivotal, with limits involving the unknown time-varying volatility function. Two-step residual-based i.i.d. bootstrap and wild bootstrap procedures are proposed for the robust tests and are shown to be asymptotically valid. Simulations demonstrate the effects of nonstationary volatility on the trend tests and the good behavior of the robust tests in finite samples.

15.
This paper describes a test of the null hypothesis that the first K autocorrelations of a covariance stationary time series are zero in the presence of statistical dependence. The test is based on the Box–Pierce Q statistic with bootstrap-based P-values. The bootstrap is implemented using a double blocks-of-blocks procedure with prewhitening. The finite sample performance of the bootstrap Q test is investigated by simulation. In our experiments, the performance is satisfactory for samples of n = 500. At this sample size, the differences between the empirical and nominal rejection probabilities are essentially eliminated.
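The ingredients can be sketched as follows in Python/NumPy. This simplified version uses a single layer of moving blocks and no prewhitening, so it illustrates the idea of block resampling for the Q statistic rather than the paper's double blocks-of-blocks procedure.

```python
import numpy as np

def box_pierce_q(x, K):
    """Box-Pierce statistic Q = n * sum_{k=1}^{K} rho_k^2."""
    n = len(x)
    xc = x - x.mean()
    denom = xc @ xc
    rho = np.array([(xc[:-k] @ xc[k:]) / denom for k in range(1, K + 1)])
    return n * (rho @ rho)

def block_bootstrap_pvalue(x, K, block_len, n_boot=499, seed=0):
    """Moving-blocks bootstrap p-value for H0: the first K autocorrelations
    are zero. Blocks of consecutive observations are resampled with
    replacement and concatenated, preserving short-range dependence."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    q0 = box_pierce_q(x, K)
    count = 0
    for _ in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        x_star = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        if box_pierce_q(x_star, K) >= q0:
            count += 1
    return (count + 1) / (n_boot + 1)
```

The choice of block length matters in practice, which is part of what the double blocks-of-blocks scheme with prewhitening is designed to mitigate.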

16.
We consider the impact of a break in the innovation volatility process on ratio‐based persistence change tests. We demonstrate that the ratio statistics used do not have pivotal limiting null distributions and that the associated tests display a considerable degree of size distortion with size approaching unity in some cases. In practice, therefore, on the basis of these tests the practitioner will face difficulty in discriminating between persistence change processes and processes which display a simple volatility break. A wild bootstrap‐based solution to the identified inference problem is proposed and is shown to work well in practice.

17.
Statistical Inference in Nonparametric Frontier Models: The State of the Art (cited by 14: 8 self-citations, 6 by others)
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.

18.
Two new methodologies are introduced to improve inference in the evaluation of mutual fund performance against benchmarks. First, the benchmark models are estimated using panel methods with both fund and time effects. Second, the non-normality of individual mutual fund returns is accounted for by using panel bootstrap methods. We also augment the standard benchmark factors with fund-specific characteristics, such as fund size. Using a dataset of UK equity mutual fund returns, we find that fund size has a negative effect on the average fund manager’s benchmark-adjusted performance. Further, when we allow for time effects and the non-normality of fund returns, we find that there is no evidence that even the best performing fund managers can significantly out-perform the augmented benchmarks after fund management charges are taken into account.

19.
A smoothed least squares estimator for threshold regression models (cited by 1: 0 self-citations, 1 by others)
We propose a smoothed least squares estimator of the parameters of a threshold regression model. Our model generalizes that considered in Hansen [2000. Sample splitting and threshold estimation. Econometrica 68, 575–603] to allow the thresholding to depend on a linear index of observed regressors, thus allowing discrete variables to enter. We also do not assume that the threshold effect is vanishingly small. Our estimator is shown to be consistent and asymptotically normal thus facilitating standard inference techniques based on estimated standard errors or standard bootstrap for the slope and threshold parameters.

20.
It is well known that the naive bootstrap yields inconsistent inference in the context of data envelopment analysis (DEA) or free disposal hull (FDH) estimators in nonparametric frontier models. For inference about the efficiency of a single, fixed point, drawing bootstrap pseudo-samples of size m < n provides consistent inference, although coverages are quite sensitive to the choice of subsample size m. We provide a probabilistic framework in which these methods are shown to be valid for statistics composed of functions of DEA or FDH estimators. We examine a simple, data-based rule for selecting m suggested by Politis et al. (Stat Sin 11:1105–1124, 2001), and provide Monte Carlo evidence on the size and power of our tests. Our methods (i) allow for heterogeneity in the inefficiency process, and, unlike previous methods, (ii) do not require multivariate kernel smoothing, and (iii) avoid the need for solutions of intermediate linear programs.
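A minimal sketch of the subsampling idea for an FDH estimator at a fixed point, in Python/NumPy. Everything here is simplified for illustration: a univariate input and output, a plain percentile interval without the rate correction a full subsampling procedure would apply, and a fixed m rather than the data-based selection rule of Politis et al.

```python
import numpy as np

def fdh_input_efficiency(x0, y0, X, Y):
    """Input-oriented FDH efficiency of the point (x0, y0): the smallest
    proportional input contraction achievable using only observed units
    that produce at least output y0."""
    dominating = Y >= y0
    if not dominating.any():
        return np.inf          # no observed unit dominates (x0, y0)
    return (X[dominating] / x0).min()

def m_out_of_n_fdh_ci(x0, y0, X, Y, m, n_boot=999, level=0.95, seed=0):
    """Pseudo-samples of size m < n for the FDH efficiency at a fixed
    point; the naive m = n bootstrap is inconsistent here, as the
    abstract notes."""
    rng = np.random.default_rng(seed)
    n = len(X)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=m)
        stats[b] = fdh_input_efficiency(x0, y0, X[idx], Y[idx])
    lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```

Because each pseudo-sample envelops fewer points than the full sample, every resampled efficiency score is at least as large as the full-sample estimate, which is the boundary effect that breaks the naive bootstrap.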


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号