Similar Literature (20 results)
1.
Estimating structural changes in regression quantiles
This paper considers the estimation of multiple structural changes occurring at unknown dates in one or multiple conditional quantile functions. The analysis covers time series models as well as models with repeated cross-sections. We estimate the break dates and other parameters jointly by minimizing the check function over all permissible break dates. The limiting distribution of the estimator is derived and the coverage property of the resulting confidence interval is assessed via simulations. A procedure to determine the number of breaks is also discussed. Empirical applications to the quarterly US real GDP growth rate and the underage drunk driving data suggest that the method can deliver more informative results than the analysis of the conditional mean function alone.
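As a rough illustration of the estimation idea (a single break at a single quantile level, estimated by minimizing the total check loss over candidate break dates), the following Python sketch uses statsmodels' QuantReg; the function names and toy data are illustrative, not from the paper.

```python
import numpy as np
import statsmodels.api as sm

def check_loss(u, tau):
    """Quantile check (pinball) function rho_tau(u)."""
    return np.sum(u * (tau - (u < 0)))

def estimate_single_break(y, x, tau=0.5, trim=0.15):
    """Grid-search the break date minimizing the total check loss of
    two segment-wise quantile regressions (single break, one quantile)."""
    n = len(y)
    lo, hi = int(n * trim), int(n * (1 - trim))
    best = (np.inf, None)
    for k in range(lo, hi):
        loss = 0.0
        for sl in (slice(0, k), slice(k, n)):
            exog = sm.add_constant(x[sl])
            fit = sm.QuantReg(y[sl], exog).fit(q=tau)
            loss += check_loss(y[sl] - fit.predict(exog), tau)
        if loss < best[0]:
            best = (loss, k)
    return best[1]

# toy example: the conditional median shifts upward after t = 120
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.standard_t(df=5, size=n)
y[120:] += 1.5
print("estimated break date:", estimate_single_break(y, x, tau=0.5))
```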

2.
This paper proposes a new unbiased estimator for the population variance in finite population sample surveys using auxiliary information. This estimator has a smaller mean squared error than the conventional unbiased estimator and the ratio estimator established by Isaki (1983), and it has the same precision as the regression estimator. Furthermore, it is a much more attractive estimator from a computational viewpoint.

3.
Sequential estimation problems for the mean parameter of an exponential distribution have received much attention over the years. Purely sequential and accelerated sequential estimators and their asymptotic second-order characteristics have been laid out in the existing literature, both for minimum risk point estimation and for bounded-length confidence interval estimation of the mean parameter. Having obtained a data set from such sequentially designed experiments, the paper investigates estimation problems for the associated reliability function. Second-order approximations are provided for the bias and mean squared error of the proposed estimator of the reliability function, first under a general setup. An ad hoc bias-corrected version is also introduced. Then, the proposed estimator is investigated further under some specific sequential sampling strategies already available in the literature. In the end, simulation results are presented for comparing the proposed estimators of the reliability function for moderate sample sizes and various sequential sampling strategies.

4.
A typical Business Register (BR) is mainly based on administrative data files provided by organisations that produce them as a by-product of their function. Such files do not necessarily yield a perfect Business Register. A good BR should have the following characteristics: (1) It should reflect the complex structures of businesses with multiple activities, in multiple locations or with multiple legal entities; (2) It should be free of duplication, extraneous or missing units; (3) It should be properly classified in terms of key stratification variables, including size, geography and industry; (4) It should be easily updateable so that it represents the current business picture without lagging too far behind it. In reality, not all these desirable features are fully satisfied, resulting in a universe that has missing units, inaccurate structures, and improper contact information, to name a few defects.
These defects can be compensated for by using sampling and estimation procedures. For example, coverage can be improved using multiple frame techniques, and the sample size can be increased to account for misclassification of units and deaths on the register. At the time of estimation, auxiliary information can be used in a variety of ways. It can be used to impute missing variables, to treat outliers, or to create synthetic variables obtained via modelling. Furthermore, time lags between the birth of units and the time that they are included on the register can be accounted for by appropriately inflating the design-based estimates.

5.
A new semi-parametric expected shortfall (ES) estimation and forecasting framework is proposed. The proposed approach is based on a two-step estimation procedure. The first step involves the estimation of value at risk (VaR) at different quantile levels through a set of quantile time series regressions. Then, the ES is computed as a weighted average of the estimated quantiles. The quantile weighting structure is parsimoniously parameterized by means of a beta weight function whose coefficients are optimized by minimizing a joint VaR and ES loss function of the Fissler–Ziegel class. The properties of the proposed approach are first evaluated with an extensive simulation study using two data generating processes. Two forecasting studies with different out-of-sample sizes are then conducted, one of which focuses on the 2008 Global Financial Crisis period. The proposed models are applied to seven stock market indices, and their forecasting performances are compared to those of a range of parametric, non-parametric, and semi-parametric models, including GARCH, conditional autoregressive expectile (CARE), joint VaR and ES quantile regression models, and a simple average of quantiles. The results of the forecasting experiments provide clear evidence in support of the proposed models.
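A minimal sketch of the second step described above — forming an ES estimate as a Beta-weighted average of VaR (quantile) estimates — is given below. The quantile regressions and the FZ-loss optimization of the Beta coefficients are omitted, and the function name and toy numbers are illustrative only.

```python
import numpy as np
from scipy.stats import beta, norm

def es_from_quantiles(var_estimates, levels, a, b):
    """Weighted average of quantile (VaR) estimates, with weights taken from a
    Beta(a, b) density evaluated at the quantile levels and normalized to sum to one."""
    w = beta.pdf(levels, a, b)
    w /= w.sum()
    return np.sum(w * var_estimates)

# illustration with known N(0, 1) quantiles: the lower 2.5% tail mean (ES)
alpha = 0.025
levels = np.linspace(0.001, alpha, 25)          # a rough grid of tail quantile levels
var_est = norm.ppf(levels)                      # 'estimated' VaR at each level
es_true = -norm.pdf(norm.ppf(alpha)) / alpha    # exact ES of N(0, 1) at the 2.5% level
print("true ES         :", round(es_true, 4))
print("weighted average:", round(es_from_quantiles(var_est, levels, a=1.0, b=1.0), 4))
```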

6.
The purpose and methodology of the Norwegian census have changed considerably during the last 35 years. While the census was previously the main source of socio-demographic information, it is today just one of several sources. After an identification number for each individual was introduced and used in various administrative registers, the dominant role of the census changed dramatically. For some years it has been the policy of Statistics Norway to collaborate with various governmental agencies in order to use administrative registers in statistics production. This policy has been supported politically, and a new Statistics Act has been useful in these efforts. The purpose of this paper is to present the strategy and methodology used to produce statistics in general, and census statistics in particular, based on a combined use of administrative registers and directly collected data. Experiences from Norwegian censuses since 1960 will be presented.

7.
Forecast evaluations aim to choose an accurate forecast for decision making by using loss functions. However, different loss functions often generate different rankings of forecasts, which complicates the task of comparison. In this paper, we develop statistical tests for comparing the performance of forecasts of expectiles and quantiles of a random variable under consistent loss functions. The test statistics are constructed with the extremal consistent loss functions of Ehm et al. (2016). The null hypothesis of the tests is that a benchmark forecast performs at least as well as a competing one under all extremal consistent loss functions. It can be shown that if such a null holds, the benchmark will also perform at least as well as the competitor under all consistent loss functions. Thus, under the null, the conclusion that the competitor does not outperform the benchmark will not be altered when different consistent loss functions are used. We establish asymptotic properties of the proposed test statistics and propose to use the re-centered bootstrap to construct their empirical distributions. Through simulations, we show that the proposed test statistics perform reasonably well. We then apply the proposed method to evaluations of several different forecast methods.
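The following sketch illustrates the building block used here: the elementary (extremal) consistent score of Ehm et al. (2016) for an alpha-quantile, averaged over a grid of thresholds to compare two forecasts in a Murphy-diagram fashion. It is not the paper's test statistic or bootstrap; the names and data are illustrative.

```python
import numpy as np
from scipy.stats import norm

def elementary_quantile_score(x, y, theta, alpha):
    """Elementary (extremal) consistent score for an alpha-quantile forecast x and
    outcome y at threshold theta: (1{y < x} - alpha) * (1{theta < x} - 1{theta < y})."""
    return ((y < x).astype(float) - alpha) * ((theta < x).astype(float) - (theta < y).astype(float))

rng = np.random.default_rng(1)
alpha = 0.9
y = rng.normal(size=20000)
f_good = np.full_like(y, norm.ppf(alpha))   # the true 90% quantile of N(0, 1)
f_bad = np.zeros_like(y)                    # the median, misused as a 90% quantile forecast

thetas = np.linspace(-3, 3, 61)
diff = [elementary_quantile_score(f_good, y, t, alpha).mean()
        - elementary_quantile_score(f_bad, y, t, alpha).mean() for t in thetas]
# the correct forecast should not be beaten under any extremal loss, so the
# differences are non-positive for every theta (up to sampling noise)
print("max score difference over thetas:", round(max(diff), 4))
```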

8.
Given a random variable X with finite mean, for each 0 < p < 1, a new sharp bound is found on the distance between a p-quantile of X and its mean in terms of the central absolute first moment of X. The new bounds strengthen the fact that the mean of X is within one standard deviation of any of its medians, as well as a recent quantile generalization of this fact by O'Cinneide.
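A quick numerical illustration of the classical facts referred to above (the distance between mean and median is at most the central absolute first moment, which in turn is at most one standard deviation) — not of the paper's new sharp p-quantile bound — might look as follows.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200000)   # a strongly skewed distribution

mean, median = x.mean(), np.median(x)
mad = np.abs(x - mean).mean()        # central absolute first moment E|X - mu|
sd = x.std()

# classical chain of inequalities: |median - mean| <= E|X - mean| <= sd
print(f"|median - mean| = {abs(median - mean):.4f}")
print(f"E|X - mean|     = {mad:.4f}")
print(f"std deviation   = {sd:.4f}")
```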

9.
Most studies in the structural change literature focus solely on the conditional mean, while under various circumstances, structural change in the conditional distribution or in conditional quantiles is of key importance. This paper proposes several tests for structural change in regression quantiles. Two types of statistics are considered, namely, a fluctuation type statistic based on the subgradient and a Wald type statistic, based on comparing parameter estimates obtained from different subsamples. The former requires estimating the model under the null hypothesis, and the latter involves estimation under the alternative hypothesis. The tests proposed can be used to test for structural change occurring in a pre-specified quantile, or across quantiles, which can be viewed as testing for change in the conditional distribution with a linear specification of the conditional quantile function. Both single and multiple structural changes are considered. We derive the limiting distributions under the null hypothesis, and show they are nuisance parameter free and can be easily simulated. A simulation study is conducted to assess the size and power in finite samples.
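As a loose illustration of the Wald-type ingredient only — comparing quantile-regression coefficients estimated on two subsamples — the sketch below computes a Wald-form statistic at a single candidate break date. The fluctuation statistic, the handling of unknown break dates, and the paper's limiting distributions and critical values are not reproduced; all names and data are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def wald_stat_at_break(y, x, k, tau=0.5):
    """Wald-type comparison of quantile-regression coefficients estimated on the
    subsamples before and after a candidate break date k (naive covariance sum)."""
    fits = [sm.QuantReg(y[sl], sm.add_constant(x[sl])).fit(q=tau)
            for sl in (slice(0, k), slice(k, len(y)))]
    d = np.asarray(fits[0].params) - np.asarray(fits[1].params)
    V = np.asarray(fits[0].cov_params()) + np.asarray(fits[1].cov_params())
    return float(d @ np.linalg.solve(V, d))

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)
y[150:] += 1.0                      # level shift in the conditional median
print("Wald-type statistic at k=150:", round(wald_stat_at_break(y, x, 150), 2))
```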

10.
At Statistics Norway, administrative data have been used extensively to improve the quality of survey data. Various techniques have been used to reduce sampling variance and/or the effects of non-response. In the present article, some of the most commonly used methods are presented and, based on empirical rather than theoretical evaluations, we give our conclusions concerning their potential and limitations.

11.
In this paper, an estimator of finite population variance proposed by Isaki (1983) is studied under two different situations of random non-response suggested by Tracy and Osahan (1994). A distribution is proposed for the number of sampling units on which information could not be obtained due to random non-response. Estimators for the mean square errors of the proposed strategies are also suggested. This paper was written while both authors were members of the Department of Econometrics, Monash University, Clayton 3168, Australia, and was presented at SISC-1996, Sydney, Australia. The opinions and results discussed in this paper are those of the authors and not necessarily of their institutes.

12.
A method of Chow (1983) and a method of Dagli and Taylor (1982) for solving and estimating linear simultaneous equations under rational expectations are compared. The latter solution is shown to be a special case of the former in the sense of imposing a set of restrictions on the parameters of the former solution. Statistical methods to test the restrictions implicit in the latter solution are suggested. An illustrative model is provided to demonstrate the two methods, with the Dagli–Taylor method found to give inconsistent estimates when the restrictions are not met.

13.
Andrieu et al. (2010) prove that Markov chain Monte Carlo samplers still converge to the correct posterior distribution of the model parameters when the likelihood estimated by the particle filter (with a finite number of particles) is used instead of the exact likelihood. A critical issue for performance is the choice of the number of particles. We add the following contributions. First, we provide analytically derived, practical guidelines on the optimal number of particles to use. Second, we show that a fully adapted auxiliary particle filter is unbiased and can drastically decrease computing time compared to a standard particle filter. Third, we introduce a new estimator of the likelihood based on the output of the auxiliary particle filter and use the framework of Del Moral (2004) to provide a direct proof of the unbiasedness of the estimator. Fourth, we show that the results in the article apply more generally to Markov chain Monte Carlo sampling schemes with the likelihood estimated in an unbiased manner.
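For context, a minimal bootstrap particle filter (not the fully adapted auxiliary filter discussed above) that returns the log of the unbiased likelihood estimate a particle-marginal MCMC sampler would plug in might look like this; the state-space model and all names are illustrative.

```python
import numpy as np

def particle_filter_loglik(y, phi, sigma_x, sigma_y, n_particles=500, seed=0):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + N(0, sigma_x^2) and
    y_t = x_t + N(0, sigma_y^2). Returns the log of the likelihood estimate,
    whose exponential is unbiased for the true likelihood."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x / np.sqrt(1 - phi**2), size=n_particles)  # stationary init
    loglik = 0.0
    for yt in y:
        x = phi * x + rng.normal(0.0, sigma_x, size=n_particles)          # propagate
        logw = -0.5 * np.log(2 * np.pi * sigma_y**2) - 0.5 * ((yt - x) / sigma_y) ** 2
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                                     # log of average weight
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]    # multinomial resampling
    return loglik

# simulate data and compare a few likelihood estimates; their spread shrinks as
# the number of particles grows, which is the tuning question discussed above
rng = np.random.default_rng(1)
T, phi, sx, sy = 100, 0.9, 0.5, 1.0
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = phi * xs[t - 1] + rng.normal(0, sx)
obs = xs + rng.normal(0, sy, size=T)
print([round(particle_filter_loglik(obs, phi, sx, sy, seed=s), 2) for s in range(3)])
```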

14.
Modifying data and information system components may introduce new errors and degrade the reliability of the system. Reliability can be efficiently regained with reliability-centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model–view–controller paradigm. Simulations based on data collected from an online system of a large financial institution are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.

15.
The present investigation considers a general set-up for inference from survey data that covers variance estimation for estimators of totals and distribution functions, using known first- and second-order moments of auxiliary information at the estimation stage. The traditional linear regression estimator of the population total due to Hansen et al. (Sample Survey Methods and Theory, vols. 1 & 2, Wiley, New York, 1953) is shown to be unique in its class of estimators and, following Singh (Advanced Sampling Theory with Applications: How Michael Selected Amy, vols. 1 & 2, Kluwer, The Netherlands, 2003), celebrates its golden jubilee in 2003 for its outstanding performance in the literature. This paper is designed to repair the methodology of Rao (J Off Stat 10(2):153–165, 1994) and hence that of Singh (Ann Inst Stat Math 53(2):404–417, 2001). Although the theoretical results alone demonstrate the superiority of the proposed technique, a small-scale simulation study is also provided to show the performance of the proposed estimators relative to existing estimators in the literature.
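A minimal sketch of the classical linear regression estimator of a population total under simple random sampling with a known auxiliary total — the Hansen et al. (1953) estimator mentioned above, not the paper's proposed modifications — is shown below; names and data are illustrative.

```python
import numpy as np

def regression_estimator_total(y_s, x_s, N, X_total):
    """Classical linear regression estimator of the population total of y under simple
    random sampling, using one auxiliary variable x with known population total X_total."""
    b = np.cov(x_s, y_s, ddof=1)[0, 1] / np.var(x_s, ddof=1)   # slope estimated from the sample
    return N * y_s.mean() + b * (X_total - N * x_s.mean())

rng = np.random.default_rng(7)
N = 10000
x = rng.gamma(2.0, 5.0, size=N)               # auxiliary variable, known for the population
y = 3.0 + 2.0 * x + rng.normal(0, 4, size=N)  # study variable, observed only in the sample
idx = rng.choice(N, size=400, replace=False)  # simple random sample without replacement

print("true total         :", round(y.sum()))
print("expansion estimate :", round(N * y[idx].mean()))
print("regression estimate:", round(regression_estimator_total(y[idx], x[idx], N, x.sum())))
```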

16.
The gamma distribution function can be expressed in terms of the Normal distribution and density functions with sufficient accuracy for most practical purposes.
The distribution function for the density $x^{\Lambda-1}e^{-x/\mu}/(\mu^{\Lambda}\Gamma(\Lambda))$ on $0<x<\infty$ is given approximately by
$$[1-R(\Lambda)]\left\{\left(1+\frac{1}{12\Lambda}\right)\Phi(z)+\left[1-\frac{z}{4\Lambda^{1/2}}+\frac{2(z^{2}+2)}{45\Lambda}\right]\frac{\varphi(z)}{3\Lambda^{1/2}}\right\},$$
where $\Phi(z)\cong 1/\bigl[1+e^{-2z(\sqrt{2/\pi}+z^{2}/28)}\bigr]$ and $\varphi(z)=e^{-z^{2}/2}/\sqrt{2\pi}$ are the Normal distribution and density functions, $y$ is the appropriate root of $y-y^{2}/6+y^{3}/36-y^{4}/270=\ln\bigl(x/(\Lambda\mu)\bigr)$, $z=\Lambda^{1/2}y$, and $R(\Lambda)$ is the remainder term in Stirling's approximation for $\ln\Gamma(\Lambda)$.
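Assuming the reconstruction of the formula given above, a small numerical check against the exact (regularized incomplete gamma) distribution function could be coded as follows; the function name and test points are illustrative.

```python
import numpy as np
from scipy.special import gammaln, gammainc
from scipy.stats import norm

def gamma_cdf_normal_approx(x, shape, scale=1.0):
    """Approximate gamma CDF via Normal distribution/density functions, following the
    formula as reconstructed above (a hedged reading of the abstract, not a verified one)."""
    lam, mu = shape, scale
    u = np.log(x / (lam * mu))
    # 'appropriate root' of y - y^2/6 + y^3/36 - y^4/270 = log(x/(lam*mu)), via Newton from u
    y = u
    for _ in range(50):
        g = y - y**2 / 6 + y**3 / 36 - y**4 / 270 - u
        dg = 1 - y / 3 + y**2 / 12 - 2 * y**3 / 135
        y -= g / dg
    z = np.sqrt(lam) * y
    # remainder term in Stirling's approximation for log Gamma(lam)
    R = gammaln(lam) - ((lam - 0.5) * np.log(lam) - lam + 0.5 * np.log(2 * np.pi))
    Phi, phi = norm.cdf(z), norm.pdf(z)
    bracket = 1 - z / (4 * np.sqrt(lam)) + 2 * (z**2 + 2) / (45 * lam)
    return (1 - R) * ((1 + 1 / (12 * lam)) * Phi + bracket * phi / (3 * np.sqrt(lam)))

for lam in (1.0, 2.0, 5.0):
    for x in (0.5 * lam, lam, 2.0 * lam):      # scale mu = 1, so the mean is lam
        approx = gamma_cdf_normal_approx(x, lam)
        exact = gammainc(lam, x)               # regularized lower incomplete gamma
        print(f"shape={lam:3.0f}  x={x:5.1f}  approx={approx:.5f}  exact={exact:.5f}")
```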

17.
The Wooldridge method is based on a simple and novel strategy to deal with the initial values problem in nonlinear dynamic random-effects panel data models. This characteristic makes the method very attractive in empirical applications. However, its finite sample performance and robustness are not yet fully known. In this paper we investigate the performance and robustness of this method in comparison with an ideal case in which the initial values are known constants, a worst-case scenario based on an exogenous initial values assumption, and Heckman's reduced-form approximation method, which is widely used in the literature. The dynamic random-effects probit and Tobit (type I) models are used as working examples. Various designs of the Monte Carlo experiments and two further empirical illustrations are provided. The results suggest that the Wooldridge method works very well only for panels of moderately long duration (longer than 5–8 periods). Heckman's reduced-form approximation is suggested for short panels (shorter than 5 periods). It is also found that all the methods tend to perform equally well for panels of long duration (longer than 15–20 periods). Copyright © 2011 John Wiley & Sons, Ltd.

18.
An asymptotically distribution-free confidence interval for the difference of the p-th quantiles of two distributions was presented by Albers and Lohnberg (1984). A modification of their procedure is presented for use when the sample sizes are specified.

19.
The use of auxiliary variables to improve the efficiency of estimators is a well-known strategy in survey sampling. Typically, the auxiliary information used consists of totals of appropriate measurements that are known exactly from registers or administrative sources. Increasingly, however, these totals are estimated from surveys and are then used to calibrate estimators and improve their efficiency. We consider different types of survey structures and develop design-based estimators that are calibrated on known as well as estimated totals of auxiliary variables. The optimality properties of these estimators are studied. These estimators can be viewed as extensions of the Montanari generalised regression estimator adapted to more complex situations. The paper studies interesting special cases to develop insights and guidelines to properly manage the survey-estimated auxiliary totals.
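As background for the known-totals case, the sketch below performs standard linear (chi-square distance) calibration of design weights to known auxiliary totals, which reproduces a GREG-type estimator. It is not the paper's optimal estimator for survey-estimated totals, and all names and data are illustrative.

```python
import numpy as np

def linear_calibration_weights(d, X, totals):
    """Linearly calibrate design weights d so that the weighted sample totals of the
    auxiliary variables X (n x p) match the known population totals exactly
    (chi-square distance calibration, yielding a GREG-type estimator)."""
    T_hat = X.T @ d
    M = X.T @ (d[:, None] * X)
    lam = np.linalg.solve(M, totals - T_hat)
    return d * (1.0 + X @ lam)

rng = np.random.default_rng(11)
N, n = 5000, 250
x = rng.gamma(2.0, 3.0, size=N)
y = 1.0 + 1.5 * x + rng.normal(0, 2, size=N)
idx = rng.choice(N, size=n, replace=False)

d = np.full(n, N / n)                                    # SRS design weights
X = np.column_stack([np.ones(n), x[idx]])                # calibrate on the count and the x-total
w = linear_calibration_weights(d, X, np.array([N, x.sum()]))

print("calibrated x-total:", round(w @ x[idx]), " known:", round(x.sum()))
print("GREG total of y   :", round(w @ y[idx]), " true:", round(y.sum()))
```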

20.
One- and two-factor stochastic volatility models are assessed over three sets of stock returns data: S&P 500, DJIA, and Nasdaq. Estimation is done by simulated maximum likelihood using techniques that are computationally efficient, robust, straightforward to implement, and easy to adapt to different models. The models are evaluated using standard, easily interpretable time-series tools. The results are broadly similar across the three data sets. The tests provide no evidence that even the simple single-factor models are unable to capture the dynamics of volatility adequately; the problem is to get the shape of the conditional returns distribution right. None of the models come close to matching the tails of this distribution. Including a second factor provides only a relatively small improvement over the single-factor models. Fitting this aspect of the data is important for option pricing and risk management.
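To make the model concrete, a standard one-factor stochastic volatility specification (log-volatility following a Gaussian AR(1)) can be simulated as below; the parameter values are illustrative, and this sketch does not implement the simulated maximum likelihood estimation used in the paper.

```python
import numpy as np

def simulate_sv_one_factor(T, mu=-1.0, phi=0.98, sigma_eta=0.2, seed=0):
    """Simulate a standard one-factor stochastic volatility model:
    h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,  r_t = exp(h_t / 2)*eps_t."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.normal()   # stationary start
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    r = np.exp(h / 2) * rng.normal(size=T)
    return r, h

r, h = simulate_sv_one_factor(2500)
# fat tails relative to the normal are the 'shape of the returns distribution' issue above
print("excess kurtosis of returns:", round(float(((r - r.mean())**4).mean() / r.var()**2 - 3), 2))
```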
