Similar Articles
1.
Interval estimation of the scale parameter and a joint confidence region for the parameters of the two-parameter exponential distribution under Type II progressive censoring are proposed, together with a simulation study of the performance of all proposed pivotal quantities. The criteria of minimum confidence length and minimum confidence area are used to obtain the optimal estimates. Predictive intervals for a future observation and for future interarrival times based on the Type II progressively censored sample are also provided. A biometrical example illustrates the proposed methods.
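A minimal sketch of the pivotal-quantity approach for the scale parameter, assuming the standard normalized-spacings result 2[Σ(R_i+1)X_i − nX_1]/σ ~ χ²(2m−2) for a progressively Type-II censored exponential sample (the paper's specific pivots and optimality criteria may differ):

```python
import numpy as np
from scipy.stats import chi2

def exp_scale_ci(x, R, alpha=0.05):
    """Equal-tail CI for the exponential scale sigma from a progressively
    Type-II censored sample.

    x : ordered observed failure times x_1 < ... < x_m
    R : removal scheme (R_1, ..., R_m); total sample size n = m + sum(R)
    Pivot: 2*W/sigma ~ chi2(2m - 2), with W = sum((R_i + 1)*x_i) - n*x_1.
    """
    x, R = np.asarray(x, float), np.asarray(R, int)
    m, n = len(x), len(x) + R.sum()
    W = np.sum((R + 1) * x) - n * x[0]
    lo = 2 * W / chi2.ppf(1 - alpha / 2, 2 * m - 2)
    hi = 2 * W / chi2.ppf(alpha / 2, 2 * m - 2)
    return lo, hi

# Hypothetical data: m = 8 failures observed out of n = 15 units,
# removing one surviving unit at each of the first seven failures.
x = [0.3, 0.7, 1.1, 1.6, 2.0, 2.9, 3.4, 4.2]
R = [1, 1, 1, 1, 1, 1, 1, 0]
print(exp_scale_ci(x, R))
```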

2.
For the slope parameter of the classical errors-in-variables (EIV) model, existing interval estimators of finite length have confidence level equal to zero because of the Gleser–Hwang effect. Especially when the reliability ratio is low and the sample size is small, the effect is so serious that existing confidence intervals have very liberal coverage and unacceptable length. In this paper, we obtain two new fiducial intervals for the slope. One is based on a fiducial generalized pivotal quantity, and we prove that this interval has the correct asymptotic coverage; the other is based on the method of the generalized fiducial distribution. We also construct these two fiducial intervals for the other parameters of interest of the classical EIV model and extend them to a hybrid model. We then compare the two fiducial intervals with the existing intervals in terms of empirical coverage and average length; simulation results show that the two proposed intervals have better frequentist performance. Finally, a real data example illustrates our approaches.
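The fiducial generalized pivotal quantity (FGPQ) mechanism can be illustrated by Monte Carlo on a much simpler parameter than the EIV slope — a normal scale parameter, where the FGPQ interval coincides with the classical chi-square interval. A sketch (function name and hyperparameters are illustrative):

```python
import numpy as np

def fgpq_ci_sigma(x, alpha=0.05, B=100_000, seed=0):
    """FGPQ confidence interval for a normal scale parameter sigma.

    Since (n-1)*S^2/sigma^2 ~ chi2(n-1), substituting the observed s^2 and
    a fresh pivot draw V gives the FGPQ  R = sqrt((n-1)*s2_obs / V).
    Percentiles of the Monte Carlo draws of R form the interval.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    s2_obs = np.var(x, ddof=1)
    V = rng.chisquare(n - 1, size=B)
    R = np.sqrt((n - 1) * s2_obs / V)
    return np.quantile(R, [alpha / 2, 1 - alpha / 2])

print(fgpq_ci_sigma(np.random.default_rng(1).normal(0, 2, size=30)))
```

For the EIV slope itself, the same recipe applies but the pivotal representation is considerably more involved, which is precisely the paper's contribution.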

3.
In this paper, using the known expressions for the best linear unbiased estimators (BLUEs) of the location and scale parameters of the Laplace distribution based on a progressively Type-II right censored sample, we derive the exact moment generating function (MGF) of a linear combination of standard Laplace order statistics. From this MGF we obtain the exact density of the linear combination, which is then used to develop exact marginal confidence intervals (CIs) for the location and scale parameters through pivotal quantities. Next, we derive the exact density of the BLUE-based quantile estimator and use it to develop exact CIs for a population quantile. We also briefly describe how exact CIs based on the BLUEs can be constructed for the reliability and cumulative hazard functions. A Monte Carlo simulation study evaluates the performance of the developed inferential results, and an example illustrates the point and interval estimation methods.

4.
We consider exact procedures for testing the equality of the means (location parameters) of two Laplace populations with equal scale parameters, based on corresponding independent random samples. The test statistics are built from either the maximum likelihood estimators or the best linear unbiased estimators of the Laplace parameters. By conditioning on certain quantities, we express their exact distributions as mixtures of ratios of linear combinations of standard exponential random variables, which allows us to find their exact quantiles and tabulate them for several sample sizes. The powers of the tests are compared, numerically or by simulation. Exact confidence intervals for the difference of the means corresponding to these tests are also constructed, and the procedures are illustrated on a real data example.
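A sketch of the underlying idea: because the MLE-based statistic is location-scale invariant, its exact null distribution is parameter-free, so its quantiles can be approximated by simulation rather than via the paper's exact mixture representation. The pooled-MLE statistic below is an illustrative choice:

```python
import numpy as np

def laplace_mle(x):
    """Laplace MLEs: location = median, scale = mean |deviation| from it."""
    m = np.median(x)
    return m, np.mean(np.abs(x - m))

def test_equal_means(x, y, B=20_000, seed=0):
    """Monte Carlo calibration of T = (m_x - m_y) / b_pooled.  Under H0
    (common location, equal scales) T is pivotal, so null quantiles can be
    simulated from standard Laplace samples of the same sizes."""
    n, m = len(x), len(y)
    mx, bx = laplace_mle(x)
    my, by = laplace_mle(y)
    T_obs = (mx - my) / ((n * bx + m * by) / (n + m))
    rng = np.random.default_rng(seed)
    T_null = np.empty(B)
    for i in range(B):
        xs, ys = rng.laplace(size=n), rng.laplace(size=m)
        m1, b1 = laplace_mle(xs)
        m2, b2 = laplace_mle(ys)
        T_null[i] = (m1 - m2) / ((n * b1 + m * b2) / (n + m))
    pval = np.mean(np.abs(T_null) >= abs(T_obs))  # two-sided p-value
    return T_obs, pval
```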

5.
In this paper, using the pivotal quantity method, new shortest-length confidence intervals and uniformly minimum variance unbiased (UMVU) estimators are constructed when two independent random samples are available from families of distributions involving truncation parameters. In the one-sample case we also give, for some uniform distributions, confidence intervals that are the shortest among all known confidence intervals.
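For the canonical truncation-parameter example, U(0, θ), the shortest pivot-based interval has a closed form; a minimal sketch:

```python
import numpy as np

def uniform_theta_shortest_ci(x, alpha=0.05):
    """Shortest pivot-based CI for theta from X_1,...,X_n ~ U(0, theta).

    T = max(X)/theta has cdf t^n on (0, 1).  Among intervals [a, b] with
    b^n - a^n = 1 - alpha, the length of [max/b, max/a] is minimized at
    b = 1, a = alpha**(1/n), giving [max(x), max(x) * alpha**(-1/n)].
    """
    x = np.asarray(x, float)
    n, mx = len(x), x.max()
    return mx, mx * alpha ** (-1.0 / n)

print(uniform_theta_shortest_ci(np.random.default_rng(0).uniform(0, 5, 20)))
```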

6.
Quantile estimation is important for a wide range of applications. While point estimates based on one or two order statistics are common, constructing confidence intervals around them is a more difficult problem. This paper has two goals. First, it surveys the numerous distribution-free methods for constructing approximate confidence intervals for quantiles; these techniques fall roughly into four categories: pivotal quantities, resampling, interpolation, and empirical likelihood. Second, it extends a pivotal-quantity method that has received limited attention. Comprehensive simulation studies compare performance across methods. The proposed method is simple and performs similarly to linear interpolation methods and a smoothed empirical likelihood method; although its intervals are slightly wider, it can be computed for more extreme quantiles even when there are few observations.
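One of the surveyed constructions, the classical distribution-free interval from order statistics, rests on the binomial distribution of the number of observations below the quantile; a sketch:

```python
import numpy as np
from scipy.stats import binom

def quantile_ci(x, p, alpha=0.05):
    """Distribution-free CI for the p-th quantile from order statistics.

    For continuous data, P(X_(l) <= xi_p < X_(u)) = F(u-1) - F(l-1) with
    F the Binomial(n, p) cdf.  Search for the pair (l, u) with the
    smallest index span whose coverage reaches 1 - alpha.
    """
    x = np.sort(np.asarray(x, float))
    n = len(x)
    best = None
    for l in range(1, n + 1):
        for u in range(l, n + 1):
            cover = binom.cdf(u - 1, n, p) - binom.cdf(l - 1, n, p)
            if cover >= 1 - alpha:
                if best is None or (u - l) < (best[1] - best[0]):
                    best = (l, u)
                break  # the first valid u for this l is the smallest
    if best is None:
        raise ValueError("no distribution-free interval for this n, p, alpha")
    l, u = best
    return x[l - 1], x[u - 1]
```

For extreme quantiles and small n the search fails, which is exactly the regime the paper's extended pivotal method targets.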

7.
Youn-Min Chou, D. B. Owen. Metrika, 1989, 36(1): 279–290
Simultaneous prediction intervals for observations in l future samples from a two-parameter exponential distribution are considered. The prediction limits depend on a previously available complete or Type II censored sample from the same distribution. An equation from which the prediction factor is determined is derived, and exact prediction limits are illustrated with a numerical example.

8.
Bootstrap prediction intervals for SETAR models
This paper considers four methods for obtaining bootstrap prediction intervals (BPIs) for the self-exciting threshold autoregressive (SETAR) model. Method 1 ignores the sampling variability of the threshold parameter estimator. Method 2 corrects the finite sample biases of the autoregressive coefficient estimators before constructing BPIs. Method 3 takes into account the sampling variability of both the autoregressive coefficient estimators and the threshold parameter estimator. Method 4 resamples the residuals in each regime separately. A Monte Carlo experiment shows that (1) accounting for the sampling variability of the threshold parameter estimator is necessary, despite its super-consistency; (2) correcting the small-sample biases of the autoregressive parameter estimators improves the small-sample properties of bootstrap prediction intervals under certain circumstances; and (3) the two-sample bootstrap can improve the long-term forecasts when the error terms are regime-dependent.
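A minimal sketch of the simplest variant (close in spirit to Method 1: the estimated threshold and coefficients are held fixed, and only the residuals are resampled). The SETAR(2;1,1) specification, threshold grid, and trimming fraction are illustrative assumptions:

```python
import numpy as np

def fit_setar(x, d=1, trim=0.15):
    """Conditional least-squares fit of a two-regime SETAR(2;1,1) model,
    grid-searching the threshold over interior quantiles of x[t-d]."""
    x = np.asarray(x, float)
    y, lag = x[d:], x[:-d]
    grid = np.quantile(lag, np.linspace(trim, 1 - trim, 50))
    best = None
    for r in grid:
        lo = lag <= r
        if lo.sum() < 5 or (~lo).sum() < 5:
            continue
        ssr, coef = 0.0, {}
        for key, mask in (("lo", lo), ("hi", ~lo)):
            Z = np.column_stack([np.ones(mask.sum()), lag[mask]])
            b = np.linalg.lstsq(Z, y[mask], rcond=None)[0]
            coef[key] = b
            ssr += np.sum((y[mask] - Z @ b) ** 2)
        if best is None or ssr < best[0]:
            best = (ssr, r, coef)
    _, r, coef = best
    fitted = np.where(lag <= r,
                      coef["lo"][0] + coef["lo"][1] * lag,
                      coef["hi"][0] + coef["hi"][1] * lag)
    resid = y - fitted
    return r, coef, resid - resid.mean()  # centered residuals

def setar_bootstrap_pi(x, h=5, B=999, alpha=0.10, seed=0):
    """Percentile BPIs for horizons 1..h by simulating bootstrap paths
    from the fitted SETAR model with resampled residuals."""
    x = np.asarray(x, float)
    rng = np.random.default_rng(seed)
    r, coef, resid = fit_setar(x)
    paths = np.empty((B, h))
    for i in range(B):
        xt = x[-1]
        for j in range(h):
            a = coef["lo"] if xt <= r else coef["hi"]
            xt = a[0] + a[1] * xt + rng.choice(resid)
            paths[i, j] = xt
    return np.quantile(paths, [alpha / 2, 1 - alpha / 2], axis=0)
```

Methods 2–4 would modify this template: bias-correct the coefficient estimates, re-estimate the threshold on each bootstrap sample, or resample residuals within each regime separately.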

9.
Semiparametric quantile regression is employed to flexibly estimate sales response for frequently purchased consumer goods. Using retail store‐level data, we compare the performance of models with and without monotonic smoothing for fit and prediction accuracy. We find that (a) flexible models with monotonicity constraints imposed on price effects dominate both in‐sample and out‐of‐sample comparisons while being robust even at the boundaries of the price distribution when data is sparse; (b) quantile‐based confidence intervals are much more accurate compared to least‐squares‐based intervals; (c) specifications reflecting that managers may not have exact knowledge about future competitive pricing perform extremely well.
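Plain linear quantile regression — the building block here, without the paper's semiparametric smoothing or monotonicity constraints — reduces to a linear program; a sketch:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_reg(X, y, tau):
    """Linear quantile regression via its LP formulation:
    minimize tau*sum(u) + (1-tau)*sum(v)  s.t.  X @ beta + u - v = y,
    u, v >= 0.  Variables are stacked as [beta, u, v]; beta is free.
    At the optimum, u and v are the positive/negative parts of the
    residuals, so the objective equals the check (pinball) loss."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[:k]
```

Monotonic smoothing would replace the linear index X @ beta with a constrained spline basis, keeping the same check-loss objective.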

10.
Journal of Econometrics, 2002, 109(2): 275–303
This article considers tests for parameter stability over time in general econometric models, possibly nonlinear-in-variables. Existing test statistics are commonly not asymptotically pivotal under nonstandard conditions. In such cases, the external bootstrap tests proposed in this paper are appealing from a practical viewpoint. We propose to use bootstrap versions of the asymptotic critical values based on a first-order asymptotic expansion of the test statistics under the null hypothesis, which consists of a linear transformation of the unobserved “innovations” partial sum process. The nature of these transformations under nonstandard conditions is discussed for the main testing principles. Also, we investigate the small sample performance of the proposed bootstrap tests by means of a small Monte Carlo experiment.

11.
Maximum likelihood is used to estimate a generalized autoregressive conditional heteroskedastic (GARCH) process where the residuals have a conditional stable distribution (GARCH-stable). The scale parameter is modelled such that a GARCH process with normally distributed residuals is a special case. The usual methods of estimating the parameters of the stable distribution assume constant scale and will underestimate the characteristic exponent when the scale parameter follows a GARCH process. The parameters of the GARCH-stable model are estimated with daily foreign currency returns. Estimates of characteristic exponents are higher with the GARCH-stable than when independence is assumed. Monte Carlo hypothesis testing procedures, however, reject our GARCH-stable model at the 1% significance level in four out of five cases.
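A sketch of the Gaussian special case mentioned in the abstract — GARCH(1,1) maximum likelihood with normal residuals. The GARCH-stable fit would replace the normal density in the likelihood with a numerically evaluated stable density; starting values and bounds below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(theta, r):
    """Gaussian GARCH(1,1) negative log-likelihood.
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}, h_0 = sample variance."""
    omega, alpha, beta = theta
    n = len(r)
    h = np.empty(n)
    h[0] = r.var()
    for t in range(1, n):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(h) + r ** 2 / h)

def fit_garch11(r):
    """L-BFGS-B fit; covariance stationarity (alpha + beta < 1) is not
    enforced here and should be checked on the estimates."""
    r = np.asarray(r, float)
    x0 = np.array([0.1 * r.var(), 0.05, 0.90])
    bnds = [(1e-8, None), (0.0, 1.0), (0.0, 1.0)]
    res = minimize(garch11_negloglik, x0, args=(r,), bounds=bnds,
                   method="L-BFGS-B")
    return res.x  # omega, alpha, beta
```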

12.
M. N. Goria. Metrika, 1980, 27(1): 189–194
Here we propose two tests for the bivariate normal population under the assumption that the ratio of its variances is known. The first test (UMPU) is derived using the Neyman–Pearson lemma, whereas the second is obtained by testing the scale parameter of a Cauchy distribution. The powers of the first and second tests are compared with a well-known test based on the sample correlation coefficient, for small and large samples respectively.

13.
Here we consider record data from the two-parameter bathtub-shaped distribution. First, we develop simplified forms for the single moments, variances, and covariances of records; these distributional properties are useful for obtaining the best linear unbiased estimators of the location and scale parameters that can be included in the model. Estimation of the unknown shape parameters and prediction of future unobserved records based on observed ones are then discussed, adopting both frequentist and Bayesian analyses. The likelihood method, a moment-based method, bootstrap methods, and Bayesian sampling techniques are applied to the inference problems, and point predictors and credible intervals for future record values based on an informative set of records are developed. Monte Carlo simulations compare the methods, and one real data set is analyzed for illustration.

14.
We consider conditional moment models under semi-strong identification. Identification strength is directly defined through the conditional moments that flatten as the sample size increases. Our new minimum distance estimator is consistent, asymptotically normal, robust to semi-strong identification, and does not rely on the choice of a user-chosen parameter, such as the number of instruments or some smoothing parameter. Heteroskedasticity-robust inference is possible through Wald testing without prior knowledge of the identification pattern. Simulations show that our estimator is competitive with alternative estimators based on many instruments, being well-centered with better coverage rates for confidence intervals.

15.
The relative cost of prediction error in the economic order quantity (EOQ) model resulting from incorrect prediction of the annual demand, purchase order cost, or carrying cost is described as a function solely of the relative error in those parameter values, and is shown to be independent of their absolute magnitudes. Over a certain range this function has an important contraction property: the relative cost of prediction error is (significantly) less than the relative error in the parameter value.
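The contraction property is easy to exhibit; a sketch assuming the classical EOQ cost function TC(Q) = DS/Q + HQ/2 and an error in the demand estimate:

```python
import math

def eoq(D, S, H):
    """Classical EOQ: annual demand D, order cost S, unit carrying cost H."""
    return math.sqrt(2.0 * D * S / H)

def cost_penalty(e):
    """Relative increase in total relevant cost when annual demand is
    predicted with relative error e (D_hat = D * (1 + e)).  Then
    Q_hat = Q* * sqrt(1 + e), and TC(k*Q*)/TC(Q*) = (k + 1/k)/2 with
    k = sqrt(1 + e): the penalty depends on e alone, not on D, S, or H."""
    k = math.sqrt(1.0 + e)
    return 0.5 * (k + 1.0 / k) - 1.0

# Contraction: a 20% demand-prediction error costs well under 20%.
for e in (0.10, 0.20, 0.50):
    print(f"demand error {e:+.0%} -> cost penalty {cost_penalty(e):.2%}")
```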

16.
Ordered data arise naturally in many fields of statistical practice, and some sample values are often unknown or disregarded for various reasons. On the basis of some sample quantiles from the Rayleigh distribution, the problems of estimating the Rayleigh parameter, hazard rate, and reliability function, and of predicting future observations, are addressed from a Bayesian perspective. The construction of β-content and β-expectation Bayes tolerance limits is also tackled. Under squared-error loss, Bayes estimators and predictors are derived analytically; exact tolerance limits follow from solving simple nonlinear equations. Highest posterior density estimators and credibility intervals, as well as Bayes estimators and predictors under linear loss, can easily be computed iteratively.
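A conjugate-prior sketch of the Bayesian machinery, simplified to a complete sample rather than the paper's sample quantiles; the inverse-gamma prior and its hyperparameters are illustrative assumptions:

```python
import numpy as np

def rayleigh_bayes(x, a=2.0, b=1.0, t=None):
    """Conjugate Bayes for the Rayleigh model f(x) = (x/s2) exp(-x^2/(2 s2)),
    s2 = sigma^2, under an inverse-gamma(a, b) prior on s2.

    Likelihood ~ (s2)^(-n) exp(-T/(2 s2)) with T = sum(x_i^2), so the
    posterior is s2 | x ~ IG(a + n, b + T/2).
    """
    x = np.asarray(x, float)
    n, T = len(x), float(np.sum(x ** 2))
    a_post, b_post = a + n, b + T / 2.0
    out = {"a_post": a_post, "b_post": b_post,
           # Bayes estimator of s2 under squared-error loss = posterior mean
           "s2_bayes": b_post / (a_post - 1)}
    if t is not None:
        # Bayes estimate of reliability R(t) = E[exp(-t^2/(2 s2)) | x]:
        # 1/s2 ~ Gamma(a_post, rate b_post), whose Laplace transform gives
        # (b_post / (b_post + t^2/2)) ** a_post.
        out["reliability"] = (b_post / (b_post + t * t / 2.0)) ** a_post
    return out
```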

17.
18.
Empirical prediction intervals are constructed based on the distribution of previous out-of-sample forecast errors. Given historical data, a sample of such forecast errors is generated by successively applying a chosen point forecasting model to a sequence of fixed windows of past observations and recording the associated deviations of the model predictions from the actual observations out-of-sample. The suitable quantiles of the distribution of these forecast errors are then used along with the point forecast made by the selected model to construct an empirical prediction interval. This paper re-examines the properties of the empirical prediction interval. Specifically, we provide conditions for its asymptotic validity, evaluate its small sample performance and discuss its limitations.
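A minimal sketch of the construction, with a fixed-window AR(1) point forecaster as the (illustrative) chosen model and a one-step horizon:

```python
import numpy as np

def empirical_pi(x, window=60, alpha=0.10):
    """One-step empirical prediction interval: apply a fixed-window AR(1)
    forecaster successively, record out-of-sample errors, and attach their
    quantiles to the final point forecast."""
    x = np.asarray(x, float)

    def ar1_forecast(w):
        # Least-squares AR(1) fit on the window w, forecast the next value.
        Z = np.column_stack([np.ones(len(w) - 1), w[:-1]])
        b = np.linalg.lstsq(Z, w[1:], rcond=None)[0]
        return b[0] + b[1] * w[-1]

    errors = [x[t] - ar1_forecast(x[t - window:t])
              for t in range(window, len(x))]
    fc = ar1_forecast(x[-window:])
    q_lo, q_hi = np.quantile(errors, [alpha / 2, 1 - alpha / 2])
    return fc + q_lo, fc + q_hi
```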

19.
Consider an i.i.d. sample $X^*_1, X^*_2, \ldots, X^*_n$ from a location-scale family, and assume that the only available observations consist of the partial maxima (or minima) sequence $X^*_{1:1}, X^*_{2:2}, \ldots, X^*_{n:n}$, where $X^*_{j:j} = \max\{X^*_1, \ldots, X^*_j\}$. This kind of truncation appears in several circumstances, including best performances in athletics events. In the case of partial maxima, the form of the BLUEs (best linear unbiased estimators) is quite similar to that of the well-known Lloyd (Biometrika 39:88–95, 1952) BLUEs based on (the sufficient sample of) order statistics, but, in contrast to the classical case, their consistency is no longer obvious. The present paper is mainly concerned with the scale parameter, showing that the variance of the partial maxima BLUE is at most of order O(1/log n) for a wide class of distributions.

20.
This paper incorporates vintage differences and forecasts into the Markov switching models described by Hamilton (1994). The vintage differences and forecasts induce parameter breaks close to the end of the sample, too close for standard maximum likelihood techniques to produce precise parameter estimates. A supplementary procedure estimates the statistical properties of the end-of-sample observations that behave differently from the rest, allowing inferred probabilities to reflect the breaks. Empirical results using real-time data show that these techniques improve the ability of a Markov switching model based on GDP and GDI to recognize the start of the 2001 recession.
