Similar documents
20 similar documents found (search time: 840 ms)
1.
This paper considers a Gaussian first-order autoregressive process with unknown intercept where the initial value of the variable is a known constant. Monte Carlo simulations are used to investigate the sampling distribution of the t statistic for the autoregressive parameter when its value is in the neighborhood of unity. A small sigma asymptotic result is exploited in the construction of exact non-similar tests. The powers of non-similar tests of the random walk and other hypotheses are estimated for sample sizes typical in economic applications.
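A minimal sketch of the kind of Monte Carlo exercise this abstract describes — not the authors' exact design; the intercept, sample size, and replication count below are illustrative choices:

```python
import random
import statistics

def ar1_t_stat(rho, n, x0=0.0, intercept=0.0, seed=None):
    """Simulate one AR(1) path x_t = a + rho*x_{t-1} + e_t with fixed
    starting value x0, then return the OLS t statistic for the
    autoregressive coefficient (centred at the true rho)."""
    rng = random.Random(seed)
    x = [x0]
    for _ in range(n):
        x.append(intercept + rho * x[-1] + rng.gauss(0.0, 1.0))
    y, lag = x[1:], x[:-1]
    m_y, m_l = statistics.fmean(y), statistics.fmean(lag)
    sxx = sum((l - m_l) ** 2 for l in lag)
    sxy = sum((l - m_l) * (yi - m_y) for l, yi in zip(lag, y))
    b = sxy / sxx                       # OLS slope (AR coefficient)
    a = m_y - b * m_l                   # OLS intercept
    resid = [yi - a - b * l for yi, l in zip(y, lag)]
    s2 = sum(e * e for e in resid) / (len(y) - 2)
    return (b - rho) / (s2 / sxx) ** 0.5

# Empirical sampling distribution with a root near unity
stats = [ar1_t_stat(rho=0.95, n=50, seed=s) for s in range(2000)]
```

The simulated t statistics are shifted well below zero, reflecting the familiar downward bias near the unit root that motivates non-similar tests.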

2.
Summary Admissibility of estimators under vague prior information on the distribution of the unknown parameter is studied, which leads to the notion of gamma-admissibility. A sufficient condition for an estimator of the form δ(x) = (ax + b)/(cx + d) to be gamma-admissible in the one-parameter exponential family under squared error loss is established. As an application of this result, two equalizer rules are shown to be unique gamma-minimax estimators by proving their gamma-admissibility.

3.
In this paper we introduce a new regression model in which the response variable is bounded by two unknown parameters. A special case is a bounded alternative to the four-parameter logistic model. The four-parameter model, which has unbounded responses, is widely used, for instance, in bioassays, nutrition, genetics, calibration and agriculture. In reality, the responses are often bounded although the bounds may be unknown, and in that situation our model reflects the data-generating mechanism better. Complications arise for the new model, however, because the likelihood function is unbounded, and the global maximizers are not consistent estimators of the unknown parameters. Although the two sample extremes, the smallest and the largest observations, are consistent estimators for the two unknown boundaries, they have a slow convergence rate and are asymptotically biased. Improved estimators are developed by correcting for the asymptotic biases of the two sample extremes in the one-sample case; but even these consistent estimators do not attain the optimal convergence rate. To obtain efficient estimation, we suggest using the local maximizers of the likelihood function, i.e., the solution to the likelihood equations. We prove that, with probability approaching one as the sample size goes to infinity, there exists a solution to the likelihood equation that is consistent at the rate of the square root of the sample size and is asymptotically normally distributed.

4.
Micro-aggregation is a frequently used strategy to anonymize data before they are released to the scientific public. A sample of a continuous random variable is individually micro-aggregated by first sorting and grouping the data into groups of equal size and then replacing the values of the variable in each group by their group mean. In a similar way, data with more than one variable can be anonymized by individual micro-aggregation. Data thus distorted may still be used for statistical analysis. We show that if probabilities and quantiles are estimated in the usual way by computing relative frequencies and sample quantiles, respectively, these estimates are consistent and asymptotically normal under mild conditions.
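The individual micro-aggregation step described here is mechanical enough to sketch directly; this toy version (group size and data values are illustrative) sorts, groups, and replaces by group means:

```python
import statistics

def micro_aggregate(values, group_size):
    """Individually micro-aggregate a sample: sort, split into
    consecutive groups of the given size, and replace every value by
    its group mean (a trailing short group keeps its own mean)."""
    s = sorted(values)
    out = []
    for i in range(0, len(s), group_size):
        group = s[i:i + group_size]
        out.extend([statistics.fmean(group)] * len(group))
    return out

data = [3.1, 0.4, 2.2, 5.9, 1.7, 4.8]
anon = micro_aggregate(data, group_size=2)
```

Relative frequencies and sample quantiles can then be computed on `anon` in the usual way; the abstract's claim is that these remain consistent and asymptotically normal estimators of the population quantities.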

5.
Choosing the sample size in advance is a familiar problem: often, additional observations appear to be desirable. The final sample size then becomes a random variable, which has rather serious consequences.
Two such sample extension situations will be considered here. In the first situation, the observed sample variance determines whether or not to double the original sample size. In the second situation, the variances observed in two independent samples are compared; their ratio determines the number of additional observations.
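The first situation can be sketched as follows; the initial size, variance threshold, and doubling rule are illustrative stand-ins, not the paper's exact procedure:

```python
import random
import statistics

def two_stage_sample(draw, n0, var_threshold, rng):
    """Two-stage scheme (illustrative): take n0 observations; if their
    sample variance exceeds var_threshold, double the sample size.
    The final sample size is therefore a random variable."""
    sample = [draw(rng) for _ in range(n0)]
    if statistics.variance(sample) > var_threshold:
        sample += [draw(rng) for _ in range(n0)]
    return sample

rng = random.Random(7)
sizes = [len(two_stage_sample(lambda r: r.gauss(0, 1), 10, 1.0, rng))
         for _ in range(500)]
```

Across replications the final sample size varies between the two states, which is exactly the randomness whose "rather serious consequences" the abstract refers to.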

6.
Misclassification is found in many of the variables used in social sciences and, in practice, tends to be ignored in statistical analyses, and this can lead to biased results. This paper shows how to correct for differential misclassification in multilevel models and illustrates the extent to which this changes fixed and random parameter estimates. Reliability studies on self-reported behaviour of pregnant women suggest that there may be differential misclassification related to smoking and, thus, to child exposure to smoke. Models are applied to the Millennium Cohort Study data. The response variable is the child cognitive development assessed by the British Ability Scales at 3 years of age and explanatory variables are child exposure to smoke and family income. The proposed method allows a correction for misclassification when the specificity and sensitivity are known, and the assessment of potential biases occurring in the multilevel model parameter estimates if a validation data sample is not available, which is often the case.
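A much simpler building block than the paper's multilevel correction, but in the same spirit: the classical Rogan-Gladen correction of an observed proportion when sensitivity and specificity are known. The numbers below are hypothetical, not from the Millennium Cohort Study:

```python
def correct_prevalence(p_observed, sensitivity, specificity):
    """Rogan-Gladen correction of an observed proportion of a
    misclassified binary variable, given known sensitivity and
    specificity of the measurement instrument."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("measurement must be better than chance")
    p = (p_observed + specificity - 1.0) / denom
    return min(1.0, max(0.0, p))  # clamp to the unit interval

# e.g. 30% self-report smoking; self-report has sens 0.80, spec 0.95
p_true = correct_prevalence(0.30, 0.80, 0.95)
```

This illustrates why ignoring misclassification biases estimates: the corrected prevalence (1/3) differs noticeably from the observed 30%.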

7.
This paper describes a method for estimating simultaneously the parameter vector of the systematic component and the distribution function of the random component of a censored linear regression model. The estimator is obtained by minimizing the sum of the squares of the differences between the observed values of the dependent variable and the corresponding expected values of this variable according to the estimated parameter vector and distribution function. The resulting least squares parameter estimator incorporates information on the distribution of the random component of the regression model that is available from the estimation sample. Hence, it may often be more efficient than parameter estimators that do not use such information. The results of numerical experiments with the least squares estimator tend to support this hypothesis.

8.
The objective of this paper was to investigate differences in male employee experiences in the light of employment equity law and a strong affirmative action drive within present-day South African organizations. This research is important as it can substantiate or invalidate perspectives and beliefs surrounding employment equity issues. A cross-sectional design was used which consisted of a stratified random sample from five corporate organizations (N = 1000). Latent variable modeling with Bayesian estimation was implemented. This paper also demonstrated the use of informative hypothesis testing and subsequent Bayes factors to directly compare the informative hypotheses, in order to show how much more likely one hypothesis is to be the correct hypothesis, compared to the other(s). The results revealed that non-designated (white male) employees experience more job insecurity than their designated (black male) counterparts, but this does not necessarily associate with more turnover intention. It was also found that when designated employees experience fewer career opportunities, they show more turnover intention. Furthermore, it was shown that designated employees perceive more discrimination, but that this does not associate with more turnover intention. The limitations and future research opportunities are discussed.

9.
This paper studies linearity testing for STAR models with a local random walk and with a local stochastic trend. Wald-type test statistics are constructed, their limiting distributions are derived, and their finite-sample properties are analysed. A procedure for testing the linearity of STAR models when local stationarity is unknown is then proposed, and a robust test statistic is constructed. Analyses of size and power show that this statistic has good size control and high power.

10.
While the likelihood ratio measures statistical support for an alternative hypothesis about a single parameter value, it is undefined for an alternative hypothesis that is composite in the sense that it corresponds to multiple parameter values. Regarding the parameter of interest as a random variable enables measuring support for a composite alternative hypothesis without requiring the elicitation or estimation of a prior distribution, as described below. In this setting, in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference in the posterior and prior log-odds. That ideal support may be replaced by any measure of support that, on a per-observation basis, is asymptotically unbiased as a predictor of the ideal support. Such measures of support are easily interpreted and, if desired, can be combined with any specified or estimated prior probability of the null hypothesis. Two qualifying measures of support are minimax-optimal. An application to proteomics data indicates that a modification of optimal support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins.

11.
Some quality control schemes have been developed for monitoring several related quality characteristics. The most familiar multivariate process monitoring and control procedure is Hotelling's T² control chart for monitoring the mean vector of the process; it is a direct analog of the univariate Shewhart x̄ chart. As in the univariate case, ARL improvements are very important, particularly for small process shifts. In this paper, we study the T² control chart with a two-state adaptive sample size, when the shift in the process mean does not occur at the beginning but at some random time in the future. The occurrence time of the shift is assumed to be an exponentially distributed random variable.
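The two ingredients — the T² statistic and a two-state sample-size rule — can be sketched as below. The target vector, covariance, warning limit, and the two sample sizes are illustrative, and the rule shown (larger sample after a warning signal) is only the generic form of a two-state adaptive scheme:

```python
def hotelling_t2(xbar, mu0, sigma_inv, n):
    """Hotelling T^2 for a subgroup mean vector xbar against target
    mu0, given the inverse covariance matrix (p x p nested lists)."""
    d = [x - m for x, m in zip(xbar, mu0)]
    sd = [sum(sigma_inv[i][j] * d[j] for j in range(len(d)))
          for i in range(len(d))]
    return n * sum(di * si for di, si in zip(d, sd))

def next_sample_size(t2, warning_limit, n_small, n_large):
    """Two-state adaptive rule: take the larger subgroup next when the
    current statistic falls above the warning limit."""
    return n_large if t2 > warning_limit else n_small

t2 = hotelling_t2([0.4, -0.2], [0.0, 0.0],
                  [[1.0, 0.0], [0.0, 1.0]], n=5)
n_next = next_sample_size(t2, warning_limit=0.8, n_small=5, n_large=15)
```

Switching to the larger sample only when evidence of a shift appears is what buys the ARL improvement for small shifts.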

12.
The beta-binomial distribution is reported in the literature as a useful generalization of the binomial in the case of heterogeneous binomial sampling. An extra model parameter is introduced to accommodate extra-binomial variation. Some additions to available results are given by presenting approximate F-tests for factorial designs where the response variable is of 0-1 type and sampling is heterogeneous binomial. These tests can be used when sample sizes are large and equal and some degrees of freedom are left from replicates or negligible interactions to estimate the extra model parameter.
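The heterogeneous-sampling mechanism behind the beta-binomial is easy to simulate; the shape parameters and counts below are illustrative only:

```python
import random

def beta_binomial(n, a, b, rng):
    """Draw from a beta-binomial: the success probability is itself
    drawn from Beta(a, b), so counts vary more across samples than a
    single binomial would allow (extra-binomial variation)."""
    p = rng.betavariate(a, b)
    return sum(rng.random() < p for _ in range(n))

rng = random.Random(11)
draws = [beta_binomial(20, 2.0, 2.0, rng) for _ in range(3000)]
```

With a = b = 2 the mean success probability is 0.5, but the simulated variance clearly exceeds the plain binomial value n p(1 − p) = 5, which is the overdispersion the extra model parameter absorbs.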

13.
Let X₁, X₂, ..., Xₙ be a random sample from a normal distribution with unknown mean μ and known variance σ². In many practical situations, μ is known a priori to be restricted to a bounded interval, say [−m, m] for some m > 0. The sample mean X̄ then becomes an inadmissible estimator for μ. It is also not minimax with respect to the squared error loss function. Minimax and other estimators for this problem have been studied by Casella and Strawderman (Ann Stat 9:870–878, 1981), Bickel (Ann Stat 9:1301–1309, 1981), Gatsonis et al. (Stat Prob Lett 6:21–30, 1987) and others. In this paper, we obtain some new estimators for μ. The case when the variance σ² is unknown is also studied and various estimators for μ are proposed. The risk performance of all estimators is compared numerically for both cases, σ² known and unknown.
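A simulation makes the inadmissibility of X̄ tangible. The truncated estimator below (project X̄ onto [−m, m]) is only the crudest improvement, not one of the paper's new estimators, and all numeric settings are illustrative:

```python
import random
import statistics

def truncated_mean(sample, m):
    """Project the sample mean onto the known interval [-m, m]; this
    simple truncation already dominates the raw sample mean when the
    true mean lies in that interval."""
    xbar = statistics.fmean(sample)
    return max(-m, min(m, xbar))

def mse(estimator, mu, sigma, n, m, reps, seed=0):
    """Monte Carlo squared-error risk of an estimator at a given mu."""
    rng = random.Random(seed)
    errs = []
    for _ in range(reps):
        s = [rng.gauss(mu, sigma) for _ in range(n)]
        errs.append((estimator(s, m) - mu) ** 2)
    return statistics.fmean(errs)

risk_trunc = mse(truncated_mean, mu=0.0, sigma=1.0, n=5, m=0.5,
                 reps=4000)
risk_mean = mse(lambda s, m: statistics.fmean(s), 0.0, 1.0, 5, 0.5,
                4000)
```

For any μ in [−m, m] truncation can only shrink the error, so its risk is strictly smaller whenever X̄ ever leaves the interval.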

14.
Very often, values of a size variable are known for the elements of a population we want to sample. For example, the elements may be clusters, the size variable denoting the number of units in a cluster. Then it is quite usual to base the selection of elements on inclusion probabilities proportional to the size values. To estimate the total of all values of an unknown variable for the units in the population of interest (i.e. for the units contained in the clusters) we may use weights, e.g. inverse inclusion probabilities. We clarify these ideas by means of the minimax principle. In particular, we show that the use of inclusion probabilities equal to 1 is recommendable for units with high values of the size measure. AMS Classification 2000: Primary 62D05. Secondary 62C20
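The probability-proportional-to-size construction, with the capping at 1 that the abstract recommends for large size values, can be sketched like this (sizes and sample size are illustrative; positive sizes are assumed):

```python
def pps_inclusion_probs(sizes, n):
    """Inclusion probabilities proportional to size for expected
    sample size n, capped at 1: elements whose proportional
    probability would exceed 1 are included with certainty and the
    remaining probabilities are rescaled."""
    probs = [1.0] * len(sizes)
    certain = set()
    while True:
        rest = [i for i in range(len(sizes)) if i not in certain]
        total = sum(sizes[i] for i in rest)
        n_rest = n - len(certain)
        overflow = [i for i in rest
                    if sizes[i] * n_rest / total >= 1.0]
        if not overflow:
            break
        certain.update(overflow)
    for i in rest:
        probs[i] = sizes[i] * n_rest / total
    return probs

probs = pps_inclusion_probs([100, 5, 4, 3, 2, 1], n=3)
# Horvitz-Thompson estimation then weights each sampled element's
# total by the inverse of its inclusion probability.
```

The dominant element (size 100) is included with probability 1, exactly the behaviour the minimax argument recommends.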

15.
The probability distributions of the i-th and j-th order statistics and of the range R of a sample of size n, taken from a population with probability density function f(x), are obtained when the sample size n is a random variable N that has (i) a generalized Poisson distribution, or (ii) a generalized negative binomial distribution. Specific results are then obtained (a) when f(x) is uniform over (0,1), and (b) when f(x) is exponential. All the results for N being a Poisson, binomial or negative binomial random variable follow as special cases.
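The randomness of N is the whole point, so a simulation sketch is a natural companion. This uses a plain Poisson N (a special case the abstract mentions) and a uniform parent; conditioning on N ≥ 2 keeps the range well defined:

```python
import math
import random

def poisson_draw(lam, rng):
    """Poisson variate via Knuth's product method; fine for small lam."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def range_with_random_size(mean_n, rng):
    """Sample range max - min of uniform(0,1) observations whose size
    N is Poisson(mean_n), conditioned on N >= 2."""
    n = poisson_draw(mean_n, rng)
    while n < 2:
        n = poisson_draw(mean_n, rng)
    sample = [rng.random() for _ in range(n)]
    return max(sample) - min(sample)

rng = random.Random(1)
ranges = [range_with_random_size(5.0, rng) for _ in range(300)]
```

Mixing E[R | N = n] = (n − 1)/(n + 1) over the distribution of N gives the unconditional mean range, which the simulation approximates.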

16.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small- and medium-sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite-sample distribution of the long-run variance estimator used in the KPSS test, while the second source of the size distortions is the serial correlation not captured by the long-run variance estimator because of a too narrow choice of truncation lag parameter. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite-sample distribution of the KPSS test statistic, conditional on the time-series dimension and the truncation lag parameter, is used. Hence, finite-sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is a lower raw power against a non-stationary alternative hypothesis.
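To make the two sources concrete, here is a standard level-stationarity KPSS statistic with a Bartlett-window long-run variance estimator; the series and truncation lag are toy values, and this is the textbook statistic, not the article's finite-sample critical-value procedure:

```python
import statistics

def kpss_stat(y, lag):
    """KPSS level-stationarity statistic with a Bartlett-window
    long-run variance estimator truncated at `lag`. Both the
    finite-sample behaviour of this variance estimator and a too
    narrow `lag` distort the test's size."""
    n = len(y)
    e = [v - statistics.fmean(y) for v in y]        # demeaned series
    s, running = [], 0.0
    for v in e:
        running += v
        s.append(running)                           # partial sums
    lrv = sum(v * v for v in e) / n                 # gamma_0
    for k in range(1, lag + 1):
        gamma = sum(e[t] * e[t - k] for t in range(k, n)) / n
        lrv += 2.0 * (1.0 - k / (lag + 1)) * gamma  # Bartlett weight
    return sum(v * v for v in s) / (n * n * lrv)

stat = kpss_stat([0.5, 1.2, 0.8, 1.1, 0.9, 1.3, 0.7, 1.0], lag=2)
```

Widening `lag` soaks up more serial correlation (source two) but degrades the finite-sample distribution of `lrv` (source one) — the trade-off the article quantifies.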

17.
The traditional rationale for differencing time series data is to attain stationarity. For a nearly non-stationary first-order autoregressive process—AR(1) with positive slope parameter near unity—we were led to a complementary rationale. If one suspects near non-stationarity of the AR(1) process, if the sample size is 'small' or 'moderate', and if good one-step-ahead prediction performance is the goal, then it is wise to difference the data and treat the differences as observations on a stationary AR(1) process. Estimation by Ordinary Least Squares then appears to be at least as satisfactory as nonlinear least squares. Use of differencing for an already stationary process can be motivated by Bayesian concepts: differencing can be viewed as an easy way to incorporate non-diffuse prior judgement—that the process is nearly non-stationary—into one's analysis. Random walks and near random walks are often encountered in economics. Unless one's sample size is large, the same statistical analyses apply to either.
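The prediction comparison behind this rationale can be sketched in a few lines. This simplified version compares no-intercept OLS on levels with the pure random-walk forecast (the degenerate case of modelling the differences); the slope, sample size, and replication count are illustrative:

```python
import random

def simulate_ar1(rho, n, rng):
    """AR(1) path started at zero with standard normal shocks."""
    x = [0.0]
    for _ in range(n):
        x.append(rho * x[-1] + rng.gauss(0.0, 1.0))
    return x

def ols_ar1_forecast(x):
    """Fit x_t = b * x_{t-1} by no-intercept OLS; predict the next value."""
    num = sum(a * b for a, b in zip(x[:-1], x[1:]))
    den = sum(a * a for a in x[:-1])
    return (num / den) * x[-1]

def difference_forecast(x):
    """Random-walk forecast: next value = last observed value."""
    return x[-1]

rng = random.Random(3)
err_ols, err_diff = [], []
for _ in range(1000):
    path = simulate_ar1(0.98, 30, rng)
    x_next = 0.98 * path[-1] + rng.gauss(0.0, 1.0)
    err_ols.append((ols_ar1_forecast(path) - x_next) ** 2)
    err_diff.append((difference_forecast(path) - x_next) ** 2)
```

With the slope near unity and only 30 observations, the two one-step squared-error risks are of the same order, which is the point: little is lost by differencing, and estimation is simplified.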

18.
A general convolution theorem within a Bayesian framework is presented. Consider estimation of the Euclidean parameter θ by an estimator T within a parametric model. Let W be a prior distribution for θ and define G as the W-average of the distribution of T − θ under θ. In some cases, for any estimator T the distribution G can be written as a convolution G = K * L with K a distribution depending only on the model, i.e. on W and the distributions under θ of the observations. In such a Bayes convolution result optimal estimators exist, satisfying G = K. For location models we show that finite sample Bayes convolution results hold in the normal, loggamma and exponential case. Under regularity conditions we prove that normal and loggamma are the only smooth location cases. We also discuss relations with classical convolution theorems.

19.
We explore the consequences of adjoining a symmetry group to a statistical model. Group actions are first induced on the sample space, and then on the parameter space. It is argued that the right invariant measure induced by the group on the parameter space is a natural non-informative prior for the parameters of the model. The permissible sub-parameters are introduced, i.e., the subparameters upon which group actions can be defined. Equivariant estimators are similarly defined. Orbits of the group are defined on the sample space and on the parameter space; in particular the group action is called transitive when there is only one orbit. Credibility sets and confidence sets are shown (under right invariant prior and assuming transitivity on the parameter space) to be equal when defined by permissible sub-parameters and constructed from equivariant estimators. The effect of different choices of transformation group is illustrated by examples, and properties of the orbits on the sample space and on the parameter space are discussed. It is argued that model reduction should be constrained to one or several orbits of the group. Using this and other natural criteria and concepts, among them concepts related to design of experiments under symmetry, leads to links towards chemometrical prediction methods and towards the foundation of quantum theory.

20.
Feng Li, Lu Lin, Yuxia Su. Metrika (2013) 76(2):225–238
Variable selection plays an important role in high-dimensional data analysis; the Dantzig selector performs variable selection and model fitting for linear and generalized linear models. In this paper we focus on variable selection and parametric estimation for partially linear models via the Dantzig selector. Large-sample asymptotic properties of the Dantzig selector estimator are studied when the sample size n tends to infinity while the dimension p is fixed. We see that the Dantzig selector might not be consistent. To remedy this drawback, we adopt the adaptive Dantzig selector motivated by Dicker and Lin (submitted). Moreover, we show that the adaptive Dantzig selector estimator for the parametric component of partially linear models has the oracle property under some appropriate conditions. As generalizations of the Dantzig selector, both the adaptive Dantzig selector and the Dantzig selector optimization can be implemented by the efficient algorithm DASSO proposed by James et al. (J R Stat Soc Ser B 71:127–142, 2009). Choices of the tuning parameter and bandwidth are also discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号