Similar Literature (20 results)
1.
Consider the loglinear model for categorical data under the assumption of multinomial sampling. We are interested in testing between various hypotheses on the parameter space when some hypotheses relating to the parameters of the models can be written in terms of constraints on the frequencies. The usual likelihood ratio test, with maximum likelihood estimation for the unspecified parameters, is generalized to tests based on φ-divergence statistics, using minimum φ-divergence estimators. These tests yield the classical likelihood ratio test as a special case. Asymptotic distributions of the new φ-divergence test statistics are derived under the null hypothesis.
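The φ-divergence family can be made concrete via the Cressie–Read power-divergence statistic, of which Pearson's chi-square (λ = 1) and the likelihood-ratio statistic G² (the λ → 0 limit) are special cases. A minimal sketch (illustrative only; the paper's minimum φ-divergence estimation under frequency constraints is not reproduced here):

```python
import math

def power_divergence(obs, exp, lam):
    """Cressie-Read power-divergence statistic between observed and
    expected multinomial frequencies; lam=1 gives Pearson's X^2 and
    the limit lam -> 0 gives the likelihood-ratio statistic G^2."""
    if abs(lam) < 1e-10:  # treat as the lam -> 0 limit
        return 2.0 * sum(o * math.log(o / e)
                         for o, e in zip(obs, exp) if o > 0)
    return (2.0 / (lam * (lam + 1.0))) * sum(
        o * ((o / e) ** lam - 1.0) for o, e in zip(obs, exp))
```

Under the null hypothesis every member of the family shares the same limiting chi-square distribution, which is what makes this a natural generalization of the likelihood ratio test.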

2.
The Dirichlet‐multinomial process can be seen as the generalisation of the binomial model with beta prior distribution when the number of categories is larger than two. In such a scenario, setting informative prior distributions becomes difficult when the number of categories is large, so the need for an objective approach arises. However, what does objective mean in the Dirichlet‐multinomial process? To address this question, we study the sensitivity of the posterior distribution to the choice of an objective Dirichlet prior among those presented in the available literature. We illustrate the impact of the choice of prior distribution in several scenarios and discuss the most sensible ones.
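For intuition, the posterior under any Dirichlet(α) prior is available in closed form: with counts n = (n₁, …, n_k), the posterior is Dirichlet(α + n). A small sketch comparing posterior means under three commonly cited objective symmetric priors (uniform α = 1, Jeffreys α = 1/2, Perks α = 1/k); the prior labels are standard, but the count scenario below is invented for illustration:

```python
def posterior_means(counts, alpha):
    """Posterior mean of each cell probability under a symmetric
    Dirichlet(alpha) prior: (alpha + n_i) / (k*alpha + N)."""
    k, total = len(counts), sum(counts)
    return [(alpha + n) / (k * alpha + total) for n in counts]

counts = [50, 30, 20]
for name, alpha in [("uniform", 1.0), ("Jeffreys", 0.5),
                    ("Perks", 1.0 / len(counts))]:
    print(name, [round(m, 4) for m in posterior_means(counts, alpha)])
```

With many categories and sparse counts, the choice of α can move small-cell estimates noticeably, which is precisely the sensitivity the abstract studies.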

3.
Nader Ebrahimi, Metrika (1993) 40(1): 339-348
The role of the so-called surplus process in the assessment of the probability of survival of a company is well known in risk theory and its applications. However, the insurance models used in this regard ignore the fact that, in many situations, no relevant information is available for the assessment of survival after the company goes out of business. In this paper, we revisit the classical risk model in order to remedy this situation. Having stopped the deficit process, which is the negative of the surplus process, at the time of ruin, under two different sampling schemes, we obtain inference procedures for ruin probabilities. As by-products of our methodology, we also obtain procedures to assess the reliability of systems whose survival depends on a cumulative damage process, which is equivalent to the aggregate claim size process of the classical risk model.
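As a point of reference for the ruin probabilities discussed above, the classical risk model admits a closed form when claims are exponential. The sketch below implements that textbook formula (mean claim size μ, relative safety loading θ), not the paper's inference procedure:

```python
import math

def ruin_probability(u, mu, theta):
    """Ruin probability psi(u) for the classical compound Poisson risk
    model with Exp(mean mu) claim sizes and safety loading theta > 0:
    psi(u) = exp(-theta*u / (mu*(1+theta))) / (1+theta)."""
    return math.exp(-theta * u / (mu * (1.0 + theta))) / (1.0 + theta)
```

Estimating such probabilities from data observed only up to the ruin time is exactly where the stopped deficit process enters.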

4.
This paper provides a test procedure for testing a Bernoulli success probability in the case of costly trials when inverse sampling is carried out. The proposed test is based on a two-population adaptive sampling scheme used in clinical trials. Some exact and asymptotic results related to the test are studied. The proposed procedure is applicable when the alternatives are not too far from the hypothesized null value.
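Under inverse sampling one continues until the k-th success, so the total number of trials N is negative binomial and exact tail probabilities reduce to binomial sums. A sketch under the assumption that a large N is evidence against H0 (i.e., of a smaller success probability); this is a generic exact test, not the adaptive two-population scheme of the paper:

```python
import math

def pvalue_inverse_sampling(n_obs, k, p0):
    """Exact P(N >= n_obs | p = p0) when sampling until the k-th success.
    N >= n iff the first n-1 trials contain at most k-1 successes."""
    m = n_obs - 1
    return sum(math.comb(m, s) * p0 ** s * (1.0 - p0) ** (m - s)
               for s in range(k))
```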

5.
6.
Fixed-width confidence intervals for the difference of location parameters of two independent negative exponential distributions are constructed via triple sampling when the scale parameters are unknown and unequal. The present three-stage estimation methodology is put forth because (i) it is operationally more convenient than the existing purely sequential counterpart, and (ii) the three-stage and the purely sequential estimation techniques have fairly similar asymptotic second-order characteristics.

7.
Recent electricity price forecasting studies have shown that decomposing a series of spot prices into a long-term trend-seasonal and a stochastic component, modeling them independently and then combining their forecasts, can yield more accurate point predictions than an approach in which the same regression or neural network model is calibrated to the prices themselves. Here, considering two novel extensions of this concept to probabilistic forecasting, we find that (i) efficiently calibrated non-linear autoregressive with exogenous variables (NARX) networks can outperform their autoregressive counterparts, even without combining forecasts from many runs, and that (ii) in terms of accuracy it is better to construct probabilistic forecasts directly from point predictions. However, if speed is a critical issue, running quantile regression on combined point forecasts (i.e., committee machines) may be an option worth considering. Finally, we confirm an earlier observation that averaging probabilities outperforms averaging quantiles when combining predictive distributions in electricity price forecasting.
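The distinction between averaging probabilities and averaging quantiles is easy to state on empirical predictive distributions: the former averages the CDFs vertically, the latter horizontally. A toy sketch with two predictive samples (the data and the simple quantile rule are invented for illustration):

```python
def ecdf(sample, x):
    """Empirical CDF of the sample at x."""
    return sum(1 for s in sample if s <= x) / len(sample)

def quantile(sample, p):
    """A simple empirical quantile (order-statistic lookup)."""
    s = sorted(sample)
    return s[min(int(p * len(s)), len(s) - 1)]

def prob_average(samples, x):
    """Vertical combination: average the predictive CDFs at x."""
    return sum(ecdf(s, x) for s in samples) / len(samples)

def quantile_average(samples, p):
    """Horizontal combination: average the predictive quantiles at p."""
    return sum(quantile(s, p) for s in samples) / len(samples)
```

The paper's closing observation is that the vertical (probability) combination tends to produce better-calibrated combined forecasts than the horizontal one.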

8.
We propose non-nested hypothesis tests for conditional moment restriction models based on the method of generalized empirical likelihood (GEL). By utilizing the implied GEL probabilities from a sequence of unconditional moment restrictions that contains equivalent information of the conditional moment restrictions, we construct Kolmogorov–Smirnov and Cramér–von Mises type moment encompassing tests. Advantages of our tests over Otsu and Whang’s (2011) tests are: (i) they are free from smoothing parameters, (ii) they can be applied to weakly dependent data, and (iii) they allow non-smooth moment functions. We derive the null distributions, validity of a bootstrap procedure, and local and global power properties of our tests. The simulation results show that our tests have reasonable size and power performance in finite samples.

9.
Several jackknife estimators of a relative risk in a single 2×2 contingency table and of a common relative risk in a 2×2×K contingency table are presented. The estimators are based on the maximum likelihood estimator in a single table and on an estimator proposed by Tarone (1981) for stratified samples, respectively. For the stratified case, a sampling scheme is assumed in which the number of observations within each table tends to infinity while the number of tables remains fixed. The asymptotic properties of the above estimators are derived. In particular, we present two general results which, under certain regularity conditions, yield consistency and asymptotic normality of every jackknife estimator of a class of functions of binomial probabilities.
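The delete-one jackknife for the relative risk in a single 2×2 table is easy to sketch: every observation in the same cell yields the same leave-one-out estimate, so only four recomputations are needed. (A minimal illustration of the bias-corrected point estimate only; the stratified Tarone-based estimators are not reproduced.)

```python
def rel_risk(a, b, c, d):
    """Relative risk from a 2x2 table with cell counts a, b, c, d."""
    return (a / (a + b)) / (c / (c + d))

def jackknife_rr(a, b, c, d):
    """Delete-one jackknife (bias-corrected) estimate of the relative
    risk; leave-one-out values are weighted by their cell counts."""
    n = a + b + c + d
    theta = rel_risk(a, b, c, d)
    loo = (a * rel_risk(a - 1, b, c, d) + b * rel_risk(a, b - 1, c, d)
           + c * rel_risk(a, b, c - 1, d) + d * rel_risk(a, b, c, d - 1)) / n
    return n * theta - (n - 1) * loo
```

The jackknife correction pulls the plug-in ratio estimate down, consistent with its well-known upward small-sample bias.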

10.
In this paper we consider the problem of testing for equality of two density or two conditional density functions defined over mixed discrete and continuous variables. We smooth both the discrete and continuous variables, with the smoothing parameters chosen via least-squares cross-validation. The test statistics are shown to have (asymptotic) normal null distributions. However, we advocate the use of bootstrap methods in order to better approximate their null distribution in finite-sample settings and we provide asymptotic validity of the proposed bootstrap method. Simulations show that the proposed tests have better power than both conventional frequency-based tests and smoothing tests based on ad hoc smoothing parameter selection, while a demonstrative empirical application to the joint distribution of earnings and educational attainment underscores the utility of the proposed approach in mixed data settings.
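The flavor of such a test can be conveyed with a continuous-only, permutation-based analogue: compare kernel density estimates through an integrated-squared-difference statistic and approximate its null distribution by permuting the pooled sample. (The bandwidth, grid, and statistic here are illustrative stand-ins for the paper's cross-validated smoothing and bootstrap.)

```python
import math
import random

def kde(sample, x, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    c = len(sample) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample) / c

def ise_statistic(s1, s2, grid, h):
    """Integrated squared difference of the two density estimates."""
    return sum((kde(s1, x, h) - kde(s2, x, h)) ** 2 for x in grid)

def permutation_test(s1, s2, grid, h, n_perm=199, seed=0):
    """Permutation p-value for equality of the two densities."""
    rng = random.Random(seed)
    observed = ise_statistic(s1, s2, grid, h)
    pooled = list(s1) + list(s2)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if ise_statistic(pooled[:len(s1)], pooled[len(s1):], grid, h) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)
```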

11.
Early survey statisticians faced a puzzling choice between randomized sampling and purposive selection but, by the early 1950s, Neyman's design-based or randomization approach had become generally accepted as standard. It remained virtually unchallenged until the early 1970s, when Royall and his co-authors produced an alternative approach based on statistical modelling. This revived the old idea of purposive selection, under the new name of “balanced sampling”. Suppose that the sampling strategy to be used for a particular survey is required to involve both a stratified sampling design and the classical ratio estimator, but that, within each stratum, a choice is allowed between simple random sampling and simple balanced sampling; then which should the survey statistician choose? The balanced sampling strategy appears preferable in terms of robustness and efficiency, but the randomized design has certain countervailing advantages. These include the simplicity of the selection process and an established public acceptance that randomization is “fair”. It transpires that nearly all the advantages of both schemes can be secured if simple random samples are selected within each stratum and a generalized regression estimator is used instead of the classical ratio estimator.
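The closing comparison is easy to illustrate on a single stratum: the classical ratio estimator scales the known x-total, while the generalized regression (GREG) estimator adds a slope correction and is exact whenever y is exactly linear in x. A toy sketch (the population values are invented):

```python
def ratio_estimator(xs, ys, x_total):
    """Classical ratio estimator of the y-total from a sample."""
    return x_total * sum(ys) / sum(xs)

def greg_estimator(xs, ys, x_total, pop_size):
    """Generalized regression estimator with an intercept-and-slope fit."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return pop_size * ybar + b * (x_total - pop_size * xbar)
```

When y = 2x + 1 exactly, the GREG estimator recovers the true total while the ratio estimator does not, which is one way to see why GREG retains the efficiency of balanced sampling under a linear model.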

12.
This paper establishes the asymptotic distributions of the impulse response functions in panel vector autoregressions with a fixed time dimension. It also proves the asymptotic validity of a bootstrap approximation to their sampling distributions. The autoregressive parameters are estimated using the GMM estimators based on the first differenced equations and the error variance is estimated using an extended analysis-of-variance type estimator. Contrary to the time series setting, we find that the GMM estimator of the autoregressive coefficients is not asymptotically independent of the error variance estimator. The asymptotic dependence calls for variance correction for the orthogonalized impulse response functions. Simulation results show that the variance correction improves the coverage accuracy of both the asymptotic confidence band and the studentized bootstrap confidence band for the orthogonalized impulse response functions.

13.
In this paper, we investigate certain operational and inferential aspects of invariant Post‐randomization Method (PRAM) as a tool for disclosure limitation of categorical data. Invariant PRAM preserves unbiasedness of certain estimators, but inflates their variances and distorts other attributes. We introduce the concept of strongly invariant PRAM, which does not affect data utility or the properties of any statistical method. However, the procedure seems feasible in limited situations. We review methods for constructing invariant PRAM matrices and prove that a conditional approach, which can preserve the original data on any subset of variables, yields invariant PRAM. For multinomial sampling, we derive expressions for variance inflation inflicted by invariant PRAM and variances of certain estimators of the cell probabilities and also their tight upper bounds. We discuss estimation of these quantities and thereby assessing statistical efficiency loss from applying invariant PRAM. We find a connection between invariant PRAM and creating partially synthetic data using a non‐parametric approach, and compare estimation variance under the two approaches. Finally, we discuss some aspects of invariant PRAM in a general survey context.
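A standard construction of an invariant PRAM matrix can be sketched directly: with retention probability θ and category distribution π, the matrix P with entries θ·1{i=j} + (1−θ)π_j satisfies πP = π, so expected released frequencies match the original ones. (The values of θ and π below are illustrative.)

```python
def invariant_pram_matrix(pi, theta):
    """Transition matrix P[i][j] = theta*1{i==j} + (1-theta)*pi[j].
    Rows sum to one and pi is left-invariant: sum_i pi[i]*P[i][j] = pi[j]."""
    k = len(pi)
    return [[theta * (i == j) + (1.0 - theta) * pi[j] for j in range(k)]
            for i in range(k)]
```

Invariance is what preserves unbiasedness of the cell-probability estimators; the variance inflation the paper quantifies comes from the extra randomness the perturbation injects.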

14.
We revisit the bounded maximal risk point estimation problem as well as the fixed-width confidence interval estimation problem for the largest mean among k (≥ 2) independent normal populations having unknown means and unknown but equal variance. In the point estimation setup, we devise appropriate two-stage and modified two-stage methodologies so that the associated maximal risk can be bounded from above exactly by a preassigned positive number. Kuo and Mukhopadhyay (1990), however, emphasized only the asymptotics in this context. We have also introduced, in both point and interval estimation problems, accelerated sequential methodologies, thereby saving sampling operations substantially over the purely sequential schemes considered in Kuo and Mukhopadhyay (1990), while enjoying at the same time asymptotic second-order characteristics fairly similar to those of the purely sequential ones.

15.
In this paper we examine the multinomial probit model in the light of recent developments in the field of simulation-based inference. We focus upon five broad areas: specification of multinomial choice models; parameter estimability and the use of simulation techniques; parameter identification; specification testing; and practical issues in simulation-based inference. Although the substitution of simulated probabilities for difficult-to-compute multidimensional integrals represents a significant step, by examining the more tenuous task of identification, and in particular the identification of covariance parameters, we show that the specification and estimation of the multinomial probit still represents a formidable task.
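The "simulated probabilities" in question are typically produced by the GHK simulator, which builds a multivariate normal rectangle probability from a sequence of univariate truncated draws. A compact sketch for P(Z < b) with Z ~ N(0, LLᵀ) and L lower-triangular (a generic GHK implementation, not tied to any particular probit specification):

```python
import random
from statistics import NormalDist

def ghk_probability(L, upper, n_draws=1000, seed=0):
    """GHK estimate of P(Z_i < upper_i for all i), Z ~ N(0, L @ L.T)."""
    nd = NormalDist()
    rng = random.Random(seed)
    k = len(upper)
    total = 0.0
    for _ in range(n_draws):
        prob, e = 1.0, []
        for i in range(k):
            shift = sum(L[i][j] * e[j] for j in range(i))
            p = nd.cdf((upper[i] - shift) / L[i][i])
            prob *= p
            # draw e_i from a standard normal truncated below the bound
            e.append(nd.inv_cdf(max(rng.random() * p, 1e-12)))
        total += prob
    return total / n_draws
```

With an identity Cholesky factor the recursion collapses to the exact product of univariate probabilities, a convenient sanity check; the identification difficulties the paper stresses concern the covariance parameters inside L, not the simulator itself.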

16.
Dr. A. Chaudhuri, Metrika (1992) 39(1): 341-357
Summary: General procedures are described to generate the quantitative randomized responses (RR) required to estimate the finite population total of a sensitive variable. Permitting sample selection with arbitrary probabilities, a formula for the mean square error (MSE) of a linear estimator of the total based on RR is noted, indicating the simple modification over one that might be based on direct response (DR) if the latter were available. A general formula for an unbiased estimator of the MSE is presented. A simple approximation is proposed in case the RR ratio estimator is employed based on a simple random sample (SRS) taken without replacement (WOR). Among sampling strategies employing unbiased but not necessarily linear estimators based on RR, certain optimal ones are identified under two alternative models, analogously to well-known counterparts based on DR, if available. Unlike Warner's (1965) treatment of categorical RR, we consider quantitative RR here.
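The basic unbiasedness mechanism behind quantitative RR can be sketched with multiplicative scrambling: each respondent reports Z = Y·S, where S is drawn from a known scrambling distribution, so Z/E[S] is unbiased for Y. (The scrambling values below are invented; the paper's general unequal-probability estimators are not reproduced.)

```python
def unscramble_mean(zs, scramble_mean):
    """Unbiased estimator of the mean of Y from scrambled reports Z = Y*S,
    where the scrambling variable S has known mean scramble_mean."""
    return sum(zs) / (len(zs) * scramble_mean)

# exact unbiasedness check: average the estimator over every scramble value
scramble = [0.5, 1.0, 1.5]            # known scrambling distribution
mu_s = sum(scramble) / len(scramble)  # E[S] = 1.0
y = 10.0
expected = sum(unscramble_mean([y * s], mu_s) for s in scramble) / len(scramble)
print(expected)  # averages back to y
```

The price of privacy is the extra variance contributed by S, which is exactly the modification to the MSE formula that the abstract mentions.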

17.
The response from a factorial experiment carried out in a time sequence may be affected by uncontrollable variables that are highly correlated with the time at which they occur. In such a situation, one possibility is to randomize the run order of the experiment. Another possibility is to use a systematic run order that is robust against time trends. Since randomized run orders make the time trend part of the error, it can be hoped that systematic run orders will be more effective at identifying truly active factors. In this paper, a simulation study is used to compare the performance of randomized and systematic run orders. The response from an experiment in which we observed a strong time trend is used to demonstrate the influence of a realistic time trend on the run orders under consideration. The performance of the run orders is then measured by the probabilities of false rejection and the probabilities of detecting active contrasts. Our results show that the randomized run order maintained the nominal level, while the systematic ones did not. Moreover, when there were active factors, the systematic run orders did not achieve more power than the randomized run order.

18.
Some asymptotic tests for testing distributional assumptions, namely the half-normal and truncated normal distributions for stochastic frontier functions, are proposed. The tests are Lagrange multiplier tests based on the Pearson family of truncated distributions. The statistics can be easily computed. Simple interpretations of the statistics and two empirical examples are provided.

19.
Statistical Inference in Nonparametric Frontier Models: The State of the Art
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
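The FDH estimator itself can be stated in a few lines: a firm's input efficiency is the best input contraction achieved by any observed firm producing at least as much output (free disposability). A single-input, single-output sketch with invented data; the inference machinery the paper surveys sits on top of scores like these:

```python
def fdh_input_efficiency(data, i):
    """Input-oriented FDH score of firm i in a list of (input, output)
    pairs: min over dominating observations j (y_j >= y_i) of x_j / x_i.
    The firm itself qualifies, so the score is always <= 1."""
    x0, y0 = data[i]
    return min(x / x0 for x, y in data if y >= y0)

firms = [(2.0, 4.0), (4.0, 4.0), (3.0, 2.0)]
scores = [fdh_input_efficiency(firms, i) for i in range(len(firms))]
```

Bootstrap bias correction addresses the fact that such envelopment scores are, by construction, optimistic for the firms lying on the estimated frontier.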

20.
In latent trait theory the measurement properties of a mental test can be expressed in the test information function. The relative merits of two tests for the same latent trait can be described by the relative efficiency function, i.e. the ratio of the test information functions. It is argued that these functions have to be estimated if the values of the item difficulties are unknown. Using conditional maximum likelihood estimation as indicated by Andersen (1973), pointwise asymptotic distributions of the test information and relative efficiency functions are derived for the case of dichotomously scored Rasch-homogeneous items. Formulas for confidence intervals are derived from the asymptotic distributions. An application to a mathematics test is given and extensions to other latent trait models are discussed.
