Similar documents
20 similar documents found (search time: 31 ms)
1.
In this paper, we study the asymptotic properties of simulation extrapolation (SIMEX) based variance estimation proposed by Wang et al. (J R Stat Soc Ser B 71:425–445, 2009). We first investigate the asymptotic normality of the parameter estimator for a general parametric variance function, and of the local linear estimator for a nonparametric variance function, when permutation SIMEX (PSIMEX) is used. Asymptotically optimal bandwidth selection with respect to the approximate mean integrated squared error (AMISE) for the nonparametric estimator is also studied. We finally discuss constructing confidence intervals/bands for the parameter/function of interest. Rather than applying the asymptotic results so that a normal approximation can be used, we recommend a nonparametric Monte Carlo algorithm that avoids estimating the asymptotic variance of the estimator. Simulation studies are carried out for illustration.
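The SIMEX idea underlying the abstract above can be illustrated in its simplest form: add extra measurement error of known variance at several levels, track how the estimate degrades, and extrapolate the fitted trend back to the no-error level. This is a minimal sketch of classical SIMEX for an attenuated regression slope, not the PSIMEX variance-function estimator of the paper; the linear model, parameter values, and quadratic extrapolant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# True model: Y = 1 + 2*X + e, but we observe W = X + U with known
# measurement-error variance sigma_u^2, which attenuates the naive slope.
n, sigma_u = 2000, 0.8
x = rng.normal(0, 1, n)
y = 1 + 2 * x + rng.normal(0, 0.5, n)
w = x + rng.normal(0, sigma_u, n)

def slope(a, b):
    return np.polyfit(a, b, 1)[0]  # OLS slope of b on a

# SIMEX: for each lambda, add extra noise of variance lambda*sigma_u^2,
# average the resulting slope over B replicates, then extrapolate the
# fitted quadratic trend back to lambda = -1 (zero measurement error).
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
est = [np.mean([slope(w + np.sqrt(l) * sigma_u * rng.normal(0, 1, n), y)
                for _ in range(B)]) for l in lams]
coef = np.polyfit(lams, est, 2)
simex = np.polyval(coef, -1.0)
naive = slope(w, y)
print(round(naive, 2), round(simex, 2))
```

The naive slope is biased toward zero by roughly the factor 1/(1 + sigma_u^2); the quadratic extrapolation does not remove the bias entirely, but moves the estimate substantially back toward the true value 2.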

2.
Faraz and Parsian (Stat Pap 47:569–593, 2006) have shown that the double warning lines (DWL) scheme detects process shifts more quickly than other variable ratio sampling schemes such as variable sample sizes (VSS), variable sampling intervals (VSI), and variable sample sizes and sampling intervals (VSSVSI). In this paper, the DWL T2 control chart for monitoring the process mean vector is economically designed. The cost model proposed by Costa and Rahim (J Appl Stat 28:875–885, 2001) is used here and is minimized through a genetic algorithm (GA) approach. The effects of the model parameters on the chart parameters and the resulting operating loss are then studied, and finally a comparison among all possible variable ratio sampling (VRS) schemes is made to choose the economically best option.

3.
Stochastic FDH/DEA estimators for frontier analysis
In this paper we extend the work of Simar (J Prod Anal 28:183–201, 2007) on introducing noise in nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods for estimating production efficiency. Specifically, our approach first allows for statistical noise, similar to stochastic frontier analysis (and even in a more flexible way), and second, it allows modelling multiple-input-multiple-output technologies without imposing parametric assumptions on the production relationship, as is done in nonparametric methods such as Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating the marginal effects on the inefficiency level jointly with the marginal effects of inputs. The approach is robust to heteroskedasticity and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves the DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main weaknesses of the original DEA/FDH estimators. The procedure shows good performance in various simulated cases and is also illustrated on some real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the method of Kumbhakar et al. (J Econom 137(1):1–27, 2007) by making the resulting frontier smoother, monotonic and, if we wish, concave.

4.
The Bayesian estimation of the mean vector θ of a p-variate normal distribution under the linear exponential (LINEX) loss function is studied when, as a special restricted model, it is suspected that for a known p × r matrix Z the hypothesis θ = Zβ, β ∈ ℝ^r, may hold. In this setting we show that the Bayes and empirical Bayes estimators dominate the unrestricted estimator (obtained when nothing is known about the mean vector θ).

5.
The paper provides one of the first applications of the double bootstrap procedure (Simar and Wilson 2007) in a two-stage estimation of the effect of environmental variables on nonparametric estimates of technical efficiency. This procedure enables consistent inference within models explaining efficiency scores, while simultaneously producing standard errors and confidence intervals for these efficiency scores. The application is to 88 livestock and 256 crop farms in the Czech Republic, split into individual and corporate farms.

6.
Bayes sequential estimation in a family of transformed chi-square distributions using a LINEX loss function and a cost c > 0 for each observation is considered in this paper. It is shown that the asymptotically pointwise optimal (A.P.O.) rule is asymptotically non-deficient, i.e., the difference between the Bayes risk of the A.P.O. rule and the Bayes risk of the optimal procedure is of smaller order of magnitude than c, the cost of a single observation, as c → 0.
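For readers unfamiliar with the LINEX loss used above: under L(δ, θ) = exp(a(δ − θ)) − a(δ − θ) − 1, the Bayes estimator is δ* = −(1/a) log E[exp(−aθ) | data], which for a N(μ, τ²) posterior reduces to μ − aτ²/2. The sketch below checks this standard closed form numerically against a grid minimization of Monte Carlo posterior expected loss; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# LINEX loss: L(delta, theta) = exp(a*(delta - theta)) - a*(delta - theta) - 1.
# Bayes rule: delta* = -(1/a) * log E[exp(-a*theta) | data];
# for a N(mu, tau^2) posterior this reduces to mu - a*tau**2/2.
a, mu, tau = 1.5, 2.0, 0.7
theta = rng.normal(mu, tau, 200_000)            # draws from the posterior

def post_risk(delta):
    d = delta - theta
    return np.mean(np.exp(a * d) - a * d - 1)   # Monte Carlo expected loss

grid = np.linspace(mu - 1.5, mu + 1.5, 601)
numeric = grid[np.argmin([post_risk(d) for d in grid])]
closed = mu - a * tau**2 / 2                    # closed-form Bayes estimate
print(round(numeric, 2), round(closed, 4))
```

Note the asymmetry: with a > 0 the estimator shrinks below the posterior mean, penalizing overestimation more heavily than underestimation.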

7.
Empirical Bayes methods of estimating the local false discovery rate (LFDR) by maximum likelihood estimation (MLE), originally developed for large numbers of comparisons, are applied to a single comparison. Specifically, when a lower bound on the mixing proportion of true null hypotheses is assumed, the LFDR MLE can yield reliable hypothesis tests and confidence intervals given as few as one comparison. Simulations indicate that constrained LFDR MLEs perform markedly better than conventional methods, both in testing and in confidence intervals, for high values of the mixing proportion, but not for low values. (A decision-theoretic interpretation of the confidence distribution made those comparisons possible.) In conclusion, the constrained LFDR estimators and the resulting effect-size interval estimates are not only effective multiple comparison procedures but might also replace p-values and confidence intervals more generally. The new methodology is illustrated with the analysis of proteomics data.

8.
Ordered data arise naturally in many fields of statistical practice. Often some sample values are unknown or disregarded for various reasons. On the basis of some sample quantiles from the Rayleigh distribution, the problems of estimating the Rayleigh parameter, hazard rate and reliability function, and of predicting future observations, are addressed from a Bayesian perspective. The construction of β-content and β-expectation Bayes tolerance limits is also tackled. Under squared-error loss, Bayes estimators and predictors are derived analytically. Exact tolerance limits are obtained by solving simple nonlinear equations. Highest posterior density estimators and credibility intervals, as well as Bayes estimators and predictors under linear loss, can easily be computed iteratively.

9.
For estimating an unknown scale parameter of the Gamma distribution, we introduce an asymmetric scale-invariant loss function reflecting precision of estimation. This loss belongs to the class of precautionary loss functions. The problem of estimating the scale parameter of a Gamma distribution arises in several theoretical and applied problems. Explicit forms of the risk-unbiased, minimum-risk scale-invariant, Bayes, generalized Bayes and minimax estimators are derived. We characterize the admissibility and inadmissibility of a class of linear estimators of the form $cX + d$, where $X \sim \Gamma(\alpha, \eta)$. In Bayesian statistical inference, any statistical problem should be treated under a given loss function by specifying a prior distribution over the parameter space; the arbitrariness of a unique prior distribution is therefore a critical and permanent question. To address this issue, we consider robust Bayesian analysis and study in detail Gamma minimax, conditional Gamma minimax, stable, and posterior regret Gamma minimax estimation of the unknown scale parameter under the asymmetric scale-invariant loss function.

10.
An important issue when conducting stochastic frontier analysis is how to choose a proper parametric model, which includes choices of the functional form of the frontier function, the distributions of the composite errors, and the exogenous variables. In this paper, we extend the likelihood ratio test of Vuong (Econometrica 57(2):307–333, 1989) and Takeuchi's (Suri-Kagaku (Math Sci) 153:12–18, 1976) model selection criterion to stochastic frontier models. The most attractive feature of this test is that it can not only be used for testing non-nested models, but also remains applicable even when the general model is misspecified. Finally, we demonstrate how to apply this test to the Indian farm data used by Battese and Coelli (J Prod Anal 3:153–169, 1992; Empir Econ 20(2):325–332, 1995) and Alvarez et al. (J Prod Anal 25:201–212, 2006).

11.
This paper examines the wide-spread practice where data envelopment analysis (DEA) efficiency estimates are regressed on some environmental variables in a second-stage analysis. In the literature, only two statistical models have been proposed in which second-stage regressions are well-defined and meaningful. In the model considered by Simar and Wilson (J Prod Anal 13:49–78, 2007), truncated regression provides consistent estimation in the second stage, where as in the model proposed by Banker and Natarajan (Oper Res 56: 48–58, 2008a), ordinary least squares (OLS) provides consistent estimation. This paper examines, compares, and contrasts the very different assumptions underlying these two models, and makes clear that second-stage OLS estimation is consistent only under very peculiar and unusual assumptions on the data-generating process that limit its applicability. In addition, we show that in either case, bootstrap methods provide the only feasible means for inference in the second stage. We also comment on ad hoc specifications of second-stage regression equations that ignore the part of the data-generating process that yields data used to obtain the initial DEA estimates.  相似文献   

12.
In this paper, the two-step generalized estimating equations (GEE) approach developed by Wang and Fitzmaurice (Biom J 2:302–318, 2006) is employed to handle income non-responses in the Panel Study of Family Dynamics survey conducted in Taiwan. In our analysis, we first construct a conditional logit model of the paid work equation by taking the missing patterns into account. We then use the estimation results to impute whether or not the nonresponses were working for pay. For those who were imputed or observed to work for pay, we adopt the two-step GEE method to estimate the income equation. Compared to simply deleting the missing cases, the two-step imputation procedure is found to improve the estimation results.  相似文献   

13.
Manoj Chacko 《Metrika》2017,80(3):333-349
In this paper we consider Bayes estimation based on ranked set sample when ranking is imperfect, in which units are ranked based on measurements made on an easily and exactly measurable auxiliary variable X which is correlated with the study variable Y. Bayes estimators under squared error loss function and LINEX loss function for the mean of the study variate Y, when (XY) follows a Morgenstern type bivariate exponential distribution, are obtained based on both usual ranked set sample and extreme ranked set sample. Estimation procedures developed in this paper are illustrated using simulation studies and a real data.  相似文献   

14.
Summary In case of absolute error loss we investigate for an arbitrary class of probability distributions, if or if not a two point prior can be least favourable and a corresponding Bayes estimator can be minimax when the parameter is restricted to a closed and bounded interval of ℝ. The general results are applied to several examples, for instance location and scale parameter families are considered. We give examples for which, independent of the length of the parameter interval, no two point priors exist. On the other hand examples are given having a least favourable two point prior when the parameter interval is sufficiently small.  相似文献   

15.
This paper deals with the estimation of P[Y < X] when X and Y are two independent generalized exponential distributions with different shape parameters but having the same scale parameters. The maximum likelihood estimator and its asymptotic distribution is obtained. The asymptotic distribution is used to construct an asymptotic confidence interval of P[Y < X]. Assuming that the common scale parameter is known, the maximum likelihood estimator, uniformly minimum variance unbiased estimator and Bayes estimator of P[Y < X] are obtained. Different confidence intervals are proposed. Monte Carlo simulations are performed to compare the different proposed methods. Analysis of a simulated data set has also been presented for illustrative purposes.Part of the work was supported by a grant from the Natural Sciences and Engineering Research Council  相似文献   

16.
The Marginal Trader Hypothesis (Forsythe et al. 1992, in American Economic Review 82(5): 1142–1161) posits that a small group of well-informed traders keep an asset’s market price equal to its fundamental value. Forsythe et al. base this claim on evidence from U.S. presidential prediction markets. We test the Marginal Trader Hypothesis by examining a decision task that precludes marginal traders. Specifically, students are asked to predict the class average for a given exam. We show that performance on our task is similar to that reported for the Iowa Electronic Markets, and that accuracy is unrelated to academic performance and does not correlate across tasks.  相似文献   

17.
Mukhopadhyay and Padmanabhan (Metrika 40:121–128, 1993) considered the construction of fixed-width confidence intervals for the difference of location parameters of two negative exponential distributions via triple sampling when the scale parameters are unknown and unequal. Under the same setting, this paper deals with the problem of fixed-width confidence interval estimation for a linear combination of location parameters, using the above mentioned three-stage procedure.  相似文献   

18.
A stochastic frontier model with correction for sample selection   总被引:3,自引:2,他引:1  
Heckman’s (Ann Econ Soc Meas 4(5), 475–492, 1976; Econometrica 47, 153–161, 1979) sample selection model has been employed in three decades of applications of linear regression studies. This paper builds on this framework to obtain a sample selection correction for the stochastic frontier model. We first show a surprisingly simple way to estimate the familiar normal-half normal stochastic frontier model using maximum simulated likelihood. We then extend the technique to a stochastic frontier model with sample selection. In an application that seems superficially obvious, the method is used to revisit the World Health Organization data (WHO in The World Health Report, WHO, Geneva 2000; Tandon et al. in Measuring the overall health system performance for 191 countries, World Health Organization, 2000) where the sample partitioning is based on OECD membership. The original study pooled all 191 countries. The OECD members appear to be discretely different from the rest of the sample. We examine the difference in a sample selection framework.  相似文献   

19.
This paper proposes a dual-level inefficiency model for analysing datasets with a sub-company structure, which permits firm inefficiency to be decomposed into two parts: a component that varies across different sub-companies within a firm (internal inefficiency); and a persistent component that applies across all sub-companies in the same firm (external inefficiency). We adapt the models developed by Kumbhakar and Hjalmarsson (J Appl Econom 10:33–47, 1995) and Kumbhakar and Heshmati (Am J Agric Econ 77:660–674, 1995), making the same distinction between persistent and residual inefficiency, but in our case across sub-companies comprising a firm, rather than over time. The proposed model is important in a regulatory context, where datasets with a sub-company structure are commonplace, and regulators are interested in identifying and eliminating both persistent and sub-company varying inefficiency. Further, as regulators often have to work with small cross-sections, the utilisation of sub-company data can be seen as an additional means of expanding cross-sectional datasets for efficiency estimation. Using an international dataset of rail infrastructure managers we demonstrate the possibility of separating firm inefficiency into its persistent and sub-company varying components. The empirical illustration highlights the danger that failure to allow for the dual-level nature of inefficiency may cause overall firm inefficiency to be underestimated.  相似文献   

20.
This paper applies Novak’s (1998) theory of learning to the problem of workplace bullying. Novak’s theory offers an understanding of how actions of bullying and responses to bullying can be seen as deriving from individualized conceptualizations of workplace bullying by those involved. Further, Novak’s theory suggests that training involving Ausubel’s concept of meaningful learning (Ausubel Educational Theory 11(1): 15–25, 1961; Ausubel et al. 1978) which attends to learners’ pre-existing knowledge and allows for new meaning to be constructed regarding workplace bullying can lead to new actions related to workplace bullying. Ideally, these new actions can involve both a reduction in workplace bullying prevalence, and responses to workplace bullying which recognize and are informed by the negative consequences of this workplace dynamic.  相似文献   

设为首页 | 免责声明 | 关于勤云 | 加入收藏

Copyright©北京勤云科技发展有限公司  京ICP备09084417号