Similar Literature
20 similar articles found (search time: 46 ms)
1.
Bing Guo  Qi Zhou  Runchu Zhang 《Metrika》2014,77(6):721-732
Zhang et al. (Stat Sinica 18:1689–1705, 2008) introduced an aliased effect-number pattern for two-level regular designs and proposed a general minimum lower-order confounding (GMC) criterion for choosing optimal designs. All the GMC \(2^{n-m}\) designs with \(N/4+1\le n\le N-1\) were constructed by Li et al. (Stat Sinica 21:1571–1589, 2011), Zhang and Cheng (J Stat Plan Inference 140:1719–1730, 2010) and Cheng and Zhang (J Stat Plan Inference 140:2384–2394, 2010), where \(N=2^{n-m}\) is the run number and \(n\) is the factor number. In this paper, we first study some further properties of GMC designs, and then construct all the GMC \(2^{n-m}\) designs with \(n\le N-1\) in three parameter cases: (i) \(m\le 4\); (ii) \(m\ge 5\) and \(n=(2^m-1)u+r\) for \(u>0\) and \(r=0,1,2\); and (iii) \(m\ge 5\) and \(n=(2^m-1)u+r\) for \(u\ge 0\) and \(r=2^m-3,2^m-2\).

2.
In this paper, we discuss in a general framework the design-based estimation of population parameters when sensitive data are collected by randomized response techniques. We show in close detail the procedure for estimating the distribution function of a sensitive quantitative variable and how to estimate simultaneously the population prevalence of individuals bearing a stigmatizing attribute and the distribution function for the members belonging to the hidden group. The randomized response devices by Greenberg et al. (J Am Stat Assoc 66:243–250, 1971), Franklin (Commun Stat Theory Methods 18:489–505, 1989), and Singh et al. (Aust NZ J Stat 40:291–297, 1998) are here considered as data-gathering tools.
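As an illustration of the kind of design-based inversion involved, here is a minimal sketch of the unrelated-question device of Greenberg et al. (1971) for prevalence estimation; the function name and toy numbers are ours, not the paper's. Each respondent answers the sensitive question with probability `p` and an innocuous question of known prevalence `pi_y` otherwise, so the observed proportion of "yes" answers satisfies `lam = p*pi_a + (1-p)*pi_y`, which can be solved for `pi_a`:

```python
def greenberg_estimator(yes_count, n, p, pi_y):
    """Unrelated-question randomized response: with probability p the
    respondent answers the sensitive question, otherwise an innocuous
    question with known prevalence pi_y.  The observed 'yes' proportion
    lam_hat = p*pi_a + (1-p)*pi_y is inverted for pi_a."""
    lam_hat = yes_count / n
    return (lam_hat - (1 - p) * pi_y) / p

# toy check: true pi_a = 0.3, p = 0.7, pi_y = 0.5 gives lam = 0.36
estimate = greenberg_estimator(36, 100, 0.7, 0.5)
```

With 36 "yes" answers out of 100 under these settings, the estimator recovers the underlying prevalence of 0.3 exactly, since the observed proportion matches its expectation.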

3.
The purpose of this note is twofold. First, we survey the study of the percolation phase transition on the Hamming hypercube $\{0,1\}^{m}$ obtained in the series of papers (Borgs et al. in Random Struct Algorithms 27:137–184, 2005; Borgs et al. in Ann Probab 33:1886–1944, 2005; Borgs et al. in Combinatorica 26:395–410, 2006; van der Hofstad and Nachmias in Hypercube percolation, Preprint 2012). Second, we explain how this study can be performed without the use of the so-called “lace expansion” technique. To that aim, we provide a novel simple proof that the triangle condition holds at the critical probability.

4.
Qiqing Yu  Yuting Hsu  Kai Yu 《Metrika》2014,77(8):995-1011
The non-parametric likelihood L(F) for censored data, including univariate or multivariate right-censored, doubly-censored, interval-censored, or masked competing risks data, was proposed by Peto (Appl Stat 22:86–91, 1973). It does not involve censoring distributions. In the literature, several noninformative conditions have been proposed to justify L(F) so that the GMLE can be consistent (see, for example, Self and Grossman in Biometrics 42:521–530, 1986, or Oller et al. in Can J Stat 32:315–326, 2004). We present the necessary and sufficient (N&S) condition under which \(L(F)\) is equivalent to the full likelihood in the non-parametric set-up. The statement is false under the parametric set-up. Our condition is slightly different from the noninformative conditions in the literature. We present two applications to our cancer research data that satisfy the N&S condition but have dependent censoring.

5.
While the high prevalence of mental illness in workplaces is more readily documented in the literature than it was ten or so years ago, it remains largely confined to the medical and health sciences fields. This may account for the lack of information about mental illness in workplaces (Dewa et al. Healthcare Papers 5:12–25, 2004) among operational managers and human resource departments, even though such illnesses affect on average 17 % to 20 % of employees in any 12-month period (MHCC 2012; SAMHSA 2010; ABS 2007). As symptoms of mental illness can impact negatively on employee work performance and/or attendance, the ramifications for employee performance management systems can be significant, particularly when employees deliberately conceal their illness, so that any work concerns appear to derive from issues other than illness (Dewa et al. Healthcare Papers 5:12–25, 2004; De Lorenzo 2003). When employee non-disclosure of a mental illness impacts negatively in the workplace, it presents a very challenging performance-management issue for both operational managers and human resource staff. Without documented medical evidence showing that impaired work performance and/or attendance is attributable to a mental illness, the issue of performance management arises. Currently, when there is no documented medical illness, performance management policies are often brought into place to improve employee performance and/or attendance by establishing achievable employee targets. Yet, given that in any twelve-month period at least a fifth of the workforce sustains a mental illness (MHCC 2012; SAMHSA 2010; ABS 2007), and that non-disclosure is significant (Barney et al. BMC Public Health 9:1–11, 2009; Munir et al. Social Science & Medicine 60:1397–1407, 2005), such targets may be unachievable for employees with a hidden mental illness.
It is for these reasons that this paper reviews the incidence of mental illness in western economies, its costs, and the reasons why it is often concealed, and proposes the adoption of what are termed ‘Buffer Stage’ policies as an added tool that organisations may wish to utilise in the management of hidden medical illnesses such as mental illness.

6.
This paper considers three ratio estimators of the population mean that use the known correlation coefficient between the study and auxiliary variables in simple random sampling when some sample observations are missing. The suggested estimators are compared with the estimators of Singh and Horn (Metrika 51:267–276, 2000), Singh and Deo (Stat Pap 44:555–579, 2003) and Kadilar and Cingi (Commun Stat Theory Methods 37:2226–2236, 2008), as well as with other imputation estimators based on the mean or a ratio. It is found that the suggested estimators are approximately unbiased for the population mean and perform well when compared with the other estimators considered in this study.
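For readers unfamiliar with the general idea, here is a minimal sketch of generic ratio imputation, the mechanism underlying estimators of this kind; it is not any of the specific estimators compared in the paper, and the function name is hypothetical. A missing study value is replaced by `r_hat * x_i`, where `r_hat` is the ratio of responding-unit totals of the study and auxiliary variables:

```python
def ratio_impute_mean(y, x):
    """Ratio imputation for a mean: missing y_i (None) are replaced by
    r_hat * x_i, where r_hat is the ratio of responding-unit totals of
    y and x; the estimator is the mean of the completed sample."""
    resp = [(yi, xi) for yi, xi in zip(y, x) if yi is not None]
    r_hat = sum(yi for yi, _ in resp) / sum(xi for _, xi in resp)
    y_completed = [yi if yi is not None else r_hat * xi
                   for yi, xi in zip(y, x)]
    return sum(y_completed) / len(y_completed)
```

With `y = [2, 4, None, 8]` and `x = [1, 2, 3, 4]`, the responding ratio is 2, the missing value is imputed as 6, and the estimated mean is 5.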

7.
Xin Liu  Rong-Xian Yue 《Metrika》2013,76(4):483-493
This paper considers the optimal design problem for multiresponse regression models. The $R$-optimality introduced by Dette (J R Stat Soc B 59:97–110, 1997) for single response experiments is extended to the case of multiresponse parameter estimation. A general equivalence theorem for the $R$-optimality is provided for multiresponse models. Illustrative examples of the $R$-optimal designs for two multiresponse models are presented based on the general equivalence theorem.

8.
Despite the pervasiveness of Six Sigma programs, there is rising concern regarding the failure of many such programs. One explanation for many Six Sigma failures could be escalation of commitment. Escalation of commitment refers to the propensity of decision-makers to continue investing in a failing course of action. Many researchers have applied escalation of commitment to explain the behavior of individuals, groups, companies, and nations. Using the escalation of commitment model (Staw and Ross 1987a; Ross and Staw Acad. Manag. J. 36:701–732, 1993) as a basis, this research describes a Six Sigma failure in an electrical components company. In documenting this failure, this research contributes in two ways, both to the practice and to the theory of Six Sigma. First, while examining the Six Sigma failure, this research uncovers important factors for successful implementation, which should improve the practice of Six Sigma. Second, academic research (e.g., Schroeder et al. J. Oper. Manag. 26:536–554, 2008; Zu et al. J. Oper. Manag. 26:630–650, 2008) is engaged in uncovering the definition of Six Sigma and its differences from other improvement programs. This research provides a new direction to academic research and has the potential to impact the theory of Six Sigma.

9.
Sangun Park 《Metrika》2014,77(5):609-616
The representation of the entropy in terms of the hazard function and its extensions have been studied by many authors, including Teitler et al. (IEEE Trans Reliab 35:391–395, 1986). In this paper, we consider a representation of the Kullback–Leibler information of the first \(r\) order statistics in terms of the relative risk (Park and Shin in Statistics, 2012), the ratio of hazard functions, and extend it to progressively Type II censored data. Then we study the change in the Kullback–Leibler information of the first \(r\) order statistics as \(r\) varies and discuss its relation to the Fisher information in order statistics.

10.
This paper proposes a new two-step stochastic frontier approach to estimate technical efficiency (TE) scores for firms in different groups adopting distinct technologies. Analogous to Battese et al. (J Prod Anal 21:91–103, 2004), the metafrontier production function allows for calculating comparable TE measures, which can be decomposed into group-specific TE measures and technology gap ratios. The proposed approach differs from Battese et al. (J Prod Anal 21:91–103, 2004) and O’Donnell et al. (Empir Econ 34:231–255, 2008) mainly in the second step, where a stochastic frontier analysis model is formulated and applied to obtain the estimates of the metafrontier, instead of relying on programming techniques. The estimators so derived have desirable statistical properties and enable statistical inferences to be drawn. While the within-group variation in firms’ technical efficiencies is frequently assumed to be associated with firm-specific exogenous variables, the between-group variation in technology gaps can be specified as a function of some exogenous variables to take account of group-specific environmental differences. Two empirical applications are illustrated and the results appear to support the use of our model.

11.
This paper investigates the relationship between secure implementability (Saijo et al. in Theor Econ 2:203–229, 2007) and full implementability in truthful strategies (Nicolò in Rev Econ Des 8:373–382, 2004). Although secure implementability is in general stronger than full implementability in truthful strategies, this paper shows that both properties are equivalent under the social choice function that satisfies non-wastefulness (Li and Xue in Econ Theory, doi:10.1007/s00199-012-0724-0) in pure exchange economies with Leontief utility functions.

12.
We study a keyword auction model where bidders have constrained budgets. In the absence of budget constraints, Edelman et al. (Am Econ Rev 97(1):242–259, 2007) and Varian (Int J Ind Organ 25(6):1163–1178, 2007) analyze “locally envy-free equilibrium” or “symmetric Nash equilibrium” bidding strategies in generalized second-price auctions. However, bidders often have to set their daily budgets when they participate in an auction; once a bidder’s payment reaches his budget, he drops out of the auction. This raises an important strategic issue that has been overlooked in the previous literature: Bidders may change their bids to inflict higher prices on their competitors because under generalized second-price, the per-click price paid by a bidder is the next highest bid. We provide budget thresholds under which equilibria analyzed in Edelman et al. (Am Econ Rev 97(1):242–259, 2007) and Varian (Int J Ind Organ 25(6):1163–1178, 2007) are sustained as “equilibria with budget constraints” in our setting. We then consider a simple environment with one position and two bidders and show that a search engine’s revenue with budget constraints may be larger than its revenue without budget constraints.
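The generalized second-price allocation and pricing rule described above can be sketched as follows; the function name is ours, and budget-induced drop-outs, the paper's main focus, are deliberately omitted. Each of the top positions goes to the next-highest bidder, who pays per click the bid ranked immediately below his own:

```python
def gsp_outcome(bids, n_positions):
    """Generalized second-price auction without budgets: positions are
    awarded in decreasing order of bid, and each winner's per-click
    price is the bid ranked immediately below his own (0 if none)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = {}
    for pos in range(min(n_positions, len(ranked))):
        bidder, _ = ranked[pos]
        price = ranked[pos + 1][1] if pos + 1 < len(ranked) else 0.0
        outcome[bidder] = (pos, price)
    return outcome
```

With bids `{'a': 5, 'b': 3, 'c': 1}` and two positions, bidder `a` wins the top slot at per-click price 3 and `b` the second at price 1. Modeling budgets would amount to removing a bidder from `ranked` once cumulative payments reach his budget, which is exactly the strategic channel the paper analyzes.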

13.
Qingming Zou  Zhongyi Zhu 《Metrika》2014,77(2):225-246
The single-index model is an important tool in multivariate nonparametric regression. This paper deals with M-estimators for the single-index model. Unlike existing M-estimators for the single-index model, the unknown link function is approximated by a B-spline, and M-estimators for the parameter and the nonparametric component are obtained in one step. The proposed M-estimator of the unknown function is shown to attain the optimal global rate of convergence of estimators for nonparametric regression established by Stone (Ann Stat 8:1348–1360, 1980; Ann Stat 10:1040–1053, 1982), and the M-estimator of the parameter is $\sqrt{n}$-consistent and asymptotically normal. A small-sample simulation study shows that the M-estimators proposed in this paper are robust. An application to real data illustrates the estimator’s usefulness.

14.
The recombining binomial tree approach, which was initiated by Cox et al. (J Financ Econ 7:229–263, 1979) and extended to arbitrary diffusion models by Nelson and Ramaswamy (Rev Financ Stud 3(3):393–430, 1990) and Hull and White (J Financ Quant Anal 25:87–100, 1990a), is applied to the simultaneous evaluation of price and Greeks for the amortized fixed and variable rate mortgage prepayment option. We consider the simplified binomial tree approximation to arbitrary diffusion processes by Costabile and Massabo (J Deriv 17(3):65–85, 2010) and analyze its numerical applicability to the mortgage valuation problem for some Vasicek and CIR-like interest rate models. For fixed rates and binomial trees with about a thousand steps, we obtain very good results. For the Vasicek model, we also compare the closed-form analytical approximation of the callable fixed rate mortgage price by Xie (IAENG Int J Appl Math 39(1):9, 2009) with its binomial tree counterpart. With respect to the binomial tree values, one observes a systematic underestimation (overestimation) of the analytical approximation of the callable mortgage price (prepayment option price). This numerical discrepancy increases at longer maturities and becomes impractical for a reliable estimation of the prepayment option price.
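For readers unfamiliar with the recombining binomial tree of Cox et al. (1979), here is a minimal sketch of the method in its original equity-option setting, not the mortgage-valuation application of the paper; the function name and parameter values are ours:

```python
import math

def crr_call(s0, k, r, sigma, t, n):
    """European call on the Cox-Ross-Rubinstein recombining tree:
    n steps, up factor u = exp(sigma*sqrt(dt)), down factor d = 1/u,
    risk-neutral up probability q, then backward induction."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # option values at maturity; node j has j up-moves
    values = [max(s0 * u**j * d**(n - j) - k, 0.0) for j in range(n + 1)]
    for _ in range(n):  # roll back one step at a time
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

With `s0 = k = 100`, `r = 0.05`, `sigma = 0.2`, `t = 1` and a few hundred steps, the tree price converges toward the Black–Scholes value of about 10.45. Greeks can be read off the same tree from finite differences of neighbouring nodes, which is the "simultaneous evaluation of price and Greeks" the abstract refers to.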

15.
We find necessary and sufficient conditions for the market symmetry property, introduced by Fajardo and Mordecki (Quant Finance 6(3):219–227, 2006), to hold in the Ornstein–Uhlenbeck stochastic volatility model, henceforth OU–SV. In particular, we address the non-Gaussian OU–SV model proposed by Barndorff-Nielsen and Shephard (J R Stat Soc B 63(Part 2):167–241, 2001). Also, we prove the Bates’ rule for these models.

16.
In this paper, we discuss the asymptotic infimum coverage probability (ICP) of eight widely used confidence intervals for proportions, including the Agresti–Coull (A–C) interval (Am Stat 52:119–126, 1998) and the Clopper–Pearson (C–P) interval (Biometrika 26:404–413, 1934). For the A–C interval, a sharp upper bound for its asymptotic ICP is derived. It is less than nominal for the commonly applied nominal values of 0.99, 0.95 and 0.9, and is equal to zero when the nominal level is below 0.4802. The \(1-\alpha \) C–P interval is known to be conservative. However, we show through a brief numerical study that the C–P interval with a given average coverage probability \(1-\gamma \) typically has a similar or larger ICP and a smaller average expected length than the corresponding A–C interval, and its ICP approaches \(1-\gamma \) as the sample size grows large. All mathematical proofs and R codes for computation in the paper are given in the Supplementary Materials.
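For reference, here are hedged sketches of the two intervals compared above; the function names are ours, and the C–P bounds are obtained by bisection on the binomial tail equations rather than via beta quantiles, to keep the sketch dependency-free:

```python
import math

def _binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact C-P interval: the bounds solve the binomial tail
    equations, found here by bisection (cdf is decreasing in p)."""
    def bisect(f):
        lo, hi = 0.0, 1.0
        for _ in range(80):
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return (lo + hi) / 2.0
    lower = 0.0 if x == 0 else bisect(
        lambda p: _binom_cdf(x - 1, n, p) > 1 - alpha / 2)
    upper = 1.0 if x == n else bisect(
        lambda p: _binom_cdf(x, n, p) > alpha / 2)
    return lower, upper

def agresti_coull(x, n, z=1.959964):
    """A-C interval: add z^2 pseudo-trials, then a Wald-type interval
    around the adjusted proportion (z fixed at the 95% quantile)."""
    n_t = n + z**2
    p_t = (x + z**2 / 2.0) / n_t
    half = z * math.sqrt(p_t * (1 - p_t) / n_t)
    return max(0.0, p_t - half), min(1.0, p_t + half)
```

For `x = 5` successes in `n = 10` trials, the 95 % C–P interval is roughly (0.187, 0.813) and the A–C interval roughly (0.237, 0.763), illustrating the conservatism of C–P that the abstract discusses.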

17.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case that the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.

18.
This study follows the structure of Grifell-Tatjé and Lovell (Manag Sci 45:1177–1193, 1999) and uses the non-parametric approach to decompose the change in profit of Taiwanese banks into various drivers. However, risk has not been considered in previous papers based on profit decomposition. Without considering risk, the empirical results of the profit-change decomposition will be biased. In fact, risk is a joint but undesirable output which cannot be freely disposed of under various regulations. The non-performing loan (NPL) is employed as a risk indicator for decomposing the change in profit in this study. This study also performs a three-way comparison among (1) the original Grifell-Tatjé and Lovell (Manag Sci 45:1177–1193, 1999) analysis (OGLA) model, which ignores NPL; (2) the extended Grifell-Tatjé and Lovell analysis (EGLA) model, which builds on the OGLA model and incorporates NPL; and (3) the directional distance function (DDF) model, which is based on Juo et al. (Omega 40:550–561, 2012) and incorporates NPL, to see whether incorporating the undesirable output matters. The decomposition of the change in profit in the above three models is then illustrated using Taiwanese banks over the period 2006–2010.

19.
Zhaoping Hong  Yuao Hu  Heng Lian 《Metrika》2013,76(7):887-908
In this paper, we consider the problem of simultaneous variable selection and estimation for varying-coefficient partially linear models in a “small $n$, large $p$” setting, where the number of coefficients in the linear part diverges with sample size while the number of varying coefficients is fixed. A similar problem was considered in Lam and Fan (Ann Stat 36(5):2232–2260, 2008) based on kernel estimates for the nonparametric part, in which no variable selection was investigated and $p$ was assumed to be smaller than $n$. Here we use polynomial splines to approximate the nonparametric coefficients, which is more computationally expedient; we demonstrate the convergence rates as well as asymptotic normality of the linear coefficients, and further present the oracle property of the SCAD-penalized estimator, which works for $p$ almost as large as $\exp \{n^{1/2}\}$ under mild assumptions. Monte Carlo studies and a real data analysis are presented to demonstrate the finite-sample behavior of the proposed estimator. Our theoretical and empirical investigations are actually carried out for the generalized varying-coefficient partially linear models, including both Gaussian data and binary data as special cases.

20.
In this paper, we employ non-standard log-linear models to fit the double symmetry models and some of their decompositions to square contingency tables with ordered categories. SAS PROC GENMOD was employed to fit these models, although we could similarly have used GENLOG in SPSS or GLM in STATA. A SAS macro generates the factor or scalar variables required to fit these models. Two sets of \(4 \times 4\) unaided distance vision data previously analyzed in Tahata and Tomizawa (Journal of the Japan Statistical Society 36:91–106, 2006) were employed for verification of results. We also extend the approach to the Danish \(5 \times 5\) mobility data as well as to the \(3 \times 3\) Danish longitudinal study data on subjective health, first reported in Andersen (The Statistical Analysis of Categorical Data, Springer, Berlin, 1994) and analyzed in Tahata and Tomizawa (Statistical Methods and Applications 19:307–318, 2010). The results obtained agree with those published in previous literature on the subject. The approach suggested here eliminates any programming that might otherwise be required to apply this class of models to square contingency tables.

