Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
In the present investigation, a new forced quantitative randomized response (FQRR) model is proposed. Both situations, in which the values of the forced quantitative response are known and unknown, are studied. The forced qualitative randomized response models due to Liu and Chow (J Am Stat Assoc 71:72–73, 1976a; Biometrics 32:607–618, 1976b) and Stem and Steinhorst (J Am Stat Assoc 79:555–564, 1984) are shown to be special cases in which the forced quantitative randomized response is simply replaced by a forced “yes” response. The proposed FQRR model remains more efficient than the recent model of Bar-Lev et al. (Metrika 60:255–260, 2004), hereafter the BBB model. The relative efficiency of the proposed FQRR model with respect to existing competitors, such as the BBB model, is investigated under different situations. The present model is likely to lead to several new developments in the field of randomized response sampling and to encourage researchers to pursue this line of work.
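A minimal simulation sketch of a generic forced-response estimator may help fix ideas. It assumes a simplified design in which each respondent reports the true value with probability p and a known forced value F otherwise; the exact FQRR design of the paper is not reproduced, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical forced-response design (an illustration, not the paper's exact
# FQRR model): with probability p the respondent reports the true sensitive
# value Y, and with probability 1-p a known forced value F.
n, p, F = 2000, 0.7, 10.0
true_mean = 25.0
y = rng.exponential(true_mean, size=n)      # sensitive variable
forced = rng.random(n) > p                  # who gives the forced answer
z = np.where(forced, F, y)                  # observed randomized responses

# Unbiased moment estimator: E[Z] = p*mu + (1-p)*F  =>  mu_hat = (z_bar - (1-p)F)/p
mu_hat = (z.mean() - (1 - p) * F) / p
var_hat = z.var(ddof=1) / (n * p**2)        # approximate variance of mu_hat
print(f"mu_hat = {mu_hat:.2f} (true mean {true_mean}), s.e. ~ {np.sqrt(var_hat):.2f}")
```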

2.
This paper uses firm-level data from the Amadeus database to investigate the distribution of labour productivity in different European countries. We find that the upper tail of the empirical productivity distributions follows a decaying power law, whose exponent α is obtained by a semi-parametric estimation technique recently developed by Clementi et al. (Physica A 370(1):49–53, 2006). The emergence of “fat tails” in productivity distributions was already detected in Di Matteo et al. (Eur Phys J B 47(3):459–466, 2005) and explained by means of a social network model. Here we test this model on a broader sample of countries with differing social network structures. These differences in social attitudes, measured using a social capital indicator, are reflected in the power-law exponent estimates, thereby verifying the linkage between firms’ productivity performance and social networks.
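The semi-parametric technique of Clementi et al. is not reproduced here, but a standard Hill estimator is a common way to estimate a tail exponent from the k largest observations and can serve as a rough stand-in; the data and tuning constant below are illustrative.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail exponent alpha from the k largest observations;
    a standard stand-in for the semi-parametric method of Clementi et al."""
    x = np.sort(np.asarray(x))[::-1]        # descending order
    logs = np.log(x[:k]) - np.log(x[k])     # log-excesses over the (k+1)-th value
    return 1.0 / logs.mean()                # alpha_hat = k / sum(log x_i / x_k)

rng = np.random.default_rng(0)
alpha_true = 2.5
sample = rng.pareto(alpha_true, 50_000) + 1.0   # Pareto tail, exponent 2.5
print(f"alpha_hat = {hill_estimator(sample, k=2000):.2f} (true {alpha_true})")
```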

3.
Many models for analyzing incomplete data that allow the missingness to be non-random have been developed. Since such models necessarily rely on unverifiable assumptions, considerable research is now devoted to assessing the sensitivity of the resulting inferences. A popular sensitivity route, next to local influence (Cook in J Roy Stat Soc Ser B 48(2):133–169, 1986; Jansen et al. in Biometrics 59:410–419, 2003) and so-called intervals of ignorance (Molenberghs et al. in Appl Stat 50:15–29, 2001), is based on contrasting more conventional selection models with members of the pattern-mixture model family. In the first family, the outcome of interest is modeled directly, while in the second the natural parameter describes the measurement process conditional on the missingness pattern. This implies that a direct comparison ought not to be made in terms of parameter estimates, but should instead proceed by marginalizing the pattern-mixture model over the patterns. While this is relatively straightforward for linear models, the picture is less clear for the important setting of categorical outcomes, since such models ordinarily exhibit a certain amount of non-linearity. Following ideas laid out in Jansen and Molenberghs (Pattern-mixture models for categorical outcomes with non-monotone missingness. Submitted for publication, 2007), we offer ways to marginalize pattern-mixture-model-based parameter estimates and supplement these with asymptotic variance formulas. The modeling context is provided by the multivariate Dale model. The performance of the method and its usefulness for sensitivity analysis are scrutinized using simulations.
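A schematic of the marginalization step, under the simplifying assumption of independent pattern-specific estimates and with toy numbers (not a Dale-model fit): the marginal parameter is the pattern-probability-weighted average of the pattern-specific estimates, with a delta-method variance.

```python
import numpy as np

# Pattern-specific estimates of a marginal success probability and the
# estimated pattern probabilities (toy numbers, not a Dale-model fit).
theta = np.array([0.62, 0.48, 0.35])         # estimate within each missingness pattern
pi    = np.array([0.55, 0.30, 0.15])         # estimated pattern probabilities (sum to 1)
V_theta = np.diag([0.0010, 0.0025, 0.0060])  # cov of theta-hat (patterns independent)
V_pi    = np.diag([0.0008, 0.0007, 0.0004])  # cov of pi-hat (simplified)

# Marginal estimate: theta_marg = sum_k pi_k * theta_k
theta_marg = pi @ theta

# Delta method: gradient w.r.t. theta is pi, gradient w.r.t. pi is theta
var_marg = pi @ V_theta @ pi + theta @ V_pi @ theta
print(f"theta_marg = {theta_marg:.3f}, s.e. = {np.sqrt(var_marg):.4f}")
```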

4.
In the present investigation, a general set-up for inference from survey data is considered that covers the estimation of the variance of estimators of totals and distribution functions, using known first- and second-order moments of auxiliary information at the estimation stage. The traditional linear regression estimator of the population total, due to Hansen et al. (Sample Survey Methods and Theory, vols. 1 & 2, Wiley, New York, 1953), is shown to be unique in its class of estimators, marking its golden jubilee in 2003 for its outstanding performance in the literature, following Singh (Advanced Sampling Theory with Applications: How Michael ‘Selected’ Amy, vols. 1 & 2, Kluwer, The Netherlands, pp 1–1247, 2003). This paper is designed to repair the methodology of Rao (J Off Stat 10(2):153–165, 1994) and hence that of Singh (Ann Inst Stat Math 53(2):404–417, 2001). Although the theoretical results are clear-cut, a small-scale simulation study has been designed to show the performance of the proposed estimators relative to existing estimators in the literature.
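A minimal sketch of the classical linear regression (GREG-type) estimator of a population total under simple random sampling, assuming the population total of the auxiliary variable is known; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite population with auxiliary variable x whose total is known
N = 10_000
x = rng.gamma(4.0, 2.0, N)
y = 3.0 + 1.5 * x + rng.normal(0, 2.0, N)
X_total = x.sum()                           # known from a census or register

# Simple random sample without replacement
n = 400
idx = rng.choice(N, n, replace=False)
xs, ys = x[idx], y[idx]

# Classical linear regression estimator of the population total
# (Hansen et al. style): Y_reg = N*y_bar + b*(X_total - N*x_bar)
b = np.cov(xs, ys, ddof=1)[0, 1] / xs.var(ddof=1)
Y_reg = N * ys.mean() + b * (X_total - N * xs.mean())
print(f"Y_reg = {Y_reg:,.0f}  vs true total {y.sum():,.0f}")
```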

5.
In this paper, we take up the approach of Lindberg (Bernoulli 15(2):464–474, 2009), who introduced a new parameterization of the Black–Scholes model that allows for an easy solution of the continuous-time Markowitz mean-variance problem. We generalize Lindberg’s results to a jump-diffusion market setting and slightly correct the proof and the assertion of the main result. Further, we demonstrate the implications of the Lindberg parameterization for the stock price drift vector in different market settings, analyse the dependence of the optimal portfolio on jump and diffusion risk, and indicate how to use the method. In particular, we also show how the optimal strategy can be obtained with only restricted use of historical data.
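The Lindberg parameterization itself is not reproduced here; as context, the following sketch computes the classical single-period mean-variance (tangency) weights from an assumed drift vector and covariance matrix, the object that the continuous-time problem generalizes. All inputs are illustrative.

```python
import numpy as np

# Classical single-period mean-variance weights (a sketch of the object the
# Lindberg parameterization targets; not his parameterization itself).
mu = np.array([0.08, 0.10, 0.12])          # assumed drift (expected returns)
r = 0.02                                   # risk-free rate
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.160]])  # assumed return covariance

excess = mu - r
w = np.linalg.solve(Sigma, excess)         # optimal exposure ~ Sigma^{-1}(mu - r)
w_tangency = w / w.sum()                   # fully-invested tangency portfolio
print("tangency weights:", np.round(w_tangency, 3))
```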

6.
Pareto optimality (sometimes known as Pareto efficiency) is an important notion in the social sciences and related areas; see, e.g., Klaus (2006), Chun (2005), Hild (2004), Kibris (2003), Nandeibam (2003), Papai (2001), Peris and Sanchez (2001), Brams and Fishburn (2000), Denicolo (1999), Klaus et al. (1998), Peremans et al. (1997), and Vohra (1992). This notion invariably involves comparing the utility of one outcome with another, i.e., the ratio of two utilities or, in general, the ratio of two random variables. In this note, we derive the exact distribution of the ratio X/(X + Y) when X and Y are Pareto random variables, the Pareto distribution being among the first and most popular distributions used in the social sciences and related areas.
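Assuming the ratio in question is R = X/(X + Y), a quick Monte Carlo sketch (with illustrative shape parameters) gives a numerical check against which an exact derivation can be compared.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo sketch of the ratio R = X/(X+Y) for independent Pareto
# random variables (shape a, minimum 1); the paper derives the exact law.
a, n = 3.0, 1_000_000
X = rng.pareto(a, n) + 1.0
Y = rng.pareto(a, n) + 1.0
R = X / (X + Y)

print(f"E[R] ~ {R.mean():.3f} (symmetry implies 0.5 for identical shapes)")
print("quantiles:", np.round(np.quantile(R, [0.05, 0.5, 0.95]), 3))
```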

7.
Nigm et al. (Statistics 37:527–536, 2003) proposed a Bayesian method to obtain a predictive interval for a future ordered observation Y(j) (r &lt; j ≤ n) based on the right Type-II censored sample Y(1) &lt; Y(2) &lt; ... &lt; Y(r) from the Pareto distribution. If some of Y(1) &lt; ... &lt; Y(r−1) are missing or erroneous, owing to negligence on the part of a typist or recorder, then Nigm et al.’s method may not be an appropriate choice. Moreover, the conditional probability density function (p.d.f.) of the ordered observation Y(j) (r &lt; j ≤ n) given Y(1) &lt; Y(2) &lt; ... &lt; Y(r) is equivalent to the conditional p.d.f. of Y(j) (r &lt; j ≤ n) given Y(r) alone. Therefore, we propose another Bayesian method to obtain predictive intervals for future ordered observations based only on the ordered observation Y(r), and then compare the lengths of the predictive intervals obtained by the method of Nigm et al. (2003) and by our proposed method. Numerical examples are provided to illustrate these results.
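A Monte Carlo sketch of prediction based on Y(r) alone: for an i.i.d. Pareto sample, conditionally on Y(r) = y the n − r larger observations are i.i.d. Pareto with minimum y (the Pareto family is closed under left-truncation), so a predictive interval for Y(j) can be simulated. The shape parameter is treated as known here for simplicity, whereas the paper's Bayesian method places a prior on it; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Given the r-th order statistic y_r from an i.i.d. Pareto(a, x_m) sample of
# size n, the n-r future (larger) observations are i.i.d. Pareto(a, y_r).
# Monte Carlo predictive interval for Y_(j), j > r, with shape a known.
a, n, r, j = 2.0, 20, 10, 15
y_r = 3.1                                   # observed value of Y_(r)

B = 200_000
future = y_r * (rng.pareto(a, (B, n - r)) + 1.0)  # n-r draws above y_r, B times
y_j = np.sort(future, axis=1)[:, j - r - 1]       # Y_(j) is the (j-r)-th smallest

lo, hi = np.quantile(y_j, [0.025, 0.975])
print(f"95% predictive interval for Y_({j}): ({lo:.2f}, {hi:.2f})")
```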

8.
In the present investigation, we propose a new method to calibrate the estimator of a general parameter of interest in survey sampling. We demonstrate that the linear regression estimator due to Hansen et al. (Sample Survey Methods and Theory, Wiley, New York, 1953) is a special case. We reconfirm that the sum of the calibrated weights has to be set equal to the sum of the design weights within a given sample, as shown in Singh (Advanced Sampling Theory with Applications: How Michael ‘Selected’ Amy, vols. 1 & 2, Kluwer, The Netherlands, pp 1–1247, 2003; Proceedings of the American Statistical Association, Survey Methods Section [CD-ROM], Toronto, Canada, pp 4382–4389, 2004; Metrika:1–18, 2006a; presented at INTERFACE 2006, Pasadena, CA, USA, 2006b) and Stearns and Singh (presented at the Joint Statistical Meetings, MN, USA, 2005; Comput Stat Data Anal 52:4253–4271, 2008). Thus Sir R.A. Fisher’s brilliant idea of keeping the sum of observed frequencies equal to that of expected frequencies leads to an “honest balance” when adjusting design weights in survey sampling. The major benefit of the proposed new estimator is that it always works, unlike the pseudo empirical likelihood estimators listed in Owen (Empirical Likelihood. Chapman & Hall, London, 2001), Chen and Sitter (Stat Sin 9:385–406, 1999) and Wu (Surv Methodol 31(2):239–243, 2005). The main endeavor of this paper is to change the existing calibration technology, which is based only on positive distance functions, by using a displacement function that has the flexibility of taking positive, negative, or zero values. Finally, the proposed technology is compared with its competitors under several kinds of linear and non-linear non-parametric models using an extensive simulation study. A couple of open questions are raised.
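The proposed displacement-function calibration is not reproduced here; as the baseline it generalizes, the following sketch shows classical chi-square-distance (Deville–Särndal-type) calibration, producing weights that reproduce known auxiliary totals. Data and totals are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical chi-square-distance calibration, shown as the baseline the paper
# generalizes; not the proposed displacement-function method.
n = 300
d = np.full(n, 40.0)                        # design weights (SRS from N = 12000)
x = rng.gamma(3.0, 5.0, n)                  # auxiliary variable on the sample
y = 2.0 + 0.8 * x + rng.normal(0, 3, n)     # study variable
N, X_total = 12_000.0, 12_000 * 15.0        # known population size and x-total (assumed)

# Calibrate on both the population size and the x-total:
# w_i = d_i (1 + z_i' lam) with sum(w*z) = t for z = (1, x), t = (N, X_total)
Z = np.column_stack([np.ones(n), x])
t = np.array([N, X_total])
A = Z.T @ (d[:, None] * Z)
lam = np.linalg.solve(A, t - Z.T @ d)
w = d * (1.0 + Z @ lam)

print("calibrated totals:", np.round(Z.T @ w, 1), " target:", t)
print("calibrated estimate of Y-total:", round(float(w @ y), 1))
```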

9.
The original Data Envelopment Analysis (DEA) models developed by Charnes et al. (Eur J Oper Res 2:429–444, 1978) and Banker et al. (Manag Sci 30:1078–1092, 1984) were both radial models. These models and their varied extensions have remained the most widely used DEA models. The benchmark targets they determine for inefficient units are primarily based on the notion of maintaining the same input and output mixes originally employed by the evaluated unit (i.e., disregarding allocative considerations). This paper presents a methodology to investigate allocative and overall efficiency in the absence of defined input and output prices. The benchmarks determined from models based on this methodology consider all possible input and/or output mixes. Application of this methodology is illustrated on a model of the financial intermediary function of a bank branch network.
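A minimal sketch of the input-oriented radial CCR envelopment model mentioned above, solved as a linear program with scipy; the branch data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented radial CCR model (Charnes et al. 1978) for one unit:
#   min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0
def ccr_efficiency(X, Y, j0):
    m, n = X.shape                      # m inputs, n units
    s = Y.shape[0]                      # s outputs
    c = np.r_[1.0, np.zeros(n)]         # variables: [theta, lam_1..lam_n]
    A1 = np.c_[-X[:, j0], X]            # inputs:  X lam - theta x0 <= 0
    A2 = np.c_[np.zeros(s), -Y]         # outputs: -Y lam <= -y0
    A_ub = np.r_[A1, A2]
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: 2 inputs, 1 output, 4 bank branches (illustrative numbers)
X = np.array([[20., 30., 40., 25.],
              [15., 10., 25., 30.]])
Y = np.array([[100., 120., 150., 90.]])
for j in range(4):
    print(f"branch {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```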

10.
We study the problem of predicting future k-records based on k-record data for a large class of distributions, which includes several well-known distributions such as the exponential, Weibull (one parameter), Pareto, and Burr type XII, among others. While both Bayesian and non-Bayesian approaches are investigated, we pay particular attention to Bayesian predictors under balanced-type loss functions as introduced by Jafari Jozani et al. (Stat Probab Lett 76:773–780, 2006a). The results are presented under the balanced versions of some well-known loss functions, namely squared error loss, Varian’s linear-exponential loss, and absolute error (L1) loss. Some previous results in the literature, such as Ahmadi et al. (Commun Stat Theory Methods 34:795–805, 2005) and Raqab et al. (Statistics 41:105–108, 2007), are obtained as special cases of our results. Partial support from the Ordered and Spatial Data Center of Excellence of Ferdowsi University of Mashhad is acknowledged by J. Ahmadi. M. J. Jozani’s research was supported partially by a grant from the Statistical Research and Training Center. É. Marchand’s research was supported by NSERC of Canada. A. Parsian’s research was supported by a grant from the Research Council of the University of Tehran.

11.
An important issue when conducting stochastic frontier analysis is how to choose a proper parametric model, including the functional form of the frontier function, the distributions of the composite errors, and the exogenous variables. In this paper, we extend the likelihood ratio test of Vuong (Econometrica 57(2):307–333, 1989) and Takeuchi’s (Suri-Kagaku (Math Sci) 153:12–18, 1976) model selection criterion to stochastic frontier models. The most attractive feature of this test is that it can not only be used for testing non-nested models but also remains applicable even when the general model is misspecified. Finally, we demonstrate how to apply this test to the Indian farm data used by Battese and Coelli (J Prod Anal 3:153–169, 1992; Empir Econ 20(2):325–332, 1995) and Alvarez et al. (J Prod Anal 25:201–212, 2006).
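A sketch of the core Vuong statistic computed from per-observation log-likelihood contributions of two competing fitted models; the correction terms for parameter counts, and the frontier-model likelihoods themselves, are omitted.

```python
import numpy as np
from scipy.stats import norm

def vuong_statistic(ll1, ll2):
    """Vuong (1989) non-nested test from per-observation log-likelihood
    contributions of two fitted models; a large positive value favors model 1.
    (Adjustments for differing parameter counts are omitted here.)"""
    ll1, ll2 = np.asarray(ll1), np.asarray(ll2)
    n = ll1.size
    d = ll1 - ll2
    z = np.sqrt(n) * d.mean() / d.std(ddof=1)
    p = 2 * (1 - norm.cdf(abs(z)))      # two-sided: models equally close to truth
    return z, p

# Toy usage with synthetic per-observation log-likelihoods
rng = np.random.default_rng(11)
z, p = vuong_statistic(rng.normal(-1.00, 0.5, 500), rng.normal(-1.05, 0.5, 500))
print(f"z = {z:.2f}, p = {p:.3f}")
```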

12.
The world’s nations often produce commodities for which they have no apparent comparative advantage, and do so with techniques that are not particularly efficient by world standards. These inefficiencies may arise from various forms of trade and domestic distortions, as described in Chau et al. (Int Econ Rev 44:1079–1095, 2003). We estimate these distortions for 33 countries of the world using a newly compiled data set. We find that domestic distortions tend to be slightly more important than trade distortions: for the average country, revenues in the agricultural sector would be 26% higher if domestic distortions were eliminated, but 21% higher if trade distortions were eliminated. Our measures of trade and domestic distortions across countries complement measures of protectionism such as producer subsidy equivalents.

13.
We assess the market valuation of airline convertible preferred stocks using a contingent claims valuation model that was extensively tested by Ramanlal et al. (Rev Quant Financ Account 10:303–319, 1998). Our sample consists of 4,096 daily price observations of 11 convertible preferred stocks issued by U.S. airlines in 1980–1991. For each convertible we estimate daily model prices for two years after issuance and compare them with market prices by calculating pricing errors. While the entire sample’s mean pricing error is −3.8%, the panel data analysis and the mean pricing errors of the sub-samples indicate that the undervaluation is much more severe in the first six months of trading. The results suggest that airlines leave about 10% on the table when they raise capital by issuing convertible securities.

14.
For reasons of methodological convenience, statistical models analysing judicial decisions tend to focus on the duration of custodial sentences. These sentences are, however, quite rare (7% of the total in England and Wales), which generates a serious problem of selection bias. Typical adjustments employed in the literature, such as Tobit models, are based on questionable assumptions and are incapable of discriminating between different types of non-custodial sentences (such as discharges, fines, community orders, or suspended sentences). Here we implement an original approach that models custodial and non-custodial sentence outcomes simultaneously, avoiding problems of selection bias while making the most of the information recorded for each of them. This is achieved by employing the Pina-Sánchez et al. (Br J Criminol 59:979–1001, 2019) scale of sentence severity as the outcome variable of a Bayesian regression model. A sample of 7,242 theft offences sentenced in the Crown Court is used to further illustrate: (a) the pervasiveness of selection bias in studies restricted to custodial sentences, which leads us to question the external validity of previous studies limited to custodial sentence length; and (b) the inadequacy of Tobit models and similar methods used in the literature to adjust for such bias.

15.
Sanyu Zhou, Metrika 80(2):187–200, 2017
A simultaneous confidence band is a useful statistical tool in a simultaneous inference procedure. In recent years several papers have been published that consider various applications of simultaneous confidence bands; see, for example, Al-Saidy et al. (Biometrika 59:1056–1062, 2003), Liu et al. (J Am Stat Assoc 99:395–403, 2004), Piegorsch et al. (J R Stat Soc 54:245–258, 2005) and Liu et al. (Aust N Z J Stat 55(4):421–434, 2014). In this article, we provide methods for constructing one-sided hyperbolic simultaneous confidence bands for both the multiple regression model over a rectangular region and the polynomial regression model over an interval. These methods use numerical quadrature. Examples are included to illustrate the methods. The approaches can be applied to more general regression models, such as fixed-effect or random-effect generalized linear regression models, to construct large-sample approximate one-sided hyperbolic simultaneous confidence bands.
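The paper's quadrature-based computation is not reproduced here; a Monte Carlo stand-in for the one-sided critical constant over a rectangular covariate region is sketched below, with an assumed (X'X)^{-1} matrix and an illustrative region.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo stand-in for the critical constant c of a one-sided band
# L(x) = x'beta_hat - c*sigma_hat*sqrt(x' A x), A = (X'X)^{-1}, over a
# covariate rectangle (the paper computes c by numerical quadrature).
p, nu = 3, 30                              # (intercept, x1, x2), error df
A = np.array([[0.10, -0.01, -0.02],
              [-0.01, 0.05,  0.00],
              [-0.02, 0.00,  0.08]])       # assumed (X'X)^{-1}
L = np.linalg.cholesky(A)

# Grid over the rectangle x1 in [0,2], x2 in [1,3], with intercept 1
g1, g2 = np.meshgrid(np.linspace(0, 2, 21), np.linspace(1, 3, 21))
Xg = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel()])
denom = np.sqrt(np.einsum("ij,jk,ik->i", Xg, A, Xg))   # sqrt(x' A x) per point

B = 20_000
Z = rng.standard_normal((B, p)) @ L.T      # beta_hat - beta (sigma = 1 w.l.o.g.)
s = np.sqrt(rng.chisquare(nu, B) / nu)     # sigma_hat / sigma
sup_stat = (Z @ Xg.T / denom).max(axis=1) / s
c = np.quantile(sup_stat, 0.95)
print(f"one-sided critical constant c ~ {c:.3f}")
```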

16.
We consider the ability to detect interaction structure from data in a regression context. We derive an asymptotic power function for a likelihood-based test for interaction in a regression model, with a possibly misspecified alternative distribution. This allows a general investigation of which types of interactions are poorly or well detected from data. Principally, we contrast pairwise-interaction models with the ‘diffuse interaction models’ introduced in Gustafson et al. (Stat Med 24:2089–2104, 2005).
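A minimal sketch of a likelihood ratio test for a pairwise interaction in a Gaussian regression, the kind of test whose asymptotic power the paper studies; data are simulated with a true interaction.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)

# Likelihood ratio test for a pairwise interaction in a Gaussian regression.
n = 400
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + 0.3 * x1 * x2 + rng.normal(0, 1, n)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

X0 = np.column_stack([np.ones(n), x1, x2])     # main effects only
X1 = np.column_stack([X0, x1 * x2])            # plus pairwise interaction
LR = n * np.log(rss(X0, y) / rss(X1, y))       # Gaussian MLE likelihood ratio
print(f"LR = {LR:.2f}, p = {chi2.sf(LR, df=1):.4f}")
```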

17.
The succession process in family firms has been identified as the most critical phase in the family business life-cycle (e.g., Morris et al., Journal of Business Venturing 18:513–531, 1997; Wang et al. 2000) and characterized as the period in which most family firm fatalities occur (Handler and Kram, Family Business Review 1:361–381, 1988). This paper is an empirical study of Greek family firms that seeks to identify the critical success factors with a major impact on the outcome of a generational transition in the leadership of the family firm. Based on an integrated conceptual framework proposed by Pyromalis et al. (2006), we test the impact of five factors, namely the incumbent’s propensity to step aside, the successor’s willingness to take over, positive family relations and communication, succession planning, and the successor’s appropriateness and preparation, on both the satisfaction of the stakeholders with the succession process and the effectiveness of the succession process per se. The results provide useful insight and confirm the importance of these factors by mapping a safe passage through the family business succession process, contributing not only to the family business literature but also generating strong arguments in favor of the family firm as an integral entrepreneurial element of a region’s sustainable economic development.

18.
The Marginal Trader Hypothesis (Forsythe et al., American Economic Review 82(5):1142–1161, 1992) posits that a small group of well-informed traders keeps an asset’s market price equal to its fundamental value. Forsythe et al. base this claim on evidence from U.S. presidential prediction markets. We test the Marginal Trader Hypothesis by examining a decision task that precludes marginal traders: students are asked to predict the class average on a given exam. We show that performance on our task is similar to that reported for the Iowa Electronic Markets, and that accuracy is unrelated to academic performance and does not correlate across tasks.

19.
A stochastic frontier model with correction for sample selection
Heckman’s (Ann Econ Soc Meas 4(5):475–492, 1976; Econometrica 47:153–161, 1979) sample selection model has been employed in three decades of applications of linear regression. This paper builds on that framework to obtain a sample selection correction for the stochastic frontier model. We first show a surprisingly simple way to estimate the familiar normal-half normal stochastic frontier model using maximum simulated likelihood. We then extend the technique to a stochastic frontier model with sample selection. In an application that seems superficially obvious, the method is used to revisit the World Health Organization data (WHO in The World Health Report, WHO, Geneva, 2000; Tandon et al. in Measuring the Overall Health System Performance for 191 Countries, World Health Organization, 2000), where the sample partitioning is based on OECD membership. The original study pooled all 191 countries. The OECD members appear to be discretely different from the rest of the sample. We examine the difference in a sample selection framework.
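The maximum simulated likelihood estimator and the selection correction are not reproduced here; as background, the following sketch fits the familiar normal-half normal stochastic frontier by maximizing its closed-form (Aigner–Lovell–Schmidt) log-likelihood on synthetic data.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Closed-form log-likelihood of the normal-half normal stochastic frontier
# (Aigner-Lovell-Schmidt); the paper instead uses maximum simulated likelihood,
# which extends to the sample-selection case. Production frontier: eps = v - u.
def neg_loglik(theta, y, X):
    k = X.shape[1]
    beta, ln_sig, ln_lam = theta[:k], theta[k], theta[k + 1]
    sig, lam = np.exp(ln_sig), np.exp(ln_lam)   # sigma^2 = s_u^2 + s_v^2, lam = s_u/s_v
    eps = y - X @ beta
    ll = (np.log(2) - np.log(sig) + norm.logpdf(eps / sig)
          + norm.logcdf(-eps * lam / sig))
    return -ll.sum()

# Synthetic data: y = 1 + 0.6 x + v - u, v ~ N(0, 0.3^2), u ~ |N(0, 0.5^2)|
rng = np.random.default_rng(9)
n = 1000
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 0.6]) + rng.normal(0, 0.3, n) - np.abs(rng.normal(0, 0.5, n))

res = minimize(neg_loglik, x0=np.r_[np.zeros(2), 0.0, 0.0], args=(y, X), method="BFGS")
print("beta_hat:", np.round(res.x[:2], 3),
      " sigma_hat:", round(float(np.exp(res.x[2])), 3),
      " lambda_hat:", round(float(np.exp(res.x[3])), 3))
```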

20.
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI) developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27–61, 2000; Pap Psicól Revist Col Of Psicó 29:92–106, 2008). Commitment and identification are understood by the HSA at an individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes such as personnel turnover intentions and organizational citizenship behavior (Meyer et al. in J Org Behav 27:665–683, 2006). The theoretical integrative model underlying the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis was performed through structural equation modeling. An elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model that relates Commitment and Identification, although each is operationally distinct.

