Similar Literature (20 results)
1.
In a recent paper, Majumder and Chakravarty (1990) propose a four-parameter model which they find provides a better fit to some income data than the lognormal, gamma, Singh-Maddala, Dagum, and generalized beta of the second kind (GB2) distributions. This note (1) demonstrates that the model proposed by Majumder and Chakravarty is a reparameterization of the GB2 and (2) reconciles the corresponding contradictory empirical findings reported by Majumder and Chakravarty.
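As background for the reparameterization argument, the sketch below evaluates the GB2 density in its standard four-parameter form (shape parameters a, p, q and scale b, as in McDonald's formulation); the function name and parameter labels are mine, not the note's.

```python
import numpy as np
from scipy.special import betaln

def gb2_pdf(y, a, b, p, q):
    """Density of the generalized beta of the second kind (GB2):
    f(y) = a * y^(a*p - 1) / (b^(a*p) * B(p, q) * (1 + (y/b)^a)^(p + q))
    for y > 0, with shapes a, p, q > 0 and scale b > 0."""
    y = np.asarray(y, dtype=float)
    log_f = (np.log(a) + (a * p - 1) * np.log(y) - a * p * np.log(b)
             - betaln(p, q) - (p + q) * np.log1p((y / b) ** a))
    return np.exp(log_f)
```

Any model obtained by a smooth one-to-one transformation of (a, b, p, q) defines exactly the same family of densities, which is why fit comparisons between the two parameterizations should not conflict.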

2.
J. Medhi, Metrika, 1973, 20(1): 215–218.
In this note we consider a generalisation of the Stirling number of the second kind. The distribution of the sum of n independent zero-truncated Poisson variables, which can be expressed in terms of such a generalised number, may be called the horizontally generalised Stirling distribution of the second kind. A recurrence relation for the probability function of this distribution, which will be useful for tabulation purposes, is given. The distribution function is obtained in terms of a linear combination of incomplete gamma functions.
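For readers who want to tabulate the distribution directly, here is a minimal sketch that builds the probability function of the sum of n iid zero-truncated Poisson variables by repeated convolution rather than via the generalised Stirling recurrence used in the note; the truncation point and names are my choices.

```python
import numpy as np
from scipy.stats import poisson

def ztp_pmf(k_max, lam):
    """pmf of a zero-truncated Poisson on 1..k_max:
    P(X = k) = lam^k e^{-lam} / (k! (1 - e^{-lam}))."""
    k = np.arange(k_max + 1)
    p = poisson.pmf(k, lam)
    p[0] = 0.0                       # remove the mass at zero
    return p / (1.0 - np.exp(-lam))  # renormalise

def sum_ztp_pmf(n, lam, k_max=200):
    """pmf of the sum of n iid zero-truncated Poisson variables,
    obtained by repeated convolution (support starts at n)."""
    base = ztp_pmf(k_max, lam)
    dist = np.array([1.0] + [0.0] * k_max)  # point mass at 0
    for _ in range(n):
        dist = np.convolve(dist, base)[: k_max + 1]
    return dist
```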

3.
Robustness issues in multilevel regression analysis
A multilevel problem concerns a population with a hierarchical structure. A sample from such a population can be described as a multistage sample. First, a sample of higher level units is drawn (e.g. schools or organizations), and next a sample of the sub‐units from the available units (e.g. pupils in schools or employees in organizations). In such samples, the individual observations are in general not completely independent. Multilevel analysis software accounts for this dependence and in recent years these programs have been widely accepted. Two problems that occur in the practice of multilevel modeling will be discussed. The first problem is the choice of the sample sizes at the different levels. What are sufficient sample sizes for accurate estimation? The second problem is the normality assumption of the level‐2 error distribution. When one wants to conduct tests of significance, the errors need to be normally distributed. What happens when this is not the case? In this paper, simulation studies are used to answer both questions. With respect to the first question, the results show that a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second‐level standard errors. The answer to the second question is that only the standard errors for the random effects at the second level are highly inaccurate if the distributional assumptions concerning the level‐2 errors are not fulfilled. Robust standard errors turn out to be more reliable than the asymptotic standard errors based on maximum likelihood.
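A minimal simulation sketch in the spirit of the study, assuming a random-intercept model with skewed level-2 errors fitted with statsmodels' MixedLM; the group counts and parameter values are illustrative only, not those of the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_groups, group_size = 30, 20                 # deliberately few level-2 units
g = np.repeat(np.arange(n_groups), group_size)
x = rng.normal(size=n_groups * group_size)
u = rng.exponential(1.0, n_groups) - 1.0      # skewed level-2 errors, mean 0
y = 1.0 + 0.5 * x + u[g] + rng.normal(0, 1, n_groups * group_size)

df = pd.DataFrame({"y": y, "x": x, "g": g})
model = sm.MixedLM.from_formula("y ~ x", data=df, groups="g")
result = model.fit()
print(result.summary())  # repeat over replications and compare cov_re
                         # (level-2 variance) and its standard error
```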

4.
In toxicity studies, model mis‐specification could lead to serious bias or faulty conclusions. As a prelude to subsequent statistical inference, model selection plays a key role in toxicological studies. It is well known that the Bayes factor and the cross‐validation method are useful tools for model selection. However, exact computation of the Bayes factor is usually difficult and sometimes impossible, and this may hinder its application. In this paper, we recommend utilizing the simple Schwarz criterion to approximate the Bayes factor for the sake of computational simplicity. To illustrate the importance of model selection in toxicity studies, we consider two real data sets. The first data set comes from a study of dietary fortification with carbonyl iron in which the Bayes factor and the cross‐validation are used to determine the number of sub‐populations in a mixture normal model. The second example involves a developmental toxicity study in which the selection of dose–response functions in a beta‐binomial model is explored.
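The Schwarz approximation the authors recommend follows from BIC = -2 log L_hat + k log n: the Bayes factor of model 1 against model 0 is approximated by exp((BIC_0 - BIC_1) / 2). A minimal sketch (function names are mine):

```python
import numpy as np

def bic(loglik, n_params, n_obs):
    """Schwarz criterion: BIC = -2 log L_hat + k log n."""
    return -2.0 * loglik + n_params * np.log(n_obs)

def approx_bayes_factor(loglik1, k1, loglik0, k0, n_obs):
    """Schwarz approximation to the Bayes factor of model 1 vs model 0:
    BF_10 ~ exp((BIC_0 - BIC_1) / 2)."""
    return np.exp((bic(loglik0, k0, n_obs) - bic(loglik1, k1, n_obs)) / 2.0)
```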

5.
Scattered reports of multiple maxima in posterior distributions or likelihoods for mixed linear models appear throughout the literature. Less scrutinised is the restricted likelihood, which is the posterior distribution for a specific prior distribution. This paper surveys existing literature and proposes a unifying framework for understanding multiple maxima. For those problems with covariance structures that are diagonalisable in a specific sense, the restricted likelihood can be viewed as a generalised linear model with gamma errors, identity link and a prior distribution on the error variance. The generalised linear model portion of the restricted likelihood can be made to conflict with the portion of the restricted likelihood that functions like a prior distribution on the error variance, giving two local maxima in the restricted likelihood. Applying in addition an explicit conjugate prior distribution to variance parameters permits a second local maximum in the marginal posterior distribution even if the likelihood contribution has a single maximum. Moreover, reparameterisation from variance to precision can change the posterior modality; the converse is also true. Modellers should beware of these potential pitfalls when selecting prior distributions or using peak‐finding algorithms to estimate parameters.

6.
The effect of a program or treatment may vary according to observed characteristics. In such a setting, it may not only be of interest to determine whether the program or treatment has an effect on some sub‐population defined by these observed characteristics, but also to determine for which sub‐populations, if any, there is an effect. This paper treats this problem as a multiple testing problem in which each null hypothesis in the family of null hypotheses specifies whether the program has an effect on the outcome of interest for a particular sub‐population. We develop our methodology in the context of PROGRESA, a large‐scale poverty‐reduction program in Mexico. In our application, the outcome of interest is the school enrollment rate and the sub‐populations are defined by gender and highest grade completed. Under weak assumptions, the testing procedure we construct controls the familywise error rate—the probability of even one false rejection—in finite samples. Similar to earlier studies, we find that the program has a significant effect on the school enrollment rate, but only for a much smaller number of sub‐populations when compared to results that do not adjust for multiple testing.
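The paper constructs its own testing procedure; as a simple stand-in that also controls the familywise error rate in finite samples under arbitrary dependence, the sketch below applies the classical Holm step-down adjustment to one p-value per sub-population. This illustrates FWER control generally, not the authors' method.

```python
def holm_reject(pvalues, alpha=0.05):
    """Holm step-down procedure: controls the familywise error rate at
    level alpha under any dependence among the p-values.
    Returns a boolean list, True where the null is rejected."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):  # compare p_(k) to alpha/(m-k+1)
            reject[i] = True
        else:
            break  # once one hypothesis survives, stop rejecting
    return reject

# e.g. one p-value per (gender x highest-grade-completed) sub-population
print(holm_reject([0.001, 0.02, 0.04, 0.30]))
```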

7.
This article examines how different levels of internal organization are reflected in the residential patterns of different population groups. In this case, the Haredi community comprises sects and sub‐sects, whose communal identity plays a central role in everyday life and spatial organization. The residential preferences of Haredi individuals are strongly influenced by the need to live among ‘friends’ — that is, other members of the same sub‐sect. This article explores the dynamics of residential patterns in two of Jerusalem's Haredi neighbourhoods: Ramat Shlomo, a new neighbourhood on the urban periphery, and Sanhedria, an old yet attractive inner‐city neighbourhood. We reveal two segregation mechanisms: the first is top‐down determination of residence, found in relatively new neighbourhoods that are planned, built and populated with the intense involvement of community leaders; the second is the bottom‐up emergence of residential patterns typical of inner‐city neighbourhoods that have gradually developed over time.

8.
Many asset prices, including exchange rates, exhibit periods of stability punctuated by infrequent, substantial, often one‐sided adjustments. Statistically, this generates empirical distributions of exchange rate changes that exhibit high peaks, long tails, and skewness. This paper introduces a GARCH model, with a flexible parametric error distribution based on the exponential generalized beta (EGB) family of distributions. Applied to daily US dollar exchange rate data for six major currencies, evidence based on a comparison of actual and predicted higher‐order moments and goodness‐of‐fit tests favours the GARCH‐EGB2 model over more conventional GARCH‐t and EGARCH‐t model alternatives, particularly for exchange rate data characterized by skewness.
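A hedged sketch of the model's likelihood ingredients: an EGB2 log-density with location zero plus a GARCH(1,1) variance recursion. The paper standardises the EGB2 to zero mean and unit variance; that standardisation is omitted here, so treat this as illustrative only.

```python
import numpy as np
from scipy.special import betaln

def egb2_logpdf(x, p, q, scale=1.0):
    """log-density of the EGB2 with location 0:
    f(x) = e^{p z} / (scale * B(p, q) * (1 + e^z)^{p + q}), z = x / scale."""
    z = x / scale
    return (p * z - betaln(p, q)
            - (p + q) * np.logaddexp(0.0, z)  # stable log(1 + e^z)
            - np.log(scale))

def garch_egb2_negloglik(params, returns):
    """Negative log-likelihood of a GARCH(1,1) with EGB2 errors,
    params = (omega, alpha, beta, p, q); conditional scale sqrt(h_t).
    Simplified: the mean-variance standardisation in the paper is omitted."""
    omega, alpha, beta, p, q = params
    returns = np.asarray(returns, dtype=float)
    h = np.empty_like(returns)
    h[0] = returns.var()
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return -np.sum(egb2_logpdf(returns, p, q, scale=np.sqrt(h)))
```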

9.
In this paper we derive both primal and dual‐cost systems in which the stochastic specifications arise from the model (random environment or measurement errors and optimization errors)—not tacked on at the end after the deterministic system is worked out. Derivation of the error structures is based on cost‐minimizing behavior of the firms. The primal systems constitute the production function and the first‐order conditions of cost minimization. We consider two dual‐cost systems. The first dual system is based on the cost function and cost share equations. The second dual system is based on a multiplicative general error production model that is an alternative to McElroy's additive general error production model. Our multiplicative general error model gives a clear and intuitive economic meaning to the error components. The resulting cost system is easy to estimate compared to the alternative cost systems. The error components in the multiplicative general error model can capture heterogeneity in the technology parameters even in a cross‐sectional model. Panel data are not necessary to estimate either the primal or dual systems. The models are estimated using data on 72 fossil fuel‐fired steam electric power generation plants (observed for the period 1986–1999) in the USA.

10.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units; for example, with employment of a sampling step that selects geographically indexed clusters of units. A sampling‐weighted pseudo‐posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between coclustered units inflates the scale of the resulting pseudo‐posterior covariance matrix, which has been shown to induce undercoverage of the credibility sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions are different between each of the pseudo‐maximum likelihood estimate (MLE), the pseudo‐posterior and the MLE under simple random sampling. Through insights from survey‐sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast postprocessing step to Markov chain Monte Carlo draws of the pseudo‐posterior distribution. This adjustment projects the pseudo‐posterior covariance matrix such that the nominal coverage is approximately achieved. We make an application to the National Survey on Drug Use and Health as a motivating example and we demonstrate the efficacy of our scale and shape projection procedure on synthetic data on several common archetypes of survey designs.
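The idea of the postprocessing step can be illustrated as a linear projection of the centred draws so that their covariance matches a supplied design-based (sandwich-type) target. This is a sketch of the general idea only; the paper's exact construction differs, and `target_cov` must come from a separate survey-variance estimator.

```python
import numpy as np

def project_draws(draws, target_cov):
    """Rescale MCMC draws (S x d array) so their covariance matches
    target_cov. If L_p, L_t are Cholesky factors of the draw and target
    covariances, A = L_t @ inv(L_p) maps the draws to the target."""
    mean = draws.mean(axis=0)
    centred = draws - mean
    L_p = np.linalg.cholesky(np.cov(centred, rowvar=False))
    L_t = np.linalg.cholesky(target_cov)
    A = L_t @ np.linalg.inv(L_p)
    return mean + centred @ A.T   # rows now have covariance target_cov
```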

11.
This paper presents a careful investigation of the three popular calibration weighting methods: (i) generalised regression; (ii) generalised exponential tilting and (iii) generalised pseudo empirical likelihood, with a major focus on computational aspects of the methods and some empirical evidence on calibrated weights. We also propose a simple weight trimming method for range‐restricted calibration. The finite sample behaviour of the weights obtained by the three calibration weighting methods and the effectiveness of the proposed weight trimming method are examined through limited simulation studies.
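For reference, method (i) has a closed form: linear (GREG) calibration adjusts design weights d to w = d(1 + x'lambda), with lambda chosen so the calibrated totals hit known benchmarks. A minimal sketch, with names of my choosing:

```python
import numpy as np

def greg_calibrate(d, X, totals):
    """Generalised-regression (GREG / linear) calibration weights.
    d: design weights (n,); X: auxiliary variables (n, p);
    totals: known population totals of the columns of X (p,).
    Returns w = d * (1 + X @ lam) satisfying X.T @ w == totals."""
    T = (X * d[:, None]).T @ X                 # sum_i d_i x_i x_i'
    lam = np.linalg.solve(T, totals - X.T @ d)
    return d * (1.0 + X @ lam)
```

Because such weights can fall outside a desired range (even below zero), range-restricted calibration and weight trimming of the kind the paper proposes become relevant.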

12.
A price takes the form of a cost for either one unit (single‐unit pricing) or multiple units (multi‐unit pricing). I consider a monopolist selling units of a good to a population of homogeneous consumers to explain why one is preferred to the other. A mental cost arises when the consumer performs the division that a multi‐unit price requires. If marginal utility remains high, multiple units are desired, and multi‐unit pricing is preferred since it creates a cost if fewer units are purchased. If utility exhibits strong diminishing returns, single‐unit pricing is used to avoid the calculation.

13.
Urban-wide gas distribution cost models are developed and estimated, using capital costs, gas market, and population density data over cross-sections of communities served by two different utilities, with a particular emphasis on the multiproduct, multidimensional character of gas distribution. These models are used to clarify such policy issues as the allocation of joint costs through marginal cost pricing, the existence of economies of scale and density, and the appropriateness of natural monopoly status for gas distribution utilities.

14.
Disruptive innovation is necessary in healthcare if we are to solve America's healthcare crisis. Patients need access to high quality, convenient, cost‐effective healthcare. Industry leadership and the government together need to facilitate the transformation of the healthcare industry through innovative multidisciplinary models that will improve health outcomes, decrease costs, and improve access to care. Such a transformation can be realized by embracing the concept of precision medicine, where outcomes are managed and the government facilitates advancements through effective regulation.

15.
In this note, we investigate if the standard result by the managerial delegation literature, i.e., the sub‐game perfect Nash equilibrium is not Pareto‐optimal from the firms' viewpoint, still applies when asymmetric and convex costs are introduced into the analysis. In such a framework, the managerial delegation choice still represents a sub‐game perfect Nash equilibrium, but the more efficient firm may obtain higher profits provided that the degree of cost asymmetry between firms is sufficiently large.

16.
We analyse job‐training effects on Korean women for the period January 1999 to March 2000, using a large data set of size about 52,000. We employ a number of estimation techniques: Weibull MLE and accelerated failure time approach, which are both parametric; Cox partial likelihood estimator, which is semiparametric; and two pair‐matching estimators, which are in essence nonparametric. All of these methods gave the common conclusion that job training for Korean women increased their unemployment duration. The training programmes were not cost‐effective in the sense that they took too much time 'locking in' the trainees during the training span, compared with the time they took to place the trainees afterwards. Despite this negative finding, some sub‐groups had positive effects: white‐collar workers trained for finance/insurance or information/communication.
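Of the estimators listed, the semiparametric Cox fit is the easiest to reproduce; a minimal sketch using the lifelines package on toy data (column names and values are invented for illustration):

```python
import pandas as pd
from lifelines import CoxPHFitter

# toy data: unemployment duration in weeks, event indicator,
# and a training indicator (all illustrative)
df = pd.DataFrame({
    "duration": [12, 30, 7, 45, 22, 60],
    "observed": [1, 1, 1, 0, 1, 0],   # 0 = still unemployed (censored)
    "trained":  [0, 1, 0, 1, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="observed")
cph.print_summary()  # exp(coef) < 1 for `trained` would mean a lower
                     # re-employment hazard, i.e. longer unemployment
```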

17.
This paper studies the estimation of the distribution of non‐sequential search costs. We show that the search cost distribution is identified by combining data from multiple markets with common search technology but varying consumer valuations, firms' costs, and numbers of competitors. To exploit such data optimally, we provide a new method based on semi‐nonparametric estimation. We apply our method to a dataset of online prices for memory chips and find that the search cost density is essentially bimodal, such that a large fraction of consumers searches very little, whereas a smaller fraction searches a relatively large number of stores.

18.
An implicit partial differential equation (PDE) method is used to determine the cost of hedging for a Guaranteed Lifelong Withdrawal Benefit (GLWB) variable annuity contract. In the basic setting, the underlying risky asset is assumed to evolve according to geometric Brownian motion, but this is generalised to the case of a Markov regime switching process. A similarity transformation is used to reduce a pricing problem with K regimes to the solution of K coupled one dimensional PDEs, resulting in a considerable gain in computational efficiency. The methodology developed is flexible in the sense that it can calculate the cost of hedging for a variety of different withdrawal strategies by investors. Cases considered here include both optimal withdrawal strategies (i.e. strategies which generate the highest possible cost of hedging for the insurer) and sub-optimal withdrawal strategies in which the policy holder's decisions depend on the moneyness of the embedded options. Numerical results are presented which demonstrate the sensitivity of the cost of hedging (given the withdrawal specification) to various economic and contractual assumptions.
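The building block of such schemes is an implicit finite-difference time step. The sketch below handles only the basic single-regime geometric Brownian motion case with Dirichlet boundaries, not the coupled regime-switching system or the GLWB withdrawal features:

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_step(V, S, dt, sigma, r):
    """One fully implicit (backward Euler) step of the Black-Scholes PDE
    V_t + 0.5 sigma^2 S^2 V_SS + r S V_S - r V = 0 on a uniform grid S,
    stepping backwards in time. Boundary values V[0], V[-1] are held
    fixed (Dirichlet)."""
    n = len(S)
    dS = S[1] - S[0]
    a = 0.5 * sigma**2 * S**2 / dS**2   # diffusion coefficient
    b = r * S / (2.0 * dS)              # convection (central difference)

    ab = np.zeros((3, n))               # banded storage for solve_banded
    ab[1, :] = 1.0 + dt * (2.0 * a + r) # main diagonal
    ab[0, 1:] = -dt * (a[:-1] + b[:-1]) # superdiagonal
    ab[2, :-1] = -dt * (a[1:] - b[1:])  # subdiagonal

    # Dirichlet boundaries: keep end values unchanged
    ab[1, 0] = ab[1, -1] = 1.0
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, V)
```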

19.
Surveys usually include questions where individuals must select one of several possible options that can be ordered. On the other hand, multiple frame surveys are becoming a widely used method to decrease bias due to undercoverage of the target population. In this work, we propose statistical techniques for handling ordinal data coming from a multiple frame survey using complex sampling designs and auxiliary information. Our aim is to estimate proportions when the variable of interest has ordinal outcomes. Two estimators are constructed following model‐assisted generalised regression and model calibration techniques. Theoretical properties are investigated for these estimators. Simulation studies with different sampling procedures are considered to evaluate the performance of the proposed estimators in finite size samples. An application to a real survey on opinions towards immigration is also included.

20.
This paper illustrates the pitfalls of the conventional heteroskedasticity and autocorrelation robust (HAR) Wald test and the advantages of new HAR tests developed by Kiefer and Vogelsang in 2005 and by Phillips, Sun and Jin in 2003 and 2006. The illustrations use the 1993 Fama–French three‐factor model. The null that the intercepts are zero is tested for 5‐year, 10‐year and longer sub‐periods. The conventional HAR test with asymptotic P‐values rejects the null for most 5‐year and 10‐year sub‐periods. By contrast, the null is not rejected by the new HAR tests. This conflict is explained by showing that inferences based on the conventional HAR test are misleading for the sample sizes used in this application.
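The conventional HAR ingredient at issue is the kernel-based long-run variance; a minimal sketch of the Bartlett-kernel (Newey-West) estimator for a scalar series follows. The fixed-b tests of Kiefer and Vogelsang use the same statistic but different critical values.

```python
import numpy as np

def newey_west_lrv(u, bandwidth):
    """Long-run variance of a scalar series u with Bartlett-kernel
    (Newey-West) weights, the conventional ingredient of a HAR Wald
    test: lrv = gamma_0 + 2 * sum_j (1 - j/(b+1)) * gamma_j."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    lrv = u @ u / n                       # lag-0 autocovariance
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)   # Bartlett weight
        gamma_j = u[j:] @ u[:-j] / n      # lag-j autocovariance
        lrv += 2.0 * w * gamma_j
    return lrv
```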
