Similar Articles (20 results)
1.
An early example of a compound decision problem of Robbins (1951) is employed to illustrate some features of the development of empirical Bayes methods. Our primary objective is to draw attention to the constructive role that the nonparametric maximum likelihood estimator for mixture models introduced by Kiefer & Wolfowitz (1956) can play in these developments.

2.
This paper proposes a Bayesian nonparametric modeling approach for the return distribution in multivariate GARCH models. In contrast to the parametric literature, the return distribution can display general forms of asymmetry and thick tails. An infinite mixture of multivariate normals is given a flexible Dirichlet process prior. The GARCH functional form enters into each of the components of this mixture. We discuss conjugate methods that allow for scale mixtures and nonconjugate methods which provide mixing over both the location and scale of the normal components. MCMC methods are introduced for posterior simulation and computation of the predictive density. Bayes factors and density forecasts with comparisons to GARCH models with Student-t innovations demonstrate the gains from our flexible modeling approach.
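To make the mixture idea concrete, the sketch below evaluates the likelihood of a univariate GARCH(1,1) model whose innovations follow a two-component normal scale mixture. This is only a finite-mixture stand-in for the paper's infinite Dirichlet process mixture (and univariate rather than multivariate); all parameter names and the variance initialization are illustrative.

```python
import numpy as np

def garch_mixture_loglik(returns, omega, alpha, beta, p, s2_a, s2_b):
    """Log-likelihood of a GARCH(1,1) whose innovations follow a
    two-component normal scale mixture -- a finite stand-in for the
    infinite Dirichlet process mixture used in the paper."""
    T = len(returns)
    h = np.var(returns)                     # crude initialization of h_1
    ll = 0.0
    for t in range(T):
        if t > 0:                           # GARCH(1,1) variance recursion
            h = omega + alpha * returns[t - 1] ** 2 + beta * h
        dens = 0.0                          # p*N(0, h*s2_a) + (1-p)*N(0, h*s2_b)
        for w, s2 in ((p, s2_a), (1 - p, s2_b)):
            v = h * s2
            dens += w * np.exp(-returns[t] ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
        ll += np.log(dens)
    return ll
```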

3.
The paper develops a general Bayesian framework for robust linear static panel data models using ε-contamination. A two-step approach is employed to derive the conditional type-II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior means are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed and their finite sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman–Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications while conventional methods require separate estimators for each case.

4.
This paper illustrates the pitfalls of the conventional heteroskedasticity and autocorrelation robust (HAR) Wald test and the advantages of new HAR tests developed by Kiefer and Vogelsang in 2005 and by Phillips, Sun and Jin in 2003 and 2006. The illustrations use the 1993 Fama–French three-factor model. The null that the intercepts are zero is tested for 5-year, 10-year and longer sub-periods. The conventional HAR test with asymptotic P-values rejects the null for most 5-year and 10-year sub-periods. By contrast, the null is not rejected by the new HAR tests. This conflict is explained by showing that inferences based on the conventional HAR test are misleading for the sample sizes used in this application.
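To see what separates the two families of tests, consider this generic sketch of a Bartlett-kernel long-run variance estimator with bandwidth M = bT: the conventional HAR test pairs a small b with standard normal or chi-square critical values, whereas fixed-b theory in the Kiefer-Vogelsang style treats b as fixed (possibly b = 1) and uses nonstandard critical values. The code is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def bartlett_lrv(u, b):
    """Bartlett-kernel long-run variance of a scalar residual series u,
    with bandwidth M = b*T; b near 0 mimics conventional HAC practice,
    b = 1 is the fixed-b extreme."""
    u = np.asarray(u, dtype=float)
    T = len(u)
    u = u - u.mean()
    M = max(int(b * T), 1)
    lrv = u @ u / T                          # lag-0 autocovariance
    for j in range(1, M):
        w = 1.0 - j / M                      # Bartlett weight
        lrv += 2.0 * w * (u[j:] @ u[:-j]) / T
    return lrv
```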

5.
Empirical implementation of nonparametric first-price auction models
Nonparametric estimators provide a flexible means of uncovering salient features of auction data. Although these estimators are popular in the literature, many key features necessary for proper implementation have yet to be worked out. Here we provide several suggestions for nonparametric estimation of first-price auction models. Specifically, we show how to impose monotonicity of the equilibrium bidding strategy, a key property of structural auction models that is not guaranteed in standard nonparametric estimation. We further develop methods for automatic bandwidth selection. Finally, we discuss how to impose monotonicity in auctions with differing numbers of bidders, reserve prices, and auction-specific characteristics. Finite sample performance is examined using simulated data as well as experimental auction data.
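For orientation, the first step of the standard two-step approach such papers build on (in the spirit of Guerre, Perrigne and Vuong) recovers pseudo private values from observed bids via the first-order condition v = b + G(b)/((n−1)g(b)). The sketch below is a bare-bones, hypothetical implementation; the monotonicity constraint the abstract emphasizes could be crudely approximated by sorting (rearranging) the resulting pseudo-values.

```python
import numpy as np
from scipy.stats import gaussian_kde

def gpv_pseudo_values(bids, n_bidders):
    """Recover implied private values from first-price auction bids via
    v = b + G(b) / ((n - 1) * g(b)), with the bid CDF G estimated
    empirically and the bid density g by a Gaussian kernel."""
    bids = np.asarray(bids, dtype=float)
    g_hat = gaussian_kde(bids)                            # kernel density of bids
    G_hat = np.array([np.mean(bids <= b) for b in bids])  # empirical CDF
    return bids + G_hat / ((n_bidders - 1) * g_hat(bids))
```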

6.
This article presents the empirical Bayes method for estimation of the transition probabilities of a generalized finite stationary Markov chain whose ith state is a multi-way contingency table. We use a log-linear model to describe the relationship between factors in each state. Prior knowledge about the main effects and interactions is described by a conjugate prior. Following the Bayesian paradigm, the Bayes and empirical Bayes estimators relative to various loss functions are obtained. These procedures are illustrated by a real example. Finally, asymptotic normality of the empirical Bayes estimators is established.

7.
Recently, using mixed data on Canadian housing, Parmeter, Henderson, and Kumbhakar (Journal of Applied Econometrics 2007; 22: 695–699) found that a nonparametric approach for estimating a hedonic house price function is superior to formerly suggested parametric and semiparametric specifications. We carefully reanalyze these specifications for this dataset by applying a recent nonparametric specification test and simulation-based prediction comparisons. For the case at issue, our results suggest that a previously proposed parametric specification need not be rejected, and we illustrate how nonparametric methods provide valuable insights during all modeling steps.

8.
Estimation with longitudinal Y having nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y,X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity, based on estimating equations constructed using an instrument Z, which is part of X related to Y but unrelated to the dropout propensity conditional on Y and the other covariates. Population means and other parameters in the nonparametric distribution of Y can be estimated by inverse propensity weighting with the estimated propensity. To improve efficiency, we derive a model-assisted regression estimator making use of information provided by the covariates and previously observed Y-values in the longitudinal setting. The model-assisted regression estimator is protected against model misspecification, is asymptotically normal, and is more efficient when the working models are correct and some other conditions are satisfied. The finite-sample performance of the estimators is studied through simulation, and an application to the HIV-CD4 data set is also presented as an illustration.
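As a simplified sketch of the weighting step with a scalar covariate: the inverse-propensity-weighted mean only needs the (possibly nonignorable) response propensity for respondents, whose y is observed. The parameter vector `gamma` is treated as already estimated (the paper obtains it by GMM using the instrument), and the logistic form here is an assumption for illustration.

```python
import numpy as np

def ipw_mean(y, x, gamma):
    """Hajek-type inverse-propensity-weighted mean of y under a parametric
    response propensity pi(x, y; gamma) that may depend on y itself
    (nonignorable dropout).  Missing y's are coded as NaN."""
    r = ~np.isnan(y)                          # response indicator
    yr, xr = y[r], x[r]
    pi = 1.0 / (1.0 + np.exp(-(gamma[0] + gamma[1] * xr + gamma[2] * yr)))
    return np.sum(yr / pi) / np.sum(1.0 / pi)
```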

9.
This paper develops an asymptotic theory for test statistics in linear panel models that are robust to heteroskedasticity, autocorrelation and/or spatial correlation. Two classes of standard errors are analyzed. Both are based on nonparametric heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators. The first class is based on averages of HAC estimators across individuals in the cross-section, i.e. “averages of HACs”. This class includes the well-known cluster standard errors analyzed by Arellano (1987) as a special case. The second class is based on the HAC of cross-section averages and was proposed by Driscoll and Kraay (1998). The “HAC of averages” standard errors are robust to heteroskedasticity, serial correlation and spatial correlation, but weak dependence in the time dimension is required. The “averages of HACs” standard errors are robust to heteroskedasticity and serial correlation, including the nonstationary case, but they are not valid in the presence of spatial correlation. The main contribution of the paper is to develop a fixed-b asymptotic theory for statistics based on both classes of standard errors in models with individual and possibly time fixed-effects dummy variables. The asymptotics is carried out for large time sample sizes, for both fixed and large cross-section sample sizes. Extensive simulations show that the fixed-b approximation is usually much better than the traditional normal or chi-square approximation, especially for the Driscoll–Kraay standard errors. The use of fixed-b critical values will lead to more reliable inference in practice, especially for tests of joint hypotheses.
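The structural contrast between the two classes can be shown in a few lines for a scalar score series: "averages of HACs" applies a kernel estimator individual by individual and then averages, while "HAC of averages" first averages over the cross-section and then applies the kernel estimator. This is a generic sketch, not the paper's full fixed-b machinery.

```python
import numpy as np

def bartlett(u, M):
    """Bartlett-kernel long-run variance of a scalar series."""
    u = np.asarray(u, float) - np.mean(u)
    T = len(u)
    v = u @ u / T
    for j in range(1, M):
        v += 2.0 * (1.0 - j / M) * (u[j:] @ u[:-j]) / T
    return v

def averages_of_hacs(scores, M):
    """HAC per individual, then averaged over the cross-section
    (robust to serial correlation, not to spatial correlation)."""
    return np.mean([bartlett(row, M) for row in scores])   # scores: (N, T)

def hac_of_averages(scores, M):
    """Driscoll-Kraay style: HAC of the cross-section averages
    (also robust to spatial correlation)."""
    return bartlett(scores.mean(axis=0), M)
```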

10.
In the compound decision problem the same decision problem, called the component decision problem, occurs repeatedly. Data from all repetitions are used to reach decisions concerning the parameter values in each component; such rules are called compound decision rules. The compound risk is the average risk across all component decisions. In the analogous empirical Bayes problem the parameters are taken to be independent and identically distributed with unknown distribution. This paper explores the relationship between compound risk admissibility and component risk admissibility in these problems. The definitions are general, and the results, though fairly elementary, are proved only for components with a finite parameter set.

11.
Empirical Bayes methods of estimating the local false discovery rate (LFDR) by maximum likelihood estimation (MLE), originally developed for large numbers of comparisons, are applied to a single comparison. Specifically, when a lower bound is assumed on the mixing proportion of true null hypotheses, the LFDR MLE can yield reliable hypothesis tests and confidence intervals given as few as one comparison. Simulations indicate that constrained LFDR MLEs perform markedly better than conventional methods, both in testing and in confidence intervals, for high values of the mixing proportion, but not for low values. (A decision-theoretic interpretation of the confidence distribution made those comparisons possible.) In conclusion, the constrained LFDR estimators and the resulting effect-size interval estimates are not only effective multiple comparison procedures but might also replace p-values and confidence intervals more generally. The new methodology is illustrated with an analysis of proteomics data.
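A toy version of a constrained estimator for z-statistics under a two-group normal model shows the mechanics; the alternative scale `sigma1` and the lower bound `p0_min` are placeholders, and the paper's actual model and constraint may differ.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def constrained_lfdr(z, sigma1=3.0, p0_min=0.5):
    """Constrained ML estimate of the null proportion p0 in the two-group
    model f(z) = p0*N(0,1) + (1-p0)*N(0, sigma1^2), subject to
    p0 >= p0_min, followed by the plug-in LFDR.  Works even when z
    contains a single comparison."""
    z = np.atleast_1d(z)
    f0, f1 = norm.pdf(z), norm.pdf(z, scale=sigma1)

    def neg_loglik(p0):
        return -np.sum(np.log(p0 * f0 + (1 - p0) * f1))

    p0_hat = minimize_scalar(neg_loglik, bounds=(p0_min, 1.0),
                             method="bounded").x
    return p0_hat * f0 / (p0_hat * f0 + (1 - p0_hat) * f1)
```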

12.
p-Values are commonly transformed to lower bounds on Bayes factors, so-called minimum Bayes factors. For the linear model, a sample-size adjusted minimum Bayes factor over the class of g-priors on the regression coefficients has recently been proposed (Held & Ott, The American Statistician 70(4), 335–341, 2016). Here, we extend this methodology to logistic regression to obtain a sample-size adjusted minimum Bayes factor for 2 × 2 contingency tables. We then study the relationship between this minimum Bayes factor and two-sided p-values from Fisher's exact test, as well as less conservative alternatives, with a novel parametric regression approach. It turns out that for all p-values considered, the maximal evidence against the point null hypothesis is inversely related to the sample size. The same qualitative relationship is observed for minimum Bayes factors over the more general class of symmetric prior distributions. For the p-values from Fisher's exact test, the minimum Bayes factors do not, on average, tend to the large-sample bound as the sample size becomes large, but for the less conservative alternatives, the large-sample behaviour is as expected.
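For readers unfamiliar with the general idea, the classic p-to-minimum-Bayes-factor transformation of Sellke, Bayarri and Berger (2001) is a two-liner; note this is the well-known −e·p·log p bound, not the sample-size adjusted bound developed in the paper.

```python
import numpy as np

def min_bayes_factor(p):
    """Lower bound on the Bayes factor for the point null implied by a
    p-value: BF >= -e * p * ln(p), valid for p < 1/e (Sellke, Bayarri
    & Berger 2001); smaller values mean stronger evidence against H0."""
    p = np.asarray(p, dtype=float)
    return np.where(p < 1.0 / np.e, -np.e * p * np.log(p), 1.0)
```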

13.
This study presents a citation-based systematic literature review on banking sector performance, particularly in terms of profitability, productivity, and efficiency. Specifically, the study aims to identify the leading sources of knowledge in terms of the most influential journals, authors, and papers. The paper presents a content analysis of the 100 most cited papers. In total, 1,996 peer-reviewed papers were found relevant in the Scopus database using a comprehensive list of keywords. The results show that the Journal of Banking & Finance is the leading journal in terms of publication count and citations. Based on total citations, Allen Berger is the most prolific author. The most cited paper is “Problem loans and cost efficiency in commercial banks” by Allen Berger and Robert DeYoung. The content analysis of the top 100 papers identifies five essential themes: determinants of efficiency, methodology, ownership, financial crises, and scale economies. In terms of estimation approaches, 74% of the papers employed frontier analysis (34% parametric and 40% nonparametric methods), while the remaining 26% used financial ratio analysis. Stochastic frontier analysis and data envelopment analysis are the most widely used parametric and nonparametric methods, respectively. The intermediation approach is the one most commonly adopted for the specification of inputs and outputs.

14.
In this paper, we provide an intensive review of the recent developments for semiparametric and fully nonparametric panel data models that are linearly separable in the innovation and the individual-specific term. We analyze these developments under two alternative model specifications: fixed and random effects panel data models. More precisely, in the random effects setting, we focus our attention on the analysis of some efficiency issues related to the so-called working independence condition. This assumption is introduced when estimating the asymptotic variance–covariance matrix of nonparametric estimators. In the fixed effects setting, to cope with the so-called incidental parameters problem, we consider two different estimation approaches: profiling techniques and differencing methods. Furthermore, we are also interested in the endogeneity problem and how instrumental variables are used in this context. In addition, for practitioners, we show different ways of avoiding the so-called curse of dimensionality in pure nonparametric models. In this way, semiparametric and additive models appear as a solution when the number of explanatory variables is large.

15.
The familiar logit and probit models provide convenient settings for many binary response applications, but a larger class of link functions may occasionally be desirable. Two parametric families of link functions are investigated: the Gosset link, based on the Student t latent variable model with the degrees-of-freedom parameter controlling the tail behavior, and the Pregibon link, based on the (generalized) Tukey λ family, with two shape parameters controlling skewness and tail behavior. Both Bayesian and maximum likelihood methods for estimation and inference are explored, compared and contrasted. In applications, like the propensity score matching problem discussed below, where it is critical to have accurate estimates of the conditional probabilities, we find that misspecification of the link function can create serious bias. Bayesian point estimation via MCMC performs quite competitively with MLE methods; however, nominal coverage of Bayes credible regions is somewhat more problematic.
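A minimal sketch of ML estimation with the Gosset link, holding the degrees of freedom fixed for simplicity (the paper also treats the degrees-of-freedom parameter as something to estimate); all names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def fit_gosset_link(X, y, nu=8.0):
    """Binary response MLE with a Student-t CDF link ('Gosset link');
    nu controls the tail behaviour and is held fixed in this sketch."""
    def neg_loglik(beta):
        p = student_t.cdf(X @ beta, df=nu)
        p = np.clip(p, 1e-12, 1 - 1e-12)      # guard the logs
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    beta0 = np.zeros(X.shape[1])
    return minimize(neg_loglik, beta0, method="BFGS").x
```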

16.
Calculation, Properties and Applications of the Locational Gini Coefficient
Based on industry size, this paper explores simplified calculation, regional decomposition, and two-digit industry decomposition of the locational Gini coefficient. It derives decomposition formulae for locational Gini coefficients weighted by industry share, by location quotient, and by the spatial measure of industry share, and provides formulae for relative marginal effects and incremental decomposition. Using data from China's 2004 and 2008 economic censuses, the locational Gini coefficients of the 19 letter-coded sectors of the national economy are calculated. The results indicate that the locational Gini coefficient based on industry share better reflects the degree of industrial agglomeration, and an empirical analysis is carried out using this measure.
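For reference, a basic unweighted locational Gini coefficient over location quotients can be computed from the pairwise-difference form of the Gini index; the paper's weighted versions and decompositions build on this same form, and the variable names below are illustrative.

```python
import numpy as np

def locational_gini(industry_emp, total_emp):
    """Unweighted locational Gini of one industry across regions, based on
    location quotients LQ_i = (region i's share of the industry) /
    (region i's share of total activity)."""
    s_ind = industry_emp / industry_emp.sum()   # industry shares by region
    s_tot = total_emp / total_emp.sum()         # total-activity shares
    lq = s_ind / s_tot                          # location quotients
    n = len(lq)
    pairwise = np.abs(lq[:, None] - lq[None, :]).sum()
    return pairwise / (2 * n * n * lq.mean())   # Gini: mean |x_i - x_j| / (2 * mean)
```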

17.
We address the problem of the estimation of the population mean and the distribution function using nonparametric regression. These methods are being used in a wide range of settings and areas of research. In particular, they are a good alternative to other classical methods in the survey sampling context, since they work under the assumption that the underlying regression function is smooth. Some relevant nonparametric regression methods in survey sampling are presented. Data on breast cancer prevalence from 40 European countries are used to study the application of the nonparametric estimators to the estimation of cancer prevalence. Results from an empirical study show that the nonparametric estimators perform well in this application.
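A compact sketch of the kind of estimator involved: a Nadaraya-Watson kernel regression fit plugged into a generic model-assisted mean, assuming for simplicity equal inclusion probabilities and a scalar auxiliary variable known for the whole population. Names and the bandwidth are illustrative.

```python
import numpy as np

def nw_fit(x_s, y_s, x_eval, h):
    """Nadaraya-Watson estimate of m(x) = E[Y | X = x] with a Gaussian
    kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_s[None, :]) / h) ** 2)
    return (w @ y_s) / w.sum(axis=1)

def model_assisted_mean(x_s, y_s, x_pop, h):
    """Average the smooth fit over the population x's, then correct by the
    mean sample residual (equal inclusion probabilities assumed)."""
    resid = y_s - nw_fit(x_s, y_s, x_s, h)
    return nw_fit(x_s, y_s, x_pop, h).mean() + resid.mean()
```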

18.
Nonparametric methodologies are proposed to assess college students' performance, with emphasis on gender and the sector of the high school attended. The application concerns the University of Campinas, a research university in Southeast Brazil. In Brazil, college studies are based on a somewhat rigid set of subjects for each major; for this reason a simple GPA comparison may hide true performance. We therefore define individual vectors of course grades, which are used in pairwise comparisons of common subject grades for individuals who entered college in the same year. The relative college performances of any two students are compared with their relative performances on the entrance exam score. A procedure based on generalized U-statistics is developed to test whether the entrance exam exhibits selection bias with respect to some predefined groups; the test statistic is asymptotically normal under both the null and alternative hypotheses. Maximum power is attained by employing the union-intersection principle, and resampling techniques such as the nonparametric bootstrap are employed to generate the empirical distribution of the test statistics and obtain p-values.

19.
Two alternative approaches to efficiency measurement, nonparametric and statistical, are employed to calculate three types of efficiency indexes for the U.S. beer industry over the period 1950–1986. The results indicate that the beer industry was operating at a high level of pure technical efficiency over that period. The mean value of this efficiency measure is 93.7 percent based on the nonparametric approach and 87.5 percent based on the statistical approach. The two approaches yield dissimilar values and sources for overall technical inefficiency. The overall technical efficiency index computed under the nonparametric approach stands at 91.10 percent, and the observed inefficiency is due more to pure technical inefficiency than to scale inefficiency. Using the statistical approach, the beer industry is found to be less overall technically efficient (68.42 percent) than indicated by the nonparametric methodology, and the observed inefficiency is primarily attributable to scale inefficiency.
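As a sketch of the nonparametric side, input-oriented CCR data envelopment analysis solves one linear program per observation; adding the convexity constraint Σλ = 1 would give the variable-returns (BCC) score, whose ratio to the CCR score isolates scale efficiency. This generic sketch is not the authors' specific implementation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores under constant returns to
    scale.  X is an (n, m) input matrix, Y an (n, s) output matrix; one
    LP per decision-making unit, decision variables (theta, lambda)."""
    n, m = X.shape
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # minimize theta
        A_in = np.hstack([-X[o][:, None], X.T])        # sum_j lam_j x_ij <= theta x_io
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # outputs >= y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0.0, None)] * n)
        scores[o] = res.fun
    return scores
```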

20.
This paper uses survey-based data for the Argentinian province of Córdoba to conduct an empirical test of the performance of Flegg's location quotient (FLQ) and the augmented FLQ (AFLQ) formulae for estimating regional input coefficients. A comparison is made with conventional methods based on location quotients. The possibility of using prior information about the extent of self-sufficiency of particular sectors is explored. The empirical work employs a range of statistical criteria with contrasting properties, and examines performance in terms of each method's ability to estimate regional input coefficients, output multipliers and imports. Particular attention is paid to the problem of choosing a value for the unknown parameter δ in the FLQ and AFLQ formulae. These formulae are found to give the best overall results of the non-survey methods considered in the paper. However, the AFLQ typically produces slightly more accurate results than the FLQ, in line with the findings of previous studies.
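For concreteness, a minimal sketch of the FLQ adjustment as it is usually presented: regional coefficients are obtained by scaling the national coefficients with cross-industry location quotients, damped by a regional-size factor raised to the power δ. The employment-based location quotients and the δ value below are illustrative, not the paper's calibration.

```python
import numpy as np

def flq_coefficients(A, emp_reg, emp_nat, delta=0.3):
    """Regionalize a national input-output coefficient matrix A via the
    FLQ formula r_ij = a_ij * min(FLQ_ij, 1), where
    FLQ_ij = CILQ_ij * (log2(1 + regional/national employment))**delta,
    with the simple LQ used on the diagonal."""
    lq = (emp_reg / emp_reg.sum()) / (emp_nat / emp_nat.sum())  # sectoral SLQs
    cilq = lq[:, None] / lq[None, :]          # cross-industry LQs: LQ_i / LQ_j
    np.fill_diagonal(cilq, lq)                # diagonal uses the SLQ itself
    lam = np.log2(1.0 + emp_reg.sum() / emp_nat.sum()) ** delta
    return A * np.minimum(cilq * lam, 1.0)
```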
