Similar Articles
1.
Small area estimation (SAE) entails estimating characteristics of interest for domains, often geographical areas, in which there may be few or no samples available. SAE has a long history and a wide variety of methods have been suggested, from a bewildering range of philosophical standpoints. We describe design-based and model-based approaches and models that are specified at the area level and at the unit level, focusing on health applications and fully Bayesian spatial models. The use of auxiliary information is a key ingredient for successful inference when response data are sparse, and we discuss a number of approaches that allow the inclusion of covariate data. SAE for HIV prevalence, using data collected from a Demographic Health Survey in Malawi in 2015–2016, is used to illustrate a number of techniques. The potential use of SAE techniques for outcomes related to coronavirus disease 2019 is discussed.
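The area-level shrinkage idea behind much model-based SAE can be sketched in a few lines. This is a simplified composite estimator whose weights follow the Fay–Herriot form, not the fully Bayesian spatial model of the abstract; the function name and inputs (`direct`, `synthetic`, the two variances) are illustrative.

```python
def fay_herriot_shrinkage(direct, var_direct, synthetic, sigma2_v):
    # Composite estimate for one area: shrink the noisy direct survey
    # estimate toward a covariate-based synthetic estimate. The weight
    # gamma grows with the between-area variance sigma2_v and shrinks
    # as the direct estimate's sampling variance grows.
    gamma = sigma2_v / (sigma2_v + var_direct)
    return gamma * direct + (1.0 - gamma) * synthetic
```

With a very noisy direct estimate the composite collapses to the synthetic value; with a precise one it reproduces the survey estimate.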

2.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood, or several marginal likelihoods, for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset variable models must be computed. The proposed estimates use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require knowledge of the specific structure or form of the MCMC sampling algorithm used to generate the sample. The theoretical properties of the proposed method are examined in detail. The applicability and usefulness of the proposed method are demonstrated via ordinal data probit regression models. A real dataset involving ordinal outcomes is used to further illustrate the proposed methodology.
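For contrast with single-chain estimators like those proposed here, the most naive Monte Carlo marginal-likelihood estimate samples from the prior and averages the likelihood. A toy conjugate sketch (binomial data with a uniform prior, where the exact marginal likelihood is 1/(n+1)) — not the authors' method:

```python
import math
import random

def marginal_likelihood_prior_mc(y, n, n_draws=50_000, seed=1):
    # Naive estimator: p(y) ~ (1/M) * sum_m p(y | theta_m), with theta_m
    # drawn from the Uniform(0, 1) prior. Inefficient when prior and
    # posterior disagree, which is what motivates MCMC-based estimators.
    rng = random.Random(seed)
    binom = math.comb(n, y)
    total = 0.0
    for _ in range(n_draws):
        theta = rng.random()
        total += binom * theta ** y * (1.0 - theta) ** (n - y)
    return total / n_draws
```

For y successes in n trials under a uniform prior, the true marginal likelihood is 1/(n+1), so the estimate can be checked exactly.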

3.
In the Bayesian approach to model selection and hypothesis testing, the Bayes factor plays a central role. However, the Bayes factor is very sensitive to prior distributions of parameters. This is a problem especially in the presence of weak prior information on the parameters of the models. The most radical consequence of this fact is that the Bayes factor is undetermined when improper priors are used. Nonetheless, extending the non-informative approach of Bayesian analysis to model selection/testing procedures is important both from a theoretical and an applied viewpoint. The need to develop automatic and robust methods for model comparison has led to the introduction of several alternative Bayes factors. In this paper we review one of these methods: the fractional Bayes factor (O'Hagan, 1995). We discuss general properties of the method, such as consistency and coherence. Furthermore, in addition to the original, essentially asymptotic justifications of the fractional Bayes factor, we provide further finite-sample motivations for its use. Connections and comparisons to other automatic methods are discussed and several issues of robustness with respect to priors and data are considered. Finally, we focus on some open problems in the fractional Bayes factor approach, and outline some possible answers and directions for future research.
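In the simplest textbook setting the fractional Bayes factor has a closed form. For N(mu, 1) data with an improper flat prior on mu, testing H0: mu = 0 against H1: mu free, integrating the b-th power of the likelihood gives B10 = sqrt(b) * exp((1 - b) * n * ybar^2 / 2). A small sketch of this standard illustrative case (not an example taken from the paper):

```python
import math

def fractional_bayes_factor_normal_mean(ys, b=None):
    # FBF for H1: mu free vs H0: mu = 0, with N(mu, 1) data and an
    # improper flat prior on mu. The fraction b of the likelihood acts
    # as a training sample; b = 1/n is a minimal choice.
    n = len(ys)
    if b is None:
        b = 1.0 / n
    ybar = sum(ys) / n
    return math.sqrt(b) * math.exp((1.0 - b) * n * ybar ** 2 / 2.0)
```

With b = 1/n, data concentrated at zero yield B10 < 1 (evidence for the null), while a large sample mean makes B10 grow rapidly in favour of H1.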

4.
Factor analysis models are used in data dimensionality reduction problems where the variability among observed variables can be described through a smaller number of unobserved latent variables. This approach is often used to estimate the multidimensionality of well-being. We employ factor analysis models and use the multivariate empirical best linear unbiased predictor (EBLUP) under a unit-level small area estimation approach to predict a vector of means of factor scores representing well-being for small areas. We compare this approach with the standard approach whereby we use small area estimation (univariate and multivariate) to estimate a dashboard of EBLUPs of the means of the original variables, which are then averaged. Our simulation study shows that the use of factor scores provides estimates with lower variability than weighted and simple averages of standardised multivariate EBLUPs and univariate EBLUPs. Moreover, we find that when the correlation in the observed data is taken into account before small area estimates are computed, multivariate modelling does not provide large improvements in the precision of the estimates over univariate modelling. We close with an application using the European Union Statistics on Income and Living Conditions data.

5.
Current economic theory typically assumes that all the macroeconomic variables belonging to a given economy are driven by a small number of structural shocks. As recently argued, apart from negligible cases, the structural shocks can be recovered if the information set contains current and past values of a large, potentially infinite, set of macroeconomic variables. However, the usual practice of estimating small size causal Vector AutoRegressions can be extremely misleading as in many cases such models could fully recover the structural shocks only if future values of the few variables considered were observable. In other words, the structural shocks may be non‐fundamental with respect to the small dimensional vector used in current macroeconomic practice. By reviewing a recent strand of econometric literature, we show that, as a solution, econometricians should enlarge the space of observations, and thus consider models able to handle very large panels of related time series. Among several alternatives, we review dynamic factor models together with their economic interpretation, and we show how non‐fundamentalness is non‐generic in this framework. Finally, using a factor model, we provide new empirical evidence on the effect of technology shocks on labour productivity and hours worked.

6.
Two‐state models (working/failed or alive/dead) are widely used in reliability and survival analysis. In contrast, multi‐state stochastic processes provide a richer framework for modeling and analyzing the progression of a process from an initial to a terminal state, allowing incorporation of more details of the process mechanism. We review multi‐state models, focusing on time‐homogeneous semi‐Markov processes (SMPs), and then describe the statistical flowgraph framework, which comprises analysis methods and algorithms for computing quantities of interest such as the distribution of first passage times to a terminal state. These algorithms algebraically combine integral transforms of the waiting time distributions in each state and invert them to get the required results. The estimated transforms may be based on parametric distributions or on empirical distributions of sample transition data, which may be censored. The methods are illustrated with several applications.

7.
Multivariate Clustered Data Analysis in Developmental Toxicity Studies
In this paper we review statistical methods for analyzing developmental toxicity data. Such data raise a number of challenges. Models that try to accommodate the complex data-generating mechanism of a developmental toxicity study should take into account the litter effect and describe the number of viable fetuses, malformation indicators, weight and clustering as a function of exposure. Further, the size of the litter may be related to outcomes among live fetuses. Scientific interest may focus on inference about the dose effect, the implications of model misspecification, the assessment of model fit, and the calculation of derived quantities such as safe limits. We describe the relative merits of conditional, marginal and random-effects models for multivariate clustered binary data and present joint models for both continuous and discrete data.

8.
9.
Measuring differences in the economic standard of living between natives and other ethnic groups can inform us about relative disadvantages and inequalities within Italian society. Despite the importance of this question, measuring this gap is not easy because, under the usual design-based approach to survey sampling inference, the available micro-data lack the sample sizes needed to obtain reliable estimates for the majority of immigrant communities. In this paper, we show that small area estimation (SAE) techniques can be applied fruitfully to avoid this issue. In particular, we use an approach based on M-quantile regression to estimate the economic standard of living in each community in Italy. Our findings highlight economic disparities between natives and other ethnic groups and suggest the need for specific policies that target the most vulnerable immigrant communities and are designed to improve their economic standard of living.

10.
Different change point models for AR(1) processes are reviewed. For some models, the change is in the distribution conditional on earlier observations; for others, the change is in the unconditional distribution. Some models include an observation before the first possible change time, others do not. Earlier and new CUSUM-type methods are given, and minimax optimality is examined. For the conditional model with an observation before the possible change, sharp optimality results exist in the literature. The unconditional model with a possible change at (or before) the first observation is of interest for applications. We examine this case and derive new variants of four earlier suggestions. Numerical methods and Monte Carlo simulations demonstrate that the new variants dominate the original ones. However, none of the methods is uniformly minimax optimal.
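A minimal one-sided CUSUM recursion, of the kind the reviewed methods build on, can be sketched as follows (the reference value k and threshold h are illustrative; real designs tune both to the AR(1) structure):

```python
def cusum_alarm(xs, k=0.5, h=4.0):
    # One-sided CUSUM: accumulate positive drift beyond the reference
    # value k, reset at zero, and alarm once the statistic exceeds h.
    # Returns the index of the first alarm, or None if none occurs.
    s = 0.0
    for t, x in enumerate(xs):
        s = max(0.0, s + x - k)
        if s > h:
            return t
    return None
```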

11.
This article surveys various strategies for modeling ordered categorical (ordinal) response variables when the data have some type of clustering, extending a similar survey for binary data by Pendergast, Gange, Newton, Lindstrom, Palta & Fisher (1996). An important special case is when repeated measurement occurs at various occasions for each subject, such as in longitudinal studies. A much greater variety of models and fitting methods is available than when a similar survey for repeated ordinal response data was prepared a decade ago (Agresti, 1989). The primary emphasis of the review is on two classes of models: marginal models, for which effects are averaged over all clusters at particular levels of predictors, and cluster-specific models, for which effects apply at the cluster level. We present the two types of models in the ordinal context, review the literature for each, and discuss connections between them. Then, we summarize some alternative modeling approaches and ways of estimating parameters, including a Bayesian approach. We also discuss applications and areas likely to be popular for future research, such as ways of handling missing data and ways of modeling agreement and evaluating the accuracy of diagnostic tests. Finally, we review the current availability of software for using the methods discussed in this article.
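The marginal and cluster-specific models surveyed here mostly build on the cumulative logit (proportional odds) form, which for a single observation looks like this (a generic sketch, not a specific model from the survey; `cutpoints` must be increasing):

```python
import math

def cumulative_logit_probs(cutpoints, eta):
    # Proportional-odds model: P(Y <= j) = logistic(c_j - eta), with a
    # single linear predictor eta shifting all category probabilities
    # together. Returns the vector of category probabilities.
    cdf = [1.0 / (1.0 + math.exp(-(c - eta))) for c in cutpoints] + [1.0]
    return [cdf[0]] + [cdf[j] - cdf[j - 1] for j in range(1, len(cdf))]
```

A larger eta moves probability mass toward the higher ordinal categories, which is the sense in which one coefficient summarizes the whole response distribution.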

12.
In this paper, we focus on the different methods which have been proposed in the literature to date for dealing with mixed-frequency and ragged-edge datasets: bridge equations, mixed-data sampling (MIDAS), and mixed-frequency VAR (MF-VAR) models. We discuss their performance in nowcasting the quarterly growth rate of Euro area GDP and its components, using a very large set of monthly indicators. We investigate the behavior of single indicator models, forecast combinations and factor models in a pseudo real-time framework. MIDAS with an AR component performs quite well and outperforms MF-VAR at most horizons. Bridge equations perform well overall. Forecast pooling is superior to most of the single indicator models overall. Pooling information using factor models gives even better results. The best results are obtained for the components for which more economically related monthly indicators are available. Nowcasts of GDP components can then be combined to obtain nowcasts for total GDP growth.
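The distinguishing ingredient of MIDAS regressions is a parsimonious lag polynomial over the high-frequency indicator. A common choice is the exponential Almon weighting, sketched here (the parameter values in the test are illustrative, not taken from the paper):

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    # Exponential Almon lag polynomial used in MIDAS regressions:
    # w_k proportional to exp(theta1*k + theta2*k^2), normalised to sum
    # to one, so two parameters govern an arbitrarily long lag profile.
    raw = [math.exp(theta1 * k + theta2 * k * k)
           for k in range(1, n_lags + 1)]
    s = sum(raw)
    return [r / s for r in raw]
```

With theta2 < 0 the weights decay smoothly in the lag, which is what keeps the monthly-to-quarterly mapping parsimonious.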

13.
Goodman (1972) proposed several models for the analysis of general I × I square tables, with particular emphasis on social mobility data. We demonstrate in this paper that most of his models can be reproduced by combinations of new models proposed here and the various well-known models that have received considerable attention in the literature. Our presentation is both concise and simple to comprehend. The models considered in this study are fitted to ten data sets, including the much-analyzed 5 × 5 Danish and British social mobility data sets. Results suggest that, in some cases, more parsimonious models than those considered earlier by various authors are possible for explaining the variation in the data analyzed in this study.

14.
Sequential Data Assimilation Techniques in Oceanography
We review recent developments of sequential data assimilation techniques used in oceanography to integrate spatio-temporal observations into numerical models describing physical and ecological dynamics. Theoretical aspects, from the simple case of linear dynamics to the general case of nonlinear dynamics, are described from a geostatistical point of view. Current methods derived from the Kalman filter are presented from the least complex to the most general, and perspectives for nonlinear estimation by sequential importance resampling filters are discussed. Furthermore, an extension of the ensemble Kalman filter to transformed Gaussian variables is presented and illustrated using a simplified ecological model. The described methods are designed for predicting over geographical regions at high spatial resolution under the practical constraint of keeping computing time low enough to obtain the prediction before the fact. The paper therefore focuses on widely used and computationally efficient methods.
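The ensemble Kalman filter's analysis step, in its simplest scalar form with an identity observation operator, can be sketched as follows (a toy illustration of the stochastic update, not the transformed-Gaussian extension described above):

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_var, seed=0):
    # EnKF analysis step for a scalar state observed directly: each
    # member is pulled toward a stochastically perturbed observation by
    # the Kalman gain computed from the forecast-ensemble spread.
    rng = random.Random(seed)
    p_f = statistics.variance(ensemble)      # forecast variance
    gain = p_f / (p_f + obs_var)             # scalar Kalman gain
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]
```

A precise observation (small `obs_var`) drags the whole ensemble close to the observed value; a vague one leaves the forecast ensemble nearly untouched.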

15.
Longitudinal methods have been widely used in biomedicine and epidemiology to study the patterns of time-varying variables, such as disease progression or trends of health status. Data sets of longitudinal studies usually involve repeatedly measured outcomes and covariates on a set of randomly chosen subjects over time. An important goal of statistical analyses is to evaluate the effects of the covariates, which may or may not depend on time, on the outcomes of interest. Because fully parametric models may be subject to model misspecification and completely unstructured nonparametric models may suffer from the "curse of dimensionality", varying-coefficient models are a class of structural nonparametric models which are particularly useful in longitudinal analyses. In this article, we present several important nonparametric estimation and inference methods for this class of models, demonstrate the advantages, limitations and practical implementations of these methods in different longitudinal settings, and discuss some potential directions of further research in this area. Applications of these methods are illustrated through two epidemiological examples.
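The basic estimation idea for a varying-coefficient model y = β(t)x + ε is kernel-weighted least squares at each target time point. A minimal sketch with a Gaussian kernel (the function name, bandwidth h, and the single-covariate setup are illustrative simplifications):

```python
import math

def local_vc_fit(ts, xs, ys, t0, h):
    # Estimate beta(t0) in y_i = beta(t_i) * x_i + noise by weighted
    # least squares, with Gaussian kernel weights centred at t0 so that
    # only observations near t0 influence the local coefficient.
    w = [math.exp(-((t - t0) / h) ** 2 / 2.0) for t in ts]
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    den = sum(wi * xi * xi for wi, xi in zip(w, xs))
    return num / den
```

Repeating the fit over a grid of t0 values traces out the whole coefficient curve, which is how these models avoid a global parametric form.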

16.
This is a review article that unifies several important examples using constrained optimisation techniques. The basic tools are three simple mathematical optimisation results subject to certain constraints. Applications include calibration, benchmarking in small area estimation and imputation. A final illustration is constrained optimisation under a general divergence loss.
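The flavour of these constrained-optimisation results can be seen in the classic linear calibration problem: minimise a chi-square distance between new and design weights subject to hitting a known benchmark total. The single-constraint case has a closed-form Lagrange solution (an illustrative sketch, not one of the article's three results verbatim):

```python
def calibrate_weights(d, x, total):
    # Minimise sum((w_i - d_i)^2 / d_i) subject to sum(w_i * x_i) = total.
    # The Lagrange multiplier gives w_i = d_i * (1 + lambda * x_i) with
    # lambda chosen so the benchmark constraint holds exactly.
    tx = sum(di * xi for di, xi in zip(d, x))
    txx = sum(di * xi * xi for di, xi in zip(d, x))
    lam = (total - tx) / txx
    return [di * (1.0 + lam * xi) for di, xi in zip(d, x)]
```

The calibrated weights reproduce the known auxiliary total exactly while staying as close as possible to the design weights.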

17.
In this article, we propose a mean linear regression model in which the response variable is inverse gamma distributed, using a new parameterization of this distribution that is indexed by mean and precision parameters. The main advantage of the new parametrization is the straightforward interpretation of the regression coefficients in terms of the expectation of the positive response variable, as usual in the context of generalized linear models. The variance function of the proposed model has a quadratic form. The inverse gamma distribution is a member of the exponential family of distributions and includes some distributions commonly used for parametric models in survival analysis as special cases. We compare the proposed model to several alternatives and illustrate its advantages and usefulness. With a generalized linear model approach that takes advantage of exponential family properties, we discuss model estimation (by maximum likelihood), further inferential quantities and diagnostic tools. A Monte Carlo experiment is conducted to evaluate the performance of these estimators in finite samples, with a discussion of the results obtained. A real application, using a minerals data set collected by the Department of Mines of the University of Atacama, Chile, demonstrates the practical potential of the proposed model.
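One way to index the inverse gamma distribution by a mean and a precision-like parameter — a hypothetical parameterization for illustration; the paper defines its own — is to set shape = φ + 1 and scale = μφ with φ > 1, so that E[Y] = scale/(shape − 1) = μ and Var[Y] = μ²/(φ − 1), a quadratic variance function as the abstract describes:

```python
import math

def invgamma_logpdf_mean_param(y, mu, phi):
    # Log-density of InvGamma(shape = phi + 1, scale = mu * phi):
    # the mean is scale/(shape - 1) = mu, the variance mu^2/(phi - 1).
    alpha, beta = phi + 1.0, mu * phi
    return (alpha * math.log(beta) - math.lgamma(alpha)
            - (alpha + 1.0) * math.log(y) - beta / y)
```

Because μ enters directly, a log link log(μ) = xᵀβ would give coefficients interpretable on the scale of the expected response, as in other generalized linear models.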

18.
Phenomena such as the Great Moderation have increased the attention of macroeconomists towards models where shock processes are not (log-)normal. This paper studies a class of discrete-time rational expectations models where the variance of exogenous innovations is subject to stochastic regime shifts. We first show that, up to a second-order approximation using perturbation methods, regime switching in the variances has an impact only on the intercept coefficients of the decision rules. We then demonstrate how to derive the exact model likelihood for the second-order approximation of the solution when there are as many shocks as observable variables. We illustrate the applicability of the proposed solution and estimation methods in the case of a small DSGE model.

19.
杜诗晨, 汪飞星. 《价值工程》 (Value Engineering), 2007, 26(4): 161-165
Financial time series exhibit heavy-tailed distributions and volatility clustering, which make their risk difficult to measure accurately with traditional methods. This paper applies a new two-stage method for estimating VaR and ES. First, GARCH-M-type models (GARCH-M, EGARCH-M and TGARCH-M) are fitted to the raw return data to obtain residual series; second, extreme value analysis is applied to the tails of the residuals, yielding dynamic VaR and ES estimates for the return series. Finally, the results from the three models are compared.
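The second, extreme-value stage can be illustrated with the simplest tail-index tool, the Hill estimator, applied here to iid heavy-tailed losses. This sketches only the EVT step on raw data; the paper first filters returns with GARCH-M-type models and applies EVT to the residuals.

```python
import math
import random

def hill_var(losses, k, p):
    # Hill tail-index estimate gamma from the k largest losses, then a
    # Weissman-type quantile extrapolation for the tail VaR:
    # VaR_p = X_(k+1) * (k / (n * p)) ** gamma.
    xs = sorted(losses, reverse=True)
    n = len(xs)
    gamma = sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
    return xs[k] * (k / (n * p)) ** gamma
```

On exact Pareto(α = 2) losses the true 0.1% VaR is sqrt(1000) ≈ 31.6, which the estimator recovers closely from a large sample.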

20.
A survey of models used for forecasting exchange rates and inflation reveals that the factor‐based and time‐varying parameter or state space models generate superior forecasts relative to all other models. This survey also finds that models based on Taylor rule and portfolio balance theory have moderate predictive power for forecasting exchange rates. The evidence on the use of Bayesian Model Averaging approach in forecasting exchange rates reveals limited predictive power, but strong support for forecasting inflation. Overall, the evidence overwhelmingly points to the context of the forecasts, relevance of the historical data, data transformation, choice of the benchmark, selected time horizons, sample period and forecast evaluation methods as the crucial elements in selecting forecasting models for exchange rate and inflation.

