Similar Documents (20 results found, search time 15 ms)
1.
The construction of an importance density for partially non‐Gaussian state space models is crucial when simulation methods are used for likelihood evaluation, signal extraction, and forecasting. The method of efficient importance sampling is successful in this respect, but we show that it can be implemented in a computationally more efficient manner using standard Kalman filter and smoothing methods. Efficient importance sampling is generally applicable for a wide range of models, but it is typically a custom‐built procedure. For the class of partially non‐Gaussian state space models, we present a general method for efficient importance sampling. Our novel method makes the efficient importance sampling methodology more accessible because it does not require the computation of a (possibly) complicated density kernel that needs to be tracked for each time period. The new method is illustrated for a stochastic volatility model with a Student's t distribution.
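As a point of reference (not the paper's implementation), the following minimal Python sketch evaluates the likelihood of a toy local‐level model with Student's t observation noise by naive importance sampling, using the state‐transition prior as the importance density; efficient importance sampling instead builds a Gaussian importance density via Kalman filter and smoother recursions, which sharply reduces the weight variance. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy partially non-Gaussian state space model (values are illustrative):
#   state:       alpha_t = phi * alpha_{t-1} + eta_t,  eta_t ~ N(0, sig_eta^2)
#   observation: y_t     = alpha_t + eps_t,            eps_t ~ Student-t(nu)
phi, sig_eta, nu, T = 0.95, 0.2, 5.0, 200

alpha = np.zeros(T)
for t in range(1, T):
    alpha[t] = phi * alpha[t - 1] + sig_eta * rng.standard_normal()
y = alpha + stats.t.rvs(df=nu, size=T, random_state=rng)

def loglik_is(y, phi, sig_eta, nu, n_draws=2000):
    """Naive importance-sampling estimate of the log-likelihood.

    The importance density is the state-transition prior, so each path's
    weight is the product of the Student-t observation densities along it.
    EIS replaces this prior by a Gaussian density constructed with Kalman
    filtering and smoothing, which drastically lowers weight variance.
    """
    T = len(y)
    paths = np.zeros((n_draws, T))
    for t in range(1, T):
        paths[:, t] = phi * paths[:, t - 1] + sig_eta * rng.standard_normal(n_draws)
    logw = stats.t.logpdf(y - paths, df=nu).sum(axis=1)  # path log-weights
    m = logw.max()                                       # log-sum-exp trick
    return m + np.log(np.mean(np.exp(logw - m)))

print("IS log-likelihood estimate:", loglik_is(y, phi, sig_eta, nu))
```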

2.
This paper derives a procedure for simulating continuous non‐normal distributions with specified L‐moments and L‐correlations in the context of power method polynomials of order three. It is demonstrated that the proposed procedure has computational advantages over the traditional product‐moment procedure in terms of solving for intermediate correlations. Simulation results also demonstrate that the proposed L‐moment‐based procedure is an attractive alternative to the traditional procedure when distributions with more severe departures from normality are considered. Specifically, estimates of L‐skew and L‐kurtosis are superior to the conventional estimates of skew and kurtosis in terms of both relative bias and relative standard error. Further, the L‐correlation is also demonstrated to be less biased and more stable than the Pearson correlation. It is also shown how the proposed L‐moment‐based procedure can be extended to the larger class of power method distributions associated with polynomials of order five.
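For readers unfamiliar with the order‐three power method underlying this paper, the sketch below solves the classical product‐moment Fleishman system and simulates from the resulting polynomial. The paper's contribution is to replace this product‐moment system with one matching L‐moments, which is not reproduced here; the target skew/kurtosis values are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

def fleishman_coeffs(skew, exkurt):
    """Solve the classical (product-moment) Fleishman equations for the
    order-3 power method Y = a + b*Z + c*Z**2 + d*Z**3, Z ~ N(0, 1), so
    that Y has mean 0, variance 1, and the given skew / excess kurtosis.
    The paper replaces this product-moment system with an L-moment one."""
    def eqs(p):
        b, c, d = p
        return [
            b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,                       # variance
            2*c * (b**2 + 24*b*d + 105*d**2 + 2) - skew,               # skew
            24*(b*d + c**2*(1 + b**2 + 28*b*d)
                + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - exkurt,  # ex. kurtosis
        ]
    b, c, d = fsolve(eqs, x0=[1.0, 0.0, 0.0])
    return -c, b, c, d                                                 # a = -c

rng = np.random.default_rng(1)
a, b, c, d = fleishman_coeffs(skew=1.0, exkurt=1.5)
z = rng.standard_normal(100_000)
y = a + b*z + c*z**2 + d*z**3        # simulated non-normal variates
print(y.mean(), y.std(), (a, b, c, d))
```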

3.
We propose a beta spatial linear mixed model with variable dispersion using Monte Carlo maximum likelihood. The proposed method is useful in situations where the response variable is a rate or a proportion. An approach to spatial generalized linear mixed models using the Box–Cox transformation in the precision model is presented. The parameter optimization process is developed for both the spatial mean model and the spatial variable dispersion model, and all parameters are estimated using Markov chain Monte Carlo maximum likelihood. Statistical inference on the parameters is performed using approximations obtained from the asymptotic normality of the maximum likelihood estimator. Diagnostics and prediction for a new observation are also developed. The method is illustrated with the analysis of one simulated case and two studies of clay and magnesium content. In the clay study, 147 soil profile observations were taken from the research area of the Tropenbos Cameroon Programme, with explanatory variables elevation in metres above sea level, agro‐ecological zone, reference soil group and land cover type. In the magnesium study, soil samples were taken from the 0‐ to 20‐cm‐depth layer at each of 178 locations, and the response variable is related to the spatial location, altitude and sub‐region.

4.
We present a modern perspective on the conditional likelihood approach to the analysis of capture‐recapture experiments, which shows the conditional likelihood to be a member of the generalized linear model (GLM) family. Hence, there is the potential to apply the full range of GLM methodologies. To put this method in context, we first review some approaches to capture‐recapture experiments with heterogeneous capture probabilities in closed populations, covering parametric and non‐parametric mixture models and the use of covariates. We then review in more detail the analysis of capture‐recapture experiments when the capture probabilities depend on a covariate.
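A minimal sketch of the conditional‐likelihood idea with a covariate‐dependent capture probability, in the spirit of Huggins‐type models (not necessarily the authors' exact formulation): maximize the binomial likelihood conditional on at least one capture, then estimate abundance by a Horvitz–Thompson sum. All simulated quantities are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
N, T = 500, 5                               # true population size, capture occasions
x = rng.normal(size=N)                      # individual covariate
p = expit(-1.0 + 0.8 * x)                   # per-occasion capture probability
caps = rng.binomial(T, p)                   # number of captures per individual
seen = caps > 0                             # only captured individuals are observed
y, xs = caps[seen], x[seen]

def neg_cond_loglik(beta):
    """Binomial log-likelihood conditional on >= 1 capture (Huggins-style)."""
    pi = expit(beta[0] + beta[1] * xs)
    ll = (y * np.log(pi) + (T - y) * np.log1p(-pi)
          - np.log1p(-(1 - pi) ** T))       # condition on being seen at all
    return -ll.sum()

fit = minimize(neg_cond_loglik, x0=[0.0, 0.0])
pi_hat = expit(fit.x[0] + fit.x[1] * xs)
N_hat = np.sum(1.0 / (1.0 - (1.0 - pi_hat) ** T))   # Horvitz-Thompson estimate
print("beta:", fit.x, " N_hat:", round(N_hat))
```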

5.
Phylogenetic trees are types of networks that describe the temporal relationship between individuals, species, or other units that are subject to evolutionary diversification. Many phylogenetic trees are constructed from molecular data that is often only available for extant species, and hence they lack all or some of the branches that did not make it into the present. This feature makes inference on the diversification process challenging. For relatively simple diversification models, analytical or numerical methods to compute the likelihood exist, but these do not work for more realistic models in which the likelihood depends on properties of the missing lineages. In this article, we study a general class of species diversification models, and we provide an expectation-maximization framework in combination with a uniform sampling scheme to perform maximum likelihood estimation of the parameters of the diversification process.

6.
7.
The paper estimates a large‐scale mixed‐frequency dynamic factor model for the euro area, using monthly series along with gross domestic product (GDP) and its main components, obtained from the quarterly national accounts (NA). The latter define broad measures of real economic activity (such as GDP and its decomposition by expenditure type and by branch of activity) that we are willing to include in the factor model, in order to improve its coverage of the economy and thus the representativeness of the factors. The main problem with their inclusion is not one of model consistency, but rather of data availability and timeliness, as the NA series are quarterly and are available with a large publication lag. Our model is a traditional dynamic factor model formulated at the monthly frequency in terms of the stationary representation of the variables, which however becomes nonlinear when the observational constraints are taken into account. These are of two kinds: nonlinear temporal aggregation constraints, due to the fact that the model is formulated in terms of the unobserved monthly logarithmic changes, but we observe only the sum of the monthly levels within a quarter, and nonlinear cross‐sectional constraints, since GDP and its main components are linked by the NA identities, but the series are expressed in chained volumes. The paper provides an exact treatment of the observational constraints and proposes iterative algorithms for estimating the parameters of the factor model and for signal extraction, thereby producing nowcasts of monthly GDP and its main components, as well as measures of their reliability.
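The nonlinear temporal aggregation constraint described above is easy to make concrete. In the toy sketch below (all numbers illustrative), the model's state is the unobserved monthly log‐change, while the data report only the sum of the three monthly levels within each quarter, a quantity that is nonlinear in the log‐changes.

```python
import numpy as np

# Nonlinear temporal aggregation (illustrative): the model is written in
# unobserved monthly log-changes delta_t, but the national accounts report
# only quarterly sums of the monthly *levels*. Starting level is assumed.
rng = np.random.default_rng(8)
level0 = 100.0
delta = rng.normal(0.002, 0.01, size=12)       # monthly log-changes, one year
levels = level0 * np.exp(np.cumsum(delta))     # latent monthly levels
quarterly = levels.reshape(4, 3).sum(axis=1)   # what the NA data report
print(quarterly)                               # nonlinear function of delta
```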

8.
Typical data that arise from surveys, experiments, and observational studies include continuous and discrete variables. In this article, we study the interdependence among a mixed (continuous, count, ordered categorical, and binary) set of variables via graphical models. We propose an ℓ1‐penalized extended rank likelihood with an ascent Monte Carlo expectation maximization approach for copula Gaussian graphical models, and we establish near conditional independence relations and zero elements of a precision matrix. In particular, we focus on high‐dimensional inference, where the number of observations is of the same order as, or smaller than, the number of variables under consideration. To illustrate how to infer networks for mixed variables through conditional independence, we consider two datasets: one in the area of sports and the other concerning breast cancer.
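The paper's estimator combines an extended rank likelihood with Monte Carlo EM; as a lighter‐weight sketch of the same target (the precision matrix of a latent Gaussian copula over mixed variables), one can pair a rank‐based Kendall‐tau correlation estimate with the graphical lasso. This is a plainly different, simpler technique than the paper's, shown only to make the goal concrete; all data are simulated.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(3)

n, d = 300, 4
prec = np.array([[2.0, 0.8, 0.0, 0.0],
                 [0.8, 2.0, 0.8, 0.0],
                 [0.0, 0.8, 2.0, 0.8],
                 [0.0, 0.0, 0.8, 2.0]])
z = rng.multivariate_normal(np.zeros(d), np.linalg.inv(prec), size=n)

# Monotone margins turn the latent Gaussian into mixed observed types.
x = np.column_stack([np.floor(np.exp(z[:, 0])),               # count
                     np.digitize(z[:, 1], [-1.0, 0.0, 1.0]),  # ordered categorical
                     z[:, 2],                                 # continuous
                     np.expm1(z[:, 3])])                      # continuous, skewed

# Rank-based latent correlation: sin(pi/2 * Kendall tau) is invariant to
# the monotone margins, so mixed types are handled without modelling them.
R = np.eye(d)
for i in range(d):
    for j in range(i + 1, d):
        tau, _ = kendalltau(x[:, i], x[:, j])
        R[i, j] = R[j, i] = np.sin(0.5 * np.pi * tau)

# l1-penalized precision matrix; zeros encode conditional independence.
# (The rank-based R may need a PSD correction in less benign examples.)
cov_hat, prec_hat = graphical_lasso(R, alpha=0.1)
print(np.round(prec_hat, 2))
```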

9.
The problem of finding an explicit formula for the probability density function of the product of two zero‐mean correlated normal random variables dates back to 1936. Perhaps surprisingly, this problem was not resolved until 2016. This is all the more surprising given that a very simple proof is available, which is the subject of this note: we identify the product of two zero‐mean correlated normal random variables as a variance‐gamma random variable, from which an explicit formula for the probability density function is immediate.
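For reference, in the standard parameterization with unit‐variance margins and correlation ρ, and with K₀ denoting the modified Bessel function of the second kind, the resulting density takes the variance‐gamma form below; for ρ = 0 it reduces to the classical (1/π)K₀(|z|).

```latex
f_Z(z) \;=\; \frac{1}{\pi\sqrt{1-\rho^{2}}}\,
       \exp\!\left(\frac{\rho z}{1-\rho^{2}}\right)
       K_{0}\!\left(\frac{\lvert z\rvert}{1-\rho^{2}}\right),
\qquad z \in \mathbb{R},\quad \lvert\rho\rvert < 1 .
```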

10.
We consider kernel density estimation for univariate distributions. The question of interest is as follows: given that the data analyst has some background knowledge on the modality of the data (for instance, ‘data of this type are usually bimodal’), what is the adequate bandwidth to choose? We answer this question by extending Silverman's idea of ‘normal‐reference’ to that of ‘reference to a Gaussian mixture’. The concept is illustrated in the light of real data examples.
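A minimal sketch of the ‘reference to a Gaussian mixture’ idea (a plausible reading, not the authors' exact estimator): fit a two‐component mixture, plug its curvature functional R(f'') into the AMISE‐optimal bandwidth for a Gaussian kernel, and compare with Silverman's single‐normal rule. The sample and the component count are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Bimodal sample: Silverman's single-normal reference oversmooths here.
x = np.concatenate([rng.normal(-2, 0.6, 400), rng.normal(2, 0.6, 400)])
n = len(x)

# Silverman's normal-reference rule of thumb (Gaussian kernel).
sigma = min(x.std(ddof=1), (np.quantile(x, .75) - np.quantile(x, .25)) / 1.34)
h_silverman = 0.9 * sigma * n ** (-0.2)

# 'Reference to a Gaussian mixture': fit a 2-component mixture, compute
# R(f'') = int f''(t)^2 dt numerically, and plug it into the AMISE-optimal
# bandwidth h = [R(K) / (mu2(K)^2 R(f'') n)]^(1/5); for the Gaussian
# kernel, R(K) = 1/(2*sqrt(pi)) and mu2(K) = 1.
gm = GaussianMixture(n_components=2, random_state=0).fit(x[:, None])
w = gm.weights_
mu = gm.means_.ravel()
sd = np.sqrt(gm.covariances_).ravel()

t = np.linspace(x.min() - 3, x.max() + 3, 4000)
u = (t[:, None] - mu) / sd
phi = np.exp(-0.5 * u**2) / (sd * np.sqrt(2 * np.pi))   # component densities
f2 = (phi * (u**2 - 1) / sd**2 * w).sum(axis=1)         # mixture f''(t)
R_f2 = np.trapz(f2**2, t)

h_mixture = (1 / (2 * np.sqrt(np.pi) * R_f2 * n)) ** 0.2
print(h_silverman, h_mixture)        # mixture reference gives a smaller h
```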

11.
This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup extends the formulation of the optimal design problem as an SDP from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D‐, A‐ or E‐optimality. As an illustrative example, we demonstrate the approach using the power‐logistic model and compare our results with those in the literature. Additionally, we investigate how the optimal design is affected by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D‐optimal designs with two regressors for a logistic model and a two‐variable generalised linear model with a gamma‐distributed response are discussed, and some limitations of our approach are noted.
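The SDP step is not reproduced here; the sketch below only illustrates the Gaussian‐quadrature ingredient: approximating the Bayesian D‐criterion E_θ[log det M(ξ, θ)] for a two‐point logistic design under independent normal priors using Gauss–Hermite nodes. Priors and candidate designs are illustrative assumptions.

```python
import numpy as np
from scipy.special import expit

# Gauss-Hermite approximation of the Bayesian D-criterion
#   Psi(xi) = E_theta[ log det M(xi, theta) ]
# for a simple logistic model eta = b0 + b1 * x with independent normal
# priors on (b0, b1). hermgauss nodes/weights are for weight exp(-t^2),
# hence the sqrt(2)*sigma rescaling and the 1/pi normalisation in 2-D.
nodes, weights = np.polynomial.hermite.hermgauss(10)
prior_mu = np.array([0.0, 1.0])
prior_sd = np.array([0.5, 0.3])

def info_matrix(xs, ws, b0, b1):
    """Fisher information of the logistic model at design (points xs, weights ws)."""
    X = np.column_stack([np.ones_like(xs), xs])
    p = expit(b0 + b1 * xs)
    return (X.T * (ws * p * (1 - p))) @ X

def bayes_d_criterion(xs, ws):
    val = 0.0
    for i, ti in enumerate(nodes):          # tensor-product quadrature grid
        for j, tj in enumerate(nodes):
            b0 = prior_mu[0] + np.sqrt(2) * prior_sd[0] * ti
            b1 = prior_mu[1] + np.sqrt(2) * prior_sd[1] * tj
            w = weights[i] * weights[j] / np.pi
            val += w * np.log(np.linalg.det(info_matrix(xs, ws, b0, b1)))
    return val

# Compare two candidate two-point designs with equal weights.
print(bayes_d_criterion(np.array([-1.5, 1.5]), np.array([0.5, 0.5])))
print(bayes_d_criterion(np.array([-3.0, 3.0]), np.array([0.5, 0.5])))
```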

12.
Second‐order orientation methods provide a natural tool for the analysis of spatial point process data. In this paper, we extend the spatial point pair orientation distribution function to the spatiotemporal setting. The new space–time orientation distribution function is used to detect space–time anisotropic configurations. An edge‐corrected estimator is defined and illustrated through a simulation study. We apply the resulting estimator to data on the spatiotemporal distribution of human‐caused fire ignition events in a square area of 30 × 30 km² over four years. Our results confirm that the approach is able to detect directional components at distinct spatiotemporal scales.

13.
Current economic theory typically assumes that all the macroeconomic variables belonging to a given economy are driven by a small number of structural shocks. As recently argued, apart from negligible cases, the structural shocks can be recovered if the information set contains current and past values of a large, potentially infinite, set of macroeconomic variables. However, the usual practice of estimating small‐scale causal vector autoregressions can be extremely misleading, as in many cases such models could fully recover the structural shocks only if future values of the few variables considered were observable. In other words, the structural shocks may be non‐fundamental with respect to the small‐dimensional vector used in current macroeconomic practice. By reviewing a recent strand of econometric literature, we show that, as a solution, econometricians should enlarge the space of observations and thus consider models able to handle very large panels of related time series. Among several alternatives, we review dynamic factor models together with their economic interpretation, and we show how non‐fundamentalness is non‐generic in this framework. Finally, using a factor model, we provide new empirical evidence on the effect of technology shocks on labour productivity and hours worked.

14.
Recent development of intensity estimation for inhomogeneous spatial point processes with covariates suggests that kerneling in the covariate space is a competitive intensity estimation method for inhomogeneous Poisson processes. It is not known whether this advantageous performance still holds when the points interact, which happens in the simplest common case when the objects represented as points have a spatial dimension. In this paper, kerneling in the covariate space is extended to Gibbs processes with covariate‐dependent chemical activity and inhibitive interactions, and the performance of the approach is studied through extensive simulation experiments. It is demonstrated that, under mild assumptions on the dependence of the intensity on covariates, this approach can provide better results than the classical nonparametric method based on local smoothing in the spatial domain. In comparison with parametric pseudo‐likelihood estimation, the nonparametric approach can be more accurate, particularly when the dependence on covariates is weak or when there is uncertainty about the model or about the range of interactions. An important supplementary task is the dimension reduction of the covariate space. It is shown that techniques based on inverse regression, previously applied to Cox processes, are useful even when interactions are present.
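A sketch of kerneling in the covariate space for the simpler Poisson case (the paper's starting point, not its Gibbs extension): estimate ρ(z) as the ratio of a kernel sum over the observed points' covariate values to the kernel‐smoothed covariate distribution over the window, then read the intensity off as λ̂(u) = ρ̂(Z(u)). The covariate field, kernel and bandwidth are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Unit-square window discretized into pixels; Z is a spatial covariate.
m = 200
gx, gy = np.meshgrid(np.linspace(0, 1, m), np.linspace(0, 1, m))
Z = gx + 0.5 * np.sin(3 * gy)                  # covariate field on the grid
pixel_area = 1.0 / (m * m)

# Inhomogeneous Poisson process with intensity rho(Z(u)), rho(z) = 100*e^z.
rho_true = lambda z: 100 * np.exp(z)
counts = rng.poisson(rho_true(Z) * pixel_area)
zi = np.repeat(Z.ravel(), counts.ravel())      # covariate values at the points

# Kernel (ratio) estimator in covariate space:
#   rho_hat(z) = sum_i k_h(z - Z(x_i)) / integral_W k_h(z - Z(u)) du
h = 0.1
zz = np.linspace(Z.min(), Z.max(), 100)
num = (norm.pdf((zz[:, None] - zi) / h) / h).sum(axis=1)
den = (norm.pdf((zz[:, None] - Z.ravel()) / h) / h * pixel_area).sum(axis=1)
rho_hat = num / den

print(np.c_[zz[::20], rho_hat[::20], rho_true(zz[::20])])   # estimate vs truth
```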

15.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain‐level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units, for example through a sampling step that selects geographically indexed clusters of units. A sampling‐weighted pseudo‐posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between co‐clustered units inflates the scale of the resulting pseudo‐posterior covariance matrix, which has been shown to induce undercoverage of the credible sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions differ between the pseudo‐maximum likelihood estimate (MLE), the pseudo‐posterior and the MLE under simple random sampling. Through insights from survey‐sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast post‐processing step to Markov chain Monte Carlo draws of the pseudo‐posterior distribution. This adjustment projects the pseudo‐posterior covariance matrix so that the nominal coverage is approximately achieved. We apply the method to the National Survey on Drug Use and Health as a motivating example and demonstrate the efficacy of our scale and shape projection procedure on synthetic data for several common archetypes of survey designs.
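The following generic sketch conveys the flavor of such a post‐processing projection; it is not the authors' procedure. Given pseudo‐posterior draws whose covariance is mis‐scaled, an affine map makes the empirical covariance of the draws match a target, e.g. a design‐consistent sandwich covariance (here just a placeholder matrix).

```python
import numpy as np

def project_draws(draws, target_cov):
    """Affine post-processing of MCMC draws: recenter and rescale so the
    sample covariance of the adjusted draws equals `target_cov` (e.g. a
    design-consistent sandwich covariance). A sketch of the general idea
    only, not the authors' exact projection."""
    mean = draws.mean(axis=0)
    centered = draws - mean
    L_post = np.linalg.cholesky(np.cov(draws, rowvar=False))
    L_targ = np.linalg.cholesky(target_cov)
    A = L_targ @ np.linalg.inv(L_post)      # maps draw covariance to target
    return mean + centered @ A.T

# Toy check: pseudo-posterior draws whose spread is too small by half.
rng = np.random.default_rng(6)
true_cov = np.array([[1.0, 0.3], [0.3, 0.5]])
draws = rng.multivariate_normal([0, 0], 0.5 * true_cov, size=5000)
adj = project_draws(draws, true_cov)
print(np.round(np.cov(adj, rowvar=False), 2))   # ~ true_cov
```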

16.
Abstract. In JOSHI and LALITHA (1986) a test for two outliers in the same direction in a linear model is discussed. Here the performance of this statistic is studied. For this, the exact non-null density function of the random variables involved in defining the statistic is obtained. A measure of performance is then defined and applied to the case of a random sample from a normal distribution, since in this case the statistic reduces to the well-known Murphy test statistic. These values are then compared with the power values obtained by HAWKINS (1978).

17.
This paper implements the generalized maximum entropy (GME) method in a longitudinal data setting to investigate the regression parameters and the correlation among repeated measurements. We derive the GME system using the classical Shannon entropy as well as some higher‐order entropies, assuming an autoregressive correlation structure. The method is illustrated using two simulated examples to study the effect of changing the support range and to compare the performance of the GME approach with classical estimation methods.

18.
Abstract. In JOSHI and LALITHA (1986) a statistic K is proposed for detecting two outliers present in opposite directions in a linear model. This statistic reduces to the Studentized range statistic in the case of a simple random sample from a normal population. In this paper, the performance of this statistic is studied. For this, an approximate non-null density of the random variables involved in defining the statistic is derived. An exact density is also obtained using the methods of LALITHA and JOSHI (1986), where the results are proved for Murphy's statistic. A measure of performance is then defined, and an example is given to illustrate the use of the statistic.

19.
This paper reviews the literature on local government efficiency by meta‐analysing 360 observations retrieved from 54 papers published from 1993 to 2016. The meta‐regression is based on a random‐effects model estimated with the two‐step random‐effects maximum likelihood (REML) technique proposed by Gallet and Doucouliagos. Results indicate that study design matters when estimating a frontier in local government. We find that studies focusing on technical efficiency provide higher efficiency scores than works evaluating cost efficiency. The same applies when using panel data instead of cross‐section data. Interestingly, studies that use the Free Disposal Hull (FDH) approach yield, on average, higher efficiency scores than papers employing the data envelopment analysis (DEA) method, suggesting that the convexity hypothesis on the production set matters in this literature. Finally, the efficiency of local government increases with the level of development of the analysed countries and is positively related to the national integrity of the legal system; the opposite holds for corruption.

20.
This paper is concerned with the statistical analysis of proportions involving extra-binomial variation. Extra-binomial variation is inherent to experimental situations where experimental units are subject to some source of variation, e.g. biological or environmental variation. A generalized linear model for proportions does not account for random variation between experimental units. In this paper an extended version of the generalized linear model is discussed with special reference to experiments in agricultural research. In this model it is assumed that both treatment effects and random contributions of plots are part of the linear predictor. The methods are applied to results from two agricultural experiments.
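The paper handles extra-binomial variation with a random plot contribution in the linear predictor (a mixed-model formulation); a minimal alternative sketch that captures the same phenomenon is a beta-binomial maximum-likelihood fit, shown below with simulated plot-level proportions. All quantities are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

rng = np.random.default_rng(7)

# Overdispersed proportions: per-plot success probabilities vary around
# p = 0.3, so the counts show extra-binomial variation.
n_trials, n_plots = 40, 60
p_plot = rng.beta(6, 14, size=n_plots)          # mean 0.3, between-plot spread
y = rng.binomial(n_trials, p_plot)

def neg_loglik(params):
    """Beta-binomial negative log-likelihood, parameterized by the mean
    probability mu and an overdispersion parameter rho in (0, 1), using
    the standard mapping a = mu(1-rho)/rho, b = (1-mu)(1-rho)/rho."""
    mu = 1 / (1 + np.exp(-params[0]))
    rho = 1 / (1 + np.exp(-params[1]))
    a = mu * (1 - rho) / rho
    b = (1 - mu) * (1 - rho) / rho
    return -betabinom.logpmf(y, n_trials, a, b).sum()

fit = minimize(neg_loglik, x0=[0.0, -2.0], method="Nelder-Mead")
mu_hat = 1 / (1 + np.exp(-fit.x[0]))
rho_hat = 1 / (1 + np.exp(-fit.x[1]))
print("mu:", round(mu_hat, 3), " rho (overdispersion):", round(rho_hat, 3))
```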
