Similar Articles (20 results)
1.
We present a modern perspective of the conditional likelihood approach to the analysis of capture‐recapture experiments, which shows the conditional likelihood to be a member of the generalized linear model (GLM) family. Hence, there is the potential to apply the full range of GLM methodologies. To put this method in context, we first review some approaches to capture‐recapture experiments with heterogeneous capture probabilities in closed populations, covering parametric and non‐parametric mixture models and the use of covariates. We then review in more detail the analysis of capture‐recapture experiments when the capture probabilities depend on a covariate.
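As an illustration of the GLM connection, a capture probability that depends on an individual covariate can be fitted as an ordinary logistic GLM. The sketch below is hypothetical (a simulated `body_mass` covariate and invented coefficients, not data or code from the paper) and fits the model by iteratively reweighted least squares:

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25):
    """Fit a logistic GLM by iteratively reweighted least squares (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1.0 - p) + 1e-10          # IRLS working weights
        z = eta + (y - p) / W              # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
n = 2000
body_mass = rng.normal(0.0, 1.0, n)        # hypothetical individual covariate
X = np.column_stack([np.ones(n), body_mass])
true_beta = np.array([-0.5, 1.0])          # capture probability rises with mass
p_capture = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
captured = rng.random(n) < p_capture       # one capture occasion per animal
beta_hat = fit_logistic_irls(X, captured.astype(float))
```

With 2,000 simulated animals the IRLS fit recovers the invented coefficients closely, which is all this toy is meant to show.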

2.
Recent development of intensity estimation for inhomogeneous spatial point processes with covariates suggests that kerneling in the covariate space is a competitive intensity estimation method for inhomogeneous Poisson processes. It is not known whether this advantageous performance is still valid when the points interact. This happens, for example, in the simple and common case where the objects represented as points have a spatial extent. In this paper, kerneling in the covariate space is extended to Gibbs processes with covariate‐dependent chemical activity and inhibitive interactions, and the performance of the approach is studied through extensive simulation experiments. It is demonstrated that under mild assumptions on the dependence of the intensity on covariates, this approach can provide better results than the classical nonparametric method based on local smoothing in the spatial domain. In comparison with the parametric pseudo‐likelihood estimation, the nonparametric approach can be more accurate particularly when the dependence on covariates is weak or if there is uncertainty about the model or about the range of interactions. An important supplementary task is the dimension reduction of the covariate space. It is shown that the techniques based on the inverse regression, previously applied to Cox processes, are useful even when the interactions are present. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.

3.
Dickey and Fuller [Econometrica (1981) Vol. 49, pp. 1057–1072] suggested unit‐root tests for an autoregressive model with a linear trend, conditional on an initial observation. We analyse a slightly different model with a random initial value, in which nuisance parameters can easily be eliminated by an invariant reduction of the model. We show that invariance arguments can also be used when comparing power within a conditional model. In the context of the conditional model, the Dickey–Fuller test is shown to be more stringent than a number of unit‐root tests motivated by models with random initial value. The power of the Dickey–Fuller test can be improved by making assumptions about the initial value. The practitioner therefore has to trade off robustness against power, as assumptions about initial values are hard to test but can give more power.

4.
We review some results on the analysis of longitudinal data or, more generally, of repeated measures via linear mixed models starting with some exploratory statistical tools that may be employed to specify a tentative model. We follow with a summary of inferential procedures under a Gaussian set‐up and then discuss different diagnostic methods focusing on residual analysis but also addressing global and local influence. Based on the interpretation of diagnostic plots related to three types of residuals (marginal, conditional and predicted random effects) as well as on other tools, we proceed to identify remedial measures for possible violations of the proposed model assumptions, ranging from fine‐tuning of the model to the use of elliptically symmetric or skew‐elliptical linear mixed models as well as of robust estimation methods. We specify many results available in the literature in a unified notation and highlight those with greater practical appeal. In each case, we discuss the availability of model diagnostics as well as of software and give general guidelines for model selection. We conclude with analyses of three practical examples and suggest further directions for research.

5.
We develop a novel high‐dimensional non‐Gaussian modeling framework to infer measures of conditional and joint default risk for numerous financial sector firms. The model is based on a dynamic generalized hyperbolic skewed‐t block equicorrelation copula with time‐varying volatility and dependence parameters that naturally accommodates asymmetries and heavy tails, as well as nonlinear and time‐varying default dependence. We apply a conditional law of large numbers in this setting to define joint and conditional risk measures that can be evaluated quickly and reliably. We apply the modeling framework to assess the joint risk from multiple defaults in the euro area during the 2008–2012 financial and sovereign debt crisis. We document unprecedented tail risks between 2011 and 2012, as well as their steep decline following subsequent policy actions. Copyright © 2016 John Wiley & Sons, Ltd.

6.
In this paper, we study an estimation problem where the variables of interest are subject to both right censoring and measurement error. In this context, we propose a nonparametric estimation strategy of the hazard rate, based on a regression contrast minimized in a finite‐dimensional functional space generated by spline bases. We prove a risk bound of the estimator in terms of integrated mean square error and discuss the rate of convergence when the dimension of the projection space is adequately chosen. Then we define a data‐driven criterion of model selection and prove that the resulting estimator performs an adequate compromise. The method is illustrated via simulation experiments that show that the strategy is successful.
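The selection principle behind such data-driven criteria, minimize an empirical contrast plus a penalty proportional to the model dimension, can be illustrated with a toy example. The sketch below substitutes a polynomial basis and an assumed-known noise variance for the paper's spline spaces and hazard-specific contrast, so it shows only the idea, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(-1.0, 1.0, n)
y = np.sin(3.0 * x) + 0.2 * rng.normal(size=n)   # true curve plus noise
sigma2 = 0.04                                    # noise variance, assumed known here

def contrast(m):
    """Least-squares contrast of the projection on degree-m polynomials."""
    B = np.vander(x, m + 1, increasing=True)     # basis of dimension m + 1
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.mean((y - B @ coef) ** 2)

# penalized criterion: empirical contrast + penalty growing with dimension
crit = [contrast(m) + 2.0 * sigma2 * (m + 1) / n for m in range(11)]
m_hat = int(np.argmin(crit))
```

The selected degree balances bias (too-small basis) against variance (too-large basis); for the sine curve above it settles on a moderate polynomial degree rather than 0 or 10.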

7.
We consider a semiparametric method to estimate logistic regression models in which both the covariates and the outcome variable may be missing, and propose two new estimators. The first, which is based solely on the validation set, is an extension of the validation likelihood estimator of Breslow and Cain (Biometrika 75:11–20, 1988). The second is a joint conditional likelihood estimator based on the validation and non-validation data sets. Both estimators are semiparametric as they do not require any model assumptions regarding the missing-data mechanism or the specification of the conditional distribution of the missing covariates given the observed covariates. The asymptotic distribution theory is developed under the assumption that all covariate variables are categorical. The finite-sample properties of the proposed estimators are investigated through simulation studies showing that the joint conditional likelihood estimator is the most efficient. A cable TV survey data set from Taiwan is used to illustrate the practical use of the proposed methodology.

8.
State space models play an important role in macroeconometric analysis and the Bayesian approach has been shown to have many advantages. This paper outlines recent developments in state space modelling applied to macroeconomics using Bayesian methods. We outline the directions of recent research, specifically the problems being addressed and the solutions proposed. After presenting a general form for the linear Gaussian model, we discuss the interpretations and virtues of alternative estimation routines and their outputs. This discussion includes the Kalman filter and smoother, and precision-based algorithms. As the advantages of using large models have become better understood, a focus has developed on dimension reduction and computational advances to cope with high-dimensional parameter spaces. We give an overview of a number of recent advances in these directions. Many models suggested by economic theory are either non-linear or non-Gaussian, or both. We discuss work on the particle filtering approach to such models as well as other techniques that use various approximations – to either the state and measurement equations or to the full posterior for the states – to obtain draws.
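To make the linear Gaussian machinery concrete, here is a minimal Kalman filter for the local level model, the simplest state space model; the variances and simulated data are invented for illustration and are not from any application in the paper:

```python
import numpy as np

def kalman_filter_local_level(y, sigma_eps2, sigma_eta2, a1=0.0, p1=1e6):
    """Kalman filter for the local level model:
       y_t = alpha_t + eps_t,   alpha_{t+1} = alpha_t + eta_t."""
    a, p = a1, p1                          # diffuse initial state
    filtered = []
    for yt in y:
        f = p + sigma_eps2                 # one-step prediction variance of y_t
        k = p / f                          # Kalman gain
        a = a + k * (yt - a)               # filtered state mean
        p = p * (1.0 - k)                  # filtered state variance
        filtered.append(a)
        p = p + sigma_eta2                 # predict variance of alpha_{t+1}
    return np.array(filtered)

rng = np.random.default_rng(2)
alpha = np.cumsum(0.1 * rng.normal(size=300))    # latent random-walk level
y = alpha + 0.5 * rng.normal(size=300)           # noisy observations
level = kalman_filter_local_level(y, sigma_eps2=0.25, sigma_eta2=0.01)
```

The filtered level tracks the latent state far more closely than the raw observations do, which is the basic payoff of the recursion.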

9.
We consider residuals for the linear model with a general covariance structure. In contrast to the situation where observations are independent there are several alternative definitions. We draw attention to three quite distinct types of residuals: the marginal residuals, the model‐specified residuals and the full‐conditional residuals. We adopt a very broad perspective including linear mixed models, time series and smoothers as well as models for spatial and multivariate data. We concentrate on defining these different residual types and discussing their interrelationships. The full‐conditional residuals are seen to play several important roles.

10.
This paper considers multiple regression procedures for analyzing the relationship between a response variable and a vector of d covariates in a nonparametric setting where tuning parameters need to be selected. We introduce an approach which handles the dilemma that with high dimensional data the sparsity of data in regions of the sample space makes estimation of nonparametric curves and surfaces virtually impossible. This is accomplished by abandoning the goal of trying to estimate true underlying curves and instead estimating measures of dependence that can determine important relationships between variables. These dependence measures are based on local parametric fits on subsets of the covariate space that vary in both dimension and size within each dimension. The subset which maximizes a signal to noise ratio is chosen, where the signal is a local estimate of a dependence parameter which depends on the subset dimension and size, and the noise is an estimate of the standard error (SE) of the estimated signal. This approach of choosing the window size to maximize a signal to noise ratio lifts the curse of dimensionality because for regions with sparsity of data the SE is very large. It corresponds to asymptotically maximizing the probability of correctly finding nonspurious relationships between covariates and a response or, more precisely, maximizing asymptotic power among a class of asymptotic level α t-tests indexed by subsets of the covariate space. Subsets that achieve this goal are called features. We investigate the properties of specific procedures based on the preceding ideas using asymptotic theory and Monte Carlo simulations and find that within a selected dimension, the volume of the optimally selected subset does not tend to zero as n → ∞ unless the volume of the subset of the covariate space where the response depends on the covariate vector tends to zero.
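A one-dimensional caricature of this feature-selection idea: fit a local slope on each candidate window, divide it by its standard error, and keep the window with the largest signal-to-noise ratio. The data-generating function (a localized bump) and the window grid below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.uniform(0.0, 1.0, n)
# the response depends on x only through a narrow bump near x = 0.5
y = np.exp(-((x - 0.5) / 0.05) ** 2) + 0.3 * rng.normal(size=n)

def signal_to_noise(lo, hi):
    """Local slope estimate on [lo, hi], divided by its standard error."""
    mask = (x >= lo) & (x <= hi)
    if mask.sum() < 10:                   # too sparse: no usable signal
        return 0.0
    xs, ys = x[mask], y[mask]
    X = np.column_stack([np.ones(xs.size), xs])
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    resid = ys - X @ coef
    s2 = resid @ resid / (xs.size - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return abs(coef[1]) / np.sqrt(cov[1, 1])

# candidate windows of several sizes and locations
windows = [(lo, lo + w) for w in (0.1, 0.2, 0.3, 0.5, 1.0)
           for lo in np.arange(0.0, 1.0 - w + 1e-9, 0.1)]
best = max(windows, key=lambda win: signal_to_noise(*win))
```

The winning window overlaps the bump: away from it the estimated slope is near zero, and over the full interval the symmetric bump averages out, so the signal-to-noise ratio is maximized by a local window where the dependence actually lives.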

11.
In this paper, we assess whether using non-linear dimension reduction techniques pays off for forecasting inflation in real-time. Several recent methods from the machine learning literature are adopted to map a large dimensional dataset into a lower-dimensional set of latent factors. We model the relationship between inflation and the latent factors using constant and time-varying parameter (TVP) regressions with shrinkage priors. Our models are then used to forecast monthly US inflation in real-time. The results suggest that sophisticated dimension reduction methods yield inflation forecasts that are highly competitive with linear approaches based on principal components. Among the techniques considered, the Autoencoder and squared principal components yield factors that have high predictive power for one-month- and one-quarter-ahead inflation. Zooming into model performance over time reveals that controlling for non-linear relations in the data is of particular importance during recessionary episodes of the business cycle or the current COVID-19 pandemic.
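The linear benchmark in such comparisons, principal-component factors extracted from a large panel and then used in a regression, can be sketched in a few lines. The factor structure, dimensions, and the "inflation" target below are simulated, not the paper's real-time data, and the nonlinear (autoencoder) side is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, k = 200, 50, 3
F = rng.normal(size=(T, k))                      # latent factors
Lam = rng.normal(size=(N, k))                    # factor loadings
X = F @ Lam.T + 0.5 * rng.normal(size=(T, N))    # large predictor panel
beta = np.array([1.0, -0.5, 0.25])
infl = F @ beta + 0.1 * rng.normal(size=T)       # simulated inflation target

# principal-component factors: SVD of the standardized panel
Xs = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
F_hat = U[:, :k] * S[:k]                         # first k PC scores

# regress the target on the estimated factors
Z = np.column_stack([np.ones(T), F_hat])
coef, *_ = np.linalg.lstsq(Z, infl, rcond=None)
r2 = 1.0 - np.sum((infl - Z @ coef) ** 2) / np.sum((infl - infl.mean()) ** 2)
```

Because the simulated panel has a strong factor structure, the estimated principal components span essentially the same space as the true factors and the regression fit is tight.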

12.
The behavior of estimators for misspecified parametric models has been well studied. We consider estimators for misspecified nonlinear regression models, with error and covariates possibly dependent. These models are described by specifying a parametric model for the conditional expectation of the response given the covariates. This is a parametric family of conditional constraints, which makes the model itself close to nonparametric. We study the behavior of weighted least squares estimators both when the regression function is correctly specified, and when it is misspecified and also involves possible additional covariates.

13.
Asymmetric information models of market microstructure claim that variables such as trading intensity are proxies for latent information on the value of financial assets. We consider the interval‐valued time series (ITS) of low/high returns and explore the relationship between these extreme returns and the intensity of trading. We assume that the returns (or prices) are generated by a latent process with some unknown conditional density. At each period of time, from this density, we have some random draws (trades) and the lowest and highest returns are the realized extreme observations of the latent process over the sample of draws. In this context, we propose a semiparametric model of extreme returns that exploits the results provided by extreme value theory. If properly centered and standardized extremes have well‐defined limiting distributions, the conditional mean of extreme returns is a nonlinear function of the conditional moments of the latent process and of the conditional intensity of the process that governs the number of draws. We implement a two‐step estimation procedure. First, we estimate parametrically the regressors that will enter into the nonlinear function, and in a second step we estimate nonparametrically the conditional mean of extreme returns as a function of the generated regressors. Unlike current models for ITS, the proposed semiparametric model is robust to misspecification of the conditional density of the latent process. We fit several nonlinear and linear models to the 5‐minute and 1‐minute low/high returns of seven major banks and technology stocks, and find that the nonlinear specification is superior to the current linear models and that the conditional volatility of the latent process and the conditional intensity of the trading process are major drivers of the dynamics of extreme returns.

14.
For a large heterogeneous group of patients, we analyse probabilities of hospital admission and distributional properties of lengths of hospital stay conditional on individual determinants. Bayesian structured additive regression models for zero‐inflated and overdispersed count data are employed. In addition, the framework is extended towards hurdle specifications, providing an alternative approach to cover particularly large frequencies of zero quotes in count data. As a specific merit, the model class considered embeds linear and nonlinear effects of covariates on all distribution parameters. Linear effects indicate that the quantity and severity of prior illness are positively correlated with the risk of hospital admission, while medical prevention (in the form of general practice visits) and rehabilitation reduce the expected length of future hospital stays. Flexible nonlinear response patterns are diagnosed for age and an indicator of a patient's socioeconomic status. We find that social deprivation exhibits a positive impact on the risk of admission and a negative effect on the expected length of future hospital stays of admitted patients. Copyright © 2015 John Wiley & Sons, Ltd.

15.
Estimation with longitudinal Y having nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y,X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity based on estimating equations constructed using an instrument Z, which is part of X related to Y but unrelated to the dropout propensity conditioned on Y and other covariates. Population means and other parameters in the nonparametric distribution of Y can be estimated based on inverse propensity weighting with estimated propensity. To improve efficiency, we derive a model‐assisted regression estimator making use of information provided by the covariates and previously observed Y‐values in the longitudinal setting. The model‐assisted regression estimator is protected from model misspecification and is asymptotically normal and more efficient when the working models are correct and some other conditions are satisfied. The finite‐sample performance of the estimators is studied through simulation, and an application to the HIV‐CD4 data set is also presented as illustration.
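The inverse propensity weighting step can be sketched as follows. For simplicity the sketch plugs in the true parametric propensity (in the paper the propensity parameters are estimated by GMM using an instrument) and ignores the longitudinal structure, so it only shows why weighting undoes nonignorable dropout bias:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
y = rng.normal(2.0, 1.0, n)                      # variable of interest, mean 2
# nonignorable dropout: the probability of staying in the sample depends on y
gamma0, gamma1 = 1.0, -0.5                       # assumed known for this sketch
p = 1.0 / (1.0 + np.exp(-(gamma0 + gamma1 * y)))
r = rng.random(n) < p                            # r = True: observed; False: dropout

naive = y[r].mean()                              # biased complete-case mean
ipw = np.sum(y[r] / p[r]) / np.sum(1.0 / p[r])   # Hajek-type IPW estimator
```

Because larger y-values drop out more often, the complete-case mean is pulled below 2; reweighting each observed unit by the inverse of its response probability restores an (approximately) unbiased estimate.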

16.
We evaluate conditional predictive densities for US output growth and inflation using a number of commonly-used forecasting models that rely on large numbers of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly-used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can cause point forecasts to either improve or deteriorate, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be approximated correctly by a normal density: the simple, equal average when predicting output growth, and the Bayesian model average when predicting inflation.
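As a sketch of checking normality of standardized forecast errors, one common statistic combines sample skewness and excess kurtosis (the Jarque-Bera form). The paper evaluates full predictive densities with its own battery of tests, so this is only an illustrative stand-in on simulated errors:

```python
import numpy as np

def jarque_bera(z):
    """Jarque-Bera statistic from sample skewness and excess kurtosis."""
    n = z.size
    zc = z - z.mean()
    s = zc.std()
    skew = np.mean(zc ** 3) / s ** 3
    kurt = np.mean(zc ** 4) / s ** 4
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

rng = np.random.default_rng(6)
jb_normal = jarque_bera(rng.normal(size=5000))       # Gaussian errors: small JB
jb_fat = jarque_bera(rng.standard_t(3, size=5000))   # heavy tails: large JB
```

Under normality the statistic is approximately chi-squared with 2 degrees of freedom, so large values, as for the fat-tailed sample, signal that the normal predictive density is misspecified.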

17.
Data with large dimensions will bring various problems to the application of data envelopment analysis (DEA). In this study, we focus on a "big data" problem related to the considerably large dimensions of the input-output data. The four most widely used approaches to guide dimension reduction in DEA are compared via Monte Carlo simulation: principal component analysis (PCA-DEA), which is based on aggregating inputs and outputs, and three approaches based on variable selection, namely efficiency contribution measurement (ECM), average efficiency measure (AEC), and regression-based detection (RB). We compare the performance of these methods under different scenarios, using a new comparison benchmark for the simulation test. In addition, we discuss the effect of initial variable selection in RB for the first time. Based on the results, we offer more reliable guidelines on how to choose an appropriate method.

18.
Hawkes processes are used in statistical modeling for event clustering and causal inference, while they also can be viewed as stochastic versions of popular compartmental models used in epidemiology. Here we show how to develop accurate models of COVID-19 transmission using Hawkes processes with spatial-temporal covariates. We model the conditional intensity of new COVID-19 cases and deaths in the U.S. at the county level, estimating the dynamic reproduction number of the virus within an EM algorithm through a regression on Google mobility indices and demographic covariates in the maximization step. We validate the approach on both short-term and long-term forecasting tasks, showing that the Hawkes process outperforms several models currently used to track the pandemic, including an ensemble approach and an SEIR-variant. We also investigate which covariates and mobility indices are most important for building forecasts of COVID-19 in the U.S.
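The conditional intensity of a Hawkes process with an exponential triggering kernel is simple to evaluate; in this parameterization the branching factor `alpha` plays the role of a reproduction number. The event times and parameter values below are invented for illustration and do not come from the paper's county-level model:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum over past events t_i < t of
       alpha * beta * exp(-beta * (t - t_i))."""
    past = events[events < t]
    return mu + np.sum(alpha * beta * np.exp(-beta * (t - past)))

events = np.array([1.0, 1.5, 4.0])   # hypothetical event times
mu = 0.2                             # baseline rate
alpha = 0.8                          # branching factor (reproduction-number role)
beta = 1.5                           # exponential decay of infectivity

lam_before = hawkes_intensity(0.5, events, mu, alpha, beta)  # no events yet
lam_after = hawkes_intensity(1.6, events, mu, alpha, beta)   # just after a burst
```

Before any event the intensity equals the baseline `mu`; right after a cluster of events it spikes and then decays at rate `beta`, which is the self-exciting mechanism the forecasts exploit.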

19.
We propose a computationally efficient and statistically principled method for kernel smoothing of point pattern data on a linear network. The point locations, and the network itself, are convolved with a two‐dimensional kernel and then combined into an intensity function on the network. This can be computed rapidly using the fast Fourier transform, even on large networks and for large bandwidths, and is robust against errors in network geometry. The estimator is consistent, and its statistical efficiency is only slightly suboptimal. We discuss bias, variance, asymptotics, bandwidth selection, variance estimation, relative risk estimation and adaptive smoothing. The methods are used to analyse spatially varying frequency of traffic accidents in Western Australia and the relative risk of different types of traffic accidents in Medellín, Colombia.
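The FFT step, smoothing a pixellated point pattern by circular convolution with a Gaussian kernel, can be sketched with numpy alone. The paper additionally convolves the network itself and combines the two into an intensity on the linear network, which is omitted in this planar toy:

```python
import numpy as np

rng = np.random.default_rng(7)
grid = 128
counts = np.zeros((grid, grid))
pts = rng.integers(0, grid, size=(200, 2))       # point pattern on a pixel grid
np.add.at(counts, (pts[:, 0], pts[:, 1]), 1.0)   # bin points into pixel counts

# Gaussian kernel on the grid, wrapped so circular convolution is valid
bw = 5.0
ax = np.minimum(np.arange(grid), grid - np.arange(grid))
gx = np.exp(-0.5 * (ax / bw) ** 2)
kernel = np.outer(gx, gx)
kernel /= kernel.sum()                           # mass-preserving normalization

# smoothing = convolution, computed in O(grid^2 log grid) with the FFT
intensity = np.real(np.fft.ifft2(np.fft.fft2(counts) * np.fft.fft2(kernel)))
```

Because the kernel is normalized, the smoothed surface preserves the total number of points, and the cost is independent of the bandwidth, which is what makes large-bandwidth smoothing on big networks feasible.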

20.
Nonlinear deterministic forecasting of daily dollar exchange rates
We perform out-of-sample predictions on several dollar exchange rate returns by using time-delay embedding techniques and a local linear predictor. We compare our predictions with those of a mean value predictor. Some of our predictions of the exchange rate returns outperform the predictions of the same series by the mean value predictor; however, these improvements are not statistically significant. Another interesting result, obtained using a recently developed technique from nonlinear dynamics, is that all the exchange rate return series we tested have a very high embedding dimension. Additionally, evidence indicates that these series are likely generated by high dimensional systems with measurement noise or by high dimensional nonlinear stochastic systems, that is, nonlinear deterministic systems with dynamic noise.
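A minimal sketch of time-delay embedding with a local predictor: embed the scalar series into delay vectors, find the nearest embedded neighbours of the current state, and average their successors. For simplicity this uses a local constant (nearest-neighbour average) predictor rather than the paper's local linear one, on an artificial noiseless series:

```python
import numpy as np

def embed(series, dim, lag=1):
    """Time-delay embedding: row t is (x_t, x_{t-lag}, ..., x_{t-(dim-1)*lag})."""
    n = series.size - (dim - 1) * lag
    cols = [series[i * lag : i * lag + n] for i in range(dim)]
    return np.column_stack(cols[::-1])

def local_predict(train, state, k=5, dim=3):
    """Predict the next value as the average successor of the k nearest
       embedded neighbours (local constant predictor)."""
    X = embed(train, dim)[:-1]        # drop the last state: successor unknown
    succ = train[dim:]                # value following each remaining state
    d = np.linalg.norm(X - state, axis=1)
    return succ[np.argsort(d)[:k]].mean()

# artificial noiseless series: deterministic, so local prediction works well
x = np.sin(0.3 * np.arange(600))
state = np.array([x[598], x[597], x[596]])   # last observed delay vector
pred = local_predict(x[:599], state)         # one-step-ahead forecast of x[599]
```

On noisy real returns, as the abstract notes, such local predictors gain little over a mean value predictor; the clean sine series is only meant to show the mechanics.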
