Similar Articles
20 similar articles found.
1.
The class of p2 models is suitable for modeling binary relation data in social network analysis. A p2 model is essentially a regression model for bivariate binary responses, featuring within-dyad dependence and correlated crossed random effects to represent heterogeneity of actors. Despite some desirable properties, these models are used less frequently in empirical applications than other models for network data. One possible reason is the model's limited ability to account for (and explicitly model) structural dependence beyond the dyad, as exponential random graph models can. Another may lie in the computational difficulty of estimating such models with the methods proposed in the literature, such as joint maximization methods and Bayesian methods. The aim of this article is to investigate maximum likelihood estimation based on the Laplace approximation, which can be refined by importance sampling. Such methods can be implemented efficiently in practice, and the article provides details of a software implementation using R. Numerical examples and simulation studies illustrate the methodology.

2.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution for modeling time-to-event data. It accommodates both monotone and nonmonotone hazard shapes and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving the maximum likelihood estimators in terms of asymptotic efficiency. For example, there has recently been considerable attention to applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.
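For reference, the exponentiated Weibull has distribution function F(t) = (1 - exp(-(t/sigma)^k))^alpha, which reduces to the ordinary Weibull at alpha = 1. A minimal sketch (the parameter values are illustrative) evaluating its density and hazard:

```python
import numpy as np

def ew_cdf(t, alpha, k, sigma):
    return (1.0 - np.exp(-(t / sigma) ** k)) ** alpha

def ew_pdf(t, alpha, k, sigma):
    z = (t / sigma) ** k
    return (alpha * k / sigma) * (t / sigma) ** (k - 1.0) \
        * np.exp(-z) * (1.0 - np.exp(-z)) ** (alpha - 1.0)

def ew_hazard(t, alpha, k, sigma):
    # hazard = density / survival; flexible shapes come from the two
    # shape parameters alpha and k acting together
    return ew_pdf(t, alpha, k, sigma) / (1.0 - ew_cdf(t, alpha, k, sigma))

alpha, k, sigma = 0.5, 2.0, 1.0            # illustrative values
t = np.linspace(1e-6, 8.0, 40001)
pdf = ew_pdf(t, alpha, k, sigma)
cdf_vals = ew_cdf(t, alpha, k, sigma)
total = np.sum(0.5 * (pdf[1:] + pdf[:-1])) * (t[1] - t[0])   # should be ~1
haz = ew_hazard(np.linspace(0.1, 3.0, 30), alpha, k, sigma)
```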

3.
A new estimator is proposed for linear triangular systems, where identification results from the model errors following a bivariate and diagonal GARCH(1,1) process with potentially time-varying error covariances. This estimator applies when traditional instruments are unavailable. I demonstrate its usefulness on asset pricing models like the capital asset pricing model and Fama–French three-factor model. In the context of a standard two-pass cross-sectional regression approach, this estimator improves the pricing performance of both models. Set identification bounds and an associated estimator are also provided for cases where the conditions supporting point identification fail. Copyright © 2013 John Wiley & Sons, Ltd.

4.
A simultaneous confidence band provides a variety of inferences on the unknown components of a regression model. There are several recent papers using confidence bands for various inferential purposes; see, for example, Sun et al. (1999), Spurrier (1999), Al-Saidy et al. (2003), Liu et al. (2004), Bhargava & Spurrier (2004), Piegorsch et al. (2005) and Liu et al. (2007). Construction of simultaneous confidence bands for a simple linear regression model has a rich history, going back to the work of Working & Hotelling (1929). The purpose of this article is to consolidate the disparate modern literature on simultaneous confidence bands in linear regression, and to provide expressions for the construction of exact 1 − α level simultaneous confidence bands for a simple linear regression model of either one-sided or two-sided form. We center attention on the three most recognized shapes: hyperbolic, two-segment, and three-segment (which is also referred to as a trapezoidal shape and includes a constant-width band as a special case). Some of these expressions have already appeared in the statistics literature, and some are newly derived in this article. The derivations typically involve a standard bivariate t random vector and its polar coordinate transformation.
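As a concrete example of the hyperbolic shape, the two-sided Working–Hotelling band for a simple linear regression uses the critical constant sqrt(2 * F_{2, n-2; alpha}). A sketch on simulated data (the data-generating values are arbitrary):

```python
import numpy as np
from scipy.stats import f as f_dist

# Simulated data; the true line and error scale are arbitrary
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.3, x.size)

n = x.size
xbar = x.mean()
Sxx = np.sum((x - xbar) ** 2)
b1 = np.sum((x - xbar) * (y - y.mean())) / Sxx      # least-squares slope
b0 = y.mean() - b1 * xbar                           # least-squares intercept
s = np.sqrt(np.sum((y - b0 - b1 * x) ** 2) / (n - 2))

# Hyperbolic (Working-Hotelling) band: critical constant sqrt(2 * F_{2, n-2; 0.05})
c = np.sqrt(2.0 * f_dist.ppf(0.95, 2, n - 2))
xs = np.linspace(0.0, 10.0, 101)
se = s * np.sqrt(1.0 / n + (xs - xbar) ** 2 / Sxx)
lower = b0 + b1 * xs - c * se
upper = b0 + b1 * xs + c * se
width = upper - lower   # narrowest at x = xbar, widening hyperbolically
```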

5.
Households' choice of the number of leisure trips and the total number of overnight stays is empirically studied using Swedish tourism data. A bivariate hurdle approach separating the participation decision (to travel and stay the night or not) from the quantity decision (the number of trips and nights) is employed. The quantity decision is modelled with a bivariate mixed Poisson lognormal model allowing for both positive and negative correlation between the count variables. The observed endogenous variables are drawn from a truncated density, and estimation is pursued by simulated maximum likelihood. The estimation results indicate a negative correlation between the number of trips and nights. In most cases own-price effects are, as expected, negative, while estimates of cross-price effects vary between samples. Copyright © 2005 John Wiley & Sons, Ltd.

6.
In epidemiology and clinical research, there is often a proportion of unexposed individuals, resulting in zero values of exposure: some individuals are not exposed, while the exposed follow some continuous distribution. Examples are smoking and alcohol consumption. We call these variables with a spike at zero (SAZ). In this paper, we perform a systematic investigation of how to model covariates with a SAZ and derive theoretical odds ratio functions for selected bivariate distributions. We consider the bivariate normal and bivariate log-normal distributions with a SAZ. Both confounding and effect modification can be elegantly described by formalizing the covariance matrix given the binary outcome variable Y. To model the effect of these variables, we use a procedure based on fractional polynomials, first introduced by Royston and Altman (1994, Applied Statistics 43: 429–467) and modified for the SAZ situation (Royston and Sauerbrei, 2008, Multivariable model-building: a pragmatic approach to regression analysis based on fractional polynomials for modelling continuous variables, Wiley; Becher et al., 2012, Biometrical Journal 54: 686–700). We aim to contribute to theory, practical procedures and application in epidemiology and clinical research for deriving multivariable models for variables with a SAZ. As an example, we use data from a case–control study on lung cancer.

7.
Hedonic methods are a prominent approach in the construction of quality-adjusted price indexes. This paper shows that the process of computing such indexes is substantially simplified if arithmetic (geometric) price indexes are computed based on exponential (log-linear) hedonic functions estimated by the Poisson pseudo-maximum likelihood (ordinary least squares) method. A Monte Carlo simulation study based on housing data illustrates the convenience of the links identified and the very attractive properties of the Poisson estimator in the hedonic framework.
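The Poisson pseudo-maximum likelihood estimator of an exponential hedonic function can be computed with plain Newton/IRLS iterations; a numpy-only sketch (the design, coefficients, and the noiseless prices are hypothetical, chosen so the estimator recovers the truth exactly):

```python
import numpy as np

# Hypothetical hedonic design: price = exp(X beta) exactly (noiseless)
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
beta_true = np.array([0.5, 0.3, -0.2])
price = np.exp(X @ beta_true)

# Poisson pseudo-ML with log link, fitted by Newton/IRLS; the estimator is
# valid for continuous positive outcomes, not only counts
beta = np.zeros(3)
for _ in range(50):
    mu = np.exp(X @ beta)
    score = X.T @ (price - mu)             # Poisson score equations
    info = X.T @ (X * mu[:, None])         # expected information
    beta = beta + np.linalg.solve(info, score)
```

An arithmetic index between periods can then be built from ratios of mean fitted prices, while fitting the log-linear function by OLS yields the geometric counterpart, which is the link the paper exploits.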

8.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model to analyse lifetime data. In this article we consider a semi-parametric modelling approach for TR and contribute implementational and theoretical details for model fitting and statistical inferences. Extensive simulations are carried out to examine the finite sample performance of the parametric and non-parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.
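Threshold regression views the event time as the first time a latent Wiener process with negative drift, started above the boundary, crosses it; that first hitting time is inverse Gaussian. A simulation sketch (all parameter values are illustrative) checking the first-hitting-time mean x0/|mu|:

```python
import numpy as np

# First-hitting-time view: lifetime = first time a Wiener process with drift
# mu < 0 starting at x0 > 0 reaches zero; this time is inverse Gaussian.
rng = np.random.default_rng(42)
x0, mu, sigma_w = 1.0, -0.5, 1.0           # illustrative values
dt, horizon, n_paths = 0.01, 30.0, 1000
n_steps = int(horizon / dt)

steps = mu * dt + sigma_w * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = x0 + np.cumsum(steps, axis=1)
hit = paths <= 0.0
hit_any = hit.any(axis=1)
first = np.argmax(hit, axis=1)             # index of first boundary crossing
fht = (first[hit_any] + 1) * dt            # observed first hitting times

theoretical_mean = x0 / abs(mu)            # inverse Gaussian mean, here 2.0
```

The simulated mean slightly overshoots the theoretical one because crossings are only monitored at the grid times; a finer dt shrinks that bias.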

9.
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images and so on are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorising the basic model types as linear, non-linear and non-parametric. We discuss publicly available software packages and illustrate some of the procedures by application to a functional magnetic resonance imaging data set.
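The linear case, a scalar-on-function model y_i = integral of X_i(t) beta(t) dt, can be reduced to ordinary least squares by expanding beta(t) in a basis. A minimal noiseless sketch with a small Fourier basis (all choices here are illustrative, not taken from the review):

```python
import numpy as np

rng = np.random.default_rng(7)
tgrid = np.linspace(0.0, 1.0, 201)
dt = tgrid[1] - tgrid[0]

# Small Fourier basis used for both the predictor curves and beta(t)
B = np.vstack([np.ones_like(tgrid),
               np.sin(2 * np.pi * tgrid), np.cos(2 * np.pi * tgrid),
               np.sin(4 * np.pi * tgrid), np.cos(4 * np.pi * tgrid)])

n = 50
Xfun = rng.normal(0.0, 1.0, (n, B.shape[0])) @ B   # n observed curves
beta_t = np.sin(2 * np.pi * tgrid)                 # true coefficient function

# Scalar-on-function model: y_i = integral of X_i(t) * beta(t) dt (noiseless)
y = (Xfun * beta_t).sum(axis=1) * dt

# Reduce to OLS: y = Z c with Z_ik = integral of X_i(t) * B_k(t) dt
Z = (Xfun @ B.T) * dt
c, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = c @ B                                   # recovered beta(t)
```

In practice one would add noise, a roughness penalty or basis-size selection; this sketch only shows the basis-expansion reduction itself.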

10.
Bairamov et al. (Aust N Z J Stat 47:543–547, 2005) characterize the exponential distribution in terms of the regression of a function of a record value with its adjacent record values as covariates. We extend these results to the case of non-adjacent covariates. We also consider a more general setting involving monotone transformations. As special cases, we present characterizations involving weighted arithmetic, geometric, and harmonic means.

11.
Various models have been proposed as bivariate forms of the exponential distribution. A brief but comprehensive review is presented which classifies, interrelates and contrasts the different models and outlines what is known about distributional properties, applicability and estimation and testing of parameters (particularly the association parameter). Some new results are presented for one particular model. Maximum likelihood, and moment-type, estimators of the association parameter are examined. Asymptotic variances are derived and attention is given to the relative efficiency of the estimators and to problems of their evaluation.

12.
In this paper we develop a dynamic discrete-time bivariate probit model, in which the conditions for Granger non-causality can be represented and tested. The conditions for simultaneous independence are also worked out. The model is extended in order to allow for covariates, representing individual as well as time heterogeneity. The proposed model can be estimated by maximum likelihood. Granger non-causality and simultaneous independence can be tested by likelihood ratio or Wald tests. A specialized version of the model, aimed at testing Granger non-causality with bivariate discrete-time survival data, is also discussed. The proposed tests are illustrated in two empirical applications.

13.
Abstract: A Galton model in logarithmic form is used to explain the convergence of countries' productivities over time, including β and σ convergence. The 'regression fallacy' does not arise. When comparing growth rates and levels of productivity, countries should be classified by their initial levels of productivity, preferably after logarithmic transformation. Errors in the data may justify using an errors-in-variables model, or the geometric mean of the original and reverse regressions, but this is not certain.

14.
Short-term forecasting of crime
The major question investigated is whether it is possible to accurately forecast selected crimes 1 month ahead in small areas, such as police precincts. In a case study of Pittsburgh, PA, we contrast the forecast accuracy of univariate time series models with naïve methods commonly used by police. A major result, expected for the small-scale data of this problem, is that average crime count by precinct is the major determinant of forecast accuracy. A fixed-effects regression model of absolute percent forecast error shows that such counts need to be on the order of 30 or more to achieve accuracy of 20% absolute forecast error or less. A second major result is that practically any model-based forecasting approach is vastly more accurate than current police practices. Holt exponential smoothing with monthly seasonality estimated using city-wide data is the most accurate forecast model for precinct-level crime series.
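A minimal sketch of that comparison on synthetic monthly data: Holt–Winters smoothing with multiplicative monthly seasonality against a naive lag-one forecast. The series, smoothing constants, and initialization below are all hypothetical, not the paper's Pittsburgh setup:

```python
import numpy as np

# Deterministic monthly series with trend and multiplicative seasonality
season = np.array([0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.2, 1.1, 1.0, 0.9, 0.8])
t = np.arange(120)
y = (20.0 + 0.3 * t) * season[t % 12]

# Holt-Winters (multiplicative seasonality), one-step-ahead forecasts
alpha, beta, gamma = 0.3, 0.1, 0.3
level = y[:12].mean()
trend = (y[12:24].mean() - y[:12].mean()) / 12.0
s = y[:12] / y[:12].mean()                    # crude initial seasonal factors

hw_err, naive_err = [], []
for i in range(12, 120):
    m = i % 12
    fcast = (level + trend) * s[m]
    if i >= 24:                               # score after a warm-up year
        hw_err.append(abs(y[i] - fcast) / y[i])
        naive_err.append(abs(y[i] - y[i - 1]) / y[i])   # lag-1 "naive" forecast
    new_level = alpha * y[i] / s[m] + (1 - alpha) * (level + trend)
    trend = beta * (new_level - level) + (1 - beta) * trend
    s[m] = gamma * y[i] / new_level + (1 - gamma) * s[m]
    level = new_level

mape_hw = float(np.mean(hw_err))
mape_naive = float(np.mean(naive_err))
```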

15.
A Poisson model is typically assumed for count data, but in many applications the dependent variable contains so many zeros that its variance no longer equals its mean, making the Poisson model unsuitable. We therefore suggest a hurdle generalized Poisson regression model. Furthermore, in such cases the response variable may be right-censored because of some very large values. This paper introduces a censored hurdle generalized Poisson regression model for count data with many zeros. The estimation of the regression parameters using the maximum likelihood method is discussed, and the goodness of fit of the regression model is examined. An example and a simulation are used to illustrate the effects of right censoring on the parameter estimates and their standard errors.
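A simulation sketch of data generated by a hurdle model with right censoring (an ordinary zero-truncated Poisson stands in for the generalized Poisson, and all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, pi_pos, lam, cens = 10000, 0.6, 3.0, 6   # hypothetical hurdle settings

def zero_truncated_poisson(lam, rng):
    # rejection sampling from the zero-truncated Poisson
    while True:
        k = rng.poisson(lam)
        if k > 0:
            return k

z = rng.random(n) < pi_pos                  # hurdle: crossed or not
y = np.array([zero_truncated_poisson(lam, rng) if zi else 0 for zi in z])
y_obs = np.minimum(y, cens)                 # right censoring at cens
```

Fitting would maximize a likelihood combining the hurdle probability, the truncated count density below the censoring point, and the tail mass at the censoring point; the sketch only generates the kind of data that likelihood describes.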

16.
In this paper, we develop a bivariate unobserved components model for inflation and unemployment. The unobserved components are trend inflation and the non-accelerating inflation rate of unemployment (NAIRU). Our model also incorporates a time-varying Phillips curve and time-varying inflation persistence. What sets this paper apart from the existing literature is that we do not use unbounded random walks for the unobserved components, but rather bounded random walks. For instance, NAIRU is assumed to evolve within bounds. Our empirical work shows the importance of bounding. We find that our bounded bivariate model forecasts better than many alternatives, including a version of our model with unbounded unobserved components. Our model also yields sensible estimates of trend inflation, NAIRU, inflation persistence and the slope of the Phillips curve. Copyright © 2015 John Wiley & Sons, Ltd.
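A bounded random walk can be simulated by drawing innovations from a truncated distribution, here via simple rejection (the bounds and scale are illustrative, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
lo, hi, sigma, T = 3.0, 7.0, 0.2, 500      # illustrative NAIRU bounds (percent)

tau = np.empty(T)
tau[0] = 5.0
for t in range(1, T):
    # truncated-normal innovation via rejection: redraw until inside the bounds
    while True:
        step = sigma * rng.standard_normal()
        if lo <= tau[t - 1] + step <= hi:
            tau[t] = tau[t - 1] + step
            break
```

Unlike an unbounded random walk, this process can never wander to implausible values, which is the feature the forecasting comparison in the abstract exploits.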

17.
This article proposes a new technique for estimating trend and multiplicative seasonality in time series data. The technique is computationally quite straightforward and gives better forecasts (in a sense described below) than other commonly used methods. Like many other methods, the one presented here is basically a decomposition technique; that is, it attempts to isolate and estimate the several subcomponents of the time series. It draws primarily on regression analysis for its power and has some of the computational advantages of exponential smoothing. In particular, old estimates of base, trend, and seasonality may be smoothed with new data as they occur. The basic technique was developed originally as a way to generate initial parameter values for a Winters exponential smoothing model [4], but it proved to be a useful forecasting method in itself.
The objective in all decomposition methods is to separate the effects of trend and seasonality in the data, so that the two may be estimated independently. When seasonality is modeled in an additive form (Datum = Base + Trend + Seasonal Factor), techniques such as regression analysis with dummy variables or ratio-to-moving-average techniques accomplish this task well. It is more common, however, to model seasonality in a multiplicative form (as in the Winters model, for example, where Datum = [Base + Trend] * Seasonal Factor). In this case, it can be shown that neither of the techniques above achieves a proper separation of the trend and seasonal effects, and in some instances they may give highly misleading results. The technique described in this article attempts to deal properly with multiplicative seasonality, while remaining computationally tractable.
The technique is built on a set of simple regression models, one for each period in the seasonal cycle. These models are used to estimate individual seasonal effects and are then pooled to estimate the base and trend. As new data occur, they are smoothed into the least-squares formulas with computations that are quite similar to those used in ordinary exponential smoothing. Thus, the full least-squares computations are done only once, when the forecasting process is first initiated. Although the technique is demonstrated here under the assumption that trend is linear, the trend may in fact assume any form for which curve-fitting tools are available (exponential, polynomial, etc.). The method has proved easy to program and execute, and computational experience has been quite favorable. It is faster than the RTMA method or regression with dummy variables (which requires a multiple regression routine), and it is competitive with, although a bit slower than, ordinary triple exponential smoothing.
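The flavor of separating multiplicative seasonality from trend can be seen in a simple alternating scheme (this is an illustration of the decomposition idea on a noiseless synthetic series, not the article's pooled per-period regressions or its smoothing updates):

```python
import numpy as np

# Synthetic noiseless series: Datum = (Base + Trend * t) * Seasonal Factor
season = np.array([0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.2, 1.1, 1.0, 0.9, 0.8])
t = np.arange(120)
m = t % 12
y = (10.0 + 0.2 * t) * season[m]

# Alternate between fitting the trend line to deseasonalized data and
# re-estimating the seasonal factors from detrended ratios
s_hat = np.ones(12)
for _ in range(10):
    b1, b0 = np.polyfit(t, y / s_hat[m], 1)   # OLS line, slope then intercept
    fit = b0 + b1 * t
    s_hat = np.array([(y / fit)[m == k].mean() for k in range(12)])
    s_hat /= s_hat.mean()                     # normalize factors to mean 1
```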

18.
Quantile regression techniques have been widely used in empirical economics. In this paper, we consider the estimation of a generalized quantile regression model when data are subject to fixed or random censoring. Through a discretization technique, we transform the censored regression model into a sequence of binary choice models and further propose an integrated smoothed maximum score estimator by combining individual binary choice models, following the insights of Horowitz (1992) and Manski (1985). Unlike the estimators of Horowitz (1992) and Manski (1985), our estimators converge at the usual parametric rate through an integration process. In the case of fixed censoring, our approach overcomes a major drawback of existing approaches associated with the curse-of-dimensionality problem. Our approach for the fixed censored case can be extended readily to the case with random censoring, for which other existing approaches are no longer applicable. Both of our estimators are consistent and asymptotically normal. A simulation study demonstrates that our estimators perform well in finite samples.

19.
The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy-to-implement semiparametric estimation framework with parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root-n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data-generating processes. We then provide an empirical illustration where we estimate the effect of health insurance on doctor visits. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.
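The sensitivity to the copula choice can be illustrated by computing the joint success probability under two parametric copulas with the same probit margins (the index values and dependence parameters below are arbitrary):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Probit marginals for two binary outcomes (index values are illustrative)
u1, u2 = norm.cdf(0.3), norm.cdf(-0.2)

def gaussian_copula(u, v, rho):
    # Gaussian copula evaluated via the bivariate normal CDF
    return multivariate_normal.cdf([norm.ppf(u), norm.ppf(v)],
                                   mean=[0.0, 0.0],
                                   cov=[[1.0, rho], [rho, 1.0]])

def frank_copula(u, v, theta):
    # Frank copula: C(u,v) = -log(1 + (e^{-tu}-1)(e^{-tv}-1)/(e^{-t}-1)) / t
    return -np.log1p((np.expm1(-theta * u) * np.expm1(-theta * v))
                     / np.expm1(-theta)) / theta

p11_gauss = gaussian_copula(u1, u2, 0.5)   # bivariate probit case
p11_frank = frank_copula(u1, u2, 3.5)      # alternative dependence structure
```

Both probabilities respect the Fréchet bounds yet differ numerically, which is the kind of specification sensitivity the paper's Monte Carlo exercises document.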

20.
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval; by incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data on patients hospitalised with a heart attack, with implementations in three statistical programming languages (R, SAS and Stata).
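The person-period expansion behind the piecewise exponential approach can be sketched directly; each expanded row would then enter a Poisson model with log(exposure) as an offset (the cut points below are illustrative):

```python
# Expand one (time, event) survival record into person-period rows for a
# piecewise exponential model; interval cut points are illustrative.
def expand_piecewise(time, event, cuts):
    rows = []
    start = 0.0
    for j, end in enumerate(cuts):
        if time <= start:
            break
        exposure = min(time, end) - start          # time at risk in interval j
        died = 1 if (event == 1 and time <= end) else 0
        rows.append({"interval": j, "exposure": exposure, "event": died})
        start = end
    return rows

cuts = [30.0, 90.0, 365.0]                         # days; hypothetical grid
rows = expand_piecewise(100.0, 1, cuts)            # event at day 100
```

Total exposure across rows equals the follow-up time and the event appears exactly once, in the interval containing the event time; cluster-specific random effects would then be added at the Poisson-model stage.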


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号