Similar Articles
20 similar articles found
1.
In this paper we present the first application to unemployment duration analysis of a mixture distribution model initially proposed in the biosciences literature by Blackstone, Naftel and Turner (1986). The model is characterized by the decomposition of an aggregate hazard function into a number of distinct hazard functions. The approach allows us to attribute to each function a different set of covariates as well as coefficients. Using US data on displaced workers, we are able to decompose the time-varying hazard into two distinct phases — corresponding to short-term and long-term unemployment — and in the process evaluate (and reject) the proportionality assumption. We also compare the results from the model with those obtained from the Cox proportional hazards model and with a parametric hazards model in which a Burr specification is employed for the baseline hazard.
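A minimal numerical sketch of the decomposition idea: the aggregate hazard is written as the sum of an early (short-term) phase and a late (long-term) phase, each with its own covariates and coefficients. The functional forms, covariates, and parameter values below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def early_hazard(t, x_early, beta_early, rate=1.5):
    # Decaying phase: dominant for short unemployment spells (hypothetical form).
    return np.exp(x_early @ beta_early) * rate * np.exp(-rate * t)

def late_hazard(t, x_late, beta_late, level=0.05):
    # Roughly constant phase: dominant for long-term unemployment (hypothetical form).
    return np.exp(x_late @ beta_late) * level * np.ones_like(t)

t = np.linspace(0.01, 24, 200)          # months unemployed
x_early = np.array([1.0, 0.3])          # hypothetical covariate values
x_late = np.array([1.0, -0.2])
beta_early = np.array([0.1, 0.5])       # each phase gets its own coefficients
beta_late = np.array([-0.3, 0.4])

aggregate = early_hazard(t, x_early, beta_early) + late_hazard(t, x_late, beta_late)
print(np.round(aggregate[:5], 4))
```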

2.
In this paper we examine a multiplicative intensity model in which a covariate interacts with two other covariates in the same model. We demonstrate, analytically, that in such situations a log-linear parameterization based on two pairs of baseline levels cannot be transformed uniquely into the otherwise equivalent multiplicative parameterization. We show that the problem lies in an oversight of the conditional independence between the two covariates interacting with a common third covariate. As a solution, therefore, we propose an approach that takes due account of such dependence. Our proposed approach uses a common baseline level for the three covariates involved in the interaction while estimating the corresponding relative intensities. The issues addressed are illustrated with a demographic data set involving the estimation of rates of transition to parenthood.

3.
In the competing risks context, the effect of a covariate on the hazard function for a particular cause may be very different from its effect on the likelihood of exiting due to that cause. The latter probability is a function of all cause-specific hazards, and is thereby potentially affected by indirect effects via the hazards for competing causes. We consider the effects of covariates on the cumulative probability of exiting from a particular cause. These ‘marginal effects’ are decomposed into direct effects via the hazard of interest and indirect effects via the hazards for competing causes. For the piecewise constant hazard specification we derive simple closed-form expressions for the marginal effects that can be computed from the standard hazard function estimates. An empirical application illustrates that the marginal effects provide a useful and coherent way of summarizing the results of a competing risks analysis.
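A minimal sketch of the underlying quantity: with piecewise-constant cause-specific hazards, the cumulative incidence of cause 1 has a simple interval-by-interval form, and a finite-difference "marginal effect" of a covariate on that probability can be read off directly. The hazard values, coefficients, and horizon below are illustrative assumptions, not the paper's closed-form expressions.

```python
import numpy as np

def cumulative_incidence(cuts, lam1, lam2, horizon):
    """cuts: interval boundaries [0, t1, ..., T]; lam1, lam2: cause-specific hazards per interval."""
    surv, cif = 1.0, 0.0
    for a, b, l1, l2 in zip(cuts[:-1], cuts[1:], lam1, lam2):
        b = min(b, horizon)
        if b <= a:
            break
        tot = l1 + l2
        cif += surv * (l1 / tot) * (1.0 - np.exp(-tot * (b - a)))
        surv *= np.exp(-tot * (b - a))
    return cif

cuts = [0.0, 6.0, 12.0, 24.0]
base1 = np.array([0.10, 0.06, 0.04])     # hypothetical baseline hazards, cause 1
base2 = np.array([0.03, 0.05, 0.07])     # hypothetical baseline hazards, cause 2
beta1, beta2, x = 0.4, -0.1, 1.0         # hypothetical covariate effects and value

def cif_at(x):
    return cumulative_incidence(cuts, base1 * np.exp(beta1 * x),
                                base2 * np.exp(beta2 * x), horizon=24.0)

# Marginal effect of the covariate on the probability of exiting via cause 1.
eps = 1e-4
print((cif_at(x + eps) - cif_at(x - eps)) / (2 * eps))
```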

4.
In this paper, we propose an empirical likelihood ratio method for inference about average derivatives in semiparametric hazard regression models for competing risks data. The empirical log-likelihood ratio for the vector of average derivatives of a hazard regression function is defined and shown to be asymptotically chi-squared with degrees of freedom equal to the dimension of the covariate vector. Monte Carlo simulation studies are presented to compare the empirical likelihood ratio method with the normal-approximation-based method.
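A small sketch of how the limiting result is used in practice: with a d-dimensional covariate vector, the empirical log-likelihood ratio statistic is compared with a chi-squared(d) critical value to build tests or confidence regions. The statistic value below is a placeholder, not output of the actual method.

```python
from scipy.stats import chi2

d = 3                        # dimension of the covariate vector (assumed)
elr_stat = 6.2               # -2 * log empirical likelihood ratio (hypothetical value)
critical = chi2.ppf(0.95, df=d)
print(elr_stat, round(critical, 3), elr_stat > critical)
```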

5.
This paper considers multiple regression procedures for analyzing the relationship between a response variable and a vector of d covariates in a nonparametric setting where tuning parameters need to be selected. We introduce an approach which handles the dilemma that, with high-dimensional data, the sparsity of data in regions of the sample space makes estimation of nonparametric curves and surfaces virtually impossible. This is accomplished by abandoning the goal of trying to estimate true underlying curves and instead estimating measures of dependence that can determine important relationships between variables. These dependence measures are based on local parametric fits on subsets of the covariate space that vary in both dimension and size within each dimension. The subset which maximizes a signal-to-noise ratio is chosen, where the signal is a local estimate of a dependence parameter which depends on the subset dimension and size, and the noise is an estimate of the standard error (SE) of the estimated signal. This approach of choosing the window size to maximize a signal-to-noise ratio lifts the curse of dimensionality because for regions with sparsity of data the SE is very large. It corresponds to asymptotically maximizing the probability of correctly finding nonspurious relationships between covariates and a response or, more precisely, maximizing asymptotic power among a class of asymptotic level α t-tests indexed by subsets of the covariate space. Subsets that achieve this goal are called features. We investigate the properties of specific procedures based on the preceding ideas using asymptotic theory and Monte Carlo simulations, and find that within a selected dimension, the volume of the optimally selected subset does not tend to zero as n → ∞ unless the volume of the subset of the covariate space where the response depends on the covariate vector tends to zero.
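A minimal one-dimensional sketch of the signal-to-noise criterion: over candidate windows (subsets of the covariate space), fit a local linear model, take the estimated slope as the "signal" and its standard error as the "noise", and keep the window maximizing |signal|/noise. The window grid, minimum sample size per window, and simulated data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.where((x > 0.4) & (x < 0.6), 2.0 * x, 0.0) + rng.normal(0, 1, 500)

def slope_and_se(xs, ys):
    # Local linear fit: return the slope estimate and its standard error.
    X = np.column_stack([np.ones_like(xs), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    resid = ys - X @ beta
    sigma2 = resid @ resid / (len(xs) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

best_ratio, best_window = -np.inf, None
for lo in np.arange(0.0, 0.8, 0.1):
    for hi in np.arange(lo + 0.2, 1.01, 0.1):
        mask = (x >= lo) & (x <= hi)
        if mask.sum() < 20:                      # sparse windows have huge SE anyway
            continue
        slope, se = slope_and_se(x[mask], y[mask])
        if abs(slope) / se > best_ratio:
            best_ratio, best_window = abs(slope) / se, (lo, hi)

print(best_window, round(best_ratio, 2))
```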

6.
This paper analyzes the short-run and long-run market behavior of duopoly firms from the perspective of vertical product differentiation. Specifically, in the short run, product quality is treated as an exogenous variable; the pricing strategies and profits of the high-quality and low-quality firms are studied, and a comparative statics analysis of the equilibrium outcomes is carried out. The results show that each firm may have an incentive to raise or lower its product quality, depending on the quality levels currently held by the two sides. In the long run, product quality is treated as an endogenous variable and the firms' joint quality and pricing decisions are studied. The results show that a unique equilibrium in product quality exists, but the equilibrium qualities exhibit small rather than large differentiation. Moreover, from both the short-run and the long-run perspective, firms enjoy no high-quality advantage, only a low-quality advantage.
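A short-run sketch: with qualities fixed, iterate price best responses in a textbook vertical-differentiation game (consumer taste θ ~ U[0,1], utility θq − p, zero production cost) until the approximate Nash prices are reached. The demand system and parameter values are standard textbook assumptions used only for illustration, not the paper's exact model.

```python
import numpy as np

def demands(pL, pH, qL, qH):
    # Consumers with taste above theta_HL buy high quality; between theta_L0 and
    # theta_HL buy low quality (assumes the usual ordering of prices/qualities).
    theta_HL = (pH - pL) / (qH - qL)
    theta_L0 = pL / qL
    dH = max(0.0, 1.0 - min(1.0, theta_HL))
    dL = max(0.0, min(1.0, theta_HL) - min(1.0, theta_L0))
    return dL, dH

def best_response(p_other, own, qL, qH, grid=np.linspace(0.001, 1.0, 2000)):
    profits = []
    for p in grid:
        dL, dH = demands(p, p_other, qL, qH) if own == "L" else demands(p_other, p, qL, qH)
        profits.append(p * (dL if own == "L" else dH))
    return grid[int(np.argmax(profits))]

qL, qH = 0.5, 1.0                       # hypothetical short-run quality levels
pL, pH = 0.1, 0.3
for _ in range(50):                     # iterate to approximate equilibrium prices
    pL = best_response(pH, "L", qL, qH)
    pH = best_response(pL, "H", qL, qH)

# Compare with the closed form qL(qH-qL)/(4qH-qL) and 2qH(qH-qL)/(4qH-qL).
print(round(pL, 3), round(pH, 3))
```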

7.
We consider a semiparametric method to estimate logistic regression models in which both covariates and the outcome variable may be missing, and propose two new estimators. The first, which is based solely on the validation set, is an extension of the validation likelihood estimator of Breslow and Cain (Biometrika 75:11–20, 1988). The second is a joint conditional likelihood estimator based on the validation and non-validation data sets. Both estimators are semiparametric as they do not require any model assumptions regarding the missing data mechanism or the specification of the conditional distribution of the missing covariates given the observed covariates. The asymptotic distribution theory is developed under the assumption that all covariate variables are categorical. The finite-sample properties of the proposed estimators are investigated through simulation studies, which show that the joint conditional likelihood estimator is the most efficient. A cable TV survey data set from Taiwan is used to illustrate the practical use of the proposed methodology.

8.
A host of recent research has used reweighting methods to analyze the extent to which observable characteristics predict between-group differences in the distribution of an outcome. Less attention has been paid to using reweighting methods to isolate the roles of individual covariates. We analyze two approaches that have been used in previous studies, and we propose a new approach that examines the role of one covariate while holding the marginal distribution of the other covariates constant. We illustrate the differences between the methods with a numerical example and an empirical analysis of black–white wage differentials among males.
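A simplified sketch of the reweighting idea: reweight group B so that the distribution of one covariate (education) matches group A, then compare the raw and reweighted mean-wage gaps. This single-covariate cell reweighting is only a stylized version of the exercise described above; the data, cell definitions, and variable names are illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], n),
    "educ": rng.choice(["HS", "College"], n, p=[0.6, 0.4]),
    "exper": rng.choice(["low", "high"], n),
})
df["wage"] = rng.normal(10 + (df["educ"] == "College") * 5 + (df["group"] == "A") * 2, 2)

# Weight for group-B observations: ratio of group-A to group-B education shares.
pA = df.loc[df.group == "A", "educ"].value_counts(normalize=True)
pB = df.loc[df.group == "B", "educ"].value_counts(normalize=True)
w = np.where((df.group == "B").to_numpy(), df["educ"].map(pA / pB).to_numpy(), 1.0)

gap_raw = df.loc[df.group == "A", "wage"].mean() - df.loc[df.group == "B", "wage"].mean()
wB = w[(df.group == "B").to_numpy()]
reweighted_B = np.average(df.loc[df.group == "B", "wage"], weights=wB)
gap_reweighted = df.loc[df.group == "A", "wage"].mean() - reweighted_B
print(round(gap_raw, 3), round(gap_reweighted, 3))
```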

9.
Hawkes processes are used in statistical modeling for event clustering and causal inference, and can also be viewed as stochastic versions of popular compartmental models used in epidemiology. Here we show how to develop accurate models of COVID-19 transmission using Hawkes processes with spatial-temporal covariates. We model the conditional intensity of new COVID-19 cases and deaths in the U.S. at the county level, estimating the dynamic reproduction number of the virus within an EM algorithm through a regression on Google mobility indices and demographic covariates in the maximization step. We validate the approach on both short-term and long-term forecasting tasks, showing that the Hawkes process outperforms several models currently used to track the pandemic, including an ensemble approach and an SEIR variant. We also investigate which covariates and mobility indices are most important for building forecasts of COVID-19 in the U.S.
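A minimal sketch of a Hawkes-type conditional intensity for daily case counts, with the reproduction number driven by a covariate such as a mobility index. The exponential kernel, link function, and parameter values are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def conditional_intensity(day, cases, mobility, mu=0.5, beta0=0.0, beta1=0.8, omega=0.25):
    # Reproduction number depends on the covariate through an exponential link (assumed form).
    R_t = np.exp(beta0 + beta1 * mobility[day])
    # Exponential inter-event kernel: the weight of past cases decays with lag.
    lags = day - np.arange(day)
    triggering = np.sum(cases[:day] * omega * np.exp(-omega * lags))
    return mu + R_t * triggering

cases = np.array([5, 8, 12, 20, 18, 25, 30], dtype=float)      # hypothetical daily counts
mobility = np.array([0.0, -0.1, -0.2, -0.2, -0.3, -0.1, 0.0])  # hypothetical mobility index
print(conditional_intensity(6, cases, mobility))
```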

10.
This paper proposes a new axiomatic model of intertemporal choice that allows for dynamic inconsistency. We weaken the classical assumption of stationarity into two related axioms: stationarity in the short term and stationarity in the long term. We obtain a model with two independent discount factors, which is flexible enough to capture different time preferences, including a greater impatience for more immediate outcomes (when a long-term discount factor exceeds a compounded short-term discount factor). Our proposed model can accommodate some experimental results that cannot be rationalized by other existing models of dynamic inconsistency (such as quasi-hyperbolic discounting and generalized hyperbolic discounting).
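A sketch of the two-discount-factor idea: outcomes within a short horizon are discounted with a short-term factor, later outcomes additionally with a long-term factor. The exact functional form, horizon, and factor values below are illustrative assumptions, not the paper's axiomatized representation.

```python
def discount(t, delta_short=0.80, delta_long=0.95, horizon=1):
    # Short-term factor applies up to the horizon; long-term factor thereafter.
    if t <= horizon:
        return delta_short ** t
    return (delta_short ** horizon) * (delta_long ** (t - horizon))

# With delta_long > delta_short, discounting is steeper for immediate delays.
print([round(discount(t), 3) for t in range(5)])
```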

11.
Recent development of intensity estimation for inhomogeneous spatial point processes with covariates suggests that kerneling in the covariate space is a competitive intensity estimation method for inhomogeneous Poisson processes. It is not known whether this advantageous performance remains valid when the points interact. In the simplest common case, this happens, for example, when the objects represented as points have a spatial dimension. In this paper, kerneling in the covariate space is extended to Gibbs processes with covariate-dependent chemical activity and inhibitive interactions, and the performance of the approach is studied through extensive simulation experiments. It is demonstrated that under mild assumptions on the dependence of the intensity on covariates, this approach can provide better results than the classical nonparametric method based on local smoothing in the spatial domain. In comparison with parametric pseudo-likelihood estimation, the nonparametric approach can be more accurate, particularly when the dependence on covariates is weak or when there is uncertainty about the model or about the range of interactions. An important supplementary task is the dimension reduction of the covariate space. It is shown that techniques based on inverse regression, previously applied to Cox processes, are useful even when interactions are present.
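A one-dimensional sketch of kerneling in the covariate space: the intensity at a location is estimated as a kernel-weighted ratio of observed points to the "available area" at that location's covariate value. The window discretization, covariate surface, bandwidth, and data are illustrative assumptions (a Poisson process is simulated here; the paper's extension concerns interacting Gibbs processes).

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 200)                    # 1-D window discretized into cells
cell = grid[1] - grid[0]
covariate = np.sin(2 * np.pi * grid)             # covariate surface Z(u)
true_intensity = 50 * np.exp(covariate)          # intensity depends on Z only (assumed)
counts = rng.poisson(true_intensity * cell)      # simulated point counts per cell
points_cov = np.repeat(covariate, counts)        # covariate values at observed points

def kernel(z, h=0.2):
    return np.exp(-0.5 * (z / h) ** 2)

def intensity_hat(u_index, h=0.2):
    z0 = covariate[u_index]
    numer = kernel(z0 - points_cov, h).sum()                  # points near this covariate value
    denom = (kernel(z0 - covariate, h) * cell).sum()          # area with similar covariate values
    return numer / denom

print(round(intensity_hat(50), 2), round(true_intensity[50], 2))
```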

12.
This paper considers the identification and estimation of an extension of Roy's model (1951) of sectoral choice, which includes a non-pecuniary component in the selection equation and allows for uncertainty about potential earnings. We focus on the identification of the non-pecuniary component, which is key to disentangling the relative importance of monetary incentives versus preferences in the context of sorting across sectors. By exploiting the structure of the selection equation, we show that this component is point identified from the knowledge of the covariate effects on earnings, as soon as one covariate is continuous. Notably, and in contrast to most results on the identification of Roy models, this implies that identification can be achieved without any exclusion restriction or large support condition on the covariates. As a by-product, bounds are obtained on the distribution of the ex ante monetary returns. We propose a three-stage semiparametric estimation procedure for this model, which yields root-n consistent and asymptotically normal estimators. Finally, we apply our results to the educational context, providing new evidence from French data that non-pecuniary factors are a key determinant of higher education attendance decisions.

13.
In this article we propose the use of an asymmetric binary link function to extend the proportional hazard model for predicting loan default. The rationale behind this approach is that the symmetry assumption that has been widely used in the literature can be considered quite restrictive, especially during periods of financial distress. In our approach we allow for a flexible level of asymmetry in the probability of default through the use of the skewed logit distribution. This enables us to estimate the actual level of asymmetry associated with the data at hand. We apply our approach to both simulated data and a rich micro dataset of consumer loan accounts. Our results provide clear evidence that ignoring the actual level of asymmetry leads to seriously biased estimates of the slope coefficients, inaccurate marginal effects of the covariates of the model, and overestimation of the probability of default. Regarding the predictive power of the covariates of the model, we find that loan-specific covariates contain considerably more information about loan default than macroeconomic covariates, which are often used in practice to carry out macroprudential stress testing.
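A minimal sketch of one common way to introduce asymmetry into a binary link: raise the logistic CDF to a power, so that the shape parameter equal to one recovers the symmetric logit and other values skew the response probability. This scobit-style parameterization and the values below are illustrative assumptions, not necessarily the exact skewed logit specification used in the paper.

```python
import numpy as np

def skewed_logit(eta, alpha):
    # alpha = 1 gives the symmetric logit; alpha != 1 introduces asymmetry.
    return (1.0 / (1.0 + np.exp(-eta))) ** alpha

eta = np.linspace(-4, 4, 9)            # linear predictor values
for alpha in (0.5, 1.0, 2.0):
    print(alpha, np.round(skewed_logit(eta, alpha), 3))
```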

14.
This study examines the association between bond betas and default risk factors. We find that both long-term debt and the ratio of long-term to short-term debt increase the bond beta, whereas two measures of profitability (net income/total assets and EBIT/total assets) and a cash flow measure (cash flow from operations/total assets) decrease the bond beta. A proxy measure of the standard deviation of returns is also significantly negatively related to bond betas, confirming the prediction from the option pricing model. In addition, by using new cash flow measures in the discriminant analysis, we improve on the successful prediction rate of bond ratings.

15.
We investigate the time series properties of a volatility model whose conditional variance is specified as in ARCH but with an additional persistent covariate. The included covariate is assumed to be an integrated or nearly integrated process, with its effect on volatility given by a wide class of nonlinear volatility functions. Such a model is shown to generate many important characteristics that are commonly observed in financial time series. In particular, the model yields persistence in volatility and also predicts leptokurtosis well. This is true for any of the volatility functions considered in the paper, as long as the covariate is integrated or nearly integrated. Stationary covariates cannot produce these important characteristics observed in many financial time series. We present two empirical applications of the model, which show that the default premium (the yield spread between Baa and Aaa corporate bonds) affects stock return volatility and that the interest rate differential between two countries accounts for exchange rate return volatility. The forecast evaluation shows that the model generally outperforms GARCH and FIGARCH at relatively lower frequencies.
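A minimal simulation sketch of the idea: the conditional variance combines a standard ARCH(1) term with a nonlinear function of a persistent (near unit root) covariate. The volatility function f and all parameter values are illustrative assumptions; the point of the sketch is only that a persistent covariate induces volatility persistence and heavy tails.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = np.cumsum(rng.normal(0, 0.02, n))      # (nearly) integrated covariate, e.g. a default premium
f = lambda z: 0.5 * np.abs(z)              # one admissible nonlinear volatility function (assumed)

eps = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = 0.1
for t in range(1, n):
    sigma2[t] = 0.05 + 0.2 * eps[t - 1] ** 2 + f(x[t])   # ARCH(1) plus covariate term
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# First-order autocorrelation of the variance and sample kurtosis of returns.
print(round(np.corrcoef(sigma2[1:], sigma2[:-1])[0, 1], 3),
      round(float(np.mean(eps**4) / np.mean(eps**2) ** 2), 2))
```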

16.
In the recent past, the automotive supply industry has been facing increasing merger activity. This paper examines the short- and long-term wealth effects of horizontal mergers and acquisitions on acquirers in the automotive supply industry. Based on a sample of 230 takeover announcements between 1981 and 2007, significant positive announcement returns to acquiring companies were determined. While these positive short-term returns to acquirers represent an outstanding attribute of this industry in terms of perceived synergy potential, this study also finds that acquirers are unable to sustain this exceptional position beyond a short-term horizon. A combination of the Fama-French three-factor model in calendar time and the control firm approach in event time consistently reveals significant value destruction of about 20% over three years. In addition, the study determines a significant impact of internationalization, transaction volume, product diversification, and the acquirer's bidding experience on long-term post-acquisition performance.
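A compact sketch of the calendar-time approach mentioned above: regress the monthly excess return of an acquirer portfolio on the three Fama-French factors and read long-run abnormal performance off the intercept (alpha). The factor and portfolio series here are simulated purely for illustration and do not reproduce the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
months = 36
mkt = rng.normal(0.005, 0.04, months)    # simulated market excess return
smb = rng.normal(0.002, 0.03, months)    # simulated size factor
hml = rng.normal(0.003, 0.03, months)    # simulated value factor
port = -0.006 + 1.0 * mkt + 0.2 * smb + 0.1 * hml + rng.normal(0, 0.03, months)

X = np.column_stack([np.ones(months), mkt, smb, hml])
alpha, b_mkt, b_smb, b_hml = np.linalg.lstsq(X, port, rcond=None)[0]
print("monthly alpha:", round(alpha, 4))   # a negative alpha indicates long-run underperformance
```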

17.
This paper considers nonparametric identification of nonlinear dynamic models for panel data with unobserved covariates. Including such unobserved covariates may control for both the individual-specific unobserved heterogeneity and the endogeneity of the explanatory variables. Without specifying the distribution of the initial condition with the unobserved variables, we show that the models are nonparametrically identified from two periods of the dependent variable Yit and three periods of the covariate Xit. The main identifying assumptions include high-level injectivity restrictions and require that the evolution of the observed covariates depends on the unobserved covariates but not on the lagged dependent variable. We also propose a sieve maximum likelihood estimator (MLE) and focus on two classes of nonlinear dynamic panel data models, i.e., dynamic discrete choice models and dynamic censored models. We present the asymptotic properties of the sieve MLE and investigate the finite sample properties of these sieve-based estimators through a Monte Carlo study. An intertemporal female labor force participation model is estimated as an empirical illustration using a sample from the Panel Study of Income Dynamics (PSID).

18.
Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high-dimensional covariates by low-dimensional linear combinations is performed with minimal assumptions on the specific regression form, it enjoys attractive advantages as well as unique challenges in comparison with the variable selection approach. We review the current literature on dimension reduction with an emphasis on the two most popular models, in which the dimension reduction targets the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures at different levels of detail, with the intention of focusing on the underlying ideas rather than on technicalities. We also discuss some unsolved problems in this area for potential future research.
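A sketch of one classical sufficient dimension reduction estimator, sliced inverse regression (SIR): standardize the covariates, average them within slices of the response, and take leading eigenvectors of the covariance of those slice means as the estimated linear combinations. The data, number of slices, and true direction are simulated assumptions for illustration; the review above covers many more estimators.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 1000, 6
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0])      # true direction (assumed)
y = np.sin(X @ beta) + 0.1 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
L_inv = np.linalg.inv(np.linalg.cholesky(np.cov(Xc, rowvar=False)))
Z = Xc @ L_inv.T                                       # standardized covariates

order = np.argsort(y)
slices = np.array_split(order, 10)                     # 10 slices of the response
M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0)) for s in slices)
eigvals, eigvecs = np.linalg.eigh(M)
direction = L_inv.T @ eigvecs[:, -1]                   # leading SDR direction (up to scale)
print(np.round(direction / np.abs(direction).max(), 2))
```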

19.
For a GARCH-type volatility model with covariates, we derive asymptotically valid forecast intervals for risk measures, such as the Value-at-Risk or Expected Shortfall. To forecast these, we use estimators from extreme value theory. In the volatility model, we allow for leverage effects and the inclusion of exogenous variables, e.g., volatility indices or high-frequency volatility measures. In simulations, we find coverage of the forecast intervals to be adequate for sufficiently extreme risk levels and sufficiently large samples, which is consistent with theory. Finally, we investigate whether covariate information from volatility indices or high-frequency data improves risk forecasts for major US stock indices. While—in our framework—volatility indices appear to be helpful in this regard, intra-day data are not.
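A minimal sketch of the two-step logic behind such forecasts: filter returns with a GARCH-type recursion, apply an extreme value estimator (here a simple Hill/Weissman estimator) to the standardized residuals, and scale the resulting extreme quantile by the volatility forecast to obtain a Value-at-Risk point forecast. The data-generating process, tail fraction, and risk level are illustrative assumptions; the paper's contribution is the forecast intervals around such point forecasts.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3000
omega, alpha, beta = 0.05, 0.08, 0.9
sigma2 = np.full(n, omega / (1 - alpha - beta))
ret = np.zeros(n)
for t in range(1, n):                                   # simulate a GARCH(1,1) with t(5) shocks
    sigma2[t] = omega + alpha * ret[t - 1] ** 2 + beta * sigma2[t - 1]
    ret[t] = np.sqrt(sigma2[t]) * rng.standard_t(df=5)

z = ret / np.sqrt(sigma2)                               # standardized residuals
losses = np.sort(-z)[::-1]                              # left-tail losses, descending
k = 150                                                 # number of tail observations (tuning choice)
hill = np.mean(np.log(losses[:k]) - np.log(losses[k]))  # Hill tail-index estimate
q = losses[k] * ((k / (n * 0.01)) ** hill)              # 99% quantile of the standardized loss

sigma_next = np.sqrt(omega + alpha * ret[-1] ** 2 + beta * sigma2[-1])
print("1-day 99% VaR point forecast:", round(sigma_next * q, 3))
```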

20.
A regression discontinuity (RD) research design is appropriate for program evaluation problems in which treatment status (or the probability of treatment) depends on whether an observed covariate exceeds a fixed threshold. In many applications the treatment-determining covariate is discrete. This makes it impossible to compare outcomes for observations “just above” and “just below” the treatment threshold, and requires the researcher to choose a functional form for the relationship between the treatment variable and the outcomes of interest. We propose a simple econometric procedure to account for uncertainty in the choice of functional form for RD designs with discrete support. In particular, we model deviations of the true regression function from a given approximating function—the specification errors—as random. Conventional standard errors ignore the group structure induced by specification errors and tend to overstate the precision of the estimated program impacts. The proposed inference procedure that allows for specification error also has a natural interpretation within a Bayesian framework.
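A sketch of the inference problem: with a discrete running variable, observations sharing a covariate value share the same specification error, so the variance should account for grouping at the covariate-value level. Below, a conventional OLS variance is compared with a variance clustered on the running variable; the data-generating process and linear control are illustrative assumptions, and clustering is shown here as one simple way to respect the group structure rather than as the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000
x = rng.integers(-10, 10, n)                               # discrete running variable
treat = (x >= 0).astype(float)
group_shock = {v: rng.normal(0, 0.5) for v in np.unique(x)}  # specification error shared within x
y = 1.0 * treat + 0.1 * x + np.array([group_shock[v] for v in x]) + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), treat, x])                # approximating function: linear in x
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
u = y - X @ beta

v_conv = XtX_inv * (u @ u / (n - X.shape[1]))              # conventional homoskedastic variance
meat = np.zeros((3, 3))
for v in np.unique(x):                                     # cluster on the running-variable value
    Xg, ug = X[x == v], u[x == v]
    s = Xg.T @ ug
    meat += np.outer(s, s)
v_clust = XtX_inv @ meat @ XtX_inv

print("SE(treatment), conventional:", round(np.sqrt(v_conv[1, 1]), 4))
print("SE(treatment), clustered on x:", round(np.sqrt(v_clust[1, 1]), 4))
```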
