Similar Documents
20 similar documents found.
1.
This paper investigates how the effect of income while unemployed on the probability of an individual leaving unemployment varies with the length of time that the individual has been unemployed. We examine this question in the context of a variety of alternative econometric models. We extend the proportional hazards model with an unrestricted baseline hazard to one in which a subset of the explanatory variables also has unrestricted effects, and we further consider models that can be estimated as a sequence of binary response models. The proportional hazards restrictions are rejected for the sample of British unemployed men analysed, and in the binary-sequence framework logit and probit models based on symmetric distributions dominate (in likelihood terms) the extreme-value model implied by the extension of the proportional hazards formulation. Logit models with a flexible form for the duration dependence, which also incorporate unobserved heterogeneity in a flexible way, are estimated. The results for all formulations indicate a rapidly declining effect of unemployment income as a spell lengthens, with no significant effect for the long-term unemployed. The preferred specifications, which allow for omitted heterogeneity, indicate no significant effect after about five months, and this result is robust to the inclusion or exclusion of previous labour-market experience variables and to the choice of mixing distribution.
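The following is an illustrative sketch (not taken from the paper) of the binary-sequence approach described above: each unemployment spell is expanded into one record per month at risk, and a logit with month dummies and a duration-band-by-income interaction mimics flexible duration dependence with a duration-varying income effect. The variable names and the synthetic data are assumptions made for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic spell data (illustrative only): the exit hazard falls with
# unemployment income; spells are censored after 8 months.
n = 500
income = rng.normal(0, 1, n)
hazard = np.clip(0.25 * np.exp(-0.3 * income), 0.05, 0.9)
raw = rng.geometric(hazard)                  # month of exit if uncensored
dur = np.minimum(raw, 8)                     # observed months at risk
exited = (raw <= 8).astype(int)              # 1 if the exit is observed

# Expand each spell into one record per month at risk (person-period format).
rows = [{"id": i, "month": t, "income": income[i],
         "exit": int(exited[i] and t == dur[i])}
        for i in range(n) for t in range(1, dur[i] + 1)]
pp = pd.DataFrame(rows)

# Discrete-time hazard as a sequence of binary responses: logit of exit on
# month dummies (flexible duration dependence) plus an income effect that is
# allowed to differ across duration bands.
pp["band"] = pd.cut(pp["month"], bins=[0, 3, 6, 8], labels=["1-3", "4-6", "7-8"])
fit = smf.logit("exit ~ C(month) + C(band):income", data=pp).fit(disp=0)
print(fit.params)
```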

2.
Capture–recapture methods aim to estimate the size of an elusive target population. Each member of the target population carries a count of identifications by some identifying mechanism, namely the number of times it has been identified during the observational period. Only positive counts are observed, and inference needs to be based on the observed count distribution. A widely used assumption for the count distribution is a Poisson mixture. If the mixing distribution can be described by an exponential density, the geometric distribution arises as the marginal. This note discusses population size estimation on the basis of the zero-truncated geometric (itself again a geometric). In addition, population heterogeneity is considered for the geometric. Chao's estimator is developed for the mixture of geometric distributions and provides a lower-bound estimator which is valid under arbitrary mixing on the parameter of the geometric. However, Chao's estimator is also known for its relatively large variance (compared to the maximum likelihood estimator). Another estimator, based on a censored geometric likelihood, is suggested which uses the entire sample information but is less affected by model misspecification. Simulation studies illustrate that the proposed censored estimator represents a good compromise between the maximum likelihood estimator and Chao's estimator, i.e. between efficiency and bias.
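As a rough illustration of the estimators discussed above, the sketch below computes, from zero-truncated counts, the maximum likelihood estimate of the population size under a homogeneous geometric and a Chao-type lower bound for geometric mixtures; the censored-likelihood estimator proposed in the note is not reproduced, and the frequencies used are hypothetical.

```python
import numpy as np

def geometric_population_estimates(counts):
    """Population-size estimates from zero-truncated count data.

    counts : array of positive identification counts, one entry per
             observed unit; zero counts are unobserved by construction.
    """
    counts = np.asarray(counts)
    n = len(counts)                      # number of distinct units observed
    f1 = np.sum(counts == 1)             # units identified exactly once
    f2 = np.sum(counts == 2)             # units identified exactly twice

    # MLE under a homogeneous geometric: P(X = 0) = p and the zero-truncated
    # mean is 1/p, so p_hat = 1/mean and N_hat = n / (1 - p_hat).
    p_hat = 1.0 / counts.mean()
    n_mle = n / (1.0 - p_hat)

    # Chao-type lower bound under arbitrary mixing on the geometric parameter:
    # the unobserved frequency f0 is bounded below by f1^2 / f2 (the geometric
    # analogue of Chao's f1^2 / (2 f2) for the Poisson case).
    n_chao = n + f1**2 / f2 if f2 > 0 else np.nan
    return n_mle, n_chao

# Tiny example with hypothetical frequencies: 60 singletons, 25 doubletons, ...
counts = np.repeat([1, 2, 3, 4], [60, 25, 10, 5])
print(geometric_population_estimates(counts))
```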

3.
S. H. Ong, Metrika, 1996, 43(1): 221–235
This paper considers a class of distributions which may be regarded as the convolution of a negative binomial random variable and a stopped-sum generalized hypergeometric factorial-moment random variable. Some properties are derived, and it is shown that this class of distributions is a subset of the distributions for the birth-and-death process with immigration (also a reversible counter system). Formulations by mixing, limiting distributions and maximum likelihood equations are also discussed.

4.
Mixing of sand layers on top of peat soils is achieved by soil improvement machines. The result of the mixing process is studied by taking samples from a vertical cross-section of the profile. The samples will have various values of the sand-to-peat volume ratio, and these values can be plotted as empirical cumulative frequency distributions. The distributions for two types of machines were found to be quite different owing to differences in mixing intensity. A method is proposed for evaluating the mixing effect, expressed in the parameters of theoretical probability distributions of two-component soil samples. The probability distribution of the volume ratios in the samples is derived and its properties are discussed. Moment estimators of the parameters are derived. The theoretical distributions are compared with two experimental results obtained with a mixing rooter and a rotary mixer, respectively.

5.
The interplay between the Bayesian and Frequentist approaches: a general nesting spatial panel-data model. Spatial Economic Analysis. An econometric framework mixing the Frequentist and Bayesian approaches is proposed in order to estimate a general nesting spatial model. First, it avoids specific dependency structures between unobserved heterogeneity and regressors, which improves the mixing properties of Markov chain Monte Carlo (MCMC) procedures in the presence of unobserved heterogeneity. Second, it allows model selection based on a strong statistical framework; these are characteristics that are not easily introduced using a Frequentist approach. We perform some simulation exercises, finding that our approach performs well, and apply the methodology to analyse the relation between productivity and public investment in the United States.

6.
7.
Estimation of the one-sided error component in stochastic frontier models may erroneously attribute firm characteristics to inefficiency if heterogeneity is unaccounted for. However, unobserved inefficiency heterogeneity has been little explored. In this work, we propose to capture it through a random parameter which may affect the location, the scale, or both parameters of a truncated normal inefficiency distribution, using a Bayesian approach. Our findings, based on two real data sets, suggest that the inclusion of a random parameter in the inefficiency distribution is able to capture latent heterogeneity and can be used to validate the suitability of observed covariates for distinguishing heterogeneity from inefficiency. Relevant effects are also found in separating and shrinking individual posterior efficiency distributions when heterogeneity affects the location and scale parameters of the one-sided error distribution, which consequently affects the estimated mean efficiency scores and rankings. In particular, including heterogeneity simultaneously in both parameters of the inefficiency distribution in models that satisfy the scaling property leads to a decrease in the uncertainty around the mean scores and less overlapping of the posterior efficiency distributions, which provides both more reliable efficiency scores and more reliable rankings.

8.
We study the profit persistence literature by applying meta-regression analysis (MRA) to a set of 36 empirical papers which analyze the persistence of abnormal firm profits over time. The analyzed literature provides evidence for a moderate degree of persistence in abnormal profits. An initial analysis of the distribution of reported profit persistence estimates reveals some degree of excess variation. This points toward publication bias that favors significant results independent of their algebraic sign. The MRA, however, reveals that publication bias particularly favors results that indicate profit persistence and thus contradict the neoclassical model of perfect competition. Moreover, the MRA makes it possible to control for heterogeneity driven by study design. We find that the country analyzed (developing vs. developed), the econometric approach applied, and the period of time analyzed are significant drivers of heterogeneity in the reported persistence estimates.
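A minimal sketch of the meta-regression logic (not the paper's actual specification or data): a precision-weighted FAT-PET regression of reported estimates on their standard errors, with one hypothetical study-design moderator. All column names and the simulated meta-data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical meta-data set: one row per reported persistence estimate,
# with its standard error and a study-design moderator.
k = 200
se = rng.uniform(0.02, 0.20, k)
developing = rng.integers(0, 2, k)
estimate = 0.35 + 0.10 * developing + 1.5 * se + rng.normal(0, se)

df = pd.DataFrame({"estimate": estimate, "se": se, "developing": developing})

# FAT-PET: regress estimates on their standard errors, weighting by precision.
# A significant coefficient on `se` signals funnel asymmetry (publication
# bias); the intercept is the bias-corrected effect. Moderators such as the
# country group capture study-design heterogeneity.
X = sm.add_constant(df[["se", "developing"]])
mra = sm.WLS(df["estimate"], X, weights=1.0 / df["se"] ** 2).fit(cov_type="HC1")
print(mra.summary())
```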

9.
When modeling demand for differentiated products, it is vital to adequately capture consumer taste heterogeneity, but there is no clearly preferred approach. Here, we compare the performance of six alternative models. Currently, the most popular are mixed logit (MIXL), particularly the version with normal mixing (N-MIXL), and latent class (LC), which assumes discrete consumer types. Recently, several alternative models have been developed. The 'generalized multinomial logit' (G-MNL) extends N-MIXL by allowing for heterogeneity in the logit scale coefficient. Scale-heterogeneity logit (S-MNL) is a special case of G-MNL with scale heterogeneity only. The 'mixed-mixed' logit (MM-MNL) assumes a discrete mixture-of-normals heterogeneity distribution. Finally, one can modify N-MIXL by imposing theoretical sign constraints on vertical attributes; we call this 'T-MIXL'. We find that none of these models dominates the others, but G-MNL, MM-MNL and T-MIXL typically outperform the popular N-MIXL and LC models. Copyright © 2012 John Wiley & Sons, Ltd.

10.
In this paper, we introduce a new Poisson mixture model for count panel data where the underlying Poisson process intensity is determined endogenously by consumer latent utility maximization over a set of choice alternatives. This formulation accommodates the choice and count in a single random utility framework with desirable theoretical properties. Individual heterogeneity is introduced through a random coefficient scheme with a flexible semiparametric distribution. We deal with the analytical intractability of the resulting mixture by recasting the model as an embedding of infinite sequences of scaled moments of the mixing distribution, and newly derive their cumulant representations along with bounds on their rate of numerical convergence. We further develop an efficient recursive algorithm for fast evaluation of the model likelihood within a Bayesian Gibbs sampling scheme. We apply our model to a recent household panel of supermarket visit counts. We estimate the nonparametric density of three key variables of interest (price, driving distance, and their interaction) while controlling for a range of consumer demographic characteristics. We use this econometric framework to assess the opportunity cost of time and analyze the interaction between store choice, trip frequency, search intensity, and household and store characteristics. We also conduct a counterfactual welfare experiment and compute the compensating variation for a 10–30% increase in Walmart prices.

11.
In this paper, an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing distribution is discrete, resulting in a finite mixture model formulation. An EM algorithm for estimation is described, and the algorithm is applied to data on customer purchases of books offered through direct mail. Our model is compared empirically to a number of other approaches that deal with heterogeneity in Poisson regression models.
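The sketch below illustrates the kind of EM algorithm described, for a two-component finite mixture of Poisson regressions in which both the intercept and the slope differ across latent classes; the data are synthetic, not the direct-mail book-purchase data analysed in the paper.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

rng = np.random.default_rng(2)

# Synthetic data: two latent segments with different Poisson regressions.
n = 1000
x = rng.normal(size=n)
z = rng.integers(0, 2, n)
beta_true = np.array([[0.2, 0.8], [1.2, -0.5]])        # per-class (intercept, slope)
y = rng.poisson(np.exp(beta_true[z, 0] + beta_true[z, 1] * x))
X = np.column_stack([np.ones(n), x])

def poisson_loglik(beta, X, y):
    eta = X @ beta
    return y * eta - np.exp(eta) - gammaln(y + 1)

# EM for a K-component mixture of Poisson regressions.
K = 2
beta = rng.normal(scale=0.5, size=(K, 2))               # spread-out random start
pi = np.full(K, 1.0 / K)
for _ in range(200):
    # E-step: posterior class probabilities for each observation.
    logp = np.stack([np.log(pi[k]) + poisson_loglik(beta[k], X, y)
                     for k in range(K)], axis=1)
    resp = np.exp(logp - logsumexp(logp, axis=1, keepdims=True))
    # M-step: weighted Poisson regressions (a few Newton steps) and new weights.
    pi = resp.mean(axis=0)
    for k in range(K):
        for _ in range(5):
            mu = np.exp(X @ beta[k])
            grad = X.T @ (resp[:, k] * (y - mu))
            hess = (X * (resp[:, k] * mu)[:, None]).T @ X
            beta[k] += np.linalg.solve(hess, grad)

print("mixing weights:", pi)
print("class coefficients:\n", beta)
```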

12.
We discuss the problem of constructing a suitable regression model from a nonparametric Bayesian viewpoint. For this purpose, we consider the case when the error terms have symmetric and unimodal densities. By the theorem of Khintchine and Shepp, the density of the response variable can then be written as a scale mixture of uniform densities. The mixing distribution is assumed to have a Dirichlet process prior. We further consider appropriate prior distributions for the other parameters as components of the predictive device. Among the possible submodels, we select the one with the highest posterior probability. An example is given to illustrate the approach.

13.
We propose a method to decompose the changes in the wage distribution over a period of time into several factors contributing to those changes. The method is based on the estimation of marginal wage distributions consistent with a conditional distribution estimated by quantile regression as well as with any hypothesized distribution for the covariates. By comparing the marginal distributions implied by different distributions for the covariates, one is then able to perform counterfactual exercises. The proposed methodology enables the identification of the sources of the increased wage inequality observed in most countries, namely by discriminating between changes in the characteristics of the working population and changes in the returns to these characteristics. We apply this methodology to Portuguese data for the period 1986–1995 and find that the observed increase in educational levels contributed decisively towards greater wage inequality. Copyright © 2005 John Wiley & Sons, Ltd.
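A compact sketch of the counterfactual idea, in the spirit of Machado–Mata-type decompositions and on synthetic rather than Portuguese data: fit quantile regressions on a grid of quantiles for the later period, then generate marginal wage distributions under the actual and a counterfactual covariate distribution and compare dispersion measures. Variable names and magnitudes are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

def simulate_period(n, educ_mean, ret_educ):
    educ = rng.normal(educ_mean, 2.0, n)
    lwage = 1.0 + ret_educ * educ + rng.normal(0, 0.4, n) * (1 + 0.05 * educ)
    return pd.DataFrame({"lwage": lwage, "educ": educ})

d86 = simulate_period(2000, educ_mean=6.0, ret_educ=0.06)   # "1986"-like data
d95 = simulate_period(2000, educ_mean=8.0, ret_educ=0.09)   # "1995"-like data

# Step 1: estimate the conditional wage distribution in the later period by
# quantile regression on a grid of randomly drawn quantiles.
taus = rng.uniform(0.01, 0.99, 200)
coefs = [smf.quantreg("lwage ~ educ", d95).fit(q=t).params for t in taus]

# Step 2: marginal wage distributions implied by different covariate
# distributions: actual later-period covariates vs. counterfactual earlier ones.
def marginal(covs, coefs):
    draws = covs.sample(len(coefs), replace=True, random_state=0).reset_index(drop=True)
    return np.array([c["Intercept"] + c["educ"] * e
                     for c, e in zip(coefs, draws["educ"])])

actual = marginal(d95, coefs)
counterfactual = marginal(d86, coefs)       # later-period returns, earlier characteristics

# Comparing dispersion isolates the contribution of changing characteristics.
print("90-10 gap, actual:        ", np.percentile(actual, 90) - np.percentile(actual, 10))
print("90-10 gap, counterfactual:", np.percentile(counterfactual, 90) - np.percentile(counterfactual, 10))
```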

14.
The paper concerns the study of equilibrium points, or steady states, of economic systems arising in modeling optimal investment with vintage capital, namely systems where all key variables (capitals, investments, prices) are indexed not only by time but also by age. Capital accumulation is hence described by a partial differential equation (briefly, PDE), and equilibrium points are in fact equilibrium distributions in the age variable. A general method is developed to compute and study equilibrium points of a wide range of infinite-dimensional, infinite-horizon optimal control problems. We apply the method to optimal investment with vintage capital, for a variety of data, deriving existence and uniqueness of the equilibrium distribution as well as analytic formulas for optimal controls and trajectories in the long run. The examples suggest that the same method can be applied to other economic problems displaying heterogeneity. This shows how effective the theoretical machinery of optimal control in infinite dimension is in explicitly computing equilibrium distributions. In this respect, the results of this work constitute a first crucial step towards a thorough understanding of the behavior of optimal paths in the long run.

15.
[Ten Raa, 1984] has shown how arithmetic ideas carry over to distributions over space and can be used to solve open, static spatial problems such as the determination of urban equilibrium. This article extends the approach to dynamic spatial economics by tracing spatial distributions through time. It is shown that the basic ideas of ordinary differential equations carry over to the present context, provided that the 'functions' take spatial distributions as their values. The resulting differential equations for the distributions are solved. The spatial trade cycle model of [Puu, 1982] falls out as a special case, and its associated initial value problem can now be completely solved.

16.
Although each statistical unit on which measurements are taken is unique, typically there is not enough information available to account fully for its uniqueness. Therefore, heterogeneity among units has to be limited by structural assumptions. One classical approach is to use random effects models, which assume that heterogeneity can be described by distributional assumptions. However, inference may depend on the assumed mixing distribution, and it is assumed that the random effects and the observed covariates are independent. An alternative considered here is fixed effects models, which let each unit have its own parameter. They are quite flexible but suffer from the large number of parameters. The structural assumption made here is that there are clusters of units that share the same effects. It is shown how clusters can be identified by tailored regularised estimators. Moreover, it is shown that the regularised estimates compete well with estimates for the random effects model, even if the latter is the data-generating model. They dominate if clusters are present.

17.
The measurement of market risk poses major challenges to researchers and different economic agents. On the one hand, it is by now widely recognized that risk varies over time. On the other hand, the risk profile of an investor, in terms of investment horizon, makes it crucial to also assess risk at the frequency level. We propose a novel approach to measuring market risk based on the continuous wavelet transform. Risk is allowed to vary both through time and at the frequency level within a unified framework. In particular, we derive the wavelet counterparts of well-known measures of risk. One is thereby able to assess total risk, systematic risk and the importance of systematic risk to total risk in the time–frequency space. To illustrate the method we consider the case of emerging markets over the last twenty years, finding noteworthy heterogeneity across frequencies and over time, which highlights the usefulness of the wavelet approach.
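The paper works with the continuous wavelet transform; as a simplified stand-in, the sketch below uses a discrete stationary wavelet transform (PyWavelets) to show how variance, covariance and a beta-type measure of systematic risk can be computed scale by scale. The return series, exposures and wavelet choice are synthetic assumptions, not the paper's data or exact measures.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)

# Synthetic daily returns: a "market" factor and an asset whose exposure to
# the market differs between slow and fast fluctuations (purely illustrative).
T = 1024
slow = np.cumsum(rng.normal(0, 0.02, T)); slow -= slow.mean()
fast = rng.normal(0, 1.0, T)
market = 0.3 * slow + fast
asset = 1.5 * slow + 0.5 * fast + rng.normal(0, 0.5, T)

# Stationary (undecimated) wavelet transform; level j captures fluctuations
# at horizons of roughly 2**j to 2**(j+1) periods. The returned list is
# ordered from the coarsest level down to the finest.
levels = 5
mkt_coeffs = pywt.swt(market, "db4", level=levels)
ast_coeffs = pywt.swt(asset, "db4", level=levels)

for j, (_, d_m) in enumerate(mkt_coeffs):
    d_a = ast_coeffs[j][1]                        # asset detail coefficients
    var_m = np.var(d_m)                           # scale-wise market risk
    beta_j = np.cov(d_a, d_m)[0, 1] / var_m       # scale-wise systematic exposure
    share_j = beta_j**2 * var_m / np.var(d_a)     # systematic share of asset risk
    print(f"level {levels - j}: beta = {beta_j:5.2f}, systematic share = {share_j:4.2f}")
```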

18.
We consider a semiparametric cointegrating regression model in which the disequilibrium error is further explained nonparametrically by a functional of distributions changing over time. The paper develops the statistical theory of the model. We propose an efficient econometric estimator and obtain its asymptotic distribution. A specification test for the model is also investigated. The model and methodology are applied to analyze how an aging population in the US influences the consumption level and the savings rate. We find that the impact of the age distribution on the consumption level and the savings rate is consistent with the life-cycle hypothesis.

19.
This paper investigates the cost efficiency of Russian banks with regard to their heterogeneity in terms of ownership form, capitalization and asset structure. Using bank-level quarterly data over the period 2005–2013, we perform stochastic frontier analysis (SFA) and compute cost efficiency scores at the bank and bank-group levels. We deduct from gross costs the negative revaluations of foreign currency items generated by official exchange rate dynamics rather than by managerial decisions. The results indicate that the core state banks, as distinct from other state-controlled banks, were nearly as efficient as private domestic banks during and after the crisis of 2008–2009. Foreign banks appear to be the least efficient market participants in terms of costs, which might reflect their lower (and decreasing over time) penetration of the Russian banking system. We further document that the group ranking by cost efficiency is not permanent over time and depends on the observed differences in bank capitalization and asset structure. We find that foreign banks gain cost efficiency when they lend more to the economy. Core state banks, conversely, lead in terms of cost efficiency when they lend less to the economy, which can result from political interference in their lending decisions in favor of unprofitable projects. Private domestic banks that maintain a lower capitalization significantly outperform foreign banks and do not differ from the core state banks in this respect.
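As a rough illustration of the SFA machinery (not the paper's panel specification or bank data), the sketch below fits a cross-sectional normal/half-normal stochastic cost frontier by maximum likelihood and computes JLMS-type cost efficiency scores on synthetic data. All variable names and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Synthetic bank data: log cost = frontier(log output) + noise v + inefficiency u.
n = 400
lny = rng.normal(0, 1, n)                      # log output (illustrative)
v = rng.normal(0, 0.15, n)                     # symmetric noise
u = np.abs(rng.normal(0, 0.25, n))             # one-sided inefficiency (half-normal)
lnc = 1.0 + 0.8 * lny + v + u                  # log cost
X = np.column_stack([np.ones(n), lny])

def negloglik(theta):
    # Normal/half-normal cost frontier: eps = v + u, so the skew term enters
    # with a positive sign (it would be negative for a production frontier).
    beta, s_v, s_u = theta[:2], np.exp(theta[2]), np.exp(theta[3])
    sigma = np.sqrt(s_v**2 + s_u**2)
    lam = s_u / s_v
    eps = lnc - X @ beta
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(eps * lam / sigma))
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.0, np.log(0.1), np.log(0.1)], method="BFGS")
beta_hat = res.x[:2]
s_v, s_u = np.exp(res.x[2]), np.exp(res.x[3])

# JLMS point estimates of inefficiency, E[u | eps], and cost efficiency scores.
sigma2 = s_v**2 + s_u**2
eps = lnc - X @ beta_hat
mu_star = eps * s_u**2 / sigma2
s_star = s_v * s_u / np.sqrt(sigma2)
u_hat = mu_star + s_star * norm.pdf(mu_star / s_star) / norm.cdf(mu_star / s_star)
print("frontier coefficients:", beta_hat)
print("mean cost efficiency: ", np.exp(-u_hat).mean())
```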

20.
We estimate the union premium for young men over a period of declining unionization (1980–87) through a procedure which identifies the alternative sources of the endogeneity of union status. While we estimate the average increase in wages resulting from union employment to be in excess of 20%, we find that the return to unobserved heterogeneity operating through union status is substantial and that the union premium is highly variable. We also find that the premium is sensitive to the form of sorting allowed in estimation. Moreover, the data are consistent with comparative-advantage sorting. Our results suggest that the unobserved heterogeneity which positively contributes to the likelihood of union membership is associated with higher wages. We are unable, however, to determine whether this is due to the ability of these workers to extract monopoly rents or whether it reflects the more demanding hiring standards of employers faced with union wages. © 1998 John Wiley & Sons, Ltd.

