Similar literature
20 similar records found (search time: 0 ms)
1.
This paper applies the theoretical literature on nonparametric bounds on treatment effects to the estimation of how limited English proficiency (LEP) affects wages and employment opportunities for Hispanic workers in the United States. I analyse the identifying power of several weak assumptions on treatment response and selection, and stress the interactions between LEP and education, occupation and immigration status. I show that the combination of two weak but credible assumptions provides informative upper bounds on the returns to language skills for certain subgroups of the population. Adding age at arrival as a monotone instrumental variable also provides informative lower bounds. Copyright © 2005 John Wiley & Sons, Ltd.
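The no-assumption ("worst-case") bounds that such analyses start from are easy to compute directly. The sketch below is a minimal illustration for a bounded outcome, not the paper's LEP-specific estimator; the data and the outcome range `[y_lo, y_hi]` are hypothetical:

```python
def worst_case_bounds(y, d, y_lo, y_hi):
    """Worst-case nonparametric bounds on the average treatment effect
    E[Y(1)] - E[Y(0)] when the outcome is known to lie in [y_lo, y_hi].
    y: observed outcomes; d: treatment indicators (0/1)."""
    n = len(y)
    n1 = sum(d)
    p1 = n1 / n            # P(D = 1)
    p0 = 1 - p1            # P(D = 0)
    mean1 = sum(yi for yi, di in zip(y, d) if di) / n1
    mean0 = sum(yi for yi, di in zip(y, d) if not di) / (n - n1)
    # E[Y(1)] is identified for the treated; bounded by [y_lo, y_hi] for the rest.
    ey1 = (mean1 * p1 + y_lo * p0, mean1 * p1 + y_hi * p0)
    ey0 = (mean0 * p0 + y_lo * p1, mean0 * p0 + y_hi * p1)
    return ey1[0] - ey0[1], ey1[1] - ey0[0]
```

The width of the resulting interval is exactly `y_hi - y_lo`, which is why credible extra assumptions (monotone treatment response, monotone instrumental variables) are needed to tighten the bounds.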

2.
We provide a partial ordering view of horizontal inequity (HI), based on the Lorenz criterion, associated with different post‐tax income distributions and a (bistochastic) non‐parametric estimated benchmark distribution. As a consequence, several measures consistent with the Lorenz criterion can be rationalized. In addition, we establish the so‐called HI transfer principle, which imposes a normative minimum requirement that any HI measure must satisfy. Our proposed HI ordering is consistent with this principle. Moreover, we adopt a cardinal view to decompose the total effect of a tax system into a welfare gain caused by HI‐free income redistribution and a welfare loss caused by HI, without any additive decomposable restriction on the indices. Hence, more robust tests can be applied. Other decompositions in the literature are seen as particular cases.

3.
A rich theory of production and analysis of productive efficiency has developed since the pioneering work by Tjalling C. Koopmans and Gerard Debreu. Michael J. Farrell published the first empirical study, and it appeared in a statistical journal (Journal of the Royal Statistical Society), even though the article provided no statistical theory. The literature in econometrics, management sciences, operations research and mathematical statistics has since been enriched by hundreds of papers trying to develop or implement new tools for analysing productivity and efficiency of firms. Both parametric and non‐parametric approaches have been proposed. The mathematical challenge is to derive estimators of production, cost, revenue or profit frontiers, which represent, in the case of production frontiers, the optimal loci of combinations of inputs (like labour, energy and capital) and outputs (the products or services produced by the firms). Optimality is defined in terms of various economic considerations. Then the efficiency of a particular unit is measured by its distance to the estimated frontier. The statistical problem can be viewed as the problem of estimating the support of a multivariate random variable, subject to some shape constraints, in multiple dimensions. These techniques are applied in thousands of papers in the economic and business literature. This ‘guided tour’ reviews the development of various non‐parametric approaches since the early work of Farrell. Remaining challenges and open issues in this active arena are also described. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
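One of the simplest non-parametric frontier estimators of the kind this tour covers is the Free Disposal Hull (FDH), which imposes only free disposability of inputs and outputs. A minimal single-output sketch (the unit data are hypothetical; real applications are multi-output and add statistical inference):

```python
def fdh_output_efficiency(x, y):
    """Output-oriented Free Disposal Hull efficiency scores.
    x: list of input vectors (tuples); y: list of scalar outputs.
    A unit's score is the largest output achievable by any observed unit
    using no more of every input, divided by its own output.
    Score >= 1, with 1 meaning the unit is FDH-efficient."""
    scores = []
    for xi, yi in zip(x, y):
        attainable = max(
            yj for xj, yj in zip(x, y)
            if all(a <= b for a, b in zip(xj, xi))  # xj dominated by xi's inputs
        )
        scores.append(attainable / yi)
    return scores
```

Here a unit's distance to the estimated frontier is its score: unit 3 below produces 2 with the same input as a unit producing 6, so its score is 3.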

4.
Datasets examining periodontal disease record current (disease) status information of tooth‐sites, whose stochastic behavior can be attributed to a multistate system with state occupation determined at a single inspection time. In addition, the tooth‐sites remain clustered within a subject, and the number of available tooth‐sites may be representative of the true periodontal disease status of that subject, leading to an ‘informative cluster size’ scenario. To provide insulation against incorrect model assumptions, we propose a non‐parametric regression framework to estimate state occupation probabilities at a given time and state exit/entry distributions, utilizing weighted monotonic regression and smoothing techniques. We demonstrate the superior performance of our proposed weighted estimators over the unweighted counterparts via a simulation study and illustrate the methodology using a dataset on periodontal disease.

5.
Multiple event data are frequently encountered in medical follow‐up, engineering and other applications when the multiple events are considered as the major outcomes. They may be repetitions of the same event (recurrent events) or may be events of different nature. Times between successive events (gap times) are often of direct interest in these applications. The stochastic‐ordering structure and within‐subject dependence of multiple events generate statistical challenges for analysing such data, including induced dependent censoring and non‐identifiability of marginal distributions. This paper provides an overview of a class of existing non‐parametric estimation methods for gap time distributions for various types of multiple event data, where sampling bias from induced dependent censoring is effectively adjusted. We discuss the statistical issues in gap time analysis, describe the estimation procedures and illustrate the methods with a comparative simulation study and a real application to an AIDS clinical trial. A comprehensive understanding of the challenges and available methods for non‐parametric analysis is useful because there is no existing standard approach to identifying an appropriate gap time method for the research question of interest. The methods discussed in this review would allow practitioners to effectively handle a variety of real‐world multiple event data.

6.
Lanne and Saikkonen [Oxford Bulletin of Economics and Statistics (2011a) Vol. 73, pp. 581–592] show that the generalized method of moments (GMM) estimator is inconsistent when the instruments are lags of variables that admit a non‐causal autoregressive representation. This article argues that this inconsistency depends on distributional assumptions that do not always hold. In particular, under rational expectations, the GMM estimator is found to be consistent. This result is derived in a linear context and illustrated by simulation of a nonlinear asset pricing model.

7.
Economic theory does not always specify the functional relationship between dependent and explanatory variables, or even isolate a particular set of covariates. This means that model uncertainty is pervasive in empirical economics. In this paper, we indicate how Bayesian semi‐parametric regression methods in combination with stochastic search variable selection can be used to address two model uncertainties simultaneously: (i) the uncertainty with respect to the variables which should be included in the model and (ii) the uncertainty with respect to the functional form of their effects. The presented approach enables the simultaneous identification of robust linear and nonlinear effects. The additional insights gained are illustrated on applications in empirical economics, namely willingness to pay for housing, and cross‐country growth regression.

8.
This paper is concerned with the forecasting of probability density functions. Density functions are nonnegative and have a constrained integral, and thus do not constitute a vector space. The implementation of established functional time series forecasting methods for such nonlinear data is therefore problematic. Two new methods are developed and compared to two existing methods. The comparison is based on the densities derived from cross-sectional and intraday returns. For such data, one of our new approaches is shown to dominate the existing methods, while the other is comparable to one of the existing approaches.
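One standard device for the vector-space problem the abstract describes (not necessarily either of the paper's two new methods) is a log-ratio transform: map each discretized density to an unconstrained vector, forecast with ordinary linear time-series methods there, and map back. A minimal sketch:

```python
import math

def clr(p):
    """Centered log-ratio transform of a discretized density
    (strictly positive weights summing to 1) into an unconstrained
    vector whose entries sum to zero."""
    logs = [math.log(pi) for pi in p]
    g = sum(logs) / len(logs)        # log of the geometric mean
    return [l - g for l in logs]

def inv_clr(z):
    """Map an unconstrained vector back to a density (softmax),
    inverting clr exactly on its zero-sum image."""
    m = max(z)                       # subtract max for numerical stability
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]
```

Because the transformed vectors live in an ordinary Euclidean space, standard functional or multivariate forecasting machinery applies without violating nonnegativity or the unit-integral constraint on the way back.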

9.
Many asset prices, including exchange rates, exhibit periods of stability punctuated by infrequent, substantial, often one‐sided adjustments. Statistically, this generates empirical distributions of exchange rate changes that exhibit high peaks, long tails, and skewness. This paper introduces a GARCH model, with a flexible parametric error distribution based on the exponential generalized beta (EGB) family of distributions. Applied to daily US dollar exchange rate data for six major currencies, evidence based on a comparison of actual and predicted higher‐order moments and goodness‐of‐fit tests favours the GARCH‐EGB2 model over more conventional GARCH‐t and EGARCH‐t model alternatives, particularly for exchange rate data characterized by skewness. Copyright © 2001 John Wiley & Sons, Ltd.
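The GARCH recursion underlying all the model variants compared here is simple to simulate. The sketch below uses Gaussian innovations purely for brevity; the paper's point is precisely that a flexible EGB2 innovation density (which would replace `rng.gauss` below) fits the peaked, skewed exchange-rate data better:

```python
import math
import random

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate n observations of a GARCH(1,1) process
    r_t = sqrt(h_t) * eps_t,  h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1},
    with Gaussian eps_t (an assumption of this sketch, not the EGB2 errors
    of the paper). Requires alpha + beta < 1 for covariance stationarity."""
    rng = random.Random(seed)
    h = omega / (1 - alpha - beta)   # start at the unconditional variance
    returns, variances = [], []
    for _ in range(n):
        r = math.sqrt(h) * rng.gauss(0.0, 1.0)
        returns.append(r)
        variances.append(h)
        h = omega + alpha * r * r + beta * h
    return returns, variances
```

Even with Gaussian innovations the simulated returns show volatility clustering; the heavier tails and skewness observed in the currency data are what motivate swapping in the EGB2 family.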

10.
11.
This paper analyzes the effect of public R&D subsidies on firms' private R&D investment per employee and new product sales in German manufacturing. Parametric and semiparametric two‐step selection models are applied to this evaluation problem. The results show that the average treatment effect on the treated firms' R&D intensity is positive. The estimated effects are robust with respect to the different selection models. Further results show that publicly induced R&D spending is as productive as private R&D investment in generating new product sales. Copyright © 2008 John Wiley & Sons, Ltd.

12.
I examine the effects of insurance status and managed care on hospitalization spells, and develop a new approach for sample selection problems in parametric duration models. MLE of the Flexible Parametric Selection (FPS) model does not require numerical integration or simulation techniques. I discuss application to the exponential, Weibull, log‐logistic and gamma duration models. Applying the model to the hospitalization data indicates that the FPS model may be preferred even in cases in which other parametric approaches are available. Copyright © 2002 John Wiley & Sons, Ltd.

13.
We propose a new conditionally heteroskedastic factor model, the GICA-GARCH model, which combines independent component analysis (ICA) and multivariate GARCH (MGARCH) models. This model assumes that the data are generated by a set of underlying independent components (ICs) that capture the co-movements among the observations, which are assumed to be conditionally heteroskedastic. The GICA-GARCH model separates the estimation of the ICs from their fitting with a univariate ARMA-GARCH model. Here, we will use two ICA approaches to find the ICs: the first estimates the components, maximizing their non-Gaussianity, while the second exploits the temporal structure of the data. After estimating and identifying the common ICs, we fit a univariate GARCH model to each of them in order to estimate their univariate conditional variances. The GICA-GARCH model then provides a new framework for modelling the multivariate conditional heteroskedasticity in which we can explain and forecast the conditional covariances of the observations by modelling the univariate conditional variances of a few common ICs. We report some simulation experiments to show the ability of ICA to discover leading factors in a multivariate vector of financial data. Finally, we present an empirical application to the Madrid stock market, where we evaluate the forecasting performances of the GICA-GARCH and two additional factor GARCH models: the orthogonal GARCH and the conditionally uncorrelated components GARCH.
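The payoff of such factor-GARCH models is the final reconstruction step: if the observations are x_t = A s_t with conditionally uncorrelated components whose univariate conditional variances are h_t, the conditional covariance of the observations is H_t = A diag(h_t) A'. A dependency-free sketch of that identity (the mixing matrix and variances below are illustrative, not estimates from any data):

```python
def conditional_covariance(A, h):
    """Observation conditional covariance H_t = A diag(h_t) A' implied by
    a factor model x_t = A s_t whose k components are conditionally
    uncorrelated with conditional variances h_t.
    A: n x k mixing matrix (list of rows); h: length-k variance vector."""
    k = len(h)
    n = len(A)
    return [[sum(A[i][m] * h[m] * A[j][m] for m in range(k))
             for j in range(n)]
            for i in range(n)]
```

This is why modelling a few univariate component variances (each with its own GARCH recursion) suffices to forecast the full n x n conditional covariance matrix of the observations.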

14.
Labour Economics (2001), 8(2), 259–289
The paper analyses the rise in Italian unemployment in a small structural VAR (sVAR) based on a Layard–Nickell framework. Unemployment is driven by both fully permanent shocks and long-lived but temporary ones. The unemployment component due to demand shocks appears sizeable, with swings amounting to 4 percentage points, and quite persistent, showing an almost continuous increase since the beginning of the 1980s. Nonetheless, the bulk of the rise in unemployment is attributed to non-demand factors: temporary shocks (to productivity and labour supply) and permanent ones (labelled shocks to wage bargaining). The latter explain a 2.5-point rise during the 1970s but show no further increase since then.

15.
This paper develops a new model for the analysis of stochastic volatility (SV) models. Since volatility is a latent variable in SV models, it is difficult to evaluate the exact likelihood. In this paper, a non-linear filter which yields the exact likelihood of SV models is employed. Solving a series of integrals in this filter by piecewise linear approximations with randomly chosen nodes produces the likelihood, which is maximized to obtain estimates of the SV parameters. A smoothing algorithm for volatility estimation is also constructed. Monte Carlo experiments show that the method performs well with respect to both parameter estimates and volatility estimates. We illustrate our model by analysing daily stock returns on the Tokyo Stock Exchange. Since the method can be applied to more general models, the SV model is extended so that several characteristics of daily stock returns are allowed, and this more general model is also estimated. Copyright © 1999 John Wiley & Sons, Ltd.

16.
Recurrent ‘black swan’ financial events are a major concern for both investors and regulators because of the extreme price changes they cause, despite their very low probability of occurrence. In this paper, we use unconditional and conditional methods, such as the recently proposed high quantile (HQ) extreme value theory (EVT) models of DPOT (Duration-based Peak Over Threshold) and quasi-PORT (peaks over random threshold), to estimate the Value-at-Risk at very small probability values for an adequately long major financial time series, so as to obtain a reasonable number of violations for backtesting. We also compare these models and other alternative strategies through an out-of-sample accuracy investigation to determine their relative performance within the HQ context. Policy implications relevant to the estimation of risk for extreme events are also provided.
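The common core of peaks-over-threshold VaR estimation is: fit a generalized Pareto distribution (GPD) to the losses exceeding a threshold, then invert the fitted tail. The sketch below is a bare-bones static version using a method-of-moments GPD fit (to stay dependency-free; maximum likelihood is more usual, and the DPOT and quasi-PORT models add duration effects and random thresholds on top of this):

```python
import math

def pot_var(losses, u, p):
    """Peaks-over-threshold Value-at-Risk at confidence level p.
    Fits a GPD to the excesses of `losses` over threshold u by the
    method of moments, then inverts the tail estimator:
    VaR_p = u + (sigma/xi) * [((n/N_u)(1-p))^(-xi) - 1]."""
    z = [x - u for x in losses if x > u]      # excesses over the threshold
    n, nu = len(losses), len(z)
    m = sum(z) / nu                           # sample mean of excesses
    s2 = sum((zi - m) ** 2 for zi in z) / (nu - 1)
    # Method-of-moments GPD estimates (valid when the variance exists).
    xi = 0.5 * (1 - m * m / s2)
    sigma = 0.5 * m * (1 + m * m / s2)
    tail = (n / nu) * (1 - p)
    if abs(xi) < 1e-12:                       # exponential-tail limit
        return u - sigma * math.log(tail)
    return u + (sigma / xi) * (tail ** (-xi) - 1)
```

Backtesting then counts how often realized losses violate the estimated VaR, which is why the abstract stresses using a series long enough to observe a reasonable number of violations at such small probabilities.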

17.
We use the regional and time variation of training grants in Italy to identify the causal effect of (formal continuing vocational) training on earnings. We estimate log-linear earnings regressions with constant marginal returns to training and find that one additional week of training increases monthly net earnings by 1.36%, substantially less than the 3% or more often found in the literature. Estimated returns vary significantly by firm size, and range from 0.40% in firms with more than 100 employees to 2.51% in smaller firms, the bulk of the Italian private sector. A simple back-of-the-envelope comparison of the marginal costs and benefits of the training policy suggests that the latter exceed the former.

18.
In a seminal paper, Mak (Journal of the Royal Statistical Society B, 55, 1993, 945) derived an efficient algorithm for solving non‐linear unbiased estimation equations. In this paper, we show that when Mak's algorithm is applied to biased estimation equations, it yields the estimates that would come from solving a bias‐corrected estimation equation, making it a consistent estimator when regularity conditions hold. In addition, the properties that Mak established for his algorithm also apply in the case of biased estimation equations, but for estimates from the bias‐corrected equations. The marginal likelihood estimator is obtained when the approach is applied to both maximum likelihood and least squares estimation of the covariance matrix parameters in the general linear regression model. The approach yields two new estimators when applied to the profile and marginal likelihood functions for estimating the lagged dependent variable coefficient in the dynamic linear regression model. Monte Carlo simulation results show that the new approach leads to a better estimator when applied to the standard profile likelihood. It is therefore recommended for situations in which standard estimators are known to be biased.

19.
Macro‐integration is the process of combining data from several sources at an aggregate level. We review a Bayesian approach to macro‐integration with special emphasis on the inclusion of inequality constraints. In particular, an approximate method of dealing with inequality constraints within the linear macro‐integration framework is proposed. This method is based on a normal approximation to the truncated multivariate normal distribution. The framework is then applied to the integration of international trade statistics and transport statistics. By combining these data sources, transit flows can be derived as differences between specific transport and trade flows. Two methods of imposing the inequality restrictions that transit flows must be non‐negative are compared. Moreover, the figures are improved by imposing the equality constraints that aggregates of incoming and outgoing transit flows must be equal.

20.
This paper follows an alternative approach to identify the wage effects of private‐sector training. The idea is to narrow down the comparison group by only taking into consideration the workers who wanted to participate in training but did not do so because of some random event. This makes the comparison group increasingly similar to the group of participants in terms of observed individual characteristics and the characteristics of (planned) training events. At the same time, the point estimate of the average return to training consistently drops from a large and significant return to a point estimate close to zero. Copyright © 2008 John Wiley & Sons, Ltd.

