Similar Documents
20 similar documents found (search time: 46 ms)
1.
Abstract

In this paper we construct a model to estimate local employment growth in Italian local labour markets for the period 1991–2001. The model is constructed in a similar manner to the original models of Glaeser et al. (1992), Henderson et al. (1995) and Combes (2000). Our objective is to identify the extent to which the results estimated by these types of models are sensitive to the model specification. To do this, we extend the basic models by successively incorporating new explanatory variables into the model framework. In addition, and for the first time, we also estimate these same models at two different levels of sectoral aggregation for the same spatial structure. Our results indicate that these models are highly sensitive to sectoral aggregation and classification, and therefore strongly support the use of highly disaggregated data.

2.
When constructing unconditional point forecasts, both direct and iterated multistep (DMS and IMS) approaches are common. However, in the context of producing conditional forecasts, IMS approaches based on vector autoregressions are far more common than simpler DMS models. This is despite the fact that there are theoretical reasons to believe that DMS models are more robust to misspecification than IMS models. In the context of unconditional forecasts, Marcellino et al. (Journal of Econometrics, 2006, 135, 499–526) investigate the empirical relevance of these theories. In this paper, we extend that work to conditional forecasts. We do so based on linear bivariate and trivariate models estimated using a large dataset of macroeconomic time series. Over comparable samples, our results reinforce those in Marcellino et al.: the IMS approach is typically a bit better than DMS, with significant improvements only at longer horizons. In contrast, when we focus on the Great Moderation sample we find a marked improvement in the DMS approach relative to IMS. The distinction is particularly clear when we forecast nominal rather than real variables, where the relative gains can be substantial.
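To make the DMS/IMS distinction concrete, here is a minimal sketch on simulated AR(1) data: the iterated approach powers up a one-step coefficient h times, while the direct approach regresses the h-step-ahead value on today's value in a single step. All names and numbers are illustrative, not the paper's models.

```python
# Iterated (IMS) vs. direct (DMS) multistep forecasts from an AR(1) sketch.
import numpy as np

rng = np.random.default_rng(0)
T, h = 200, 4
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

# IMS: estimate the one-step coefficient, then iterate it h times.
X1, Y1 = y[:-1], y[1:]
phi = (X1 @ Y1) / (X1 @ X1)
ims_forecast = phi**h * y[-1]

# DMS: regress y_{t+h} directly on y_t and forecast in one step.
Xh, Yh = y[:-h], y[h:]
beta = (Xh @ Yh) / (Xh @ Xh)
dms_forecast = beta * y[-1]

print(f"IMS: {ims_forecast:.3f}, DMS: {dms_forecast:.3f}")
```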

3.
Randomized response (RR) techniques are used in surveys to collect data on sensitive issues while protecting respondents' privacy. The degree of confidentiality will clearly determine whether or not respondents choose to cooperate. There have been many proposals for privacy measures, with very different implications for an optimal model design. These measures of privacy protection involve both conditional probabilities of being perceived as belonging to the sensitive group, denoted P(A|yes) and P(A|no). In this paper, we introduce an alternative criterion for measuring privacy protection, and we reconsider and compare some RR models in light of the efficiency/privacy-protection trade-off. This measure is known to the respondents before they agree to use the RR model, which makes it helpful for choosing an optimal RR model in practice.
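As a concrete instance of such privacy measures, the sketch below computes P(A|yes) and P(A|no) by Bayes' rule for Warner's classic randomized response design; the design probability p and the prevalence pi are illustrative assumptions, not values from the paper.

```python
# Warner's RR design: with probability p the respondent answers about
# membership in the sensitive group A, otherwise about its complement.
def warner_privacy(p: float, pi: float):
    """p: design probability; pi: true prevalence of the sensitive trait A."""
    p_yes = p * pi + (1 - p) * (1 - pi)      # overall P(answer = yes)
    p_a_given_yes = p * pi / p_yes           # P(A | yes)
    p_no = p * (1 - pi) + (1 - p) * pi
    p_a_given_no = (1 - p) * pi / p_no       # P(A | no)
    return p_a_given_yes, p_a_given_no

# As p -> 0.5 both conditionals approach pi: answers reveal nothing about A.
print(warner_privacy(p=0.7, pi=0.2))    # tighter design, weaker privacy
print(warner_privacy(p=0.55, pi=0.2))   # closer to 0.5, stronger privacy
```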

4.
• Using a six-factor model of donations, we estimate the effect on net donations (i.e., donations less fundraising expenditures) of a one percent marginal increase in fundraising expenditures for each sample nonprofit organization (NPO) from the Nonprofit Times 100 from 2000 to 2002. No prior study of U.S. NPOs estimates the effect of fundraising expense on net donations. We then use these estimates and what we argue is the correct benchmark, the ratio of fundraising expense to donations, to provide evidence, for each NPO, on whether the NPO's level of fundraising is 'excessive', 'optimal', or 'insufficient' relative to the level that maximizes net donations. All prior studies using log-log models use what we suggest is an incorrect benchmark for evaluating NPO fundraising behavior.
  • The estimated effect of a 1% increase in fundraising on net donations varies widely across NPOs in our sample—from an increase in net donations of 0.18% of gross donations to a decrease of 0.66% of gross donations. Of the 76 Nonprofit Times 100 NPOs with usable data in 2002, we estimate that 24 engaged in ‘excessive’ fundraising, 18 engaged in ‘insufficient’ fundraising, and 34 did not engage in ‘excessive’ or ‘insufficient’ fundraising; i.e., we could not reject the null hypothesis of ‘optimal’ levels of fundraising.
Copyright © 2006 John Wiley & Sons, Ltd.
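A worked sketch of the benchmark arithmetic, under the log-log assumption that beta is the elasticity of gross donations D with respect to fundraising F; the numbers are illustrative, not the paper's estimates. A 1% rise in F changes net donations N = D - F by roughly 0.01 * (beta * D - F), so fundraising is 'optimal' where beta equals the fundraising-to-donations ratio F / D.

```python
# Benchmark logic for evaluating fundraising levels in a log-log model.
def net_donation_effect(beta: float, D: float, F: float) -> float:
    """Change in net donations from a 1% increase in F, as a share of D."""
    return 0.01 * (beta - F / D)

print(net_donation_effect(beta=0.10, D=100.0, F=5.0))    # > 0: 'insufficient'
print(net_donation_effect(beta=0.10, D=100.0, F=10.0))   # = 0: 'optimal'
print(net_donation_effect(beta=0.10, D=100.0, F=20.0))   # < 0: 'excessive'
```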

5.
A non-standard log-linear approach is employed to fit the class of non-independence, asymmetric, skew-symmetric and inclined point-symmetry models to intergenerational mobility tables. While these models are not new, our goal in this paper is to present their implementation using SAS PROC GENMOD. This approach is used to fit this class of models to the 6 × 6 Brazilian social mobility data, which has received extensive consideration in the literature, as well as to the 9 × 9 mobility table of men in England and Wales. These are further extended to two 5 × 5 mobility tables of men in the US aged 20–64. Parsimonious models are sought for each table, and expressions for estimated odds ratios under the appropriate models are provided based on our approach.
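The paper implements these models in SAS PROC GENMOD; the sketch below shows the analogous Poisson log-linear fit in Python with statsmodels, on an illustrative 3 × 3 table with a simple diagonal term as the most basic non-independence model. Table counts and the diagonal specification are assumptions for illustration only.

```python
# Poisson log-linear fit of independence vs. a simple non-independence model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

table = np.array([[50, 20, 10],
                  [15, 60, 25],
                  [ 5, 30, 70]])
rows, cols = np.indices(table.shape)
df = pd.DataFrame({
    "count": table.ravel(),
    "origin": pd.Categorical(rows.ravel()),
    "dest": pd.Categorical(cols.ravel()),
    # diagonal indicator: the simplest 'non-independence' term
    "diag": (rows.ravel() == cols.ravel()).astype(int),
})

indep = smf.glm("count ~ origin + dest", data=df,
                family=sm.families.Poisson()).fit()
nonindep = smf.glm("count ~ origin + dest + diag", data=df,
                   family=sm.families.Poisson()).fit()
print(indep.deviance, nonindep.deviance)  # deviance drop from the diag term
```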

6.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming], as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084], are used to compare the alternative models. A number of simple time-series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. XII, pp. 383–398] is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time-series models are added to the picture, we find that the DSGE models still fare very well, often outperforming their competitors, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.
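A minimal sketch of the Diebold and Mariano (1995) test referenced above, using squared-error loss and a plain variance estimate (no autocorrelation correction, as appropriate only at horizon 1); the two forecast error series are simulated, not the paper's.

```python
# Diebold-Mariano test of equal predictive accuracy for two error series.
import numpy as np
from scipy import stats

def diebold_mariano(e1: np.ndarray, e2: np.ndarray) -> tuple[float, float]:
    d = e1**2 - e2**2                  # loss differential under squared error
    T = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / T)
    pval = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, pval

rng = np.random.default_rng(1)
e_model_a = rng.standard_normal(100)
e_model_b = rng.standard_normal(100) * 1.2   # noisier competitor
print(diebold_mariano(e_model_a, e_model_b))
```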

7.
The aim of this paper is to derive methodology for designing 'time to event' type experiments. In comparison to estimation, design aspects of 'time to event' experiments have received relatively little attention. We show that gains in efficiency of estimators of parameters and use of experimental material can be made using optimal design theory. The types of models considered include classical failure data and accelerated testing situations, and frailty models, each involving covariates which influence the outcome. The objective is to construct an optimal design based on the values of the covariates and the associated model, or indeed a candidate set of models. We consider D-optimality and create compound optimality criteria to derive optimal designs for multi-objective situations which, for example, focus on the number of failures as well as the estimation of parameters. The approach is motivated and demonstrated using common failure/survival models, for example, the Weibull distribution, product assessment and frailty models.
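A minimal sketch of the D-optimality machinery: choose support points from a candidate grid to maximize det(X'X). A real 'time to event' design would use the Weibull or frailty information matrix in place of this two-parameter linear model; the grid and design size are illustrative assumptions.

```python
# D-optimal support search for an intercept-plus-slope model on [0, 1].
import numpy as np
from itertools import combinations_with_replacement

candidates = np.linspace(0, 1, 11)   # candidate covariate values
n_points = 4                         # design size

def d_criterion(points: np.ndarray) -> float:
    X = np.column_stack([np.ones_like(points), points])   # model matrix
    return float(np.linalg.det(X.T @ X))

best = max(combinations_with_replacement(candidates, n_points),
           key=lambda pts: d_criterion(np.array(pts)))
print("D-optimal support:", best)    # half the mass at each extreme, 0 and 1
```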

8.
Heterogeneity among firms has been an important issue in studying firms' technical efficiencies. If firms do not fall randomly into groups with different technologies but instead self-select into them, statistically the data are subject to sample selection bias. In this paper, we generalize the stochastic frontier (SF) model to accommodate heterogeneous technologies among firms by considering a threshold SF model with an endogenous threshold variable. We discuss the econometric techniques appropriate for the threshold SF model with panel data. To determine the optimal number of regimes, we modify the model selection criteria of Gonzalo and Pitarakis (J Econom 110(2):319–352, 2002) and investigate their finite sample performance in Monte Carlo experiments. Finally, we demonstrate our approach with an empirical example.

9.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro-area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro-area-wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss using as indicators the estimated factors from a dynamic factor model fitted to all the indicators. Secondly, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors, and methods of pooling the ex ante single-indicator or factor-based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.

10.
Extended input–output models require careful estimation of disaggregated consumption by households and comparable sources of labor income by sector, components which most often have to be estimated. The primary focus of this paper is to produce labor demand disaggregated by workers' age. The results are evaluated by checking their consistency with a static labor demand model restricted by theoretical requirements; a Bayesian approach is used for more straightforward imposition of these regularity conditions. The Bayesian model confirms elastic labor demand for youth workers, consistent with what past studies find. Additionally, to explore the effects of changes in age structure on a regional economy, the estimated age-group-specific labor demand model is integrated into a regional input–output model. The integrated model suggests that, ceteris paribus, an ageing population lowers aggregate economic multipliers because the rapidly growing number of elderly workers earn less than younger workers.
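For context, a minimal sketch of the underlying input-output calculation: sectoral output multipliers are the column sums of the Leontief inverse (I - A)^-1, where A is the matrix of technical coefficients. The coefficients here are illustrative; the paper's extended model would additionally endogenize age-specific household income and consumption before inverting.

```python
# Output multipliers from a small illustrative input-output table.
import numpy as np

A = np.array([[0.20, 0.10, 0.05],
              [0.15, 0.25, 0.10],
              [0.05, 0.10, 0.15]])
leontief_inverse = np.linalg.inv(np.eye(3) - A)
multipliers = leontief_inverse.sum(axis=0)   # one multiplier per sector
print(multipliers)
```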

11.
The present paper shows that cross-section demeaning with respect to time fixed effects is more useful than commonly appreciated, in that it enables consistent and asymptotically normal estimation of interactive effects models with heterogeneous slope coefficients when the number of time periods, T, is small and only the number of cross-sectional units, N, is large. This is important when using OLS but also when using more sophisticated estimators of interactive effects models whose validity does not require demeaning, a point that to the best of our knowledge has not been made before in the literature. As an illustration, we consider the problem of estimating the average treatment effect in the presence of unobserved time-varying heterogeneity. Gobillon and Magnac (2016) recently considered this problem. They employed a principal components-based approach designed to deal with general unobserved heterogeneity, which does not require fixed effects demeaning. The approach does, however, require that T is large, which is typically not the case in practice, and the results reported here confirm that the performance can be extremely poor in small-T samples. The exception is when the approach is applied to data that have been demeaned with respect to fixed effects.
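A minimal sketch of the demeaning idea on simulated data with a common time factor and heterogeneous slopes: subtract the period-by-period cross-sectional averages from y and x, then estimate unit-by-unit slopes and average them, mean-group style. The data-generating process is an illustrative assumption, not the setup of the paper or of Gobillon and Magnac (2016).

```python
# Cross-section demeaning w.r.t. time effects, then mean-group slopes.
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 5
factor = rng.standard_normal(T)               # common time effect
beta_i = 1.0 + 0.2 * rng.standard_normal(N)   # heterogeneous slopes
x = rng.standard_normal((N, T)) + factor      # x loads on the factor too
y = beta_i[:, None] * x + factor + 0.5 * rng.standard_normal((N, T))

y_dm = y - y.mean(axis=0)                     # demean over i, per period t
x_dm = x - x.mean(axis=0)

slopes = (x_dm * y_dm).sum(axis=1) / (x_dm**2).sum(axis=1)
print(slopes.mean())                          # roughly the average slope 1.0
```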

12.
We provide the first empirical application of a new approach proposed by Lee (Journal of Econometrics 2007; 140(2), 333–374) to estimate peer effects in a linear-in-means model when individuals interact in groups. Assuming sufficient group size variation, this approach allows one to control for correlated effects at the group level and to solve the simultaneity (reflection) problem. We clarify the intuition behind identification of peer effects in the model. We investigate peer effects in student achievement in French, Science, Mathematics and History in secondary schools in the Province of Québec (Canada). We estimate the model using conditional maximum likelihood and instrumental variables methods. We find some evidence of peer effects. The endogenous peer effect is large and significant in Mathematics but imprecisely estimated in the other subjects. Some contextual peer effects are also significant; in particular, for most subjects, the average age of peers has a negative effect on own test score. Using calibrated Monte Carlo simulations, we find that high dispersion in group sizes helps with potential issues of weak identification. Copyright © 2012 John Wiley & Sons, Ltd.

13.

Globalization and migratory fluxes are increasing the ethnic and racial diversity within many countries. Describing social dynamics therefore requires models apt to capture multi-group interactions. Building on the assumption of a relationship between multi-racial dynamics and socioeconomic status (SES), we introduce an aggregate, contextual, and continuous index of SES accounting for measures of income, employment, life expectancy, and group numerosity. Then, taking into account that a group's SES takes the form of a logit model, we propose a Lotka–Volterra system to study and forecast the interaction among racial groups. Finally, we apply our methodology to describe racial dynamics in US society. In particular, we study the kind and the intensity of Asians–Blacks–Natives–Whites interactions in the US between 2002 and 2013, and we forecast the evolution of groups' SES and how interracial relations will unfold between 2013 and 2018 and in three alternative stylized scenarios.

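A minimal sketch of a competitive Lotka–Volterra system for group-level SES trajectories, integrated with scipy; the four groups, interaction coefficients and initial values are illustrative stand-ins, not the paper's US estimates.

```python
# Competitive Lotka-Volterra dynamics for four interacting groups.
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([0.30, 0.25, 0.28, 0.20])       # intrinsic growth rates
A = np.array([[1.0, 0.3, 0.2, 0.1],          # interaction matrix a_ij
              [0.4, 1.0, 0.2, 0.2],
              [0.3, 0.2, 1.0, 0.3],
              [0.2, 0.3, 0.4, 1.0]])

def lotka_volterra(t, s):
    # dS_i/dt = r_i * S_i * (1 - sum_j a_ij * S_j)
    return r * s * (1.0 - A @ s)

sol = solve_ivp(lotka_volterra, t_span=(0, 15), y0=[0.2, 0.3, 0.25, 0.4],
                t_eval=np.linspace(0, 15, 6))
print(sol.y.round(3))                        # SES paths for the four groups
```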

14.
In this paper, we provide an intensive review of the recent developments for semiparametric and fully nonparametric panel data models that are linearly separable in the innovation and the individual-specific term. We analyze these developments under two alternative model specifications: fixed and random effects panel data models. More precisely, in the random effects setting, we focus our attention on some efficiency issues related to the so-called working independence condition, an assumption introduced when estimating the asymptotic variance-covariance matrix of nonparametric estimators. In the fixed effects setting, to cope with the so-called incidental parameters problem, we consider two different estimation approaches: profiling techniques and differencing methods. Furthermore, we are also interested in the endogeneity problem and how instrumental variables are used in this context. In addition, for practitioners, we also show different ways of avoiding the so-called curse of dimensionality problem in pure nonparametric models; here semiparametric and additive models appear as a solution when the number of explanatory variables is large.

15.
Many forecasts are conditional in nature. For example, a number of central banks routinely report forecasts conditional on particular paths of policy instruments. Even though conditional forecasting is common, there has been little work on methods for evaluating conditional forecasts. This paper provides analytical, Monte Carlo and empirical evidence on tests of predictive ability for conditional forecasts from estimated models. In the empirical analysis, we examine conditional forecasts obtained with a VAR in the variables included in the DSGE model of Smets and Wouters (American Economic Review 2007; 97: 586–606). Throughout the analysis, we focus on tests of bias, efficiency and equal accuracy applied to conditional forecasts from VAR models. Copyright © 2016 John Wiley & Sons, Ltd.

16.
A desirable property of a forecast is that it encompasses competing predictions, in the sense that the accuracy of the preferred forecast cannot be improved through linear combination with a rival prediction. In this paper, we investigate the impact of the uncertainty associated with estimating model parameters in-sample on the encompassing properties of out-of-sample forecasts. Specifically, using examples of non-nested econometric models, we show that forecasts from the true (but estimated) data generating process (DGP) do not encompass forecasts from competing mis-specified models in general, particularly when the number of in-sample observations is small. Following this result, we also examine the scope for achieving gains in accuracy by combining the forecasts from the DGP and mis-specified models.
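A minimal sketch of an encompassing check in regression form: since y = lam * f1 + (1 - lam) * f2 + e can be rewritten as y - f1 = (1 - lam) * (f2 - f1) + e, a combination weight on the rival near zero indicates that f1 encompasses f2. The forecasts are simulated so that f2 equals f1 plus extra noise, a case where encompassing should hold; all names are illustrative.

```python
# Forecast-encompassing regression on simulated forecasts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 150
y = rng.standard_normal(T)
f1 = y + 0.5 * rng.standard_normal(T)     # accurate forecast
f2 = f1 + 0.8 * rng.standard_normal(T)    # rival = f1 plus extra noise

# Regress y - f1 on f2 - f1: slope = 1 - lam, the weight on the rival.
res = sm.OLS(y - f1, sm.add_constant(f2 - f1)).fit()
print(res.params, res.pvalues)            # slope near zero: f1 encompasses f2
```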

17.
Estimation with longitudinal Y having nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y, X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity, based on estimating equations constructed using an instrument Z: a part of X that is related to Y but unrelated to the dropout propensity conditional on Y and the other covariates. Population means and other parameters in the nonparametric distribution of Y can be estimated by inverse propensity weighting with the estimated propensity. To improve efficiency, we derive a model-assisted regression estimator making use of the information provided by the covariates and previously observed Y-values in the longitudinal setting. The model-assisted regression estimator is protected from model misspecification, and is asymptotically normal and more efficient when the working models are correct and some other conditions are satisfied. The finite-sample performance of the estimators is studied through simulation, and an application to the HIV-CD4 data set is also presented as illustration.
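A minimal sketch of inverse propensity weighting with an estimated propensity. For simplicity the propensity here depends only on a fully observed covariate (a missing-at-random setting); the paper's nonignorable case instead identifies the propensity through the instrument Z and GMM. Data and model are illustrative.

```python
# Naive vs. inverse-propensity-weighted mean under covariate-driven dropout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)
y = 1.0 + x + rng.standard_normal(n)
p_obs = 1 / (1 + np.exp(-(0.5 + x)))      # true response propensity
observed = rng.random(n) < p_obs

# Estimate the propensity from the response indicator, then weight.
prop_model = LogisticRegression().fit(x.reshape(-1, 1), observed)
p_hat = prop_model.predict_proba(x.reshape(-1, 1))[:, 1]

naive_mean = y[observed].mean()           # biased upward
ipw_mean = np.sum(y[observed] / p_hat[observed]) / np.sum(1 / p_hat[observed])
print(naive_mean, ipw_mean)               # IPW is close to E[y] = 1
```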

18.
This paper presents an inference approach for dependent data in time series, spatial, and panel data applications. The method involves constructing t and Wald statistics using a cluster covariance matrix estimator (CCE). We use an approximation that takes the number of clusters/groups as fixed and the number of observations per group to be large. The resulting limiting distributions of the t and Wald statistics are standard t and F distributions, where the number of groups plays the role of sample size. Using a small number of groups is analogous to the 'fixed-b' asymptotics of Kiefer and Vogelsang (2002, 2005) (KV) for heteroskedasticity and autocorrelation consistent inference. We provide simulation evidence demonstrating that the procedure substantially outperforms conventional inference procedures.
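A minimal sketch of a cluster covariance estimator and the resulting t statistic, referred to the t distribution with G - 1 degrees of freedom where G is the small, fixed number of clusters; the data-generating process is an illustrative assumption.

```python
# Cluster-robust t statistic with t(G - 1) reference distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
G, n_g = 10, 50                               # few clusters, many obs each
cluster = np.repeat(np.arange(G), n_g)
u_g = rng.standard_normal(G)[cluster]         # cluster-level error component
x = rng.standard_normal(G * n_g)
y = 0.0 * x + u_g + rng.standard_normal(G * n_g)   # true slope is zero

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

# Cluster "meat": sum over clusters of (X_g' u_g)(X_g' u_g)'.
meat = np.zeros((2, 2))
for g in range(G):
    idx = cluster == g
    s = X[idx].T @ resid[idx]
    meat += np.outer(s, s)
bread = np.linalg.inv(X.T @ X)
V = bread @ meat @ bread

t_stat = beta[1] / np.sqrt(V[1, 1])
pval = 2 * (1 - stats.t.cdf(abs(t_stat), df=G - 1))   # t with G - 1 dof
print(t_stat, pval)
```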

19.
We consider pseudo-panel data models constructed from repeated cross sections in which the number of individuals per group is large relative to the number of groups and time periods. First, we show that, when time-invariant group fixed effects are neglected, the OLS estimator does not converge in probability to a constant but rather to a random variable. Second, we show that, while the fixed-effects (FE) estimator is consistent, the usual t statistic is not asymptotically normally distributed, and we propose a new robust t statistic whose asymptotic distribution is standard normal. Third, we propose efficient GMM estimators using the orthogonality conditions implied by grouping and we provide t tests that are valid even in the presence of time-invariant group effects. Our Monte Carlo results show that the proposed GMM estimator is more precise than the FE estimator and that our new t test has good size and is powerful.

20.
Here we propose a model of consensus in a fuzzy environment, similar to the so-called Delphi method. In our approach, individual opinions and weights are fuzzy numbers, and the transition from one state to another is obtained via extended max-min operations. While in the classical stochastic approach to the Delphi method (see De Groot [1] and Kelly [6]) conditions for reaching consensus are established, in our approach it is proved that, given a group of n individuals, under suitable conditions individual opinions become stable after n−1 iterations.


Work carried out with the support of CNR grant no. 83-02619.10.
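A minimal sketch of the max-min opinion update, with crisp numbers in [0, 1] standing in for the fuzzy opinions and weights (the paper works with extended max-min operations on genuine fuzzy numbers); the weight matrix is an illustrative assumption. With n individuals the iteration stabilizes after at most n - 1 steps.

```python
# Max-min (fuzzy relational) opinion dynamics for a small group.
import numpy as np

n = 4
W = np.array([[1.0, 0.6, 0.3, 0.2],     # W[i, j]: weight i gives j's opinion
              [0.5, 1.0, 0.7, 0.4],
              [0.3, 0.6, 1.0, 0.8],
              [0.2, 0.5, 0.7, 1.0]])
opinions = np.array([0.9, 0.4, 0.6, 0.2])

for step in range(n - 1):
    # max-min composition: o_i <- max_j min(W[i, j], o_j)
    opinions = np.max(np.minimum(W, opinions[None, :]), axis=1)
    print(step + 1, opinions)           # stabilizes within n - 1 iterations
```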

