Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Many studies show that women are more risk averse than men. In this paper, following DeLeire and Levy [DeLeire, T. and Levy, H. (2004) ‘Worker Sorting and the Risk of Death on the Job’, Journal of Labor Economics, Vol. 22, No. 4, pp. 210–217] for the US, we use family structure as a proxy for the degree of risk aversion to test the proposition that those with a strong aversion to risk will make occupational choices biased towards safer jobs. In line with DeLeire and Levy, we find that women are more risk averse than men and that those who are single with children are more risk averse than those without. The effect on the degree of gender segregation is somewhat smaller than for the US.

2.
Drawing on Victor and Cullen's theoretical framework [Victor, B. and Cullen, J. B. (1987) ‘A theory and measure of ethical climate in organizations’, Research in Corporate Social Performance and Policy, Vol. 9, pp. 51–71; Victor, B. and Cullen, J. B. (1988) ‘The organizational bases of ethical work climates’, Administrative Science Quarterly, Vol. 33, pp. 101–125], a recent study by Agarwal and Malloy [Agarwal, J. and Malloy, D. C. (1999) ‘Ethical work climate dimensions in a not-for-profit organization: An empirical study’, Journal of Business Ethics, Vol. 20, pp. 1–14] examined ethical work climate dimensions in the context of a nonprofit organisation. This paper reviews the framework and extends the study by investigating several factors that influence the perception of ethical work climate in a nonprofit organisation. The multivariate analysis of variance (MANOVA) procedure is employed to test nine hypotheses. The results point to somewhat distinctive drivers of ethical climate perception in the nonprofit context: specifically, the level of education, decision style, and the influence that superiors and volunteers have on ethical perception. The results also show that factors traditionally relied upon in for-profit management, such as length of service, codes of ethics, size of the organisation and peer pressure, do not effectively influence ethical perception in the nonprofit context. Finally, the implications of this study are discussed. Copyright © 2003 Henry Stewart Publications
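For readers who want to see what such a test looks like in practice, the sketch below (simulated data, hypothetical variable names) runs a one-factor MANOVA of several ethical-climate dimension scores on education level; it only illustrates the procedure, not the study's actual data or model.

```python
# Illustrative sketch only: simulated data, hypothetical climate dimensions.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 120
education = rng.choice(["secondary", "bachelor", "graduate"], size=n)
shift = np.where(education == "graduate", 0.4, 0.0)   # illustrative group effect

df = pd.DataFrame({
    "education": education,
    # three hypothetical ethical-climate dimension scores
    "caring": rng.normal(0, 1, n) + shift,
    "rules": rng.normal(0, 1, n) + shift,
    "independence": rng.normal(0, 1, n),
})

# Wilks' lambda, Pillai's trace, etc. for H0: equal mean vectors across groups
fit = MANOVA.from_formula("caring + rules + independence ~ education", data=df)
print(fit.mv_test())
```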

3.
This paper reviews current evidence on the declining political engagement of British youth. What emerges is that the causes of their political disaffection are manifold and complex, but trust, distrust and cynicism feature strongly. Traditional approaches to trust and distrust fail to recognise this complexity; consequently, this paper offers a more sophisticated conceptual framework that examines trust and distrust as separate but linked dimensions, as advocated by Lewicki, McAllister and Bies [Lewicki, R. J., McAllister, D. J. and Bies, R. J. (1998) ‘Trust and distrust: New relationships and realities’, Academy of Management Review, Vol. 23, No. 3, pp. 438–458]. From the analysis, four segments of ‘voter’ types are identified. By segmenting voters in this way, marketers can design strategies to help increase young people's trust and reduce their distrust, thereby increasing their propensity to vote in future elections. A synopsis of marketing aims to stimulate the ‘youth vote’ is presented, along with areas for further research. Copyright © 2004 Henry Stewart Publications

4.
Labour Economics (2006), Vol. 13, No. 5, pp. 571–587
This study uses a sample of young Australian twins to examine whether the findings reported in Ashenfelter and Krueger [Ashenfelter, O. and Krueger, A. (1994) ‘Estimates of the Economic Return to Schooling from a New Sample of Twins’, American Economic Review, Vol. 84, No. 5, pp. 1157–1173] and Miller, Mulvey and Martin [Miller, P. W., Mulvey, C. and Martin, N. (1994) ‘What Do Twins Studies Tell Us About the Economic Returns to Education? A Comparison of Australian and US Findings’, Western Australian Labour Market Research Centre Discussion Paper 94/4] are robust to the choice of sample and dependent variable. The economic return to schooling in Australia is between 5 and 7 percent when account is taken of genetic and family effects using either fixed-effects models or the selection-effects model of Ashenfelter and Krueger. Given the similarity of the findings in this and related studies, the models applied by Ashenfelter and Krueger (1994) appear robust. Moreover, viewing the OLS and IV estimators as lower and upper bounds in the manner of Black, Berger and Scott [Black, D. A., Berger, M. C. and Scott, F. C. (2000) ‘Bounding Parameter Estimates with Nonclassical Measurement Error’, Journal of the American Statistical Association, Vol. 95, No. 451, pp. 739–748], it is shown that the bounds on the return to schooling in Australia are much tighter than in Ashenfelter and Krueger (1994), and that the return is bounded at a much lower level than in the US.
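The within-twin-pair logic behind these estimates can be illustrated with a short simulation. The sketch below (illustrative numbers only, not the Australian twin data) shows how differencing log wages and schooling across twins removes a shared family/genetic component that biases pooled OLS.

```python
# Illustrative simulation, not the Australian twins sample; "true" return set to 6%.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_pairs = 2000
family = rng.normal(0, 0.4, n_pairs)                 # shared family/genetic effect
school1 = 11 + rng.poisson(2, n_pairs) + family      # schooling correlated with it
school2 = 11 + rng.poisson(2, n_pairs) + family
beta = 0.06
logw1 = 1.0 + beta * school1 + family + rng.normal(0, 0.3, n_pairs)
logw2 = 1.0 + beta * school2 + family + rng.normal(0, 0.3, n_pairs)

# Pooled OLS is biased upwards by the omitted family effect
X = sm.add_constant(np.concatenate([school1, school2]))
y = np.concatenate([logw1, logw2])
print("pooled OLS estimate:", round(sm.OLS(y, X).fit().params[1], 3))

# Within-pair differences sweep the family effect out
d_school = school1 - school2
d_logw = logw1 - logw2
print("within-twin estimate:", round(sm.OLS(d_logw, d_school).fit().params[0], 3))
```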

5.
The controversy over the selection of ‘growth regressions’ was precipitated by some remarkably numerous ‘estimation’ strategies, including the two million regressions run by Sala-i-Martin [American Economic Review (1997b) Vol. 87, pp. 178–183]. Only one regression is really needed, namely the general unrestricted model, appropriately reduced to a parsimonious, encompassing, congruent representation. We corroborate the findings of Hoover and Perez [Oxford Bulletin of Economics and Statistics (2004) Vol. 66], who also adopt an automatic general-to-simple approach, despite the complications of data imputation. Such an outcome was also achieved in just one run of PcGets, within a few minutes of receiving the data set in Fernández, Ley and Steel [Journal of Applied Econometrics (2001) Vol. 16, pp. 563–576] from Professor Ley.
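A heavily simplified illustration of the general-to-specific idea is sketched below: a single-path backward elimination from a general unrestricted model. It is not PcGets or Autometrics, which search multiple reduction paths and apply diagnostic tests at each step.

```python
# Simplified single-path general-to-specific reduction, not PcGets/Autometrics.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, k = 200, 12
X = pd.DataFrame(rng.normal(size=(n, k)), columns=[f"x{i}" for i in range(k)])
y = 0.8 * X["x0"] - 0.5 * X["x3"] + rng.normal(size=n)    # only x0 and x3 matter

keep = list(X.columns)
while True:
    res = sm.OLS(y, sm.add_constant(X[keep])).fit()
    tvals = res.tvalues.drop("const").abs()
    if tvals.min() >= 1.96 or len(keep) == 1:
        break
    keep.remove(tvals.idxmin())                           # drop the weakest regressor

print("selected regressors:", keep)
```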

6.
The paper describes two automatic model selection algorithms, RETINA and PcGets, briefly discussing how the algorithms work and what their performance claims are. RETINA's Matlab implementation is explained, and the program is then compared with PcGets on the data in Perez-Amaral, Gallo and White (2005, Econometric Theory, Vol. 21, pp. 262–277), ‘A Comparison of Complementary Automatic Modelling Methods: RETINA and PcGets’, and Hoover and Perez (1999, Econometrics Journal, Vol. 2, pp. 167–191), ‘Data Mining Reconsidered: Encompassing and the General-to-specific Approach to Specification Search’. Monte Carlo simulation results assess the null and non-null rejection frequencies of the RETINA and PcGets model selection algorithms in the presence of nonlinear functions.

7.
Although out-of-sample forecast performance is often deemed to be the ‘gold standard’ of evaluation, it is not in fact a good yardstick for evaluating models in general. The arguments are illustrated with reference to a recent paper by Carruth, Hooker and Oswald [Review of Economics and Statistics (1998), Vol. 80, pp. 621–628], who suggest that the good dynamic forecasts of their model support the efficiency-wage theory on which it is based.

8.
Feenstra and Hanson [NBER Working Paper No. 6052 (1997)] propose a procedure to correct the standard errors in a two-stage regression with generated dependent variables. Their method has subsequently been used in two-stage mandated-wage models [Feenstra and Hanson, Quarterly Journal of Economics (1999) Vol. 114, pp. 907–940; Haskel and Slaughter, The Economic Journal (2001) Vol. 111, pp. 163–187; Review of International Economics (2003) Vol. 11, pp. 630–650] and for the estimation of the sector bias of skill-biased technological change [Haskel and Slaughter, European Economic Review (2002) Vol. 46, pp. 1757–1783]. Unfortunately, the proposed correction is negatively biased (sometimes even yielding negative estimated variances) and therefore leads to overestimation of the inferred significance. We present an unbiased correction procedure and apply it to the models reported by Feenstra and Hanson (1999) and Haskel and Slaughter (2002).
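To make the generated-dependent-variable problem concrete, the sketch below uses simulated data and a pairs bootstrap over both stages as one generic way to account for first-stage estimation error; it is not the correction proposed in the paper, and all names and numbers are illustrative.

```python
# Simulated data; a pairs bootstrap stands in for the paper's analytical correction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_ind, n_obs = 50, 40                                    # industries, obs per industry

w = rng.normal(size=n_ind)                               # second-stage regressor
theta = 0.5 + 0.3 * w + rng.normal(0, 0.2, n_ind)        # true second-stage slope: 0.3
z = rng.normal(size=(n_ind, n_obs))                      # first-stage regressor
y1 = theta[:, None] * z + rng.normal(0, 1.0, (n_ind, n_obs))

def second_stage(idx):
    # stage 1: estimate each industry effect; stage 2: regress the estimates on w
    theta_hat = np.array([sm.OLS(y1[i], z[i]).fit().params[0] for i in idx])
    return sm.OLS(theta_hat, sm.add_constant(w[idx])).fit()

base = second_stage(np.arange(n_ind))
boot = [second_stage(rng.integers(0, n_ind, n_ind)).params[1] for _ in range(200)]
print("naive second-stage SE:", round(base.bse[1], 4))
print("pairs-bootstrap SE   :", round(np.std(boot), 4))
```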

9.
Two different approaches attempt to resolve the ‘puzzling’ slow convergence to purchasing power parity (PPP) reported in the literature [see Rogoff (1996), Journal of Economic Literature, Vol. 34]. On the one hand, there are models that consider a non-linear adjustment of the real exchange rate to PPP induced by transaction costs. Such costs imply the presence of a transaction band within which adjustment is too costly to be undertaken. On the other hand, there are models that relax the ‘classical’ PPP assumption of a constant equilibrium real exchange rate. A prominent theory, put together by Balassa (1964, Journal of Political Economy, Vol. 72) and Samuelson (1964, Review of Economics and Statistics, Vol. 46), the BS effect, suggests that a non-constant real exchange rate equilibrium is induced by differences in productivity growth rates between countries. This paper reconciles these two approaches by considering an exponential smooth transition-in-deviation non-linear adjustment mechanism towards non-constant equilibrium real exchange rates within the EMS (European Monetary System) and for effective rates. The equilibrium is proxied, in a theoretically appealing manner, using deterministic trends and the relative price of non-tradables to capture BS effects. The empirical results provide further support for the hypothesis that real exchange rates are well described by symmetric, non-linear processes. Furthermore, the half-life of shocks in such models is found to be dramatically shorter than that obtained in linear models.
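The exponential smooth transition adjustment can be illustrated with a small simulation. The sketch below generates deviations from equilibrium that behave like a random walk when small and mean-revert when large, and then recovers the transition parameters by nonlinear least squares; parameter values and the equilibrium proxy are illustrative only.

```python
# Illustrative ESTAR-type adjustment on simulated deviations; not the paper's model.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
T = 500
d = np.zeros(T)                                   # deviation from equilibrium
phi_true, theta_true = -0.8, 4.0
for t in range(1, T):
    g = 1.0 - np.exp(-theta_true * d[t - 1] ** 2) # transition function in (0, 1)
    d[t] = d[t - 1] + phi_true * g * d[t - 1] + rng.normal(0, 0.05)

def estar_step(d_lag, phi, theta):
    # expected change in the deviation given its lag
    return phi * (1.0 - np.exp(-theta * d_lag ** 2)) * d_lag

params, _ = curve_fit(estar_step, d[:-1], np.diff(d), p0=(-0.5, 1.0))
print("estimated (phi, theta):", params)          # compare with (-0.8, 4.0)
```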

10.
In an influential paper, Pesaran (‘A simple panel unit root test in the presence of cross-section dependence’, Journal of Applied Econometrics, Vol. 22, pp. 265–312, 2007) proposes two unit root tests for panels with a common factor structure: the CADF and CIPS statistics, which are amongst the most popular test statistics in the literature. One feature of these statistics is that their limiting distributions are highly non-standard, making implementation relatively complicated. In this paper, we take this feature as our starting point and develop modified CADF and CIPS test statistics that support standard chi-squared and normal inference.
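The construction behind the CADF and CIPS statistics is easy to sketch: each unit's Dickey–Fuller regression is augmented with cross-section averages, and the resulting t-ratios are averaged. The code below is a bare-bones illustration with simulated data, no lag augmentation and none of Pesaran's critical values.

```python
# Bare-bones CADF/CIPS construction; simulated panel, no proper critical values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
N, T = 20, 100
factor = np.cumsum(rng.normal(size=T))                    # common nonstationary factor
y = factor + np.cumsum(rng.normal(size=(N, T)), axis=1)   # unit roots under the null

ybar = y.mean(axis=0)
cadf = []
for i in range(N):
    dy = np.diff(y[i])
    X = sm.add_constant(np.column_stack([y[i][:-1],        # lagged level
                                         ybar[:-1],        # cross-section average, lagged
                                         np.diff(ybar)]))  # cross-section average, differenced
    res = sm.OLS(dy, X).fit()
    cadf.append(res.tvalues[1])                   # t-ratio on the lagged level

cips = np.mean(cadf)
print("CIPS statistic (compare with Pesaran's critical values):", round(cips, 3))
```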

11.
We consider lifetime data subject to right random censorship. In this context, the paper deals with estimating the distribution function of the lifetime and the corresponding quantile function. Since it has been shown that the classical Kaplan–Meier estimator of the distribution function can be improved by means of presmoothing ideas, we introduce a quantile function estimator via the presmoothed distribution function estimator studied by Cao et al. [Journal of Nonparametric Statistics, Vol. 17 (2005), pp. 31–56]. The main result of the paper is an almost sure representation of this presmoothed estimator. As a consequence, its strong consistency and asymptotic normality are established. The performance of the new quantile estimator is analysed in a simulation study, and the estimator is applied to a real data example.
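The classical (unsmoothed) Kaplan–Meier distribution function and its plug-in quantile are simple to compute; the sketch below does so with simulated right-censored data. The presmoothing step studied in the paper, which replaces the censoring indicators by a smoothed estimate, is not implemented here.

```python
# Classical Kaplan-Meier plug-in quantile on simulated data; no presmoothing step.
import numpy as np

rng = np.random.default_rng(6)
n = 300
lifetime = rng.exponential(2.0, n)
censor = rng.exponential(3.0, n)
time = np.minimum(lifetime, censor)          # observed time
event = (lifetime <= censor).astype(int)     # 1 = failure observed, 0 = censored

order = np.argsort(time)
time, event = time[order], event[order]
at_risk = np.arange(n, 0, -1)                # n, n-1, ..., 1 after sorting
surv = np.cumprod(1.0 - event / at_risk)     # Kaplan-Meier S(t) at each observed time
F = 1.0 - surv                               # distribution function estimate

def km_quantile(p):
    # smallest observed time at which the estimated distribution function reaches p
    idx = np.searchsorted(F, p)
    return time[idx] if idx < n else np.inf

print("median estimate:", round(km_quantile(0.5), 3),
      "  true median:", round(2.0 * np.log(2.0), 3))
```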

12.
Bayesian model selection with posterior probabilities and no subjective prior information is generally not possible, because the Bayes factors are ill-defined. Through careful consideration of the parameter of interest in cointegration analysis and a re-specification of the triangular model of Phillips (Econometrica, Vol. 59, pp. 283–306, 1991), this paper presents an approach that allows Bayesian comparison of models of cointegration under ‘ignorance’ priors. Using the concepts of Stiefel and Grassmann manifolds, diffuse priors are specified on the dimension and direction of the cointegrating space. The approach is illustrated using a simple term structure of interest rates model.

13.
The post-1995 deterioration in Europe's productivity performance relative to the U.S. coincided with the ‘renaissance’ of the U.S. statistical system, which can by now be regarded as the frontier in official statistics. This paper raises the natural question of whether the European statistical system was ‘left at the station’ while its U.S. counterpart ‘departed’, making measurement differences a primary suspect for the existing productivity gap. My retrospective review of the development of services-sector productivity statistics suggests that Europe lags significantly behind the U.S. in the services producer price index programme, both in the scope and in the timing of its implementation. Accordingly, the role of these measurement differences in the post-1995 Europe–U.S. productivity story cannot reasonably be ruled out. The paper concludes with a ‘structured guess’ that provides circumstantial evidence on the benefits generated by the upgrades to the U.S. services-sector statistics. The results show that these enhancements brought two kinds of benefits during the post-1995 period: a considerable reduction in the contribution of industries that traditionally dampened the aggregate productivity trend, combined with a larger contribution from those that generally lifted it. This contrasts markedly with Europe, where the contribution of these two sources remained unchanged over the same period.

14.
We examine the drivers of vertical integration within an integrated and unified HR-process model for 42 large companies from the financial services sector (13 companies) and the non-financial services sector (29 companies). The basis of this paper is formed by the results of a survey analysing the structures, processes and sourcing activities of human resource organizations, which we sent to 500 companies in Austria, Germany and Switzerland. The survey is based on an integrated process model that uses an employee life-cycle approach and differentiates between eight HR activities.

The purpose of this paper is threefold: first, to gain insights into the current status of HR outsourcing and to understand the differences between the financial services and non-financial services industries; second, to develop a theory-based framework (transaction cost, resource-based and principal–agent theory) enabling us to derive and test eight hypotheses using ordinary least squares (OLS) regression analysis in order to examine the determinants of the vertical integration of HR processes; and third, to analyse the impact of the vertical integration of HR departments on company performance and characteristics. We find significant differences in the level of vertical integration between the HR subprocesses analysed. Even the processes with increased outsourcing activity (i.e. a lower degree of vertical integration) still show a relatively high proportion of in-house production.

Regression analysis reveals, first, a significant negative relationship between the size of the HR department relative to company size and vertical integration. This finding holds for the HR subprocesses ‘Personnel Administration’, ‘Payroll and Benefits’ and ‘Off Boarding’. Second, we find a significant negative correlation between financial performance in terms of return on equity and the vertical integration of ‘HR-IT’. We also find support for the theoretical framework for the subprocess ‘HR-Top Management’. Six of the eight hypotheses are supported by the analyses; two of these are highly significant.

Three major findings are noteworthy when analysing company performance and the vertical integration of HR departments. First, large companies (in terms of total staff and total assets) display significantly higher levels of vertical integration for subprocesses that involve a large amount of manual work and crucial managerial, controlling and reporting tasks (‘HR-Top Management’ and ‘HR-Controlling and Reporting’). Second, large companies (in terms of total company staff) show lower levels of vertical integration for the HR subprocess ‘HR-IT’. Third, companies with superior financial performance in terms of return on equity (RoE) display lower levels of vertical integration for the HR subprocess ‘HR-IT’.
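The kind of regression used for the driver analysis can be sketched as follows; the variable names and data-generating process are hypothetical, chosen only to mirror the negative association between relative HR-department size and vertical integration reported above.

```python
# Hypothetical variable names and simulated data; not the study's survey sample.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 42
df = pd.DataFrame({
    "rel_hr_size": rng.uniform(0.5, 3.0, n),      # HR staff per 100 employees
    "log_staff": rng.normal(8, 1, n),             # log of total company staff
    "roe": rng.normal(0.10, 0.05, n),             # return on equity
    "financial_sector": rng.integers(0, 2, n),    # sector dummy
})
# illustrative data-generating process with a negative rel_hr_size effect
df["vertical_integration"] = (0.8 - 0.10 * df["rel_hr_size"]
                              + 0.02 * df["log_staff"] - 0.3 * df["roe"]
                              + rng.normal(0, 0.05, n)).clip(0, 1)

model = smf.ols("vertical_integration ~ rel_hr_size + log_staff + roe + financial_sector",
                data=df).fit()
print(model.summary())
```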

15.
The difference and system generalized method of moments (GMM) estimators are growing in popularity. As implemented in popular software, the estimators easily generate instruments that are numerous and, in system GMM, potentially suspect. A large instrument collection overfits endogenous variables even as it weakens the Hansen test of the instruments’ joint validity. This paper reviews the evidence on the effects of instrument proliferation, and describes and simulates simple ways to control it. It illustrates the dangers by replicating Forbes [American Economic Review (2000) Vol. 90, pp. 869–887] on income inequality and Levine et al. [Journal of Monetary Economics (2000) Vol. 46, pp. 31–77] on financial sector development. Results in both papers appear driven by previously undetected endogeneity.
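The mechanics of instrument proliferation, and of ‘collapsing’ the instrument matrix to control it, can be shown directly. The sketch below builds GMM-style instruments for the differenced equation of a single panel unit and compares the column counts; it covers only the instrument construction, not estimation or the Hansen test.

```python
# Instrument-matrix construction only; illustrates proliferation vs. collapsing.
import numpy as np

def gmm_style_instruments(y, collapse=False):
    # y: (T,) series for one panel unit; instruments for the differenced
    # equation at time t are levels dated t-2 and earlier (zeros elsewhere).
    T = len(y)
    if collapse:
        Z = np.zeros((T - 2, T - 2))             # one column per lag distance
        for t in range(2, T):
            for lag in range(2, t + 1):
                Z[t - 2, lag - 2] = y[t - lag]
    else:
        cols = [(t, lag) for t in range(2, T) for lag in range(2, t + 1)]
        Z = np.zeros((T - 2, len(cols)))         # one column per (period, lag) pair
        for j, (t, lag) in enumerate(cols):
            Z[t - 2, j] = y[t - lag]
    return Z

y = np.arange(1.0, 11.0)                         # toy series with T = 10
print("standard instrument count:", gmm_style_instruments(y).shape[1])          # 36
print("collapsed instrument count:", gmm_style_instruments(y, True).shape[1])   # 8
```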

16.
In this article, we derive the local asymptotic power function of the unit root test proposed by Breitung [Journal of Econometrics (2002) Vol. 108, pp. 343–363]. Breitung's test is non-parametric and free of nuisance parameters. We compare the local power curve of Breitung's test with that of the Dickey–Fuller test. This comparison is, in fact, a quantification of the loss of power that one has to accept when applying a non-parametric test.

17.
In this article, we investigate the behaviour of the stationarity tests proposed by Müller [Journal of Econometrics (2005) Vol. 128, pp. 195–213] and Harris et al. [Econometric Theory (2007) Vol. 23, pp. 355–363] under uncertainty over the trend and/or the initial condition. As different tests are efficient for different magnitudes of the local trend and initial condition, following Harvey et al. [Journal of Econometrics (2012) Vol. 169, pp. 188–195], we propose a decision rule based on the rejection of the null hypothesis by multiple tests. Additionally, we propose a modification of this decision rule that relies on additional information about the magnitudes of the local trend and/or the initial condition obtained through pre-testing. The resulting modification has satisfactory size properties under both types of uncertainty.

18.
This note presents some properties of the stochastic unit-root processes developed in Granger and Swanson [Journal of Econometrics (1997) Vol. 80, pp. 35–62] and Leybourne, McCabe and Tremayne [Journal of Business & Economic Statistics (1996) Vol. 14, pp. 435–446] that have not been discussed, or have only been discussed implicitly, in the literature.
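A stochastic unit-root process is easy to simulate: the autoregressive root equals one only on average. The sketch below contrasts such a path with an exact random walk; the specification is deliberately simplified relative to the exponential-root formulation used by Granger and Swanson.

```python
# Simplified stochastic unit-root simulation; not the papers' exact specifications.
import numpy as np

rng = np.random.default_rng(8)
T = 500
y_fixed, y_stur = np.zeros(T), np.zeros(T)
for t in range(1, T):
    a_t = 1.0 + rng.normal(0, 0.05)              # random AR root with mean one
    y_fixed[t] = y_fixed[t - 1] + rng.normal()   # exact unit root, for comparison
    y_stur[t] = a_t * y_stur[t - 1] + rng.normal()

print("sample std, exact unit root     :", round(float(y_fixed.std()), 1))
print("sample std, stochastic unit root:", round(float(y_stur.std()), 1))
```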

19.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 158–179]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived, and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that the proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
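The ratio-of-sub-sample-statistics idea that the existing tests build on can be sketched quickly. The code below computes KPSS-type statistics on early and late sub-samples of a series that switches from I(0) to I(1) and takes the maximum ratio over candidate break fractions; critical values and the paper's own recursive-estimates and rescaled-range variants are not included.

```python
# Ratio of sub-sample KPSS statistics on simulated data; no critical values.
import warnings
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(9)
T = 400
first = rng.normal(size=T // 2)                  # I(0) in the first half
second = np.cumsum(rng.normal(size=T // 2))      # I(1) in the second half
y = np.concatenate([first, second])

ratios = []
with warnings.catch_warnings():
    warnings.simplefilter("ignore")              # silence kpss interpolation warnings
    for frac in np.arange(0.2, 0.81, 0.05):
        k = int(frac * T)
        stat_early, *_ = kpss(y[:k], regression="c", nlags="auto")
        stat_late, *_ = kpss(y[k:], regression="c", nlags="auto")
        ratios.append(stat_late / stat_early)

print("max ratio statistic:", round(max(ratios), 2))   # large => change from I(0) to I(1)
```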

20.
The maximum eigenvalue (ME) test for seasonal cointegrating ranks is presented using the approach of Cubadda [Oxford Bulletin of Economics and Statistics (2001), Vol. 63, pp. 497–511], which is computationally more efficient than that of Johansen and Schaumburg [Journal of Econometrics (1999), Vol. 88, pp. 301–339]. The asymptotic distributions of the ME test statistics are obtained for several cases that depend on the nature of the deterministic terms. Monte Carlo experiments are conducted to evaluate the relative performance of the proposed ME test and the trace test, and we illustrate both tests using a monthly time series.
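As a rough illustration of how maximum-eigenvalue and trace statistics are compared, the sketch below uses the standard zero-frequency Johansen procedure available in statsmodels; the seasonal cointegration version developed in the paper is not implemented in that library, and the data are simulated with one cointegrating relation among three series.

```python
# Standard (zero-frequency) Johansen statistics only; not the paper's seasonal ME test.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(10)
T = 300
trend1 = np.cumsum(rng.normal(size=T))
trend2 = np.cumsum(rng.normal(size=T))
data = np.column_stack([
    trend1 + rng.normal(0, 0.3, T),              # shares trend1 -> cointegrated pair
    trend1 + rng.normal(0, 0.3, T),
    trend2 + rng.normal(0, 0.3, T),              # driven by an independent trend
])

res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("max-eigenvalue statistics:", np.round(res.lr2, 2))
print("5% critical values       :", np.round(res.cvm[:, 1], 2))
print("trace statistics         :", np.round(res.lr1, 2))
print("5% critical values       :", np.round(res.cvt[:, 1], 2))
```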
