Similar Documents

20 similar documents found.
1.
Using job characteristics theory as a framework, we calculated meta-analytic effect sizes between meaningful work and various outcomes and tested a mediated model of meaningful work predicting proximal and distal outcomes with meta-analytic structural equation modelling (MASEM). From 44 articles (N = 23,144), we found that meaningful work had large correlations (r = 0.70+) with work engagement, commitment, and job satisfaction; moderate to large correlations (r = 0.44 to −0.49) with life satisfaction, life meaning, general health, and withdrawal intentions; and small to moderate correlations (r = −0.19 to 0.33) with organizational citizenship behaviours, self-rated job performance, and negative affect. The best-fitting MASEM model was meaningful work predicting work engagement, commitment, and job satisfaction, and these variables subsequently predicting self-rated performance, organizational citizenship behaviours, and withdrawal intentions. This meta-analysis provides estimated effect sizes between meaningful work and its outcomes and reveals how meaningful work relates directly and indirectly to key outcomes.
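A minimal sketch of the basic pooling step behind such meta-analytic correlations: individual study correlations are Fisher-z transformed, averaged with inverse-variance weights (n − 3), and back-transformed. The `r` and `n` values below are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical study-level correlations and sample sizes (not from the article)
r = np.array([0.65, 0.72, 0.70])
n = np.array([120, 300, 200])

z = np.arctanh(r)                      # Fisher z transform of each correlation
w = n - 3                              # inverse-variance weights for Fisher z
z_bar = np.sum(w * z) / np.sum(w)      # weighted mean on the z scale
r_pooled = np.tanh(z_bar)              # back-transform to the correlation scale
```

The pooled estimate always lands between the smallest and largest input correlations, weighted toward the larger studies.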

2.
Non-negative matrix factorisation (NMF) is an increasingly popular unsupervised learning method. However, parameter estimation in the NMF model is a difficult high-dimensional optimisation problem. We consider algorithms of the alternating least squares type. Solutions to the least squares problem fall in two categories. The first category is iterative algorithms, which include algorithms such as the majorise–minimise (MM) algorithm, coordinate descent, gradient descent and the Févotte-Cemgil expectation–maximisation (FC-EM) algorithm. We introduce a new family of iterative updates based on a generalisation of the FC-EM algorithm. The coordinate descent, gradient descent and FC-EM algorithms are special cases of this new EM family of iterative procedures. Curiously, we show that the MM algorithm is never a member of our general EM algorithm. The second category is based on cone projection. We describe and prove a cone projection algorithm tailored to the non-negative least square problem. We compare the algorithms on a test case and on the problem of identifying mutational signatures in human cancer. We generally find that cone projection is an attractive choice. Furthermore, in the cancer application, we find that a mix-and-match strategy performs better than running each algorithm in isolation.
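For orientation, here is a sketch of the classic multiplicative (Lee–Seung) updates for squared-error NMF, one instance of the MM family the abstract mentions; the paper's EM generalisation and cone-projection algorithms are not reproduced here, and the matrix sizes, rank and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 15))          # non-negative data matrix
r = 4                             # factorisation rank (assumed)
W = rng.random((20, r)) + 1e-3    # positive initialisation keeps updates valid
H = rng.random((r, 15)) + 1e-3

def frob(V, W, H):
    return np.linalg.norm(V - W @ H)

err0 = frob(V, W, H)
for _ in range(200):
    # Multiplicative updates preserve non-negativity and never increase the loss
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
err1 = frob(V, W, H)
```

The elementwise ratio form is why the iterates stay non-negative: starting from positive factors, every update multiplies by a positive quantity.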

3.
The brown rat lives with man in a wide variety of environmental contexts and adversely affects public health by transmission of diseases, bites, and allergies. Understanding behavioral and spatial correlation aspects of pest species can contribute to their effective management and control. Rat sightings can be described by spatial coordinates in a particular region of interest defining a spatial point pattern. In this paper, we investigate the spatial structure of rat sightings in the Latina district of Madrid (Spain) and its relation to a number of distance-based covariates that relate to the proliferation of rats. Given a number of locations, biologically considered as attractor points, the spatial dependence is modeled by distance-based covariates and angular orientations through copula functions. We build a particular spatial trivariate distribution using univariate margins coming from the covariate information and provide predictive distributions for such distances and angular orientations.

4.
In most surveys, one is confronted with missing or, more generally, coarse data. Traditional methods dealing with these data require strong, untestable and often doubtful assumptions, for example, coarsening at random. But due to the resulting, potentially severe bias, there is a growing interest in approaches that only include tenable knowledge about the coarsening process, leading to imprecise but reliable results. In this spirit, we study regression analysis with a coarse categorical dependent variable and precisely observed categorical covariates. Our (profile) likelihood-based approach can incorporate weak knowledge about the coarsening process and thus offers a synthesis of traditional methods and cautious strategies refraining from any coarsening assumptions. This also allows a discussion of the uncertainty about the coarsening process, besides sampling uncertainty and model uncertainty. Our procedure is illustrated with data of the panel study ‘Labour market and social security’ conducted by the Institute for Employment Research, whose questionnaire design produces coarse data.

5.
This paper deals with the finite-sample performance of a set of unit-root tests for cross-correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit-root, while panel tests, by exploiting the large number of cross-sectional units, have been shown to be a promising way of increasing the power of unit-root tests. We investigate the finite sample properties of recently proposed panel unit-root tests for cross-sectionally correlated panels. Specifically, the size and power of Choi's [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng's [Econometrica (2004), Vol. 72, p. 1127], Moon and Perron's [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul's [Econometrics Journal (2003), Vol. 6, p. 217] tests are analysed by a Monte Carlo simulation study. In synthesis, Moon and Perron's tests show good size and power for different values of T and N, and model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of non-stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross-sectional units heterogeneously.
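The size calculations in such Monte Carlo studies all follow one template: simulate under the null many times, apply the test at a nominal level, and record the rejection frequency. As a generic sketch (a plain two-sided z-test on a mean stands in for the panel unit-root tests actually compared in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
reps, n = 2000, 100
rejections = 0
for _ in range(reps):
    x = rng.normal(size=n)                       # data generated under the null
    z = np.sqrt(n) * x.mean() / x.std(ddof=1)    # test statistic
    rejections += abs(z) > 1.96                  # reject at the nominal 5% level
size = rejections / reps                         # empirical size, ideally near 0.05
```

A "strongly oversized" test, in the abstract's terminology, is one whose empirical rejection frequency under the null lands well above the nominal 0.05.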

6.
In many surveys, imputation procedures are used to account for non-response bias induced by either unit non-response or item non-response. Such procedures are optimised (in terms of reducing non-response bias) when the models include covariates that are highly predictive of both response and outcome variables. To achieve this, we propose a method for selecting sets of covariates used in regression imputation models or to determine imputation cells for one or more outcome variables, using the fraction of missing information (FMI) as obtained via a proxy pattern-mixture (PPM) model as the key metric. In our variable selection approach, we use the PPM model to obtain a maximum likelihood estimate of the FMI for separate sets of candidate imputation models and look for the point at which changes in the FMI level off and further auxiliary variables do not improve the imputation model. We illustrate our proposed approach using empirical data from the Ohio Medicaid Assessment Survey and from the Service Annual Survey.

7.
Organizational scholars have studied the impact of sex on sexual harassment outcomes but left unexplored the influences of race. Thus, we use social identity theory to explore the role of race stereotypes and their influences on sexual harassment outcomes. We posit that stereotypes of African-American women tend to be much more negative than those of white women, and that this serves to marginalize their position both as victims of sexual harassment and as complainants.

8.
While the likelihood ratio measures statistical support for an alternative hypothesis about a single parameter value, it is undefined for an alternative hypothesis that is composite in the sense that it corresponds to multiple parameter values. Regarding the parameter of interest as a random variable enables measuring support for a composite alternative hypothesis without requiring the elicitation or estimation of a prior distribution, as described below. In this setting, in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference in the posterior and prior log-odds. That ideal support may be replaced by any measure of support that, on a per-observation basis, is asymptotically unbiased as a predictor of the ideal support. Such measures of support are easily interpreted and, if desired, can be combined with any specified or estimated prior probability of the null hypothesis. Two qualifying measures of support are minimax-optimal. An application to proteomics data indicates that a modification of optimal support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins.
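A toy illustration of the "ideal support" quantity, the difference between posterior and prior log-odds of a composite hypothesis, here H1: θ > 0 in a conjugate normal model. All numbers (prior, data, variance) are invented for illustration, not from the article's proteomics application.

```python
import numpy as np
from scipy.stats import norm

prior_mean, prior_sd = 0.0, 1.0          # assumed N(0, 1) prior on theta
sigma = 1.0                              # known data standard deviation
x = np.array([0.8, 1.2, 0.5, 1.0])       # observed sample (invented)

# Conjugate normal update of the prior given the sample
n = len(x)
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + x.sum() / sigma**2)

p_prior = 1 - norm.cdf(0, prior_mean, prior_sd)         # prior P(H1: theta > 0)
p_post = 1 - norm.cdf(0, post_mean, np.sqrt(post_var))  # posterior P(H1)

def log_odds(p):
    return np.log(p / (1 - p))

# Ideal support for H1 over its complement: change in log-odds
support = log_odds(p_post) - log_odds(p_prior)
```

With a prior symmetric about zero the prior log-odds are exactly zero, so the support reduces to the posterior log-odds; positive data shift it upward.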

9.
A rich theory of production and analysis of productive efficiency has developed since the pioneering work by Tjalling C. Koopmans and Gerard Debreu. Michael J. Farrell published the first empirical study, and it appeared in a statistical journal (Journal of the Royal Statistical Society), even though the article provided no statistical theory. The literature in econometrics, management sciences, operations research and mathematical statistics has since been enriched by hundreds of papers trying to develop or implement new tools for analysing productivity and efficiency of firms. Both parametric and non-parametric approaches have been proposed. The mathematical challenge is to derive estimators of production, cost, revenue or profit frontiers, which represent, in the case of production frontiers, the optimal loci of combinations of inputs (like labour, energy and capital) and outputs (the products or services produced by the firms). Optimality is defined in terms of various economic considerations. Then the efficiency of a particular unit is measured by its distance to the estimated frontier. The statistical problem can be viewed as the problem of estimating the support of a multivariate random variable, subject to some shape constraints, in multiple dimensions. These techniques are applied in thousands of papers in the economic and business literature. This ‘guided tour’ reviews the development of various non-parametric approaches since the early work of Farrell. Remaining challenges and open issues in this challenging arena are also described. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute
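One of the simplest non-parametric frontier estimators such a review covers is the free disposal hull (FDH): a unit's input efficiency is the smallest input used by any unit producing at least as much output, relative to its own input. A minimal single-input, single-output sketch with invented firm data:

```python
import numpy as np

# Invented single-input / single-output firms
x = np.array([2.0, 4.0, 3.0, 5.0])   # inputs
y = np.array([1.0, 3.0, 3.0, 4.0])   # outputs

def fdh_input_eff(i):
    # Units dominating unit i on output (a unit always dominates itself)
    dominating = y >= y[i]
    return x[dominating].min() / x[i]

eff = np.array([fdh_input_eff(i) for i in range(len(x))])
```

Because each unit dominates itself, FDH efficiencies never exceed 1; here firm 1 (input 4, output 3) scores 3/4 because firm 2 produces the same output from less input.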

10.
In panel studies, where a categorical response is measured at two points in time, we can examine two kinds of hypotheses regarding the nature of change. The first relates to change at the individual level (gross change) through the modelling of the joint distribution of responses. The second relates to aggregate change (net change) through the modelling of the marginal distributions of responses. This paper describes a general approach to the analysis of two-wave panel data, based on Lang and Agresti's work (1994), that simultaneously permits the modelling of marginal and joint distributions of responses. This approach is illustrated with data from Heatherton et al. (1997) about change in dieting behaviour. These data were originally analyzed using the χ2 statistic to test independence of responses. This paper shows how a better understanding of these data can be obtained using the proposed methodological approach.
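A toy sketch of the marginal (net-change) side of two-wave panel analysis: McNemar's test of marginal homogeneity for a 2×2 table of binary responses at times 1 and 2. The table below is invented and is not the Heatherton et al. data.

```python
import numpy as np
from scipy.stats import chi2

# Rows: response at time 1; columns: response at time 2 (invented counts)
table = np.array([[40, 15],
                  [5, 40]])

b, c = table[0, 1], table[1, 0]   # discordant cells drive net change
stat = (b - c) ** 2 / (b + c)     # McNemar chi-squared statistic
p_value = chi2.sf(stat, df=1)     # reference: chi-squared with 1 df
```

Only the discordant cells matter here, which is exactly why marginal-change tests can miss the individual-level (gross-change) structure the joint model captures.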

11.
We show how differences in aggregate human development outcomes over time and space can be additively decomposed into a pure mean income (growth) component, a component attributed to differences in the distribution of income, and components attributed to ‘non-income’ factors and differences in the model linking outcomes to income and non-income characteristics. The income effect at the micro level is modelled non-parametrically, so as to flexibly reflect potentially complex distributional changes. Our proposed method is illustrated using data for Morocco and Vietnam, and the results offer some surprising insights into the observed aggregate gains in schooling attainments.

12.
Binary response index models may be affected by several forms of misspecification, which range from pure functional form problems (e.g. incorrect specification of the link function, neglected heterogeneity, heteroskedasticity) to various types of sampling issues (e.g. covariate measurement error, response misclassification, endogenous stratification, missing data). In this article we examine the ability of several versions of the RESET test to detect such misspecifications in an extensive Monte Carlo simulation study. We find that: (i) the best variants of the RESET test are clearly those based on one or two fitted powers of the response index; and (ii) the loss of power resulting from using the RESET instead of a test directed against a specific type of misspecification is very small in many cases.
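To fix ideas, here is the classical RESET logic in the simpler linear-regression setting (the article studies its binary-response analogue): fit the model, augment it with powers of the fitted index, and test whether the added terms improve the fit. Data and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # correctly specified linear DGP

# Baseline fit
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
rss0 = np.sum((y - fitted) ** 2)

# Augment with one fitted power (the best-performing variants in the article
# use one or two powers of the fitted index)
Xa = np.column_stack([X, fitted**2])
beta_a, *_ = np.linalg.lstsq(Xa, y, rcond=None)
rss1 = np.sum((y - Xa @ beta_a) ** 2)

q = 1                                    # number of added powers
F = ((rss0 - rss1) / q) / (rss1 / (n - Xa.shape[1]))
```

Under a correctly specified model the added power contributes little, so the F statistic stays small; misspecification of the functional form tends to inflate it.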

13.
In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), with the commonly used sequential approach of Johansen [Likelihood-based Inference in Cointegrated Vector Autoregressive Models (1996)] based around the use of either asymptotic or wild bootstrap-based likelihood ratio type tests. Complementing recent work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct co-integration rank across different values of the co-integration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall as it avoids a significant tendency seen in the BIC-based method to over-estimate the co-integration rank in relatively small sample sizes.

14.
This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare with results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
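A sketch of the GQF step only: approximating the prior expectation in a Bayesian design criterion by Gauss–Hermite quadrature for a normal prior θ ~ N(μ, σ²). The criterion `f` below is a placeholder (θ², chosen so the exact answer μ² + σ² is known), not the paper's D-optimality criterion.

```python
import numpy as np

mu, sd = 1.0, 0.5                                # assumed normal prior on theta
nodes, weights = np.polynomial.hermite.hermgauss(10)

def expect(f):
    # Change of variables theta = mu + sqrt(2)*sd*x maps the Gauss-Hermite
    # weight exp(-x^2) onto the N(mu, sd^2) density
    return np.sum(weights * f(mu + np.sqrt(2) * sd * nodes)) / np.sqrt(np.pi)

approx = expect(lambda t: t**2)                  # exact value: mu^2 + sd^2 = 1.25
```

With 10 nodes the rule is exact for polynomials up to degree 19, so the placeholder integrand is recovered to machine precision; real design criteria are only approximated, which is why the paper compares different choices of GQF.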

15.
16.
The focus of this article is modeling the magnitude and duration of monotone periods of log-returns. For this, we propose a new bivariate law assuming that the probabilistic framework over the magnitude and duration is based on the joint distribution of (X,N), where N is geometrically distributed and X is the sum of an identically distributed sequence of inverse-Gaussian random variables independent of N. In this sense, X and N represent the magnitude and duration of the log-returns, respectively, and the magnitude comes from an infinite mixture of inverse-Gaussian distributions. This new model is named the bivariate inverse-Gaussian geometric law. We provide statistical properties of the model and explore stochastic representations. In particular, we show that this law is infinitely divisible, and with this, an induced Lévy process is proposed and studied in some detail. Estimation of the parameters is performed via maximum likelihood, and Fisher's information matrix is obtained. An empirical illustration with the log-returns of Tyco International stock demonstrates the superior performance of the proposed law compared to an existing model. We expect that the proposed law can be considered a powerful tool in the modeling of log-returns and other episode analyses such as water resources management, risk assessment, and civil engineering projects.
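The construction is easy to simulate from directly: draw a geometric duration N, then sum N i.i.d. inverse-Gaussian (Wald) variables to get the magnitude X. Parameter values below are illustrative only, not estimates from the Tyco data.

```python
import numpy as np

rng = np.random.default_rng(7)

def draw(p=0.3, mu=1.0, lam=2.0):
    """One draw of (duration N, magnitude X) from the geometric / IG-sum model."""
    n = rng.geometric(p)                     # duration: number of monotone periods
    x = rng.wald(mu, lam, size=n).sum()      # magnitude: sum of N IG variables
    return n, x

sample = [draw() for _ in range(1000)]
mean_n = np.mean([n for n, _ in sample])     # should be near 1/p = 10/3
```

Conditioning on N, the magnitude is inverse-Gaussian (sums of i.i.d. IG variables are IG), so marginally X is the infinite IG mixture the abstract describes.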

17.
Multiple event data are frequently encountered in medical follow-up, engineering and other applications when the multiple events are considered as the major outcomes. They may be repetitions of the same event (recurrent events) or may be events of different nature. Times between successive events (gap times) are often of direct interest in these applications. The stochastic-ordering structure and within-subject dependence of multiple events generate statistical challenges for analysing such data, including induced dependent censoring and non-identifiability of marginal distributions. This paper provides an overview of a class of existing non-parametric estimation methods for gap time distributions for various types of multiple event data, where sampling bias from induced dependent censoring is effectively adjusted. We discuss the statistical issues in gap time analysis, describe the estimation procedures and illustrate the methods with a comparative simulation study and a real application to an AIDS clinical trial. A comprehensive understanding of challenges and available methods for non-parametric analysis can be useful because there is no existing standard approach to identifying an appropriate gap time method that can be used to address the research question of interest. The methods discussed in this review would allow practitioners to effectively handle a variety of real-world multiple event data.

18.
This article considers the problem of testing for cross-section independence in limited dependent variable panel data models. It derives a Lagrangian multiplier (LM) test and shows that in terms of generalized residuals of Gourieroux et al. (1987) it reduces to the LM test of Breusch and Pagan (1980). Because of the tendency of the LM test to over-reject in panels with large N (cross-section dimension), we also consider the application of the cross-section dependence test (CD) proposed by Pesaran (2004). In Monte Carlo experiments it emerges that for most combinations of N and T the CD test is correctly sized, whereas the validity of the LM test requires T (time series dimension) to be quite large relative to N. We illustrate the cross-sectional independence tests with an application to a probit panel data model of roll-call votes in the US Congress and find that the votes display a significant degree of cross-section dependence.
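A minimal sketch of Pesaran's (2004) CD statistic as commonly stated: scale the sum of all pairwise residual correlations by sqrt(2T / (N(N−1))). Here simulated independent series stand in for model residuals, so the statistic should behave roughly like a standard normal draw; dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 10, 50
resid = rng.normal(size=(N, T))          # stand-in for generalized residuals

# Pairwise sample correlations between the N residual series
rho = np.corrcoef(resid)
pairs = [rho[i, j] for i in range(N) for j in range(i + 1, N)]

cd = np.sqrt(2 * T / (N * (N - 1))) * np.sum(pairs)
```

Because the correlations enter as a signed sum rather than a sum of squares (as in the Breusch–Pagan LM test), the CD statistic stays well behaved even when N is large relative to T.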

19.
A wide class of prior distributions for the Poisson-gamma hierarchical model is proposed. Prior distributions in this class carry vague information in the sense that their tails exhibit slow decay. Conditions for the propriety of the resulting posterior density are determined, as well as for the existence of posterior moments of the Poisson rate of either an observed or an unobserved unit.
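For context, the conjugate core of the Poisson-gamma hierarchy (before the paper's heavier-tailed priors are layered on top): with a Gamma(a, b) prior in the rate parameterisation (an assumption here) and observed counts y, the Poisson rate has a Gamma(a + Σy, b + n) posterior. Hyperparameters and counts below are invented.

```python
# Illustrative prior hyperparameters and observed counts (not from the paper)
a, b = 2.0, 1.0
y = [3, 5, 2, 4]

# Conjugate update: Gamma(a, b) prior + Poisson likelihood -> Gamma posterior
a_post = a + sum(y)             # shape: 2 + 14 = 16
b_post = b + len(y)             # rate:  1 + 4 = 5
post_mean = a_post / b_post     # posterior mean of the Poisson rate: 3.2
```

The paper's contribution concerns what happens when this gamma layer is itself given a vague, slowly decaying prior, which is when posterior propriety is no longer automatic.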

20.
We consider Grenander-type estimators for a monotone function λ, obtained as the slope of a concave (convex) estimate of the primitive of λ. Our main result is a central limit theorem for the Hellinger loss, which applies to estimation of a probability density, a regression function or a failure rate. In the case of density estimation, the limiting variance of the Hellinger loss turns out to be independent of λ.
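In the density case the construction is concrete: the Grenander estimator of a non-increasing density on [0, ∞) is the left derivative of the least concave majorant of the empirical distribution function. A self-contained sketch using a simple upper-hull scan; the sample size and the exponential data are illustrative.

```python
import numpy as np

def grenander(x):
    """Grenander estimator of a non-increasing density on [0, inf):
    slopes of the least concave majorant of the empirical CDF.
    Returns the majorant's knots and the piecewise-constant density values."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    xs = np.concatenate(([0.0], x))      # ECDF knots, including the origin
    ys = np.arange(n + 1) / n
    # Upper concave hull via a stack scan over the ECDF points
    hull = [0]
    for i in range(1, n + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # drop b if it lies on or below the chord from a to i
            if (ys[b] - ys[a]) * (xs[i] - xs[a]) <= (ys[i] - ys[a]) * (xs[b] - xs[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    slopes = [(ys[hull[k + 1]] - ys[hull[k]]) / (xs[hull[k + 1]] - xs[hull[k]])
              for k in range(len(hull) - 1)]
    return xs[hull], np.array(slopes)

rng = np.random.default_rng(0)
knots, dens = grenander(rng.exponential(size=100))
```

The returned slopes are strictly decreasing by construction and integrate to one over the knot intervals, i.e. the output is itself a non-increasing density.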
