Similar Documents
20 similar documents found (search time: 0 ms)
1.
Estimation with longitudinal Y having nonignorable dropout is considered when the joint distribution of Y and covariate X is nonparametric and the dropout propensity conditional on (Y,X) is parametric. We apply the generalised method of moments to estimate the parameters in the nonignorable dropout propensity, based on estimating equations constructed using an instrument Z: a part of X that is related to Y but unrelated to the dropout propensity conditional on Y and the other covariates. Population means and other parameters in the nonparametric distribution of Y can then be estimated by inverse propensity weighting with the estimated propensity. To improve efficiency, we derive a model-assisted regression estimator that makes use of information provided by the covariates and previously observed Y-values in the longitudinal setting. The model-assisted regression estimator is protected against model misspecification and is asymptotically normal; it is more efficient when the working models are correct and some other conditions are satisfied. The finite-sample performance of the estimators is studied through simulation, and an application to the HIV-CD4 data set is presented as an illustration.
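As a concrete illustration of the inverse propensity weighting idea in this abstract, here is a minimal sketch on simulated data. The propensity model and all numbers are hypothetical; a real analysis would estimate the propensity (e.g. by GMM with an instrument), whereas this sketch plugs in the true one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated outcome with mean 2; the response probability depends on y
# itself, so the complete cases are a biased sample (nonignorable dropout).
n = 10_000
y = rng.normal(loc=2.0, scale=1.0, size=n)
pi = 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * (y - 2.0))))  # response propensity
r = rng.random(n) < pi                                # observed indicator

# Naive complete-case mean is biased upwards (high y is more likely observed).
naive_mean = y[r].mean()

# Inverse propensity weighted (Hajek) mean corrects the bias by
# weighting each observed unit by 1 / pi.
ipw_mean = np.sum(y[r] / pi[r]) / np.sum(1.0 / pi[r])
```

The naive mean overshoots the true mean of 2, while the weighted mean recovers it.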

2.
So far, statistics has mainly relied on information collected from censuses and sample surveys, which are used to produce statistics about selected characteristics of the population. However, because of cost cuts and increasing non-response in sample surveys, statisticians have started to search for new sources of information, such as registers, Internet data sources (IDSs, i.e. web portals) or big data. Administrative sources are already used for purposes of official statistics, while the suitability of the latter two sources is currently being discussed in the literature. Unfortunately, only a few papers devoted to statistical theory point out methodological problems related to the use of IDSs, particularly in the context of survey methodology. The unknown generation mechanism and the complexity of such data are often neglected in view of their size. Hence, before IDSs can be used for statistical purposes, especially for official statistics, they need to be assessed in terms of such fundamental issues as representativeness, non-sampling errors and bias. The paper attempts to fill the first of these gaps by proposing a two-step procedure to measure the representativeness of IDSs. The procedure is exemplified using data about the secondary real-estate market in Poland.

3.
In spite of the abundance of clustering techniques and algorithms, clustering mixed interval (continuous) and categorical (nominal and/or ordinal) scale data remains a challenging problem. In order to identify the most effective approaches for clustering mixed-type data, we use both theoretical and empirical analyses to present a critical review of the strengths and weaknesses of the methods identified in the literature. Guidelines on which approaches to use under different scenarios are provided, along with potential directions for future research.
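One widely used building block for the mixed-type clustering problem reviewed above is Gower's dissimilarity, which averages range-scaled absolute differences for numeric variables with simple mismatch for categorical ones. A minimal sketch (the records and the numeric range are made up):

```python
import numpy as np

def gower_distance(x, y, num_idx, cat_idx, ranges):
    """Gower dissimilarity between two mixed-type records.

    num_idx / cat_idx index the numeric and categorical entries;
    ranges[j] is the observed range of numeric variable j, used to
    scale absolute differences into [0, 1].
    """
    d = [abs(x[j] - y[j]) / ranges[j] for j in num_idx]
    d += [0.0 if x[j] == y[j] else 1.0 for j in cat_idx]
    return float(np.mean(d))

a = [1.0, "red"]
b = [3.0, "blue"]
dist = gower_distance(a, b, num_idx=[0], cat_idx=[1], ranges={0: 4.0})
# numeric part: |1 - 3| / 4 = 0.5; categorical mismatch: 1.0; mean = 0.75
```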

4.
Incomplete data are a common problem in survey research. Recent work on multiple imputation techniques has increased analysts’ awareness of the biasing effects of missing data and has also provided a convenient solution. Imputation methods replace non-response with estimates of the unobserved scores. In many instances, however, non-response to a stimulus does not result from measurement problems that inhibit accurate surveying of empirical reality, but from the inapplicability of the survey question. In such cases, existing imputation techniques replace valid non-response with counterfactual estimates of a situation in which the stimulus is applicable to all respondents. This paper suggests an alternative imputation procedure for incomplete data for which no true score exists: multiple complete random imputation, which overcomes the biasing effects of missing data and allows analysts to model respondents’ valid ‘I don’t know’ answers.
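A minimal sketch of the mechanical core of random imputation repeated M times (each missing entry replaced by a random draw from the observed values); the data are hypothetical, and this is only the skeleton of the idea, not the paper's full procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

y = np.array([3.0, np.nan, 5.0, np.nan, 4.0, 6.0])   # hypothetical item scores
observed = y[~np.isnan(y)]

M = 5                                    # number of imputations
completed = []
for _ in range(M):
    y_imp = y.copy()
    miss = np.isnan(y_imp)
    # Draw imputations at random from the observed values.
    y_imp[miss] = rng.choice(observed, size=miss.sum(), replace=True)
    completed.append(y_imp)

# Pool across the M completed data sets (Rubin-style point estimate).
pooled_mean = np.mean([c.mean() for c in completed])
```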

5.
Multiple event data are frequently encountered in medical follow-up, engineering and other applications in which multiple events are the major outcomes. They may be repetitions of the same event (recurrent events) or events of different natures. Times between successive events (gap times) are often of direct interest in these applications. The stochastic-ordering structure and within-subject dependence of multiple events generate statistical challenges for analysing such data, including induced dependent censoring and non-identifiability of marginal distributions. This paper provides an overview of a class of existing non-parametric estimation methods for gap time distributions for various types of multiple event data, in which sampling bias from induced dependent censoring is effectively adjusted. We discuss the statistical issues in gap time analysis, describe the estimation procedures and illustrate the methods with a comparative simulation study and a real application to an AIDS clinical trial. A comprehensive understanding of the challenges and available methods for non-parametric analysis is useful because there is no standard approach to identifying an appropriate gap time method for a given research question. The methods discussed in this review allow practitioners to handle a variety of real-world multiple event data effectively.
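For the first gap time, censoring is still independent, so the ordinary Kaplan-Meier estimator applies; it is the later gaps that need the adjustments reviewed above. A minimal Kaplan-Meier sketch, assuming distinct event times and made-up data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates at observed event times.

    times: gap times (event or censoring); events: 1 = event, 0 = censored.
    Assumes distinct times for simplicity.
    """
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    n = len(times)
    surv, out = 1.0, []
    for i, (t, d) in enumerate(zip(times, events)):
        if d:                          # event: multiply in factor (1 - 1/at_risk)
            surv *= 1.0 - 1.0 / (n - i)
            out.append((t, surv))
    return out

# One censored observation at 2.5 shrinks the risk set but adds no factor.
est = kaplan_meier([1.0, 2.0, 2.5, 4.0], [1, 1, 0, 1])
```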

6.
This paper surveys the empirical research on fiscal policy analysis based on real-time data. This literature can be broadly divided into four groups that focus on: (1) the statistical properties of revisions in fiscal data; (2) the political and institutional determinants of projection errors by governments; (3) the reaction of fiscal policies to the business cycle; and (4) the use of real-time fiscal data in structural vector autoregression (VAR) models. It emerges, first, that fiscal revisions are large and initial releases are biased estimates of final values. Secondly, strong fiscal rules and institutions lead to more accurate releases of fiscal data and smaller deviations of fiscal outcomes from government plans. Thirdly, the cyclical stance of fiscal policies is estimated to be more ‘counter-cyclical’ when real-time data are used instead of ex post data. Fourthly, real-time data can be useful for the identification of fiscal shocks. Finally, it is shown that existing real-time fiscal data sets cover only a limited number of countries and variables. For example, real-time data for developing countries are generally unavailable, and real-time data on European countries are often missing, especially with respect to government revenues and expenditures. Therefore, more work is needed in this field.

7.
The growth of non-response rates for social science surveys has led to increased concern about the risk of non-response bias. Unfortunately, the non-response rate is a poor indicator of when non-response bias is likely to occur. In this paper we consider a set of alternative indicators. A large-scale simulation study is used to explore how each of these indicators performs in a variety of circumstances. Although, as expected, none of the indicators fully depicts the impact of non-response on survey estimates, we discuss how they can be used to build a plausible account of the risks of non-response bias for a survey. We also describe an interesting characteristic of the fraction of missing information that may be helpful in diagnosing not-missing-at-random mechanisms in certain situations.
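The fraction of missing information mentioned above is usually computed from multiply imputed analyses via Rubin's rules; a minimal sketch with made-up numbers (m = 5 imputations, simple large-sample version of the FMI):

```python
import numpy as np

# Hypothetical point estimates and within-imputation variances from
# m = 5 completed data sets.
est = np.array([10.1, 10.4, 9.9, 10.2, 10.0])
var = np.array([0.50, 0.52, 0.49, 0.51, 0.50])

m = len(est)
W = var.mean()                  # within-imputation variance
B = est.var(ddof=1)             # between-imputation variance
T = W + (1 + 1 / m) * B         # total variance (Rubin's rules)
fmi = (1 + 1 / m) * B / T       # fraction of missing information
```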

8.
This article proposes a new method of project performance evaluation by which project performance data can be better understood. It combines principal component analysis (PCA) and data envelopment analysis (DEA) to evaluate the efficiency of decision-making units more accurately. The data were based on energy projects promoted by the Bureau of Energy at the Ministry of Economic Affairs in Taiwan. The results show that combining PCA and DEA yields a better evaluation of energy project performance than using DEA alone.
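The DEA side of this combination can be sketched as a small linear program (the input-oriented CCR envelopment model). The three single-input, single-output units below are made up, and a real PCA + DEA analysis would first replace correlated inputs and outputs with principal components:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.

    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Solves: min theta  s.t.  sum_j lam_j X[j] <= theta * X[o],
                             sum_j lam_j Y[j] >= Y[o],  lam >= 0.
    """
    n = X.shape[0]
    c = np.zeros(n + 1)              # decision vector v = [theta, lam_1..lam_n]
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):      # input constraints
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(Y.shape[1]):      # output constraints
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun

X = np.array([[2.0], [4.0], [3.0]])   # one input per unit
Y = np.array([[2.0], [2.0], [3.0]])   # one output per unit
eff = [ccr_efficiency(X, Y, o) for o in range(3)]
```

Units 1 and 3 lie on the efficient frontier (efficiency 1); unit 2 uses twice the input per unit of output and scores 0.5.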

9.
Logistic regression analysis may well be used to develop a predictive model for a dichotomous medical outcome, such as short-term mortality. When the data set is small compared to the number of covariables studied, shrinkage techniques may improve predictions. We compared the performance of three variants of shrinkage techniques: 1) a linear shrinkage factor, which shrinks all coefficients with the same factor; 2) penalized maximum likelihood (or ridge regression), where a penalty factor is added to the likelihood function such that coefficients are shrunk individually according to the variance of each covariable; 3) the Lasso, which shrinks some coefficients to zero by setting a constraint on the sum of the absolute values of the coefficients of standardized covariables.
Logistic regression models were constructed to predict 30-day mortality after acute myocardial infarction. Small data sets were created from a large randomized controlled trial, half of which provided independent validation data. We found that all three shrinkage techniques improved the calibration of predictions compared to the standard maximum likelihood estimates. This study illustrates that shrinkage is a valuable tool to overcome some of the problems of overfitting in medical data.
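A minimal sketch of two of the three shrinkage variants (ridge and lasso) on synthetic data, using scikit-learn rather than the article's trial data; a very large C approximates unpenalized maximum likelihood, and the linear-shrinkage-factor variant is omitted:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Small sample, many covariables: the situation where shrinkage helps.
X, y = make_classification(n_samples=80, n_features=10, n_informative=3,
                           random_state=0)
X = StandardScaler().fit_transform(X)     # standardize, as the lasso requires

mle = LogisticRegression(C=1e6, max_iter=5000).fit(X, y)       # ~unpenalized
ridge = LogisticRegression(C=0.1, max_iter=5000).fit(X, y)     # L2 penalty
lasso = LogisticRegression(penalty="l1", C=0.1,
                           solver="liblinear").fit(X, y)       # L1 penalty
```

Ridge shrinks all coefficients towards zero; the lasso additionally sets some exactly to zero.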

10.
This study set out to evaluate the financing efficiency of low-carbon companies. Applying a three-stage data envelopment analysis with the data from 85 listed companies in China's low-carbon industries over the period 2011 to 2017, this study has found that the overall financing efficiency of low-carbon companies was relatively high, and the pure technical efficiency was quite steady over the period. The overall financing efficiency of these low-carbon companies on average tended to change with the scale efficiency. This study has also shown that the scale efficiency was the main constraint influencing the financing efficiency of low-carbon companies in China over the period. Our results are robust and have significant implications for policy makers and corporate managers.

11.
Increasing precision of measurement is a goal of scientific advancement, but Nunnally's (1978) .70 benchmark for coefficient alpha has remained the omnibus test of reliability for nearly 40 years. This likely persists because there is only scattered empirical evidence of the degree to which the field has met or surpassed this standard. Using the meta-analytic techniques known as reliability generalization (RG), we cumulate alphas across 36 commonly used individual differences, attitudes, and behaviours from 1675 independent samples (N = 991,588). Our primary finding is that alphas almost always exceed .70 and generally fall above .80. In addition, we identify factors that moderate alpha, including the specific measure used, the number of scale items, and the rater. The study provides baseline alphas that can be used for research planning and design; it also offers best practices for RG and notes the benefits of RG for understanding systematic fluctuations in reliability.
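Coefficient alpha itself is simple to compute: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A sketch on simulated parallel items (all numbers hypothetical; with this noise level alpha lands above the .80 range noted above):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=500)
# Four parallel items: true score plus independent measurement error.
items = true_score[:, None] + 0.5 * rng.normal(size=(500, 4))
alpha = cronbach_alpha(items)
```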

12.
Taking six classic management case-study papers published by Eisenhardt since 2000 as examples, this paper analyses the application of four major data-analysis techniques in management case research: Glaser and Strauss's grounded theory technique, Miles and Huberman's qualitative data analysis technique, Yin's case study data analysis technique, and Eisenhardt's case study data analysis technique.

13.
This paper reviews the recent literature on conditional duration modeling in high-frequency finance. Conditional duration models describe the time intervals between trades, price changes, and volume changes of stocks traded in a financial market. An earlier review by Pacurar provides an exhaustive survey of the first-generation and some of the second-generation conditional duration models. We consider almost all of the third-generation and some of the second-generation models. Notable applications of these models and related empirical studies are discussed. The paper may be seen as an extension of Pacurar's review.
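The workhorse of this literature, the ACD(1,1) model of Engle and Russell, is easy to simulate; a minimal sketch with made-up parameters, in which the conditional mean duration psi follows a GARCH-like recursion and the innovation is unit exponential:

```python
import numpy as np

rng = np.random.default_rng(0)

# ACD(1,1): psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
#           x_i   = psi_i * eps_i,  eps_i ~ Exp(1).
omega, alpha, beta = 0.1, 0.2, 0.7
n = 50_000
x = np.empty(n)
psi = omega / (1.0 - alpha - beta)    # start at the unconditional mean
for i in range(n):
    x[i] = psi * rng.exponential()
    psi = omega + alpha * x[i] + beta * psi

# Unconditional mean duration: omega / (1 - alpha - beta) = 1.0
```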

14.
Environmental reporting is a tool of corporate environmental management that can also be used as research material. The aim of this paper is to produce a comprehensive definition of eco-efficiency based on the literature and then compare it with the definitions identified in the environmental reports published by selected companies. In addition, the paper presents a conceptual framework of the relationship between environmental and economic performance in companies. Three Finnish companies in the forest industry are selected as case companies, and the analysis reviews the environmental reports they published from 1998 to 2007. In short, eco-efficiency can be seen either as an indicator of environmental performance or as a business strategy for sustainable development. The case companies very seldom give an exact definition of eco-efficiency in their environmental reports; however, different aspects of eco-efficiency are often referred to. Copyright © 2012 John Wiley & Sons, Ltd and ERP Environment.

15.
Against the background of consumption upgrading, data are growing explosively, bringing both opportunities and challenges to firms, and new business models keep emerging. Drawing on dynamic capability theory and big data theory, and taking Xiaomi and NetEase Yanxuan (网易严选) as cases, this paper studies the mechanism and paths by which big data capability affects business model innovation in the context of consumption upgrading. It finds that big data capability drives business model innovation through three paths: customer value, firm resources and capabilities, and the profit model. It also examines the relationships among three dimensions of big data capability (resource integration, deep analysis, and real-time insight and prediction) and constructs a model of the mechanism by which big data capability influences business model innovation, extending research on big data capability and on business model innovation in established firms.

16.
In this study, we examine the cross‐cultural differences in human resource (HR) managers’ beliefs in effective HR practices by surveying HR practitioners in Finland (N = 86), South Korea (N = 147), and Spain (N = 196). Similar to previous studies from the United States, the Netherlands, and Australia, there are large discrepancies between HR practitioner beliefs and research findings, particularly in the area of staffing. In addition, we find that interpersonal‐oriented aspects of HR practices tend to be more culturally bound than technical‐oriented aspects of HR practices. We interpret the differences using Hofstede's cultural dimensions (Power Distance, Individualism versus Collectivism, Masculinity versus Femininity, Long‐Term Orientation versus Short‐Term Orientation, and Uncertainty Avoidance). We discuss the overall nature of the science‐practice gap in HR management, and the implications for evidence‐based management. © 2014 Wiley Periodicals, Inc.

17.
Li Xun & Zhang Chuanping, Value Engineering (价值工程), 2010, 29(22): 48-49.
Using a questionnaire survey of various categories of professional and technical personnel, and based on factor analysis, this study derives a four-factor model of professional and technical personnel's competencies. Among the factors, organizational identification is relatively high, whereas personality traits, professional knowledge and skills, and general ability are relatively low; differences in the factor levels are also found across professional ranks.

18.
Cross-validation is a widely used tool for selecting the smoothing parameter in a non-parametric procedure. However, it suffers from large sampling variation and tends to overfit the data set. Many attempts have been made to reduce the variance of cross-validation. This paper focuses on two recent proposals of extrapolation-based cross-validation bandwidth selectors: indirect cross-validation and the subsampling-extrapolation technique. In the univariate case, we note that using a fixed-parameter surrogate for indirect cross-validation works poorly when the true density is hard to estimate, while the subsampling-extrapolation technique is more robust to non-normality. We investigate whether a hybrid bandwidth selector could benefit from the advantages of both approaches, and we compare the performance of different extrapolation-based bandwidth selectors through simulation studies, real data analyses and large-sample theory. A discussion of their extension to the bivariate case is also presented.

19.
We present a modern perspective on the conditional likelihood approach to the analysis of capture-recapture experiments, which shows the conditional likelihood to be a member of the generalized linear model (GLM) family. Hence, there is the potential to apply the full range of GLM methodologies. To put this method in context, we first review some approaches to capture-recapture experiments with heterogeneous capture probabilities in closed populations, covering parametric and non-parametric mixture models and the use of covariates. We then review in more detail the analysis of capture-recapture experiments when the capture probabilities depend on a covariate.
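For context, the simplest closed-population capture-recapture estimator, with two occasions and homogeneous capture probabilities, is the Lincoln-Petersen estimator; the conditional-likelihood GLM approach above generalizes this to heterogeneous, covariate-dependent capture probabilities. The counts below are made up:

```python
# n1 animals marked on occasion 1; n2 captured on occasion 2,
# of which m carry marks.
n1, n2, m = 200, 150, 30

N_lp = n1 * n2 / m                             # Lincoln-Petersen estimate
N_chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1  # bias-corrected (Chapman)
```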

20.
How has the impact of ‘good corporate governance’ principles on firm performance changed over time in China? Amassing a database of 84 studies, 684 effect sizes, and 547,622 firm observations, we explore this question by conducting a meta-analysis of the corporate governance literature on China. The weight of evidence demonstrates that two major ‘good corporate governance’ principles, advocating board independence and managerial incentives, are indeed associated with better firm performance. However, we cannot find strong support for the criticisms of CEO duality. In addition, we go beyond a static perspective (such as whether certain governance mechanisms are effective or ineffective) by investigating temporal hypotheses. We reveal that over time, with the improvement in the quality of market institutions and the development of financial markets, the monitoring mechanisms of the board and state ownership become more strongly related to firm performance, whereas the incentive mechanisms lose their significance. Overall, our findings advance a dynamic institution-based view by substantiating the case that institutional transitions matter for the relationship between governance mechanisms and firm performance in the world's second-largest economy.
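The pooling step behind such a meta-analysis can be sketched with inverse-variance weights; the four effect sizes below are hypothetical Fisher-z transformed correlations, weighted by n - 3 (the inverse variance of Fisher's z) as in a fixed-effect model:

```python
import numpy as np

z = np.array([0.10, 0.15, 0.08, 0.12])   # hypothetical Fisher-z effect sizes
n = np.array([200, 500, 300, 400])       # sample sizes

w = n - 3                                # 1 / Var(z_i) for Fisher's z
z_bar = np.sum(w * z) / np.sum(w)        # pooled effect
se = 1.0 / np.sqrt(np.sum(w))            # standard error of the pooled effect
r_bar = np.tanh(z_bar)                   # back-transform to a correlation
```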
