Similar articles (20 results)
1.
Multiple imputation methods properly account for the uncertainty of missing data. One such method for creating multiple imputations is predictive mean matching (PMM), a general-purpose approach. Little is known about the performance of PMM in imputing non-normal semicontinuous data (skewed data with a point mass at a certain value and otherwise continuously distributed). We investigate the performance of PMM as well as dedicated methods for imputing semicontinuous data through simulation studies under univariate and multivariate missingness mechanisms, and we also assess performance on real-life datasets. We conclude that PMM performs at least as well as the investigated dedicated methods and that it is the only method that yields plausible imputations and preserves the original data distributions.
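A minimal sketch of the PMM idea for a semicontinuous variable may help make this concrete (hypothetical toy data; not the simulation design of the study):

```python
import numpy as np

rng = np.random.default_rng(1)

def pmm_impute(y, X, donors=5):
    """One PMM draw: regress y on X, then for each missing case copy the
    observed value of a donor whose predicted mean is closest."""
    obs = ~np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])           # add intercept
    beta = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)[0]
    pred = Xd @ beta                                      # predicted means for all cases
    y_imp = y.copy()
    for i in np.where(~obs)[0]:
        d = np.abs(pred[obs] - pred[i])                   # distance to observed cases
        pool = np.argsort(d)[:donors]                     # closest donors
        y_imp[i] = y[obs][rng.choice(pool)]               # copy an observed value
    return y_imp

# semicontinuous toy data: point mass at zero, skewed positive part
n = 500
x = rng.normal(size=n)
y = np.where(rng.random(n) < 0.4, 0.0, np.exp(0.5 * x + rng.normal(size=n)))
y[rng.random(n) < 0.3] = np.nan                           # make roughly 30% missing
print(pmm_impute(y, x)[:10])
```

Because imputed values are copied from observed donors, the point mass and the skewness of the original distribution carry over, which is the property highlighted above.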

2.
The goal of this article is to develop a flexible Bayesian analysis of regression models for continuous and categorical outcomes. In the models we study, covariate (or regression) effects are modeled additively by cubic splines, and the error distribution (that of the latent outcomes in the case of categorical data) is modeled as a Dirichlet process mixture. We employ a relatively unexplored but attractive basis in which the spline coefficients are the unknown function ordinates at the knots. We exploit this feature to develop a proper prior distribution on the coefficients that involves the first and second differences of the ordinates, quantities about which one may have prior knowledge. We also discuss the problem of comparing models with different numbers of knots or different error distributions through marginal likelihoods and Bayes factors, which are computed within the framework of Chib (1995), as extended to DPM models by Basu and Chib (2003). The techniques are illustrated with simulated and real data.
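As a rough illustration of a smoothness prior on the knot ordinates via their differences (a stylised Gaussian second-difference prior, not necessarily the authors' exact specification):

```python
import numpy as np

K = 10                                   # number of knots / function ordinates
D2 = np.diff(np.eye(K), n=2, axis=0)     # (K-2) x K second-difference matrix: rows [1, -2, 1, ...]
tau = 5.0                                # prior precision on the second differences

# Gaussian smoothness prior on the ordinates g = (g_1, ..., g_K):
#   p(g) proportional to exp(-0.5 * tau * ||D2 g||^2), i.e. precision matrix tau * D2'D2
# (rank K-2, so additional prior information on level and slope, e.g. first differences,
#  is needed to make the prior proper)
prior_prec = tau * D2.T @ D2
print(prior_prec.shape)
```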

3.
In recent years Statistics Netherlands has published several stochastic population forecasts. The degree of uncertainty of the future population is assessed on the basis of assumptions about the probability distribution of future fertility, mortality and migration. The assumptions on fertility are based on an analysis of historic forecasts of the total fertility rate (TFR), on time-series models of observations of the TFR, and on expert knowledge. This latter argument-based approach refers to the TFR distinguished by birth order. In the most recent Dutch forecast, the 95% forecast interval of the total fertility rate in 2050 is assumed to range from 1.2 to 2.3 children per woman.

4.
Contingent knowledge workers will play an increasingly important role in organisational success as workers transition in and out of project-based innovation teams with more frequency. Our research finds that collaborators in the contingent, high-skill workforce face uncertainty challenges in their work that differ from those faced by the independent contingent professionals more often studied. The article proposes a theoretical framework of uncertainty to guide us in understanding collaborative contingent knowledge workers' work experience. Interviews with postdoctoral researchers reveal four findings about the influence of these highly uncertain work environments on collaborative contingent knowledge workers: collaboration isolation, frustrated independence, performance anxiety and internalised blame. Perhaps most concerning is that the workers internalise the negative impacts as personal failings instead of recognising them as consequences of a poorly designed work environment. This study argues for the need to manage and mitigate different sources of uncertainty to avoid creating an unnecessary burden on contingent knowledge workers, and to enable a sustainable, contingent knowledge workforce.

5.
This survey reviews the existing literature on the most relevant Bayesian inference methods for univariate and multivariate GARCH models. The advantages and drawbacks of each procedure are outlined, as are the advantages of the Bayesian approach over classical procedures. The paper emphasizes recent Bayesian non-parametric approaches for GARCH models that avoid imposing arbitrary parametric distributional assumptions. These approaches implicitly assume an infinite mixture of Gaussian distributions for the standardized returns, which has been shown to be more flexible and to describe the uncertainty about future volatilities better. Finally, the survey presents an illustration using real data to show the flexibility and usefulness of the non-parametric approach.
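For orientation only, a classical Gaussian GARCH(1,1) fit, i.e. the kind of parametric baseline that the surveyed non-parametric Bayesian methods relax, can be sketched with the `arch` package on placeholder returns:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
r = rng.standard_t(df=6, size=1000)                # placeholder heavy-tailed daily returns

# Plain Gaussian GARCH(1,1); the non-parametric approaches reviewed above instead let the
# innovation distribution be an infinite mixture of Gaussians.
res = arch_model(r, vol="Garch", p=1, q=1, dist="normal").fit(disp="off")
print(res.params)                                  # mu, omega, alpha[1], beta[1]
print(res.forecast(horizon=5).variance.iloc[-1])   # 1- to 5-step-ahead variance forecasts
```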

6.
In spite of the abundance of clustering techniques and algorithms, clustering mixed interval (continuous) and categorical (nominal and/or ordinal) scale data remains a challenging problem. In order to identify the most effective approaches for clustering mixed-type data, we use both theoretical and empirical analyses to present a critical review of the strengths and weaknesses of the methods identified in the literature. Guidelines on which approaches to use under different scenarios are provided, along with potential directions for future research.
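One widely used building block for mixed-type data is a Gower-style dissimilarity (range-scaled differences for interval variables, simple mismatch for categorical ones) fed into a standard clustering routine; a hand-rolled sketch with made-up columns:

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

df = pd.DataFrame({
    "income": [25.0, 40.0, 38.0, 90.0, 85.0],   # interval-scaled variable
    "region": ["N", "N", "S", "S", "E"],        # nominal variable
})

def gower(df, num_cols, cat_cols):
    n = len(df)
    d = np.zeros((n, n))
    for c in num_cols:                           # range-normalised absolute difference
        x = df[c].to_numpy(float)
        span = np.ptp(x) or 1.0
        d += np.abs(x[:, None] - x[None, :]) / span
    for c in cat_cols:                           # 0/1 mismatch
        x = df[c].to_numpy()
        d += (x[:, None] != x[None, :]).astype(float)
    return d / (len(num_cols) + len(cat_cols))   # average over variables

D = gower(df, ["income"], ["region"])
labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print(labels)
```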

7.
Through building and testing theory, the practice of research animates data for human sense-making about the world. The IS field began in an era when research data was scarce; in today's age of big data, it is now abundant. Yet IS researchers often enact methodological assumptions developed in a time of data scarcity, and many remain uncertain how to systematically take advantage of new opportunities afforded by big data. How should we adapt our research norms, traditions, and practices to reflect newfound data abundance? How can we leverage the availability of big data to generate cumulative and generalizable knowledge claims that are robust to threats to validity? To date, IS academics have largely welcomed the arrival of big data as an overwhelmingly positive development. A common refrain in the discipline is: more data is great, IS researchers know all about data, and we are a well-positioned discipline to leverage big data in research and teaching. In our opinion, many benefits of big data will be realized only with a thoughtful understanding of the implications of big data availability and, increasingly, a deliberate shift in IS research practices. We advocate revisiting and extending the traditional models that are commonly used to guide much of IS research. Based on our analysis, we propose a research approach that incorporates consideration of big data, and associated implications such as data abundance, into a classic approach to building and testing theory. We close our commentary by discussing the implications of this hybrid approach for the organization, execution, and evaluation of theory-informed research. Our recommendations on how to update one approach to IS research practice may have relevance to all theory-informed researchers who seek to leverage big data.

8.
This paper reviews methods for handling complex sampling schemes when analysing categorical survey data. It is generally assumed that the complex sampling scheme does not affect the specification of the parameters of interest, only the methodology for making inference about these parameters. The organisation of the paper is loosely chronological. Contingency table data are emphasised first before moving on to the analysis of unit-level data. Weighted least squares methods, introduced in the mid-1970s along with methods for two-way tables, receive early attention. They are followed by more general methods based on maximum likelihood, particularly pseudo maximum likelihood estimation. Point estimation methods typically involve the use of survey weights in some way. Variance estimation methods are described in broad terms. There is a particular emphasis on methods of testing. The main modelling methods considered are log-linear models, logit models, generalised linear models and latent variable models. There is no coverage of multilevel models.
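A toy sketch of the pseudo-maximum-likelihood idea, with survey weights entering a logit fit (hypothetical data; design-based variance estimation via linearisation or replication is omitted):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x))))
w = rng.uniform(0.5, 3.0, size=n)        # survey weights from an unequal-probability design

X = sm.add_constant(x)
# Weighted score equations give the pseudo-maximum-likelihood point estimates;
# the model-based standard errors printed here are NOT design-consistent.
fit = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=w).fit()
print(fit.params)
```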

9.
There is a plethora of time-series measures of uncertainty for inflation and real output growth in empirical studies, but little is known about whether they are comparable to the uncertainty measure reported by individual forecasters in the Survey of Professional Forecasters. Are these two measures of uncertainty inherently distinct? This paper shows that, compared with many uncertainty proxies produced by time-series models, the use of real-time data with fixed-sample recursive estimation of an asymmetric bivariate generalized autoregressive conditional heteroskedasticity (GARCH) model yields inflation uncertainty estimates that resemble the survey measure. There is, however, overwhelming evidence that many of the time-series measures of growth uncertainty exceed the level of uncertainty obtained from the survey measure. Our results highlight the relative merits of different methods for modelling macroeconomic uncertainty, which should be useful for empirical researchers.
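A sketch of the recursive, real-time flavour of such an exercise, using a univariate asymmetric (GJR) GARCH as a simplified stand-in for the bivariate model described above, on a placeholder inflation series:

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(3)
infl = pd.Series(rng.normal(2.0, 1.0, 240))            # placeholder monthly inflation series

uncertainty = {}
for t in range(180, len(infl)):                        # expanding window, mimicking real-time use
    res = arch_model(infl.iloc[:t], mean="AR", lags=1,
                     vol="Garch", p=1, o=1, q=1).fit(disp="off")   # AR(1) mean, GJR-GARCH variance
    # one-step-ahead conditional variance = model-based inflation uncertainty at time t
    uncertainty[t] = float(res.forecast(horizon=1).variance.iloc[-1, 0])

print(pd.Series(uncertainty).head())
```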

10.
We develop an iterative and efficient information-theoretic estimator for forecasting interval-valued data, and use our estimator to forecast the S&P 500 returns up to five days ahead using moving windows. Our forecasts are based on 13 years of data. We show that our estimator is superior to its competitors under all of the common criteria that are used to evaluate forecasts of interval data. Our approach differs from other methods that are used to forecast interval data in two major ways. First, rather than applying the more traditional methods that use only certain moments of the intervals in the estimation process, our estimator uses the complete sample information. Second, our method simultaneously selects the model (or models) and infers the model's parameters. It is an iterative approach that imposes minimal structure and statistical assumptions.

11.
This paper provides a fresh perspective on the network correlations among commodity, exchange rate, and categorical economic policy uncertainties (EPU) in China. We aim to contribute to the literature by examining the spillover mechanism with a relatively novel connectedness network, using monthly data over the period from June 2006 to January 2021. Our results suggest that, prior to recessions, China's commodity prices are subject to greater spillovers from the exchange rate than during recessions. Domestic commodity prices are more sensitive to monetary policy uncertainty and fiscal policy uncertainty. The outbreak of COVID-19 shifts the dominant source of uncertainty in the system from monetary policy uncertainty and fiscal policy uncertainty to trade policy uncertainty.
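Connectedness measures of this kind are typically built from a VAR forecast-error variance decomposition (a Diebold-Yilmaz-style baseline; the paper's exact network may differ). A minimal sketch with placeholder series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
# placeholder monthly series standing in for commodity returns, exchange-rate changes and an EPU index
data = pd.DataFrame(rng.normal(size=(176, 3)), columns=["commodity", "fx", "epu"])

res = VAR(data).fit(maxlags=2, ic="aic")
# 10-step forecast-error variance decomposition (Cholesky-orthogonalised here;
# Diebold-Yilmaz-type spillover tables usually rely on the generalised FEVD instead)
shares = res.fevd(10).decomp[:, -1, :]        # rows: affected variable, columns: source of shock
print(pd.DataFrame(shares, index=data.columns, columns=data.columns).round(2))
```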

12.
The increasing interest in sustainable consumption has led several scholars to investigate the determinants that drive the consumption of organic food. Most of this research is based on consumers' self-reports of their purchasing behavior, exploring declared behavioral intentions, and there is a lack of understanding of the determinants of organic food consumption based on actual purchasing behavior. To fill this gap, this study combines actual purchasing data and self-reported data from a sample of 79 Italian consumers. The determinants of organic food consumption are explored by analyzing the effects of subjective norms, attitude, perceived behavioral control, intention to buy, organic knowledge, and health consciousness on actual purchasing behavior. Our results suggest that actual purchasing behavior is influenced positively by intention to buy and negatively by subjective norms. Attitude towards buying organics is positively affected by health consciousness and perceived behavioral control, and consumer knowledge about organics is found to influence purchase intentions. Theoretical and managerial implications, along with avenues for future research, are discussed.

13.
This paper outlines a strategy to validate multiple imputation methods. Rubin's criteria for proper multiple imputation are the point of departure. We describe a simulation method that yields insight into various aspects of bias and efficiency of the imputation process. We propose a new method for creating incomplete data under a general Missing At Random (MAR) mechanism. Software implementing the validation strategy is available as a SAS/IML module. The method is applied to investigate the behavior of polytomous regression imputation for categorical data.
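For context, the simplest way to create incomplete data under MAR is to let the missingness probability of one variable depend only on other, fully observed variables; a minimal sketch (the paper proposes a more general mechanism):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)                      # fully observed covariate
y = 0.8 * x + rng.normal(size=n)            # variable that will receive missing values

# MAR: P(y missing) depends on the observed x only, not on y itself
p_miss = 1 / (1 + np.exp(-(-1.0 + 1.5 * x)))
y_obs = np.where(rng.random(n) < p_miss, np.nan, y)

print(f"missing rate: {np.isnan(y_obs).mean():.2f}")
```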

14.
In this paper, we investigate certain operational and inferential aspects of the invariant Post-Randomization Method (PRAM) as a tool for disclosure limitation of categorical data. Invariant PRAM preserves the unbiasedness of certain estimators, but inflates their variances and distorts other attributes. We introduce the concept of strongly invariant PRAM, which does not affect data utility or the properties of any statistical method; however, the procedure seems feasible only in limited situations. We review methods for constructing invariant PRAM matrices and prove that a conditional approach, which can preserve the original data on any subset of variables, yields invariant PRAM. For multinomial sampling, we derive expressions for the variance inflation inflicted by invariant PRAM and for the variances of certain estimators of the cell probabilities, as well as their tight upper bounds. We discuss estimation of these quantities and thereby the assessment of the statistical efficiency loss from applying invariant PRAM. We find a connection between invariant PRAM and the creation of partially synthetic data using a non-parametric approach, and compare estimation variances under the two approaches. Finally, we discuss some aspects of invariant PRAM in a general survey context.
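A small sketch of one textbook construction of an invariant PRAM matrix, combining an arbitrary transition matrix with its Bayes-reversed counterpart (not necessarily the conditional approach studied in the paper):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])            # category probabilities (assumed known or estimated)
Q = np.array([[0.8, 0.1, 0.1],           # any row-stochastic PRAM transition matrix
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# Bayes-reversed matrix: Qrev[k, l] = P(true = l | perturbed = k)
marg = p @ Q                             # distribution of the perturbed variable under Q
Qrev = (p * Q.T) / marg[:, None]

R = Q @ Qrev                             # invariant PRAM matrix: p @ R == p
assert np.allclose(p @ R, p)

# applying R: each record's category is replaced by a draw from the corresponding row of R
rng = np.random.default_rng(6)
cats = rng.choice(3, size=10000, p=p)
pram = np.array([rng.choice(3, p=R[c]) for c in cats])
print(np.bincount(pram) / len(pram))     # expected frequencies match the original probabilities
```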

15.
This paper characterizes a robust optimal policy rule in a simple forward-looking model, when the policymaker faces uncertainty about model parameters and shock processes. We show that the robust optimal policy rule is likely to involve a stronger response of the interest rate to fluctuations in inflation and the output gap than is the case in the absence of uncertainty. Thus parameter uncertainty alone does not necessarily justify a small response of monetary policy to perturbations. However, uncertainty may amplify the degree of 'super-inertia' required by optimal monetary policy. We finally discuss the sensitivity of the results to alternative assumptions.
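A stylised interest-rate rule of the kind analysed here can be written as follows (notation illustrative, not the paper's exact specification):

```latex
% i_t : nominal interest rate,  \pi_t : inflation,  x_t : output gap
i_t = \rho\, i_{t-1} + \phi_\pi\, \pi_t + \phi_x\, x_t
% Robustness to parameter and shock uncertainty tends to raise \phi_\pi and \phi_x,
% and can push \rho above one ("super-inertia"), as discussed in the abstract.
```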

16.
City of Rents: The limits to the Barcelona model of urban competitiveness
The turn towards the knowledge-based economy and creative strategies to enhance urban competitiveness within it has been well documented. Yet too little has been said to date about the transformation of land use for new productive activities, and the contradictions inherent to this process. Our case study is Barcelona, an erstwhile 'model' for urban regeneration which has sought to transform itself into a global knowledge city since 2000. Through the lens of Marxian value theory, and Harvey's writing on urban monopoly rents especially, we show how the 22@Barcelona project — conceived with received wisdom about the determinants of urban knowledge-based competitiveness in mind — amounted to an exercise in the capture of monopoly rents, driven by the compulsion of public sector institutions, financiers and developers to pursue rental profit-maximizing opportunities through the mobilization of land as a financial asset.

17.
In this study, we develop a theoretical conceptualization and an operational definition of structuring of human resource management (HRM) processes and examine how this structuring enables employee creativity at work. Analyzing the data collected from employees and their managers in knowledge-intensive workplace settings, we examine a mediation model that suggests that the relationship between structuring of HRM processes and employee creativity is best explained in terms of the intervening variables of perceived uncertainty, stress, and psychological availability. Results suggest that structuring of HRM processes is negatively associated with perceived uncertainty and stress. These perceptions produce a sense of psychological availability, which in turn enhances employee creativity. This study offers new insights about diagnosing the level of structuring of HRM processes and the ways managers and HR directors facilitate its implementation in their organization.

18.
Social and economic scientists are tempted to use emerging data sources such as big data to compile information about finite populations as an alternative to traditional survey samples. These data sources generally cover an unknown part of the population of interest. Simply assuming that analyses made on these data are applicable to larger populations is wrong; the mere volume of data provides no guarantee of valid inference. Tackling this problem with methods originally developed for probability sampling is possible, but is shown here to be limited. A wider range of model-based predictive inference methods proposed in the literature are reviewed and evaluated in a simulation study using real-world data on annual mileages by vehicles. We propose to extend this predictive inference framework with machine learning methods for inference from samples that are generated through mechanisms other than random sampling from a target population. Describing economies and societies using sensor data, internet search data, social media and voluntary opt-in panels is cost-effective and timely compared with traditional surveys, but requires an extended inference framework as proposed in this article.
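The core of model-based predictive inference for a non-probability sample is to fit a model on the covered units and predict the outcome for every unit in the population frame; a toy sketch with a random forest (hypothetical variables, not the article's simulation design):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
N = 50_000                                          # population frame with auxiliary variables
X_pop = rng.normal(size=(N, 3))
y_pop = 10 + 2 * X_pop[:, 0] + rng.normal(size=N)   # true outcome (unknown in practice)

# self-selected (non-probability) sample: inclusion depends on the covariates
p_in = 1 / (1 + np.exp(-(-3 + 1.2 * X_pop[:, 0])))
in_sample = rng.random(N) < p_in

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_pop[in_sample], y_pop[in_sample])

naive = y_pop[in_sample].mean()                     # sample mean: biased for the population mean
pred_mean = model.predict(X_pop).mean()             # predict for every frame unit, then average
print(f"true {y_pop.mean():.2f}  naive {naive:.2f}  model-based {pred_mean:.2f}")
```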

19.
Small area estimation is concerned with methodology for estimating population parameters associated with a geographic area defined by a cross-classification that may also include non-geographic dimensions. In this paper, we develop constrained estimation methods for small area problems: those requiring smoothness with respect to similarity across areas, such as geographic proximity or clustering by covariates, and benchmarking constraints, which require weighted means of estimates to agree across levels of aggregation. We develop methods for constrained estimation decision-theoretically and discuss their geometric interpretation. The constrained estimators are the solutions to tractable optimisation problems and have closed-form solutions. Mean squared errors of the constrained estimators are calculated via bootstrapping. Our approach assumes the Bayes estimator exists and is applicable to any proposed model. In addition, we give special cases of our techniques under certain distributional assumptions. We illustrate the proposed methodology using web-scraped data on Berlin rents, aggregated over areas to ensure privacy.
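As a small illustration of the benchmarking idea, a simple ratio adjustment forces the weighted mean of area estimates to match a known aggregate (a generic device, not the constrained optimisation derived in the paper):

```python
import numpy as np

theta_hat = np.array([1020.0, 980.0, 1100.0, 950.0])   # small area estimates (e.g. mean rent per area)
w = np.array([0.4, 0.3, 0.2, 0.1])                     # area population shares, summing to 1
benchmark = 1000.0                                     # known aggregate mean for the whole region

# ratio benchmarking: rescale so that sum_a w_a * theta_a equals the benchmark exactly
theta_bench = theta_hat * benchmark / np.sum(w * theta_hat)
print(theta_bench, np.sum(w * theta_bench))            # weighted mean is now 1000
```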

20.
Although environmental regulations have been considered important drivers of green innovation, how and under what conditions they affect green innovation remains unclear. Drawing on institutional theory, this study used survey data from 237 manufacturing firms in China to investigate how two dimensions of environmental regulation (i.e., command-and-control regulation and market-based regulation) affect green product innovation and green process innovation. Further, this article examined the mediating role of external knowledge adoption and the moderating role of green absorptive capacity. Our results indicate that both command-and-control regulation and market-based regulation have positive influences on external knowledge adoption. External knowledge adoption fully mediates these positive relationships. In addition, green absorptive capacity strengthens only the positive impact of market-based regulation on external knowledge adoption. Our study contributes to institutional theory and the green innovation literature.
