Similar Documents (20 results)
1.
We detail a method of simulating data from long-range dependent processes with variance-gamma or t-distributed increments, test various estimation procedures [method of moments (MOM), product-density maximum likelihood (PMLE), non-standard minimum χ² and empirical characteristic function estimation] on the data, and assess the performance of each. The investigation is motivated by the apparent poor performance of the MOM technique using real data (Tjetjep & Seneta, 2006), and the need to assess the performance of PMLE for our dependent data models. In the simulations considered, the product-density method performs favourably.
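A minimal sketch of one building block, assuming a symmetric variance-gamma with a unit-mean gamma mixing variable: it draws i.i.d. variance-gamma increments and recovers (σ, ν) by matching the sample variance and excess kurtosis. It deliberately ignores the long-range dependence structure that is central to the paper, and the names sigma and nu are illustrative, not the authors' notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_vg(n, sigma=1.0, nu=0.5):
    """Draw i.i.d. symmetric variance-gamma variates X = sigma * sqrt(G) * Z,
    where G ~ Gamma(shape=1/nu, scale=nu) has mean 1 and variance nu."""
    g = rng.gamma(shape=1.0 / nu, scale=nu, size=n)
    z = rng.standard_normal(n)
    return sigma * np.sqrt(g) * z

def vg_mom(x):
    """Method-of-moments estimates: Var(X) = sigma^2, excess kurtosis = 3*nu."""
    var = x.var()
    excess_kurt = ((x - x.mean()) ** 4).mean() / var**2 - 3.0
    return np.sqrt(var), max(excess_kurt / 3.0, 1e-8)

x = simulate_vg(100_000, sigma=2.0, nu=0.5)
print(vg_mom(x))   # roughly (2.0, 0.5)
```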

2.
This paper compares innovation survey data with invention data as recorded by patents (and, therefore, the basis of the Yale Technology Concordance), presenting input–output tables for both data sets. We describe and compare the intersectoral flows of technology from the industry of manufacture (IOM) to the sector of use (SOU); compare patent-to-innovation ratios by industry; and present correlations between innovation, invention and R&D by industry. One significant conclusion is that, while innovation and invention data are highly correlated in the IOM, they diverge on the SOU, especially in certain key industries that we identify.
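As a hedged illustration of the kind of table being compared, the sketch below pivots record-level counts into an IOM × SOU flow matrix for two hypothetical sources (innovations and patents) and correlates their row totals by industry of manufacture; the column names and toy data are assumptions, not the paper's data.

```python
import pandas as pd

# Toy record-level data: each row is a count of flows from an industry of
# manufacture (IOM) to a sector of use (SOU), for two hypothetical sources.
records = pd.DataFrame({
    "iom": ["machinery", "machinery", "chemicals", "chemicals", "electronics"],
    "sou": ["food", "textiles", "food", "textiles", "food"],
    "innovations": [12, 5, 7, 3, 9],
    "patents":     [40, 30, 15, 20, 50],
})

# Pivot each source into an IOM x SOU flow matrix (an input-output table).
innov_table  = records.pivot_table("innovations", index="iom", columns="sou",
                                   aggfunc="sum", fill_value=0)
patent_table = records.pivot_table("patents", index="iom", columns="sou",
                                   aggfunc="sum", fill_value=0)

# Correlate the two sources across industries of manufacture (row totals).
print(innov_table.sum(axis=1).corr(patent_table.sum(axis=1)))
```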

3.
There is widespread agreement in research and practice that data governance is an instrumental element in helping organizations leverage and protect data. IS research has observed that our practical and scientific knowledge of data governance remains limited, and the increasing ability of organizations to generate, acquire, store, transform, process and analyze data calls for us to further identify and address issues on the topic. Striving to help answer this pressing need, we argue that understanding the nature and implications of governance mechanisms is of high importance, as it is these mechanisms that effectively instantiate data governance in an organization. Building on our experience preparing and teaching workshops to 102 executives on the topic, we adopt a position of engaged scholarship and provide a translational account of our pedagogical experience with data governance, highlighting four themes that stand out for IS research: (1) embracing data governance without compromising digital innovation; (2) enacting data governance through repertoires of mechanisms; (3) moving away from data governance toward governing data; and (4) moving away from a view of data at rest toward a service-based perspective on data governance. These themes are highly relevant for both practice and research. In our view, studying them will help inform practitioners, who often struggle with the implementation of comprehensive data governance programs and frameworks, while leveraging theory to study them can generate novel theoretical contributions on data governance and guide future research on the topic.

4.
Statistical agencies often release a masked or perturbed version of survey data to protect the confidentiality of respondents' information. Ideally, a perturbation procedure should provide confidentiality protection without much loss of data quality, so that the released data may practically be treated as original data for making inferences. One major objective is to control the risk of correctly identifying any respondent's records in the released data by matching the values of some identifying or key variables. For categorical key variables, we propose a new approach to measuring identification risk and setting strict disclosure control goals. The general idea is to ensure that the probability of correctly identifying any respondent or surveyed unit is at most ξ, a pre-specified value. We then develop an unbiased post-randomisation procedure that achieves this goal for ξ > 1/3. The procedure allows substantial control over possible changes to the original data, and the variance it induces is of a lower order of magnitude than the sampling variance. We apply the procedure to a real data set, where it performs consistently with the theoretical results and, quite importantly, shows very little loss of data quality.
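A hedged sketch of the general idea behind post-randomisation of a categorical key variable, assuming a generic invariant PRAM transition matrix P = θI + (1 − θ)1pᵀ, which preserves the category distribution in expectation. This illustrates the mechanism only; it is not the authors' specific ξ-bounded, unbiased construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def invariant_pram(values, categories, theta=0.8):
    """Perturb a categorical key variable with transition matrix
    P = theta * I + (1 - theta) * 1 p^T, where p is the empirical
    category distribution; p is stationary for P, so expected
    marginal frequencies are preserved."""
    p = np.array([(values == c).mean() for c in categories])
    k = len(categories)
    P = theta * np.eye(k) + (1.0 - theta) * np.outer(np.ones(k), p)
    idx = {c: i for i, c in enumerate(categories)}
    return np.array([categories[rng.choice(k, p=P[idx[v]])] for v in values])

cats = np.array(["A", "B", "C"])
key = rng.choice(cats, size=10_000, p=[0.5, 0.3, 0.2])
masked = invariant_pram(key, cats, theta=0.8)
print([(key == c).mean() for c in cats])
print([(masked == c).mean() for c in cats])   # close to the original shares
```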

5.
Due to continued interest in geographic living-cost differentials, some researchers have used data from the ACCRA Cost of Living Index. This paper explores further the potential for using ACCRA data for cost-of-living research. In particular, it investigates the possibility of self-selection bias affecting OLS estimates using ACCRA data. The findings indicate that self-selection bias is a concern that researchers using ACCRA data should be aware of. Results using Heckman's two-step procedure to estimate a cost-of-living model indicate promise for using ACCRA data to update and expand upon previous cost-of-living research. The author wishes to acknowledge Keith Ihlanfeldt and Cynthia Rogers for constructive comments on an earlier version of this paper presented at the 35th Meeting of the Southern Regional Science Association, and also wishes to thank the two anonymous referees for their suggestions on improving the paper.
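Since the paper applies Heckman's two-step correction, here is a hedged, generic sketch of that estimator on synthetic data: a probit selection equation, the inverse Mills ratio from its fitted index, and a second-stage OLS on the selected sample. The variable names and the data-generating process are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5_000

# Synthetic data with correlated selection and outcome errors.
x = rng.standard_normal(n)                  # outcome covariate
z = rng.standard_normal(n)                  # selection covariate
u = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
selected = (0.5 + 1.0 * z + u[:, 0] > 0)    # selection rule
y = 1.0 + 2.0 * x + u[:, 1]                 # outcome, observed only if selected

# Step 1: probit selection equation and inverse Mills ratio.
Zmat = sm.add_constant(np.column_stack([z]))
probit = sm.Probit(selected.astype(float), Zmat).fit(disp=0)
index = Zmat @ probit.params
imr = norm.pdf(index) / norm.cdf(index)

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio.
Xmat = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(y[selected], Xmat).fit()
print(ols.params)   # intercept, slope on x, coefficient on the Mills ratio
```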

6.
Application Analysis and an Empirical Study of RFID in Logistics Systems (cited 1 time: 0 self-citations, 1 other)
闫柏睿, 《价值工程》 (Value Engineering), 2010, 29(19): 39-40
This paper briefly introduces radio frequency identification (RFID) technology and notes that it is an important means of acquiring basic data in the informatization of modern logistics. It analyzes how applying RFID in logistics systems can greatly improve system efficiency and avoid human error; provide timely, dynamic logistics information and make the logistics process fully visible; speed up logistics operations and improve control over them; and reduce redundant data entry while improving data accuracy. An empirical study of RFID applications in logistics is then presented.

7.
We examine how the use of high-frequency data impacts the portfolio optimization decision. Prior research has documented that an estimate of realized volatility is more precise when based upon intraday returns rather than daily returns. Using the framework of a professional investment manager who wishes to track the S&P 500 with the 30 Dow Jones Industrial Average stocks, we find that the benefits of using high-frequency data depend upon the rebalancing frequency and estimation horizon. If the portfolio is rebalanced monthly and the manager has access to at least the previous 12 months of data, daily data have the potential to perform as well as high-frequency data. However, substantial improvements in the portfolio optimization decision from high-frequency data are realized if the manager rebalances daily or has less than a 6-month estimation window. These findings are robust to transaction costs. Copyright © 2009 John Wiley & Sons, Ltd.
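A hedged sketch of the core ingredient, realized (co)variance from intraday returns, followed by a toy downstream use in a minimum-variance weight calculation. The synthetic returns, the 5-minute bar count, and the unconstrained weight formula are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

n_assets, n_days, bars_per_day = 5, 250, 78   # e.g. 5-minute bars in a 6.5-hour session
intraday = rng.standard_normal((n_days, bars_per_day, n_assets)) * 1e-3  # toy intraday returns

# Realized covariance for each day is the sum of outer products of the
# intraday return vectors; averaging over days gives a daily-frequency estimate.
realized_cov = np.einsum("dbi,dbj->ij", intraday, intraday) / n_days

# Toy downstream use: global minimum-variance weights w = S^{-1} 1 / (1' S^{-1} 1).
ones = np.ones(n_assets)
w = np.linalg.solve(realized_cov, ones)
w = w / w.sum()
print(np.round(w, 3))
```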

8.
We present a new econometric model of aggregate demand and labor supply for the United States. We also analyze the allocation of full wealth among time periods for households distinguished by a variety of demographic characteristics. The model is estimated using micro-level data from the Consumer Expenditure Surveys, supplemented with price information obtained from the Consumer Price Index. An important feature of our approach is that aggregate demands and labor supply can be represented in closed form while accounting for the substantial heterogeneity in behavior that is found in household-level data. As a result, we are able to explain the patterns of aggregate demand and labor supply in the data despite using a parametrically parsimonious specification.

9.
In this study we focus attention on model selection in the presence of panel data. Our approach is eclectic in that it combines both classical and Bayesian techniques. It is also novel in that we address not only model selection but also model occurrence, i.e., the process by which ‘nature’ chooses a statistical framework in which to generate the data of interest. For a given data subset, there exist competing models, each of which has an ex ante positive probability of being the correct model, but for any one generated sample, ex post exactly one such model is the basis for the observed data set. Attention focuses on how the underlying model occurrence probabilities of the competing models depend on characteristics of the environments in which the data subsets are generated. Classical, Bayesian, and mixed estimation approaches are developed. Bayesian approaches are shown to be especially attractive whenever the models are nested.
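As a hedged illustration of weighing competing models for a given data set, the sketch below computes approximate posterior model probabilities from BIC values under equal prior model probabilities; this is a standard textbook device, not the authors' classical/Bayesian treatment of model occurrence, and the two specifications are assumed for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 1.0 + 2.0 * x1 + rng.standard_normal(n)        # true model uses x1 only

# Two competing specifications fitted to the same sample.
models = {
    "M1: y ~ x1":      sm.OLS(y, sm.add_constant(np.column_stack([x1]))).fit(),
    "M2: y ~ x1 + x2": sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit(),
}

# Approximate posterior model probabilities: P(M_k | y) ~ exp(-BIC_k / 2) * prior_k.
bic = np.array([m.bic for m in models.values()])
weights = np.exp(-(bic - bic.min()) / 2.0)          # equal priors, numerically stabilised
post = weights / weights.sum()
for name, p in zip(models, post):
    print(f"{name}: {p:.3f}")
```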

10.
This article aims to contribute to a better understanding of how to integrate customers within service development by assessing different methods of obtaining use information. The article reviews and classifies methods for customer integration and it also presents a new framework that suggests four modes of customer integration in which data is classified either as insitu (data captured in a customer's use situation) or exsitu (data captured outside the use situation) and as either incontext or excontext. Context is defined as a resource constellation that is available for customers to enable value co-creation. Accordingly, incontext refers to methods in which the customer is in the actual use context and has access to various resources, while excontext refers to a situation in which the customer is outside the use context and, therefore, has no direct access to the resources.

11.
12.
Policy makers must base their decisions on preliminary and partially revised data of varying reliability. Realistic modeling of data revisions is required to guide decision makers in their assessment of current and future conditions. This paper provides a new framework with which to model data revisions. Recent empirical work suggests that measurement errors typically have much more complex dynamics than existing models of data revisions allow. This paper describes a state-space model that allows for richer dynamics in these measurement errors, including the noise, news and spillover effects documented in this literature. We also show how to relax the common assumption that “true” values are observed after a few revisions. The result is a unified and flexible framework that allows for more realistic data revision properties, and allows the use of standard methods for optimal real-time estimation of trends and cycles. We illustrate the application of this framework with real-time data on US real output growth.
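A hedged, much-simplified sketch of the idea of treating published figures as noisy measurements of a latent "true" series: a hand-rolled Kalman filter for a local-level model in which the preliminary release equals the true value plus measurement noise. The variances and the single-vintage setup are assumptions for illustration; the paper's state-space model is considerably richer (news, noise and spillover dynamics across revisions).

```python
import numpy as np

rng = np.random.default_rng(5)
T = 200
q, r = 0.05, 0.50                      # state- and measurement-noise variances (assumed)

# Latent "true" growth rate follows a random walk; the preliminary release
# observes it with measurement error.
true_state = np.cumsum(rng.normal(0.0, np.sqrt(q), T))
preliminary = true_state + rng.normal(0.0, np.sqrt(r), T)

# Kalman filter for the local-level model.
a, p = 0.0, 1.0                        # initial state mean and variance
filtered = np.empty(T)
for t in range(T):
    p = p + q                          # predict
    k = p / (p + r)                    # Kalman gain
    a = a + k * (preliminary[t] - a)   # update with the noisy release
    p = (1.0 - k) * p
    filtered[t] = a

# The filtered estimate is typically closer to the truth than the raw release.
print(np.mean((filtered - true_state) ** 2), np.mean((preliminary - true_state) ** 2))
```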

13.
Civil engineering cost indicators fall into two broad categories: value indicators and quantity indicators. Related cost data include correlations among quantities of work, correlations among values, and data on how construction cost relates to building volume and structural characteristics. For an individual, command of these indicators and related data is the most direct expression of qualifications, experience, and ability; for an enterprise, it represents accumulated expertise and core competitiveness. Cost indicators and related data play an important role in checking the reasonableness of project budgets and final accounts, in quickly estimating costs, and in optimizing designs.

14.
Externalities and Industrial Development (cited 9 times: 0 self-citations, 9 other)
Using panel data for five capital goods industries, this paper estimates dynamic externalities. In contrast to previous studies, panel data allow separation of externalities from fixed effects and identification of a lag structure. I find strong evidence of Marshall–Arrow–Romer (MAR) (own-industry, or localization) externalities. For Jacobs (urbanization) externalities, effects are smaller. In terms of lag structure, for MAR externalities the biggest effects are typically from several years ago, but die out after six years. For urbanization phenomena, effects persist to the end of the time horizon of the data, eight or nine years back.

15.
Research in the data-oriented areas of computer science is contributing a new wave of theory and tools for learning from data. Some of the research areas complement those in statistics and others overlap. While the research topics of the two fields are not the same, the goals of the research are identical: to enhance theory, methods, models, and systems for the study of data. Unification (close collaboration in research, in teaching, and in applications) would greatly enhance new developments in learning from data.

16.
This study attempts to investigate whether corporate performance is affected by the ownership structure, using data from companies quoted on the Athens Stock Exchange for the period 1996–1998. Given such an objective, the basic hypothesis examined is that corporate performance, as measured by Tobin's Q ratio, is a function of ownership and other control variables. Our econometric approach relies on the use of a combination of time series and cross section data (panel-data analysis), a procedure that avoids many statistical problems. After examining the role of each identifiable shareholder, we find a positive relationship between institutional investors and corporate performance. Copyright © 2004 John Wiley & Sons, Ltd.
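A hedged sketch of the kind of panel estimation described: a within (fixed-effects) regression of a performance measure on an ownership variable, implemented by demeaning within firms and running OLS on synthetic data. The variable names and data-generating process are assumptions, not the study's actual specification or sample.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
firms, years = 100, 3
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "year": np.tile(np.arange(years), firms),
    "inst_own": rng.uniform(0, 1, firms * years),    # institutional ownership share
})
firm_effect = rng.normal(0, 1, firms)[df["firm"]]
df["tobin_q"] = 1.0 + 0.8 * df["inst_own"] + firm_effect + rng.normal(0, 0.5, len(df))

# Within transformation: demean performance and ownership by firm, then OLS.
cols = ["tobin_q", "inst_own"]
demeaned = df[cols] - df.groupby("firm")[cols].transform("mean")
fe = sm.OLS(demeaned["tobin_q"], demeaned[["inst_own"]]).fit()
print(fe.params)   # slope close to the assumed 0.8
```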

17.
Financial data often contain information that is helpful for macroeconomic forecasting, while multi-step forecast accuracy benefits from incorporating good nowcasts of macroeconomic variables. This paper considers the usefulness of financial nowcasts for making conditional forecasts of macroeconomic variables with quarterly Bayesian vector autoregressions (BVARs). When nowcasting quarterly financial variables' values, we find that taking the average of the available daily data and a daily random walk forecast to complete the quarter typically outperforms other nowcasting approaches. Using real-time data, we find gains in out-of-sample forecast accuracy from the inclusion of financial nowcasts relative to unconditional forecasts, with further gains from the incorporation of nowcasts of macroeconomic variables. Conditional forecasts from quarterly BVARs augmented with financial nowcasts rival the forecast accuracy of mixed-frequency dynamic factor models and mixed-data sampling (MIDAS) models.
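A hedged reading of the nowcasting rule described, assuming the quarterly value is the average of daily observations: observed days enter as-is, and the remaining days of the quarter are filled with a random-walk forecast (the last observed value). The 63-business-day quarter and the function name quarter_nowcast are illustrative assumptions.

```python
import numpy as np

def quarter_nowcast(observed_daily, days_in_quarter=63):
    """Nowcast the quarterly average of a daily financial series: use the
    daily values observed so far and complete the quarter with a random-walk
    forecast, i.e. repeat the last observed value for the remaining days."""
    observed_daily = np.asarray(observed_daily, dtype=float)
    n_missing = days_in_quarter - len(observed_daily)
    completed = np.concatenate([observed_daily,
                                np.full(n_missing, observed_daily[-1])])
    return completed.mean()

rng = np.random.default_rng(7)
partial_quarter = np.cumsum(rng.normal(0, 0.5, 40)) + 100   # 40 of 63 days observed
print(quarter_nowcast(partial_quarter))
```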

18.
In order to increase data quality, some household surveys visit the respondent households several times to estimate one measure of consumption. For example, in Ghanaian Living Standards Measurement surveys, households are visited up to 10 times over a period of 1 month. I find strong evidence for conditioning effects as a result of this approach: In the Ghanaian data the estimated level of consumption is a function of the number of prior visits, with consumption being highest in the earlier survey visits. Telescoping (perceiving events as being more recent than they are) or seasonality (first-of-the-month effects) cannot explain the observed pattern. To study whether earlier or later survey visits are of higher quality, I employ a strategy based on Benford's law. Results suggest that the consumption data from earlier survey visits are of higher quality than data from later visits. The findings have implications for the value of additional visits in household surveys, and also shed light on possible measurement problems in high-frequency panels. They add to a recent literature on measurement errors in consumption surveys (Beegle et al., 2012; Gibson et al., 2015) and complement findings by Zwane et al. (2011) regarding the effect of surveys on subsequent behaviour.
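A hedged sketch of the kind of check based on Benford's law: compare the observed leading-digit frequencies of reported amounts with the Benford proportions log10(1 + 1/d) using a chi-square goodness-of-fit statistic. The synthetic "consumption" amounts are an assumption for illustration, not the Ghanaian survey data.

```python
import numpy as np
from scipy.stats import chisquare

def leading_digit(x):
    """First significant digit of each positive value."""
    x = np.asarray(x, dtype=float)
    return np.floor(x / 10 ** np.floor(np.log10(x))).astype(int)

rng = np.random.default_rng(8)
amounts = rng.lognormal(mean=3.0, sigma=1.2, size=5_000)    # toy consumption amounts

digits = leading_digit(amounts)
observed = np.array([(digits == d).sum() for d in range(1, 10)])
benford = np.log10(1 + 1 / np.arange(1, 10))                # Benford proportions
expected = benford * observed.sum()                         # same total as observed

stat, pvalue = chisquare(observed, f_exp=expected)
print(stat, pvalue)
```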

19.
Graph-theoretic methods of causal search based on the ideas of Pearl (2000), Spirtes et al. (2000), and others have been applied by a number of researchers to economic data, particularly by Swanson and Granger (1997) to the problem of finding a data-based contemporaneous causal order for the structural vector autoregression, rather than, as is typically done, assuming a weakly justified Choleski order. Demiralp and Hoover (2003) provided Monte Carlo evidence that such methods were effective, provided that signal strengths were sufficiently high. Unfortunately, in applications to actual data, such Monte Carlo simulations are of limited value, as the causal structure of the true data-generating process is necessarily unknown. In this paper, we present a bootstrap procedure that can be applied to actual data (i.e. without knowledge of the true causal structure). We show with an applied example and a simulation study that the procedure is an effective tool for assessing our confidence in causal orders identified by graph-theoretic search algorithms.
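A hedged, generic sketch of the bootstrap logic (not the authors' exact algorithm): resample the data with replacement, re-run a causal search routine on each replicate, and report how often each edge appears. The search step, my_causal_search, is a hypothetical placeholder for a graph-theoretic algorithm such as PC applied to the contemporaneous residuals of a VAR.

```python
import numpy as np

rng = np.random.default_rng(9)

def my_causal_search(data):
    """Hypothetical placeholder: return a set of edges (i, j). In practice
    this would be a graph-theoretic search (e.g. the PC algorithm) applied
    to the contemporaneous residuals of a VAR."""
    corr = np.corrcoef(data, rowvar=False)
    k = corr.shape[0]
    return {(i, j) for i in range(k) for j in range(i + 1, k)
            if abs(corr[i, j]) > 0.2}        # crude stand-in rule

def bootstrap_edge_frequencies(data, n_boot=200):
    """Resample rows with replacement and tally how often each edge is found."""
    n = data.shape[0]
    counts = {}
    for _ in range(n_boot):
        sample = data[rng.integers(0, n, size=n)]
        for edge in my_causal_search(sample):
            counts[edge] = counts.get(edge, 0) + 1
    return {edge: c / n_boot for edge, c in counts.items()}

data = rng.standard_normal((500, 3))
data[:, 1] += 0.6 * data[:, 0]                # induce one dependence
print(bootstrap_edge_frequencies(data))       # edge (0, 1) should appear in most replicates
```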

20.
Micro-aggregation is a frequently used strategy to anonymize data before they are released to the scientific public. A sample of a continuous random variable is individually micro-aggregated by first sorting and grouping the data into groups of equal size and then replacing the values of the variable in each group by their group mean. In a similar way, data with more than one variable can be anonymized by individual micro-aggregation. Data thus distorted may still be used for statistical analysis. We show that if probabilities and quantiles are estimated in the usual way, by computing relative frequencies and sample quantiles, respectively, these estimates are consistent and asymptotically normal under mild conditions.
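A hedged sketch of individual micro-aggregation for a single continuous variable, assuming group size k = 3: sort, partition into consecutive groups, replace each value by its group mean, and compare a sample quantile before and after. Handling of a remainder group (here merged into the last full group) is an implementation assumption, not necessarily the paper's convention.

```python
import numpy as np

def micro_aggregate(x, k=3):
    """Individually micro-aggregate a 1-D sample: sort, form consecutive
    groups of size k (any remainder merged into the last group), and replace
    each value in a group by the group mean. Values keep their original order."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    out = np.empty_like(x)
    n_groups = max(len(x) // k, 1)
    bounds = [i * k for i in range(n_groups)] + [len(x)]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        idx = order[lo:hi]
        out[idx] = x[idx].mean()
    return out

rng = np.random.default_rng(10)
x = rng.normal(size=1_000)
x_masked = micro_aggregate(x, k=3)
print(np.quantile(x, 0.9), np.quantile(x_masked, 0.9))   # close, as the theory suggests
```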

