Modelling non-stationary ‘Big Data’
Abstract: ‘Fat big data’ characterise data sets that contain many more variables than observations. We discuss the use of both principal components analysis and equilibrium correction models to identify cointegrating relations that handle stochastic trends in non-stationary fat data. However, most time series are wide-sense non-stationary—induced by the joint occurrence of stochastic trends and distributional shifts—so we also handle the latter by saturation estimation. Seeking substantive relationships when there are vast numbers of potentially spurious connections cannot be achieved by merely choosing the best-fitting equation or trying hundreds of empirical fits and selecting a preferred one, perhaps contradicted by others that go unreported. Conversely, fat big data are useful if they help ensure that the data generation process is nested in the postulated model, and increase the power of specification and mis-specification tests without raising the chances of adventitious significance. We model the monthly UK unemployment rate, using both macroeconomic and Google Trends data, searching across 3000 explanatory variables, yet identify a parsimonious, statistically valid, and theoretically interpretable specification.
Keywords: Cointegration; Big data; Model selection; Outliers; Saturation estimation
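The saturation estimation mentioned in the abstract can be illustrated with a minimal sketch of split-half impulse-indicator saturation (IIS): an impulse dummy is created for every observation, the dummies are entered in blocks (here, two halves), and only those with significant t-statistics are retained. This is a deliberately simplified, hypothetical variant for illustration — the function name `iis_split_half` and the fixed t-critical value are assumptions, and the paper's actual analysis relies on full general-to-specific model selection rather than this two-block shortcut.

```python
import numpy as np

def iis_split_half(y, X, t_crit=2.0):
    """Sketch of split-half impulse-indicator saturation (IIS).

    Adds an impulse dummy for each observation in one half of the
    sample, retains dummies whose |t| exceeds t_crit, repeats for the
    other half, and returns the union of retained outlier positions.
    Hypothetical simplified variant for illustration only.
    """
    n = len(y)
    halves = [np.arange(n // 2), np.arange(n // 2, n)]
    retained = []
    for half in halves:
        # Indicator matrix: one dummy column per observation in this half.
        D = np.zeros((n, len(half)))
        D[half, np.arange(len(half))] = 1.0
        Z = np.hstack([X, D])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        dof = n - Z.shape[1]
        s2 = resid @ resid / dof
        cov = s2 * np.linalg.pinv(Z.T @ Z)
        tstats = beta / np.sqrt(np.diag(cov))
        # Keep only the dummy coefficients (the first X.shape[1] are slopes).
        for obs, t in zip(half, tstats[X.shape[1]:]):
            if abs(t) > t_crit:
                retained.append(int(obs))
    return sorted(retained)
```

Because each half adds only n/2 dummies at a time, every block regression remains estimable even though the full set of n indicators exceeds the number of observations — the same logic that lets saturation methods operate on ‘fat’ candidate sets.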
This article is indexed in databases including ScienceDirect.
