Modelling with Massively Missing Data


at Facebook, Menlo Park, CA on Mar 20, 2014
Neil D. Lawrence, University of Sheffield



Supervised deep learning techniques now dominate in terms of performance for complex classification tasks such as ImageNet. For these, the set of inputs (features) and targets (labels) is typically well defined in advance. However, for many tasks in artificial intelligence the questions that need to be answered evolve, alongside the features that we can acquire. For example, imagine we wish to infer the health status of individuals by building population-scale models based on clinical data. For most people in the population, most of the data will be missing because clinical tests are not applied to patients as a matter of course. Indeed, some of the features we may wish to use in our model may not even exist when our model is first designed (e.g. emerging clinical tests and treatments). We refer to this scenario as 'massively missing data'. It is a scenario humans are faced with every day. Almost all of the time we are missing almost all of the data. And yet we have no difficulty assimilating disparate pieces of information from a wide range of sources to draw inferences about our world. Implementing machine learning systems that replicate this characteristic requires model architectures that can be adapted at 'runtime' as the data evolves; we don't want to be limited by decisions made at 'design time', when perhaps a more limited feature set existed. This poses particular challenges that we will address in this talk.
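One way to make the 'massively missing data' idea concrete (this sketch is an illustration, not material from the talk itself) is to note that in a joint probabilistic model, a patient's unmeasured features can simply be marginalised out, and inference conditions only on whatever subset was actually observed. The snippet below shows this for a multivariate Gaussian over a hypothetical set of five clinical features; the model, dimensions, and `condition_gaussian` helper are all invented for illustration:

```python
import numpy as np

# Hypothetical joint Gaussian model over d clinical features.
# Missing tests are never imputed with placeholders; instead we
# condition on the observed subset and marginalise the rest.

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)  # a valid (positive-definite) covariance
mu = np.zeros(d)

def condition_gaussian(mu, Sigma, obs_idx, obs_vals):
    """Posterior over the unobserved features given the observed subset."""
    obs_idx = np.asarray(obs_idx)
    mis_idx = np.setdiff1d(np.arange(len(mu)), obs_idx)
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_mo = Sigma[np.ix_(mis_idx, obs_idx)]
    S_mm = Sigma[np.ix_(mis_idx, mis_idx)]
    K = S_mo @ np.linalg.inv(S_oo)  # regression coefficients
    mu_post = mu[mis_idx] + K @ (obs_vals - mu[obs_idx])
    Sigma_post = S_mm - K @ S_mo.T
    return mis_idx, mu_post, Sigma_post

# Only two of the five tests were run for this patient; the other
# three features get a posterior rather than an imputed value.
mis, m_post, S_post = condition_gaussian(
    mu, Sigma, obs_idx=[1, 3], obs_vals=np.array([0.5, -1.2])
)
```

Because the observed index set is an argument rather than a fixed design-time choice, the same model handles any pattern of missingness, and new features can be appended to `mu` and `Sigma` as tests emerge; richer models in the same spirit (e.g. Gaussian processes) follow the same conditioning pattern.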