I’ve selected just two papers from this year’s conference — not that there weren’t many more worth mentioning, but I wanted to keep this fairly short.

Model Composition

A general trend at the conference was the composition of probabilistic models and neural networks.

Composing graphical models with neural networks for structured representations and fast inference

Matthew Johnson · David Duvenaud · Alex Wiltschko · Ryan P Adams · Sandeep R Datta

There were other papers in this vein too. They rely strongly on good automatic differentiation tools, implemented in software, and they really demonstrate how creative you can get once those tools are available.
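To give a flavour of what these tools make easy, here is a minimal sketch using the autograd package (one of several frameworks that provide this; the example itself is purely illustrative and is not taken from any of the papers above):

```python
import autograd.numpy as np   # thin wrapper around numpy that records operations
from autograd import grad     # grad(f) returns a function computing df/dparams

def loss(params, x, y):
    """Squared error of a tiny one-layer network; purely illustrative."""
    W, b = params
    pred = np.tanh(np.dot(x, W) + b)
    return np.sum((pred - y) ** 2)

# Gradient with respect to the first argument (the parameters),
# obtained without writing any derivative code by hand.
loss_grad = grad(loss)

x = np.random.randn(5, 3)
y = np.random.randn(5, 2)
params = (np.random.randn(3, 2), np.zeros(2))
dW, db = loss_grad(params, x, y)
print(dW.shape, db.shape)  # (3, 2) (2,)
```

The point is not this particular snippet, but that gradients of arbitrarily composed models — a graphical model stitched together with neural network recognition networks, as in the paper above — come essentially for free once the model is written in such a framework.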

Developing Theory of GANs

GANs were obviously a big trend, but what was particularly interesting (for me) was theoretical work that was starting to emerge … or at least ‘analysis’ work that could be found in papers such as DISCO Nets:

DISCO Nets : DISsimilarity COefficients Networks

Diane Bouchacourt · Pawan K Mudigonda · Sebastian Nowozin

An open question for me is what the range of application areas for these models is. One thing you can normally do with unsupervised learning is deal with missing data (e.g. in clinical applications, where most of the data is missing most of the time). Although the image work is exciting, it would be great to see these models move into other domains.
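As a deliberately simple illustration of what ‘dealing with missing data’ means for a generative model, here is a sketch that fits a multivariate Gaussian and then imputes missing entries from the conditional distribution given the observed ones; the data and variable names are made up for the example, and fitting on complete rows only is a shortcut (EM over the incomplete rows would be the more careful choice):

```python
import numpy as np

# Toy data: 200 samples, 3 variables, with some entries missing (NaN).
rng = np.random.default_rng(0)
full = rng.multivariate_normal([0.0, 1.0, -1.0],
                               [[1.0, 0.6, 0.2],
                                [0.6, 1.0, 0.3],
                                [0.2, 0.3, 1.0]], size=200)
data = full.copy()
data[rng.random(data.shape) < 0.2] = np.nan  # roughly 20% missing at random

# Fit the generative model (here just a mean and covariance) on complete rows.
complete = data[~np.isnan(data).any(axis=1)]
mu, Sigma = complete.mean(axis=0), np.cov(complete, rowvar=False)

def impute_row(row, mu, Sigma):
    """Replace NaNs with E[missing | observed] under the fitted Gaussian."""
    out = row.copy()
    m = np.isnan(row)                  # mask of missing entries
    o = ~m
    if not m.any():
        return out
    if not o.any():                    # nothing observed: fall back to the marginal mean
        out[m] = mu[m]
        return out
    # Conditional mean of a Gaussian: mu_m + S_mo S_oo^{-1} (x_o - mu_o)
    out[m] = mu[m] + Sigma[np.ix_(m, o)] @ np.linalg.solve(
        Sigma[np.ix_(o, o)], row[o] - mu[o])
    return out

imputed = np.array([impute_row(r, mu, Sigma) for r in data])
```

A deep generative model would play the same role as the Gaussian here, but with a far richer joint distribution over the variables.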

Workshops and Symposia

Often the workshops are the highlight of the meeting, and that was true again this year. There were very many, but this year I went to the Machine Learning and Law Symposium, the Machine Learning for Health Workshop and the Bayesian Deep Learning Workshop. There were plenty of other excellent workshops; I chose these because I’d been asked to participate in them, and I find it works better if I commit to one workshop rather than moving about (whenever I do that I find the workshop component of NIPS very unsatisfying).

Machine Learning and the Law Symposium

Some great work was presented here on challenges in fairness, although I worried about how far the language we use to describe these challenges is from that of ‘normal people’. I worried a bit that we (in both the legal and CS professions) are becoming obsessive about the technical side and less concerned about the practical applications. Perhaps social science perspectives would have helped here.

I thought it was very interesting how far apart the language of machine learning and legal experts is. I got to chair the final panel session, and one observation I made is that both law and computer science are about codifying a process: legal codes and programming codes. But there’s a fundamental difference in how we expect those codes to be interpreted. In law the wording is sometimes left loose to allow flexibility in interpretation; in CS there needs to be absolute precision in interpretation. I think this leads to a gap which will take time to bridge, and we need meetings with additional experts from other domains to help. I missed his talk, but I thought Solon Barocas was one of the panelists best at bridging this divide.

Machine Learning for Health

I thought it was promising how many real-world applications are emerging, but data access and availability remain a recurring challenge. Indeed, this is becoming a theme for the NIPS community: because we have succeeded in developing software that is easy to share and highly flexible, it is becoming clear that the main bottleneck is data and data sharing. There was a workshop on AI for Data Science that is about streamlining this process, but it also relates to the challenge of quantifying the value of data. Amazon is an ideal place to address these challenges, given the size of the company and the large number of data sets it holds.

Bayesian Deep Learning

At the Bayesian Deep Learning workshop I very much enjoyed Ryan Adams’ tribute to David MacKay. David Blei was also very good, and combined with Zoubin Ghahramani’s historical perspectives and Shakir Mohamed’s optimistic outlook (which I share, despite my ribbing of him in the panel session!), I thought this was an excellent workshop. It even had Ian Goodfellow thinking about ways of doing GANs + Bayes, which brings to mind areas such as ABC and simulation as well. Great stuff.