NIPS Experiment Analysis

Sorry for the relative silence on the NIPS experiment. Corinna and I have both done some analysis on the data. Over the Christmas break I focussed my analysis on the ‘raw numbers’ that people have been discussing. In particular, I wanted to quantify the certainty we can place on those numbers. There are a couple of different ways of doing this, the bootstrap or a Bayesian analysis; I went for the latter. Corinna has also been doing a lot of work on how the scores correlate, and the ball is in my court to pick up on that. Before doing so, however, I wanted to complete this initial Bayesian analysis of the data. In doing so, we’re also releasing a little more information on the numbers.

The headline figure is that if we re-ran the conference we would expect anywhere between 38% and 64% of the same papers to be presented again. Several commentators identified this as the figure attendees are really interested in. Of course, when you think about it, you also realise it is a difficult figure to estimate: the power of the study is reduced because the figure is based only on papers that received at least one accept (rather than the full 168 papers used in the study).
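To give a flavour of the kind of calculation involved, here is a minimal sketch of how a credible interval on a re-acceptance fraction can arise from a simple Beta-Binomial model. The counts below are hypothetical placeholders, not the experiment’s numbers, and the notebook mentioned below is the authoritative version of the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical counts, for illustration only -- the real numbers and the
# full model are in the notebook on GitHub.
n_accepted_by_first = 40   # papers accepted by the first committee (placeholder)
n_also_accepted = 22       # of those, also accepted by the second committee (placeholder)

# Treat each paper accepted by the first committee as a Bernoulli trial for
# whether the second committee also accepted it. With a uniform Beta(1, 1)
# prior on the re-acceptance probability, the posterior after observing
# k successes in n trials is Beta(k + 1, n - k + 1).
k, n = n_also_accepted, n_accepted_by_first
posterior = rng.beta(k + 1, n - k + 1, size=100_000)

lower, upper = np.percentile(posterior, [2.5, 97.5])
print(f"posterior mean: {posterior.mean():.2f}")
print(f"95% credible interval: ({lower:.2f}, {upper:.2f})")
```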

Anyway, details of the Bayesian analysis are available in a Jupyter notebook on GitHub.

Proceedings of Machine Learning Research

Back in 2006, when the wider machine learning community was becoming aware of Gaussian processes (mainly through the publication of the Rasmussen and Williams book), Joaquin Quinonero Candela, Anton Schwaighofer and I organised the Gaussian Processes in Practice workshop at Bletchley Park. We planned a short proceedings for the workshop, but when I contacted Springer about their LNCS proceedings a rather dismissive note came back, with an associated prohibitive cost. Given that the ranking of LNCS wasn’t (and never has been) that high, this seemed a little presumptuous on their part. In response I contacted JMLR and asked if they’d ever considered a proceedings track. The result was that Leslie Pack Kaelbling asked me to launch the proceedings track.

JMLR isn’t just open access: there is also no charge to authors. It is hosted on servers at MIT and managed by the community.

We launched the proceedings in March 2007 with the first volume, from the Gaussian Processes in Practice workshop. Since then there have been 38 volumes, including two currently in the pipeline. The proceedings publishes several leading conferences in machine learning, including AISTATS, COLT and ICML.

From the start we felt that it was important to share the branding of JMLR with the proceedings, to show that the publication was following the same ethos as JMLR. However, this led to the rather awkward name: JMLR Workshop and Conference Proceedings, or JMLR W&CP. Following discussion with the senior editorial board of JMLR we now feel the time is right to rebrand with the shorter “Proceedings of Machine Learning Research”.

As part of the rebranding process the editorial team for the Proceedings of Machine Learning Research (which consists of Mark Reid and myself) is launching a small consultation exercise, looking for suggestions on how we can improve the service for the community. Please give us your feedback by leaving comments on this blog post or via Facebook or Twitter!

Can you select for ‘robustness’?

My mum and son preparing the ground for non-robust seeds

I was at the allotment the other day, and my son Frederick asked how the seeds we plant could ever survive when it took so much work and preparation to plant and support them. I said it was because they’ve been selected (by breeding) to produce high yield, and that tends to make them less robust (in comparison to e.g. weeds). So he asked why we don’t breed in robustness. I instinctively said that you can’t do that, because breeding involves selecting for a characteristic, whereas (I think) robustness implies performance under a range of different conditions, some of which will not even be known to us. Of course, I agree you can breed in resistance to a particular circumstance, but I think robustness is about resistance to many circumstances. I think a robust population will include wide variation in characteristics, whereas selection by breeding tends to refine the characteristics, reducing variation. My reply was instinctive, but I think it’s broadly speaking correct, although it would be nice to find some counterexamples!
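To make that intuition concrete, here is a toy simulation; it is purely illustrative, not a model of real plant genetics. Each plant splits a fixed budget across three traits, and yield in a given environment is the match between those traits and the environment’s demands. Selecting repeatedly for yield under the breeder’s conditions collapses the trait variation and hurts performance in an environment we didn’t anticipate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each plant splits a fixed budget across three traits (say grain size,
# drought tolerance and pest resistance); rows of `traits` sum to one.
n_plants = 10_000
traits = rng.dirichlet(np.ones(3), size=n_plants)

breeders_field = np.array([1.0, 0.0, 0.0])  # the conditions we select under
drought_year = np.array([0.2, 0.8, 0.0])    # conditions we didn't anticipate

def yields(traits, environment):
    """Yield is the dot product of a plant's traits with the environment's demands."""
    return traits @ environment

# "Breed" by repeatedly keeping the higher-yielding half of the population,
# always judged under the breeder's field conditions.
bred = traits.copy()
for _ in range(4):
    y = yields(bred, breeders_field)
    bred = bred[y >= np.median(y)]

print("trait variation:  wild %.3f  bred %.3f"
      % (traits.std(axis=0).mean(), bred.std(axis=0).mean()))
print("breeder's field:  wild %.2f  bred %.2f"
      % (yields(traits, breeders_field).mean(), yields(bred, breeders_field).mean()))
print("drought year:     wild %.2f  bred %.2f"
      % (yields(traits, drought_year).mean(), yields(bred, drought_year).mean()))
```

In this toy setup the bred line wins comfortably under the conditions it was selected for, but its reduced variation means it has little left to offer when the conditions change.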

Beware the Rise of the Digital Oligarchy

The Guardian’s media network published a short article I wrote for them on 5th March. They commissioned an article of about 600 words, which appeared on the Guardian’s site, but the original version I wrote was around 1,400 words. I agreed a week’s exclusivity with the Guardian; now that’s up, the longer version (about twice as long) is below.

On a recent visit to Genova, during a walk through the town with my colleague Lorenzo, he pointed out what he said was the site of the world’s first commercial bank. The Bank of St George, located just outside the city’s old port, grew to be one of the most powerful institutions in Europe: it bankrolled Charles V and governed many of Genova’s possessions on the republic’s behalf. The trust that its clients placed in the bank is shown in the records of its account holders. There are letters from Christopher Columbus to the bank instructing them in the handling of his affairs. The influence of the bank was based on the power of accumulated capital: capital it could accumulate through the trust of a wealthy client base. The bank was so important in the medieval world that Machiavelli wrote that “if even more power was ceded by the Genovan republic to the bank, Genova would even outshine Venice amongst the Italian city states.” The Bank of St George was once one of the most influential private institutions in Europe.

Today the power wielded by accumulated capital can still dominate international affairs, but a new form of power is emerging: that of accumulated data. Like Hansel and Gretel trailing breadcrumbs into the forest, we now leave a trail of data-crumbs wherever we travel: supermarket loyalty cards, text messages, credit card transactions, web browsing and social networking. The power of this data emerges, like that of capital, when it’s accumulated. Data is the new currency.

Where does this power come from? Cross-linking of different data sources can give deep insights into personality, health, commercial intent and risk. The aim is now to understand and characterize the population, perhaps down to the individual level. Personalization is the watchword for your search results, your social network news feed, your movie recommendations and even your friends. This is not a new phenomenon: psychologists and social scientists have always attempted to characterize the population, to better understand how to govern or who to employ. They acquired their data through carefully constructed questionnaires designed to probe personality and intelligence. The difference is the granularity with which these characterizations are now made: instead of understanding groups and sub-groups in the population, the aim is to understand each person. There are wonderful possibilities: we should better understand health, give earlier diagnoses for diseases such as dementia and provide better support to the elderly and otherwise incapacitated people. But there are also major ethical questions, and they don’t seem to be adequately addressed by our current legal frameworks.

For Columbus it was clear: he was the owner of the money in his accounts. His instructions to the bank tell them how to distribute it to friends and relations. They only held his capital under license, a convenient storage facility. Ownership of data is less clear. Historically, acquiring data was expensive: questionnaires were painstakingly compiled and manually distributed. When answering, the risk of revealing too much of ourselves was small because the data never accumulated. Today we leave digital footprints in our wake, and acquisition of this data is relatively cheap. It is the processing of the data that is more difficult.

I’m a professor of machine learning. Machine learning is the main technique at the heart of the current revolution in artificial intelligence. A major aim of our field is to develop algorithms that better understand data: algorithms that can reveal the underlying intent or state of health behind the information flow. Machine learning techniques are already used to recognise faces and make recommendations; as we develop better algorithms that better aggregate data, our understanding of the individual also improves.

What do we lose by revealing so much of ourselves? How are we exposed when so much of our digital soul is laid bare? Have we engaged in a Faustian pact with the internet giants? Like Faust, we might agree to the pact in moments of levity, or despair, perhaps weakened by poor health. My father died last year, but there are still echoes of him online. Through his account on Facebook I can be reminded of his birthday or told of common friends. Our digital souls may not be immortal, but they certainly outlive us. What we choose to share also affects our family: my wife and I may be happy to share information about our genetics, perhaps for altruistic reasons, or just out of curiosity. But by doing so we are also sharing information about our children’s genomes. Using a supermarket loyalty card gains us discounts on our weekly shop, but also gives the supermarket detailed information about our family diet. In this way we’d expose both the nature and nurture of our children’s upbringing. Will our decisions to make this information available haunt our children in the future? Are we equipped to understand the trade-offs we make by this sharing?

There have been calls from Elon Musk, Stephen Hawking and others to regulate artificial intelligence research. They cite fears about autonomous and sentient artificial intelligence that could self-replicate beyond our control. Most of my colleagues believe that such breakthroughs are beyond the horizon of current research. Sentient intelligence is still not at all well understood. As Ryan Adams, a friend and colleague based at Harvard, tweeted:

Personally, I worry less about the machines, and more about the humans with enhanced powers of data access. After all, most of our historic problems seem to have come from humans wielding too much power, either individually or through institutions of government or business. Whilst sentient AI does seem beyond our horizons, one aspect of it is closer to our grasp. An aspect of sentient intelligence is ‘knowing yourself’: predicting your own behaviour. It seems plausible to me that through the accumulation of data computers may start to ‘know us’ even better than we know ourselves. I think that one concern of Musk and Hawking is that the computers would act autonomously on this knowledge. My more immediate concern is that our fellow humans, through the modern equivalents of the Bank of St George, will exploit this knowledge, leading to a form of data-oligarchy. And in the manner of oligarchies, the power will be in the hands of the very few, but its effects will be felt by the many.

How do we control for all this? Firstly, we need to consider how to regulate the storage of data. We need better models of data-ownership. There was no question that Columbus was the owner of the money in his accounts. He gave it under license, and he could withdraw it at his pleasure. For the data repositories we interact with we have no right of deletion. We can withdraw from the relationship, and in Europe data protection legislation gives us the right to examine what is stored about us. But we don’t have any right of removal. We cannot withdraw access to our historic data if we become concerned about the way it might be used. Secondly, we need to increase transparency. If an algorithm makes a recommendation for us, can we know on what information in our historic data that recommendation was based? In other words, can we know how it arrived at that recommendation? The first challenge is a legislative one; the second is both technical and social. It involves increasing people’s understanding of how data is processed and what the capabilities and limitations of our algorithms are.

There are opportunities and risks with the accumulation of data, just as there were (and still are) with the accumulation of capital. I think there are many open questions, and we should be wary of anyone who claims to have all the answers. However, two directions seem clear: we need to increase the power of the people, and we need to develop their understanding of the processes. It is likely to be a fraught process, but we need to form a data-democracy: data governance for the people, by the people and with the people’s consent.

Neil Lawrence is a Professor of Machine Learning at the University of Sheffield. He is an advocate of “Open Data Science” and an advisor to a London-based startup, CitizenMe, that aims to allow users to “reclaim their digital soul”.

Questions on Deep Gaussian Processes

I was recently contacted by Chris Edwards, who is putting together an article for Communications of the ACM on deep learning and had a few questions on deep Gaussian processes. He kindly agreed to let me use his questions and my answers in a blog post.
1) Are there applications that suit Gaussian processes well? Would they typically replace the neural network layers in a deep learning system or would they possibly be mixed and matched with neural layers, perhaps as preprocessors or using the neural layers for stuff like feature extraction (assuming that training algorithms allow for this)?
Yes, I think there are applications that suit Gaussian processes very well: in particular, applications where data is scarce (this doesn’t necessarily mean small data sets, but data that is scarce relative to the complexity of the system being modeled). In these scenarios, handling uncertainty in the model appropriately becomes very important. Two examples that have exploited this characteristic in practice are GaussianFace by Lu & Tang, and Bayesian optimization (e.g. Snoek, Larochelle and Adams). Almost all my own group’s work also exploits this characteristic. A further manifestation of this effect is what I call “massively missing data”. Although we are getting a lot of data at the moment, when you think about it you realise that almost all the things we would like to know are still missing almost all of the time. Deep models have performed well in situations where data sets are very well characterised and labeled. However, one of the domains that inspires me is clinical data, where this isn’t the case. In clinical data most people haven’t had most clinical tests applied to them most of the time. Also, the nature of clinical tests evolves (as do the diseases that affect patients). This is an example of massively missing data. I think Gaussian processes provide a very promising approach to handling this data.
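As a small, concrete illustration of that point, here is a minimal sketch of a GP fitted to a handful of noisy observations; it uses scikit-learn’s GP implementation rather than any of our own software, and the data are synthetic. The point is that the predictions come with error bars that grow away from the data, which is exactly the behaviour you want when data is scarce.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# A handful of noisy observations of an unknown function.
X = rng.uniform(-3, 3, size=(8, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(8)

# RBF covariance plus a noise term; hyperparameters are fitted by
# maximising the marginal likelihood.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, y)

# Predictive uncertainty: small near the data, large where we have seen nothing.
X_test = np.linspace(-5, 5, 9).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
for x, m, s in zip(X_test.ravel(), mean, std):
    print(f"x = {x:5.2f}   prediction = {m:6.2f} ± {2 * s:.2f}")
```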
With regard to whether they are a replacement for deep neural networks, I think in the end they may well be mixed and matched. From a Gaussian process perspective the neural network layers could be seen as a type of ‘mean function’ (a Gaussian process is defined by its mean function and its covariance function), so they can be seen as part of the deep GP framework: deep Gaussian processes enhance the toolkit available. There is therefore no conceptual reason why they shouldn’t be mixed and matched. I think you’re quite right that the low-level feature extraction might still be done by parametric models like neural networks, but it’s certainly important that we use the right techniques in the right domains, and being able to interchange ideas enables that.
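To make the ‘mean function’ idea concrete, here is a from-scratch numpy sketch of the standard GP regression equations with a simple parametric mean function added in; in principle that mean function could be a neural network. This is only an illustration of the concept, not any particular deep GP implementation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def mean_function(x, w=1.5, b=0.3):
    """A simple parametric mean; this could in principle be a neural network."""
    return w * np.tanh(x) + b

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 10)
y = mean_function(X) + 0.5 * np.sin(3 * X) + 0.1 * rng.standard_normal(10)

# Standard GP regression applied to the residuals y - m(X); the parametric
# mean is added back when making predictions.
noise = 0.1**2
K = rbf_kernel(X, X) + noise * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - mean_function(X)))

X_star = np.linspace(-5, 5, 7)
K_star = rbf_kernel(X, X_star)
pred_mean = mean_function(X_star) + K_star.T @ alpha

v = np.linalg.solve(L, K_star)
pred_var = np.diag(rbf_kernel(X_star, X_star)) - np.sum(v**2, axis=0)

for x, m, s in zip(X_star, pred_mean, np.sqrt(pred_var)):
    print(f"x = {x:5.2f}   mean = {m:6.2f}   std = {s:.2f}")
```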
2) Are there training algorithms that allow Gaussian processes to be used today for deep-learning type applications or is this where work needs to be done?
There are algorithms, yes: we have three different approaches right now, and it’s also clear that work on doubly stochastic variational inference (see for example Kingma and Welling, or Rezende, Mohamed and Wierstra) could also be applicable. But more work still needs to be done. In particular, a lot of the success of deep learning has come down to the engineering of the system: how to implement these models on GPUs and scale them to billions of data points. We’ve been starting to look at this (Dai, Damianou, Hensman and Lawrence), but there’s no doubt we are far behind, and it’s a steep learning curve! We also don’t have quite the same computational resources as Facebook, Microsoft and Google!
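For readers who haven’t met the term, ‘doubly stochastic’ refers to having two sources of randomness in the gradient estimate: subsampling the data, and Monte Carlo sampling of the latent variables via the reparameterisation trick. The sketch below is a generic toy illustration of the reparameterisation part only, not our deep GP code: it estimates the gradient of an expectation under a Gaussian by writing samples as z = mu + sigma * eps.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    """Toy integrand; stands in for a log-likelihood term."""
    return (z - 2.0) ** 2

def df(z):
    return 2.0 * (z - 2.0)

# Parameters of the Gaussian q(z) = N(mu, sigma^2) we are optimising.
mu, sigma = 0.0, 1.0

# Reparameterisation trick: write z = mu + sigma * eps with eps ~ N(0, 1),
# so the gradient of E_q[f(z)] with respect to (mu, sigma) becomes an
# expectation we can estimate by Monte Carlo.
eps = rng.standard_normal(100_000)
z = mu + sigma * eps
grad_mu = np.mean(df(z))           # estimates 2 * (mu - 2)
grad_sigma = np.mean(df(z) * eps)  # estimates 2 * sigma

print("estimated gradients:", grad_mu, grad_sigma)
print("exact gradients:    ", 2 * (mu - 2.0), 2 * sigma)
```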
3) Is the computational load similar to that of deep-learning neural networks or are the applications sufficiently different that a comparison is meaningless?
We carry an additional algorithmic burden: that of propagating uncertainty around the network. This is where the algorithmic problems begin, but it is also where we’ve had most of the breakthroughs. Propagating this uncertainty will always come with an additional load for a particular network, but it has particular advantages, like dealing with the massively missing data I mentioned above and automatic regularisation of the system. This has allowed us to automatically determine aspects like the number of layers in the network and the number of hidden nodes in each layer. This type of structural learning is very exciting and was one of the original motivations for considering these models. It has also enabled us to develop variants of Gaussian processes that can be used for multiview learning (Damianou, Ek, Titsias and Lawrence); we intend to apply these ideas to deep GPs as well.
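A crude way to see what ‘propagating uncertainty’ means is the Monte Carlo sketch below: two GPs are stacked, and samples from the first layer’s predictive distribution are pushed through the second. This is only an illustration of the idea, using scikit-learn and a toy data set in which we pretend the hidden layer is observed; the real deep GP algorithms treat the hidden layer as latent and use variational approximations rather than naive sampling.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy data generated through a two-stage process x -> h -> y. In a real
# deep GP the intermediate h would be latent; here we pretend it is
# observed purely to illustrate how uncertainty is propagated.
X = rng.uniform(-3, 3, size=(15, 1))
H = np.sin(X) + 0.05 * rng.standard_normal(X.shape)
Y = H**2 + 0.05 * rng.standard_normal(X.shape)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
layer1 = GaussianProcessRegressor(kernel=kernel).fit(X, H.ravel())
layer2 = GaussianProcessRegressor(kernel=kernel).fit(H, Y.ravel())

# Propagate uncertainty: sample from layer 1's predictive distribution at a
# test input, push each sample through layer 2, and sample again from the
# resulting Gaussians.
x_star = np.array([[1.5]])
m1, s1 = layer1.predict(x_star, return_std=True)
h_samples = rng.normal(m1[0], s1[0], size=200).reshape(-1, 1)
m2, s2 = layer2.predict(h_samples, return_std=True)
y_samples = rng.normal(m2, s2)

print("layer-1 prediction: %.2f ± %.2f" % (m1[0], 2 * s1[0]))
print("propagated output:  %.2f ± %.2f" % (y_samples.mean(), 2 * y_samples.std()))
```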
4) I think I saw a suggestion that GPs are reasonably robust when trained with small datasets – do they represent a way in for smaller organisation without bags of data? Is access to data a key problem when dealing with these data science techniques?
I think it’s a very good question, and it’s an area we’re particularly interested in addressing: how can we bring data science to smaller organisations? I think it relates to our ‘open data science’ initiative (see this blog post here). I refer to this idea as ‘analysis empowerment’. However, I hadn’t particularly thought of deep GPs in this way before, but can I hazard a possible yes to that? Certainly with GaussianFace we saw they could outperform DeepFace (from Facebook) with a small fraction of the data. For us it wasn’t the main motivation for developing deep GPs, but I’d like to think it might be a characteristic of the models. The motivating examples we have are more in the domain of applications that the current generation of supervised deep learning algorithms can’t address, like the interconnection of data sets in health. Many of my group’s papers are about interconnecting different views of the patient (genotype, environmental background, clinical data, survival information … with luck even information from social networks and loyalty cards). We approach this through Gaussian process frameworks to ensure that we can build models that will be fully interconnected in application. We call this approach “deep health”. We aren’t there yet, but I feel there’s a lot of evidence so far that we’re working with a class of models that will do the job. My larger concern is the ethical implications of pulling this scale and diversity of information together. I find the idea of a world where computer models outperform humans in predicting their own behavior (perhaps down to the individual) quite disturbing. It seems to me that now the technology is coming within reach, we need to work hard to also address these ethical questions. And it’s important that this debate is informed by people who actually understand the technology.
5) On a more general point that I think can be explored within this feature, are techniques such as Gaussian processes at a disadvantage in computer science because of their heavy mathematical basis? (I’ve had interviews with people like Donald Knuth and Erol Gelenbe in the past where the idea has come up that computer science and maths should, if not merge, interact a lot more).
Yes, and no. It is true that people seem to have some difficulty with the concept of Gaussian processes, but it’s not that the mathematics is more complex than people are using (at the cutting edge) for deep neural networks. Any of the researchers leading the deep revolution could easily turn their hands to Gaussian processes if they chose to do so. Perhaps at ‘entry’ the concepts seem simpler in deep neural networks, but as you peer ‘deeper’ (forgive the pun) into those models it actually becomes a lot harder to understand what’s going on. The leading people (Hinton, Bengio, LeCun, etc.) seem to have really good intuitions, but these are not always easy to teach. Certainly when Geoff Hinton explains something to me I always feel I’ve got a very good grasp of it at the time, but later, when I try to explain the same concept to someone else, I find I can’t always do it (i.e. he’s got better intuitions than me, and he’s better at explaining than I am). There may be similar issues for explaining deep GPs, but my hope is that once the conceptual hurdle of a GP is surmounted, the resulting models are much easier to analyze. Such analysis should also feed back into the wider deep learning community, and I’m pleased that this is already starting to happen (see Duvenaud, Rippel, Adams and Ghahramani). Gaussian processes also generalise many different approaches to learning and signal processing (including neural networks), so understanding Gaussian processes well gives you an ‘in’ for many different areas. I agree, though, that the perception in the wider community matches your analysis. This is a major reason for the program of summer schools we’ve developed in Gaussian processes. So far we’ve taught over 200 students, and we have two further schools planned for 2015, with a developing program for 2016. We’ve made material freely available online, including lectures (on YouTube) and lab notes. So I hope we are doing something to address the perception that these models are mathematically harder!
I totally agree on the Maths/CS interface. It is, however, slightly frustrating (and perhaps inevitable) how much different academic disciplines become dominated by a particular culture of research. This can create barriers, particularly when it comes to formal publication (e.g. in the ‘leading’ journals). My group’s been working very hard over the last decade to combat this through the organization of workshops and summer schools that bridge the domains. It always seems to me that meeting people face to face helps us gain a shared understanding. For example, a lot of confusion can be generated by the slightly different ways we use technical terminology; it leads to a surprising number of misunderstandings that do take time to work through. However, through these meetings I’ve learned an enormous amount, particularly from the statistics community. Unfortunately, formal outlets and funding for this interface are still surprisingly difficult to find. This is not helped by the fact that the traditional professional societies don’t necessarily bridge the intellectual ground and sometimes engage in their own fights for territory. These cultural barriers also spill over into the organization of funding. For example, in the UK it’s rare that my grant proposals are refereed by colleagues from the Maths/Stats community, or that their grant proposals are refereed by me: they actually go to two totally separate parts of the relevant UK funding body. As a result both sets of proposals can be lost in the wider Maths and CS communities, which is not always conducive to expanding the interface. In the UK I’m hoping that the recent founding of the Alan Turing Institute will cause a bit of a shake-up in this area, and that some of these artificial barriers will fall away. But in summary, I totally agree with the point, while also recognizing that on both sides of the divide we have created communities which can make collaboration harder.

Blogs on the NIPS Experiment

There are now quite a few blog posts on the NIPS experiment, so I wanted to put together a place where I could link to them all. It’s a great set of posts from community mainstays, newcomers and those outside our research fields.

Just as a reminder, Corinna and I were extremely open about the entire review process, with a series of posts about how we engaged the reviewers and processed the data. All that background can be found through a separate post here.

At the time of writing there is also still quite a lot of Twitter traffic on the experiment.

List of Blog Posts

What an exciting series of posts and perspectives!
For those of you that couldn’t make the conference, here’s what it looked like.
And that’s just one of five or six poster rows!

Open Collaborative Grant Writing

Thanks to an introduction to the Sage Math team by Fernando Perez, I just had the pleasure of participating in a large-scale collaborative grant proposal construction exercise, co-ordinated by Nicolas Thiéry. I’ve collaborated on grants before, but for me this was a unique experience because the grant writing was carried out in the open, on GitHub.

The proposal, ‘OpenDreamKit’, is principally about doing as much as possible to smooth collaboration between mathematicians, so that advances in maths can be delivered as rapidly as possible to teachers, researchers, technologists etc. Although, of course, I don’t have to tell you that, because you can read it on GitHub.

It was a wonderful social experiment, and I think it really worked, although a lot of the credit for that surely goes to the people involved (most of whom were there before I came aboard). I really hope this is funded, because collaborating with these people is going to be great.

For the first time on a proposal, I wasn’t the one who was most concerned about the LaTeX template (actually the second time … I’ve worked on a grant once with Wolfgang Huber). But this took things to another level: as soon as a feature was required, the LaTeX template seemed to be updated, almost in real time, I think mainly by Michael Kohlhase.

Socially it was very interesting, because the etiquette of how to interact (on the editing side) was not necessarily clear at the outset. For example, at one point I was tasked with proofreading a section, but ended up doing a lot of rephrasing. I was worried about whether people would be upset that their text had been changed, but actually there was a positive reaction (at least from Nicolas and Hans Fangohr!), which emboldened me to try more edits. As the deadline approached I think others went through a similar transition, because the proposal really came together in the last few days. It was a little like a school dance, where at the start we were all standing at the edge of the room, eyeing each other up, but as DJ Nicolas ramped things up and the music became a little more hardcore (as dawn drew near), barriers broke down and everyone went a little wild. Nicolas produced a YouTube video visualising the GitHub commits.

As Alex Konovalov pointed out, we look like bees pollinating each other’s flowers!

I also discovered great new (for me) tools like appear.in that we used for brainstorming on ‘Excellence’ with Nicolas and Hans: much more convenient than Skype or Hangouts.

Many thanks to Nicolas, and all of the collaborators. I think it takes an impressive bunch of people to pull off such a thing, and regardless of outcome, which I very much hope will be positive, I look forward to further collaborations within this grouping.