The Guardian’s media network published a short article I wrote for them today. I’ll post it here in full after giving them an agreed week’s exclusivity. For the moment here’s the link.
1) Are there applications that suit Gaussian processes well? Would they typically replace the neural network layers in a deep learning system or would they possibly be mixed and matched with neural layers, perhaps as preprocessors or using the neural layers for stuff like feature extraction (assuming that training algorithms allow for this)?
2) Are there training algorithms that allow Gaussian processes to be used today for deep-learning type applications or is this where work needs to be done?
3) Is the computational load similar to that of deep-learning neural networks or are the applications sufficiently different that a comparison is meaningless?
4) I think I saw a suggestion that GPs are reasonably robust when trained with small datasets – do they represent a way in for smaller organisations without bags of data? Is access to data a key problem when dealing with these data science techniques?
5) On a more general point that I think can be explored within this feature, are techniques such as Gaussian processes at a disadvantage in computer science because of their heavy mathematical basis? (I’ve had interviews with people like Donald Knuth and Erol Gelenbe in the past where the idea has come up that computer science and maths should, if not merge, interact a lot more).
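Question 4 touches on small-data robustness. As a hedged aside, here is a minimal sketch of GP regression in plain NumPy, with an RBF kernel and illustrative fixed hyperparameters (the data, lengthscale and noise level are all made up for the example), showing that a sensible posterior can be computed from just five observations:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, noise=1e-2):
    # Standard GP regression equations: posterior mean and variance
    # at test inputs Xstar given noisy observations (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xstar)
    Kss = rbf(Xstar, Xstar)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mean, var

# Five training points: a "small dataset" by deep learning standards.
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gp_posterior(X, y, np.array([0.5]))
```

The posterior variance is the key feature here: with little data the model reports its own uncertainty, rather than silently extrapolating.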
There are now quite a few blog posts on the NIPS experiment; I just wanted to put together a place where I could link to them all. It’s a great set of posts from community mainstays, newcomers and those outside our research fields.
Just as a reminder, Corinna and I were extremely open about the entire review process, with a series of posts about how we were engaging the reviewers and processing the data. All that background can be found through a separate post here.
At the time of writing there is also still quite a lot of twitter traffic on the experiment.
List of Blog Posts
- Eric Price’s original blog post, which seemed to have the largest impact in making the world aware of the experiment.
- John Langford, long-time ML blogger, has his say on the ACM site.
- Lance Fortnow from the computational complexity community adds his thoughts.
- Bert Huang, who was actually on the program committee, gives his perspective.
- A really early post from Aaron Defazio, who was one of the authors of a duplicated paper. He writes about his experience from before the results were widely known.
- The experiment triggers a set of broader musings on peer review from popsci.
- Boaz Barak, who has experience of chairing FOCS, a major CS Theory conference, brings his perspective here.
- Yisong Yue gives the perspective of one of the attendees.
Thanks to an introduction to the Sage Math team by Fernando Perez, I just had the pleasure of participating in a large scale collaborative grant proposal construction exercise, co-ordinated by Nicolas Thiéry. I’ve collaborated on grants before, but for me this was a unique experience because the grant writing was carried out in the open, on github.
The proposal, ‘OpenDreamKit’ is principally about doing as much as possible to smooth collaboration between mathematicians so that advances in maths can be delivered as rapidly as possible to teachers, researchers, technologists etc. Although, of course, I don’t have to tell you because you can read it on github.
It was a wonderful social experiment, and I think it really worked, although a lot of credit to that surely goes to the people involved (most of whom were there before I came aboard). I really hope this is funded, because collaborating with these people is going to be great.
For the first time on a proposal, I wasn’t the one who was most concerned about the LaTeX template (actually the second time … I’ve worked on a grant once with Wolfgang Huber). But this took things to another level: as soon as a feature was required, the LaTeX template seemed to be updated, almost in real time, I think mainly by Michael Kohlhase.
Socially it was very interesting, because the etiquette of how to interact (on the editing side) was not necessarily clear at the outset. For example, at one point I was tasked with proof reading a section, but ended up doing a lot of rephrasing. I was worried about whether people would be upset that their text had been changed, but actually there was a positive reaction (at least from Nicolas and Hans Fangohr!), which emboldened me to try more edits. As the deadline approached I think others went through a similar transition, because the proposal really came together in the last few days. It was a little like a school dance, where at the start we were all standing at the edge of the room, eyeing each other up, but as DJ Nicolas ramped things up and the music became a little more hardcore (as dawn drew near), barriers broke down and everyone went a little wild. Nicolas produced a YouTube video, visualising the github commits.
As Alex Konovalov pointed out, we look like bees pollinating each other’s flowers!
I also discovered great new (for me) tools like appear.in that we used for brainstorming on ‘Excellence’ with Nicolas and Hans: much more convenient than Skype or Hangouts.
Many thanks to Nicolas, and all of the collaborators. I think it takes an impressive bunch of people to pull off such a thing, and regardless of outcome, which I very much hope will be positive, I look forward to further collaborations within this grouping.
Just back from NIPS, where it was really great to see the results of all the work everyone put in. I really enjoyed the program and thought the quality of all the presented work was really strong. Both Corinna and I were particularly impressed by the effort the oral presenters put in to make their work accessible to such a large and diverse audience.
We also released some of the figures from the NIPS experiment, and there was a lot of discussion at the conference about what the result meant.
As we announced at the conference, the consistency figure was 25.9%. I just wanted to confirm that, in the spirit of openness we’ve pursued across the entire conference process, Corinna and I will provide a full write-up of our analysis and conclusions in due course!
Some of the commentary in the existing debate is missing some of the background information we’ve tried to generate, so I just wanted to write a post that summarises that information and highlights its availability.
With the help of Nicolo Fusi, Charles Twardy and the entire Scicast team we launched a Scicast question a week before the results were revealed. The comment thread for that question already contained a good deal of interesting discussion before the conference. Just for informational purposes: before we began reviewing, Corinna forecast this figure would be 25% and I forecast it would be 20%. The box plot summary of predictions from Scicast is below.
Comment at the Conference
There was also a good deal of debate at the conference about what the results mean; a few attempts to answer this question (based only on the inconsistency score and the expected accept rate for the conference) are available in this little Facebook discussion and on this blog post.
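To give a flavour of the kind of back-of-envelope calculation in those discussions, here is a hedged sketch of the baseline against which consistency is usually compared (the accept rate r = 0.25 below is an illustrative value, not the conference's official figure):

```python
# Back-of-envelope baseline: if two committees each independently
# accepted a random fraction r of papers, then a given paper is
# accepted by both with probability r**2, and accepted by exactly
# one committee (a disagreement) with probability 2 * r * (1 - r).
def random_committee_stats(r):
    both = r * r                 # accepted by both committees
    disagree = 2 * r * (1 - r)   # accepted by exactly one committee
    return both, disagree

both, disagree = random_committee_stats(0.25)
# With r = 0.25: both = 0.0625, disagree = 0.375.
```

Observed consistency scores are then read against this fully-random baseline (and against the perfect-agreement extreme) to judge how much signal the review process carries.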
Background Information on the Process
Just to emphasise previous posts on this year’s conference see below:
- NIPS Decision Time
- Reviewer Calibration for NIPS
- Reviewer Recruitment and Experience
- Paper Allocation for NIPS
Software on Github
And finally there is a large amount of code available on a github site for allowing our processes to be recreated. A lot of it is tidied up, but the last sections on the analysis are not yet done because it was always my intention to finish those when the experimental results are fully released.
On Wednesday last week I attended an “Open Meeting” organised by the UK’s EPSRC Research Council on the Alan Turing Institute. The Turing Institute is a new government initiative that stems from a letter from our Chief Scientific Adviser to our Prime Minister about the “age of algorithms”. It aims to provide an international centre of excellence in data science.
The government has provided 42 million pounds of funding (about 60-70 million dollars) and Universities interested in partnering in the Turing Institute are expected to bring 5 million pounds (8 million dollars) to the initiative, to be spent over 5 years.
It seemed clear that the EPSRC will require the institute to be located in one place, and there was much talk of ‘critical mass’, which made me wonder what ‘critical mass’ means in data science; after all, we aren’t building a Large Hadron Collider, and one of the most interesting challenges of the new age of data is its distributed nature. I asked a question about this and was given the answers you might expect: flagship international centre of excellence, stimulating environment, attracting the best of the best, etc. Nothing was particularly specific to data science.
In my own area of machine learning the UK has a lot of international recognition, but one of the features I’ve always enjoyed is the distributed nature of the expertise. The groups that spring first to mind are Cambridge (Engineering), Edinburgh (Informatics), UCL (Computer Science and Gatsby) and recently Oxford has expanded significantly (CS, Engineering and Statistics). I’ve always enjoyed the robustness that such a network of leading groups brings. It’s evolved over a period of 20 years, and those of us that have watched it grow are incredibly proud of what the UK has been able to achieve with relatively few people.
Data science requires strong interactions between statisticians and computer scientists. It requires knowledge of classical techniques and modern computational capabilities. The pool of expertise is currently rather small relative to the demand. As a result I find myself constantly in demand within my own University, mainly to advise on the capabilities that current approaches to analysis have. A recent xkcd comic cleverly reminded us of how hard it can be to explain the gap between those things that are easy and those things that are virtually impossible. Although in many cases where advice is needed it’s not the full explanation that’s required, just the knowledge. Many expensive errors can be avoided by just a little access to this knowledge. Back in July I posted a position paper targeting exactly this problem, and in Sheffield we are pursuing the “Open Data Science” agenda I proposed with vigour. Indeed, I sometimes wonder if my group is not more useful for this advice (which rarely involves any intellectual novelty) than for the ideas we push forward in our research. However, our utility as advisors is much more difficult to quantify, particularly because it often won’t lead to a formal collaboration.
I like analogies, but I think that ‘critical mass’ here is the wrong one. To give better access to expertise, what is required is a higher surface area to volume ratio, not a greater mass. Communication between experts is important, but we are fortunate in the UK to have a geographically close network of well connected Universities. Many international visitors take the time to visit two or three of the leading groups when they are here, so I think the analogy of a lung is a far better one for describing what is required for UK data science. I’m pleased the government has recognised the importance of data science, I just hope that in their rush to create a flagship institute, with a large, headline-grabbing investment figure attached, they don’t switch off the incubator that sustains our developing lungs.
Yesterday we finished our third Sheffield school. As with the previous events we’ve ended with a one day workshop focussed on Gaussian processes, this time on using them for feature extraction. With such a busy summer it was pretty intimidating to take on the school so shortly after we had sent out decisions on NIPS. As ever the group came through with the organisation though. This time out Zhenwen Dai was the main organiser, but once again he could never have done it without the rest of the group chipping in. It’s another reminder that when you are working with great people, great things can happen.
The school always gives me a special kind of energy, that which you can only get from seeing people enthuse about the things you care about. We were very lucky to have such a great group of speakers: Carl Rasmussen, Dan Cornford, Mike Osborne, Rich Turner, Joaquin Quinonero Candela, and then at the workshop Carl Henrik Ek, Andreas Damianou, Victor Prisacariu and Chaochao Lu. It always feels part like a family reunion (we had brief overlaps between Carl, Joaquin (Sheffield Tap!), Lehel Csato and Magnus Rattray, all four of whom were in Sheffield for the 2005 GPRT) and part like a welcoming event for new researchers. We covered important new developments in probabilistic numerics (Mike Osborne), time series processing (Rich Turner) and control (Carl Rasmussen). Joaquin also gave us insights into the evidence and then presented to a University-wide audience on machine learning at Facebook.
In the workshop we also saw how GPs can be used for multiview learning (Carl Henrik Ek), audio processing (Rich Turner), deep learning (Andreas Damianou), shape representation (Victor Prisacariu) and face identification (Chaochao Lu).
We’ve now taught around 140 students through the schools in Sheffield and a further 60 through roadshows to Uganda and Colombia. Perhaps the best bit was watching everyone head for the Devonshire Cat after the last lecture to continue the debate. I think we all probably remember summer schools from our early times in research that were influential (for me the NATO ASI on Machine Learning and Generalisation, for many it will be the regular MLSS events). It’s nice to hope that this series of events may have also done something to influence others. The next scheduled events will be roadshows in Australia in February with Trevor Cohn and Kenya in June with Ciira wa Maina and John Quinn (although the Kenyan event will be more data science focussed than GP focussed).
Thanks to all in the group for organising!