Re-work Interview Questions
I’m talking at the Re-work event on deep learning in London on 22-23 September. They’ve asked a few questions for a pre-event interview. The blog post of the interview is here. I’ve reposted my answers below.
- What started your work in deep learning?
Geoff Hinton gave a talk at the Newton Institute in 1997 on a model called the “Hierarchical Community of Experts”; his explanation of the need for hierarchies was a significant influence on my thinking. Yann LeCun had spoken at the same conference on convolutional neural networks for digits, which was another really impressive demonstration, but the ideas that were closest to my own work (and remain close today) were those that Hinton expressed in his talk. I also met my wife at that meeting, so it turned out to be quite important for me!
- What are the key factors that have enabled recent advancements in deep learning?
The recent advances in deep learning have all been about supervised learning on very large data sets which are ‘weakly structured’, such as images or sequence data. The methodologies themselves haven’t advanced much from what Yann spoke about at the Newton Institute in 1997, but what has changed is the massive availability of data and practical compute (through GPUs). Back in the late 1980s Tony Robinson had been experimenting with recurrent neural network speech recognisers trained on Transputers, but the company making Transputers collapsed, closing that line of research. Modern GPUs have a lot of similarities to Transputers, but the companies that make them are far from collapsing!
- What are the main types of problems now being addressed in the deep learning space?
I call these problems ‘cognitive problems’, by which I mean ones that humans are already naturally good at: speech, vision, language. That’s because these are the only domains where we can easily access the vast amounts of data that are needed to train these systems. Even then, an enormous amount of human labour labelling the data is needed to deliver results.
- What are the practical applications of your work and what sectors are most likely to be affected?
My own work is much more focussed on unsupervised learning, multiview learning, and making methods data efficient. Our ambition is to affect all sectors where data is an important component. But in the meantime we are focussing on those sectors where it’s clear that the current generation of neural-network-based supervised learning techniques don’t cut the mustard. Our particular focus is on personalized health and data in the developing world.
- What developments can we expect to see in deep learning in the next 5 years?
I hope that the field becomes more diverse. We’ve seen a rapid influx of people who have only learned one thing, and that thing is something we understood a great deal about 20 years ago. This is in severe danger of eclipsing the important aspects of machine learning that the community learnt so much about over the last two decades. So I hope we see a settling of this enormous expansion, and a return to a more thoughtful approach to research.
- What advancements excite you most in the field?
One of the things that I admire most is the imagination of researchers in deploying the new generation of techniques. We see this particularly in the vision community and the natural language processing community. This is enabling a new generation of devices that don’t require the advances in machine learning I’ve referred to above. It is opening new challenges in dialogue systems and human-computer interaction. Through pressing forward in these directions, and discovering the limitations of these methods, I think we are likely to learn a lot more about ourselves and our humanity. Call me soppy, but I think that’s pretty cool.