Yesterday an interview with Katy Gorman and Ryan Adams was released as the 11th Episode of Talking Machines. It was great fun to do. It must be a lot of work pulling the podcast together and I think this is a really great service to the community (and beyond). It’s like having our own radio show. I’ve really enjoyed the episodes so far (although I’m only up to Episode 7 with my listening!).

Broadly, the format of the episodes is to first discuss a particular aspect of machine learning, then answer a listener question, and then finish with an interview with a researcher. This episode covers Markov decision processes and answers a question on feature learning. Ryan is an extremely impressive person. I was lucky enough to be his thesis examiner; even as a student he had a breadth of knowledge which is extremely difficult to develop in such a technical field. He does a great job of summarizing ideas in a way that highlights the salient issues and is clear but doesn’t oversimplify. The main idea we discuss in the interview segment is what the implications for privacy are in the age of data. Of course, reflecting on it, I could have been clearer (and more succinct!) in many of my points, so it’s useful to put a few links up and provide some comment.

Since the interview I’ve also written a couple of articles on the challenges of privacy, privacy rights and regulation that are mentioned at the end.

The ideas I’m trying to get across in the interview are:

  • The idea of the AI singularity as an existential threat isn’t particularly helpful to the debate in AI, because there are far more urgent problems, in particular the combination of humans with our existing tools and the danger of that power being concentrated in a few hands. Ryan provides a good example of how the provision of simple tools gives individuals a lot of power. It seems an Uber exec used a powerful tool to track someone’s movements.

  • As a society we seem surprisingly reluctant to update our rights for the digital era. In the interview I refer to treating existing rights ‘biblically’, meaning that we treat them as if they were handed down by a greater power, rather than treating them as written by intelligent, free-thinking people who were far-sighted about the aspects of society and life that need protecting. I think we need to try and invoke some of that far-sighted thinking again today.

A couple of mis-speaks from me:

  • When referring to the intellectual capacity of the NSA, I should clarify that I was thinking more of their machine learning capabilities; in other areas, like cryptography, they may well have some of the very best people.

  • I think I sound a little trite on the AI singularity. I mean that it’s a fun idea intellectually. Earlier in the series Ryan uses the term “it tickles the mind”, referring, I think, to A* sampling. I like that term, and in some sense I think the idea of the AI singularity ‘tickles the mind’. However, I think there are many reasons why it won’t come into being in the form it’s envisaged, and I’m keen to steer the debate to the much more imminent challenge of personal data privacy.