Reviewer Calibration for NIPS

One issue that can occur for a conference is differences in interpretation of the reviewing scale. For a number of years (dating back to at least NIPS 2002) mis-calibration between reviewers has been corrected for with a model. Area chairs see not just the actual scores of the paper, but also ‘corrected scores’. Both are used in the decision making process.

Reviewer calibration at NIPS dates back to a model first implemented in 2002 by John Platt when he was an area chair. It’s a regularized least squares model that Chris Burges and John wrote up in 2012. They’ve kindly made their write up available here.

Calibrated scores are used alongside original scores to help in judging the quality of papers.

We also knew that Zoubin and Max had modified the model last year, along with their program manager Hong Ge. Before going through the previous work, however, we first approached the question independently. The model we came up with turned out to be pretty much identical to that of Hong, Zoubin and Max, and the approach we are using to compute probability of accepts was also identical. The model is a probabilistic reinterpretation of the Platt and Burges model: one that treats the bias parameters and quality parameters as latent variables that are normally distributed. Marginalizing out the latent variables leads to an ANOVA style description of the data.

The Model

Our assumption is that the score from the jth reviewer for the ith paper is given by

y_{i,j} = f_i + b_j + \epsilon_{i, j}

where f_i is the objective quality of paper i and b_j is an offset associated with reviewer j. \epsilon_{i,j} is a subjective quality estimate which reflects how a specific reviewer’s opinion differs from other reviewers (such differences in opinion may be due to differing expertise or perspective). The underlying ‘objective quality’ of the paper is assumed to be the same for all reviewers and the reviewer offset is assumed to be the same for all papers.
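
As a concrete (toy) illustration of this additive assumption, here is a minimal numpy sketch with made-up paper qualities and reviewer offsets rather than any real review data:

```python
import numpy as np

# Toy illustration of y_{i,j} = f_i + b_j + epsilon_{i,j} (all numbers invented).
f = np.array([6.0, 4.5, 7.2])        # 'objective' quality of papers 0, 1, 2
b = np.array([0.5, -0.3])            # reviewer offsets: reviewer 0 is generous, reviewer 1 is harsh
rng = np.random.default_rng(0)
eps = rng.normal(0.0, 0.5, size=(3, 2))   # subjective differences of opinion
y = f[:, None] + b[None, :] + eps         # y[i, j] is the score from reviewer j for paper i
```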

If we have n papers and m reviewers then this implies n + m + nm values need to be estimated. Of course, in practice, the matrix is sparse, and we have no way of estimating the subjective quality for paper-reviewer pairs where no assignment was made. However, we can firstly assume that the subjective quality is drawn from a normal density with variance \sigma^2

\epsilon_{i, j} \sim N(0, \sigma^2)

which reduces us to n + m + 1 parameters. The Platt-Burges model then estimated these parameters by regularized least squares. Instead, we follow Zoubin, Max and Hong’s approach of treating these values as latent variables. We assume that the objective quality, f_i, is also normally distributed with mean \mu and variance \alpha_f,

f_i \sim N(\mu, \alpha_f)

this now reduces us to m + 3 parameters. However, we only have approximately 4m observations (4 papers per reviewer) so parameters may still not be that well determined (particularly for those reviewers that have only one review). We therefore also assume that the reviewer offset is a zero mean normally distributed latent variable,

b_j \sim N(0, \alpha_b),

leaving us with only four parameters: \mu, \sigma^2, \alpha_f and \alpha_b. When we combine these assumptions together we see that our model assumes that any given review score is a combination of 3 normally distributed factors: the objective quality of the paper (variance \alpha_f), the subjective quality of the paper (variance \sigma^2) and the reviewer offset (variance \alpha_b). The a priori marginal variance of a reviewer-paper assignment's score is the sum of these three components. Cross-correlations between reviewer-paper assignments occur if either the reviewer is the same (when the cross covariance is given by \alpha_b) or the paper is the same (when the cross covariance is given by \alpha_f). With a constant mean coming from the mean of the 'objective quality', this gives us a joint model for reviewer scores as follows:

\mathbf{y} \sim N(\mu \mathbf{1}, \mathbf{K})

where \mathbf{y} is a vector of stacked scores, \mathbf{1} is the vector of ones and the elements of the covariance function are given by

k(i,j; k,l) = \delta_{i,k} \alpha_f + \delta_{j,l} \alpha_b + \delta_{i, k}\delta_{j,l} \sigma^2

where i and j are the index of the paper and reviewer in the rows of \mathbf{K} and k and l are the index of the paper and reviewer in the columns of \mathbf{K}.
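
As an illustration, here is a small numpy sketch that builds \mathbf{K} from hypothetical paper and reviewer index vectors and draws one score vector from the joint model (the assignments and parameter values are invented, not the conference data):

```python
import numpy as np

# Each review is identified by its paper index and reviewer index (hypothetical assignments).
paper = np.array([0, 0, 0, 1, 1, 2, 2, 2])
reviewer = np.array([0, 1, 2, 0, 3, 1, 2, 3])
mu, alpha_f, alpha_b, sigma2 = 5.0, 1.0, 0.5, 0.8   # illustrative parameter values

same_paper = paper[:, None] == paper[None, :]
same_reviewer = reviewer[:, None] == reviewer[None, :]
# delta terms: shared paper, shared reviewer, and the diagonal (same paper and same reviewer)
K = alpha_f * same_paper + alpha_b * same_reviewer + sigma2 * (same_paper & same_reviewer)

rng = np.random.default_rng(0)
y = rng.multivariate_normal(mu * np.ones(len(paper)), K)   # one draw of the stacked score vector
```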

It can be convenient to reparameterize slightly into an overall scale \alpha_f, and normalized variance parameters,

k(i,j; k,l) = \alpha_f(\delta_{i,k} + \delta_{j,l} \frac{\alpha_b}{\alpha_f} + \delta_{i, k}\delta_{j,l} \frac{\sigma^2}{\alpha_f})

which we rewrite in terms of two ratios: the offset/objective quality ratio, \hat{\alpha}_b, and the subjective/objective quality ratio, \hat{\sigma}^2.

k(i,j; k,l) = \alpha_f(\delta_{i,k} + \delta_{j,l} \hat{\alpha}_b + \delta_{i, k}\delta_{j,l} \hat{\sigma}^2)

The advantage of this parameterization is that it allows us to optimize \alpha_f directly through maximum likelihood (with a fixed point equation). This leaves us with two free parameters that we might explore on a grid.

We expect both \mu and \alpha_f to be very well determined due to the number of observations in the data. The negative log likelihood is

\frac{|\mathbf{y}|}{2}\log2\pi\alpha_f + \frac{1}{2}\log \left|\hat{\mathbf{K}}\right| + \frac{1}{2\alpha_f}\mathbf{y}^\top \hat{\mathbf{K}}^{-1} \mathbf{y}

where |\mathbf{y}| is the length of \mathbf{y} (i.e. the number of reviews) and \hat{\mathbf{K}}=\alpha_f^{-1}\mathbf{K} is the scale normalised covariance. This negative log likelihood is easily minimized to recover

\alpha_f = \frac{1}{|\mathbf{y}|} \mathbf{y}^\top \hat{\mathbf{K}}^{-1} \mathbf{y}
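
A sketch of that computation at a single grid point of the ratio parameters; it assumes the scores have already been centred (how \mu is handled here is my assumption, it is not spelt out above):

```python
import numpy as np

def alpha_f_ml(y, paper, reviewer, alpha_b_hat, sigma2_hat):
    """Fixed-point ML estimate of alpha_f, and the negative log likelihood, at one grid point.

    y: centred review scores; paper, reviewer: integer index of each review's paper and reviewer.
    """
    same_paper = paper[:, None] == paper[None, :]
    same_reviewer = reviewer[:, None] == reviewer[None, :]
    # scale-normalised covariance: K_hat = K / alpha_f
    K_hat = 1.0 * same_paper + alpha_b_hat * same_reviewer + sigma2_hat * (same_paper & same_reviewer)
    K_hat_inv = np.linalg.inv(K_hat)
    inner = y @ K_hat_inv @ y
    alpha_f = inner / len(y)                       # the fixed point above
    _, logdet = np.linalg.slogdet(K_hat)
    nll = 0.5 * len(y) * np.log(2 * np.pi * alpha_f) + 0.5 * logdet + 0.5 * inner / alpha_f
    return alpha_f, nll
```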

A Bayesian analysis of the \alpha_f parameter is possible with gamma priors, but it would merely show that this parameter is extremely well determined (the degrees of freedom parameter of the associated Student-t marginal likelihood scales with the number of reviews, which will be around |\mathbf{y}| \approx 6,000 in our case).

We can set these parameters by maximum likelihood and then we can remove the offset from the model by computing the conditional distribution over the paper scores with the bias removed, s_{i,j} = f_i + \epsilon_{i,j}. This conditional distribution is found as

\mathbf{s}|\mathbf{y}, \alpha_f,\alpha_b, \sigma^2 \sim N(\boldsymbol{\mu}_s, \boldsymbol{\Sigma}_s)

where

\boldsymbol{\mu}_s = \mathbf{K}_s\mathbf{K}^{-1}\mathbf{y}

and

\boldsymbol{\Sigma}_s = \mathbf{K}_s - \mathbf{K}_s\mathbf{K}^{-1}\mathbf{K}_s

and \mathbf{K}_s is the covariance associated with the quality terms only with elements given by,

k_s(i,j;k,l) = \delta_{i,k}(\alpha_f + \delta_{j,l}\sigma^2).

We now use \boldsymbol{\mu}_s (which is both the mode and the mean of the posterior over \mathbf{s}) as the calibrated quality score.
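
A sketch of that calibration step, using the same toy indexing convention as the covariance sketch above; subtracting and re-adding the prior mean is my addition, since the expressions above apply to centred scores:

```python
import numpy as np

def calibrated_scores(y, paper, reviewer, mu, alpha_f, alpha_b, sigma2):
    """Posterior mean and covariance of s = f + epsilon, i.e. scores with reviewer offsets removed."""
    same_paper = paper[:, None] == paper[None, :]
    same_reviewer = reviewer[:, None] == reviewer[None, :]
    K = alpha_f * same_paper + alpha_b * same_reviewer + sigma2 * (same_paper & same_reviewer)
    K_s = alpha_f * same_paper + sigma2 * (same_paper & same_reviewer)   # quality terms only
    K_inv = np.linalg.inv(K)
    mu_s = mu + K_s @ K_inv @ (y - mu)        # calibrated quality scores (posterior mean)
    Sigma_s = K_s - K_s @ K_inv @ K_s         # posterior covariance over s
    return mu_s, Sigma_s
```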

Analysis of Variance

The model above is a type of Gaussian process model with a specific covariance function (or kernel). The variances are highly interpretable though, because the covariance function is made up of a sum of effects. Studying these variances is known as analysis of variance in statistics, where such models are commonly used to account for batch effects; the result is known as an ANOVA model. It is easy to extend this model to include batch effects such as whether or not the reviewer is a student or whether or not the reviewer has published at NIPS before. We will conduct these analyses in due course. Last year, Zoubin, Max and Hong explored whether reviewer confidence could be included in the model, but they found it did not help with performance on hold out data.

Scatter plot of Quality Score vs Calibrated Quality Score

Probability of Acceptance

To predict the probability of acceptance of any given paper, we sample from the multivariate normal that gives the posterior over \mathbf{s}. These samples are sorted according to the values of \mathbf{s}, and the top scoring papers are considered to be accepts. These samples are taken 1000 times and the probability of acceptance is computed for each paper by seeing how many times the paper received a positive outcome from the thousand samples.
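
A sketch of that Monte Carlo computation; treating each paper's score within a sample as the mean of its calibrated review scores, and the accept fraction used here, are assumptions for illustration rather than details given above:

```python
import numpy as np

def acceptance_probabilities(mu_s, Sigma_s, paper, accept_fraction=0.25, n_samples=1000, seed=0):
    """Monte Carlo estimate of each paper's probability of acceptance from the posterior over s."""
    rng = np.random.default_rng(seed)
    n_papers = paper.max() + 1
    n_accept = max(1, int(accept_fraction * n_papers))
    counts = np.zeros(n_papers)
    for s in rng.multivariate_normal(mu_s, Sigma_s, size=n_samples):
        paper_scores = np.array([s[paper == i].mean() for i in range(n_papers)])
        counts[np.argsort(paper_scores)[-n_accept:]] += 1   # top scoring papers are 'accepted'
    return counts / n_samples
```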

NIPS Reviewer Recruitment and ‘Experience’

Triggered by a question from Christoph Lampert as a comment on a previous blog post on reviewer allocation, I thought I’d post about how we did reviewer recruitment, and what the profile of reviewer ‘experience’ is, as defined by their NIPS track record.

I wrote this blog post, but it ended up being quite detailed, so Corinna suggested I put the summary of reviewer recruitment first, which makes a lot of sense. If you are interested in the details of our reviewer recruitment, please read on to the section below ‘Experience of the Reviewing Body’.

Questions

As a summary, I’ve imagined two questions and given answers below:

  1. I’m an Area Chair for NIPS, how did I come to be invited?
    You were personally known to one of the Program Chairs as an expert in your domain who had good judgement about the type and quality of papers we are looking to publish at NIPS. You have a strong publication track record in your domain. You were known to be reliable and responsive. You may have a track record of workshop organization in your domain and/or experience in area chairing previously at NIPS or other conferences. Through these activities you have shown community leadership.
  2. I’m a reviewer for NIPS, how did I come to be invited?
    You could have been invited for one of several reasons:

    • you were a reviewer for NIPS in 2013
    • you were a reviewer for AISTATS in 2012
    • you were personally recommended by an Area Chair or a Program Chair
    • you have been on a Program Committee (i.e. you were an Area Chair) at a leading international conference in recent years (specifically NIPS since 2000, ICML since 2008, AISTATS since 2011).
    • you have published 2 or more papers at NIPS since 2007
    • you published at NIPS in either 2012 or 2013 and your publication track record was personally reviewed and approved by one of the Program Chairs.

Experience of the Reviewing Body

That was the background to Reviewer and Area Chair recruitment, and it is also covered below, in much more detail than perhaps anyone could wish for! Now, for those of you that have gotten this far, we can try and look at the result in terms of one way of measuring reviewer experience. Our aim was to increase the number of reviewers and try and maintain or increase the quality of the reviewing body. Of course quality is subjective, but we can look at things such as reviewer experience in terms of how many NIPS publications they have had. Note that we have purposefully selected many reviewers and area chairs who have never previously published at NIPS, so this is clearly not the only criterion for experience, but it is one that is easily available to us and given Christoph’s question, the statistics may be of wider interest.

Reviewer NIPS Publication Record

Firstly we give the histograms for cumulative reviewer publications. We plot two histograms, publications since 2007 (to give an idea of long term trends) and publications since 2012 (to give an idea of recent trends).

Histogram of NIPS 2014 reviewers' publication records since 2007.

Our most prolific reviewer has published 22 times at NIPS since 2007! That’s an average of over 3 per year (for comparison, I’ve published 7 times at NIPS since 2007).

Looking more recently we can get an idea of the number of NIPS publications reviewers have had since 2012.

Histogram of NIPS 2014 reviewers' publication records since 2012.

Impressively, the most prolific reviewer has published 10 papers at NIPS over the last two years, and intriguingly it is not the same reviewer that has published 22 times since 2007. The mode of 0 publications is unsurprising, and comparing the histograms it looks like about 200 of our reviewing body haven't published in the last two years, but have published at NIPS since 2007.

Area Chair Publication Record

We have got similar plots for the Area Chairs. Here is the histogram since 2007.

Histogram of NIPS 2014 Area Chairs' publication records since 2007.

Note that we’ve selected 16 Area Chairs who haven’t published at NIPS before. People who aren’t regular to NIPS may be surprised at this, but I think it reflects the openness of the community to other ideas and new directions for research. NIPS has always been a crossroads between traditional fields, and that is one of its great charms. As a result, a NIPS publication record is a poor proxy for ‘experience’ where many of our area chairs are concerned.

Looking at the more recent publication track record for Area Chairs we have the following histogram.

Histogram of NIPS 2014 Area Chairs' publication records since 2012.

Here we see that a considerable portion of our Area Chairs haven’t published at NIPS in the last two years. I also find this unsurprising. I’ve only published one paper at NIPS since then (that was NIPS 2012; the group’s NIPS 2013 submissions were both rejected, although I think my overall ‘hit rate’ for NIPS success is still around 50%).

Details of the Recruitment Process

Below are all the gritty details in terms of how things actually panned out in practice for reviewer recruitment. This might be useful for other people chairing conferences in the future.

Area Chair Recruitment

The first stage is invitation of area chairs. To ensure we got the correct distribution of expertise in area chairs, we invited in waves. Max and Zoubin gave us information about the subject distribution of the previous year’s NIPS submissions. This then gave us a rough number of area chairs required for each area. We had compiled a list of 99 candidate area chairs by mid January 2014; coverage here matched the subject coverage from the previous year’s conference. The Area Chairs are experts in their field: the majority are people that either Corinna or I have worked with directly or indirectly, and the others have a long track record of organising workshops and demonstrating thought leadership in their subject area. It’s their judgement on which we’ll be relying for paper decisions. As capable and active researchers they are in high demand for a range of activities (journal editing, program chairing other conferences, organizing workshops etc). This, combined with the demands of everyday life (including family illnesses, newly born children etc), means that not everyone can accept the demands on time that being an area chair makes. As well as being involved in reviewer recruitment, assignment and paper discussion, area chairs need to be available for video conference meetings to discuss their allocation and make final recommendations on their papers, all across periods of the summer when many are on vacation.

Of our original list of 99 invites, 56 were available to help out. This then allowed us to refocus on areas where we’d missed out on Area Chairs. By early March we had a list of 57 further candidate area chairs. Of these 36 were available to help out. Finally we recruited a further 3 Area Chairs in early April, targeted at areas where we felt we were still short of expertise.

Reviewer Recruitment

Reviewer recruitment consists of identifying suitable people and inviting them to join the reviewing body. This process is completed in collaboration with the Area Chairs, who nominate reviewers in their domains. For NIPS 2014 we were targeting 1400 reviewers to account for our duplication of papers and the anticipated increase in submissions. There is no unified database of machine learning expertise, and the history of who reviewed in what years for NIPS is currently not recorded. This means that year to year, we are typically only provided with those people that agreed to review in the previous year as our starting point for compiling this list. From February onwards Corinna and I focussed on increasing this starting number. NIPS 2013 had 1120 reviewers and 80 area chairs, and these names formed the core starting point for invitations. Further, since I program chaired AISTATS in 2012 we also had the list of reviewers who’d agreed to review for that conference (400 reviewers, 28 area chairs). These names were also added to our initial list of candidate reviewers (although, of course, some of these names had already agreed to be area chairs for NIPS 2014 and there were many duplicates in the lists).

Sustaining Expertise in the Reviewing Body

A major concern for Corinna and me was to ensure that we had as much expertise in our reviewing body as possible. Because of the way that reviewer names are propagated from year to year, and the fact that more senior people tend to be busier and therefore more likely to decline, many well known researcher names weren’t in this initial list. To rectify this we took from the web the lists of Area Chairs for all previous NIPS conferences going back to 2000, all ICML conferences going back to 2008 and all AISTATS conferences going back to 2011. We could have extended this search to COLT, COSYNE and UAI also.

Back in 2000 there were only 13 Area Chairs at NIPS; by the time that I first did the job in 2005 there were 19. Corinna and I worked together on the last Program Committee to have a physical meeting, in 2006, when John Platt was Program Chair. I remember having an above-average allocation of about 50-60 papers as Area Chair that year: I had papers on Gaussian processes (about 20) and many more in dimensionality reduction, mainly on spectral approaches, and Corinna also had a lot of papers that year because she was dealing with kernel methods. I think a more typical load was 30-40, and reviewer load was probably around 6-8. The physical meeting consisted of two days in a conference room discussing every paper in turn as a full program committee. That was also the last year of a single program chair. The early NIPS program committees mainly read as a “who’s who of machine learning”, and it sticks in my mind how carefully each chair went through each of the papers that were around the borderline of acceptance. Many papers were re-read at that meeting. Overall 160 new names were added to the list of candidate reviewers from incorporating the Area Chairs from these meetings, giving us around 1600 candidate reviewers in total.

Note that the sort of reviewing expertise we are after is not only the technical expertise necessary to judge the correctness of the paper. We are looking for reviewers that can judge whether the work is going to be of interest to the wider NIPS community and whether the ideas in the work are likely to have significant impact. The latter two areas are perhaps more subjective, and may require more experience than the first. However, the quality of papers submitted to NIPS is very high, and the number that are technically correct is a very large portion of those submitted. The objective of NIPS is not then to select those papers that are the ‘most technical’, but to select those papers that are likely to have an influence on the field. This is where understanding of likely impact is so important. To this end, Max and Zoubin introduced an ‘impact’ score, with the precise intent of reminding reviewers to think about this aspect. However, if the focus is too much on the technical side, then a paper that is highly complex from a technical standpoint, but less likely to have an influence on the direction of the field, may be more likely to be accepted than a paper that contains a potentially very influential idea but doesn’t present a strong technical challenge. Ideally then, a paper should have a distribution of reviewers who aren’t purely experts in the particular technical domain from which the paper arises, but also informed experts in the wider context of where the paper sits. The role of the Area Chair is also important here.

The next step in reviewer recruitment was to involve the Area Chairs in adding to the list in areas where we had missed people.
This is also an important route for new and up-and-coming NIPS researchers to become involved in reviewing. We provided Area Chairs with access to the list of candidate reviewers and asked them to add names of experts who they would like to recruit, but who weren’t currently in the list. This led to a further 220 names.

At this point we had also begun to invite reviewers. Reviewer invitation was done in waves. We started with the first wave of around 1600-1700 invites in mid-April. At that point, the broad form of the Program Committee was already resolved. Acceptance rates for reviewer invites indicated that we weren’t going to hit our target of 1400 reviewers with our candidate list. By the end of April we had around 1000 reviewers accepted, but we were targeting another 400 reviewers to ensure we could keep reviewer load low.

A final source of candidates was from Chris Hiestand. Chris maintains the NIPS data base of authors and presenters on behalf of the NIPS foundation. This gave us another potential source of reviewers. We considered all authors that had 2 or more NIPS papers since 2007. We’d initially intended to restrict this number to 3, but that gained us only 91 more new candidate reviewers (because most of the names were in our candidate list already); relaxing the constraint to 2 led to 325 new candidate reviewers. These additional reviewers were invited at the end of April. However, even with this group, we were likely to fall short of our target.

Our final group of reviewers came from authors who published either at NIPS 2013 or NIPS 2012. However, authors that have published only one paper are not necessarily qualified to review at NIPS. For example, the author may be a collaborator from another field. There were 697 authors who had one NIPS paper in 2012 or 2013 and were not in our current candidate list. For these 697 authors, we felt it was necessary to go through each author individually, checking their track record through web searches (DBLP and Google Scholar as well as web pages) and ensuring they had the necessary track record to review for NIPS. This process resulted in an additional 174 candidate reviewer names. The remainder we were either unable to identify on the web (169 people) or they had a track record where we couldn’t be confident about their ability to review for NIPS without a personal recommendation (369 people). This final wave of invites went out at the beginning of May and also included new reviewer suggestions from Area Chairs and invites to candidate Area Chairs who had not been able to commit to Area Chairing, but may have been able to commit to reviewing. Again, we wanted to ensure the expertise of the reviewing body was as highly developed as possible.

This meant that by the submission deadline we had 1390 reviewers in the system. As of 15th July this number has increased slightly, because during paper allocation Area Chairs have recruited additional specific reviewers to handle particular papers where they felt that the available reviewers didn’t have the correct expertise. This means that we currently have exactly 1400 reviewers. This total number of reviewers comes from around 2255 invitations to review.

Overall, reviewer recruitment took up a very large amount of time, distributed over many weeks. Keeping track of who had been invited already was difficult, because we didn’t have a unique ID for our candidate reviewers. We have a local SQLite data base that indexes on email, and we try to check for matches based on names as well. Most of these checks are done in Python code which is now available on the github repository here, along with IPython notebooks that did the processing (with identifying information removed). Despite care taken to ensure we didn’t add potential reviewers twice to our data base, several people received two invites to review. Very often, they also didn’t notice that they were separate invites, so they agreed to review twice for NIPS. Most of these duplications were picked up at some point before paper allocation, and they tended to arise for people whose names could be rendered in multiple ways (e.g. because of accents) or who have multiple email addresses (e.g. due to change of affiliation).
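
For illustration, here is a toy sketch of the kind of duplicate check involved; the table layout, normalisation rule and function names are hypothetical, and the real pipeline is the one in the github repository linked above:

```python
import sqlite3
import unicodedata

def normalise(name):
    # strip accents and case so that, e.g., 'José' and 'Jose' compare equal (a crude heuristic)
    return unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode().lower().strip()

con = sqlite3.connect("reviewers.db")   # hypothetical local data base keyed on email
con.execute("CREATE TABLE IF NOT EXISTS reviewers (email TEXT PRIMARY KEY, name TEXT)")

def already_invited(con, email, name):
    # first check the email key, then fall back to a normalised name match
    if con.execute("SELECT 1 FROM reviewers WHERE email = ?", (email.lower(),)).fetchone():
        return True
    rows = con.execute("SELECT name FROM reviewers").fetchall()
    return any(normalise(existing) == normalise(name) for (existing,) in rows)
```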

 

Firstly, NIPS uses the CMT system for conference management. In an ideal world, choice of management system shouldn’t dictate how you do things, but in practice particularities of the system can affect our choices. CMT doesn’t store a unique profile for conference reviewers (unlike, for example, EasyChair, which stores every conference you’ve submitted to or reviewed/chaired for). This means that from year to year information about the previous year’s reviewers isn’t necessarily passed in a consistent way between program chairs. Corinna and I requested that the CMT set up for our year copied across the reviewers from NIPS 2013 along with their subject areas and conflicts to try and alleviate this. The NIPS program committee in 2013 consisted of 1120 reviewers and 80 area chairs. Corinna and I set a target of 1400 reviewers and 100 area chairs. This was to account for (a) an increase in submissions of perhaps 10% and (b) duplication of papers for independent reviewing at a level of around 10%.

Open Data Science

I’m not sure if this is really a blog post; it’s more of a ‘position paper’ or a proposal. But it’s something that I’d be very happy to have comments on, so publishing it in the form of a blog seems most appropriate.

We are in the midst of the information revolution and it is being driven by our increasing ability to monitor, store, interconnect and analyse large interacting sets of data. Industrial mechanisation required a combination of coal and the heat engine. Informational mechanisation requires the combination of data and data engines. By analogy with a heat engine, which takes high entropy heat energy and converts it to low entropy, actionable, kinetic energy, a data engine is powered by large unstructured data sources and converts them to actionable knowledge. This can be achieved through a combination of mathematical and computational modelling, and the combination of required skill sets falls across traditional academic boundaries.

Outlook for Companies

From a commercial perspective companies are looking to characterise consumers/users in unprecedented detail. They need to characterise their users’ behaviour in detail to

  1. provide better service to retain users,
  2. target those users with commercial opportunities.

These firms are competing for global dominance, to be the data repository. They are excited by the power of interconnected data, but nervous about the natural monopoly that it implies. They view the current era as analogous to the early days of ‘microcomputers’: competing platforms looking to dominate the market. They foresee that the interconnectedness of data implies a natural monopoly in the next stage of this process, and they are pursuing it with the vigour of a young Microsoft. They are paying very large fees to acquire potential competitors to ensure that they retain access to the data (e.g. Facebook’s purchase of WhatsApp for $19 billion) and they are acquiring expertise in the analysis of data from academia, either through direct hires (Yann LeCun from NYU to Facebook, Andrew Ng from Stanford to found a $300 million research lab for Baidu) or by purchasing academic start-ups (Geoff Hinton’s DNNResearch from Toronto to Google, the purchase of DeepMind by Google for $400 million). The interest of these leading internet firms in machine learning is exciting and a sign of the major successes of the field, but it leaves a major challenge for firms that want to enter the market and either provide competing services or introduce new ones. They are debilitated by

  1. lack of access to data,
  2. lack of access to expertise.

 

Science

Science is far more evolved than the commercial world from the perspective of data sharing. Whilst its merits may not be universally accepted by individual scientists, communities and funding agencies encourage widespread sharing. One of the most significant endeavours was the human genome project, now nearly 25 years old. In computational biology there is now widespread sharing of data and methodologies: measurement technology moves so quickly that an efficient pipeline for development and sharing is vital to ensure that analysis adapts to the rapidly evolving nature of the data (e.g. cDNA arrays to Affymetrix arrays to RNAseq). There are also large scale modelling and sharing challenges at the core of other disciplines such as astronomy (e.g. Sarah Bridle’s GREAT08 challenge for cosmic lensing) and climate science. However, for many scientists access to these methodologies is restricted not by lack of availability of better methods, but by technical inaccessibility. A major challenge in science is bridging the gap between the data analyst and the scientist: equipping the scientist with the fundamental concepts that will allow them to explore their own systems with a complete mathematical and computational toolbox, rather than being constrained by the provisions of a commercial ‘analysis toolbox’ software provider.

Health

Historically, in health, scientists have worked closely with clinicians to establish the causes of disease and, ideally, eradicate them at source. Antibiotics and vaccinations have had major successes in this area. The diseases that remain are those that:

  1. result from a large range of initial causes, and as a result have no discernible target for a ‘magic bullet’ cure (e.g. heart disease, cancers);
  2. are difficult to diagnose at an early stage, leading to identification only when progress is irreversible (e.g. dementias); or
  3. coevolve with our clinical advances to subvert our solutions (e.g. C. difficile, multiple drug resistant tuberculosis).

Access to large scale interconnected data sources again gives the promise of a route to resolution. It will give us the ability to better characterise the cause of a given disease; the tools to monitor patients and form an early diagnosis of disease; and the biological understanding of how disease agents manage to subvert our existing cures. Modern data allows us to obtain a very high resolution, multifaceted perspective on the patient. We now have the ability to characterise their genotype (through high resolution sequencing) and their phenotype (through gene and protein expression, clinical measurements, shopping behaviour, social networks, music listening behaviour). A major challenge in health is ensuring that the privacy of patients is respected whilst leveraging this data for wider societal benefit in understanding human disease. This requires development of new methodologies that are capable of assimilating these information resources on population wide scales. Due to the complexity of the underlying system, the methodologies required are also more complex than the relatively simple approaches that are currently being used to, for example, understand commercial intent. We need more sophisticated and more efficient data engines.

International Development

The wide availability of mobile telephones in many developing countries provides opportunity for modes of development that differ considerably from the traditional paths that arose in the past (e.g. canals, railways, roads and fixed line telecommunications). If countries take advantage of these new approaches, it is likely that the nature of the resulting societies will be very different from those that arose through the industrial revolution. The rapid adoption of mobile money, which arguably places parts of the financial system in many sub-Saharan African countries ahead of their apparently ‘more developed’ counterparts, illustrates what is possible. These developments are facilitated by low capital cost of deployment. They are reliant on the mobile telecommunications architecture and the widespread availability of handsets. The ease of deployment and development of mobile phone apps, and the rapidly increasing availability of affordable smartphone handsets presents opportunities that exploit the particular advantages of the new telecommunications ecosystem. A key strand to our thinking is that these developments can be pursued by local entrepreneurs and software developers (to see this in action check out the work of the AI-DEV group here). The two main challenges for enabling this to happen are mechanisms for data sharing that retain the individual’s control over their data and the education of local researchers and students. These aims are both facilitated by the open data science agenda.

Common Strands to these Challenges

The challenges described above have related strands to them that can be summarized in three areas:

  1. Access to data whilst balancing the individual’s right to privacy alongside the societal need for advance.
  2. Advancing methodologies: development of the methodologies needed to characterise large interconnected complex data sets.
  3. Analysis empowerment: giving scientists, clinicians, students, commercial and academic partners the ability to analyze their own data using the latest methodological advances.

The Open Data Science Idea

It now seems absurd to posit a ‘magic bullet cure’ for the challenges described above across such diverse fields, and indeed, the underlying circumstances of each challenge are sufficiently nuanced for any such sledgehammer to be brittle. However, we will attempt to describe a philosophical approach that, when combined with the appropriate domain expertise (whether that’s cultural, societal or technical), will aim to address these issues in the long term.

Microsoft’s quasi-monopoly on desktop computing was broken by open source software. It has been estimated that the development cost of a full Linux system would be $10.8 billion. Regardless of the veracity of this figure, we know that several leading modern operating systems are based on open source (Android is based on Linux, OSX is based on FreeBSD). If it weren’t for open source software, then these markets would have been closed to Microsoft’s competitors due to entry costs. We can do much to celebrate the competition provided by OSX and Android and the contributions of Apple and Google in bringing them to market, but the enablers were the open source software community. Similarly, at launch both Google’s and Facebook’s architectures, for web search and social networking respectively, were entirely based on open source software, and both companies have contributed informally and formally to its development.

Open data science aims to bring the same community resource assimilation together to capitalize on the underlying social driver of this phenomenon: many talented people would like to see their ideas and work being applied for the widest benefit possible. The modern internet provides tools such as github, IPython notebook and reddit for easy distribution of, and comment on, this material. In Sheffield we have started making our ideas available through these mechanisms. As academics in open data science, part of our role should be to:

  1. Make new analysis methodologies available as widely and rapidly as possible with as few conditions on their use as possible.
  2. Educate our commercial, scientific and medical partners in the use of these latest methodologies.
  3. Act to achieve a balance between data sharing for societal benefit and the right of an individual to own their data.

We can achieve 1) through widespread distribution of our ideas under flexible BSD-like licenses that give commercial, scientific and medical partners as much flexibility to adapt our methods and analyses as possible to their own circumstances. We will achieve 2) through undergraduate courses, postgraduate courses, summer schools and widespread distribution of teaching materials. We will host projects from across the University from all departments. We will develop new programs of study that address the gaps in current expertise. Our actions regarding 3) will be to support and advise initiatives which look to return to the individual more control of their own data. We should do this simultaneously with engaging with the public on what the technologies behind data sharing are and how they will benefit society.

Summary

Open data science should be an inclusive movement that operates across traditional boundaries between companies and academia. It could bridge the technological gap between ‘data science’ and science. It could address the barriers to large scale analysis of health data, and it will build bridges between academia and companies to ease access to methodologies and data. It will make our ideas publicly available for consumption by individuals, whether in developing countries, commercial organisations or public institutes.

In Sheffield we have already been actively pursuing this agenda through different strands: we have been making software available for over a decade, and now are doing so with extremely liberal licenses. We are running a series of Gaussian process summer schools, which have included roadshows in UTP, Colombia (hosted by Mauricio Alvarez) and Makerere University, Uganda (hosted by John Quinn). We have organised workshops targeted at Big Data and we are making our analysis approaches freely available. We have organised courses locally in Sheffield in programming targeted at biologists (taught by Marta Milo) and have begun a series of meetings on Data Science (speakers have included Fernando Perez, Fabian Pedregosa, Michael Betancourt and Mike Croucher). We have taught on the ML Summer School and at EBI Summer Schools focused on Computational Systems Biology. Almost all these activities have led to ongoing research collaborations, both for us and for other attendees. Open Data Science brings all these strands together, and it expands our remit to communicate using the latest tools to a wider cross section of clinicians and scientists. Driven by this agenda we will also expand our interaction with commercial partners, as collaborators, consultants and educators. We welcome other groups both in the UK and internationally in joining us in achieving these aims.

Paper Allocation for NIPS

With luck we will release papers to reviewers early next week. The paper allocations are being refined by area chairs at the moment.

Corinna and I thought it might be informative to give details of the allocation process we used, so I’m publishing it here. Note that this automatic process just gives the initial allocation. The current stage we are in is moving papers between Area Chairs (in response to their comments) whilst they also do some refinement of our initial allocation. If I find time I’ll also tidy up the python code that was used and publish it as well (in the form of an IPython notebook).

I wrote the process down in response to a query from Geoff Gordon. So the questions I answer are imagined questions from Geoff. If you like, you can picture Geoff asking them like I did, but in real life, they are words I put into Geoff’s mouth.

  •  How did you allocate the papers?

We ranked all paper-reviewer matches by a similarity and allocated each paper-reviewer pair from the top of the list, rejecting an allocation if the reviewer had a full quota, or the paper had a full complement of reviewers.
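
A compact sketch of that greedy pass; the similarity matrix and the quotas here are hypothetical placeholders rather than the values we actually used:

```python
import numpy as np

def greedy_allocate(similarity, reviewer_quota=6, reviewers_per_paper=3):
    # similarity: papers x reviewers array of match scores (-inf marks conflicts)
    n_papers, n_reviewers = similarity.shape
    allocation = {i: [] for i in range(n_papers)}
    load = np.zeros(n_reviewers, dtype=int)
    for flat in np.argsort(-similarity, axis=None):          # best matching pairs first
        i, j = np.unravel_index(flat, similarity.shape)
        if similarity[i, j] == -np.inf:
            break                                            # remaining pairs are conflicts
        if len(allocation[i]) < reviewers_per_paper and load[j] < reviewer_quota:
            allocation[i].append(int(j))
            load[j] += 1
    return allocation
```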

  • How was the similarity computed?

The similarity consisted of the following weighted components:

s_p = 0.25 * primary subject match
s_s = 0.25 * bag of words match between primary and secondary subjects
m = 0.5 * TPMS score (rescaled to be between 0 and 1)

  •  So how were the bids used?

Each of the similarity scores was multiplied by 1.5^b, where b is the bid: for “eager” b=2, for “willing” b=1, for “in a pinch” b=-1, for “not willing” b=-2, and no bid gave b=0. So the final score used in the ranking was (s_p + s_s + m)*1.5^b.
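
In code form (a sketch, with s_p, s_s and m as hypothetical papers-by-reviewers arrays already weighted as above, and bids holding b):

```python
import numpy as np

def final_score(s_p, s_s, m, bids, conflicts=None):
    score = (s_p + s_s + m) * 1.5 ** bids
    if conflicts is not None:
        score[conflicts] = -np.inf     # conflicting paper-reviewer pairs are never allocated
    return score
```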

  • But how did you deal with the fact that different reviewers used the bidding in different ways?

The rows and columns were crudely normalized by the *square root* of their standard deviations.

  • So what about conflicting papers?

Conflicting papers were given similarities of -inf.

  • How did you ensure that ‘expertise’ was evenly distributed?

We split the reviewing body into two groups. The first group of ‘experts’ were those people with two or more NIPS papers since 2007 (thanks to Chris Hiestand for providing this information). This was about 1/3 of the total reviewing body. We allocated these reviewers first to a maximum of one ‘expert’ per paper. We then allocated the remainder of the reviewing body to the papers up to a maximum of 3 reviewers per paper.
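
A sketch of this two-pass ‘expert first’ idea, in the spirit of the greedy pass above; the is_expert flags and the quotas are hypothetical:

```python
import numpy as np

def allocate_with_experts(similarity, is_expert, reviewer_quota=6, reviewers_per_paper=3):
    n_papers, n_reviewers = similarity.shape
    allocation = {i: [] for i in range(n_papers)}
    load = np.zeros(n_reviewers, dtype=int)

    def fill(group, per_paper_cap):
        masked = similarity.copy()
        masked[:, ~group] = -np.inf                  # only consider reviewers in this group
        for flat in np.argsort(-masked, axis=None):
            i, j = np.unravel_index(flat, masked.shape)
            if masked[i, j] == -np.inf:
                break
            if len(allocation[i]) < per_paper_cap and load[j] < reviewer_quota:
                allocation[i].append(int(j))
                load[j] += 1

    fill(is_expert, 1)                     # first pass: at most one 'expert' per paper
    fill(~is_expert, reviewers_per_paper)  # second pass: top up to three reviewers in total
    return allocation
```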

  • One or more of my papers has less than three reviewers, how did that happen?

When forming the ranking to allocate papers, we only retained papers scoring in the upper half. This was to ensure that we didn’t drop too far down the rank list. After passing through the rank list of scores once, some papers were still left unallocated.

  • But you didn’t leave unallocated papers to area chairs did you?

No, we needed all papers to have an area chair, so for area chairs we continued to allocate these ‘inappropriate papers’ to the best matching area chair with remaining quota, but for reviewers we left these allocations ‘open’ because we felt manual intervention was appropriate here.

  • Was anything else different about area chair allocation?

Yes, we found there was a tendency for high bidding area chairs to fill up their allocation quickly vs low bidding area chairs, meaning low bidding/similarity area chairs weren’t getting a very good allocation. To try and distribute things more evenly, we used a ‘staged quota’ system. We started by allocating area chairs five papers each, then ten, then fifteen etc. This meant that even if an area chair had the top 25 similarities in the overall list, many of those papers would still be matched to other area chairs. Our crude normalization was also designed to prevent this tendency. Perhaps a better idea still would be to rank similarities on a per reviewer basis and use this as the score instead of the similarity itself, although we didn’t try this.
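
A sketch of the staged quota idea for the area chair pass, with an illustrative stage size and a single area chair per paper:

```python
import numpy as np

def staged_allocate(similarity, final_quota=25, stage=5):
    # similarity: papers x area-chairs array; each paper needs exactly one area chair
    n_papers, n_chairs = similarity.shape
    allocation = {i: [] for i in range(n_papers)}
    load = np.zeros(n_chairs, dtype=int)
    for quota in range(stage, final_quota + 1, stage):    # quotas of 5, 10, 15, ...
        for flat in np.argsort(-similarity, axis=None):   # best matching pairs first
            i, j = np.unravel_index(flat, similarity.shape)
            if not allocation[i] and load[j] < quota:
                allocation[i].append(int(j))
                load[j] += 1
    return allocation
```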

  • Did you do the allocations for the bidding in the same way?

Yes, we did bidding allocations in a similar way, apart from two things. Firstly, the similarity score was different: we didn’t have a separate match to the primary keyword. This led to problems for reviewers who had one dominant primary keyword and many less important secondary keywords. Secondly, the allocated papers were distributed in a different way. Each paper was allocated (for bidding) to those area chairs who were in the top 25 scores for that paper. This led to quite a wide variety in the number of papers you saw for bidding, but each paper was (hopefully) seen at least 25 times.

  • That’s for area chairs, was it the same for the bidding allocation for reviewers?

No, for reviewers we wanted to restrict the number of papers that each reviewer would see. We decided each reviewer should see a maximum of 25 papers, so we did something more similar to the ‘preliminary allocation’ that’s just been sent out. We went down the list allocating a maximum of 25 papers per reviewer, and ensuring each paper was viewed by 17 different reviewers.

  • Why did you do it like this? Did you research into this?

We never (originally) intended to intervene so heavily in the allocation system, but with this year’s record numbers of submissions and reviewers the CMT database was failing to allocate. This, combined with time delays between Sheffield/New York/Seattle, was causing delays in getting papers out for bidding, so at one stage we split the load: Corinna worked with CMT to achieve an allocation while Neil worked on coding the intervention described above. The intervention was finished first. Once we had a rough and ready system working for bids we realised we could have more fine control over the allocation than we’d get with CMT (for example trying to ensure that each paper got at least one ‘expert’), so we chose to continue with our approach. There may certainly be better ways of doing this.

  • How did you check the quality of the allocation?

The main approach we used for checking allocation quality was to check the allocation of an area chair whose domain we knew well, and ensure that the allocation made sense, i.e. we looked at the list of papers and judged whether it made ‘sense’.

  • That doesn’t sound very objective, isn’t there a better way?

We agree that it’s not very objective, but then again people seem to evaluate topic models like that all the time, and a topic model is a key part of this system (the TPMS matching service). The other approach was to wait until people complained about their allocation. There were only a few very polite complaints at the bidding stage, but these led us to realise we needed to upweight the similarities associated with the primary keyword. We found that some people chose one very dominant primary keyword and many less important secondary keywords. These reviewers were not getting a very focussed allocation.

  • How about the code?

The code was written in python using pandas in the form of an IPython notebook.

And finally …

Thanks to all the reviewers and area chairs for their patience with the system and particular thanks to Laurent Charlin (TPMS) and the CMT support for their help getting everything uploaded.

Facebook Buys into Machine Learning

Yesterday was an exciting day. First, at the closing of the conference, it was announced that I, along with my colleague Corinna Cortes (of Google), would be one of the Program Chairs of next year’s conference. This is a really great honour. Many of the key figures in machine learning have done this job before me. It will be a lot of work, in particular because Max Welling and Zoubin Ghahramani did such a great job this year.

Then, in the evening, we had a series of industrially sponsored receptions, one from Microsoft, one from Amazon and one from Facebook. I didn’t manage to make the Microsoft reception but did attend those from Facebook and Amazon. The big news from Facebook was the announcement of a new AI research lab to be led by Yann LeCun, a long time friend and colleague in machine learning. They’ve also recruited Rob Fergus, a rising star in computer vision and deep learning.

The event was really big news, presented to a selected audience by my old friend and collaborator Joaquin Quinonero Candela. Facebook had already recruited Marc’Aurelio Ranzato, so they are really committed to this area. Mark Zuckerberg was there to endorse the proceedings, but I was really impressed by the way he let Joaquin and Yann take centre stage. It was a very exciting evening.

I’d guessed a big announcement was coming, so I climbed up onto the mezzanine level, took photos and recorded large parts of it on my mobile phone. They’d asked for an embargo until 8 am this morning, so I’m just posting about this now. I also cleared it with Facebook before posting; they asked that I remove the details of what Mark had to say, principally because they wanted the main focus of this to be on Yann (which I think is absolutely right … well done Yann!).

A commitment of this kind is a great endorsement for the machine learning community. But there is a lot of work now to be done to fulfil the promise that has been recognised. Today we (myself, James Hensman from Sheffield, and Joaquin and Tianshi from Facebook) are running a workshop on Probabilistic Models for Big Data.

Mark Zuckerberg will be attending the deep learning workshop. The methods that are going to be presented at these workshops will hopefully (in the long term) deal with some of the big issues that will face us when taking on these challenges. In the Probabilistic Models workshop we’ve already heard great talks from David Blei (Princeton), Max Welling (Amsterdam) and Zoubin Ghahramani (Cambridge), as well as some really interesting poster spotlights. This afternoon we will hear from Yoram Singer (Google), Ralf Herbrich (Amazon) and Joaquin Quinonero Candela (Facebook). I think the directions laid out at the workshop will be addressing the challenges that face us in the coming years to fulfil the promise that Mark Zuckerberg and Facebook have seen in the field.

Very proud to be a program chair for next year’s event, wondering if we will be able to sustain the level of excitement we’ve had this year.

 

Update: I’ve written a piece in “the conversation” about the announcement here:

https://theconversation.com/are-you-an-expert-in-machine-learning-facebook-is-hiring-21439

GPy: Moving from MATLAB to Python

Back in 2002 or 2003, when this paper was going through the journal revision stage, I was asked by the reviewers to provide the software that implemented the algorithm. After the initial annoyance at another job to do, I thought about it a bit and realised that not only was it a perfectly reasonable request, but that the software was probably the main output of the research. In particular, in terms of reproducibility, the implementation of the algorithm seems particularly important. As a result, when I visited Mike Jordan’s group in Berkeley in 2004, I began to write a software framework for publishing my research, based on a simple MATLAB kernel toolbox and a set of likelihoods. This led to a reissuing of the IVM software, and these toolboxes underpinned my group’s work for the next seven or eight years, going through multiple releases.

The initial plan for code release was to provide implementations of published software, but over time the code base evolved into quite a usable framework for Gaussian process modelling. The release of code proved particularly useful in spreading the ideas underlying the GP-LVM, enabling Aaron Hertzmann and collaborators to pull together a style-based inverse kinematics approach at SIGGRAPH which has proved very influential.

Even at that time it was apparent what a horrible language MATLAB was, but it was the language of choice for machine learning. Efficient data processing requires an interactive shell, and the only ‘real’ programming language with such a facility was python. I remember exploring python whilst at Berkeley with Misha Belkin, but at the time there was no practical implementation of numerical algorithms (numerical computing was then done in a module called Numeric, which was later abandoned). Perhaps more importantly, there was no graph-plotting capability. As a result of that exploration, though, I did stop using perl for scripting and switched to python.

The issues with python as a platform for data analysis were actually being addressed by John D. Hunter with matplotlib. He presented at the NIPS workshop on machine learning open source software in 2008, where I was a member of the afternoon discussion panel. John Eaton, creator of Octave, was also at the workshop, although in the morning session, which I missed due to other commitments. By this time, the group’s MATLAB software was also compatible with Octave. But Octave had similar limitations to MATLAB in terms of language and also did not provide such a rich approach to GUIs. These MATLAB GUIs, whilst quite clunky in implementation, allow live demonstration of algorithms with simple interfaces. This is a facet that I used regularly in my talks.

In the meantime the group was based in Manchester, where, in response to the developments in matplotlib and the new numerical module numpy, I opened a debate about Python in machine learning with this MLO lunch-time talk. At that point I was already persuaded of the potential for python in both teaching and research, but for research in particular there was the problem of translating the legacy code. At this point scikit-learn was fairly immature, so as a test, I began reimplementing portions of the netlab toolbox in python. The (rather poor) result can be found here, with comments from me at the top of netlab.py about issues I was discovering that confuse you when you first move from MATLAB to numpy. I also went through some of these issues in my MLO lunch time talk.

When Nicolo Fusi arrived as a summer student in 2009, he was keen not to use MATLAB for his work on eQTL studies, and since it was a new direction and the modelling wouldn’t rely too much on legacy code, I encouraged him to do this. Others in the group from that period (such as Alfredo Kalaitzis, Pei Gao and our visitor Antti Honkela) were using R as well as MATLAB, because they were focussed on biological problems and delivering code to Bioconductor, but the main part of the group doing methodological development (Carl Henrik Ek, Mauricio Alvarez, Michalis Titsias) was still using MATLAB. I encouraged Nicolo to use python for his work, rather than following the normal group practice, which was to stay within the software framework provided by the MATLAB code. By the time Nicolo returned as a PhD student in April 2010 I was agitating in the MLO group in Manchester that all our machine learning should be done in python. In the end, this lobbying attempt was unsuccessful, perhaps because I moved back to Sheffield in August 2010.

On the run into the move, it was already clear where my mind was going. I gave this presentation at a workshop on Validation in Statistics and ML in Berlin in June, 2010, where I talked about the importance of imposing a rigid structure for code (at the time we used an svn repository and a particular directory structure) when reproducible research is the aim, but also mentioned the importance of responding to individuals and new technology (such as git and python). Nicolo had introduced me to git, but we had the legacy of an SVN code base in MATLAB to deal with. So at that point the intention was there to move both in terms of research and teaching, but I don’t think I could yet see how we were going to do it. My main plan was to move the teaching there first, and then follow with the research code.

On the move to Sheffield in August, 2010, we had two new post-docs start (Jaakko Peltonen and James Hensman) and a new PhD student (Andreas Damianou). Jaakko and Andreas were started on the MATLAB code base, but James also expressed a preference for python. Nicolo was progressing well with his python implementations, so James joined Nicolo in working in python. However, James began to work on particular methodological ideas that were targeted at Gaussian processes. I think this was the most difficult time in the transition. In particular, James initially was working on his own code, which was put together in a bespoke manner for solving a particular problem. A key moment in the transition was when James also realised the utility of a shared code base for delivering research: he set to work building a toolbox that replicated the functionality of the old code base, in particular focussing on covariance functions and sparse approximations. Now the development of the new code base had begun in earnest. With Nicolo joining in, and the new recruits Ricardo Andrade Pacheco (PhD student from September 2011), who focussed on developing the likelihood code with the EP approximation in mind, and Nicolas Durrande (post-doc), who worked on covariance function (kernel) code, the balance in the group tipped so that all the main methodological work was now happening in the new python codebase, what was to become GPy. By the time of this talk at the RADIANT Project launch meeting in October 2012, the switch over had been pretty much completed. Since then Alan Saul has joined the team and has been focussing on the likelihood models and introducing the Laplace approximation. Max Zwiessele, who first visited us from MPI Tuebingen in 2012, returned in April 2013 and has been working on the Bayesian GP-LVM implementations (with Nicolo Fusi and Andreas Damianou).

GPy has now fully replaced the old MATLAB code base as the group’s approach to delivering code implementations.

I think the hardest part of the process was the period between fully committing to the transition and not yet having a fully functional code base in python. The fact that this transition was achieved so smoothly, and has led to a code base that is far more advanced than the MATLAB code, is entirely down to those that worked on the code, but particular thanks is due to James Hensman. As soon as James became convinced of the merits of a shared research code base he began to drive forward the development of GPy. Nicolo worked closely with James to get the early versions functional, and since then all the group’s recruits have been contributing.

Four years ago, I knew where we wanted to be, but I didn’t know how (or if) we were going to get there. But actually, I think that’s the way of most things in research. As often, the answer is through the inspiration and perspiration of those that work with you. The result is a new software code base, more functional than before, and more appropriate for student projects, industrial collaborators and teaching. We have already used the code base in two summer schools, and have two more scheduled. It is still very much in alpha release, but we are sharing it through a BSD license to enable both industrial and academic collaborators to contribute. We hope for a wider user base, thereby ensuring a more robust code base.

License

We had quite an involved discussion about what license to release source code under. The original MATLAB code base (now rereleased as the GPmat toolbox on github) was under an academic-use-only license, primarily because the code was being released as papers were being submitted, and I didn’t want to have to make decisions about licensing (beyond letting people see the source code for reproduction) on submission of the paper. When our code was being transferred to bioconductor (e.g. PUMA and tigre) we were releasing as GPL licensed software, as required. But when it comes to developing a framework, what license to use? It does bother me that many people continue to use code without attribution; this has a highly negative effect, particularly when it comes to having to account for the group’s activities. It’s always irritated me that BSD licensed code can be simply absorbed by a company, without proper acknowledgement of the debt the firm owes open source or long term support of the code development. However, at the end of the day, our job as academics is to allow society to push forward, to make our ideas as accessible as possible so that progress can be made. A BSD license seems to be the most compatible with this ideal. Add to that the fact that some of my PhD students (e.g. Nicolo Fusi, now at Microsoft) move on to companies which are unable to use GPL licenses, but can happily continue to work on BSD licensed code, and BSD became the best choice. However, I would ask people, if they do use our code, please acknowledge our efforts, either by referencing the code base or, if the code implements a research idea, by referencing the paper.

Directions

The original MATLAB code base was just a way to get the group’s research out to the ‘user base’. But I think GPy is much more than that. Firstly, it is likely that we will be releasing our research papers with GPy as a dependency, rather than re-releasing the whole of GPy. That makes it more of a platform for research. It also will be a platform for modelling. Influenced by the probabilistic programming community, we are trying to make the GPy interface easy to use for modellers. I see all machine learning as separated into model and algorithm: the model is what you say about the data, the algorithm is how you fit (or infer) the parameters of that model. An aim for GPy is to make it easy for users to model without worrying about the algorithm. Simultaneously, we hope that ML researchers will use it as a platform to demonstrate their new algorithms, which are applicable to particular models (certainly we hope to do this). Finally we are using GPy as a teaching tool in our series of Gaussian Process Summer Schools, Winter Schools and Road Shows. The use of python as an underlying platform means we can teach industry and academic collaborators with limited resources the fundamentals of Gaussian processes without requiring them to buy extortionate licenses for out of date programming paradigms.
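
For a flavour of that ‘model first’ interface, here is a minimal GPy regression example (a sketch using the current GPy interface, which may differ between releases):

```python
import numpy as np
import GPy

# Specify the model: data, a covariance function and (implicitly) a Gaussian likelihood.
X = np.random.uniform(-3.0, 3.0, (20, 1))
Y = np.sin(X) + np.random.randn(20, 1) * 0.05

kernel = GPy.kern.RBF(input_dim=1, variance=1.0, lengthscale=1.0)
model = GPy.models.GPRegression(X, Y, kernel)

# The algorithm side is then handled by the toolbox: optimize hyperparameters and predict.
model.optimize(messages=False)
mean, variance = model.predict(np.linspace(-3, 3, 100)[:, None])
```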

EPSRC College of Reviewers

Yesterday, I resigned from the EPSRC college of reviewers.

The EPSRC is the national funding body in the UK for Engineering and Physical Sciences. The college of reviewers is responsible for reading grant proposals and making recommendations to panels with regard to the quality, feasibility and utility of the underlying work.

The EPSRC aims to fund international quality science, but the college of reviewers is a national body of researchers. Allocation of proposals to reviewers is done within the EPSRC.

In 2012 I was asked to review only one proposal; in 2013 so far I have received none. The average number of review requests per college member in 2012 was 2.7.

It’s not that I haven’t been doing any proposal reviewing over the last 18 months, I’ve reviewed for the Dutch research councils, the EU, the Academy of Finland, the National Science Foundation (USA), BBSRC, MRC and I’m contracted as part of a team to provide a major review for the Canadian Institute for Advanced Research. I’d estimate that I’ve reviewed around 20 international applications in the area of machine learning and computational biology across this period.

I resigned from the EPSRC College of Reviewers because I don’t wish people to read the list of names in the college and assume that, as a member of the college, I am active in assessing the quality of work the EPSRC is funding. Looking back over the last ten years, all the proposals I have reviewed have come from a very small body of researchers, all of whom, I know, nominate me as a reviewer.

Each submitted proposal nominates a number of reviewers who the proposers consider to be appropriate. The EPSRC chooses one of these nominated reviewers, and selects the remainder from the wider college.

Over a 12 year period as an academic, I have never been selected to review an EPSRC proposal unless I’ve been nominated by the proposers to do so.

So in many senses this resignation changes nothing, but by resigning from the college I’m highlighting the fact that if you do think I am appropriate for reviewing your proposal, then the only way it will happen is if you nominate me.