Data Science: Time for Professionalisation?

Neil D. Lawrence

Open Data Science Conference, London

What is Machine Learning?

\[ \text{data} + \text{model} \stackrel{\text{compute}}{\rightarrow} \text{prediction}\]

  • data: observations, which could be actively or passively acquired (meta-data).
  • model: assumptions, based on previous experience (other data! transfer learning etc.) or beliefs about the regularities of the universe. Inductive bias.
  • prediction: an action to be taken, a categorization, or a quality score.

What is Machine Learning?

\[\text{data} + \text{model} \stackrel{\text{compute}}{\rightarrow} \text{prediction}\]

  • To combine data with a model we need:
  • a prediction function \(f(\cdot)\), which includes our beliefs about the regularities of the universe;
  • an objective function \(E(\cdot)\), which defines the cost of misprediction.

Machine Learning

  • Driver of two different domains:
    1. Data Science: arises from the fact that we now capture data by happenstance.
    2. Artificial Intelligence: emulation of human behaviour.
  • Connection: Internet of Things

Machine Learning

  • Driver of two different domains:
    1. Data Science: arises from the fact that we now capture data by happenstance.
    2. Artificial Intelligence: emulation of human behaviour.
  • Connection: Internet of People

Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981/1/28)

What does Machine Learning do?

  • ML Automates through Data
    • Strongly related to statistics.
    • The field underpins the revolution in data science and AI
  • With AI:
    • logic, robotics, computer vision, speech
  • With Data Science:
    • databases, data mining, statistics, visualization

Embodiment Factors

                          computer      human
bits/min                  billions      2,000
billion calculations/s    ~100          a billion
embodiment                20 minutes    5 billion years

Evolved Relationship with Information

New Flow of Information

Evolved Relationship

What does Machine Learning do?

  • Automation scales by codifying processes and automating them.
  • Need:
    • Interconnected components
    • Compatible components
  • Early examples:
    • cf. the Colt 45 and the Ford Model T

Codify Through Mathematical Functions

  • How does machine learning work?
  • Jumper (jersey/sweater) purchase with logistic regression

\[ \text{odds} = \frac{p(\text{bought})}{p(\text{not bought})} \]

\[ \log \text{odds} = \beta_0 + \beta_1 \text{age} + \beta_2 \text{latitude}.\]
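
As a worked example (the coefficient values here are illustrative assumptions, not estimates from data): taking \(\beta_0 = -4\), \(\beta_1 = 0.05\) and \(\beta_2 = 0.05\), a 40-year-old customer at latitude 60 gives

\[ \log \text{odds} = -4 + 0.05 \times 40 + 0.05 \times 60 = 1, \qquad \text{odds} = e^{1} \approx 2.72,\]

so a purchase is predicted to be about 2.7 times as likely as a non-purchase.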

Codify Through Mathematical Functions

  • How does machine learning work?
  • Jumper (jersey/sweater) purchase with logistic regression

\[ p(\text{bought}) = \sigma\left(\beta_0 + \beta_1 \text{age} + \beta_2 \text{latitude}\right),\]

where \(\sigma(z) = 1/(1 + e^{-z})\) is the logistic function.

Codify Through Mathematical Functions

  • How does machine learning work?
  • Jumper (jersey/sweater) purchase with logistic regression

\[ p(\text{bought}) = \sigma\left(\boldsymbol{\beta}^\top \mathbf{x}\right),\]

where \(\mathbf{x}\) now includes a constant first element of 1, so that the intercept \(\beta_0\) is absorbed into \(\boldsymbol{\beta}\).

Codify Through Mathematical Functions

  • How does machine learning work?
  • Jumper (jersey/sweater) purchase with logistic regression

\[ y = f\left(\mathbf{x}, \boldsymbol{\beta}\right).\]

We call \(f(\cdot)\) the prediction function.
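
A minimal sketch of this prediction function in Python (the variable names and coefficient values are illustrative assumptions, not from the talk):

```python
import numpy as np

def sigmoid(z):
    # logistic function: maps log odds to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def f(x, beta):
    # prediction function: p(bought) = sigma(beta^T x)
    return sigmoid(x @ beta)

# x carries a leading 1 so the first entry of beta acts as the intercept
x = np.array([1.0, 40.0, 60.0])      # [1, age, latitude]
beta = np.array([-4.0, 0.05, 0.05])  # illustrative coefficients
print(f(x, beta))                    # approximately 0.73
```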

Fit to Data

  • Use an objective function

\[E(\boldsymbol{\beta}, \mathbf{Y}, \mathbf{X})\]

  • E.g. least squares \[E(\boldsymbol{\beta}, \mathbf{Y}, \mathbf{X}) = \sum_{i=1}^n\left(y_i - f(\mathbf{ x}_i, \boldsymbol{\beta})\right)^2.\]
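
A minimal sketch of evaluating such an objective, reusing the logistic prediction function above (toy data; all values are illustrative assumptions):

```python
import numpy as np

def f(X, beta):
    # prediction function: rows of X are feature vectors [1, age, latitude]
    return 1.0 / (1.0 + np.exp(-(X @ beta)))

def E(beta, Y, X):
    # least squares objective: total cost of misprediction
    return np.sum((Y - f(X, beta)) ** 2)

# toy data set of two customers
X = np.array([[1.0, 40.0, 60.0],
              [1.0, 25.0, 30.0]])
Y = np.array([1.0, 0.0])  # bought, not bought

# fitting means choosing beta to minimise E(beta, Y, X),
# e.g. with a gradient-based optimiser
print(E(np.array([-4.0, 0.05, 0.05]), Y, X))
```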

Two Components

  • Prediction function, \(f(\cdot)\)
  • Objective function, \(E(\cdot)\)

Deep Learning

  • Models like logistic regression are interpretable: vital for disease modelling and similar applications.

  • Modern machine learning methods are less interpretable.

  • Example: face recognition.

Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three locally-connected layers and two fully-connected layers. Color illustrates feature maps produced at each layer. The net includes more than 120 million parameters, where more than 95% come from the local and fully connected layers.

Source: DeepFace (Taigman et al., 2014)

Data Science and Professionalisation

  • Industrial Revolution 4.0?
  • Industrial Revolution (1760-1840): term coined by Arnold Toynbee (1852-1883).
  • Maybe: but this one is dominated by data, not capital.
  • A revolution in information rather than energy.
  • That presents both challenges and opportunities.
  • Consider Apple vs Nokia: how you handle disruption matters.

Compare the threat of a digital oligarchy with how Africa can benefit from the data revolution.

A Time for Professionalisation?

  • New technologies historically led to new professions:
    • Brunel (born 1806): Civil, mechanical, naval
    • Tesla (born 1856): Electrical and power
    • William Shockley (born 1910): Electronic
    • Watts S. Humphrey (born 1927): Software

Why?

  • Codification of best practice.
  • Developing trust

Where are we?

  • Perhaps where programming was in the 1980s.
    • We understand if, for, and procedures.
    • But we don’t share best practice.
  • Let’s avoid the over-formalisation of software engineering.

The Software Crisis

The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.

Edsger Dijkstra (1930-2002), The Humble Programmer

The Data Crisis

The major cause of the data crisis is that machines have become more interconnected than ever before. Data access is therefore cheap, but data quality is often poor. What we need is cheap high-quality data. That implies that we develop processes for improving and verifying data quality that are efficient.

There would seem to be two ways for improving efficiency. Firstly, we should not duplicate work. Secondly, where possible we should automate work.

Me

Rest of this Talk: Two Areas of Focus

  • Reusability of Data

  • Deployment of Machine Learning Systems

Data Readiness Levels

Data Readiness Levels (Lawrence, 2017b): https://arxiv.org/pdf/1705.02245.pdf

Three Grades of Data Readiness

  • Grade C - accessibility
    • Transition: data becomes electronically available
  • Grade B - validity
    • Transition: pose a question to the data.
  • Grade A - usability

Accessibility: Grade C

  • Hearsay data.
  • Availability: is it actually being recorded?
  • Privacy or legal constraints on the accessibility of the recorded data; have ethical constraints been alleviated?
  • Format: log books, PDF …
  • Limitations on access due to topology (e.g. it’s distributed across a number of devices).
  • At the end of Grade C, data is ready to be loaded into analysis software (R, SPSS, Matlab, Python, Mathematica), as in the sketch below.
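
In Python, for example, the end of Grade C is the point at which something like the following succeeds without manual wrangling (the file name is hypothetical):

```python
import pandas as pd

# Grade C transition: the data loads cleanly into analysis software
df = pd.read_csv("jumper_sales.csv")
print(df.head())
```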

Validity: Grade B

  • Faithfulness and representation.
  • Visualisations.
  • Exploratory data analysis.
  • Noise characterisation.

Grade B Checks

  • Missing values.
  • Schema alignment, record linkage, data fusion.
  • Example: see the sketch below.
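
A minimal sketch of such checks in pandas (the file and column names are hypothetical assumptions):

```python
import pandas as pd

df = pd.read_csv("jumper_sales.csv")  # hypothetical data set

# missing values: count per column
print(df.isna().sum())

# schema alignment: do the columns match what downstream analysis expects?
expected = {"customer_id", "age", "latitude", "bought"}
print("missing columns:", expected - set(df.columns))

# record linkage / data fusion sanity check: duplicated identifiers
print("duplicate records:", df.duplicated(subset=["customer_id"]).sum())
```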

Grade B Transition

  • At the end of Grade B, ready to define a task, or question
  • Compare with classical statistics:
    • Classically: the question comes first, the data later.
    • Today: the data comes first, the question later.

Data First

In a data-first company, teams own their data quality issues at least as far as Grade B1.

Usability: Grade A

  • The usability of data
    • Grade A is about data in context.
  • Consider the appropriateness of a given data set for answering a particular question or for being subjected to a particular analysis.

Recursive Effects

  • Grade A may also require:
    • data integration,
    • active collection of new data,
    • rebalancing of data to ensure fairness,
    • annotation of data by human experts,
    • revisiting the collection (and running through the appropriate stages again).

A1 Data

  • A1 data is ready to make available for challenges or AutoML platforms.

Contribute!

http://data-readiness.org

Also …

  • Encourage greater interaction between application domains and data scientists
  • Encourage visualization of data

Assessing the Organization's Readiness

See Also …

Deploying Artificial Intelligence

  • Challenges in deploying AI.
  • Currently this is in the form of “machine learning systems”.

Internet of People

  • Fog computing: the barrier between cloud and device is blurring.
    • Computing on the edge.
  • Complex feedback between algorithm and implementation.

Deploying ML in Real World: Machine Learning Systems Design

  • A major new challenge for systems designers.
  • Internet of Intelligence, but currently:
    • AI systems are fragile.

Machine Learning Systems Design

Fragility of AI Systems

  • AI systems are built componentwise from ML capabilities.
  • Each capability is independently constructed and verified, e.g.:
    • pedestrian detection
    • road line detection
  • This is important for verification purposes.

Pigeonholing

Robust

  • Need to move beyond pigeonholing tasks.
  • Need new approaches to both the design of the individual components, and the combination of components within our AI systems.

Rapid Reimplementation

  • Whole systems are being deployed.
  • But they change their environment.
  • Experience shows that this evolves adversarial behaviour.

Machine Learning Systems Design

Figure: Science on Holborn Viaduct, cradling the Centrifugal Governor.

“On Governors”, James Clerk Maxwell, 1868

Adversaries

  • Stuxnet
  • Mischievous-Adversarial

An Intelligent System

Joint work with M. Milo

Peppercorns

  • A new name for system failures which aren’t bugs.
  • The difference between finding a fly in your soup and finding a peppercorn in your soup.

Turnaround and Update

  • There is a massive need for turnaround and update:
  • a redeploy of the entire system.
    • This involves changing the way we design and deploy.
  • This sits at the interface between security engineering and machine learning.

Conclusion

  • Artificial Intelligence and Data Science are fundamentally different.
  • In Data Science you are dealing with data collected by happenstance.
  • In Artificial Intelligence you are trying to build systems in the real world, often by actively collecting data.
  • Our approaches to systems design are building powerful machines that will be deployed in evolving environments.

Thanks!

References

Lawrence, N.D., 2017a. Living together: Mind and machine intelligence. arXiv.
Lawrence, N.D., 2017b. Data readiness levels. arXiv.
Taigman, Y., Yang, M., Ranzato, M., Wolf, L., 2014. DeepFace: Closing the gap to human-level performance in face verification, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/CVPR.2014.220