Machine Learning and the Physical World

Neil D. Lawrence

Center for Statistics and Machine Learning, Princeton

Figure: Science on Holborn Viaduct, cradling the Centrifugal Governor.

On Governors, James Clerk Maxwell 1868

\[\text{data} + \text{model} \stackrel{\text{compute}}{\rightarrow} \text{prediction}\]

From Model to Decision

\[\text{data} + \text{model} \stackrel{\text{compute}}{\rightarrow} \text{decision}\]

Process Automation

Efficiency

  • Economies driven by ‘production.’
  • Greater production comes with better efficiency.
    • E.g. moving from gathering food to settled agriculture.
  • In the modern era one approach to becoming more efficient is automation of processes.
    • E.g. manufacturing production lines

Physical Processes

  • Production lines, robotic automation
  • Supply chain, logistics
  • Efficiency through automation.

Goods and Information

  • Manage flow of goods and information.
  • Flow of information is highly automated.
  • Processing of data is decomposed into stages in computer code.

Intervention

  • For all cases: manufacturing, logistics, data management
  • Pipeline requires human intervention from an operator.
  • Interventions create bottlenecks, slow the process.
  • Machine learning is a key technology in automating these manual stages.

Long Grass

  • Easy-to-replicate interventions have already been dealt with.
  • The components that still require human intervention are the knottier problems.
  • They are difficult to decompose into stages which could then be further automated.
  • These components are ‘process-atoms.’
  • These are the “long grass” regions of technology.

Nature of Challenge

  • In manufacturing or logistics settings the atoms are flexible manual skills.
    • Requires emulation of a human’s motor skills.
  • In information processing: our flexible cognitive skills.
    • Our ability to mentally process an image or some text.

Artificial Intelligence and Data Science

  • AI aims to equip computers with human capabilities
    • Image understanding
    • Computer vision
    • Speech recognition
    • Natural language understanding
    • Machine translation

Supervised Learning for AI

  • Dominant approach today:
    • Generate large labelled data set from humans.
    • Use supervised learning to emulate the human responses in that data.
      • E.g. ImageNet Russakovsky et al. (2015)
  • Significant advances due to deep learning
    • E.g. Alexa, Amazon Go

Data Science

  • Arises from happenstance data.
  • Differs from statistics in that the question comes after data collection.

The Gap

  • There is a gap between the world of data science and AI.
  • The mapping of the virtual onto the physical world.
  • E.g. Causal understanding.

Supply Chain

Cromford

Deep Freeze

Deep Freeze

Machine Learning in Supply Chain

  • Supply chain: Large Automated Decision Making Network
  • Major Challenge:
    • We have a mechanistic understanding of supply chain.
    • Machine learning is a data driven technology.

Uncertainty Quantification

Data Driven

  • Machine learning: replicate processes through direct use of data.
  • Aim to emulate cognitive processes through the use of data.
  • Use data to provide new approaches in control and optimization that should allow for emulation of human motor skills.

Process Emulation

  • Key idea: emulate the process as a mathematical function.
  • Each function has a set of parameters which control its behaviour.
  • Learning is the process of changing these parameters to change the shape of the function
  • Choice of which class of mathematical functions we use is a vital component of our model.
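The idea above can be sketched in the simplest possible setting: choose a class of functions, then adjust its parameters until the function's shape matches observed input/output pairs. This toy example (a quadratic function class fitted by least squares, not any particular emulator) is illustrative only:

```python
import numpy as np

# Emulate a process as a parametric function: pick a function class,
# then "learn" by adjusting parameters to fit observed data.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20)
y = 2.0 * x**2 - 0.5 * x + rng.normal(0.0, 0.01, size=x.shape)  # noisy observations

# Design matrix for the chosen class f(x) = a*x^2 + b*x + c
Phi = np.stack([x**2, x, np.ones_like(x)], axis=1)
params, *_ = np.linalg.lstsq(Phi, y, rcond=None)

print(params)  # approximately [2.0, -0.5, 0.0]
```

Changing the parameters changes the shape of the function; changing the function class (here, quadratics) changes what shapes are reachable at all.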

Emukit Playground

Leah Hirst, Cliff McCollum

Emukit Playground

Emukit Playground

Uncertainty Quantification

  • Deep nets are powerful approach to images, speech, language.
  • Proposal: Deep GPs may also be a great approach, but better to deploy according to natural strengths.

Uncertainty Quantification

  • Probabilistic numerics, surrogate modelling, emulation, and UQ.
  • Not a fan of AI as a term.
  • But we are faced with increasing amounts of algorithmic decision making.

ML and Decision Making

  • When trading off decisions: compute or acquire data?
  • There is a critical need for uncertainty.

Uncertainty Quantification

Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known.

  • Interaction between physical and virtual worlds of major interest.

Contrast

  • Simulation is also used in reinforcement learning.
  • There it is known as data augmentation.
  • Newer and similar in spirit, but it typically ignores uncertainty.

Example: Formula One Racing

  • Designing an F1 Car requires CFD, Wind Tunnel, Track Testing etc.

  • How to combine them?

Mountain Car Simulator

Car Dynamics

\[ \mathbf{ x}_{t+1} = f(\mathbf{ x}_{t},\textbf{u}_{t}) \] where \(\textbf{u}_t\) is the action force, \(\mathbf{ x}_t = (p_t, v_t)\) is the vehicle state

Policy

  • Assume policy is linear with parameters \(\boldsymbol{\theta}\) \[ \pi(\mathbf{ x},\boldsymbol{\theta})= \theta_0 + \theta_p p + \theta_v v. \]
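A minimal sketch of this linear policy; the parameter values are illustrative, not from the experiment:

```python
import numpy as np

# Linear policy: action = theta_0 + theta_p * p + theta_v * v,
# where the state is (position p, velocity v).
def policy(state, theta):
    p, v = state
    return theta[0] + theta[1] * p + theta[2] * v

theta = np.array([0.1, -0.5, 2.0])  # (theta_0, theta_p, theta_v), arbitrary values
print(policy((0.4, 0.05), theta))   # 0.1 - 0.2 + 0.1 = 0.0
```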

Emulate the Mountain Car

  • Goal is to find \(\boldsymbol{\theta}\) such that \[ \boldsymbol{\theta}^* = \arg\max_{\boldsymbol{\theta}} R_T(\boldsymbol{\theta}). \]
  • Reward is 100 for reaching the target, minus the sum of squared actions.
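A sketch of that reward computation, assuming the episode's actions are collected in a list (the bookkeeping is simplified):

```python
# Episode reward: 100 on reaching the target, minus a control-cost
# penalty equal to the sum of squared actions taken.
def episode_reward(actions, reached_target):
    action_cost = sum(a * a for a in actions)
    return (100.0 if reached_target else 0.0) - action_cost

print(episode_reward([1.0, -0.5, 0.5], reached_target=True))   # 100 - 1.5 = 98.5
print(episode_reward([1.0], reached_target=False))             # -1.0
```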

Random Linear Controller

Best Controller after 50 Iterations of Bayesian Optimization

Data Efficient Emulation

  • Standard Bayesian optimization ignored the dynamics of the car.
  • For more data efficiency, first emulate the dynamics.
  • Then do Bayesian optimization of the emulator.

\[ \mathbf{ x}_{t+1} =g(\mathbf{ x}_{t},\textbf{u}_{t}) \]

  • Use a Gaussian process to model \[ \Delta v_{t+1} = v_{t+1} - v_{t} \] and \[ \Delta p_{t+1} = p_{t+1} - p_{t} \]
  • Two processes, one with mean \(v_{t}\), one with mean \(p_{t}\)
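As a rough illustration of emulating the velocity change, here is a from-scratch GP-regression sketch in plain NumPy with a fixed RBF kernel. The toy dynamics function stands in for the simulator, and the hyperparameters are assumptions rather than fitted values (the talk's experiments use GPy/Emukit models):

```python
import numpy as np

# RBF kernel between two sets of inputs.
def rbf(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))            # (p_t, v_t, u_t) training inputs
dv = 0.3 * X[:, 2] - 0.1 * np.sin(3 * X[:, 0])   # toy stand-in for the simulator's Delta v

K = rbf(X, X) + 1e-4 * np.eye(len(X))            # jitter for numerical stability
alpha = np.linalg.solve(K, dv)

# Posterior mean prediction of Delta v at a new state-action input.
def predict_dv(x_star):
    return rbf(np.atleast_2d(x_star), X) @ alpha

x = np.array([0.2, 0.0, 0.5])
print(predict_dv(x))  # close to the true Delta v at this input
```

The second process, for \(\Delta p\), is fitted in exactly the same way on the same 500 inputs.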

Emulator Training

  • Used 500 randomly selected points to train emulators.
  • Can make the process more efficient through experimental design.

Comparison of Emulation and Simulation

Data Efficiency

  • Our emulator used only 500 calls to the simulator.
  • Optimizing the simulator directly required 37,500 calls to the simulator.

Best Controller using Emulator of Dynamics

Mountain Car: Multi-Fidelity Emulation

\[ f_i\left(\mathbf{ x}\right) = \rho f_{i-1}\left(\mathbf{ x}\right) + \delta_i\left(\mathbf{ x}\right), \]

\[ f_i\left(\mathbf{ x}\right) = g_{i}\left(f_{i-1}\left(\mathbf{ x}\right)\right) + \delta_i\left(\mathbf{ x}\right), \]
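The first, linear form can be sketched directly; the low-fidelity function and discrepancy term here are illustrative stand-ins, and \(\rho\) is a fixed scaling rather than a learned parameter:

```python
import numpy as np

# Linear multi-fidelity model: high fidelity is a scaled low-fidelity
# function plus a correction term delta(x).
def f_low(x):
    return np.sin(8 * np.pi * x)      # cheap, biased simulator (stand-in)

def delta(x):
    return (x - 0.5) ** 2             # smooth discrepancy term (stand-in)

rho = 1.5                             # fidelity scaling parameter

def f_high(x):
    return rho * f_low(x) + delta(x)

print(f_high(np.linspace(0, 1, 5)))
```

The second, nonlinear form replaces the scaling \(\rho\) with a learned mapping \(g_i\) of the lower-fidelity output.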

Building the Multifidelity Emulation

n_initial_points = 25
random_design = RandomDesign(design_space)
initial_design = random_design.get_samples(n_initial_points)
acquisition = GPyOpt.acquisitions.AcquisitionEI(model, design_space, optimizer=acquisition_optimizer)
evaluator = GPyOpt.core.evaluators.Sequential(acquisition)

Best Controller with Multi-Fidelity Emulator

250 observations of high fidelity simulator and 250 of the low fidelity simulator

Emukit

Javier Gonzalez

Emukit

Emukit

Javier Gonzalez, Andrei Paleyes, Mark Pullin, Maren Mahsereci

Modular Design

Introduce your own surrogate models.

from emukit.model_wrappers import GPyModelWrapper

To build your own model, see this notebook.

from emukit.model_wrappers import YourModelWrapperHere

MXFusion: Modular Probabilistic Programming on MXNet

https://github.com/amzn/MXFusion

MxFusion


  • Work by Eric Meissner and Zhenwen Dai.
  • Probabilistic programming.
  • Available on Github

MxFusion

  • Targeted at challenges we face in emulation.
  • Composition of Gaussian processes (Deep GPs)
  • Combining GPs with neural networks.
  • Example PPCA Tutorial.

Why another framework?

  • Existing libraries had either:
    • Probabilistic modelling with rich, flexible models and universal inference or
    • Specialized, efficient inference over a subset of models

We need both

Key Requirements

  • Integration with deep learning
  • Flexibility
  • Scalability
  • Specialized inference and models support
    • Bayesian Deep Learning methods
    • Rapid prototyping and software re-use
    • GPUs, specialized inference methods

Modularity

  • Specialized Inference
  • Composability (tinkerability)
    • Better leveraging of domain expertise

What does it look like?

Modelling

Inference

Modelling

Directed Graphs

  • Variable
  • Function
  • Distribution

Example

from mxfusion import Model, Variable
from mxfusion.components.variables import PositiveTransformation
from mxfusion.components.distributions import Normal

m = Model()
m.mu = Variable()
m.s = Variable(transformation=PositiveTransformation())
m.Y = Normal.define_variable(mean=m.mu, variance=m.s)

3 primary components in modeling

  • Variable
  • Distribution
  • Function

2 primary methods for models

  • log_pdf
  • draw_samples

Inference: Two Classes

  • Variational Inference
  • MCMC Sampling (soon)

Built on MXNet Gluon (imperative code, not static graph)

Example

infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.Y]))
infr.run(Y=data)

Modules

  • Model + Inference together form building blocks.
    • Just doing modular modelling with universal inference doesn’t really scale; we need specialized inference methods for specialized modelling objects like non-parametrics.

Enhancements MXFusion Brings

  • Use Monte Carlo integration instead of moment estimation
  • Use automatic differentiation
  • A flexible interface for Gaussian processes, trivial to switch to sparse or stochastic variational
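Monte Carlo integration of an expectation, the alternative to moment estimation mentioned above, can be sketched in a few lines; the integrand and distribution here are illustrative, not MXFusion internals:

```python
import numpy as np

# Estimate E[g(x)] for x ~ N(0, 1) by sampling instead of propagating
# moments analytically: draw samples, evaluate g, average.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=200_000)
estimate = np.mean(samples ** 2)   # E[x^2] = 1 for a standard normal

print(estimate)  # close to 1.0
```

The same recipe works for integrands with no closed-form moments, which is what makes it a flexible default for variational objectives.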

Preparation

Example: Pendulum

Pendulum

Fit the Dynamics Model

  • Dynamics:

\[ p(y_{t+1}|y_t, a_t) \]

Define and fit the model

Policy

  • Make use of neural network with one hidden layer
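A sketch of such a one-hidden-layer policy in plain NumPy (the talk's version is built in MXNet Gluon; the state dimension, layer width, and tanh activations here are assumptions):

```python
import numpy as np

# One-hidden-layer policy network: state -> hidden -> action.
rng = np.random.default_rng(0)
state_dim, hidden_dim, action_dim = 3, 16, 1   # assumed sizes

W1 = rng.normal(0, 0.1, size=(state_dim, hidden_dim))
b1 = np.zeros(hidden_dim)
W2 = rng.normal(0, 0.1, size=(hidden_dim, action_dim))
b2 = np.zeros(action_dim)

def policy(state):
    h = np.tanh(state @ W1 + b1)   # single hidden layer
    return np.tanh(h @ W2 + b2)    # bounded action in [-1, 1]

action = policy(np.array([np.cos(0.1), np.sin(0.1), 0.0]))
print(action.shape)  # (1,)
```

The policy gradients are then obtained by differentiating the expected reward under the emulated dynamics with respect to W1, b1, W2, b2.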

Obtaining the policy gradients

The Loop

After First Episode

Policy after the first episode (random exploration):

After Fifth Episode

Policy after the 5th episode:

Contribute!

https://github.com/amzn/mxfusion

Future plans

  • Deep GPs (implemented, not yet merged)
  • MCMC Methods
  • Time series models (RGPs)

Long term Aim

  • Simulate/Emulate the components of the system.
    • Validate with real world using multifidelity.
    • Interpret system using e.g. sensitivity analysis.
  • Perform end to end learning to optimize.
    • Maintain interpretability.

Thanks!

References

Deisenroth, M.P., Rasmussen, C.E., 2011. PILCO: A model-based and data-efficient approach to policy search, in: Proceedings of the 28th International Conference on Machine Learning (ICML).
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L., 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115, 211–252. https://doi.org/10.1007/s11263-015-0816-y