
# Machine Learning and the Physical World

Center for Statistics and Machine Learning, Princeton

*Watt’s steam engine, which made steam power efficient and practical.*

$\text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}$

### Efficiency

• Economies driven by ‘production’.
• Greater production comes with better efficiency.
• E.g. moving from gathering food to settled agriculture.
• In the modern era one approach to becoming more efficient is automation of processes.
• E.g. manufacturing production lines

### Physical Processes

• Manufacturing processes consist of production lines and robotic automation.
• Logistics can also be decomposed into the supply chain processes.
• Efficiency can be improved by automation.

### Goods and Information

• For modern society: management of flow of goods and information.
• Flow of information is highly automated.
• Processing of data is decomposed into stages in computer code.

### Intervention

• In all cases (manufacturing, logistics, data management):
• The pipeline requires human intervention from an operator.
• Interventions create bottlenecks and slow the process.
• Machine learning is a key technology in automating these manual stages.

### Long Grass

• Easy-to-replicate interventions have already been dealt with.
• Components that still require human intervention are the knottier problems.
• Difficult to decompose into stages that could then be further automated.
• These components are ‘process-atoms’.
• These are the “long grass” regions of technology.

### Nature of Challenge

• In manufacturing or logistics settings, the atoms are flexible manual skills.
• Requires emulation of a human’s motor skills.
• In information processing: our flexible cognitive skills.
• Our ability to mentally process an image or some text.

### Artificial Intelligence and Data Science

• AI aims to equip computers with human capabilities:
• Image understanding
• Computer vision
• Speech recognition
• Natural language understanding
• Machine translation

### Supervised Learning for AI

• Dominant approach today:
• Generate a large labelled data set from humans.
• Use supervised learning to emulate the human responses captured in that data.
• E.g. ImageNet (Russakovsky et al., 2015)
• Significant advances due to deep learning
• E.g. Alexa, Amazon Go

### Data Science

• Arises from happenstance data.
• Differs from statistics in that the question comes after data collection.

### The Gap

• There is a gap between the world of data science and AI.
• The mapping of the virtual onto the physical world.
• E.g. Causal understanding.

### Machine Learning in Supply Chain

• Supply chain: Large Automated Decision Making Network
• Amazon’s supply chain: Possibly the world’s largest ‘AI’
• Major Challenge:
• We have a mechanistic understanding of supply chain.
• Machine learning is a data driven technology.

### Data Driven

• Machine learning: replicate processes through direct use of data.
• Aim to emulate cognitive processes through the use of data.
• Use data to provide new approaches to control and optimization that should allow emulation of human motor skills.

### Process Emulation

• Key idea: emulate the process as a mathematical function.
• Each function has a set of parameters which control its behavior.
• Learning is the process of changing these parameters to change the shape of the function.
• Choice of which class of mathematical functions we use is a vital component of our model.
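As a minimal sketch of this idea (the quadratic function class, data, and parameter values below are all illustrative), learning reduces to adjusting the parameters until the function matches observations of the process:

```python
import numpy as np

# Hypothetical example: emulate an unknown process y = g(x) with a
# parametric function f(x; theta). "Learning" = adjusting theta to fit data.
def f(x, theta):
    # Quadratic family: the chosen "class of mathematical functions".
    return theta[0] + theta[1] * x + theta[2] * x**2

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 0.5 - 2.0 * x + 1.5 * x**2 + rng.normal(0, 0.05, 50)  # noisy process

# Least squares: choose theta minimising squared error on the data.
X = np.stack([np.ones_like(x), x, x**2], axis=1)
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)  # close to the generating parameters (0.5, -2.0, 1.5)
```

Choosing the quadratic family here is exactly the "vital component" in the last bullet: a different function class would give a different emulator.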

### Emukit Playground

• Work by Adam Hirst, Software Engineering Intern, and Cliff McCollum.

• Tutorial on emulation.

### Uncertainty Quantification

• Deep nets are a powerful approach for images, speech, and language.
• Proposal: deep GPs may also be a great approach, but they are better deployed according to their natural strengths.

### Uncertainty Quantification

• Probabilistic numerics, surrogate modelling, emulation, and UQ.
• Not a fan of AI as a term.
• But we are faced with increasing amounts of algorithmic decision making.

### ML and Decision Making

• When trading off decisions: compute or acquire data?
• There is a critical need for uncertainty.

### Uncertainty Quantification

Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known.

• Interaction between physical and virtual worlds of major interest.
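A minimal sketch of the UQ idea above, assuming a toy simulator (projectile range) whose input speed is not exactly known; Monte Carlo sampling turns input uncertainty into output uncertainty:

```python
import numpy as np

# Toy "simulator": range of a projectile as a function of launch speed.
def simulator(v, angle=np.pi / 4, g=9.81):
    return v**2 * np.sin(2 * angle) / g

rng = np.random.default_rng(1)
v_samples = rng.normal(20.0, 1.0, 100_000)  # speed is not exactly known
ranges = simulator(v_samples)

# Quantify how likely certain outcomes are given the input uncertainty.
print(ranges.mean(), ranges.std())   # output mean and spread
print(np.mean(ranges > 45.0))        # probability of exceeding 45 m
```

In real UQ settings the simulator is expensive, which is why a cheap emulator is placed between the sampling loop and the simulator.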

### Contrast

• Simulation is also used in reinforcement learning.
• There it is known as data augmentation.
• A newer approach, similar in spirit, but it typically ignores uncertainty.

### Example: Formula One Racing

• Designing an F1 Car requires CFD, Wind Tunnel, Track Testing etc.

• How to combine them?

### Car Dynamics

$\mathbf{x}_{t+1} = f(\mathbf{x}_{t}, \mathbf{u}_{t})$

where $\mathbf{u}_t$ is the action force and $\mathbf{x}_t = (p_t, v_t)$ is the vehicle state.

### Policy

• Assume the policy is linear with parameters $\boldsymbol{\theta}$

$\pi(\mathbf{x}, \boldsymbol{\theta}) = \theta_0 + \theta_p p + \theta_v v.$
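The linear policy above, written out as code (the parameter values are illustrative):

```python
import numpy as np

# action = theta_0 + theta_p * p + theta_v * v
def policy(state, theta):
    p, v = state  # position and velocity
    return theta[0] + theta[1] * p + theta[2] * v

theta = np.array([0.0, 1.0, -0.5])  # illustrative parameter values
print(policy((2.0, 4.0), theta))    # 0.0 + 1.0*2.0 + (-0.5)*4.0 = 0.0
```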

### Emulate the Mountain Car

• The goal is to find $\theta$ such that

$\theta^* = \arg\max_{\theta} R_T(\theta).$

• Reward is 100 for reaching the target, minus the squared sum of actions.
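A crude sketch of the search for $\theta^*$, assuming a stand-in quadratic reward in place of the true $R_T(\theta)$, which would require rolling out full episodes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for R_T(theta): hypothetical surrogate with a known maximiser
# at (1, -2, 3); the real reward comes from mountain-car rollouts.
def reward(theta):
    return 100.0 - np.sum((theta - np.array([1.0, -2.0, 3.0]))**2)

# Crude random search for theta* = argmax_theta R_T(theta).
candidates = rng.uniform(-5, 5, size=(20_000, 3))
best = candidates[np.argmax([reward(t) for t in candidates])]
print(best)  # near (1, -2, 3)
```

The 20,000 evaluations here illustrate why direct search on the simulator is so expensive, and why the emulation approach below matters.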

### Data Efficient Emulation

• Standard Bayesian optimization ignored the dynamics of the car.

• For more data efficiency, first emulate the dynamics.

• Then do Bayesian optimization using the emulator.

• Use a Gaussian process to model $\Delta v_{t+1} = v_{t+1} - v_{t}$ and $\Delta p_{t+1} = p_{t+1} - p_{t}$.

• Two processes, one with mean $v_{t}$, one with mean $p_{t}$.
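A minimal Gaussian-process-regression sketch of the emulation step, assuming a toy one-dimensional dynamics function in place of the real state change (a real emulator would condition on position, velocity, and action):

```python
import numpy as np

# Squared-exponential (RBF) covariance between two sets of 1-D inputs.
def rbf(A, B, lengthscale=0.5):
    d2 = (A[:, None] - B[None, :])**2
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(3)
v_train = rng.uniform(-1, 1, 30)
dv_train = 0.1 * np.sin(3 * v_train)   # observed Delta v (toy dynamics)

# GP regression: solve K alpha = y, predict with cross-covariances.
K = rbf(v_train, v_train) + 1e-6 * np.eye(30)
alpha = np.linalg.solve(K, dv_train)

v_test = np.array([0.25])
mean = rbf(v_test, v_train) @ alpha     # GP posterior mean of Delta v
print(mean, 0.1 * np.sin(3 * 0.25))     # prediction vs. truth
```

In practice a library such as GPy handles the kernel hyperparameters and predictive variances; this sketch only shows the posterior mean.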

### Emulator Training

• Used 500 randomly selected points to train the emulators.

• Can make the process more efficient through experimental design.

### Data Efficiency

• Our emulator used only 500 calls to the simulator.

• Optimizing the simulator directly required 37,500 calls to the simulator.

### Best Controller using Emulator of Dynamics

500 calls to the simulator vs 37,500 calls to the simulator

### Multi-Fidelity Emulation

Linear multi-fidelity model:

$f_i\left(\mathbf{x}\right) = \rho\, f_{i-1}\left(\mathbf{x}\right) + \delta_i\left(\mathbf{x}\right)$

Nonlinear multi-fidelity model:

$f_i\left(\mathbf{x}\right) = g_i\left(f_{i-1}\left(\mathbf{x}\right)\right) + \delta_i\left(\mathbf{x}\right)$
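The linear multi-fidelity relation can be sketched in a few lines, assuming toy low- and high-fidelity simulators; here $\rho$ and a linear $\delta$ are recovered by least squares, where Emukit's `GPyLinearMultiFidelityModel` would instead place a GP over $\delta$:

```python
import numpy as np

# Toy simulators that differ by a scale rho and a bias term delta(x).
def f_low(x):
    return np.sin(8 * x)

def f_high(x):
    return 1.5 * np.sin(8 * x) + 0.2 * x   # rho = 1.5, delta(x) = 0.2 x

x = np.linspace(0, 1, 50)
y_low, y_high = f_low(x), f_high(x)

# Fit f_high(x) ~ rho * f_low(x) + slope * x + bias by least squares
# on paired evaluations of the two fidelities.
A = np.stack([y_low, x, np.ones_like(x)], axis=1)
rho, slope, bias = np.linalg.lstsq(A, y_high, rcond=None)[0]
print(rho, slope)  # approximately 1.5 and 0.2
```

The payoff is in the next slide: cheap low-fidelity runs carry most of the signal, so far fewer expensive high-fidelity runs are needed.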

### Best Controller with Multi-Fidelity Emulator

250 observations of the high-fidelity simulator and 250 of the low-fidelity simulator

### Emukit Software

• Multi-fidelity emulation: build surrogate models from multiple sources of information;
• Bayesian optimisation: optimise physical experiments and tune the parameters of ML algorithms;
• Experimental design/active learning: design experiments and perform active learning with ML models;
• Sensitivity analysis: analyse the influence of inputs on the outputs;
• Bayesian quadrature: compute integrals of functions that are expensive to evaluate.

### MxFusion

Work by Eric Meissner and Zhenwen Dai. Probabilistic programming. Available on GitHub.

### MxFusion

• Targeted at challenges we face in emulation.
• Composition of Gaussian processes (Deep GPs)
• Combining GPs with neural networks.
• Example PPCA Tutorial.

### Why another framework?

Existing libraries had either:

• probabilistic modelling with rich, flexible models and universal inference, or
• specialized, efficient inference over a subset of models.

We needed both

### Key Requirements

• Integration with deep learning
• Flexibility
• Scalability
• Support for specialized inference and models
• Bayesian Deep Learning methods
• Rapid prototyping and software re-use
• GPUs, specialized inference methods

### Modularity

• Specialized Inference
• Composability (tinkerability)
• Better leveraging of expert knowledge

(Diagram: modelling and inference components.)

• Variable
• Function
• Distribution

### Example

```python
from mxfusion import Model, Variable
from mxfusion.components.variables import PositiveTransformation
from mxfusion.components.distributions import Normal

m = Model()
m.mu = Variable()
m.s = Variable(transformation=PositiveTransformation())
m.Y = Normal.define_variable(mean=m.mu, variance=m.s)
```

• Variable
• Distribution
• Function

### 2 primary methods for models

• log_pdf
• draw_samples

### Inference: Two Classes

• Variational Inference
• MCMC Sampling (soon)

Built on MXNet Gluon (imperative code, not static graph).

### Example

```python
from mxfusion.inference import GradBasedInference, MAP

infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.Y]))
infr.run(Y=data)
```

### Modules

• Model + Inference together form building blocks.
• Modular modelling with universal inference alone doesn’t really scale; specialized modelling objects such as non-parametrics need specialized inference methods.

### Enhancements MXFusion Brings

• Use Monte Carlo integration instead of moment estimation
• Use automatic differentiation
• A flexible interface for Gaussian processes, making it trivial to switch to sparse or stochastic variational approximations
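To illustrate the first enhancement, a sketch contrasting moment estimation with Monte Carlo integration for a nonlinear function of a Gaussian variable (the function here is purely illustrative):

```python
import numpy as np

# Estimate E[f(x)] for x ~ N(0, 1) with a nonlinear f, here f(x) = exp(x).
rng = np.random.default_rng(4)
f = np.exp

naive = f(0.0)                               # moment approach: f(E[x]) = 1
mc = f(rng.normal(0, 1, 1_000_000)).mean()   # Monte Carlo integration

# True value is E[exp(x)] = exp(0.5) ≈ 1.6487: the moment approach
# misses it badly, while the Monte Carlo estimate converges to it.
print(naive, mc)
```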

### Preparation

```
pip install mxnet mxfusion gym
```

Set the global configuration.

### Fit the Dynamics Model

• Dynamics:

$p(y_{t+1} \mid y_t, a_t)$

### Policy

• Make use of a neural network with one hidden layer
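A sketch of such a policy with illustrative random weights (a real policy would be trained against the emulated dynamics):

```python
import numpy as np

# One-hidden-layer policy network: state (position, velocity) -> action.
rng = np.random.default_rng(5)
W1, b1 = rng.normal(size=(16, 2)), np.zeros(16)   # hidden layer, 16 units
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)    # output layer

def policy(state):
    h = np.tanh(W1 @ state + b1)   # hidden activations
    return np.tanh(W2 @ h + b2)    # action squashed to [-1, 1]

action = policy(np.array([0.3, -0.1]))
print(action)
```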

### After First Episode

Policy after the first episode (random exploration):

### After Fifth Episode

Policy after the 5th episode:

### Contribute!

https://github.com/amzn/mxfusion

### Future plans

• Deep GPs (implemented, not yet merged)
• MCMC Methods
• Time series models (RGPs)

### Long term Aim

• Simulate/Emulate the components of the system.
• Validate against the real world using multi-fidelity.
• Interpret system using e.g. sensitivity analysis.
• Perform end to end learning to optimize.
• Maintain interpretability.

### References

Deisenroth, M.P., Rasmussen, C.E., 2011. PILCO: A model-based and data-efficient approach to policy search, in: Proceedings of the 28th International Conference on Machine Learning (ICML).

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L., 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115, 211–252. https://doi.org/10.1007/s11263-015-0816-y