Inducing Variable Demo
Setup
pods
In Sheffield we created a suite of software tools for ‘Open Data Science.’ Open data science is an approach to sharing code, models and data that should make it easier for companies, health professionals and scientists to gain access to data science techniques.
You can also check this blog post on Open Data Science.
The software can be installed using
%pip install --upgrade git+https://github.com/lawrennd/ods
from the command prompt where you can access your Python installation.
The code is also available on GitHub: https://github.com/lawrennd/ods
Once pods is installed, it can be imported in the usual manner.
import pods
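In these notes pods is mainly used for access to datasets. As a sketch of a typical call (the dataset name below is one example from the pods collection; the data are downloaded on first use, so network access is assumed):
import pods

# Download (on first use) and load one of the datasets pods provides.
data = pods.datasets.olympic_marathon_men()
print(data['X'].shape, data['Y'].shape)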
mlai
The mlai software is a suite of helper functions for teaching and demonstrating machine learning algorithms. It was first used in the Machine Learning and Adaptive Intelligence course in Sheffield in 2013.
The software can be installed using
%pip install --upgrade git+https://github.com/lawrennd/mlai.git
from the command prompt where you can access your Python installation.
The code is also available on GitHub: https://github.com/lawrennd/mlai
Once mlai is installed, it can be imported in the usual manner.
import mlai
GPy: A Gaussian Process Framework in Python
%pip install gpy
Gaussian processes are a flexible tool for non-parametric analysis with uncertainty. The GPy software was started in Sheffield to provide an easy-to-use interface to GPs, one which allows the user to focus on the modelling rather than the mathematics.
GPy is a BSD-licensed software code base for implementing Gaussian process models in Python. This allows GPs to be combined with a wide variety of software libraries.
The software itself is available on GitHub and the team welcomes contributions.
The aim for GPy is to be a probabilistic-style programming language, i.e. you specify the model rather than the algorithm. As well as a large range of covariance functions, the software allows for non-Gaussian likelihoods, multivariate outputs, dimensionality reduction and approximations for larger data sets.
The documentation for GPy can be found at https://gpy.readthedocs.io.
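To give a flavour of that model-first style, here is a minimal sketch of specifying and fitting a GPy regression model (the data are synthetic, generated purely for illustration):
import numpy as np
import GPy

# Specify the model: data, a covariance function and (implicitly) a Gaussian likelihood.
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(10*X) + 0.1*np.random.randn(20, 1)
kernel = GPy.kern.Matern52(input_dim=1)
model = GPy.models.GPRegression(X, y, kernel=kernel)
model.optimize()  # how the fit is computed is GPy's concern, not the user's
print(model)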
A Simple Regression Problem
Here we set up a simple one-dimensional regression problem. The input locations, \(\mathbf{X}\), are in two separate clusters. The response variable, \(\mathbf{y}\), is sampled from a Gaussian process with an exponentiated quadratic covariance.
import numpy as np
import GPy

np.random.seed(101)

N = 50 # number of data points
noise_var = 0.01 # observation noise variance

X = np.zeros((N, 1))
X[:25, :] = np.linspace(0,3,25)[:,None] # First cluster of inputs/covariates
X[25:, :] = np.linspace(7,10,25)[:,None] # Second cluster of inputs/covariates

# Sample response variables from a Gaussian process with exponentiated quadratic covariance.
k = GPy.kern.RBF(1)
y = np.random.multivariate_normal(np.zeros(N), k.K(X) + np.eye(N)*noise_var).reshape(-1,1)
First we perform a full Gaussian process regression on the data. We create a GP model, m_full, and fit it to the data, plotting the resulting fit.
m_full = GPy.models.GPRegression(X,y)
_ = m_full.optimize(messages=True) # Optimize parameters of covariance function
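The original notes rely on helper plotting code; as a stand-in, GPy models provide a built-in plot method (a sketch, assuming the matplotlib plotting backend is active):
import matplotlib.pyplot as plt

# Plot the full GP fit: posterior mean, credible region and the data.
m_full.plot()
plt.show()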
Now we set up the inducing variables, \(\mathbf{u}\). Each inducing variable has its own associated input index, \(\mathbf{Z}\), which lives in the same space as \(\mathbf{X}\). Here we are using the true covariance function parameters to generate the fit.
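By default GPy's SparseGPRegression optimizes the variational lower bound of Titsias (2009). Writing \(\mathbf{K}_{\mathbf{f}\mathbf{u}}\) for the cross-covariance between the training inputs and the inducing inputs, and \(\sigma^2\) for the noise variance, the bound on the log marginal likelihood is
\[\log p(\mathbf{y}\,|\,\mathbf{X}) \geq \log \mathcal{N}\left(\mathbf{y}\,|\,\mathbf{0},\ \mathbf{K}_{\mathbf{f}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\mathbf{K}_{\mathbf{u}\mathbf{f}} + \sigma^2\mathbf{I}\right) - \frac{1}{2\sigma^2}\text{tr}\left(\mathbf{K}_{\mathbf{f}\mathbf{f}} - \mathbf{K}_{\mathbf{f}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\mathbf{K}_{\mathbf{u}\mathbf{f}}\right),\]
so optimizing over \(\mathbf{Z}\) tightens the approximation.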
kern = GPy.kern.RBF(1)
Z = np.hstack(
        (np.linspace(2.5,4.,3),
        np.linspace(7,8.5,3)))[:,None]
m = GPy.models.SparseGPRegression(X,y,kernel=kern,Z=Z)
m.likelihood.variance = noise_var # fix the noise variance to the generating value
m.inducing_inputs.constrain_fixed() # hold Z fixed while fitting covariance parameters
display(m)
_ = m.optimize(messages=True)
display(m)
m.randomize()
m.inducing_inputs.unconstrain()
_ = m.optimize(messages=True)
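Having freed \(\mathbf{Z}\), it is worth checking where the optimized inducing inputs end up; a sketch, again assuming the matplotlib backend:
import matplotlib.pyplot as plt

# Inspect the learned inducing input locations and view the resulting fit.
print(m.inducing_inputs)
m.plot()
plt.show()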
Now we will vary the number of inducing points used to form the approximation.
m.num_inducing = 8
m.randomize()
M = 8
m.set_Z(np.random.rand(M,1)*12) # new random inducing inputs across the input range
_ = m.optimize(messages=True)
And we can compare the sparse model's bound on the log likelihood to that of the full model.
print(m.log_likelihood(), m_full.log_likelihood())
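The sparse objective is a lower bound on the full model's log marginal likelihood, so the first value printed should not exceed the second. A quick sanity check (sketch):
# The variational objective lower-bounds the full log marginal likelihood.
gap = np.asarray(m_full.log_likelihood()).item() - np.asarray(m.log_likelihood()).item()
print('bound gap (should be non-negative):', gap)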