Uncertainty in Loss Functions
Abstract
Bayesian formalisms deal with uncertainty in parameters; frequentist formalisms deal with the risk of a data set, that is, uncertainty in the data sample. In this talk, we consider uncertainty in the loss function. We introduce uncertainty through linear weightings of terms in the loss function and show how a distribution over the loss can be maintained through the maximum entropy principle. This allows us to minimize the expected loss under our maximum entropy distribution of the loss function. We recover weighted least squares and a LOESS-like regression from the formalism.
Introduction [edit]
What is Machine Learning? [edit]
What is machine learning? At its most basic level machine learning is a combination of
$$\text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}$$
where data is our observations. They can be actively or passively acquired (meta-data). The model contains our assumptions, based on previous experience. That experience can be other data, it can come from transfer learning, or it can merely be our beliefs about the regularities of the universe. In humans our models include our inductive biases. The prediction is an action to be taken or a categorization or a quality score. The reason that machine learning has become a mainstay of artificial intelligence is the importance of predictions in artificial intelligence. The data and the model are combined through computation.
In practice we normally perform machine learning using two functions. To combine data with a model we typically make use of:
a prediction function a function which is used to make the predictions. It includes our beliefs about the regularities of the universe, our assumptions about how the world works, e.g. smoothness, spatial similarities, temporal similarities.
an objective function a function which defines the cost of misprediction. Typically it includes knowledge about the world’s generating processes (probabilistic objectives) or the costs we pay for mispredictions (empirical risk minimization).
The combination of data and model through the prediction function and the objective function leads to a learning algorithm. The class of prediction functions and objective functions we can make use of is restricted by the algorithms they lead to. If the prediction function or the objective function are too complex, then it can be difficult to find an appropriate learning algorithm. Much of the academic field of machine learning is the quest for new learning algorithms that allow us to bring different types of models and data together.
A useful reference for state of the art in machine learning is the UK Royal Society Report, Machine Learning: Power and Promise of Computers that Learn by Example.
You can also check my blog post on What is Machine Learning?
Artificial vs Natural Systems [edit]
Let’s take a step back from artificial intelligence, and consider natural intelligence. Or even more generally, let’s consider the contrast between an artificial system and a natural system. The key difference between the two is that artificial systems are designed whereas natural systems are evolved.
Systems design is a major component of all Engineering disciplines. The details differ, but there is a single common theme: achieve your objective with the minimal use of resources to do the job. That provides efficiency. The engineering designer imagines a solution that requires the minimal set of components to achieve the result. A water pump has one route through the pump. That minimises the number of components needed. Redundancy is introduced only in safety critical systems, such as aircraft control systems. Students of biology, however, will be aware that in nature system-redundancy is everywhere. Redundancy leads to robustness. For an organism to survive in an evolving environment it must first be robust, then it can consider how to be efficient. Indeed, organisms that evolve to be too efficient at a particular task, like those that occupy a niche environment, are particularly vulnerable to extinction.
This notion is akin to the idea that only the best will survive, popularly encoded into a notion of evolution by Herbert Spencer’s quote.
Survival of the fittest
Herbert Spencer, 1864
Darwin himself never said “Survival of the Fittest”; he talked about evolution by natural selection.
Non-survival of the non-fit
Evolution is better described as “non-survival of the non-fit”. You don’t have to be the fittest to survive, you just need to avoid the pitfalls of life. This is the first priority.
So it is with natural vs artificial intelligences. Any natural intelligence that was not robust to changes in its external environment would not survive, and therefore not reproduce. In contrast the artificial intelligences we produce are designed to be efficient at one specific task: control, computation, playing chess. They are fragile.
The first rule of a natural system is not to be intelligent, it is “don’t be stupid”.
A mistake we make in the design of our systems is to equate fitness with the objective function, and to assume it is known and static. In practice, a real environment would have an evolving fitness function which would be unknown at any given time.
You can also read this blog post on Natural and Artificial Intelligence.
The first criterion of a natural intelligence is don’t fail, not because it has a will or intent of its own, but because if it had failed it wouldn’t have stood the test of time. It would no longer exist. In contrast, the mantra for artificial systems is to be more efficient. Our artificial systems are often given a single objective (in machine learning it is encoded in a mathematical function) and they aim to achieve that objective efficiently. These are different characteristics. Even if we wanted to incorporate don’t fail in some form, it is difficult to design for. To design for “don’t fail”, you have to consider every which way in which things can go wrong, if you miss one you fail. These cases are sometimes called corner cases. But in a real, uncontrolled environment, almost everything is a corner. It is difficult to imagine everything that can happen. This is why most of our automated systems operate in controlled environments, for example in a factory, or on a set of rails. Deploying automated systems in an uncontrolled environment requires a different approach to systems design. One that accounts for uncertainty in the environment and is robust to unforeseen circumstances.
Today’s Artificial Systems
The systems we produce today only work well when their tasks are pigeonholed, bounded in their scope. To achieve robust artificial intelligences we need new approaches to both the design of the individual components, and the combination of components within our AI systems. We need to deal with uncertainty and increase robustness. Today, it is easy to make a fool of an artificially intelligent agent; technology needs to address the challenge of the uncertain environment to achieve robust intelligences.
However, even if we find technological solutions for these challenges, it may be that the essence of human intelligence remains out of reach. It may be that the most quintessential element of our intelligence is defined by limitations. Limitations that computers have never experienced.
Claude Shannon developed the idea of information theory: the mathematics of information. He defined the amount of information we gain when we learn the result of a coin toss as a “bit” of information. A typical computer can communicate with another computer with a billion bits of information per second. Equivalent to a billion coin tosses per second. So how does this compare to us? Well, we can also estimate the amount of information in the English language. Shannon estimated that the average English word contains around 12 bits of information, twelve coin tosses. This means our verbal communication rates are only on the order of tens to hundreds of bits per second. Computers communicate tens of millions of times faster than us; in relative terms we are constrained to a bit of pocket money, while computers are corporate billionaires.
Our intelligence is not an island, it interacts, it infers the goals or intent of others, it predicts our own actions and how we will respond to others. We are social animals, and together we form a communal intelligence that characterises our species. For intelligence to be communal, our ideas must be shared somehow. We need to overcome this bandwidth limitation. The ability to share and collaborate, despite such constrained ability to communicate, characterises us. We must intellectually commune with one another. We cannot communicate all of what we saw, or the details of how we are about to react. Instead, we need a shared understanding. One that allows us to infer each other’s intent through context and a common sense of humanity. This characteristic is so strong that we anthropomorphise any object with which we interact. We apply moods to our cars, our cats, our environment. We seed the weather, volcanoes, trees with intent. Our desire to communicate renders us intellectually animist.
But our limited bandwidth doesn’t constrain us in our imaginations. Our consciousness, our sense of self, allows us to play out different scenarios. To internally observe how our self interacts with others. To learn from an internal simulation of the wider world. Empathy allows us to understand others’ likely responses without having the full detail of their mental state. We can infer their perspective. Self-awareness also allows us to understand our own likely future responses, to look forward in time, play out a scenario. Our brains contain a sense of self and a sense of others. Because our communication cannot be complete it is both contextual and cultural. When driving a car in the UK a flash of the lights at a junction concedes the right of way and invites another road user to proceed, whereas in Italy, the same flash asserts the right of way and warns another road user to remain.
Our main intelligence is our social intelligence, intelligence that is dedicated to overcoming our bandwidth limitation. We are individually complex, but as a society we rely on shared behaviours and oversimplification of our selves to remain coherent.
This nugget of our intelligence seems impossible for a computer to recreate directly, because it is a consequence of our evolutionary history. The computer, on the other hand, was born into a world of data, of high bandwidth communication. It was not there through the genesis of our minds and the cognitive compromises we made are lost to time. To be a truly human intelligence you need to have shared that journey with us.
Of course, none of this prevents us emulating those aspects of human intelligence that we observe in humans. We can form those emulations based on data. But even if an artificial intelligence can emulate humans to a high degree of accuracy it is a different type of intelligence. It is not constrained in the way human intelligence is. You may ask does it matter? Well, it is certainly important to us in many domains that there’s a human pulling the strings. Even in pure commerce it matters: the narrative story behind a product is often as important as the product itself. Handmade goods attract a price premium over factory made. Or alternatively in entertainment: people pay more to go to a live concert than for streaming music over the internet. People will also pay more to go to see a play in the theatre rather than a movie in the cinema.
In many respects I object to the use of the term Artificial Intelligence. It is poorly defined and means different things to different people. But there is one way in which the term is very accurate. The term artificial is appropriate in the same way we can describe a plastic plant as an artificial plant. It is often difficult to pick out from afar whether a plant is artificial or not. A plastic plant can fulfil many of the functions of a natural plant, and plastic plants are more convenient. But they can never replace natural plants.
In the same way, our natural intelligence is an evolved thing of beauty, a consequence of our limitations. Limitations which don’t apply to artificial intelligences and can only be emulated through artificial means. Our natural intelligence, just like our natural landscapes, should be treasured and can never be fully replaced.
Uncertainty in models is handled by Bayesian inference; here we consider uncertainty arising in loss functions.
Consider a loss function which decomposes across individual observations, $\dataScalar_{k,j}$, each of which is dependent on some set of features, $\inputVector_k$.
$$
\errorFunction(\dataVector, \inputMatrix) = \sum_{k}\sum_{j}
L(\dataScalar_{k,j}, \inputVector_k)
$$
Assume that the loss function depends on the features through some mapping function, $\mappingFunction_j(\cdot)$ which we call the prediction function.
$$
\errorFunction(\dataVector, \inputMatrix) = \sum_{k}\sum_{j} L(\dataScalar_{k,j},
\mappingFunction_j(\inputVector_k))
$$
Without loss of generality, we can move the index to the inputs, so we have $\inputVector_i = \left[\inputVector_k \quad j\right]$, and we set $\dataScalar_i = \dataScalar_{k, j}$. So we have
$$
\errorFunction(\dataVector, \inputMatrix) = \sum_{i} L(\dataScalar_i, \mappingFunction(\inputVector_i))
$$
Bayesian inference considers uncertainty in $\mappingFunction$, often through parameterizing it, $\mappingFunction(\inputVector; \parameterVector)$, and considering a prior distribution for the parameters, $p(\parameterVector)$, this in turn implies a distribution over functions, $p(\mappingFunction)$. Process models, such as Gaussian processes specify this distribution, known as a process, directly.
Bayesian inference proceeds by specifying a likelihood which relates the data, $\dataScalar$, to the parameters. Here we choose not to do this; instead we only consider the loss function for our objective. The loss is the cost we pay for a misprediction.
The risk function is the expectation of the loss under the distribution of the data. Here we are using the framework of empirical risk minimization, because we have a sample-based approximation. The new expectation we are considering is over the loss function itself, not over the uncertainty in the data.
The loss function and the log likelihood may take a mathematically similar form but they are philosophically very different. The log likelihood assumes something about the generating function of the data, whereas the loss function assumes something about the cost we pay. Importantly the loss function in Bayesian inference only normally enters at the point of decision.
The key idea in Bayesian inference is that probabilistic inference can be performed without knowing the loss because, if the model is correct, the form of the loss function is irrelevant when performing inference. In practice, however, for real data sets the model is almost never correct.
Some of the maths below looks similar to the maths we find in Bayesian methods, in particular variational Bayes, but that is merely a consequence of the availability of analytical mathematics. There are only particular ways of developing tractable algorithms; one route involves linear algebra. However, the similarity of the mathematics belies a difference in interpretation. It is similar to travelling a road (e.g. Ermine Street) in a wild landscape. We travel together because that is where efficient progress is to be made, but in practice our destinations (Lincoln, York) may be different.
Introduce Uncertainty
To introduce uncertainty we consider a weighted version of the loss function, we introduce positive weights, $\left\{ \scaleScalar_i\right\}_{i=1}^\numData$.
$$
\errorFunction(\dataVector, \inputMatrix) = \sum_{i}
\scaleScalar_i L(\dataScalar_i, \mappingFunction(\inputVector_i))
$$
We now assume that these weights are drawn from a distribution, $q(\scaleScalar)$. Instead of looking to minimize the loss directly, we look at the expected loss under this distribution.
$$
\begin{align*}
\errorFunction(\dataVector, \inputMatrix) = & \sum_{i}\expectationDist{\scaleScalar_i L(\dataScalar_i, \mappingFunction(\inputVector_i))}{q(\scaleScalar)} \\
= & \sum_{i}\expectationDist{\scaleScalar_i}{q(\scaleScalar)} L(\dataScalar_i, \mappingFunction(\inputVector_i))
\end{align*}
$$
We will assume that our process, $q(\scaleScalar)$, can depend on a variety of inputs, such as $\dataVector$, $\inputMatrix$ and time, $t$.
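To make the expected weighted loss concrete, here is a tiny numerical sketch; the losses and weight expectations are illustrative values, not quantities computed from any model.

import numpy as np

losses = np.array([0.1, 2.5, 0.3])            # illustrative per-point losses L_i
s_expectation = np.array([0.9, 0.3, 0.8])     # illustrative expectations <s_i> under q(s)
expected_loss = np.sum(s_expectation*losses)  # weights enter the expected loss linearly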
Principle of Maximum Entropy
To maximize uncertainty in $q(\scaleScalar)$ we maximize its entropy. Following Jaynes’ formalism of maximum entropy, in the continuous space we do this with respect to an invariant measure,
$$
H(\scaleScalar)= - \int q(\scaleScalar) \log \frac{q(\scaleScalar)}{m(\scaleScalar)} \text{d}\scaleScalar
$$
and, since we are minimizing the loss, we balance the two objectives by adding the negative entropy to the expected loss, forming
$$
\begin{align*}
\errorFunction = & \beta\sum_{i}\expectationDist{\scaleScalar_i }{q(\scaleScalar)}L(\dataScalar_i, \mappingFunction(\inputVector_i)) - H(\scaleScalar)\\
&= \beta\sum_{i}\expectationDist{\scaleScalar_i }{q(\scaleScalar)}L(\dataScalar_i, \mappingFunction(\inputVector_i)) + \int q(\scaleScalar) \log \frac{q(\scaleScalar)}{m(\scaleScalar)}\text{d}\scaleScalar
\end{align*}
$$
where $\beta$ serves to weight the relative contribution of the entropy term and the loss term.
We can now minimize this modified loss with respect to the density $q(\scaleScalar)$; free-form optimization (taking the functional derivative with respect to $q(\scaleScalar)$, setting it to zero and normalizing) leads to
$$
\begin{align*}
q(\scaleScalar) \propto & \exp\left(- \beta \sum_{i=1}^\numData \scaleScalar_i L(\dataScalar_i, \mappingFunction(\inputVector_i)) \right) m(\scaleScalar)\\
\propto & \prod_{i=1}^\numData \exp\left(- \beta \scaleScalar_i L(\dataScalar_i, \mappingFunction(\inputVector_i)) \right) m(\scaleScalar)
\end{align*}
$$
Example
Assume
$$
m(\scaleScalar) = \prod_i \lambda\exp\left(-\lambda\scaleScalar_i\right)
$$
which is the maximum entropy distribution for a positive random variable with a given mean (here $\lambda^{-1}$). Then we have
$$
q(\scaleScalar) = \prod_i q(\scaleScalar_i)
$$
$$
q(\scaleScalar_i) = (\lambda+\beta L_i) \exp\left(-(\lambda+\beta L_i) \scaleScalar_i\right),
$$
where $L_i$ is shorthand for the loss on the $i$th point, $L(\dataScalar_i, \mappingFunction(\inputVector_i))$,
and we can compute
$$
\expectationDist{\scaleScalar_i}{q(\scaleScalar)} =
\frac{1}{\lambda + \beta L_i}
$$
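As a quick numerical sketch (with hypothetical values for $\lambda$, $\beta$ and the losses), the expected weights can be computed directly; note how points with large loss are downweighted, which is where the robustness of the formalism comes from.

import numpy as np

lambd = 1.0                                # rate of the exponential measure
beta = 1.0                                 # weight on the loss term
losses = np.array([0.1, 2.5, 0.3])         # hypothetical per-point losses L_i
s_expectation = 1.0/(lambd + beta*losses)  # <s_i> = 1/(lambda + beta*L_i)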
Coordinate Descent
We can minimize with respect to $q(\scaleScalar)$, recovering
$$
q(\scaleScalar_i) = (\lambda+\beta L_i) \exp\left(-(\lambda+\beta L_i) \scaleScalar_i\right),
$$
allowing us to compute the expectation of $\scaleScalar$,
$$
\expectationDist{\scaleScalar_i}{q(\scaleScalar_i)} = \frac{1}{\lambda+\beta
L_i}
$$
Then we can minimize our expected loss with respect to $\mappingFunction(\cdot)$,
$$
\beta \sum_{i=1}^\numData \expectationDist{\scaleScalar_i}{q(\scaleScalar_i)} L(\dataScalar_i, \mappingFunction(\inputVector_i))
$$
If the loss is the squared loss, then this is recognised as a reweighted least squares algorithm. However, the loss can be of any form as long as $q(\scaleScalar)$ defined above exists.
In addition to the above, in our example below we update $\beta$ to normalize the expected loss to be $\numData$ at each iteration, so we have
$$
\beta = \frac{\numData}{\sum_{i=1}^\numData \expectationDist{\scaleScalar_i}{q(\scaleScalar_i)} L(\dataScalar_i, \mappingFunction(\inputVector_i))}
$$
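Pulling these updates together, the following is a minimal sketch of the coordinate descent for the squared loss in plain numpy; it is not the mlai-based implementation used below, and Phi (the design matrix) and y (the targets) are assumed given.

import numpy as np

def fit_weighted(Phi, y, lambd=1.0, beta=1.0, iters=30):
    """Sketch of the coordinate descent for squared loss.
    Phi: (n, m) design matrix, y: (n,) targets."""
    s = np.ones_like(y)                   # initial expected weights <s_i>
    for _ in range(iters):
        r = np.sqrt(s)
        # Minimize the expected loss over w: a weighted least squares solve.
        w, *_ = np.linalg.lstsq(Phi*r[:, None], y*r, rcond=None)
        losses = (y - Phi @ w)**2         # per-point losses L_i
        beta = len(y)/np.sum(s*losses)    # normalize expected loss to n
        s = 1.0/(lambd + beta*losses)     # <s_i> = 1/(lambda + beta*L_i)
    return w, s, beta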
Olympic Marathon Data
The first thing we will do is load a standard data set for regression modelling. The data consists of the pace of Olympic Gold Medal Marathon winners for the Olympics from 1896 to present. First we load in the data and plot.
import numpy as np
import matplotlib.pyplot as plt
import pods
import mlai
import teaching_plots as plot

data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']
offset = y.mean()
scale = np.sqrt(y.var())
xlim = (1875,2030)
ylim = (2.5, 6.5)
yhat = (y-offset)/scale
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
_ = ax.plot(x, y, 'r.',markersize=10)
ax.set_xlabel('year', fontsize=20)
ax.set_ylabel('pace min/km', fontsize=20)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure=fig,
                  filename='../slides/diagrams/datasets/olympic-marathon.svg',
                  transparent=True)
Things to notice about the data include the outlier in 1904; in that year the Olympics was held in St. Louis, USA. Organizational problems, and challenges with dust kicked up by the cars following the race, meant that participants got lost and only very few completed the course.
More recent years see more consistently quick marathons.
Example: Linear Regression
Create a weighted linear regression class, inheriting from the mlai.LM
class.
import scipy as sp
import scipy.linalg  # make sp.linalg available under the sp alias

class LML(mlai.LM):
"""Linear model with evolving loss
:param X: input values
:type X: numpy.ndarray
:param y: target values
:type y: numpy.ndarray
    :param basis: basis function
    :type basis: function
    :param beta: weight of the loss function
    :type beta: float"""
def __init__(self, X, y, basis=None, beta=1.0, lambd=1.0):
"Initialise"
if basis is None:
basis = mlai.basis(mlai.polynomial, number=2)
mlai.LM.__init__(self, X, y, basis)
        self.s = np.ones((self.num_data, 1))  # initial expected weights <s_i>
self.update_w()
self.sigma2 = 1/beta
self.beta = beta
self.name = 'LML_'+basis.function.__name__
self.objective_name = 'Weighted Sum of Square Training Error'
self.lambd = lambd
def update_QR(self):
"Perform the QR decomposition on the basis matrix."
self.Q, self.R = np.linalg.qr(self.Phi*np.sqrt(self.s))
def fit(self):
"""Minimize the objective function with respect to the parameters"""
for i in range(30):
            self.update_w() # as in the linear regression class
self.update_s()
def update_w(self):
self.update_QR()
self.w_star = sp.linalg.solve_triangular(self.R, np.dot(self.Q.T, self.y*np.sqrt(self.s)))
self.update_losses()
def predict(self, X):
"""Return the result of the prediction function."""
return np.dot(self.basis.Phi(X), self.w_star), None
def update_s(self):
"""Update the weights"""
self.s = 1/(self.lambd + self.beta*self.losses)
def update_losses(self):
"""Compute the loss functions for each data point."""
self.update_f()
self.losses = ((self.y-self.f)**2)
self.beta = 1/(self.losses*self.s).mean()
def objective(self):
"""Compute the objective function."""
self.update_losses()
return (self.losses*self.s).sum()
Set up a linear model (polynomial with two basis functions).
num_basis=2
data_limits=[1890, 2020]
basis = mlai.basis(mlai.polynomial, num_basis, data_limits=data_limits)
model = LML(x, y, basis=basis, lambd=1, beta=1)
model2 = mlai.LM(x, y, basis=basis)
x_test = np.linspace(data_limits[0], data_limits[1], 130)[:, None]
f_test, f_var = model.predict(x_test)
f2_test, f2_var = model2.predict(x_test)
import teaching_plots as plot
from matplotlib import rc, rcParams
rcParams.update({'font.size': 22})
rc('text', usetex=True)
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
ax.plot(x_test, f2_test, linewidth=3, color='r')
ax.plot(x, y, 'g.', markersize=10)
ax.set_xlim(data_limits[0], data_limits[1])
ax.set_xlabel('year')
ax.set_ylabel('pace min/km')
_ = ax.set_ylim(2, 6)
mlai.write_figure('../slides/diagrams/ml/olympic-loss-linear-regression000.svg', transparent=True)
ax.plot(x_test, f_test, linewidth=3, color='b')
ax.plot(x, y, 'g.', markersize=10)
ax2 = ax.twinx()
ax2.bar(x.flatten(), model.s.flatten(), width=2, color='b')
ax2.set_ylim(0, 4)
ax2.set_yticks([0, 1, 2])
ax2.set_ylabel(r'$\langle s_i \rangle$')
mlai.write_figure('../slides/diagrams/ml/olympic-loss-linear-regression001.svg', transparent=True)
import pods
pods.notebook.display_plots('olympic-loss-linear-regression{number:0>3}.svg',
directory='../slides/diagrams/ml', number=(0, 1))
Parameter Uncertainty
Classical Bayesian inference is concerned with parameter uncertainty, which equates to uncertainty in the prediction function, $\mappingFunction(\inputVector)$. The prediction function is normally an estimate of the value of $\dataScalar$ or constructs a probability density for $\dataScalar$.
Uncertainty in the prediction function can arise through uncertainty in our loss function, but also through uncertainty in parameters in the classical Bayesian sense. The full maximum entropy formalism would now be
$$
\expectationDist{\beta \scaleScalar_i L(\dataScalar_i,
\mappingFunction(\inputVector_i))}{q(\scaleScalar, \mappingFunction)} + \int
q(\scaleScalar, \mappingFunction) \log \frac{q(\scaleScalar,
\mappingFunction)}{m(\scaleScalar)m(\mappingFunction)}\text{d}\scaleScalar
\text{d}\mappingFunction
$$
$$
q(\mappingFunction, \scaleScalar) \propto
\prod_{i=1}^\numData \exp\left(- \beta \scaleScalar_i L(\dataScalar_i,
\mappingFunction(\inputVector_i)) \right) m(\scaleScalar)m(\mappingFunction)
$$
Approximation
Generally intractable, so assume:
$$
q(\mappingFunction, \scaleScalar) = q(\mappingFunction)q(\scaleScalar).
$$
Entropy maximization then proceeds as before, but with
$$ q(\scaleScalar) \propto \prod_{i=1}^\numData \exp\left(- \beta \scaleScalar_i \expectationDist{L(\dataScalar_i, \mappingFunction(\inputVector_i))}{q(\mappingFunction)} \right) m(\scaleScalar) $$
and
$$
q(\mappingFunction) \propto \prod_{i=1}^\numData \exp\left(- \beta \expectationDist{\scaleScalar_i}{q(\scaleScalar)} L(\dataScalar_i, \mappingFunction(\inputVector_i)) \right) m(\mappingFunction)
$$
We can now proceed by iterating between updates of $q(\scaleScalar)$ and $q(\mappingFunction)$. Below we implement this by extending the Bayesian linear model class, mlai.BLM.
class BLML(mlai.BLM):
"""Bayesian Linear model with evolving loss
:param X: input values
:type X: numpy.ndarray
:param y: target values
:type y: numpy.ndarray
    :param basis: basis function
    :type basis: function
    :param beta: weight of the loss function
    :type beta: float"""
def __init__(self, X, y, basis=None, alpha=1.0, beta=1.0, lambd=1.0):
"Initialise"
if basis is None:
basis = mlai.basis(mlai.polynomial, number=2)
mlai.BLM.__init__(self, X, y, basis=basis, alpha=alpha, sigma2=1/beta)
        self.s = np.ones((self.num_data, 1))  # initial expected weights <s_i>
self.update_w()
self.beta = beta
self.name = 'BLML_'+basis.function.__name__
self.objective_name = 'Weighted Sum of Square Training Error'
self.lambd = lambd
def update_QR(self):
"Perform the QR decomposition on the basis matrix."
self.Q, self.R = np.linalg.qr(np.vstack([self.Phi*np.sqrt(self.s), np.sqrt(self.sigma2/self.alpha)*np.eye(self.basis.number)]))
def fit(self):
"""Minimize the objective function with respect to the parameters"""
for i in range(30):
self.update_w()
self.update_s()
def update_w(self):
self.update_QR()
self.QTy = np.dot(self.Q[:self.y.shape[0], :].T, self.y*np.sqrt(self.s))
self.mu_w = sp.linalg.solve_triangular(self.R, self.QTy)
self.RTinv = sp.linalg.solve_triangular(self.R, np.eye(self.R.shape[0]), trans='T')
self.C_w = np.dot(self.RTinv, self.RTinv.T)
self.update_losses()
def update_s(self):
"""Update the weights"""
self.s = 1/(self.lambd + self.beta*self.losses)
def update_losses(self):
"""Compute the loss functions for each data point."""
self.update_f()
self.losses = ((self.y-self.f_bar)**2) + self.f_cov[:, np.newaxis]
self.beta = 1/(self.losses*self.s).mean()
self.sigma2=1/self.beta
model = BLML(x, y, basis=basis, alpha=1000, lambd=1, beta=1)
model2 = mlai.BLM(x, y, basis=basis, alpha=1000, sigma2=1)
x_test = np.linspace(data_limits[0], data_limits[1], 130)[:, None]
f_test, f_var = model.predict(x_test)
f2_test, f2_var = model2.predict(x_test)
import gp_tutorial

fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
from matplotlib import rc, rcParams
rcParams.update({'font.size': 22})
rc('text', usetex=True)
gp_tutorial.gpplot(x_test, f2_test, f2_test - 2*np.sqrt(f2_var), f2_test + 2*np.sqrt(f2_var), ax=ax, edgecol='r', fillcol='#CC3300')
ax.plot(x, y, 'g.', markersize=10)
ax.set_xlim(data_limits[0], data_limits[1])
ax.set_xlabel('year')
ax.set_ylabel('pace min/km')
_ = ax.set_ylim(2, 6)
mlai.write_figure('../slides/diagrams/ml/olympic-loss-bayes-linear-regression000.svg', transparent=True)
gp_tutorial.gpplot(x_test, f_test, f_test - 2*np.sqrt(f_var), f_test + 2*np.sqrt(f_var), ax=ax, edgecol='b', fillcol='#0033CC')
#ax.plot(x_test, f_test, linewidth=3, color='b')
ax.plot(x, y, 'g.', markersize=10)
ax2 = ax.twinx()
ax2.bar(x.flatten(), model.s.flatten(), width=2, color='b')
ax2.set_ylim(0, 0.2)
ax2.set_yticks([0, 0.1, 0.2])
ax2.set_ylabel(r'$\langle s_i \rangle$')
mlai.write_figure('../slides/diagrams/ml/olympic-loss-bayes-linear-regression001.svg', transparent=True)
import pods
pods.notebook.display_plots('olympic-loss-bayes-linear-regression{number:0>3}.svg',
directory='../slides/diagrams/ml', number=(0, 1))
Correlated Scales
Going beyond independence between the weights, we now consider $m(\vScalar)$ to be a Gaussian process, and scale by the square of $\vScalar$, $\scaleScalar=\vScalar^2$,
$$
\vScalar \sim \mathcal{GP}\left(\meanScalar(\inputVector), \kernel(\inputVector, \inputVector^\prime)\right)
$$
$$
q(\vScalar) \propto
\prod_{i=1}^\numData \exp\left(- \beta \vScalar_i^2 L(\dataScalar_i,
\mappingFunction(\inputVector_i)) \right)
\exp\left(-\frac{1}{2}(\vVector-\meanVector)^\top \kernelMatrix^{-1}
(\vVector-\meanVector)\right)
$$
where $\kernelMatrix$ is the covariance of the process made up of elements taken from the covariance function, $\kernelScalar(\inputVector, t, \dataVector; \inputVector^\prime, t^\prime, \dataVector^\prime)$ so $q(\vScalar)$ itself is Gaussian with covariance
$$
\covarianceMatrix = \left(\beta\mathbf{L} + \kernelMatrix^{-1}\right)^{-1}
$$
and mean
$$
\meanTwoVector = \beta\covarianceMatrix\mathbf{L}\meanVector
$$
where $\mathbf{L}$ is a diagonal matrix containing the losses, $L(\dataScalar_i, \mappingFunction(\inputVector_i))$, along its diagonal and zeros elsewhere.
The update is given by
$$
\expectationDist{\vScalar_i^2}{q(\vScalar)} = \meanTwoScalar_i^2 +
\covarianceScalar_{i, i}.
$$
To compare with before, if the mean of the measure $m(\vScalar)$ were zero and the prior covariance spherical, $\kernelMatrix=\lambda^{-1}\eye$, then this would equate to the update
$$
\expectationDist{\vScalar_i^2}{q(\vScalar)} = \frac{1}{\lambda + \beta L_i}
$$
which is the same as we had before for the exponential prior over $\scaleScalar$.
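A short numpy sketch of this update, following the equations above (K, mu and losses are assumed given; this is not the mlai implementation used below):

import numpy as np

def correlated_scale_update(K, mu, losses, beta=1.0):
    """Compute <v_i^2> under q(v) for a Gaussian process measure m(v).
    K: (n, n) prior covariance, mu: (n,) prior mean, losses: (n,) L_i."""
    L = np.diag(losses)                           # diagonal loss matrix
    C = np.linalg.inv(beta*L + np.linalg.inv(K))  # covariance of q(v)
    mu2 = beta*C @ L @ mu                         # mean of q(v), as above
    return mu2**2 + np.diag(C)                    # <v_i^2> = mu_i^2 + C_ii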
Conditioning the Measure
Now that we have defined a process over $\vScalar$, we can define a region in which we are certain we would like the weights to be high. For example, if we were looking to have a test point at location $\inputVector_\ast$, we could update our measure to be a Gaussian process conditioned on an observation of $\vScalar_\ast$ set appropriately at $\inputVector_\ast$. In this case we have,
$$
\kernelMatrix^\prime = \kernelMatrix - \frac{\kernelVector_\ast\kernelVector^\top_\ast}{\kernelScalar_{*,*}}
$$
and
$$
\meanVector^\prime = \meanVector + \frac{\kernelVector_\ast}{\kernelScalar_{*,*}}
(\vScalar_\ast-\meanScalar)
$$
where $\kernelVector_\ast$ is the vector computed through the covariance function between the training data $\inputMatrix$ and the proposed point that we are conditioning the scale upon, $\inputVector_\ast$, and $\kernelScalar_{*,*}$ is the covariance function computed for $\inputVector_\ast$. Now the updated mean and covariance can be used in the maximum entropy formulation as before.
$$
q(\vScalar) \propto \prod_{i=1}^\numData \exp\left(-
\beta \vScalar_i^2 L(\dataScalar_i, \mappingFunction(\inputVector_i)) \right)
\exp\left(-\frac{1}{2}(\vVector-\meanVector^\prime)^\top
\left.\kernelMatrix^\prime\right.^{-1} (\vVector-\meanVector^\prime)\right)
$$
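A sketch of the conditioning step, assuming the cross-covariances k_star and the variance k_starstar have already been computed from the covariance function (the function name and arguments here are illustrative):

import numpy as np

def condition_measure(K, mu, k_star, k_starstar, v_star, mu_star):
    """Condition the GP measure m(v) on v_star at x_star.
    K: (n, n) prior covariance, mu: (n,) prior mean,
    k_star: (n,) covariances between the training inputs and x_star,
    k_starstar: scalar variance at x_star, mu_star: measure mean at x_star."""
    K_prime = K - np.outer(k_star, k_star)/k_starstar
    mu_prime = mu + k_star*(v_star - mu_star)/k_starstar
    return K_prime, mu_prime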
We will consider the same data set as above. We first create a Gaussian process model for the update.
class GPL(mlai.GP):
def __init__(self, X, losses, kernel, beta=1.0, mu=0.0, X_star=None, v_star=None):
# Bring together locations
self.kernel = kernel
self.K = self.kernel.K(X)
self.mu = np.ones((X.shape[0],1))*mu
self.beta = beta
        if X_star is not None:
            kstar = kernel.K(X, X_star)
            kstarstar = kernel.K(X_star, X_star)
            kstarstarInv = np.linalg.inv(kstarstar)
            kskssInv = np.dot(kstar, kstarstarInv)
            # Condition the covariance: K' = K - k_* k_{*,*}^{-1} k_*^T
            self.K -= np.dot(kskssInv, kstar.T)
            if v_star is not None:
                # Condition the (constant) mean: mu' = mu + k_* k_{*,*}^{-1} (v_star - mu)
                self.mu = kskssInv*(v_star-self.mu) + self.mu
            else:
                raise ValueError("v_star should not be None when X_star is not None")
class BLMLGP(BLML):
def __init__(self, X, y, basis=None, kernel=None, beta=1.0, mu=0.0, alpha=1.0, X_star=None, v_star=None):
BLML.__init__(self, X, y, basis=basis, alpha=alpha, beta=beta, lambd=None)
self.gp_model=GPL(self.X, self.losses, kernel=kernel, beta=beta, mu=mu, X_star=X_star, v_star=v_star)
def update_s(self):
"""Update the weights"""
self.gp_model.C = sp.linalg.inv(sp.linalg.inv(self.gp_model.K+np.eye(self.X.shape[0])*1e-6) + self.beta*np.diag(self.losses.flatten()))
self.gp_model.diagC = np.diag(self.gp_model.C)[:, np.newaxis]
self.gp_model.f = self.gp_model.beta*np.dot(np.dot(self.gp_model.C,np.diag(self.losses.flatten())),self.gp_model.mu) +self.gp_model.mu
        # <s_i> = f_i^2 + C_ii under q(v)
        self.s = self.gp_model.f*self.gp_model.f + self.gp_model.diagC
model = BLMLGP(x, y,
basis=basis,
kernel=mlai.kernel(mlai.eq_cov, lengthscale=20, variance=1.0),
mu=0.0,
beta=1.0,
alpha=1000,
X_star=np.asarray([[2020]]),
v_star=np.asarray([[1]]))
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
ax.cla()
from matplotlib import rc, rcParams
rcParams.update({'font.size': 22})
rc('text', usetex=True)
gp_tutorial.gpplot(x_test, f2_test, f2_test - 2*np.sqrt(f2_var), f2_test + 2*np.sqrt(f2_var), ax=ax, edgecol='r', fillcol='#CC3300')
ax.plot(x, y, 'g.', markersize=10)
ax.set_xlim(data_limits[0], data_limits[1])
ax.set_xlabel('year')
ax.set_ylabel('pace min/km')
_ = ax.set_ylim(2, 6)
mlai.write_figure('../slides/diagrams/ml/olympic-gp-loss-bayes-linear-regression000.svg', transparent=True)
gp_tutorial.gpplot(x_test, f_test, f_test - 2*np.sqrt(f_var), f_test + 2*np.sqrt(f_var), ax=ax, edgecol='b', fillcol='#0033CC')
#ax.plot(x_test, f_test, linewidth=3, color='b')
ax.plot(x, y, 'g.', markersize=10)
ax2 = ax.twinx()
ax2.bar(x.flatten(), model.s.flatten(), width=2, color='b')
ax2.set_ylim(0, 3)
ax2.set_yticks([0, 0.5, 1])
ax2.set_ylabel(r'$\langle s_i \rangle$')
mlai.write_figure('../slides/diagrams/ml/olympic-gp-loss-bayes-linear-regression001.svg', transparent=True)
pods.notebook.display_plots('olympic-gp-loss-bayes-linear-regression{number:0>3}.svg',
directory='../slides/diagrams/ml', number=(0, 1))
Finally, we make an attempt to show the joint uncertainty by first of all sampling from the loss function weights density, $q(\scaleScalar)$.
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
num_samps=10
samps=np.random.multivariate_normal(model.gp_model.f.flatten(), model.gp_model.C, size=num_samps).T**2
ax.plot(x, samps, '-x', markersize=10, linewidth=2)
ax.set_xlim(data_limits[0], data_limits[1])
ax.set_xlabel('year')
_ = ax.set_ylabel('$s_i$')
mlai.write_figure('../slides/diagrams/ml/olympic-gp-loss-samples.svg', transparent=True)
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
ax.plot(x, y, 'r.', markersize=10)
ax.set_xlim(data_limits[0], data_limits[1])
ax.set_ylim(2, 6)
ax.set_xlabel('year')
ax.set_ylabel('pace min/km')
gp_tutorial.gpplot(x_test, f_test, f_test - 2*np.sqrt(f_var), f_test + 2*np.sqrt(f_var), ax=ax, edgecol='b', fillcol='#0033CC')
mlai.write_figure('../slides/diagrams/ml/olympic-gp-loss-bayes-linear-regression-and-samples000.svg', transparent=True)
allsamps = []
for i in range(samps.shape[1]):
model.s = samps[:, i:i+1]
model.update_w()
    f_bar, f_cov = model.predict(x_test, full_cov=True)
f_samp = np.random.multivariate_normal(f_bar.flatten(), f_cov, size=10).T
ax.plot(x_test, f_samp, linewidth=0.5, color='k')
allsamps+=list(f_samp[-1, :])
mlai.write_figure('../slides/diagrams/ml/olympic-gp-loss-bayes-linear-regression-and-samples001.svg', transparent=True)
import pods
pods.notebook.display_plots('olympic-gp-loss-bayes-linear-regression-and-samples{number:0>3}.svg',
directory='../slides/diagrams/ml', number=(0, 1))
fig, ax = plt.subplots(figsize=plot.big_figsize)
ax.hist(np.asarray(allsamps), bins=30, density=True)
_ = ax.set_xlabel('pace min/km')
mlai.write_figure('../slides/diagrams/ml/olympic-gp-loss-histogram-2020.svg', transparent=True)
Conclusions
- Maximum Entropy Framework for uncertainty in:
  - Loss functions
  - Prediction functions