Plotting Module

mlai.plot

Plotting utilities and visualization functions for the MLAI library.

This module provides a wide range of plotting functions for illustrating machine learning concepts, model fits, matrix visualizations, and more. It is designed to support both teaching and research by offering publication-quality figures and interactive visualizations.

Key features:

  • Matrix and covariance visualizations

  • Regression and classification plots

  • Model fit diagnostics (RMSE, holdout, cross-validation)

  • Neural network diagrams

  • Utility functions for figure generation

Dependencies:

  • numpy

  • matplotlib

  • (optional) daft, IPython, mpl_toolkits.mplot3d

Some functions expect models following the MLAI interface (e.g., LM, GP).

mlai.plot.pred_range(x, portion=0.2, points=200, randomize=False)[source]

Generate a range of prediction points based on the input array x.

Parameters:
  • x – Input array (1D or 2D, numeric).

  • portion – Fraction of the span to extend beyond min/max (default: 0.2).

  • points – Number of points in the generated range (default: 200).

  • randomize – If True, randomly shuffle the generated points (default: False).

Returns:

Numpy array of prediction points.
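
A minimal usage sketch (the input data below is synthetic and purely illustrative):

>>> import numpy as np
>>> import mlai.plot as plot
>>> # One-dimensional inputs as a column vector.
>>> x = np.linspace(1890, 2020, 27)[:, np.newaxis]
>>> # Extend 20% beyond the data span and generate 200 prediction points.
>>> x_pred = plot.pred_range(x, portion=0.2, points=200)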

mlai.plot.matrix(A, ax=None, bracket_width=3, bracket_style='square', type='values', colormap=None, highlight=False, highlight_row=None, highlight_col=None, highlight_width=3, highlight_color=[0, 0, 0], prec='.3', zoom=False, zoom_row=None, zoom_col=None, bracket_color=[0, 0, 0], fontsize=16)[source]

Plot a matrix with optional highlighting and custom brackets.

Parameters:
  • A – Matrix to plot (2D numpy array or list of lists).

  • ax – Matplotlib axis to draw the plot on (optional).

  • bracket_width – Width of the bracket lines (default: 3).

  • bracket_style – Style of brackets (‘square’ or ‘round’, default: ‘square’).

  • type – Display type (‘values’, ‘entries’, etc., default: ‘values’).

  • colormap – Colormap for matrix values (optional).

  • highlight – Whether to highlight a row/column (default: False).

  • highlight_row – Row to highlight (optional).

  • highlight_col – Column to highlight (optional).

  • highlight_width – Width of highlight lines (default: 3).

  • highlight_color – Color for highlights (default: black).

  • prec – String precision for values (default: ‘.3’).

  • zoom – Whether to zoom into a submatrix (default: False).

  • zoom_row – Row index for zoom (optional).

  • zoom_col – Column index for zoom (optional).

  • bracket_color – Color for brackets (default: black).

  • fontsize – Font size for text (default: 16).

Returns:

Matplotlib axis with the matrix plot.
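
A minimal sketch for displaying a small matrix (the values shown are arbitrary):

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import mlai.plot as plot
>>> A = np.array([[1.0, 0.8, 0.3],
...               [0.8, 1.0, 0.5],
...               [0.3, 0.5, 1.0]])
>>> fig, ax = plt.subplots(figsize=(4, 4))
>>> # Display the numeric entries with square brackets and two decimal places.
>>> ax = plot.matrix(A, ax=ax, bracket_style='square', type='values', prec='.2')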

mlai.plot.base_plot(K, ind=[0, 1], ax=None, contour_color=[0.0, 0.0, 1], contour_style='-', contour_size=4, contour_markersize=4, contour_marker='x', fontsize=20)[source]

Plot a base contour for a covariance matrix.

Parameters:
  • K – Covariance matrix (2D numpy array).

  • ind – Indices of variables to plot (default: [0, 1]).

  • ax – Matplotlib axis to draw the plot on (optional).

  • contour_color – Color for the contour (default: blue).

  • contour_style – Line style for the contour (default: ‘-').

  • contour_size – Line width for the contour (default: 4).

  • contour_markersize – Marker size for the contour (default: 4).

  • contour_marker – Marker style (default: ‘x’).

  • fontsize – Font size for labels (default: 20).

Returns:

Matplotlib axis with the contour plot.
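
A minimal sketch for a 2x2 covariance (the correlation value is illustrative):

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import mlai.plot as plot
>>> # Unit-variance covariance with correlation 0.7 between the two variables.
>>> K = np.array([[1.0, 0.7],
...               [0.7, 1.0]])
>>> fig, ax = plt.subplots(figsize=(5, 5))
>>> ax = plot.base_plot(K, ind=[0, 1], ax=ax)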

mlai.plot.covariance_capacity(rotate_angle=0.7853981633974483, lambda1=0.5, lambda2=0.3, diagrams='../diagrams/gp', fill_color=[1.0, 1.0, 0.0], black_color=[0.0, 0.0, 0.0], blue_color=[0.0, 0.0, 1.0], magenta_color=[1.0, 0.0, 1.0])[source]

Visualize the capacity of a covariance matrix by plotting its eigenvalues and eigenvectors.

Parameters:
  • rotate_angle – Angle to rotate the covariance ellipse (default: pi/4).

  • lambda1 – First eigenvalue (default: 0.5).

  • lambda2 – Second eigenvalue (default: 0.3).

  • diagrams – Directory to save the plot (default: ‘../diagrams/gp’).

  • fill_color – Fill color for the ellipse (default: yellow).

  • black_color – Color for axes and lines (default: black).

  • blue_color – Color for one eigenvector (default: blue).

  • magenta_color – Color for the other eigenvector (default: magenta).

mlai.plot.prob_diagram(fontsize=20, diagrams='../diagrams')[source]

Plot a diagram demonstrating marginal and joint probabilities.

Parameters:
  • fontsize – Font size to use in the plot (default: 20).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.bernoulli_urn(ax, diagrams='../diagrams')[source]

Plot the urn of Jacob Bernoulli’s analogy for the Bernoulli distribution.

Parameters:
  • ax – Matplotlib axis to draw the plot on.

  • diagrams – Directory to save the diagram (default: ‘../diagrams’).

mlai.plot.bayes_billiard(ax, diagrams='../diagrams')[source]

Plot a series of figures representing Thomas Bayes’ billiard table for the Bernoulli distribution representation.

Parameters:
  • ax – Matplotlib axis to draw the plot on.

  • diagrams – Directory to save the diagrams (default: ‘../diagrams’).

mlai.plot.hyperplane_coordinates(w, b, plot_limits)[source]

Helper function for plotting the decision boundary of the perceptron.

Parameters:
  • w – The weight vector for the perceptron.

  • b – The bias parameter for the perceptron.

  • plot_limits – Dictionary containing ‘x’ and ‘y’ plot limits.

Returns:

Tuple of (x0, x1) coordinates for the hyperplane line.
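
A minimal sketch (the weight vector, bias, and plot limits are illustrative; treating the plot_limits values as [min, max] ranges for each axis is an assumption):

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import mlai.plot as plot
>>> w = np.array([1.0, -0.5])  # perceptron weights
>>> b = 0.2                    # perceptron bias
>>> # Assumed format: each axis maps to its [min, max] range.
>>> plot_limits = {'x': np.array([-2.0, 2.0]), 'y': np.array([-2.0, 2.0])}
>>> # Coordinates of the decision boundary w'x + b = 0 within the limits.
>>> x0, x1 = plot.hyperplane_coordinates(w, b, plot_limits)
>>> _ = plt.plot(x0, x1, 'r-')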

mlai.plot.init_perceptron_plot(f, ax, x_plus, x_minus, w, b, fontsize=18)[source]

Initialize a plot for showing the perceptron decision boundary.

Parameters:
  • f – Matplotlib figure object.

  • ax – Array of matplotlib axes (should have 2 axes).

  • x_plus – Positive class data points (numpy array).

  • x_minus – Negative class data points (numpy array).

  • w – Weight vector for the perceptron.

  • b – Bias parameter for the perceptron.

  • fontsize – Font size for labels and titles (default: 18).

Returns:

Dictionary containing plot handles for updating.

mlai.plot.update_perceptron_plot(h, f, ax, x_plus, x_minus, i, w, b)[source]

Update plots after decision boundary has changed.

Parameters:
  • h – Dictionary containing plot handles from init_perceptron.

  • f – Matplotlib figure object.

  • ax – Array of matplotlib axes.

  • x_plus – Positive class data points.

  • x_minus – Negative class data points.

  • i – Current iteration number.

  • w – Updated weight vector.

  • b – Updated bias parameter.

mlai.plot.contour_error(x, y, m_center, c_center, samps=100, width=6.0)[source]

Generate error contour data for regression visualization.

Parameters:
  • x – Input data points.

  • y – Target values.

  • m_center – Center value for slope parameter.

  • c_center – Center value for intercept parameter.

  • samps – Number of samples for contour generation (default: 100).

  • width – Width of the parameter range (default: 6.0).

Returns:

Tuple of (m_vals, c_vals, E_grid) for contour plotting.

mlai.plot.regression_contour(f, ax, m_vals, c_vals, E_grid, fontsize=30)[source]

Plot regression error contours.

Parameters:
  • f – Matplotlib figure object.

  • ax – Matplotlib axis object.

  • m_vals – Slope parameter values.

  • c_vals – Intercept parameter values.

  • E_grid – Error values grid.

  • fontsize – Font size for labels (default: 30).
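
A sketch combining contour_error and regression_contour (the synthetic data and parameter centres are illustrative):

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import mlai.plot as plot
>>> # Noisy observations from a straight line y = 1.4*x - 3.1.
>>> x = np.linspace(0, 1, 30)[:, np.newaxis]
>>> y = 1.4*x - 3.1 + 0.1*np.random.randn(*x.shape)
>>> # Error surface over a grid of slope/intercept values.
>>> m_vals, c_vals, E_grid = plot.contour_error(x, y, m_center=1.4, c_center=-3.1)
>>> fig, ax = plt.subplots(figsize=(5, 5))
>>> plot.regression_contour(fig, ax, m_vals, c_vals, E_grid, fontsize=20)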

mlai.plot.init_regression(f, ax, x, y, m_vals, c_vals, E_grid, m_star, c_star, fontsize=20)[source]

Initialize regression visualization plots.

Parameters:
  • f – Matplotlib figure object.

  • ax – Array of matplotlib axes.

  • x – Input data points.

  • y – Target values.

  • m_vals – Slope parameter values.

  • c_vals – Intercept parameter values.

  • E_grid – Error values grid.

  • m_star – Optimal slope value.

  • c_star – Optimal intercept value.

  • fontsize – Font size for labels (default: 20).

Returns:

Dictionary containing plot handles for updating.

mlai.plot.update_regression(h, f, ax, m_star, c_star, iteration)[source]

Update regression plots during optimization.

Parameters:
  • h – Dictionary containing plot handles from init_regression.

  • f – Matplotlib figure object.

  • ax – Array of matplotlib axes.

  • m_star – Current optimal slope value.

  • c_star – Current optimal intercept value.

  • iteration – Current iteration number.

mlai.plot.update_regression_path(h, f, ax, m_star, c_star, iteration_text)[source]

Update regression plots during optimization with custom iteration text.

Parameters:
  • h – Dictionary containing plot handles from init_regression.

  • f – Matplotlib figure object.

  • ax – Array of matplotlib axes.

  • m_star – Current optimal slope value.

  • c_star – Current optimal intercept value.

  • iteration_text – Text to display for current iteration.

mlai.plot.regression_contour_fit(x, y, learn_rate=0.01, m_center=1.4, c_center=-3.1, m_star=0.0, c_star=-5.0, max_iters=1000, diagrams='../diagrams')[source]

Plot an evolving contour plot of regression optimisation.

Parameters:
  • x – Input data points.

  • y – Target values.

  • learn_rate – Learning rate for optimization (default: 0.01).

  • m_center – Center value for slope parameter (default: 1.4).

  • c_center – Center value for intercept parameter (default: -3.1).

  • m_star – Initial slope value (default: 0.0).

  • c_star – Initial intercept value (default: -5.0).

  • max_iters – Maximum number of iterations (default: 1000).

  • diagrams – Directory to save the plots (default: ‘../diagrams’).

Returns:

Number of frames generated.

mlai.plot.regression_contour_sgd(x, y, learn_rate=0.01, m_center=1.4, c_center=-3.1, m_star=0.0, c_star=-5.0, max_iters=4000, diagrams='../diagrams')[source]

Plot evolution of the solution of linear regression via SGD.

Parameters:
  • x – Input data points.

  • y – Target values.

  • learn_rate – Learning rate for SGD (default: 0.01).

  • m_center – Center value for slope parameter (default: 1.4).

  • c_center – Center value for intercept parameter (default: -3.1).

  • m_star – Initial slope value (default: 0.0).

  • c_star – Initial intercept value (default: -5.0).

  • max_iters – Maximum number of iterations (default: 4000).

  • diagrams – Directory to save the plots (default: ‘../diagrams’).

Returns:

Number of frames generated.

mlai.plot.regression_contour_coordinate_descent(x, y, m_center=1.4, c_center=-3.1, m_star=0.0, c_star=-5.0, max_iters=100, diagrams='../diagrams')[source]

Plot evolution of the solution of linear regression via coordinate descent.

Parameters:
  • x – Input data points.

  • y – Target values.

  • m_center – Center value for slope parameter (default: 1.4).

  • c_center – Center value for intercept parameter (default: -3.1).

  • m_star – Initial slope value (default: 0.0).

  • c_star – Initial intercept value (default: -5.0).

  • max_iters – Maximum number of iterations (default: 100).

  • diagrams – Directory to save the plots (default: ‘../diagrams’).

Returns:

Number of frames generated.

mlai.plot.over_determined_system(diagrams='../diagrams')[source]

Visualize what happens in an overdetermined system with linear regression.

Parameters:

diagrams – Directory to save the plots (default: ‘../diagrams’).

mlai.plot.gaussian_of_height(diagrams='../diagrams')[source]

Plot a Gaussian density representing heights.

Parameters:

diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.gaussian_volume_1D(r_yolk=0.95, r_iron_sulfide=1.05, directory='../diagrams')[source]

Plot Gaussian volumes in 1D with shaded regions representing different probability areas.

This function creates a visualization of a 1D Gaussian distribution with three distinct shaded regions representing different probability masses:

  • Yolk (65.8%): Central region from -0.95 to 0.95 standard deviations

  • Iron Sulfide (4.8%): Intermediate regions from 0.95 to 1.05 and -1.05 to -0.95 std devs

  • White (29.4%): Outer regions beyond ±1.05 standard deviations

The visualization is inspired by the composition of an egg, where different regions represent different probability masses of the Gaussian distribution.

Returns:

Saves the plot as ‘gaussian-volume-1D-shaded.svg’ in the specified directory

Return type:

None

Note

The function uses scipy.stats.norm for the Gaussian probability density function. The plot includes grid lines and proper axis labels for clarity.

mlai.plot.gaussian_volume_2D(r_yolk=0.95, r_iron_sulfide=1.05, r_outer=3.0, directory='../diagrams')[source]

Plot Gaussian volumes in 2D viewed from above using egg-shaped ellipses.

This function creates a visualization of a 2D Gaussian distribution viewed from above with three distinct elliptical regions representing different probability masses:

  • Yellow: Central elliptical region with radius r_yolk standard deviations

  • Green: Intermediate elliptical ring from radius r_yolk to r_iron_sulfide standard deviations

  • White: Outer elliptical region from radius r_iron_sulfide out to r_outer (3.0 standard deviations)

The visualization is inspired by the composition of an egg viewed from above, where different regions represent different probability masses of the 2D Gaussian distribution. The ellipses are slightly eccentric to give them a more egg-like appearance.

Returns:

Saves the plot as ‘gaussian-volume-2D.svg’ in the specified directory

Return type:

None

mlai.plot.gaussian_volume_3D(r_yolk=0.95, r_iron_sulfide=1.05, r_outer=3.0, directory='../diagrams')[source]

Plot Gaussian volumes in 3D viewed from above using egg-shaped ellipses with a semi-transparent white overlay to give a 3D depth effect.

This function creates a visualization of a 3D Gaussian distribution viewed from above with three distinct elliptical regions representing different probability masses, plus a semi-transparent white overlay to simulate looking through the white of the egg:

  • Yellow: Central elliptical region with radius r_yolk standard deviations

  • Green: Intermediate elliptical ring from radius r_yolk to r_iron_sulfide standard deviations

  • White: Outer elliptical region from radius r_iron_sulfide out to r_outer (3.0 standard deviations)

  • Semi-transparent white overlay: Gives the 3D depth effect

The visualization is inspired by the composition of an egg viewed from above, where different regions represent different probability masses of the 3D Gaussian distribution. The ellipses are slightly eccentric to give them a more egg-like appearance.

Returns:

Saves the plot as ‘gaussian-volume-3D.svg’ in the specified directory

Return type:

None

mlai.plot.marathon_fit(model, value, param_name, param_range, xlim, fig, ax, x_val=None, y_val=None, objective=None, diagrams='../diagrams', fontsize=20, objective_ylim=None, prefix='olympic', title=None, png_plot=False, samps=130)[source]

Plot fit of the olympic marathon data alongside error.

Parameters:
  • model – Model object with a predict method and data attributes.

  • value – Value to fit.

  • param_name – Name of the parameter being varied.

  • param_range – Range of parameter values.

  • xlim – Limits for the x-axis.

  • fig – Matplotlib figure object.

  • ax – Array of matplotlib axes.

  • x_val – Optional x value for highlighting (default: None).

  • y_val – Optional y value for highlighting (default: None).

  • objective – Objective function (optional).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

  • fontsize – Font size for labels (default: 20).

  • objective_ylim – Y-axis limits for the objective plot (optional).

  • prefix – Prefix for saved plot filenames (default: ‘olympic’).

  • title – Title for the plot (optional).

  • png_plot – Whether to save as PNG (default: False).

  • samps – Number of samples for prediction (default: 130).

mlai.plot.rmse_fit(x, y, param_name, param_range, model=<class 'mlai.linear_models.LM'>, objective_ylim=None, xlim=None, plot_fit=<function marathon_fit>, diagrams='../diagrams', **kwargs)[source]

Fit a model and show RMSE error.

Parameters:
  • x – The input x data.

  • y – The input y data.

  • param_name – The parameter name to vary.

  • param_range – The range over which to vary the parameter.

  • model – The model to fit (default is LM).

  • objective_ylim – The y limits for the plot of the objective.

  • xlim – The x limits for the plot.

  • plot_fit – Function to use for plotting the fit.

  • diagrams – Directory to save the plots (default: ‘../diagrams’).

  • **kwargs

    Additional keyword arguments passed to plot_fit.
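
A hedged sketch of the calling pattern. The parameter name ‘num_basis' and its range below are placeholders, not guaranteed attributes of LM; substitute a parameter your model actually exposes.

>>> import numpy as np
>>> import mlai.plot as plot
>>> # Synthetic marathon-style data: years against pace.
>>> x = np.linspace(1896, 2020, 28)[:, np.newaxis]
>>> y = 5.0 - 0.01*(x - 1896) + 0.1*np.random.randn(*x.shape)
>>> # 'num_basis' is a placeholder parameter name, not a guaranteed LM attribute.
>>> plot.rmse_fit(x, y, param_name='num_basis', param_range=range(1, 10),
...               diagrams='./diagrams')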

mlai.plot.holdout_fit(x, y, param_name, param_range, model=<class 'mlai.linear_models.LM'>, val_start=20, objective_ylim=None, xlim=None, plot_fit=<function marathon_fit>, permute=True, prefix='olympic_val', diagrams='../diagrams', **kwargs)[source]

Fit a model and show holdout error.

Parameters:
  • x – The input x data.

  • y – The input y data.

  • param_name – The parameter name to vary.

  • param_range – The range over which to vary the parameter.

  • model – The model to fit (default is LM).

  • val_start – Starting index for validation set (default: 20).

  • objective_ylim – The y limits for the plot of the objective.

  • xlim – The x limits for the plot.

  • plot_fit – Function to use for plotting the fit.

  • permute – Whether to permute the data (default: True).

  • prefix – Prefix for saved plot filenames (default: ‘olympic_val’).

  • diagrams – Directory to save the plots (default: ‘../diagrams’).

  • **kwargs

    Additional keyword arguments passed to plot_fit.

mlai.plot.loo_fit(x, y, param_name, param_range, model=<class 'mlai.linear_models.LM'>, objective_ylim=None, xlim=None, plot_fit=<function marathon_fit>, prefix='olympic_loo', diagrams='../diagrams', **kwargs)[source]

Fit a model and show leave one out error.

Parameters:
  • x – The input x data.

  • y – The input y data.

  • param_name – The parameter name to vary.

  • param_range – The range over which to vary the parameter.

  • model – The model to fit (default is LM).

  • objective_ylim – The y limits for the plot of the objective.

  • xlim – The x limits for the plot.

  • plot_fit – Function to use for plotting the fit.

  • prefix – Prefix for saved plot filenames (default: ‘olympic_loo’).

  • diagrams – Directory to save the plots (default: ‘../diagrams’).

  • **kwargs

    Additional keyword arguments passed to plot_fit.

mlai.plot.cv_fit(x, y, param_name, param_range, model=<class 'mlai.linear_models.LM'>, objective_ylim=None, xlim=None, plot_fit=<function marathon_fit>, num_parts=5, diagrams='../diagrams', **kwargs)[source]

Fit a model and show cross validation error.

Parameters:
  • x – The input x data.

  • y – The input y data.

  • param_name – The parameter name to vary.

  • param_range – The range over which to vary the parameter.

  • model – The model to fit (default is LM).

  • objective_ylim – The y limits for the plot of the objective.

  • xlim – The x limits for the plot.

  • plot_fit – Function to use for plotting the fit.

  • num_parts – Number of parts for cross-validation (default: 5).

  • diagrams – Directory to save the plots (default: ‘../diagrams’).

  • **kwargs

    Additional keyword arguments passed to plot_fit.

mlai.plot.under_determined_system(diagrams='../diagrams')[source]

Visualize what happens in an underdetermined system with linear regression.

Parameters:

diagrams – Directory to save the plots (default: ‘../diagrams’).

mlai.plot.bayes_update(diagrams='../diagrams')[source]

Visualize Bayesian updating with a simple example.

Parameters:

diagrams – Directory to save the plots (default: ‘../diagrams’).

mlai.plot.height_weight(h=None, w=None, muh=1.7, varh=0.0225, muw=75, varw=36, diagrams='../diagrams')[source]

Plot height and weight data with Gaussian distributions.

Parameters:
  • h – Height data (optional).

  • w – Weight data (optional).

  • muh – Mean height (default: 1.7).

  • varh – Variance of height (default: 0.0225).

  • muw – Mean weight (default: 75).

  • varw – Variance of weight (default: 36).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.independent_height_weight(h=None, w=None, muh=1.7, varh=0.0225, muw=75, varw=36, num_samps=20, diagrams='../diagrams')[source]

Plot independent height and weight samples.

Parameters:
  • h – Height data (optional).

  • w – Weight data (optional).

  • muh – Mean height (default: 1.7).

  • varh – Variance of height (default: 0.0225).

  • muw – Mean weight (default: 75).

  • varw – Variance of weight (default: 36).

  • num_samps – Number of samples to generate (default: 20).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.correlated_height_weight(h=None, w=None, muh=1.7, varh=0.0225, muw=75, varw=36, num_samps=20, diagrams='../diagrams')[source]

Plot correlated height and weight samples.

Parameters:
  • h – Height data (optional).

  • w – Weight data (optional).

  • muh – Mean height (default: 1.7).

  • varh – Variance of height (default: 0.0225).

  • muw – Mean weight (default: 75).

  • varw – Variance of weight (default: 36).

  • num_samps – Number of samples to generate (default: 20).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.two_point_pred(K, f, x, ax=None, ind=[0, 1], conditional_linestyle='-', conditional_linecolor=[1.0, 0.0, 0.0], conditional_size=4, fixed_linestyle='-', fixed_linecolor=[0.0, 1.0, 0.0], fixed_size=4, stub=None, start=0, diagrams='../diagrams')[source]

Plot two-point prediction for Gaussian processes.

Parameters:
  • K – Covariance matrix.

  • f – Function values.

  • x – Input points.

  • ax – Matplotlib axis (optional).

  • ind – Indices to plot (default: [0, 1]).

  • conditional_linestyle – Line style for conditional (default: ‘-').

  • conditional_linecolor – Color for conditional (default: red).

  • conditional_size – Line width for conditional (default: 4).

  • fixed_linestyle – Line style for fixed (default: ‘-').

  • fixed_linecolor – Color for fixed (default: green).

  • fixed_size – Line width for fixed (default: 4).

  • stub – Stub parameter (optional).

  • start – Starting index (default: 0).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.output_augment_x(x, num_outputs)[source]

Augment input x with output dimensions.

Parameters:
  • x – Input data.

  • num_outputs – Number of outputs.

Returns:

Augmented input data.

mlai.plot.basis(function, x_min, x_max, fig, ax, loc, text, diagrams='./diagrams', fontsize=20, num_basis=3, num_plots=3)[source]

Plot basis functions.

Parameters:
  • function – Basis function to plot.

  • x_min – Minimum x value.

  • x_max – Maximum x value.

  • fig – Matplotlib figure.

  • ax – Matplotlib axis.

  • loc – Location for text.

  • text – Text to display.

  • diagrams – Directory to save the plot (default: ‘./diagrams’).

  • fontsize – Font size (default: 20).

  • num_basis – Number of basis functions (default: 3).

  • num_plots – Number of plots (default: 3).

mlai.plot.computing_covariance(kernel, x, formula, stub, prec='1.2', diagrams='../slides/diagrams/kern')[source]

Visualize covariance computation.

Parameters:
  • kernel – Kernel function.

  • x – Input data.

  • formula – Formula to display.

  • stub – Stub parameter.

  • prec – Precision for values (default: ‘1.2’).

  • diagrams – Directory to save the plots (default: ‘../slides/diagrams/kern’).

mlai.plot.kern_circular_sample(K, mu=None, x=None, filename=None, fig=None, num_samps=5, num_theta=48, multiple=True, diagrams='../diagrams', **kwargs)[source]

Sample from a circular kernel and create animation.

Parameters:
  • K – Kernel function.

  • mu – Mean (optional).

  • x – Input data (optional).

  • filename – Output filename (optional).

  • fig – Matplotlib figure (optional).

  • num_samps – Number of samples (default: 5).

  • num_theta – Number of theta values (default: 48).

  • multiple – Whether to show multiple samples (default: True).

  • diagrams – Directory to save the plots (default: ‘../diagrams’).

  • **kwargs

    Additional keyword arguments.

Returns:

Animation object.

mlai.plot.animate_covariance_function(kernel_function, x=None, num_samps=5, multiple=False)[source]

Create animation of covariance function samples.

Parameters:
  • kernel_function – Kernel function to sample from.

  • x – Input data (optional).

  • num_samps – Number of samples (default: 5).

  • multiple – Whether to show multiple samples (default: False).

Returns:

Animation object.

mlai.plot.multi_output_covariance_func(kernel, x=None, num_outputs=2, shortname=None, longname=None, comment=None, num_samps=5, diagrams='../diagrams')[source]

Complete multi-output covariance function visualization with both static plots and animation.

Parameters:
  • kernel – Multi-output kernel function (e.g., icm_cov, lmc_cov)

  • x – Input data (optional)

  • num_outputs – Number of outputs to visualize

  • shortname – Short name for the kernel (optional)

  • longname – Long name for the kernel (optional)

  • comment – Comment to display (optional)

  • num_samps – Number of samples for animation (default: 5)

  • diagrams – Directory to save the plot (default: ‘../diagrams’)

mlai.plot.multi_output_covariance_heatmap(kernel, x=None, num_outputs=2, shortname=None, diagrams='../diagrams')[source]

Public wrapper for multi-output covariance heatmap visualization.

mlai.plot.multi_output_sample_plot(kernel, x=None, num_outputs=2, num_samps=3, shortname=None, diagrams='../diagrams')[source]

Public wrapper for multi-output sample plot visualization.

mlai.plot.multi_output_animate_covariance_function(kernel, x=None, num_outputs=2, num_samps=5, diagrams='../diagrams')[source]

Public wrapper for multi-output animated covariance function.

mlai.plot.covariance_func(kernel, x=None, shortname=None, longname=None, comment=None, num_samps=5, diagrams='../diagrams', multiple=False)[source]

Plot covariance function samples.

Parameters:
  • kernel – Kernel function to sample from.

  • x – Input data (optional).

  • shortname – Short name for the kernel (optional).

  • longname – Long name for the kernel (optional).

  • comment – Comment to display (optional).

  • num_samps – Number of samples (default: 5).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

  • multiple – Whether to show multiple samples (default: False).
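
A hedged sketch: the kernel is assumed to be a callable of two input points, following the MLAI kernel interface; the exponentiated-quadratic form below is illustrative.

>>> import numpy as np
>>> import mlai.plot as plot
>>> def eq_cov(x, x_prime, lengthscale=1.0):
...     # Exponentiated quadratic covariance between two points
...     # (the two-point call signature is an assumption).
...     return np.exp(-0.5*np.sum((x - x_prime)**2)/lengthscale**2)
>>> x = np.linspace(-3, 3, 25)[:, np.newaxis]
>>> plot.covariance_func(eq_cov, x=x, shortname='eq',
...                      longname='Exponentiated Quadratic',
...                      num_samps=5, diagrams='./diagrams')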

mlai.plot.rejection_samples(kernel, x=None, num_few=20, num_many=1000, diagrams='../diagrams', **kwargs)[source]

Generate rejection samples from a kernel.

Parameters:
  • kernel – Kernel function to sample from.

  • x – Input data (optional).

  • num_few – Number of few samples (default: 20).

  • num_many – Number of many samples (default: 1000).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

  • **kwargs

    Additional keyword arguments.

mlai.plot.two_point_sample(kernel_function, diagrams='../diagrams')[source]

Sample from a two-point kernel function.

Parameters:
  • kernel_function – Kernel function to sample from.

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.poisson(diagrams='../diagrams')[source]

Plot Poisson distribution examples.

Parameters:

diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.logistic(diagrams='../diagrams')[source]

Plot logistic function examples.

Parameters:

diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.height(ax, h, ph)[source]

Plot height as a distribution.

Parameters:
  • ax – Matplotlib axis.

  • h – Height values.

  • ph – Height probabilities.

mlai.plot.weight(ax, w, pw)[source]

Plot weight distribution.

Parameters:
  • ax – Matplotlib axis.

  • w – Weight values.

  • pw – Weight probabilities.

mlai.plot.low_rank_approximation(fontsize=25, diagrams='../diagrams')[source]

Visualize low-rank matrix approximation.

Parameters:
  • fontsize – Font size for labels (default: 25).

  • diagrams – Directory to save the plot (default: ‘../diagrams’).

mlai.plot.blank_canvas(ax)[source]

Create a blank canvas for plotting.

Parameters:

ax – Matplotlib axis to clear.

mlai.plot.kronecker_illustrate(fontsize=25, figsize=(10, 5), diagrams='../diagrams')[source]

Illustrate a Kronecker product

mlai.plot.kronecker_IK(fontsize=25, figsize=(10, 5), reverse=False, diagrams='../diagrams')[source]

Illustrate a Kronecker product

mlai.plot.kronecker_IK_highlight(fontsize=25, figsize=(10, 5), reverse=False, diagrams='../diagrams')[source]

Illustrate a Kronecker product

mlai.plot.kronecker_WX(fontsize=25, figsize=(10, 5), diagrams='../diagrams')[source]

Illustrate a Kronecker product

mlai.plot.perceptron(x_plus, x_minus, learn_rate=0.1, max_iters=10000, max_updates=30, seed=100001, diagrams='../diagrams')[source]

Fit a perceptron algorithm and record iterations of the fit.
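
A minimal sketch with two linearly separable clusters (the data are synthetic):

>>> import numpy as np
>>> import mlai.plot as plot
>>> np.random.seed(0)
>>> # Positive and negative class points in two dimensions.
>>> x_plus = np.random.randn(30, 2) + np.array([2.0, 2.0])
>>> x_minus = np.random.randn(30, 2) + np.array([-2.0, -2.0])
>>> # Run the perceptron and save a diagram for each recorded update.
>>> plot.perceptron(x_plus, x_minus, learn_rate=0.1, max_updates=30,
...                 diagrams='./diagrams')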

mlai.plot.dist2(X, Y)[source]

Compute squared distances between two design matrices
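
A minimal sketch (the expectation that the result is the matrix of pairwise squared Euclidean distances between rows, here of shape (5, 8), is an assumption about the return convention):

>>> import numpy as np
>>> import mlai.plot as plot
>>> X = np.random.randn(5, 3)   # 5 points in 3 dimensions
>>> Y = np.random.randn(8, 3)   # 8 points in the same space
>>> D2 = plot.dist2(X, Y)       # pairwise squared distances between rows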

mlai.plot.clear_axes(ax)[source]

Clear the axes lines and ticks

mlai.plot.non_linear_difficulty_plot_3(alpha=1.0, rbf_width=2, num_basis_func=3, num_samples=10, number_across=30, fontsize=30, diagrams='../diagrams')[source]

Push a Gaussian density through an RBF network and plot results

mlai.plot.non_linear_difficulty_plot_2(alpha=1.0, rbf_width=2, num_basis_func=3, num_samples=10, number_across=101, fontsize=30, diagrams='../diagrams')[source]

Plot a one dimensional line mapped through a two dimensional mapping.

mlai.plot.non_linear_difficulty_plot_1(alpha=1.0, data_std=0.2, rbf_width=0.1, num_basis_func=100, number_across=200, num_samples=1000, patch_color=[0.3, 0.3, 0.3], fontsize=30, diagrams='../diagrams')[source]

Plot a one dimensional Gaussian pushed through an RBF network.

class mlai.plot.network(layers=None)[source]

Bases: object

Class for drawing a neural network.

__init__(layers=None)[source]
add_layer(layer)[source]
property width

Return the widest layer number

property depth

Return the depth of the network

draw(grid_unit=2.5, node_unit=0.9, observed_style='shaded', line_width=1, origin=[0, 0])[source]

Draw the network using daft

class mlai.plot.layer(width=5, label='', observed=False, fixed=False, text='')[source]

Bases: object

Class for a neural network layer

__init__(width=5, label='', observed=False, fixed=False, text='')[source]
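
A sketch of building and drawing a small diagram with these classes (drawing requires the optional daft dependency; the layer widths and labels are illustrative):

>>> import mlai.plot as plot
>>> net = plot.network()
>>> # Input layer, one hidden layer, and an observed output layer.
>>> net.add_layer(plot.layer(width=4, label='x', observed=True, text='inputs'))
>>> net.add_layer(plot.layer(width=6, label='h', text='hidden units'))
>>> net.add_layer(plot.layer(width=1, label='y', observed=True, text='output'))
>>> net.draw(grid_unit=2.5, node_unit=0.9)
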
mlai.plot.neural_network(directory='../diagrams')[source]

Draw a neural network.

mlai.plot.deep_nn(directory='../diagrams')[source]

Draw a deep neural network.

mlai.plot.deep_nn_bottleneck(directory='../diagrams')[source]

Draw a deep neural network with bottleneck layers.

mlai.plot.box(lim_val=0.5, side_length=25)[source]

Plot a box for use in deep GP samples.

mlai.plot.stack_gp_sample(kernel=None, latent_dims=[2, 2, 2, 2, 2], side_length=25, lim_val=0.5, num_samps=5, figsize=(1.4, 7), directory='../diagrams')[source]

Draw a sample from a deep Gaussian process.

mlai.plot.vertical_chain(depth=5, grid_unit=1.5, node_unit=1, line_width=1.5, shape=None, target='y')[source]

Make a vertical chain representation of a deep GP.

mlai.plot.horizontal_chain(depth=5, shape=None, origin=[0, 0], grid_unit=4, node_unit=1.9, line_width=3, target='y')[source]

Plot a horizontal Markov chain.

mlai.plot.shared_gplvm()[source]

Plot graphical model of a Shared GP-LVM

mlai.plot.ppca_graphical_model(directory='../diagrams')[source]

Plot graphical model of Probabilistic Principal Component Analysis (PPCA).

The model shows:

  • Y: Observed data (grayed/shaded)

  • X: Latent variables (white)

  • W: Weight matrix parameter (black dot)

  • σ²: Noise variance parameter (black dot)

Layout:

  • Y is at the center

  • X is above Y at -30° angle

  • W is above Y at +30° angle

  • σ² is to the right of Y

mlai.plot.dppca_graphical_model(directory='../diagrams')[source]

Plot graphical model of Dual Probabilistic Principal Component Analysis (DPPCA).

The model shows:

  • Y: Observed data (grayed/shaded)

  • X: Latent variables (white)

  • W: Weight matrix parameter (black dot)

  • σ²: Noise variance parameter (black dot)

Layout:

  • Y is at the center

  • X is above Y at -30° angle

  • W is above Y at +30° angle

  • σ² is to the right of Y

mlai.plot.three_pillars_innovation(directory='./diagrams')[source]

Plot graphical model of three pillars of successful innovation

mlai.plot.model_output(model, output_dim=0, scale=1.0, offset=0.0, ax=None, xlabel='$x$', ylabel='$y$', xlim=None, ylim=None, fontsize=20, portion=0.2)[source]

Plot the output of a GP.

Parameters:
  • model – The model for the output plotting.

  • output_dim – The output dimension to plot (default: 0).

  • scale – How to scale the output (default: 1.0).

  • offset – How to offset the output (default: 0.0).

  • ax – Axis to plot on (optional).

  • xlabel – Label for the x axis (default: ‘$x$').

  • ylabel – Label for the y axis (default: ‘$y$').

  • xlim – Limits of the x axis (optional).

  • ylim – Limits of the y axis (optional).

  • fontsize – Font size for labels (default: 20).

  • portion – What proportion of the input range to put outside the data (default: 0.2).

mlai.plot.model_sample(model, output_dim=0, scale=1.0, offset=0.0, samps=10, ax=None, xlabel='$x$', ylabel='$y$', fontsize=20, portion=0.2, xlim=None, ylim=None)[source]

Plot model output with samples.

mlai.plot.multiple_optima(ax=None, gene_number=937, resolution=80, model_restarts=10, seed=10000, max_iters=300, optimize=True, fontsize=20, directory='./diagrams')[source]

Show an example of a multimodal error surface for Gaussian process regression. Gene 937 has bimodal behaviour where the noisy mode is higher.


mlai.plot.gp_optimize_quadratic(lambda1=3, lambda2=1, directory='../diagrams', fontsize=20, plot_width=0.6, generate_frames=True)[source]

Create animated visualization of GP optimization quadratic data fit term.

This function replaces the MATLAB code that generates animated LaTeX diagrams showing the quadratic data fit term $\frac{\mathbf{y}^\top\mathbf{K}^{-1}\mathbf{y}}{2}$ with elliptical contours and eigenvalue visualization.

mlai.plot.tsne_example(X, labels, perplexities=[5, 30, 50], random_state=42)[source]

Plot t-SNE embeddings with different perplexity values
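
A minimal sketch with two synthetic clusters (the data and labels are illustrative):

>>> import numpy as np
>>> import mlai.plot as plot
>>> np.random.seed(42)
>>> # Two clusters in 10 dimensions with integer class labels.
>>> X = np.vstack([np.random.randn(50, 10), np.random.randn(50, 10) + 3.0])
>>> labels = np.array([0]*50 + [1]*50)
>>> plot.tsne_example(X, labels, perplexities=[5, 30, 50], random_state=42)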

mlai.plot.squared_distances(Y, shortname, description, directory='./diagrams', figsize=None)[source]

Plot squared distances between points in Y

mlai.plot.visualise_relu_activations(nn, X1, X2, layer_idx=0, directory='../diagrams', filename='relu-activations.svg')[source]

Visualise which ReLU units are activated in a specific layer.

This function creates a grid of subplots showing the activation patterns of each ReLU unit in a neural network layer. Each subplot shows where that particular unit is active (positive output) vs inactive (zero output).

Parameters:
  • nn (NeuralNetwork) – Trained neural network

  • X1, X2 – Meshgrid coordinates for visualisation

  • layer_idx (int) – Which hidden layer to visualise (0-indexed)

Returns:

Figure object

Return type:

matplotlib.figure.Figure

Examples

>>> x1 = np.linspace(-2, 2, 50)
>>> x2 = np.linspace(-2, 2, 50)
>>> X1, X2 = np.meshgrid(x1, x2)
>>> nn = NeuralNetwork([2, 10, 1], [ReLUActivation(), LinearActivation()])
>>> fig = visualise_relu_activations(nn, X1, X2, layer_idx=0)
mlai.plot.visualise_activation_summary(nn, X1, X2, layer_idx=0, directory='../diagrams', filename='activation-summary.svg')[source]

Create a summary visualisation showing network behavior.

This function creates a 3-panel visualization showing:

  1. Network output

  2. Number of active ReLUs per point

  3. Binary activation pattern

Parameters:
  • nn (NeuralNetwork) – Trained neural network

  • X1, X2 – Meshgrid coordinates for visualization

  • layer_idx (int) – Which hidden layer to visualize (0-indexed)

Returns:

Figure object

Return type:

matplotlib.figure.Figure

Examples

>>> x1 = np.linspace(-2, 2, 50)
>>> x2 = np.linspace(-2, 2, 50)
>>> X1, X2 = np.meshgrid(x1, x2)
>>> nn = NeuralNetwork([2, 10, 1], [ReLUActivation(), LinearActivation()])
>>> fig = visualise_activation_summary(nn, X1, X2, layer_idx=0)
mlai.plot.visualise_decision_boundaries(nn, X1, X2, layer_idx=0, directory='../diagrams', filename='decision-boundaries.svg')[source]

Visualize the linear decision boundaries created by each ReLU unit.

This function shows the linear decision boundaries (where each ReLU unit transitions from inactive to active) overlaid on the network’s output.

Parameters:
  • nn (NeuralNetwork) – Trained neural network

  • X1, X2 – Meshgrid coordinates for visualization

  • layer_idx (int) – Which hidden layer to visualize (0-indexed)

Returns:

Figure object

Return type:

matplotlib.figure.Figure

Examples

>>> x1 = np.linspace(-2, 2, 50)
>>> x2 = np.linspace(-2, 2, 50)
>>> X1, X2 = np.meshgrid(x1, x2)
>>> nn = NeuralNetwork([2, 10, 1], [ReLUActivation(), LinearActivation()])
>>> fig = visualise_decision_boundaries(nn, X1, X2, layer_idx=0)