<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Neil Lawrence&apos;s Talks</title>
    <description>talks given by Neil Lawrence</description>
    <link>http://inverseprobability.com/talks/</link>
    <atom:link href="http://inverseprobability.com/talks/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Fri, 10 Apr 2026 10:44:50 +0000</pubDate>
    <lastBuildDate>Fri, 10 Apr 2026 10:44:50 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
      <item>
        <title>AI and Security: From Bandwidth to Practical Implications</title>
        <description>&lt;p&gt;The evolution from classical security to AI-mediated security challenges represents a shift in how we think about protecting information systems. This talk explores the bandwidth limitations that create security vulnerabilities, introduces the Human Analogue Machine (HAM) from &lt;em&gt;The Atomic Human&lt;/em&gt; as “humans scaled up,” and examines practical security implications through three phases: classical security enhanced with GenAI, GenAI-specific security challenges, and broader information systems implications.&lt;/p&gt; &lt;p&gt;Through real-world examples including the Heathrow airport cyber-attack and Notion AI Agents research, we’ll examine how security thinking must evolve to address threats that exploit the very capabilities that make AI systems so powerful.&lt;/p&gt;</description>
        <pubDate>Sat, 01 Aug 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-and-security-berkeley-summit.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-and-security-berkeley-summit.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Agentic AI and Security</title>
        <description>&lt;p&gt;Agentic AI changes security because it turns natural language into actions: tool calls, API requests, and workflow execution. That shift amplifies both productivity and risk—incidents can unfold at machine bandwidth while human sense-making remains slow, distributed, and approval-bound.&lt;/p&gt; &lt;p&gt;This talk offers a practical frame for leaders building and deploying AI in European organisations: how to capture the upside of delegation (faster operations, reduced coordination overhead, partial paydown of technical and intellectual debt) while avoiding a new liability, &lt;em&gt;agentic debt&lt;/em&gt;, that accumulates when authority boundaries, evidence requirements, and recovery paths are left implicit.&lt;/p&gt; &lt;p&gt;We’ll move from first principles (bandwidth and interfaces) to concrete patterns: instruction hierarchy, least-privilege tooling, auditable action boundaries, reversible operations, and containment-by-default.&lt;/p&gt;</description>
        <pubDate>Mon, 01 Jun 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-and-security-handelsblatt-tech-2026.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-and-security-handelsblatt-tech-2026.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Inaccessible Game</title>
        <description>&lt;p&gt;In this talk we will explore a zero-player game based on an information isolation constraint. The dynamics of the game emerge from a “no-barber” selection principle that prohibits external structure. The aim is for the game to avoid impredicative-style inconsistencies. Motivated by the selection principle we will derive a “selected” trajectory in the game that consists of a second-order constrained maximum entropy production along the information geometry.&lt;/p&gt;</description>
        <pubDate>Wed, 20 May 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-inaccessible-game.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-inaccessible-game.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Humans, Economics and AI</title>
        <description>&lt;p&gt;As artificial intelligence advances from language models to agentic systems with economic agency, the Ministry of Economy and Finance and the broader public sector face fundamental questions about how these technologies reshape economic behaviour, labour markets, and human capital. This talk explores themes from “The Atomic Human,” examining why attempts to measure and optimise human capital through AI risk creating new productivity paradoxes, and what this means for economic modelling and fiscal policy.&lt;/p&gt; &lt;p&gt;Drawing on innovation economics and information theory, we examine how the attention economy operates as a new form of capital accumulation, why traditional market mechanisms struggle to map AI capabilities to societal needs, and how governments might navigate an economy where the boundaries between human and machine intelligence increasingly blur.&lt;/p&gt;</description>
        <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/humans-economics-and-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/humans-economics-and-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Leadership and AI: Strategic Decision Making in the Age of Human-Analogue Machines</title>
        <description>&lt;p&gt;As AI technologies reshape business landscapes across industries, leaders face fundamental questions about balancing automation with human judgment, managing information flows, and designing organisational decision-making structures. This masterclass builds on the ideas in &lt;em&gt;The Atomic Human&lt;/em&gt; to provide MBA students with practical frameworks for understanding AI’s strategic implications through the lens of information topography, decision-making architectures, and human-AI collaboration.&lt;/p&gt; &lt;p&gt;Through a combination of conceptual frameworks, real-world case studies, and interactive exercises, participants will develop the critical thinking tools needed to lead organisations in the age of human-analogue machines. We’ll explore how to strategically implement AI while maintaining human agency, building intelligent accountability, and creating organisational effectiveness in a world where machines increasingly mimic human capabilities.&lt;/p&gt;</description>
        <pubDate>Sat, 18 Apr 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/leadership-and-ai-luiss-part-time-mba.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/leadership-and-ai-luiss-part-time-mba.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Information, Energy and Intelligence</title>
        <description>&lt;p&gt;David MacKay’s work emphasized explicit assumptions and operational clarity in modeling information and inference. In games like Conway’s Life, rules are explicit and self-contained. In analytic frameworks, we often take implicit adjudicators for granted—external observers, pre-specified outcome spaces, privileged decompositions. What if we forbid such external adjudication and seek only rules that can be applied from within the system?&lt;/p&gt; &lt;p&gt;This talk explores the “inaccessible game,” an information-theoretic dynamical system where all rules must be internally adjudicable. Starting from three axioms characterizing information loss (Baez-Fritz-Leinster), we show how a “no-barber principle” selects marginal entropy conservation, maximum entropy dynamics, and specific substrate properties, not by assumption but by consistency requirements. We explore when the constraints imply energy-entropy equivalence in the thermodynamic limit and how entropy time becomes a distinguished clock within the framework.&lt;/p&gt; &lt;p&gt;This work is dedicated to the memory of David MacKay.&lt;/p&gt;</description>
        <pubDate>Fri, 27 Mar 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/information-energy-and-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/information-energy-and-intelligence.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;In this talk Neil will introduce the notion of the Atomic Human: the indivisible component of our humanity that can’t be taken by the machine. He will argue that the algorithmic decision making that emerges from the machine offers a different place to stand and better understand our own intelligence and what is precious about us. Without this perspective we do risk displacing our human parts in favour of the machine, but by seeing our intelligence from the machine’s perspective we can ensure that we integrate these new technologies in ways that enhance who we are rather than replace who we are.&lt;/p&gt;</description>
        <pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-msr-cambridge.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-msr-cambridge.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Agentic Security at Trent: From Judgment to Time-Bounded Delegation</title>
        <description>&lt;p&gt;Building on our first Trent session, this short offsite talk focuses on one practical question: how do we scale agentic systems without ceding the institutional judgement layer that keeps decisions safe?&lt;/p&gt; &lt;p&gt;We frame the challenge through three ideas: (1) Data-Oriented Agents (DOAgents) for networks of specialised agents, (2) the Consistent Reasoning Paradox and why robust systems need explicit “I don’t know” behaviour, and (3) agentic debt as the operational cost of delegation without bounded time, authority, and recovery paths.&lt;/p&gt; &lt;p&gt;The proposal is pragmatic: each subtask in an agent graph receives a time budget and explicit termination policy. Agents either complete with evidence, escalate with “I don’t know,” or trigger human involvement. These budgets can be tuned empirically by balancing human interruption cost against compute waste and risk exposure.&lt;/p&gt;</description>
        <pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-and-security-trent-offsite-2026.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-and-security-trent-offsite-2026.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI for Science</title>
        <description>&lt;p&gt;AI is changing how science is practiced: from data analysis and surrogate modelling to the use of large, general-purpose models as scientific assistants that can read, write, code, and coordinate work.&lt;/p&gt; &lt;p&gt;This opening lecture frames the workshop’s core questions as questions about &lt;em&gt;where knowledge lives&lt;/em&gt;, &lt;em&gt;what we mean by understanding&lt;/em&gt;, and &lt;em&gt;how we preserve scientific agency&lt;/em&gt; when useful models are not fully intelligible. We’ll build on Popper/Kuhn perspectives on scientific progress, and outline questions for an AI-for-science “playbook” with particular focus on the ideas of tacit knowledge and “agentic debt.”&lt;/p&gt;</description>
        <pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-for-science-bellairs-workshop-2026.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-for-science-bellairs-workshop-2026.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;I’ll contrast our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence and speculate on what it means for our futures.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s book “The Atomic Human.”&lt;/p&gt;</description>
        <pubDate>Fri, 13 Mar 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-mila.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-mila.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;As artificial intelligence advances from language models to agentic systems with economic agency, central banks and financial institutions face fundamental questions about how these technologies reshape economic behavior, market dynamics, and human capital. This talk explores themes from “The Atomic Human,” examining why attempts to measure and optimize human capital through AI risk creating new productivity paradoxes, and what this means for economic modeling and monetary policy.&lt;/p&gt; &lt;p&gt;Drawing on innovation economics and information theory, we examine how the attention economy operates as a new form of capital accumulation, why traditional market mechanisms struggle to map AI capabilities to societal needs, and how central banks might navigate an economy where the boundaries between human and machine intelligence increasingly blur.&lt;/p&gt;</description>
        <pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-cannot-replace-the-atomic-human-banca-ditalia.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-cannot-replace-the-atomic-human-banca-ditalia.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Leadership and AI: Strategic Decision Making in the Age of Human-Analogue Machines</title>
        <description>&lt;p&gt;As AI technologies reshape business landscapes across industries, leaders face fundamental questions about balancing automation with human judgment, managing information flows, and designing organisational decision-making structures. This masterclass builds on the ideas in &lt;em&gt;The Atomic Human&lt;/em&gt; to provide MBA students with practical frameworks for understanding AI’s strategic implications through the lens of information topography, decision-making architectures, and human-AI collaboration.&lt;/p&gt; &lt;p&gt;Through a combination of conceptual frameworks, real-world case studies, and interactive exercises, participants will develop the critical thinking tools needed to lead organisations in the age of human-analogue machines. We’ll explore how to strategically implement AI while maintaining human agency, building intelligent accountability, and creating organisational effectiveness in a world where machines increasingly mimic human capabilities.&lt;/p&gt;</description>
        <pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/leadership-and-ai-luiss-mba.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/leadership-and-ai-luiss-mba.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Art and the Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI is a reflection of a fascination we have for our own intelligence. Fears of AI concern not just how it invades our digital lives, but the implied threat of an intelligence that might displace us from our position as creators and innovators. This talk examines what truly makes us human. I’ll argue that our intelligence is not defined by our capabilities, but by our fundamental limitations. I’ll suggest those limitations lead to the need to communicate, to collaborate, and to create. For artists and designers working with AI, this perspective offers a framework. Rather than viewing AI systems as competitors for human creativity, we should see them as tools that might amplify our atomic humanity. The question is not “what can AI do?” but “how do we preserve and enhance what makes us irreplaceably human in this age of technological change?”&lt;/p&gt;</description>
        <pubDate>Mon, 16 Feb 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/art-and-the-atomic-human-rca.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/art-and-the-atomic-human-rca.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>VibeSafe</title>
        <description>&lt;p&gt;When working with AI coding assistants, the traditional cost model inverts: generating documentation becomes cheap while debugging misimplementation becomes expensive. VibeSafe is a framework that forces intent to be explicit before implementation. The aim is to catch AI misinterpretation when it costs editing a markdown file, not unwinding code changes.&lt;/p&gt; &lt;p&gt;This talk introduces VibeSafe’s philosophy and core components (CIPs, Backlog, Tenets, Requirements) and explores how they create shared understanding between humans and AI systems. We’ll discuss the workflow, benefits, trade-offs, and gather your insights on applying these practices to real-world engineering teams.&lt;/p&gt;</description>
        <pubDate>Fri, 06 Feb 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/vibesafe-trent-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/vibesafe-trent-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>How AI Works and How it will Transform our Lives</title>
        <description>&lt;p&gt;Professor Lawrence will explore what artificial intelligence means for human society, drawing from over 25 years of research and real-world deployment experience at Amazon. He will explain current AI technologies—from machine learning to large language models—and their transformative potential across science, healthcare, and industry. Central to his discussion will be the question from his book The Atomic Human: what makes human intelligence unique in an age of sophisticated machines? He will address both the enormous benefits AI can deliver and the critical risks that must be managed, including data governance, algorithmic accountability, and the concentration of digital power. Professor Lawrence will discuss how we can ensure AI serves humanity rather than displacing human agency, and what steps policymakers and citizens can take to navigate our AI-driven future while preserving democratic values and human autonomy.&lt;/p&gt;</description>
        <pubDate>Thu, 05 Feb 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/how-ai-works-and-how-it-will-transform-our-lives.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/how-ai-works-and-how-it-will-transform-our-lives.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI Cannot Replace the Atomic Human</title>
        <description>&lt;p&gt;Despite its transformative potential, artificial intelligence risks following a well-worn path where technological innovation fails to address society’s most pressing problems. As we transition from language models to agentic systems, the challenge isn’t just technical sophistication—it’s ensuring these advances serve real industrial and societal needs.&lt;/p&gt; &lt;p&gt;This talk examines this persistent gap through a lens inspired by innovation economics, with particular attention to Italy’s industrial heritage and the challenge of technology transfer. We explore why traditional market mechanisms have failed to map macro-level AI interventions to the micro-level needs of businesses and citizens, and what radical changes are needed to ensure that AI truly serves the Made in Italy ecosystem.&lt;/p&gt;</description>
        <pubDate>Wed, 04 Feb 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-cannot-replace-the-atomic-human-mimit.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-cannot-replace-the-atomic-human-mimit.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;In a space dedicated to Marconi’s legacy of communication, we find ourselves at a time where machines don’t only transmit messages: they summarise, recommend, predict, and &lt;em&gt;decide&lt;/em&gt;.&lt;/p&gt; &lt;p&gt;In this talk we frame AI as &lt;em&gt;information infrastructure&lt;/em&gt;. We connect today’s digital world with the deeper question from &lt;em&gt;The Atomic Human&lt;/em&gt;: what remains uniquely human when machines can mimic so much of what we do?&lt;/p&gt; &lt;p&gt;To make good decisions we need to clarify the real opportunities and challenges that sit between security and creativity: trust, autonomy, accountability, and the role of cultural institutions and art in how we adopt and deploy these powerful tools.&lt;/p&gt;</description>
        <pubDate>Mon, 02 Feb 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-marconi-bologna.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-marconi-bologna.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Translating AI into Practice</title>
        <description>&lt;p&gt;Translating machine learning from research prototypes into robust, reliable systems is one of the greatest challenges facing industry today. This talk explores the practical hurdles of deploying AI in real-world environments, drawing from over 25 years of experience in machine learning systems design at Amazon and insights from “The Atomic Human.”&lt;/p&gt; &lt;p&gt;We’ll examine the interface between machine learning and systems research, exploring how traditional software engineering practices must evolve to handle the unique challenges of data-dependent systems. The talk will cover deployment challenges, intellectual debt in ML systems, and the importance of continuous monitoring in production environments.&lt;/p&gt; &lt;p&gt;Central to the discussion is the human perspective: how do we build AI systems that complement rather than replace human expertise, particularly in critical domains like healthcare? We’ll explore trust, autonomy, and the essential role of human oversight in ensuring AI serves society’s needs while maintaining the reliability and safety standards that sectors like pharmaceuticals demand.&lt;/p&gt;</description>
        <pubDate>Mon, 12 Jan 2026 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/translating-ai-into-practice-astrazeneca.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/translating-ai-into-practice-astrazeneca.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;It seems that wherever we look, on TV, on the internet, or in newspapers, people are telling us how they are using AI to give us better experiences. But are they? What does that mean? Is it safe? Should we be concerned?&lt;/p&gt; &lt;p&gt;Artificial Intelligence refers to the ability of machines to perform tasks that typically require human intelligence. This includes capabilities like learning, problem-solving, decision-making, and perception. But as generative AI reshapes the technology landscape, we face fundamental questions about human-machine interaction.&lt;/p&gt; &lt;p&gt;In this talk I’ll discuss the limitations of artificial intelligence, why I find the notion of artificial general intelligence absurd, and how there’s a part of us that can never be replaced by the machine.&lt;/p&gt; &lt;p&gt;This talk is based on my book, The Atomic Human, published with Allen Lane.&lt;/p&gt;</description>
        <pubDate>Wed, 10 Dec 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-cses-aru-christmas.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-cses-aru-christmas.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI and Capability Shaping</title>
        <description>&lt;p&gt;Many employees see AI as a threat to their roles and value. But what if we’ve been thinking about AI wrong? Rather than replacing human judgment, AI can amplify the capabilities of hungry and humble teams who are willing to learn. This talk explores how companies that recognise human capital as their differentiator can use AI to enhance rather than diminish their people. Drawing on insights from “The Atomic Human,” we’ll examine how AI reshapes information flows in organisations, why trust and team collaboration become more, not less, important, and how leaders can build AI systems that embody institutional values. We’ll suggest that companies that win with AI are those that put their people first, creating belonging rather than displacement through technology.&lt;/p&gt;</description>
        <pubDate>Wed, 05 Nov 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-and-capability-shaping-servicenow.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-and-capability-shaping-servicenow.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Connecting Cambridge in AI</title>
        <description>&lt;p&gt;Artificial intelligence risks following a familiar pattern where technological innovation fails to address society’s most pressing problems. The UK’s experience with major IT projects - from the Horizon scandal to the £10 billion NHS Lorenzo failure - shows this disconnect. These weren’t just technical failures but failures to match supply and demand, to bridge between needs and solutions.&lt;/p&gt; &lt;p&gt;This talk examines this persistent gap through the lens of innovation economics and proposes an alternative: the attention reinvestment cycle. Rather than focusing solely on financial returns, this approach recognizes that efficiency gains can be measured through the liberation of human attention - our most precious resource. We’ll explore how Accelerate Science at Cambridge is putting this into practice, showcasing examples from small language model research (Pico) and autonomous research systems (CMBAgents) that demonstrate new pathways for AI to serve education, science, and society.&lt;/p&gt;</description>
        <pubDate>Tue, 21 Oct 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-education-summit-accelerate-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-education-summit-accelerate-science.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI and Security: From Bandwidth to Practical Implications</title>
        <description>&lt;p&gt;The evolution from classical security to AI-mediated security challenges represents a shift in how we think about protecting information systems. This talk explores the bandwidth limitations that create security vulnerabilities, introduces the Human Analogue Machine (HAM) from &lt;em&gt;The Atomic Human&lt;/em&gt; as “humans scaled up,” and examines practical security implications through three phases: classical security enhanced with GenAI, GenAI-specific security challenges, and broader information systems implications.&lt;/p&gt; &lt;p&gt;Through real-world examples including the Heathrow airport cyber-attack and Notion AI Agents research, we’ll examine how security thinking must evolve to address threats that exploit the very capabilities that make AI systems so powerful.&lt;/p&gt;</description>
        <pubDate>Wed, 24 Sep 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-and-security-talk.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-and-security-talk.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Information in the Age of AI</title>
        <description>&lt;p&gt;As AI technologies reshape the business landscape, leaders face questions about balancing automation with individual judgment, information flows, and organisational decision-making. This talk builds on the ideas in &lt;em&gt;The Atomic Human&lt;/em&gt; to explore the practical implications of AI for businesses through the lens of information topography, decision-making structures, and human-AI collaboration. Drawing from real-world examples and insights from &lt;em&gt;The Atomic Human&lt;/em&gt;, we’ll explore how businesses can strategically implement AI while maintaining human agency, intelligent accountability, and organisational effectiveness.&lt;/p&gt;</description>
        <pubDate>Thu, 04 Sep 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/information-in-the-age-of-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/information-in-the-age-of-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Leading with AI: Strategic Decision Making in the Age of Human-Analogue Machines</title>
        <description>&lt;p&gt;As AI technologies reshape the banking landscape, senior executives face questions about balancing automation with human judgment, information flows, and organizational decision-making. This keynote builds on the ideas in &lt;em&gt;The Atomic Human&lt;/em&gt; to explore the practical implications of AI for financial institutions through the lens of information topography, decision-making structures, and human-AI collaboration. Drawing from real-world examples and insights from the book, we’ll explore how banks can strategically implement AI while maintaining human agency, intelligent accountability, and organizational effectiveness in an industry where trust and judgment are paramount.&lt;/p&gt;</description>
        <pubDate>Tue, 15 Jul 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/leading-with-ai-lloyds-bank.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/leading-with-ai-lloyds-bank.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI Challenges and Opportunities</title>
        <description>&lt;p&gt;As AI technologies reshape the business landscape, leaders face questions about balancing automation with individual judgment, information flows, and organisational decision-making. This talk builds on the ideas in &lt;em&gt;The Atomic Human&lt;/em&gt; to explore the practical implications of AI for businesses through the lens of information topography, decision-making structures, and human-AI collaboration. Drawing from real-world examples and insights from the book, we’ll explore how businesses can strategically implement AI while maintaining human agency, intelligent accountability, and organisational effectiveness.&lt;/p&gt;</description>
        <pubDate>Thu, 10 Jul 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-celp.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-celp.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Educating the Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, &lt;em&gt;The Atomic Human&lt;/em&gt; reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but by ordinary people.&lt;/p&gt; &lt;p&gt;The thinking in this talk comes from Neil’s book &lt;em&gt;The Atomic Human&lt;/em&gt;.&lt;/p&gt; &lt;p&gt;Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt;</description>
        <pubDate>Fri, 04 Jul 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/educating-the-atomic-human-educationfest.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/educating-the-atomic-human-educationfest.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;In this talk Neil will introduce the notion of the Atomic Human, the indivisible component of our humanity that can’t be taken by the machine. He will argue that the algorithmic decision making that emerges from the machine offers a different place to stand and better understand our own intelligence and what is precious about us. Without this perspective we do risk displacing our human parts in favour of the machine, but by seeing our intelligence from the machine’s perspective we can ensure that we integrate these new technologies in ways that enhance who we are rather than replace who we are.&lt;/p&gt;</description>
        <pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-munich.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-munich.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Transformative Power of AI and its Challenges</title>
        <description>&lt;p&gt;As we enter an era in which machines mimic tasks that were traditionally undertaken by humans, CHROs face fundamental transformations in organisations, both in the roles of individuals and in the form of culture. Generative AI brings new challenges in how organizations manage, develop, and empower their workforce. Drawing on insights from &lt;em&gt;The Atomic Human&lt;/em&gt;, this session explores how the unique characteristics of human intelligence – our social context, cultural understanding, and ability to handle uncertainty – give us the foundations on which we will reshape the future of work and the CHRO function.&lt;/p&gt;</description>
        <pubDate>Tue, 17 Jun 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-transformative-power-of-ai-and-its-challenges.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-transformative-power-of-ai-and-its-challenges.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Towards AI that Works for Everyone</title>
        <description>&lt;p&gt;Despite significant advances in machine learning technologies, public dialogues indicate that artificial intelligence is failing to deliver in the areas of most importance to UK citizens. In this talk we examine why this may be the case and, using case studies of previous digital deployments, suggest that there may be a dislocation between the macroeconomic incentives for deployment and microeconomic demand. As a result, the innovation flywheel stalls for many of the domains we care about most, leading to a new productivity paradox in which the fruits of new technology are unevenly distributed through society. We examine an alternative model of innovation deployment that we call the attention reinvestment cycle.&lt;/p&gt;</description>
        <pubDate>Mon, 16 Jun 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/towards-ai-that-works-for-everyone.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/towards-ai-that-works-for-everyone.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Business and the Atomic Human</title>
        <description>&lt;p&gt;As AI technologies reshape the business landscape, leaders face questions about balancing automation with individual judgment, information flows, and organisational decision-making. This talk builds on the ideas in &lt;em&gt;The Atomic Human&lt;/em&gt; to explore the practical implications of AI for businesses through the lens of information topography, decision-making structures, and human-AI collaboration. Drawing from real-world examples and insights from the book, we’ll explore how businesses can strategically implement AI while maintaining human agency, intelligent accountability, and organisational effectiveness.&lt;/p&gt;</description>
        <pubDate>Wed, 11 Jun 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/business-and-the-atomic-human.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/business-and-the-atomic-human.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Mind the Gap: Bridging Innovation’s Supply and Demand in the AI Era</title>
        <description>&lt;p&gt;Despite significant advances in machine learning technologies, public dialogues indicate that artificial intelligence is failing to deliver in the areas of most importance to UK citizens. In this talk we examine why this may be the case and, using case studies of previous digital deployments, suggest that there may be a dislocation between the macroeconomic incentives for deployment and microeconomic demand. As a result, the innovation flywheel stalls for many of the domains we care about most, leading to a new productivity paradox in which the fruits of new technology are unevenly distributed through society. We examine an alternative model of innovation deployment that we call the attention reinvestment cycle.&lt;/p&gt;</description>
        <pubDate>Tue, 10 Jun 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/mind-the-gap-manchester.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/mind-the-gap-manchester.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human and Africa’s Digital Future</title>
        <description>&lt;p&gt;&lt;em&gt;The Atomic Human&lt;/em&gt; suggests that humans are defined by their limitations and how we overcome them. Overcoming these limitations builds on our cultures and communities. Modern institutions are adapted to European models of community interaction that, when imposed on the African context, don’t allow the strengths and aspirations of local people to emerge. The new wave of AI offers an opportunity for people to build institutions that respect their own cultural contexts and align with the aspirations of people from across the countries, cities, towns and villages of the African continent.&lt;/p&gt;</description>
        <pubDate>Thu, 05 Jun 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-africa-digital-future.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-africa-digital-future.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Human-Machine Collaboration in the AI Era</title>
        <description>&lt;p&gt;The latest wave of machine learning technology doesn’t just represent advancements in algorithms and capabilities - it represents a fundamental shift in how humans and machines can interact. For decades, we’ve adapted our workflows and thinking to accommodate what computers could do. Now, we’re entering an era where machines can adapt to us, where natural language interfaces allow humans to express their needs directly, and where the collaboration between humans and machines can reach new heights of productivity and creativity.&lt;/p&gt; &lt;p&gt;This talk explores how financial institutions and investors can leverage these technologies to enhance decision-making, reduce cognitive burden, and create more intuitive workflows that put humans back in control of technology.&lt;/p&gt;</description>
        <pubDate>Wed, 28 May 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/human-machine-collaboration.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/human-machine-collaboration.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The AI Value Chain: Research and Policy Priorities</title>
        <description>&lt;p&gt;A healthy relationship between society and technology would be one where both shape each other. As innovation proceeds, policymakers face complex decisions about how to govern these technologies. This keynote examines the AI value chain from technical research through development, deployment, and societal impact.&lt;/p&gt; &lt;p&gt;The talk explores the dynamics that shape AI governance: the bandwidth disparity between human and machine, the evolution of societal information flow and the necessary shift from productivity-driven to attention-driven innovation models. These dynamics create power imbalances that traditional policy approaches struggle to address.&lt;/p&gt; &lt;p&gt;By comparing the traditional productivity flywheel with the emerging attention reinvestment cycle, we identify why conventional macroeconomic interventions may not connect effectively to microeconomic incentives in AI markets. The talk concludes with research and policy priorities that can help bridge this gap, ensuring AI development serves broader societal goals through both technical solutions and democratic engagement.&lt;/p&gt;</description>
        <pubDate>Tue, 06 May 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-value-chain-research-policy-priorities.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-value-chain-research-policy-priorities.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI Opportunities and Challenges</title>
        <description>&lt;p&gt;As generative AI reshapes the technology landscape, the public sector faces questions about human-machine interaction and organizational adaptation. This session explores how the economics of attention, human bandwidth limitations, and information flows affect AI implementation in organizations. Through examining real-world examples and emerging patterns, we’ll develop practical approaches for technology leaders to navigate this rapidly evolving landscape.&lt;/p&gt;</description>
        <pubDate>Thu, 01 May 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-opportunities-and-challenges-may-2025.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-opportunities-and-challenges-may-2025.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Jaynes’ World</title>
        <description>&lt;p&gt;The relationship between physical systems and intelligence has long fascinated researchers in computer science and physics. This talk explores fundamental connections between thermodynamic systems and intelligent decision-making through the lens of free energy principles.&lt;/p&gt; &lt;p&gt;We examine how concepts from statistical mechanics - particularly the relationship between total energy, free energy, and entropy - might provide novel insights into the nature of intelligence and learning. By drawing parallels between physical systems and information processing, we consider how measurement and observation can be viewed as processes that modify available energy. The discussion encompasses how model approximations and uncertainties might be understood through thermodynamic analogies, and explores the implications of treating intelligence as an energy-efficient state-change process.&lt;/p&gt; &lt;p&gt;While these connections remain speculative, they offer a potential shared language for discussing the emergence of natural laws and societal systems through the lens of information.&lt;/p&gt;</description>
        <pubDate>Tue, 15 Apr 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/information-engines-sorrento.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/information-engines-sorrento.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Information Topography</title>
        <description>&lt;p&gt;Physical landscapes are shaped by elevation, valleys, and peaks. We might expect that information landscapes are molded by entropy, precision, and capacity constraints. To explore how these ideas might manifest, we introduce Jaynes’ world, an entropy game that maximises instantaneous entropy production.&lt;/p&gt; &lt;p&gt;In this talk we’ll argue that this landscape has a precision/capacity trade-off that suggests the underlying configuration requires a density matrix representation.&lt;/p&gt;</description>
        <pubDate>Mon, 14 Apr 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/information-topography.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/information-topography.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI that Serves Science, Citizens and Society</title>
        <description>&lt;p&gt;Despite its transformative potential, artificial intelligence risks following a well-worn path where technological innovation fails to address society’s most pressing problems. The UK’s experience with major IT projects shows this disconnect: from the Horizon scandal’s wrongful prosecutions to the £10 billion failure of the NHS Lorenzo project. These weren’t only technical failures but failures to bridge the gap between needs and the solutions provided.&lt;/p&gt; &lt;p&gt;This talk examines how we can ensure AI truly serves citizens, science, and society. We’ll explore why conventional approaches to technology deployment continue to fall short and propose changes needed to build human-centered AI innovation that delivers real benefits while maintaining human agency and values.&lt;/p&gt;</description>
        <pubDate>Thu, 10 Apr 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-that-serves-science-citizens-and-society.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-that-serves-science-citizens-and-society.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Bridging BeSci &amp;amp; Data Science in the UN</title>
        <description>&lt;p&gt;How can behavioural science and data science come together to drive meaningful change in the UN? This advanced session will explore innovative and existing approaches to using BeSci and data for efficiency gains, informed decision-making, and strategic insights. Professor Neil Lawrence (University of Cambridge) will provide opening remarks, setting the stage for a discussion on human-centred data science in UN contexts. Colleagues from UNHCR and UNICEF will then share real-world examples, discuss the challenges of integrating data and BeSci, and highlight the value of partnerships and research. From overcoming data constraints to investing in the right skill sets, this session will examine how to maximise BeSci and analytics for scalable impact, particularly in resource-sensitive environments.&lt;/p&gt;</description>
        <pubDate>Wed, 09 Apr 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/bridging-besci-and-data-science-at-the-un.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/bridging-besci-and-data-science-at-the-un.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI Cannot Replace the Atomic Human</title>
        <description>&lt;p&gt;Despite its transformative potential, artificial intelligence risks following a well-worn path where technological innovation fails to address society’s most pressing problems.&lt;/p&gt; &lt;p&gt;This talk examines this persistent gap through a lens that’s inspired by innovation economics. We argue that traditional market mechanisms have failed to map macro-level interventions to the micro-level societal needs. We’ll explore why conventional approaches to technology deployment continue to fall short and propose radical changes needed to ensure that AI truly serves citizens, science, and society.&lt;/p&gt;</description>
        <pubDate>Fri, 04 Apr 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-cannot-replace-the-atomic-human.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-cannot-replace-the-atomic-human.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Information Engines</title>
        <description>&lt;p&gt;The relationship between physical systems and intelligence has long fascinated researchers in computer science and physics. This talk explores fundamental connections between thermodynamic systems and intelligent decision-making through the lens of free energy principles.&lt;/p&gt; &lt;p&gt;We examine how concepts from statistical mechanics - particularly the relationship between total energy, free energy, and entropy - might provide novel insights into the nature of intelligence and learning. By drawing parallels between physical systems and information processing, we consider how measurement and observation can be viewed as processes that modify available energy. The discussion encompasses how model approximations and uncertainties might be understood through thermodynamic analogies, and explores the implications of treating intelligence as an energy-efficient state-change process.&lt;/p&gt; &lt;p&gt;While these connections remain speculative, they offer intriguing perspectives for discussing the fundamental nature of intelligence and learning systems. The talk aims to stimulate discussion about these potential relationships rather than present definitive conclusions.&lt;/p&gt;</description>
        <pubDate>Wed, 26 Mar 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/information-engines.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/information-engines.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Educating the Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, &lt;em&gt;The Atomic Human&lt;/em&gt; reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but by ordinary people.&lt;/p&gt; &lt;p&gt;The thinking in this talk comes from Neil’s book &lt;em&gt;The Atomic Human&lt;/em&gt;, published by Allen Lane in June 2024. The questions raised in this talk will be around how we educate the atomic human in the age of AI.&lt;/p&gt; &lt;p&gt;Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt;</description>
        <pubDate>Tue, 18 Mar 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/educating-the-atomic-human-cambridge-assessment.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/educating-the-atomic-human-cambridge-assessment.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Humans in the AI World</title>
        <description>&lt;p&gt;As we enter an era in which machines mimic tasks that were traditionally undertaken by humans, CHROs face fundamental transformations in organisations, both in the roles of individuals and in the form of culture. Generative AI brings new challenges in how organizations manage, develop, and empower their workforce. Drawing on insights from &lt;em&gt;The Atomic Human&lt;/em&gt;, this session explores how the unique characteristics of human intelligence – our social context, cultural understanding, and ability to handle uncertainty – give us the foundations on which we will reshape the future of work and the CHRO function.&lt;/p&gt;</description>
        <pubDate>Fri, 14 Mar 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/humans-in-the-ai-world.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/humans-in-the-ai-world.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Educating the Atomic Human</title>
        <description>&lt;p&gt;How do we assimilate artificial intelligence technologies in a way that respects the essence of humanity? The answer lies in education, but the nature of education will also radically change in the face of these technologies. In this shifting landscape we first step back and explore the essence of being human in the age of AI.&lt;/p&gt; &lt;p&gt;Drawing parallels between AI as a modern philosopher’s stone we argue we are facing a battle between attention capture and reinvestment, and explain how education and skills are at the heart of preserving our human essence within that battle.&lt;/p&gt;</description>
        <pubDate>Mon, 03 Mar 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/educating-the-atomic-human-kings.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/educating-the-atomic-human-kings.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI Opportunities and Challenges</title>
        <description>&lt;p&gt;As generative AI reshapes the technology landscape, CTOs and technology leaders face fundamental questions about human-machine interaction and organizational adaptation. This session explores how the economics of attention, human bandwidth limitations, and information flows affect AI implementation in organizations. Through examining real-world examples and emerging patterns, we’ll develop practical approaches for technology leaders to navigate this rapidly evolving landscape.&lt;/p&gt;</description>
        <pubDate>Thu, 13 Feb 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-opportunities-and-challenges.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-opportunities-and-challenges.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What I can do that AI Can’t</title>
        <description>&lt;p&gt;In this talk Neil will talk about the limitations of artificial intelligence, why he finds the notion of artificial general intelligence absurd, and how there’s a part of us that can never be replaced by the machine.&lt;/p&gt;</description>
        <pubDate>Tue, 11 Feb 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/what-i-can-do-that-ai-cant-digital-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/what-i-can-do-that-ai-cant-digital-science.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;In this talk Neil will talk about the limitations of artificial intelligence, why he finds the notion of artificial general intelligence absurd, and how there’s a part of us that can never be replaced by the machine.&lt;/p&gt;</description>
        <pubDate>Tue, 04 Feb 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-cfi.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-cfi.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What An I can do that AI can’t</title>
        <description>&lt;p&gt;In this talk Neil will talk about the limitations of artificial intelligence, why he finds the notion of artificial general intelligence absurd, and how there’s a part of us that can never be replaced by the machine.&lt;/p&gt;</description>
        <pubDate>Tue, 28 Jan 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/what-an-i-can-do-that-ai-cant.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/what-an-i-can-do-that-ai-cant.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Neil D. Lawrence’s visionary book shows why these fears may be misplaced.&lt;/p&gt;</description>
        <pubDate>Sat, 18 Jan 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-lewes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-lewes.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Neil D. Lawrence’s visionary book shows why these fears may be misplaced.&lt;/p&gt;</description>
        <pubDate>Wed, 15 Jan 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-bbc.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-bbc.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Achievement and the Atomic Human</title>
        <description>&lt;p&gt;In this talk Neil will reflect on the political themes that arose in his book, &lt;em&gt;The Atomic Human&lt;/em&gt;: where the challenges arise from and where the solutions might lie.&lt;/p&gt; &lt;p&gt;Broadly speaking, we face challenges in the modern information topography from both corporate and government entities. The answers may lie in an improved form of democratic institutionalism, but this in turn implies that the individuals who make up our society, from citizens to professionals, are empowered rather than disempowered by the technology.&lt;/p&gt;</description>
        <pubDate>Tue, 07 Jan 2025 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-mst-2.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-mst-2.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Neil D. Lawrence’s visionary book shows why these fears may be misplaced.&lt;/p&gt;</description>
        <pubDate>Tue, 17 Dec 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/atomic-human-astrazeneca.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/atomic-human-astrazeneca.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Revisiting the Revisiting of the Revisit of the 2014 NeurIPS Experiment</title>
        <description>&lt;p&gt;In 2014, along with Corinna Cortes, I was Program Chair of the Neural Information Processing Systems conference. At the time, when wondering about innovations for the conference, Corinna and I decided it would be interesting to test the consistency of reviewing. With this in mind, we randomly selected 10% of submissions and had them reviewed by two independent committees. In this talk I will review the construction of the experiment, explain how the NeurIPS review process worked, and talk about what I felt the implications for reviewing were versus what the community reaction was. The talk was originally given in 2021, when the long-term impact of papers was measured by seven years of citations. Here we augment the results with citations from today, 2024, nearly a decade after the papers were published.&lt;/p&gt;</description>
        <pubDate>Fri, 13 Dec 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/revisiting-the-revisiting-of-the-revisit-of-the-2014-neurips-experiment.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/revisiting-the-revisiting-of-the-revisit-of-the-2014-neurips-experiment.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Educating the Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people.&lt;/p&gt; &lt;p&gt;The thinking in this talk comes from Neil’s forthcoming book to be published with Allen Lane in June 2024. The questions raised in this talk will be around how we educate the atomic human in the age of AI.&lt;/p&gt; &lt;p&gt;Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt;</description>
        <pubDate>Wed, 04 Dec 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/educating-the-atomic-human-queen-elizabeths.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/educating-the-atomic-human-queen-elizabeths.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Mind the Gap: Bridging Innovation’s Supply and Demand in the AI Era</title>
        <description>&lt;p&gt;Despite its transformative potential, artificial intelligence risks following a well-worn path where technological innovation fails to address society’s most pressing problems. The UK’s experience with major IT projects shows this disconnect: from the Horizon scandal’s wrongful prosecutions to the £10 billion failure of the NHS Lorenzo project. These weren’t only technical failures: they were failures to bridge between needs and the solutions provided, failures to match supply and demand.&lt;/p&gt; &lt;p&gt;This misalignment persists in AI development: in 2017, the Royal Society’s Machine Learning Working Group conducted research with Ipsos MORI to explore citizens’ aspirations for AI. It showed a strong desire for AI to tackle challenges in health, education, security, and social care, while showing explicit disinterest in AI-generated art. Yet seven years later, while AI has made remarkable progress in emulating human creative tasks, the demand in these other areas remains unfulfilled.&lt;/p&gt; &lt;p&gt;This talk examines this persistent gap through a lens inspired by innovation economics. We argue that traditional market mechanisms have failed to map macro-level interventions to micro-level societal needs. We’ll explore why conventional approaches to technology deployment continue to fall short and propose the radical changes needed to ensure that AI truly serves citizens, science, and society.&lt;/p&gt;</description>
        <pubDate>Tue, 03 Dec 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/mind-the-gap-briding-innovations-supply-and-demand-in-the-ai-era.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/mind-the-gap-briding-innovations-supply-and-demand-in-the-ai-era.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Science and the Atomic Human</title>
        <description>&lt;p&gt;In this talk Neil will reflect on the interaction between science and our approach to AI from the particular perspective of his book, The Atomic Human.&lt;/p&gt; &lt;p&gt;Neil will introduce how he sees the human as unique in the context of the machine, but also how some of our scientific perspectives may have undermined our understanding of this uniqueness. Fortunately, these new forms of automated decision-making, which we commonly refer to as “artificial intelligence”, also give us a scientific “place to stand” from which to introspect and reflect on who we are and what makes us special.&lt;/p&gt;</description>
        <pubDate>Thu, 28 Nov 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/science-and-the-atomic-human.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/science-and-the-atomic-human.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>How Do We Cope with Rapid Change Like AI/ML?</title>
        <description>&lt;p&gt;Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include the practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.&lt;/p&gt;</description>
        <pubDate>Tue, 12 Nov 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/how-do-we-cope-with-rapid-change-like-ai-ml-2024.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/how-do-we-cope-with-rapid-change-like-ai-ml-2024.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. For engineering consultants, understanding these differences is crucial for developing effective solutions that leverage both human and machine capabilities appropriately.&lt;/p&gt;</description>
        <pubDate>Fri, 08 Nov 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-cambridge-consultants.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-cambridge-consultants.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data First Culture</title>
        <description>&lt;p&gt;Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits. But the real challenges may start once this is complete. In this talk we consider the challenge of bringing technology to bear on the problems we care about, how to “bridge the innovation economy”.&lt;/p&gt;</description>
        <pubDate>Thu, 07 Nov 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-first-culture-november-24.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-first-culture-november-24.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we are having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;The talk is based on Neil’s book, The Atomic Human, published by Allen Lane.&lt;/p&gt;</description>
        <pubDate>Fri, 01 Nov 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-zangwill.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-zangwill.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt;</description>
        <pubDate>Thu, 17 Oct 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-max-planck-lecture.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-max-planck-lecture.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Neil D. Lawrence’s visionary book shows why these fears may be misplaced.&lt;/p&gt;</description>
        <pubDate>Sun, 13 Oct 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-new-scientist-live.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-new-scientist-live.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;I’ll contrast our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence and speculate on what it means for our future.&lt;/p&gt;</description>
        <pubDate>Fri, 11 Oct 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-schmidt-retreat.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-schmidt-retreat.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Neil D. Lawrence’s visionary book shows why these fears may be misplaced.&lt;/p&gt;</description>
        <pubDate>Sat, 28 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-alumni.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-alumni.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt;</description>
        <pubDate>Thu, 26 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-mst-1.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-mst-1.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Intelligent Machines and Humans</title>
        <description>&lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;This session will introduce the ideas in The Atomic Human and open them up to a Q&amp;amp;A session.&lt;/p&gt;</description>
        <pubDate>Thu, 19 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-ypo-global-leaders.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-ypo-global-leaders.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;What if machines could think like humans? Can AI truly understand us? Ever wondered how AI will shape our future?&lt;/p&gt; &lt;p&gt;Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge where he is also the academic lead of AI@Cam, the University’s flagship mission on AI. He has been working on machine learning models for over 25 years. He returned to academia in 2019 after three years as Director of Machine Learning at Amazon. He is also a Senior AI Fellow at the Alan Turing Institute, visiting Professor at the University of Sheffield and author of the book The Atomic Human: Understanding Ourselves in the Age of AI.&lt;/p&gt;</description>
        <pubDate>Wed, 18 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-google.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-google.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;An informal reception will follow the talk.&lt;/p&gt;</description>
        <pubDate>Mon, 16 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-elevated-nyc.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-elevated-nyc.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;The talk will be followed by a Q&amp;amp;A with David Mindell.&lt;/p&gt;</description>
        <pubDate>Fri, 13 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-mit-museum.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-mit-museum.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;In this conversation Neil will discuss with Mary Gray the origins of these ideas and how we can better shape the conversation around AI.&lt;/p&gt;</description>
        <pubDate>Thu, 12 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-msr-ne.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-msr-ne.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;An informal reception will follow the talk. This event is kindly hosted at the office of the CAm Bay Area Advisory Committee member Ronjon Nag (Wolfson 1984).&lt;/p&gt;</description>
        <pubDate>Wed, 11 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-utah-valley-university.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-utah-valley-university.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;The Cambridge Bay Area Ring and Cambridge in America (CAm) invite you to a book talk and reception with Professor Neil D. Lawrence, author of The Atomic Human: Understanding Ourselves in the Age of AI, a visionary new book on the evolution of human and machine intelligence.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;An informal reception will follow the talk. This event is kindly hosted at the office of the CAm Bay Area Advisory Committee member Ronjon Nag (Wolfson 1984).&lt;/p&gt;</description>
        <pubDate>Tue, 10 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-cambridge-ring.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-cambridge-ring.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt;</description>
        <pubDate>Fri, 06 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-ckiwfest.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-ckiwfest.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Launching RSS: Data Science and Artificial Intelligence</title>
        <description>&lt;p&gt;The Royal Statistical Society is proud to launch its new journal, RSS: Data Science and Artificial Intelligence. This journal aims to unify various data science fields and provide a platform for high-quality papers with broad interest across AI, ML, statistics, bioinformatics, econometrics, and more.&lt;/p&gt;</description>
        <pubDate>Wed, 04 Sep 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/rss-data-science-and-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/rss-data-science-and-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;I’ll contrast our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence and speculate on what it means for our futures.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s book, The Atomic Human, published with Allen Lane.&lt;/p&gt;</description>
        <pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-ellis-summer-school.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-ellis-summer-school.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s forthcoming book to be published with Allen Lane in June 2024. Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda.&lt;/p&gt; &lt;p&gt;In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. 
These include the practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.&lt;/p&gt;</description>
        <pubDate>Wed, 10 Jul 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-ai-in-education.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-ai-in-education.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Reacting, Fast and Slow</title>
        <description>&lt;p&gt;What is fundamental to our intelligence? In this talk, building on the ideas in &lt;em&gt;The Atomic Human&lt;/em&gt;, I argue that the external world is key to our intelligence, and show how that world is filtered before we perceive it. This leads to the Eisenhower illusion: we feel ourselves in charge, but we are in fact reliant on fast-reacting systems.&lt;/p&gt;</description>
        <pubDate>Wed, 10 Jul 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/reacting-fast-and-slow.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/reacting-fast-and-slow.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;What if machines could think like humans? Can AI truly understand us? Ever wondered how AI will shape our future?&lt;/p&gt; &lt;p&gt;Discover some of the answers with Neil Lawrence, one of the world’s foremost experts in AI and machine learning. In this insightful talk, Neil Lawrence will reveal how AI serves as a powerful assistant to human intelligence, not a replacement. He will discuss the limits of AI in replicating human thought and its profound impact on society and information management.&lt;/p&gt; &lt;p&gt;Additionally, the talk will explore our society’s fascination and fears about AI, examining its influence on human identity. Lawrence will give an overview of the current state of AI, the challenges we face, and the importance of transparency and data quality. This session will offer valuable insights into the real-world applications of AI and its future.&lt;/p&gt; &lt;p&gt;Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge, where he is also the academic lead of AI@Cam, the University’s flagship mission on AI. He has been working on machine learning models for over 25 years. He returned to academia in 2019 after three years as Director of Machine Learning at Amazon. He is also a Senior AI Fellow at the Alan Turing Institute, visiting Professor at the University of Sheffield and author of the book The Atomic Human: Understanding Ourselves in the Age of AI.&lt;/p&gt;</description>
        <pubDate>Sun, 07 Jul 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-summer-of-science-2.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-summer-of-science-2.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;What if machines could think like humans? Can AI truly understand us? Ever wondered how AI will shape our future?&lt;/p&gt; &lt;p&gt;Discover some of the answers with Neil Lawrence, one of the world’s foremost experts in AI and machine learning. In this insightful talk, Neil Lawrence will reveal how AI serves as a powerful assistant to human intelligence, not a replacement. He will discuss the limits of AI in replicating human thought and its profound impact on society and information management.&lt;/p&gt; &lt;p&gt;Additionally, the talk will explore our society’s fascination and fears about AI, examining its influence on human identity. Lawrence will give an overview of the current state of AI, the challenges we face, and the importance of transparency and data quality. This session will offer valuable insights into the real-world applications of AI and its future.&lt;/p&gt; &lt;p&gt;Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge, where he is also the academic lead of AI@Cam, the University’s flagship mission on AI. He has been working on machine learning models for over 25 years. He returned to academia in 2019 after three years as Director of Machine Learning at Amazon. He is also a Senior AI Fellow at the Alan Turing Institute, visiting Professor at the University of Sheffield and author of the book The Atomic Human: Understanding Ourselves in the Age of AI.&lt;/p&gt;</description>
        <pubDate>Fri, 05 Jul 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-summer-of-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-summer-of-science.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Educating the Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people.&lt;/p&gt; &lt;p&gt;The thinking in this talk comes from Neil’s forthcoming book to be published with Allen Lane in June 2024. The questions raised in this talk will be around how we educate the atomic human in the age of AI.&lt;/p&gt; &lt;p&gt;Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt;</description>
        <pubDate>Fri, 05 Jul 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/educating-the-atomic-human-pti.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/educating-the-atomic-human-pti.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;I’ll contrast our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence and speculate on what it means for our futures.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s forthcoming book to be published with Allen Lane in June 2024.&lt;/p&gt;</description>
        <pubDate>Thu, 27 Jun 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-ellis.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-ellis.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI needs to serve people, science, and society</title>
        <description>Artificial intelligence offers great promise, but we must ensure it does not deepen inequalities. Today we are setting out our vision for AI@Cam, a new flagship mission at the University of Cambridge.</description>
        <pubDate>Mon, 24 Jun 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-needs-to-serve-people-science-and-society-robust.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-needs-to-serve-people-science-and-society-robust.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI guardians: who holds power over our data</title>
        <description>&lt;p&gt;This event will explore ethics and bias in AI, and examine the need for diverse and inclusive data teams and decision-makers.&lt;/p&gt; &lt;p&gt;Chandrima Ganguly is a Data and AI Ethicist working in the CDAO. Their background is research in AI Ethics and Theoretical Physics, and they hold a PhD in the latter from the University of Cambridge. They are also a community organizer who has worked for many years in the gender justice space. They are passionate about discovering and collaboratively creating ways in which technology can help people come together to find ways of better existing in community with each other and wider society.&lt;/p&gt; &lt;p&gt;Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge where he is also the academic lead of AI@Cam, the University’s flagship mission on AI. He is also a Senior AI Fellow at the Alan Turing Institute, visiting Professor at the University of Sheffield and author of the forthcoming book The Atomic Human (release date 6th June 2024).&lt;/p&gt; &lt;p&gt;Sadiqah Musa has over 12 years of experience in data and analytics. She started her career analysing seismic data as an Interpretation Geophysicist. With the desire to expand her knowledge and expertise, Sadiqah moved into customer and behavioural analytics. She is currently working as a Senior Analytics Manager at Trustpilot and is the founder and CEO of Black in Data, where she is an advocate for increased representation of ethnic diversity within data.&lt;/p&gt; &lt;p&gt;Kenneth Benoit is Director of the Data Science Institute at the London School of Economics and Political Science, and Professor of Computational Social Science in the Department of Methodology. He is also Professor (Part-time) in the School of Politics and International Relations, Australian National University. He has previously held positions in the Department of Political Science at Trinity College Dublin and at the Central European University (Budapest).&lt;/p&gt;</description>
        <pubDate>Sat, 15 Jun 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-guardians-who-holds-power-over-our-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-guardians-who-holds-power-over-our-data.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human: Talk and Fireside Chat with James Marshall</title>
        <description>&lt;ul&gt; &lt;li&gt;Introduction by Professor James Marshall, Director of the Centre for Machine Intelligence&lt;/li&gt; &lt;li&gt;Talk by Professor Neil Lawrence on The Atomic Human&lt;/li&gt; &lt;li&gt;Fireside chat with Neil and James&lt;/li&gt; &lt;li&gt;Q&amp;amp;A&lt;/li&gt; &lt;li&gt;Book signing&lt;/li&gt; &lt;/ul&gt;</description>
        <pubDate>Fri, 07 Jun 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-sheffield.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-sheffield.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human: In Conversation with Gina Neff</title>
        <description>&lt;p&gt;Join us and ai@cam for the launch of &lt;em&gt;The Atomic Human: Understanding Ourselves in the Age of AI&lt;/em&gt;, a visionary new book from Neil D. Lawrence on the evolution of human and machine intelligence.&lt;/p&gt;</description>
        <pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-minderoo.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-minderoo.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Age of Generative AI</title>
        <description>&lt;ul&gt; &lt;li&gt;Societal Norms: working in the age of Generative AI&lt;/li&gt; &lt;li&gt;Impact on the people function&lt;/li&gt; &lt;li&gt;Tasking: Composition, performance, tracking&lt;/li&gt; &lt;/ul&gt;</description>
        <pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-age-of-generative-ai-june-2024-alp.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-age-of-generative-ai-june-2024-alp.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human: In Conversation with Sylvie Delacroix</title>
        <description>&lt;p&gt;The Centre for Data Futures celebrates the publication of ‘The Atomic Human’ by Professor Neil Lawrence with a cross-disciplinary conversation. Professor Sylvie Delacroix, Director of the Centre for Data Futures, will host a conversation with Neil about The Atomic Human, whose publication comes at a pivotal point in the so-called ‘AI revolution’. What does it take for us to not only choose but carve out a future where ‘AI is a tool for us’? Who does the ‘us’ stand for? Does the fact that this revolution is powered by a resource, data, that largely comes from us change anything?&lt;/p&gt;</description>
        <pubDate>Wed, 05 Jun 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-kings.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-kings.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI needs to serve people, science, and society</title>
        <description>Artificial intelligence offers great promise, but we must ensure it does not deepen inequalities. In this provocation we will argue that AI hasn’t delivered on what society has asked of it, but that new technologies mean it could.</description>
        <pubDate>Tue, 04 Jun 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-needs-to-serve-people-science-and-society-aru.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-needs-to-serve-people-science-and-society-aru.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Revisiting the Revisit of the 2014 NeurIPS Experiment</title>
        <description>&lt;p&gt;In 2014, along with Corinna Cortes, I was Program Chair of the Neural Information Processing Systems conference. At the time, when wondering about innovations for the conference, Corinna and I decided it would be interesting to test the consistency of reviewing. With this in mind, we randomly selected 10% of submissions and had them reviewed by two independent committees. In this talk I will review the construction of the experiment, explain how the NeurIPS review process worked, and discuss what I felt the implications for reviewing were versus what the community reaction was. The talk was originally given in 2021, when the long-term impact of papers was measured by seven years of citations. Here we augment the results with citations from today, 2024, nearly a decade after the papers were published.&lt;/p&gt;</description>
        <pubDate>Wed, 22 May 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-neurips-experiment-iwcv.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-neurips-experiment-iwcv.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What Makes Us Unique in the Age of AI</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people.&lt;/p&gt; &lt;p&gt;We are undergoing a cognitive revolution that is akin to the cosmological revolution Nicolaus Copernicus triggered in 1543. But just as the Earth became no less interesting because it isn’t at the centre of the cosmological universe, our intelligence is no less interesting because it doesn’t dominate the cognitive universe.&lt;/p&gt; &lt;p&gt;The thinking in this talk comes from Neil’s forthcoming book to be published with Allen Lane in June 2024. The main premise is that AI represents a cognitive revolution akin to the Copernican celestial revolution. But as we realise that we are not at the centre of the cognitive universe, we can understand better who we are and what is precious about us.&lt;/p&gt;</description>
        <pubDate>Fri, 17 May 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/what-makes-us-unique-in-the-age-of-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/what-makes-us-unique-in-the-age-of-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;ul&gt; &lt;li&gt;Societal Norms: working in the age of Generative AI&lt;/li&gt; &lt;li&gt;Impact on the people function&lt;/li&gt; &lt;li&gt;Tasking: Composition, performance, tracking&lt;/li&gt; &lt;/ul&gt;</description>
        <pubDate>Sat, 20 Apr 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-miss-tweed.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-miss-tweed.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Age of Generative AI</title>
        <description>&lt;ul&gt; &lt;li&gt;Societal Norms: working in the age of Generative AI&lt;/li&gt; &lt;li&gt;Impact on the people function&lt;/li&gt; &lt;li&gt;Tasking: Composition, performance, tracking&lt;/li&gt; &lt;/ul&gt;</description>
        <pubDate>Tue, 16 Apr 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-age-of-generative-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-age-of-generative-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Power and Pitfalls of Machine Learning in the Design of New Molecules</title>
        <description>&lt;p&gt;AI is currently revolutionising science. It is helping us to discover more promising candidates for therapeutic drugs; it is being used to generate more accurate weather forecasts; and it is at the forefront of efforts to tackle the biodiversity and climate crises. At this inaugural event, we explore the effects that AI is already having on science and those areas of science soon to be influenced.&lt;/p&gt; &lt;p&gt;Professor Charlotte Deane MBE delivers our plenary talk, “The power and pitfalls of Machine Learning in the design of new molecules”. Professor Deane is Executive Chair of the Engineering and Physical Sciences Research Council, and Professor of Structural Bioinformatics in the Department of Statistics at the University of Oxford, where she leads the Oxford Protein Informatics Group.&lt;/p&gt; &lt;p&gt;Following her talk, Professor Deane discusses “AI as the main driver for future science” in a panel session including:&lt;/p&gt; &lt;p&gt;Professor Louise Slater is Professor of Hydroclimatology and Tutorial Fellow at Hertford College, University of Oxford. Louise leads the Hydro-Climate Extremes Research Group, which develops computational approaches to detect, attribute and predict how changes in climate and land cover may affect water-related extremes and society.&lt;/p&gt; &lt;p&gt;Professor Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge. He has been working on machine learning models for over 20 years and recently returned to academia after three years as Director of Machine Learning at Amazon. His main interest is the interaction of machine learning with the physical world.&lt;/p&gt; &lt;p&gt;The panel is chaired by Professor Stephen Roberts, Professor of Machine Learning and Professorial Fellow at Somerville College, University of Oxford. Stephen is co-lead of Oxford’s Schmidt AI in Science Postdoctoral Fellowship Programme.&lt;/p&gt;</description>
        <pubDate>Wed, 10 Apr 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-power-and-pitfalls-of-machine-learning-in-the-design-of-new-molecules.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-power-and-pitfalls-of-machine-learning-in-the-design-of-new-molecules.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;I’ll contrast our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence and speculate on what it means for our futures.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s forthcoming book to be published with Allen Lane in June 2024.&lt;/p&gt;</description>
        <pubDate>Mon, 01 Apr 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-bellairs.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-bellairs.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;I’ll contrast our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence and speculate on what it means for our futures.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s forthcoming book to be published with Allen Lane in June 2024.&lt;/p&gt;</description>
        <pubDate>Thu, 28 Mar 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-vector.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-vector.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Law and the Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;I’ll contrast our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence and speculate on what it means for our future with a particular focus on the legal profession.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s forthcoming book to be published with Allen Lane in June 2024.&lt;/p&gt;</description>
        <pubDate>Tue, 19 Mar 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-kings-llm.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-kings-llm.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s forthcoming book to be published with Allen Lane in June 2024. Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda.&lt;/p&gt; &lt;p&gt;In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. 
These include the practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.&lt;/p&gt;</description>
        <pubDate>Tue, 12 Mar 2024 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-st-andrews.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-st-andrews.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Artificial Intelligence</title>
        <description>&lt;p&gt;Waves of automation have driven human advance, and each wave has required humans to adapt to the machine. The promise of AI is to launch new systems of automated intellectual endeavour that will be the first systems to adapt to us. In this talk I’ll introduce the notion of the atomic human and show how artificial intelligence may be a way to better understand human intelligence.&lt;/p&gt;</description>
        <pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/artificial-intelligence-ludgate.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/artificial-intelligence-ludgate.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Educating the Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but ordinary people.&lt;/p&gt; &lt;p&gt;The thinking in this talk comes from Neil’s forthcoming book to be published with Allen Lane in June 2024. The questions raised in this talk will be around how we educate the atomic human in the age of AI.&lt;/p&gt; &lt;p&gt;Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt;</description>
        <pubDate>Wed, 22 Nov 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/educating-the-atomic-human.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/educating-the-atomic-human.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>How Do We Cope with Rapid Change Like AI/ML?</title>
        <description>&lt;p&gt;Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.&lt;/p&gt;</description>
        <pubDate>Tue, 21 Nov 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/how-do-we-cope-with-rapid-change-like-ai-ml.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/how-do-we-cope-with-rapid-change-like-ai-ml.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data First Culture</title>
        <description>&lt;p&gt;Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits. But the real challenges may start once this is complete. In this talk we introduce the notion of ‘post digital transformation’: the challenges of doing business in a digital world.&lt;/p&gt;</description>
        <pubDate>Thu, 09 Nov 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-first-culture-november-23.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-first-culture-november-23.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Atomic Human</title>
        <description>&lt;p&gt;A vital perspective is missing from the discussions we’re having about Artificial Intelligence: what does it mean for our identity?&lt;/p&gt; &lt;p&gt;Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it’s what differentiates us. Fears of AI not only concern how it invades our digital lives, but also the implied threat of an intelligence that displaces us from our position at the centre of the world.&lt;/p&gt; &lt;p&gt;Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter down into ever smaller components: eventually we reach a point where a cut cannot be made (the Greek for uncuttable is ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.&lt;/p&gt; &lt;p&gt;By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities and limitations of AI systems, and how they should be wielded. Not just by the experts, but by ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.&lt;/p&gt; &lt;p&gt;This talk is based on Neil’s forthcoming book to be published with Allen Lane in June 2024. Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda.&lt;/p&gt; &lt;p&gt;In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.&lt;/p&gt;</description>
        <pubDate>Mon, 23 Oct 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-atomic-human-churchill.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-atomic-human-churchill.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Decision Making in the Era of Generative AI</title>
        <description>The excitement around large language models such as ChatGPT has led to great enthusiasm about the possibilities for computer aided decision making across the professions. In this talk, we will look at why these models are important and speculate on how they may help and hinder progress in automated decision making.</description>
        <pubDate>Mon, 02 Oct 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/decision-making-in-the-era-of-generative-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/decision-making-in-the-era-of-generative-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Communications and Digital Committee</title>
        <description>&lt;p&gt;The Communications and Digital Committee is launching an inquiry that will examine large language models and what needs to happen over the next 1–3 years to ensure the UK can respond to their opportunities and risks. This will involve evaluating the work of Government and regulators, examining how well this addresses current and future technological capabilities, and reviewing the implications of approaches taken elsewhere in the world.&lt;/p&gt;</description>
        <pubDate>Tue, 12 Sep 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/health/lords-evidence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/health/lords-evidence.html</guid>
        
        
        <category>health</category>
        
      </item>
    
      <item>
        <title>What is the Future for Probability in the Era of Generative AI?</title>
        <description>&lt;p&gt;In this talk I will speculate on what the current generation of generative AI technologies means for those of us who have been building probabilistic models in machine learning. In particular, I’ll explore what these models mean at the human computer interface, suggesting that the generative AI models allow for a new type of computer, a “human-analogous machine” (HAM), which constructs a feature space that is analogous to the equivalent “feature space” we use in our head for human reasoning. This allows for these machines to be much more robust to the types of ambiguity typically expressed by humans and to present the most salient information to humans about the status of a machine system. However, it also allows for what Daniel Dennett has referred to as “counterfeit humans”. This presents new opportunities for those in probabilistic modelling to understand what it means for a human to gain a calibrated understanding of uncertainty through interacting with a HAM.&lt;/p&gt;</description>
        <pubDate>Fri, 21 Jul 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/what-is-the-future-for-probability-in-the-era-of-generative-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/what-is-the-future-for-probability-in-the-era-of-generative-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data First Culture</title>
        <description>&lt;p&gt;Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits. But the real challenges may start once this is complete. In this talk we introduce the notion of ‘post digital transformation’: the challenges of doing business in a digital world.&lt;/p&gt;</description>
        <pubDate>Thu, 08 Jun 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-first-culture-june-23.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-first-culture-june-23.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Being Human in the Age of AI</title>
        <description>&lt;p&gt;Waves of automation have driven human advance, and each wave requires humans to adapt to the machine. The promise of AI is to be the first wave of automation that will adapt to us.&lt;/p&gt; &lt;p&gt;As this promise seems to be coming close to being fulfilled we are seeing that the machine is gaining new capabilities that we previously thought of as unique to us, so where does this leave the human being in the age of AI?&lt;/p&gt; &lt;p&gt;In this talk Neil will argue that rather than supplanting our intelligence, AI provides a new lens with which we can better understand ourselves. He’ll argue that rather than our identity being driven by our capabilities, it’s driven by our limitations and fragilities. And that if we take this seriously we see that the real role of AI can be not to make us transhuman, but more human.&lt;/p&gt;</description>
        <pubDate>Mon, 22 May 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/being-human-in-the-age-of-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/being-human-in-the-age-of-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Harnessing Data Science for Africa’s Socio-Economic Development</title>
        <description></description>
        <pubDate>Wed, 10 May 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/harnessing-data-science-for-africas-socio-economic-development.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/harnessing-data-science-for-africas-socio-economic-development.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Use or be Used: Regaining Control of AI</title>
        <description>&lt;p&gt;It’s said that Henry Ford’s customers wanted “a faster horse”. If Henry Ford were selling us artificial intelligence today, what would the customer call for, “a smarter human”? That’s certainly the picture of machine intelligence we find in science fiction narratives, but the reality of what we’ve developed is far more mundane.&lt;/p&gt; &lt;p&gt;Car engines produce prodigious power from petrol. Machine intelligences deliver decisions derived from data. In both cases the scale of consumption enables a speed of operation that is far beyond the capabilities of their natural counterparts. Unfettered energy consumption has consequences in the form of climate change. Does unbridled data consumption also have consequences for us?&lt;/p&gt; &lt;p&gt;If we devolve decision making to machines, we depend on those machines to accommodate our needs. If we don’t understand how those machines operate, we lose control over our destiny. Much of the debate around AI makes the mistake of seeing machine intelligence as a reflection of our intelligence. In this talk we argue that to control the machine we need to understand the machine, but to understand the machine we first need to understand ourselves.&lt;/p&gt;</description>
        <pubDate>Tue, 02 May 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/use-or-be-used.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/use-or-be-used.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI needs to serve people, science, and society</title>
        <description>As artificial intelligence becomes ubiquitous in our homes and workplaces, we need to develop a widespread understanding of what it is and how we use it in the interests of our societies. Neil will discuss how the artificial systems we have developed operate in a fundamentally different way to our own intelligence and how this difference in operational capability leads us to misunderstand the influence that decisions made by machine intelligence are having on our lives. Without this understanding we cannot take back control of those decisions from the machine. This will set the scene for approaches we are taking in Cambridge to address these challenges such as AI@Cam, the University’s flagship mission on AI.</description>
        <pubDate>Fri, 14 Apr 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-needs-to-serve-people-science-and-society-cais.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-needs-to-serve-people-science-and-society-cais.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI needs to serve people, science, and society</title>
        <description>Artificial intelligence offers great promise, but we must ensure it does not deepen inequalities. Today we are setting out our vision for AI@Cam, a new flagship mission at the University of Cambridge.</description>
        <pubDate>Tue, 14 Mar 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-needs-to-serve-people-science-and-society.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-needs-to-serve-people-science-and-society.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Caught by Surprise Retrospective: The Mechanistic Fallacy and Modelling How We Think</title>
        <description>&lt;p&gt;In this talk, I revisit a talk given at NeurIPS 2015 where I speculated about the next directions for ML. The aim is to find surprising things, so I’ll try to reconstruct what I was thinking at the time and compare that to what I think now.&lt;/p&gt; &lt;p&gt;In this talk we will discuss how our current set of modelling solutions relates to dual process models from psychology. By analogising with layered models of networks we first address the danger of focussing purely on mechanism (or biological plausibility) when discussing modelling in the brain. We term this idea the mechanistic fallacy. In an attempt to operate at a higher level of abstraction, we then take a conceptual approach and attempt to map the broader domain of mechanistic and phenomenological models to dual process ideas from psychology. It seems that System 1 is closer to phenomenological and System 2 is closer to mechanistic ideas. We will draw connections to surrogate modelling (also known as emulation) and speculate that one role of System 2 may be to provide additional simulation data for System 1.&lt;/p&gt;</description>
        <pubDate>Thu, 05 Jan 2023 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-mechanistic-fallacy-and-modelling-how-we-think-caught-by-surprise.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-mechanistic-fallacy-and-modelling-how-we-think-caught-by-surprise.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Understanding Artificial Intelligence</title>
        <description>&lt;p&gt;Though artificial intelligence is ubiquitous in our homes and workplaces, there is widespread misunderstanding of what it really is. Join Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, as he encourages us to reframe our view of AI.&lt;/p&gt; &lt;p&gt;Neil will discuss how the artificial systems we have developed operate in a fundamentally different way to our own intelligence. He will describe how this difference in operational capability leads us to misunderstand the influence that decisions made by machine intelligence are having on our lives. Without this understanding we cannot take back control of those decisions from the machine.&lt;/p&gt;</description>
        <pubDate>Wed, 30 Nov 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/understanding-ai-rothschild.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/understanding-ai-rothschild.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>How Engineers Solve Big and Difficult Problems Part 1: The Challenges/Opportunities Presented to Engineers by AI/ML</title>
        <description>&lt;p&gt;Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.&lt;/p&gt;</description>
        <pubDate>Mon, 14 Nov 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/how-engineers-solve-big-and-difficult-problems-part-1-the-challenge-opportunities-presented-to-engineers-by-ai-ml.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/how-engineers-solve-big-and-difficult-problems-part-1-the-challenge-opportunities-presented-to-engineers-by-ai-ml.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data First Culture</title>
        <description>&lt;p&gt;Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits. But the real challenges may start once this is complete. In this talk we introduce the notion of ‘post digital transformation’: the challenges of doing business in a digital world.&lt;/p&gt;</description>
        <pubDate>Thu, 10 Nov 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-first-culture-november-22.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-first-culture-november-22.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI for Science: An Oral Report from a Recent Dagstuhl Workshop</title>
        <description>&lt;p&gt;As part of the Accelerate Science programme with Jess Montgomery, the University of Tuebingen, the University of Wisconsin, and NYU we recently hosted a week-long programme at the Leibniz-Zentrum für Informatik in Dagstuhl, Germany. In this talk Neil will give an oral report on the discussions sharing some of the ideas presented at the meeting. The ideas will feed into a longer report on the area produced by the Accelerate Science team.&lt;/p&gt;</description>
        <pubDate>Tue, 01 Nov 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-for-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-for-science.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Emulation</title>
        <description>In this session we introduce the notion of emulation and systems modelling with Gaussian processes.</description>
        <pubDate>Tue, 13 Sep 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/emulation-2022.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/emulation-2022.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Organisational Data Science</title>
        <description>&lt;p&gt;In this talk we review the challenges in making an organisation data-driven in its decision making. Building on experience working within Amazon and providing advice through the Royal Society convened DELVE group we review challenges and solutions for improving the data capabilities of an institution. This talk is targeted at data-aware leaders working in an institution.&lt;/p&gt;</description>
        <pubDate>Sun, 04 Sep 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/organisational-data-science-cdei.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/organisational-data-science-cdei.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data First Culture</title>
        <description>&lt;p&gt;Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits. But the real challenges may start once this is complete. In this talk we introduce the notion of ‘post digital transformation’: the challenges of doing business in a digital world.&lt;/p&gt;</description>
        <pubDate>Thu, 07 Jul 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-first-culture-post-digital-transformation-and-intellectual-debt-july-22.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-first-culture-post-digital-transformation-and-intellectual-debt-july-22.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes: A Motivation and Introduction</title>
        <description>&lt;p&gt;Modern machine learning methods have driven significant advances in artificial intelligence, with notable examples coming from Deep Learning, enabling super-human performance in the game of Go and highly accurate prediction of protein folding, e.g. AlphaFold. In this talk we look at deep learning from the perspective of Gaussian processes. Deep Gaussian processes extend the notion of deep learning to propagate uncertainty alongside function values. We’ll explain why this is important and show some simple examples.&lt;/p&gt;</description>
        <pubDate>Fri, 17 Jun 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/deep-gaussian-processes-a-motivation-and-introduction-sheffield.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/deep-gaussian-processes-a-motivation-and-introduction-sheffield.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The AI Paradigm Shift: Machine Learning, Automated Decision Making and Modern Society</title>
        <description>&lt;p&gt;The term Artificial Intelligence means different things to different people, but we can distil some commonality across different expectations of the term. It seems that the word intelligence drives us to believe that this new approach to automation will be the first to adapt to us as humans rather than requiring us to adapt to it. This promise presents challenges, because machine learning technologies that underpin the revolution in artificial intelligence are not capable of adapting to humans as we are to each other. In this talk we introduce the challenges and overview the research directions we are taking to uncover the solutions.&lt;/p&gt;</description>
        <pubDate>Tue, 14 Jun 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-ai-paradigm-shift-machine-learning-automated-decision-making-and-modern-society.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-ai-paradigm-shift-machine-learning-automated-decision-making-and-modern-society.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data First Culture</title>
        <description>&lt;p&gt;Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits. But the real challenges may start once this is complete. In this talk we introduce the notion of ‘post digital transformation’: the challenges of doing business in a digital world.&lt;/p&gt;</description>
        <pubDate>Thu, 09 Jun 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-first-culture-post-digital-transformation-and-intellectual-debt-june-22.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-first-culture-post-digital-transformation-and-intellectual-debt-june-22.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Understanding Artificial Intelligence</title>
        <description>&lt;p&gt;Though artificial intelligence is ubiquitous in our homes and workplaces, there is widespread misunderstanding of what it really is. Join Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, as he encourages us to reframe our view of AI.&lt;/p&gt; &lt;p&gt;Neil will discuss how the artificial systems we have developed operate in a fundamentally different way to our own intelligence. He will describe how this difference in operational capability leads us to misunderstand the influence that decisions made by machine intelligence are having on our lives. Without this understanding we cannot take back control of those decisions from the machine.&lt;/p&gt;</description>
        <pubDate>Tue, 07 Jun 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/understanding-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/understanding-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes: A Motivation and Introduction</title>
        <description>&lt;p&gt;Modern machine learning methods have driven significant advances in artificial intelligence, with notable examples coming from Deep Learning, enabling super-human performance in the game of Go and highly accurate prediction of protein folding, e.g. AlphaFold. In this talk we look at deep learning from the perspective of Gaussian processes. Deep Gaussian processes extend the notion of deep learning to propagate uncertainty alongside function values. We’ll explain why this is important and show some simple examples.&lt;/p&gt;</description>
        <pubDate>Mon, 06 Jun 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/deep-gaussian-processes-a-motivation-and-introduction-bristol.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/deep-gaussian-processes-a-motivation-and-introduction-bristol.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The NeurIPS Experiment</title>
        <description>&lt;p&gt;In 2014, along with Corinna Cortes, I was Program Chair of the Neural Information Processing Systems conference. At the time, when wondering about innovations for the conference, Corinna and I decided it would be interesting to test the consistency of reviewing. With this in mind, we randomly selected 10% of submissions and had them reviewed by two independent committees. In this talk I will briefly review the construction of the experiment, explain how the NeurIPS review process worked and talk about what I felt the implications for reviewing were, vs what the community reaction was.&lt;/p&gt;</description>
        <pubDate>Tue, 10 May 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-neurips-experiment-snsf.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-neurips-experiment-snsf.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI and Data Trusts</title>
        <description>&lt;p&gt;Resolutely complementary to top-down regulation, bottom-up data trusts aim to ‘give a voice’ to data subjects whose choices when it comes to data governance are often reduced to binary, ill-informed consent. While the rights granted by instruments like the GDPR can be used as tools in a bid to shape possible data-reliant futures - such as better use of natural resources, medical care etc., their exercise is both demanding and unlikely to be as impactful when leveraged individually. The power that stems from aggregated data should be returned to individuals through the legal mechanism of trusts.&lt;/p&gt;</description>
        <pubDate>Mon, 02 May 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-and-data-trusts.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-and-data-trusts.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>My Experience with AI</title>
        <description>&lt;p&gt;In 2016 Geoff Hinton, one of the most respected researchers in AI, suggested ‘People should stop training radiologists now, it’s just completely obvious that within 5 years deep learning is going to do a lot better than radiologists, it might be 10 years, but we’ve got plenty of radiologists already.’&lt;/p&gt; &lt;p&gt;Geoff likes to court controversy, but how well does that statement stand up with time? Elon Musk also said that we’d have fully autonomous driving by 2018. What happened to that? Both these predictions are examples of the Great AI Fallacy. The idea that we have finally developed automation technology that can adapt to us. The reality is more mundane, but it doesn’t mean that AI can’t help.&lt;/p&gt; &lt;p&gt;In this talk I’ll give some sense of the challenges and the solutions.&lt;/p&gt;</description>
        <pubDate>Wed, 27 Apr 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/my-experience-with-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/my-experience-with-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Post-Digital Transformation, Decision Making and Intellectual Debt</title>
        <description>&lt;p&gt;Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits.&lt;/p&gt; &lt;p&gt;I’ll discuss how the artificial systems we have developed operate in a fundamentally different way to our own intelligence. I’ll describe how this difference in operational capability leads us to misunderstand the influence of the decisions made by machine intelligence.&lt;/p&gt; &lt;p&gt;Developing this understanding is important in integrating human decisions with those from the machine. These ideas are designed to help with the challenge of ‘post digital transformation’: doing business in a digital world.&lt;/p&gt;</description>
        <pubDate>Tue, 26 Apr 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/post-digital-transformation-decision-making-and-intellectual-debt.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/post-digital-transformation-decision-making-and-intellectual-debt.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Artificial Intelligence: Reclaiming Control</title>
        <description>&lt;p&gt;Though artificial intelligence is ubiquitous in our homes and workplaces, there is widespread misunderstanding of what it really is. Join us for this public lecture as Neil Lawrence, DeepMind Professor of Machine Learning, encourages us to reframe our view of AI.&lt;/p&gt; &lt;p&gt;He’ll discuss how the artificial systems we have developed operate in a fundamentally different way to our own intelligence. He’ll describe how this difference in operational capability leads us to misunderstand the influence that decisions made by machine intelligence are having on our lives. Without this understanding we cannot take back control of those decisions from the machine. Along the way, he’ll chat with fellow Cambridge University researchers about how we maximise the benefits of these technologies while minimising the harms.&lt;/p&gt;</description>
        <pubDate>Sat, 09 Apr 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-reclaiming-control.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-reclaiming-control.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Leveraging Opportunities of the 4th Industrial Revolution to Develop Successful Careers in the field of Healthcare: the pains and gains</title>
        <description></description>
        <pubDate>Fri, 25 Feb 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/leveraging-opportunities-of-the-fourth-industrial-revolution-to-develop-successful-careers-in-the-field-of-healthcare-the-pains-and-gains.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/leveraging-opportunities-of-the-fourth-industrial-revolution-to-develop-successful-careers-in-the-field-of-healthcare-the-pains-and-gains.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Organisational Data Science</title>
        <description>&lt;p&gt;In this talk we review the challenges in making an organisation data-driven in its decision making. Building on experience working within Amazon and providing advice through the Royal Society convened DELVE group we review challenges and solutions for improving the data capabilities of an institution. This talk is targeted at data-aware leaders working in an institution.&lt;/p&gt;</description>
        <pubDate>Wed, 12 Jan 2022 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/organisational-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/organisational-data-science.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI4ER Lecture: Projects</title>
        <description>&lt;p&gt;In this session we look at some projects from Data Science Africa and review challenges around ethical artificial intelligence from a perspective of data governance. We’ll give some background to how these challenges have emerged and then consider some solutions including the mechanism of data trusts and some pointers to work around data sharing in Africa for the Covid19 pandemic.&lt;/p&gt;</description>
        <pubDate>Thu, 25 Nov 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-governance-ai-for-er.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-governance-ai-for-er.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Mind and Machine Intelligence</title>
        <description>What is the nature of machine intelligence and how does it differ from humans? In this talk we introduce embodiment factors. They represent the extent to which our intelligence is locked inside us. The locked in nature of our intelligence makes us fundamentally different from the machine intelligences we are creating around us. Having summarized these differences we consider the Three Ds of machine learning system design: a set of considerations to take into account when building machine intelligences.</description>
        <pubDate>Thu, 11 Nov 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/mind-and-machine-intelligence-comenius.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/mind-and-machine-intelligence-comenius.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data First Culture</title>
        <description>&lt;p&gt;Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits. But the real challenges may start once this is complete. In this talk we introduce the notion of ‘post digital transformation’: the challenges of doing business in a digital world.&lt;/p&gt;</description>
        <pubDate>Thu, 11 Nov 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-first-culture-post-digital-transformation-and-intellectual-debt.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-first-culture-post-digital-transformation-and-intellectual-debt.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes: A Motivation and Introduction</title>
        <description>&lt;p&gt;Modern machine learning methods have driven significant advances in artificial intelligence, with notable examples coming from Deep Learning, enabling super-human performance in the game of Go and highly accurate prediction of protein folding e.g. AlphaFold. In this talk we look at deep learning from the perspective of Gaussian processes. Deep Gaussian processes extend the notion of deep learning to propagate uncertainty alongside function values. We’ll explain why this is important and show some simple examples.&lt;/p&gt;</description>
        <pubDate>Thu, 04 Nov 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/deep-gaussian-processes-a-motivation-and-introduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/deep-gaussian-processes-a-motivation-and-introduction.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Governance for Ethical AI</title>
        <description>&lt;p&gt;In this big picture session we look at the challenges around ethical artificial intelligence from a perspective of data governance. We’ll give some background to how these challenges have emerged and then consider some solutions including the mechanism of data trusts and some pointers to work around data sharing in Africa for the Covid19 pandemic.&lt;/p&gt;</description>
        <pubDate>Mon, 01 Nov 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-governance-for-ethical-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-governance-for-ethical-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI, Data Science and the Covid19 Pandemic</title>
        <description>&lt;p&gt;With the world watching case numbers increase and publics and policymakers scrutinising projections from epidemiological models, the covid-19 pandemic brought with it increased attention on the use of data to inform policy. Alongside this scrutiny came a new wave of interest in the ability of data and artificial intelligence (AI) to help tackle major scientific and social challenges: could our increasing ability to collect, combine and interrogate large datasets lead to new insights that unlock more effective policy responses?&lt;/p&gt;</description>
        <pubDate>Mon, 18 Oct 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-data-science-and-the-covid19-pandemic.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-data-science-and-the-covid19-pandemic.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Access, Assess and Address: A Pipeline for (Automated?) Data Science</title>
        <description>&lt;p&gt;Data Science is an emerging discipline that is being promoted as a universal panacea for the world’s desire to make better informed decisions based on the wealth of data that is available in our modern interconnected society. In practice data science projects often find it difficult to deliver. In this talk we will review efforts to drive data-informed decision making in real world examples, e.g., the UK’s early Covid19 pandemic response. We will introduce a framework for categorising the stages and challenges of the data science pipeline and relate it to the challenges we see when giving data driven answers to real world questions. We will speculate on where automation may be able to help but emphasise that automation in this landscape is challenging when so many issues remain for getting humans to do the job well.&lt;/p&gt;</description>
        <pubDate>Fri, 17 Sep 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/access-assess-address-a-pipeline-for-automated-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/access-assess-address-a-pipeline-for-automated-data-science.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Emulation</title>
        <description>In this session we introduce the notion of emulation and systems modeling with Gaussian processes.</description>
        <pubDate>Wed, 15 Sep 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/emulation.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/emulation.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Physical World</title>
        <description>&lt;p&gt;Machine learning technologies have underpinned the recent revolution in artificial intelligence. But at their heart, they are simply data driven decision making algorithms. While the popular press is filled with the achievements of these algorithms in important domains such as object detection in images, machine translation and speech recognition, there are still many open questions about how these technologies might be implemented in domains where we have existing solutions but we are constantly looking for improvements. Roughly speaking, we characterise this domain as “machine learning in the physical world.” How do we design, build and deploy machine learning algorithms that are part of a decision making system that interacts with the physical world around us? In particular, machine learning is a data driven endeavour, but real world systems are physical and mechanistic. In this talk we will introduce some of the challenges for this domain and propose some ways forward in terms of solutions.&lt;/p&gt;</description>
        <pubDate>Tue, 13 Jul 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ml-and-the-physical-world-tuebingen.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ml-and-the-physical-world-tuebingen.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Physical World</title>
        <description>&lt;p&gt;You can’t have trust without understanding. We must view the systems we build as our tools, because if we can’t manipulate these systems, then we are at risk of being manipulated by these systems. Inspired by the centrifugal governor, this talk describes how statistical emulation provides a possible root for giving understanding to complex AI systems at a level of abstraction that allows humans to view the system as a tool, rather than being a tool of the system.&lt;/p&gt;</description>
        <pubDate>Wed, 07 Jul 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ml-and-the-physical-world-trustworthy-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ml-and-the-physical-world-trustworthy-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>A Retrospective on the 2014 NeurIPS Experiment</title>
        <description>&lt;p&gt;In 2014, along with Corinna Cortes, I was Program Chair of the Neural Information Processing Systems conference. At the time, when wondering about innovations for the conference, Corinna and I decided it would be interesting to test the consistency of reviewing. With this in mind, we randomly selected 10% of submissions and had them reviewed by two independent committees. In this talk I will review the construction of the experiment, explain how the NeurIPS review process worked and talk about what I felt the implications for reviewing were, vs what the community reaction was.&lt;/p&gt;</description>
        <pubDate>Wed, 16 Jun 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-neurips-experiment.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-neurips-experiment.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI Can’t Fix This: Happenstance Data, Modelling, and the Covid19 Pandemic</title>
        <description>&lt;p&gt;With the world watching case numbers increase and publics and policymakers scrutinising projections from epidemiological models, the covid-19 pandemic brought with it increased attention on the use of data to inform policy. Alongside this scrutiny came a new wave of interest in the ability of data and artificial intelligence (AI) to help tackle major scientific and social challenges: could our increasing ability to collect, combine and interrogate large datasets lead to new insights that unlock more effective policy responses? Experiences from the DELVE Initiative, convened to bring data science to bear on covid-19 policy, suggest achieving this aim requires wider adoption of open data science methods to deploy data science and AI expertise and resources to tackle real-world problems.&lt;/p&gt;</description>
        <pubDate>Thu, 20 May 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-cant-fix-this-happenstance-data-modelling-and-the-covid19-pandemic.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-cant-fix-this-happenstance-data-modelling-and-the-covid19-pandemic.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Post-Digital Transformation: Intellectual Debt</title>
        <description>Digital transformation has offered the promise of moving from a manual decision-making world to a world where decisions can be rational, data-driven and automated. The first step to digital transformation is mapping the world of atoms (material, customers, logistic networks) into the world of bits. But the real challenges may start once this is complete. In this talk we introduce the notion of ‘post digital transformation’: the challenges of doing business in a digital world.</description>
        <pubDate>Mon, 17 May 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/post-digital-transformation-intellectual-debt.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/post-digital-transformation-intellectual-debt.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Physical World</title>
        <description>&lt;p&gt;Machine learning technologies have underpinned the recent revolution in artificial intelligence. But at their heart, they are simply data driven decision making algorithms. While the popular press is filled with the achievements of these algorithms in important domains such as object detection in images, machine translation and speech recognition, there are still many open questions about how these technologies might be implemented in domains where we have existing solutions but we are constantly looking for improvements. Roughly speaking, we characterise this domain as “machine learning in the physical world.” How do we design, build and deploy machine learning algorithms that are part of a decision making system that interacts with the physical world around us? In particular, machine learning is a data driven endeavour, but real world systems are physical and mechanistic. In this talk we will introduce some of the challenges for this domain and propose some ways forward in terms of solutions.&lt;/p&gt;</description>
        <pubDate>Wed, 05 May 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ml-and-the-physical-world-data-centric-engineering.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ml-and-the-physical-world-data-centric-engineering.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI Faith Panel Discussion</title>
        <description></description>
        <pubDate>Tue, 20 Apr 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-faith-panel-discussion.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-faith-panel-discussion.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Auto AI: Resolving Intellectual Debt in Complex Systems</title>
        <description>&lt;p&gt;Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. Our capability to deploy complex decision-making systems has improved, but our ability to explain them has reduced. This phenomenon is known as intellectual debt. The reality of deployed systems is they are constructed from interacting components of individual models. While a lot of focus has been on the explainability and reliability of an individual model, the real challenge is explainability and reliability of the entire system.&lt;/p&gt; &lt;p&gt;In this talk we introduce the concept of Auto AI and give a road map to achieving fair, explainable and transparent AI systems.&lt;/p&gt;</description>
        <pubDate>Tue, 23 Mar 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/auto-ai-resolving-intellectual-debt-in-complex-systems.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/auto-ai-resolving-intellectual-debt-in-complex-systems.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Interpretable Models</title>
        <description>&lt;p&gt;The great AI fallacy is that we are building the first generation of automation that will adapt to humans rather than humans adapting to it. The more sobering reality is that we are building complex algorithmic decision making systems that we are unable to explain. A FIT model is fair, interpretable and transparent. The machine learning community has placed effort into understanding how to improve interpretability of individual models, but the real challenge is how to build FIT systems. At the heart of the development of machine learning is the notion of separation of concerns, but this can obscure the real challenge which is responding to the human.&lt;/p&gt;</description>
        <pubDate>Tue, 09 Mar 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/interpretable-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/interpretable-models.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Uncertainty, Procrastination and Artificial Intelligence</title>
        <description>In this talk I will introduce the importance of uncertainty in decision making and describe how it provides a mathematical justification for procrastination through the game of Kappenball.</description>
        <pubDate>Mon, 01 Mar 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/uncertainty-procrastination-and-artificial-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/uncertainty-procrastination-and-artificial-intelligence.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Laplace’s Gremlin: Uncertainty and Artificial Intelligence</title>
        <description>&lt;p&gt;With breakthroughs in understanding images, translating language and transcribing speech, artificial intelligence promises to revolutionise the technological landscape. Machine learning algorithms are able to convert unstructured data into actionable knowledge. With the increasing impact of these technologies, society’s interest is also growing. The word &lt;em&gt;intelligence&lt;/em&gt; conjures notions of human-like capabilities. But are we really on the cusp of creating machines that match us? We associate intelligence with knowledge, but in this talk I will argue that the true marvel of our intelligence is the way it deals with ignorance. Despite the large strides forward we have made, I will argue that we have a long way to go to deliver on the promise of artificial intelligence. And it is a journey that our societies need to take together, not just as computer scientists, but by rediscovering the interdisciplinary spirit that Celsius, Linnaeus and their contemporaries did so much to demonstrate.&lt;/p&gt;</description>
        <pubDate>Thu, 11 Feb 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/laplaces-gremlin-uncertainty-and-artificial-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/laplaces-gremlin-uncertainty-and-artificial-intelligence.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI and the Future of Work</title>
        <description></description>
        <pubDate>Wed, 03 Feb 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ai-future-of-work-hsm.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ai-future-of-work-hsm.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Introduction to Machine Intelligence</title>
        <description>&lt;p&gt;With breakthroughs in understanding images, translating language and transcribing speech, artificial intelligence promises to revolutionise the technological landscape. Machine learning algorithms are able to convert unstructured data into actionable knowledge. With the increasing impact of these technologies, society’s interest is also growing. The word intelligence conjures notions of human-like capabilities. But are we really on the cusp of creating machines that match us? We associate intelligence with knowledge, but in this talk I will argue that the true marvel of our intelligence is the way it deals with ignorance. Despite the large strides forward we have made, I will argue that we have a long way to go to deliver on the promise of artificial intelligence. And it is a journey that science and artificial intelligence need to take together.&lt;/p&gt;</description>
        <pubDate>Tue, 02 Feb 2021 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/introduction-to-machine-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/introduction-to-machine-intelligence.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Trusts Salon</title>
        <description></description>
        <pubDate>Thu, 17 Dec 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ostrom-workshop.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ostrom-workshop.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>When Scientists Work with Government</title>
        <description>&lt;ul&gt; &lt;li&gt;&lt;p&gt;The Delve initiative is a group that was convened by the Royal Society to help provide data-driven insights about the pandemic, with an initial focus on exiting the first lockdown and particular interest in using the variation of strategies across different international governments to inform policy.&lt;/p&gt;&lt;/li&gt; &lt;li&gt;&lt;p&gt;Drawing from a multidisciplinary team of domain experts in policy, public health, economics, education, immunology, epidemiology, and social science, alongside statisticians, mathematicians, computer scientists and machine learning scientists, DELVE set out to provide advice and analysis that could feed into live policy decisions.&lt;/p&gt;&lt;/li&gt; &lt;li&gt;&lt;p&gt;The main philosophy of the Delve group was to follow the “Supply Chain of Ideas”, connecting scientific evidence to policy questions.&lt;/p&gt;&lt;/li&gt; &lt;/ul&gt;</description>
        <pubDate>Wed, 16 Dec 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/when-scientists-work-with-government.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/when-scientists-work-with-government.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AutoAI: Systems, Machine Learning and Mathematics</title>
        <description>&lt;p&gt;Deployed artificial intelligence solutions consist of interacting components often trained as the result of &lt;em&gt;supervised machine learning&lt;/em&gt;. Automatic training of these sub-components is known as AutoML. But the real world challenges of deployment consist of the monitoring of system performance in the real world, in terms of accuracy but also for fairness and bias. To make such systems easily maintainable there is a need for automation of the process of monitoring and redeploying models as well as checking the quality of the overall system decomposition. In contrast to AutoML, we call this system-wide approach “Auto AI”. This is the subject of my Turing Fellowship.&lt;/p&gt;</description>
        <pubDate>Wed, 16 Dec 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/auto-ai-systems-machine-learning-and-mathematics.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/auto-ai-systems-machine-learning-and-mathematics.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Accelerate-Spark Information Session</title>
        <description></description>
        <pubDate>Fri, 04 Dec 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/accelerate-overview.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/accelerate-overview.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Auto AI and Machine Learning Systems Design</title>
        <description>&lt;p&gt;It seems that we would like to design artificial intelligences, robust decision-making systems that understand the broader context of the decisions they are making, including the history and nature of human experience. At least, that is what the global hype around artificial intelligence implies we are doing. The reality is very different. In practice, we are designing and deploying data-driven decision-making systems within complex software systems with little to no understanding of the downstream implications. At the heart of the challenge is standard practice around the design and construction of modern, complex, software systems. In particular, we have resolved the challenge of the mythical person-month through separation of concerns: decomposition of the task into separate entities, each of which has defined inputs and outputs and each of which is normally developed and/or maintained by a single software team. The challenge with such large-scale software systems is that they have incredible complexity. Separation of concerns enables us to deal with such complexity through a decomposition of components. Unfortunately, this means that no team is ‘concerned’ with the overall operation of the system. Modern artificial intelligence is based on machine learning algorithms. In deployment these become components of the larger system that make decisions by observing historic data around those decisions and emulating those decisions through fitting mathematical functions to the data. The field of machine learning is closely related to statistics, but in contrast to statistics, less emphasis has traditionally been placed on the interpretability of model outputs or the validity of decisions in the sense of some form of ‘statistical truth’. This released the field from the constraints of the simpler models that statisticians have typically focussed on, but the success of these models has triggered a wave of head scratching around the fairness, explainability and transparency of such models (FET models). FET models are an active area of machine learning research with their own conference. The challenge we are interested in is deeper: FET systems. When separation of concerns has been deployed, even if an individual model is FET then there is no guarantee that the entire system of interacting components will be FET. That would require composition of our criteria for fairness, explainability and transparency. Other authors have already pointed out the challenges of &lt;em&gt;technical debt&lt;/em&gt; in machine learning systems. Technical debt is the challenge of building systems that are &lt;em&gt;maintainable&lt;/em&gt; in production without significant additional labour, but the deeper problem is one of &lt;em&gt;intellectual debt&lt;/em&gt;. We are deploying systems that are not &lt;em&gt;explainable&lt;/em&gt; in production without significant additional intellectual labour. This presentation is a call for help. We urgently need the expertise of the UK Systems Community around these issues to ensure we can construct safe, maintainable and explainable artificial intelligence solutions through FET systems.&lt;/p&gt;</description>
        <pubDate>Wed, 25 Nov 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/auto-ai-and-machine-learning-systems-design.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/auto-ai-and-machine-learning-systems-design.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Policy, Science and the Convening Power of Data</title>
        <description>&lt;p&gt;With the world watching case numbers increase and publics and policymakers scrutinising projections from epidemiological models, the covid-19 pandemic brought with it increased attention on the use of data to inform policy. Alongside this scrutiny came a new wave of interest in the ability of data and artificial intelligence (AI) to help tackle major scientific and social challenges: could our increasing ability to collect, combine and interrogate large datasets lead to new insights that unlock more effective policy responses? Experiences from the DELVE Initiative, convened to bring data science to bear on covid-19 policy, suggest achieving this aim requires wider adoption of open data science methods to deploy data science and AI expertise and resources to tackle real-world problems.&lt;/p&gt;</description>
        <pubDate>Tue, 24 Nov 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/policy-science-and-the-convening-power-of-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/policy-science-and-the-convening-power-of-data.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Science, Evidence and Government; Reflections on the Covid-19 Experience</title>
        <description>&lt;p&gt;This high-profile event brings together four senior academics from across the University of Cambridge who have all been advising policy-makers during the Covid-19 pandemic. The speakers will draw on their extensive experience of advising and being consulted by policy-makers, and will reflect on some of the lessons, debates and controversies associated with governmental responses to the pandemic. And they will consider what this episode tells us about the relationship between science, evidence and public policy in times of crisis.&lt;/p&gt;</description>
        <pubDate>Tue, 10 Nov 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/science-evidence-and-government-reflections-on-the-covid-19-experience.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/science-evidence-and-government-reflections-on-the-covid-19-experience.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Sharing and Data Trusts</title>
        <description>&lt;p&gt;Computational biologists know better than perhaps any other domain the importance of data sharing for progress in understanding complex decisions. Underlying the revolution in “artificial intelligence” is really a revolution in data. But when data is personal or has legal protections placed upon it, there are challenges to data sharing. In this talk we introduce the ideas behind data sharing and the model of data trusts, an approach to data sharing that relies on trust law to form its governance structure.&lt;/p&gt;</description>
        <pubDate>Tue, 20 Oct 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-sharing-and-data-trusts.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-sharing-and-data-trusts.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Deploying Machine Learning: Intellectual Debt and AutoAI</title>
        <description>&lt;p&gt;From the dawn of cybernetics, and across the last eight decades, we’ve worked to make machine learning methods successful. But now that these methods are being widely adopted we need to deal with the consequences of success. Many of those consequences can only be understood when a holistic approach to the machine learning problem is considered: the deployment of a method within a context for a particular objective. In this circumstance, it’s easy to see that questions of interpretability, fairness and transparency are all contextual. In this talk we summarize this challenge using Jonathan Zittrain’s term of &apos;intellectual debt&apos;, we discuss how it pans out in reality and how this challenge could be addressed using machine learning techniques to give us &apos;Auto AI&apos;. This work is sponsored by an ATI Senior AI Fellowship.&lt;/p&gt;</description>
        <pubDate>Tue, 06 Oct 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/deploying-machine-learning-systems-intellectual-debt-and-auto-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/deploying-machine-learning-systems-intellectual-debt-and-auto-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AI and Data Science</title>
        <description>&lt;p&gt;Waves of automation have driven human advance, and each wave has required humans to adapt to it. The promise of AI is to launch new systems of automated intellectual endeavour that will be the first systems to adapt to us. In reality, the systems we have will not achieve this, and it is the biological sciences that teach us this lesson most starkly. In this talk I will review some of the successes and challenges of AI and its deployment and propose practical visions for the future based on approaches that have worked in the past.&lt;/p&gt;</description>
        <pubDate>Tue, 22 Sep 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/health/ai-and-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/health/ai-and-data-science.html</guid>
        
        
        <category>health</category>
        
      </item>
    
      <item>
        <title>Will AI Make the Workplace - Wherever it is - More Equal?</title>
        <description>&lt;p&gt;COVID-19 has brought more flexible working, particularly homeworking, for many. Will those changes be sustained after the pandemic and allow previously excluded workers into the labour market? And how will the artificial intelligence revolution affect the jobs we do and who does them? With Drs Christopher Markou, Helen McCarthy, Neil Lawrence and Stella Pachidi. This event is taking place in partnership with The Hay Festival (&amp;lt;www.hayfestival.com&amp;gt;)&lt;/p&gt;</description>
        <pubDate>Sat, 19 Sep 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/will-ai-make-the-workplace-wherever-it-is-more-equal.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/will-ai-make-the-workplace-wherever-it-is-more-equal.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Deep GPs</title>
        <description>&lt;p&gt;In this talk we introduce deep Gaussian processes, an approach to stochastic process modelling that relies on the composition of individual stochastic processes.&lt;/p&gt;</description>
        <pubDate>Wed, 16 Sep 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/deep-gps.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/deep-gps.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>FIT Machine Learning Systems</title>
        <description>&lt;p&gt;As machine learning becomes more widely deployed, it is important that we understand what we have deployed. There has been a lot of focus in machine learning research on the fairness and interpretability of individual models, but less attention paid to how this fits into a wider machine learning system. In this talk I’ll motivate the importance of fair, interpretable and transparent machine learning systems. I’ll outline the challenges and highlight some of the directions we are considering to address these challenges. This work is sponsored by an Alan Turing Institute Senior AI Fellowship.&lt;/p&gt;</description>
        <pubDate>Tue, 15 Sep 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/fit-machine-learning-systems.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/fit-machine-learning-systems.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Introduction to Machine Learning Systems</title>
        <description>This notebook introduces some of the challenges of building machine learning data systems. It will introduce you to concepts around joining databases together. The storage and manipulation of data is at the core of machine learning systems and data science. The goal of this notebook is to introduce the reader to these concepts, not to authoritatively answer any questions about the state of Nigerian health facilities or Covid-19, but it may give you ideas about how to try to do that in your own country.</description>
        <pubDate>Fri, 24 Jul 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/ml-systems.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/ml-systems.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Open Challenges for Automated Machine Learning: Solving Intellectual Debt with AutoAI</title>
        <description>Machine learning models are deployed as part of wider systems where outputs of one model are consumed by other models. This composite structure for machine learning systems is the dominant approach for deploying artificial intelligence. Such deployed systems can be complex to understand; they bring with them intellectual debt. In this talk we’ll argue that the next frontier for automated machine learning is to move to automation of the systems design, going from AutoML to AutoAI.</description>
        <pubDate>Sat, 18 Jul 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/open-challenges-for-auto-ml-solving-intellectual-debt-with-auto-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/open-challenges-for-auto-ml-solving-intellectual-debt-with-auto-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Future of AI and Machine Learning</title>
        <description>&lt;p&gt;Machine learning technologies have driven a revolution in artificial intelligence. Our machines are now able to identify objects in images, transcribe spoken language, translate between languages and even generate text of their own. In this talk we consider what this means for the future of AI and our own intelligence, with a particular focus on the opportunities and pitfalls for businesses.&lt;/p&gt;</description>
        <pubDate>Wed, 10 Jun 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/future-of-ai-and-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/future-of-ai-and-machine-learning.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Great AI Fallacy</title>
        <description>&lt;p&gt;Artificial intelligence is a form of intellectual automation. The promise of artificial intelligence is that it will be the first generation of automation that adapts to humans, rather than humans having to adapt to it. I see no evidence that this is true, but this fallacy is having very real effects on the way we think about creating and deploying artificial intelligence solutions. In this talk I introduce the Great AI Fallacy and discuss strategies for deployment that pre-emptively deal with the problems it will trigger.&lt;/p&gt;</description>
        <pubDate>Tue, 21 Apr 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-great-ai-fallacy.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-great-ai-fallacy.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Intellectual Debt and the Death of the Programmer</title>
        <description>Technical debt is incurred when complex systems are rapidly deployed without due thought as to how they will be &lt;em&gt;maintained&lt;/em&gt;. Intellectual debt is incurred when complex systems are rapidly deployed without due thought to how they’ll be &lt;em&gt;explained&lt;/em&gt;. Both problems are pervasive in the design and deployment of large scale algorithmic decision making engines. In this talk we’ll review the origin of the problem, and propose a roadmap for obtaining solutions. It’s a journey that will require collaboration between industry, academia, third sector, and government.</description>
        <pubDate>Mon, 09 Mar 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/intellectual-debt-and-the-death-of-the-programmer-bbc.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/intellectual-debt-and-the-death-of-the-programmer-bbc.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Intellectual Debt and the Death of the Programmer</title>
        <description>Technical debt is incurred when complex systems are rapidly deployed without due thought as to how they will be &lt;em&gt;maintained&lt;/em&gt;. Intellectual debt is incurred when complex systems are rapidly deployed without due thought to how they’ll be &lt;em&gt;explained&lt;/em&gt;. Both problems are pervasive in the design and deployment of large scale algorithmic decision making engines. In this talk we’ll review the origin of the problem, and propose a roadmap for obtaining solutions. It’s a journey that will require collaboration between industry, academia, third sector, and government.</description>
        <pubDate>Fri, 14 Feb 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/intellectual-debt-and-the-death-of-the-programmer.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/intellectual-debt-and-the-death-of-the-programmer.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>R250: GP Intro</title>
        <description>In this talk we give an introduction to Gaussian processes for students who are interested in working with GPs for the R250 module.</description>
        <pubDate>Fri, 24 Jan 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/r250-gp-intro.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/r250-gp-intro.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Communication and Remote Working</title>
        <description></description>
        <pubDate>Thu, 23 Jan 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/communication-and-remote-working.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/communication-and-remote-working.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Coconut Science and the Supply Chain of Ideas</title>
        <description>&lt;p&gt;Ideas help companies innovate. Different businesses have different approaches to innovation. Some companies centralise their innovation; other companies deploy scientists close to the business. There are two types of business: those where the demand for ideas is driven by customer needs (customer led), and those where ideas are imposed by the business on the population (technology led). The focus in companies is on the generation of ideas, but this is an error. The focus should be on the supply chain of ideas: the process by which ideas are translated from their point of origin to solving a business task.&lt;/p&gt;</description>
        <pubDate>Wed, 22 Jan 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/coconut-science-and-the-supply-chain-of-ideas.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/coconut-science-and-the-supply-chain-of-ideas.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning and Emergency Medicine</title>
        <description></description>
        <pubDate>Thu, 09 Jan 2020 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/machine-learning-and-emergency-medicine.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/machine-learning-and-emergency-medicine.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>From Innovation to Deployment</title>
        <description>In this talk we introduce a five-year project funded by the UK’s Turing Institute to shift the focus from developing AI systems to deploying AI systems that are safe and reliable. The AI systems we are developing and deploying are based on interconnected machine learning components. There is a need for AI-assisted design and monitoring of these systems to ensure they perform robustly, safely and accurately in their deployed environment. We address the entire pipeline of AI system development, from data acquisition to decision making. Data Oriented Architectures are an ecosystem that includes system monitoring for performance, interpretability and fairness. This will enable us to move from individual component optimisation to full system monitoring and optimisation.</description>
        <pubDate>Wed, 04 Dec 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/from-innovation-to-deployment-turing-2.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/from-innovation-to-deployment-turing-2.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Perspectives on AI</title>
        <description></description>
        <pubDate>Mon, 02 Dec 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/perspectives-on-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/perspectives-on-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Naive Days</title>
        <description></description>
        <pubDate>Mon, 02 Dec 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/guest-lecture.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/guest-lecture.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Real World Machine Learning Challenges</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Thu, 28 Nov 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/real-world-machine-learning-challenges.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/real-world-machine-learning-challenges.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Debating IBM’s Project Debater</title>
        <description></description>
        <pubDate>Thu, 21 Nov 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/debating-project-debater.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/debating-project-debater.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Post Digital Transformation</title>
        <description>Artificial intelligence promises automated decision making that will alleviate and revolutionise the nature of work. In practice, we know from previous technological revolutions that new technologies often take time to percolate through to productivity. Robert Solow’s paradox saw “computers everywhere, except in the productivity statistics”. This session will equip attendees with an understanding of how to establish best practices around automated decision making. In particular, we will focus on the raw material of the AI revolution: the data.</description>
        <pubDate>Tue, 19 Nov 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/post-digital-transformation.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/post-digital-transformation.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>R250: GP Intro</title>
        <description>In this talk we give an introduction to Gaussian processes for students who are interested in working with GPs for the R250 module.</description>
        <pubDate>Thu, 14 Nov 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/r250-gp-intro.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/r250-gp-intro.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What is Artificial Intelligence?</title>
        <description>In this talk we give an introduction to what artificial intelligence technologies are doing today and how they are influencing business and society.</description>
        <pubDate>Mon, 11 Nov 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/what-is-artificial-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/what-is-artificial-intelligence.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data First Culture</title>
        <description>Artificial intelligence promises automated decision making that will alleviate and revolutionise the nature of work. In practice, we know from previous technological revolutions that new technologies often take time to percolate through to productivity. Robert Solow’s paradox saw “computers everywhere, except in the productivity statistics”. This session will equip attendees with an understanding of how to establish best practices around automated decision making. In particular, we will focus on the raw material of the AI revolution: the data.</description>
        <pubDate>Thu, 07 Nov 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-first-culture.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-first-culture.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning Systems Design</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Tue, 05 Nov 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/machine-learning-systems-design-cambridge-ai-group-seminar.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/machine-learning-systems-design-cambridge-ai-group-seminar.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>AutoAI</title>
        <description>Deployed artificial intelligence solutions consist of interacting components often trained as the result of &lt;em&gt;supervised machine learning&lt;/em&gt;. Automatic training of these sub-components is known as AutoML. But the real challenges of deployment lie in monitoring system performance in the real world, in terms of accuracy but also fairness and bias. To make such systems easily maintainable there is a need for automation of the process of monitoring and redeploying models, as well as checking the quality of the overall system decomposition. In contrast to AutoML, we call this system-wide approach “Auto AI”. This is the subject of my Turing Fellowship.</description>
        <pubDate>Wed, 30 Oct 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/auto-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/auto-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>From Data Subject to Data Citizen</title>
        <description>Resolutely complementary to top-down regulation, bottom-up data trusts aim to ‘give a voice’ to data subjects whose choices when it comes to data governance are often reduced to binary, ill-informed consent. While the rights granted by instruments like the GDPR can be used as tools in a bid to shape possible data-reliant futures - such as better use of natural resources, medical care etc. - their exercise is both demanding and unlikely to be as impactful when leveraged individually. The power that stems from aggregated data should be returned to individuals through the legal mechanism of trusts.</description>
        <pubDate>Mon, 28 Oct 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/from-data-subject-to-data-citizen.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/from-data-subject-to-data-citizen.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>From Innovation to Deployment</title>
        <description>In this talk we introduce a five-year project funded by the UK’s Turing Institute to shift the focus from developing AI systems to deploying AI systems that are safe and reliable. The AI systems we are developing and deploying are based on interconnected machine learning components. There is a need for AI-assisted design and monitoring of these systems to ensure they perform robustly, safely and accurately in their deployed environment. We address the entire pipeline of AI system development, from data acquisition to decision making. Data Oriented Architectures are an ecosystem that includes system monitoring for performance, interpretability and fairness. This will enable us to move from individual component optimisation to full system monitoring and optimisation.</description>
        <pubDate>Thu, 24 Oct 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/from-innovation-to-deployment.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/from-innovation-to-deployment.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What is Machine Learning?</title>
        <description>In this talk we will introduce the fundamental ideas in machine learning. We’ll develop our exposition around the ideas of the prediction function and the objective function. We don’t so much focus on the derivation of particular algorithms, but more on the general principles involved, to give an idea of the machine learning &lt;em&gt;landscape&lt;/em&gt;.</description>
        <pubDate>Mon, 21 Oct 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/what-is-machine-learning-ashesi.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/what-is-machine-learning-ashesi.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Future of AI</title>
        <description>Waves of automation have driven human advance, and each wave has required humans to adapt to it. The promise of AI is to launch new systems of automated intellectual endeavour that will be the first systems to adapt to us. In reality, the systems we have will not achieve this, and it is the biological sciences that teach us this lesson most starkly. In this talk I will review some of the successes and challenges of AI and its deployment and propose practical visions for the future based on approaches that have worked in the past.</description>
        <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/health/the-future-of-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/health/the-future-of-ai.html</guid>
        
        
        <category>health</category>
        
      </item>
    
      <item>
        <title>Machine Learning Systems Design</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Fri, 20 Sep 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/machine-learning-systems-design.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/machine-learning-systems-design.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Introduction to Deep Gaussian Processes</title>
        <description>In this talk we introduce deep Gaussian processes, describe what they are and what they are good for. Deep Gaussian process models make use of stochastic process composition to combine Gaussian processes together to form new models which are non-Gaussian in structure. They serve both as a theoretical model for deep learning and a functional model for regression, classification and unsupervised learning. The challenge in these models is propagating the uncertainty through the process.</description>
        <pubDate>Tue, 10 Sep 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/introduction-to-deep-gps.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/introduction-to-deep-gps.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Interpretable End-to-End Learning</title>
        <description>Practical artificial intelligence systems can be seen as algorithmic decision makers. The fractal nature of decision making implies that this involves interacting systems of components where decisions are made multiple times across different time frames. This affects the decomposability of an artificial intelligence system. Classical systems design relies on decomposability for efficient maintenance and deployment of machine learning systems; in this talk we consider the challenges of optimizing and maintaining such systems.</description>
        <pubDate>Wed, 26 Jun 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/interpretable-end-to-end-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/interpretable-end-to-end-learning.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning and Data Science</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Wed, 19 Jun 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/machine-learning-and-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/machine-learning-and-data-science.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning Systems Design</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Thu, 06 Jun 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-three-ds-of-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-three-ds-of-machine-learning.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What is Machine Learning?</title>
        <description>In this talk we will introduce the fundamental ideas in machine learning. We’ll develop our exposition around the ideas of the prediction function and the objective function. We don’t so much focus on the derivation of particular algorithms, but more the general principles involved to give an idea of the machine learning &lt;em&gt;landscape&lt;/em&gt;.</description>
        <pubDate>Mon, 03 Jun 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/what-is-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/what-is-machine-learning.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Narrowing the Intelligence Gap</title>
        <description>How are we making computers do the things we used to associate only with humans? Have we made a breakthrough in understanding human intelligence? While recent achievements might give the sense that the answer is yes, the short answer is that we are nowhere near. All we’ve achieved for the moment is a breakthrough in emulating intelligence. In this talk we discuss two differences between the artificial intelligence we’ve deployed and the natural intelligence we exhibit. Resolving one is a challenge of changing the way we do systems design, the other, we argue, is a more fundamental difference that may never be overcome.</description>
        <pubDate>Thu, 30 May 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/narrowing-the-intelligence-gap.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/narrowing-the-intelligence-gap.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Meta-Modelling and Deploying ML Software</title>
        <description>Data is not so much the new oil, it is the new software. Data driven algorithms are increasingly present in continuously deployed production software. What challenges does this present and how can the mathematical sciences help?</description>
        <pubDate>Thu, 23 May 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/meta-modelling-and-deploying-ml-software.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/meta-modelling-and-deploying-ml-software.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Modern Data Oriented Programming</title>
        <description>There has been a great deal of interest in probabilistic programs: placing modeling at the heart of the programming language. In this talk we set the scene for data oriented programming. Data is a fundamental component of machine learning, yet the availability, quality and discoverability of data are often ignored in formal computer science. While languages for data manipulation exist (for example SQL), they are not suitable for the modern world of machine learning data. Modern data oriented languages should place data at the center of modern digital systems design and provide an infrastructure in which monitoring of data quality and model decision making are automatically available. We provide the context for Modern Data Oriented Programming, and give some insight into our initial ideas in this space.</description>
        <pubDate>Tue, 21 May 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/modern-data-oriented-programming.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/modern-data-oriented-programming.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What is AI and What are the Implications of Advances in AI for Religion?</title>
        <description>What is artificial intelligence and what are the implications of advances in artificial intelligence for religion? How do artificial intelligences differ from natural intelligences? We consider these ideas from the perspective of information theory. In the context of these differences we then consider parallels between the perspectives on religion and AI, both in today’s popular culture and with a more optimistic perspective looking forward.</description>
        <pubDate>Fri, 17 May 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/what-is-ai-and-what-are-the-implications-of-advances-in-ai-for-religion.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/what-is-ai-and-what-are-the-implications-of-advances-in-ai-for-religion.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Towards Machine Learning Systems Design</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning for the modern artificial intelligence revolution that has dominated popular press headlines and is having a strong influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. Many of these lessons were first formed in computational biology; throughout the talk I’ll highlight connections I see, emphasizing the relevance of biological data analysis to real world data analysis.</description>
        <pubDate>Tue, 14 May 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/genomics/towards-ml-systems-design-lessons-from-comp-bio.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/genomics/towards-ml-systems-design-lessons-from-comp-bio.html</guid>
        
        
        <category>genomics</category>
        
      </item>
    
      <item>
        <title>Digital Disruption</title>
        <description>We look towards the future of digital disruption by considering the past of disruption, with a particular focus on the production and movement of goods. We introduce the notion of the ‘smith’, and consider how, by localizing the provision, or supply, a ‘smith’ can ensure high added value for their skills. Using analogies from &lt;em&gt;pull&lt;/em&gt; and &lt;em&gt;push&lt;/em&gt; supply chains, we argue that our future economy needs to include an environment where &lt;em&gt;smiths&lt;/em&gt; prosper. From craft coffee to craft software, to add value in a global marketplace we argue that we need to exploit localization.</description>
        <pubDate>Mon, 13 May 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/digital-disruption.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/digital-disruption.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Readiness Levels</title>
        <description>In this talk we consider data readiness levels and how they may be deployed.</description>
        <pubDate>Wed, 01 May 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-readiness-levels.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-readiness-levels.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Faith and AI: Introduction to Machine Learning</title>
        <description>What is artificial intelligence and what are the implications of advances in artificial intelligence for religion? In this talk we give a short introduction to the technology that&apos;s underpinning advances in artificial intelligence, machine learning. We then develop those ideas with a particular focus on how artificial intelligences differ from &lt;em&gt;natural&lt;/em&gt; intelligences. Next, we consider parallels between the perspectives on religion and AI in popular culture, initially with a &apos;cartoon view&apos;, but then diving deeper and reflecting on the shared drive for introspection that a mature approach to artificial intelligence and religion might bring.</description>
        <pubDate>Fri, 29 Mar 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/faith-and-ai-introduction-to-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/faith-and-ai-introduction-to-machine-learning.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Readiness Levels</title>
        <description>&lt;p&gt;In this brief talk we motivate Data Readiness Levels, an attempt to develop a language around data quality that can bridge the gap between technical solutions and decision makers such as managers and project planners.&lt;/p&gt;</description>
        <pubDate>Mon, 25 Feb 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-readiness-levels.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-readiness-levels.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Towards Machine Learning Systems Design</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning for the modern artificial intelligence revolution that has dominated popular press headlines and is having a strong influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Fri, 22 Feb 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/glasgow2019/towards-ml-systems-design.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/glasgow2019/towards-ml-systems-design.html</guid>
        
        
        <category>glasgow2019</category>
        
      </item>
    
      <item>
        <title>Data Science and Digital Systems</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Tue, 19 Feb 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-science-and-digital-systems.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-science-and-digital-systems.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>&lt;p&gt;Gaussian process models provide a flexible, non-parametric approach to modelling that sustains uncertainty about the function. However, computational demands and the joint Gaussian assumption make them inappropriate for some applications. In this talk we review low rank approximations for Gaussian processes and use stochastic process composition to create non-Gaussian processes. We illustrate the models on simple regression tasks to give a sense of how uncertainty propagates through the model. We end with demonstrations on unsupervised learning of digits and motion capture data.&lt;/p&gt;</description>
        <pubDate>Fri, 11 Jan 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/deep-gaussian-processes.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes</title>
        <description>&lt;p&gt;Classical machine learning and statistical approaches to learning, such as neural networks and linear regression, assume a parametric form for functions. Gaussian process models are an alternative approach that assumes a probabilistic prior over functions. This brings benefits, in that uncertainty of function estimation is sustained throughout inference, and some challenges: algorithms for fitting Gaussian processes tend to be more complex than parametric models. In this session I will introduce Gaussian processes and explain why sustaining uncertainty is important.&lt;/p&gt;</description>
        <pubDate>Wed, 09 Jan 2019 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/gaussian-processes.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Physical World</title>
        <description>&lt;p&gt;Machine learning is a data driven endeavour, but real world systems are physical and mechanistic. In this talk we will review approaches to integrating machine learning with real world systems. Our focus will be on emulation (otherwise known as surrogate modeling).&lt;/p&gt;</description>
        <pubDate>Mon, 10 Dec 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/machine-learning-and-the-physical-world.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/machine-learning-and-the-physical-world.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Science and Digital Systems</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Fri, 30 Nov 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-science-and-digital-systems.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-science-and-digital-systems.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Three Ds of Machine Learning</title>
        <description>&lt;p&gt;Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.&lt;/p&gt;</description>
        <pubDate>Thu, 15 Nov 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-three-ds-of-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-three-ds-of-machine-learning.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Bayesian Methods</title>
        <description>In this session we review the &lt;em&gt;probabilistic&lt;/em&gt; approach to machine learning. We start with a review of probability, and introduce the concepts of probabilistic modelling. We then apply the approach in practice to Naive Bayesian classification, reviewing the probabilistic formulation of a classification model, initially through maximum likelihood and then through the naive Bayes model.</description>
        <pubDate>Wed, 14 Nov 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/bayesian-methods-abuja.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/bayesian-methods-abuja.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Fairness and Diversity of Decision Making</title>
        <description>&lt;p&gt;Mathematical definitions of fairness insist on clearly categorized groups and clear mathematical interpretations of fairness. In law this arises through the concept of &lt;em&gt;unlawful&lt;/em&gt; discrimination. There is no such thing as a correct model. We must accept that our predictions will sometimes be wrong. In the face of this certainty we have a choice: how we should be wrong. We can choose to be wrong by over-simplifying or we can choose to be wrong by over-complicating (given the available data). In machine learning this is known as the bias-variance dilemma. In this talk we consider the implications of the bias-variance dilemma for fairness of decision making.&lt;/p&gt;</description>
        <pubDate>Thu, 08 Nov 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/fairness-and-diversity-of-decision-making.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/fairness-and-diversity-of-decision-making.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Physical World</title>
        <description>&lt;p&gt;Machine learning is a data driven endeavour, but real world systems are physical and mechanistic. In this talk we will review approaches to integrating machine learning with real world systems. Our focus will be on emulation (otherwise known as surrogate modeling).&lt;/p&gt;</description>
        <pubDate>Tue, 06 Nov 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/machine-learning-and-the-physical-world.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/machine-learning-and-the-physical-world.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Mind and Machine Intelligence</title>
        <description>What is the nature of machine intelligence and how does it differ from humans? In this talk we introduce embodiment factors. They represent the extent to which our intelligence is locked inside us. The locked in nature of our intelligence makes us fundamentally different from the machine intelligences we are creating around us. Having summarized these differences we consider the Three Ds of machine learning system design: a set of considerations to take into account when building machine intelligences.</description>
        <pubDate>Tue, 30 Oct 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/mind-and-machine-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/mind-and-machine-intelligence.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>You and AI Panel Debate</title>
        <description></description>
        <pubDate>Sun, 28 Oct 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/you-and-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/you-and-ai.html</guid>
        
        
      </item>
    
      <item>
        <title>Natural and Artificial Intelligence</title>
        <description>What is the nature of machine intelligence and how does it differ from humans? In this talk we explore embodiment factors, the extent to which our intelligence is locked in and how this makes us fundamentally different from the machine intelligences we are creating around us.</description>
        <pubDate>Thu, 18 Oct 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/natural-and-artificial-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/natural-and-artificial-intelligence.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>fAIth</title>
        <description>What is artificial intelligence and what are the implications of advances in artificial intelligence for society? In this talk we give a short introduction to the technology that’s underpinning advances in artificial intelligence, machine learning. We then develop those ideas with a particular focus on how artificial intelligences differ from &lt;em&gt;natural&lt;/em&gt; intelligences. Finally, we reflect on what the existence of different intelligences might mean for our experiences as humans.</description>
        <pubDate>Wed, 12 Sep 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/faith.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/faith.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Science and the Professions</title>
        <description>Machine learning methods and software are becoming widely deployed. But as we deploy algorithms that operate on individual data, how do we account for their effect on society? In terms of the practice of data science, we seem to be at a similar point today as software engineering was in the early 1980s. Best practice is not widely understood or deployed. One aspect of professions is trust. How can we bring trust to the data-sphere?</description>
        <pubDate>Wed, 05 Sep 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-science-and-the-professions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-science-and-the-professions.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description>&lt;p&gt;In this talk we introduce Gaussian process models. Motivating the representation of uncertainty through probability distributions we review Laplace&apos;s approach to understanding uncertainty and how uncertainty in functions can be represented through a multivariate Gaussian density.&lt;/p&gt;</description>
        <pubDate>Mon, 03 Sep 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/gpss-session-1.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/gpss-session-1.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Probabilistic Machine Learning</title>
        <description>In this talk we review the &lt;em&gt;probabilistic&lt;/em&gt; approach to machine learning. We start with a review of probability, and introduce the concepts of probabilistic modelling. We then apply the approach in practice to Naive Bayesian classification. Finally, we review the Bayesian formalism in the context of linear models, reviewing initially maximum likelihood and introducing basis functions as a way of driving non-linearity in the model.</description>
        <pubDate>Sat, 25 Aug 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/probabilistic-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/probabilistic-machine-learning.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Faith and AI</title>
        <description>What is artificial intelligence and what are the implications of advances in artificial intelligence for religion? In this talk we give a short introduction to the technology that’s underpinning advances in artificial intelligence, machine learning. We then develop those ideas with a particular focus on how artificial intelligences differ from &lt;em&gt;natural&lt;/em&gt; intelligences. Next, we consider parallels between the perspectives on religion and AI in popular culture, initially with a ‘cartoon view’, but then diving deeper and reflecting on the shared drive for introspection that a mature approach to artificial intelligence and religion might bring.</description>
        <pubDate>Thu, 31 May 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/faith-and-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/faith-and-ai.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Uncertainty in Loss Functions</title>
        <description>Bayesian formalisms deal with uncertainty in parameters; frequentist formalisms deal with the &lt;em&gt;risk&lt;/em&gt; of a data set, uncertainty in the data sample. In this talk, we consider uncertainty in the &lt;em&gt;loss function&lt;/em&gt;. We introduce uncertainty through linear weightings of terms in the loss function and show how a distribution over the loss can be maintained through the &lt;em&gt;maximum entropy principle&lt;/em&gt;. This allows us to minimize the expected loss under our maximum entropy distribution of the loss function. We recover weighted least squares and a LOESS-like regression from the formalism.</description>
        <pubDate>Tue, 29 May 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/uncertainty-in-loss-functions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/uncertainty-in-loss-functions.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Outlook for UK AI and Machine Learning</title>
        <description>Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include practical application of existing algorithms in the face of the need to explain decision making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.</description>
        <pubDate>Fri, 11 May 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/outlook-for-uk-ai-and-ml.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/outlook-for-uk-ai-and-ml.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Towards Machine Learning Systems Design</title>
        <description></description>
        <pubDate>Wed, 02 May 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/towards-machine-learning-systems-design.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/towards-machine-learning-systems-design.html</guid>
        
        
      </item>
    
      <item>
        <title>Decision Making and Diversity</title>
        <description></description>
        <pubDate>Mon, 30 Apr 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/decision-making-and-diversity.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/decision-making-and-diversity.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Challenges for Data Science in Healthcare</title>
        <description></description>
        <pubDate>Wed, 18 Apr 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/challenges-for-data-science-in-healthcare.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/challenges-for-data-science-in-healthcare.html</guid>
        
        
      </item>
    
      <item>
        <title>Natural and Artificial Intelligence</title>
        <description>What is the nature of machine intelligence and how does it differ from human intelligence? In this talk we explore some of the differences between natural and machine intelligence.</description>
        <pubDate>Thu, 29 Mar 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/on-natural-and-artificial-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/on-natural-and-artificial-intelligence.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Science: Time for Professionalisation?</title>
        <description>&lt;p&gt;Machine learning methods and software are becoming widely deployed. But how are we sharing expertise about bottlenecks and pain points in deploying solutions? In terms of the practice of data science, we seem to be at a similar point today as software engineering was in the early 1980s. Best practice is not widely understood or deployed. In this talk we will focus on two particular components of data science solutions: the preparation of data and the deployment of machine learning systems.&lt;/p&gt;</description>
        <pubDate>Tue, 27 Mar 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-science-time-for-professionalisation-lse.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-science-time-for-professionalisation-lse.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Machine Learning and Data Readiness Levels</title>
        <description>In this talk we will look at the challenges facing deployment of machine learning, with a particular focus on the reuse of data and data quality. We suggest data readiness levels as a mechanism for monitoring data quality.</description>
        <pubDate>Thu, 25 Jan 2018 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/machine-learning-and-data-readiness-levels.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/machine-learning-and-data-readiness-levels.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Deep Probabilistic Modelling with Gaussian Processes</title>
        <description>&lt;p&gt;Neural network models are algorithmically simple, but mathematically complex. Gaussian process models are mathematically simple, but algorithmically complex. In this tutorial we will explore Deep Gaussian Process models. They bring advantages in their mathematical simplicity but are challenging in their algorithmic complexity. We will give an overview of Gaussian processes and highlight the algorithmic approximations that allow us to stack Gaussian process models: they are based on variational methods. In the last part of the tutorial we will explore a use case exemplar: uncertainty quantification. We end with open questions.&lt;/p&gt;</description>
        <pubDate>Mon, 04 Dec 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/deep-probabilistic-modelling-with-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/deep-probabilistic-modelling-with-gaussian-processes.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Personalized Health: Challenges in Data Science</title>
        <description>The promise of personalized health is driven by the wide availability of data, but we need to talk less about where we want to be and more about how we should get there. What are the challenges that need to be bridged technologically to unlock the potential of the much greater availability of data we now have? In this talk we&apos;ll consider three challenges of data science in the context of personalized health, each of which must be bridged to bring the era of true precision, or personalized, medicine within the reach of an affordable health care service.
</description>
        <pubDate>Thu, 23 Nov 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cwiml17/personalized-health-challenges-in-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cwiml17/personalized-health-challenges-in-data-science.html</guid>
        
        
        <category>Lawrence-cwiml17</category>
        
      </item>
    
      <item>
        <title>Embodiment Factors and Privacy</title>
        <description>In this talk we will explore a fundamental limitation of human intelligence which, we argue, makes privacy absolutely critical. We will relate this to our machine intelligences and speculate about how there may be challenges at the interface. Finally we propose Data Trusts as a solution for these challenges.
</description>
        <pubDate>Thu, 26 Oct 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/embodiment-factors-and-privacy.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/embodiment-factors-and-privacy.html</guid>
        
        
      </item>
    
      <item>
        <title>Embodiment Factors and Privacy</title>
        <description>In this talk we will explore a fundamental limitation of human intelligence which, we argue, makes privacy absolutely critical. We will relate this to our machine intelligences and speculate about how there may be challenges at the interface. Finally we propose Data Trusts as a solution for these challenges.</description>
        <pubDate>Thu, 26 Oct 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/embodiment-factors-and-privacy.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/embodiment-factors-and-privacy.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Data Science: Time for Professionalisation?</title>
        <description>&lt;p&gt;Machine learning methods and software are becoming widely deployed. But how are we sharing expertise about bottlenecks and pain points in deploying solutions? In terms of the practice of data science, we seem to be at a similar point today as software engineering was in the early 1980s. Best practice is not widely understood or deployed. In this talk we will focus on two particular components of data science solutions: the preparation of data and the deployment of machine learning systems.&lt;/p&gt;</description>
        <pubDate>Fri, 13 Oct 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/data-science-time-for-professionalisation-odsc.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/data-science-time-for-professionalisation-odsc.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Living Together</title>
        <description>What is the nature of machine intelligence and how does it differ from human intelligence? In this talk we explore embodiment factors, the extent to which our intelligence is locked in, and how this makes us fundamentally different from the machine intelligences we are creating around us.
</description>
        <pubDate>Fri, 06 Oct 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tedx17/living-together.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tedx17/living-together.html</guid>
        
        
        <category>Lawrence-tedx17</category>
        
      </item>
    
      <item>
        <title>Where Next for AI?</title>
        <description>&lt;p&gt;Our current generation of artificial intelligence techniques is driven by data. But we also expect to be able to deploy artificial intelligence techniques on data. What does that mean? Is it a contradiction? How will this affect the wider technology landscape? Is it simply a matter of refining deep neural nets? Or are more disruptive technologies needed? What will be the challenges of deploying AI systems?&lt;/p&gt;</description>
        <pubDate>Tue, 03 Oct 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cwtec17/where-next-for-ai.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cwtec17/where-next-for-ai.html</guid>
        
        
        <category>Lawrence-cwtec17</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description>In this talk I will give a brief and intuitive introduction to Gaussian process models.
</description>
        <pubDate>Mon, 11 Sep 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpss17/gpss-session-1.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpss17/gpss-session-1.html</guid>
        
        
        <category>Lawrence-gpss17</category>
        
      </item>
    
      <item>
        <title>Cloaking Functions: Differential Privacy with Gaussian Processes</title>
        <description>&lt;p&gt;Processing of personally sensitive information should respect an individual’s privacy. One promising framework is Differential Privacy (DP). In this talk I’ll present work led by Michael Smith at the University of Sheffield on the use of cloaking functions to make Gaussian process (GP) predictions differentially private. Gaussian process models are flexible models with particular advantages in handling missing and noisy data. Our hope is that advances in DP for GPs will make it easier to ‘learn without looking,’ i.e. gain the advantages of prediction from patient data without impinging on their privacy.&lt;/p&gt; &lt;p&gt;Joint work with &lt;strong&gt;Michael T. Smith&lt;/strong&gt;, Max Zwiessele and Mauricio Alvarez&lt;/p&gt;</description>
        <pubDate>Wed, 30 Aug 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/cloaking-functions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/cloaking-functions.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What is Machine Learning?</title>
        <description>In this talk we provide an introduction and an overview to the field of machine learning. We emphasise the importance of data and the nature of modelling we carry out in machine learning. We briefly review the different settings, such as supervised, unsupervised and reinforcement learning.
</description>
        <pubDate>Mon, 17 Jul 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-dsa17/what-is-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-dsa17/what-is-machine-learning.html</guid>
        
        
        <category>Lawrence-dsa17</category>
        
      </item>
    
      <item>
        <title>Once Upon a Universal Standard Time: Embodiment and AI Narratives</title>
        <description>In this talk we consider a fundamental difference between human and machine intelligence: a ratio between their ability to compute and their ability to communicate that we refer to as the embodiment factor. Having suggested why this makes us fundamentally different, we speculate on implications for developing &lt;em&gt;narrative&lt;/em&gt; structure from data.</description>
        <pubDate>Thu, 13 Jul 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cfi17/once-upon-a-universal-standard-time.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cfi17/once-upon-a-universal-standard-time.html</guid>
        
        
        <category>Lawrence-cfi17</category>
        
      </item>
    
      <item>
        <title>Data Analytics Perspectives: Machine Learning</title>
        <description>In this talk we will first set out the landscape of machine learning, artificial intelligence and data science by describing what characteristics they share, and how they differ. We&apos;ll then shift focus to the promise and challenges associated with both Data Science and Artificial Intelligence, with particular attention paid to the potential for a &quot;data crisis&quot; and challenges in &quot;machine learning systems design&quot;.
</description>
        <pubDate>Thu, 29 Jun 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/data-analytics-perspectives.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/data-analytics-perspectives.html</guid>
        
        
      </item>
    
      <item>
        <title>Machine Learning, Technology and the Future of Intelligence</title>
        <description>The Leverhulme Centre for the Future of Intelligence is a fulcrum around which debate in intelligence technology can be joined by a wide range of interested experts. In this talk I&apos;ll give some perspectives on machine learning and my interactions with CFI.
</description>
        <pubDate>Mon, 26 Jun 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/machine-learning-technology-and-the-future-of-intelligence.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/machine-learning-technology-and-the-future-of-intelligence.html</guid>
        
        
      </item>
    
      <item>
        <title>Probabilistic Dimensionality Reduction</title>
        <description></description>
        <pubDate>Tue, 06 Jun 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-peppercorns17/probabilistic-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-peppercorns17/probabilistic-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-peppercorns17</category>
        
      </item>
    
      <item>
        <title>Peppercorns and Machine Learning System Design</title>
        <description>Machine learning is fundamental to two important technological domains, artificial intelligence and data science. In this talk we will attempt to make a simple definition to distinguish between the two, then we will focus on the challenges of machine learning in &lt;em&gt;application&lt;/em&gt; to artificial intelligence, particularly from the perspective of systems design. We expect a particular challenge to be the deployment of such systems in real environments, where unforeseen consequences of interaction with real-world conditions will produce embarrassing failures. Because these failures are not bugs, in that the system will be performing as designed, but failures of imagination on the part of the designers, we introduce a new term for them: ‘peppercorns’.</description>
        <pubDate>Fri, 02 Jun 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/peppercorns-and-machine-learning-systems-design.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/peppercorns-and-machine-learning-systems-design.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Peppercorns and Machine Learning System Design</title>
        <description>Machine learning is fundamental to two important technological domains, artificial intelligence and data science. In this talk we will attempt to make a simple definition to distinguish between the two, then we will focus on the challenges of machine learning in *application* to artificial intelligence, particularly from the perspective of systems design. We expect a particular challenge to be the deployment of such systems in real environments, where unforeseen consequences of interaction with real-world conditions will produce embarrassing failures. Because these failures are not bugs, in that the system will be performing as designed, but failures of imagination on the part of the designers, we introduce a new term for them: &apos;peppercorns&apos;.</description>
        <pubDate>Fri, 02 Jun 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-peppercorns17/peppercorns-and-machine-learning-system-design.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-peppercorns17/peppercorns-and-machine-learning-system-design.html</guid>
        
        
        <category>Lawrence-peppercorns17</category>
        
      </item>
    
      <item>
        <title>The Data Science Process</title>
        <description>In this talk we will focus on challenges in facilitating the data science pipeline. Drawing on experience from projects in computational biology, the developing world and Amazon I’ll propose different ideas for facilitating the data science process including analogies that help software engineers understand the challenges for data science and formalizations, such as data readiness levels, which allow management to reason about the obstacles in the process.</description>
        <pubDate>Wed, 10 May 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-dsp17/the-data-science-process.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-dsp17/the-data-science-process.html</guid>
        
        
        <category>Lawrence-dsp17</category>
        
      </item>
    
      <item>
        <title>The Data Science Process</title>
        <description>In this talk we will focus on challenges in facilitating the data science pipeline. Drawing on experience from projects in computational biology, the developing world and Amazon I’ll propose different ideas for facilitating the data science process including analogies that help software engineers understand the challenges for data science and formalizations, such as data readiness levels, which allow management to reason about the obstacles in the process.</description>
        <pubDate>Tue, 18 Apr 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-dsp17/the-data-science-process.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-dsp17/the-data-science-process.html</guid>
        
        
        <category>Lawrence-dsp17</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Data Science Process</title>
        <description>The current generation of machine learning technologies is powering new applications in artificial intelligence. This is presenting challenges and opportunities. In this talk we will focus on the challenge of constructing and deploying machine learning algorithms with a particular focus on two aspects: machine learning systems design and data readiness. We will also discuss implications and opportunities, with speculative thoughts on the nature of artificial intelligence in future devices and what new opportunities and challenges this may present.</description>
        <pubDate>Thu, 30 Mar 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-oxwasp17/machine-learning-and-the-data-science-process.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-oxwasp17/machine-learning-and-the-data-science-process.html</guid>
        
        
        <category>Lawrence-oxwasp17</category>
        
      </item>
    
      <item>
        <title>The rise of the algorithm - artificial intelligence, ethics, trust and tech development</title>
        <description>How do notions of human trust extend to the digital world? And what challenges could that present for our society?</description>
        <pubDate>Thu, 16 Mar 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/the-rise-of-the-algorithm.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/the-rise-of-the-algorithm.html</guid>
        
        
      </item>
    
      <item>
        <title>Ethics, Computer Systems and the Professions</title>
        <description>A discussion with Sylvie Delacroix, Jonathan Price, Burkhard Schafer hosted by Anthony Finkelstein.</description>
        <pubDate>Wed, 15 Mar 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/ethics-computer-systems-and-the-professions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/ethics-computer-systems-and-the-professions.html</guid>
        
        
      </item>
    
      <item>
        <title>Challenges and Opportunities in Machine Learning and Artificial Intelligence</title>
        <description>The current generation of machine learning technologies is powering new applications in artificial intelligence. This is presenting challenges and opportunities. In this talk we will focus on the challenge of constructing and deploying machine learning algorithms with a particular focus on two aspects: machine learning systems design and data readiness. We will also discuss implications and opportunities, with speculative thoughts on the nature of artificial intelligence in future devices and what new opportunities and challenges this may present. </description>
        <pubDate>Mon, 13 Mar 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-arm17/challenges-in-ml-and-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-arm17/challenges-in-ml-and-data-science.html</guid>
        
        
        <category>Lawrence-arm17</category>
        
      </item>
    
      <item>
        <title>Challenges for Delivering Machine Learning in Health</title>
        <description>The wealth of data availability presents new opportunities in health, but also challenges. In this talk we will focus on challenges for machine learning in health: 1. Paradoxes of the Data Society, 2. Quantifying the Value of Data, 3. Privacy, loss of control, marginalization. Each of these challenges has particular implications for machine learning. The paradoxes relate to our evolving relationship with data and our changing expectations. Quantifying value is vital for accounting for the influence of data in our new digital economies, and issues of privacy and loss of control are fundamental to how our pre-existing rights evolve as the digital world encroaches more closely on the physical. One of the goals of the research community should be to provide the technological tooling to address these challenges and ensure that we are empowered to avoid the pitfalls of the data-driven society, allowing us to reap the benefits of machine learning in applications from personalized health to health in the developing world.
</description>
        <pubDate>Tue, 28 Feb 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-manchester17/challenges-for-delivering-machine-learning-in-health.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-manchester17/challenges-for-delivering-machine-learning-in-health.html</guid>
        
        
        <category>Lawrence-manchester17</category>
        
      </item>
    
      <item>
        <title>Three Challenges in Data Science</title>
        <description>Data science presents new opportunities but also new challenges. In this talk we will focus on three separate challenges for data science: 1. Paradoxes of the Data Society, 2. Quantifying the Value of Data, 3. Privacy, loss of control, marginalization. Each of these challenges has particular implications for data science.  The paradoxes relate to our evolving relationship with data and our changing expectations.  Quantifying value is vital for accounting for the influence of data in our new digital economies and issues of privacy and loss of control are fundamental to how our pre-existing rights evolve as the digital world encroaches more closely on the physical. One of the goals of open data science should be to address these challenges to ensure that we can avoid the pitfalls of the data driven society, allowing us to reap the benefits of data science in applications from personalized health to the developing world.
</description>
        <pubDate>Tue, 21 Feb 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-manchester17/data-science-challenges.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-manchester17/data-science-challenges.html</guid>
        
        
        <category>Lawrence-manchester17</category>
        
      </item>
    
      <item>
        <title>Latent Variable Models with Gaussian Processes</title>
        <description>Gaussian process models are flexible non-parametric probabilistic models for functions. In this talk we will show how they can be incorporated into latent variable models to form probabilistic latent variable models. The resulting approaches have some unusual properties. In particular, they express conditional independencies across features, rather than data. This implies that rather than a curse of dimensionality they exhibit a blessing of dimensionality. We will give background on the model and show some exemplar applications.</description>
        <pubDate>Mon, 06 Feb 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gplvm17/latent-variable-models-with-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gplvm17/latent-variable-models-with-gaussian-processes.html</guid>
        
        
        <category>Lawrence-gplvm17</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description>In this master class we will give a short introduction to Gaussian process models, and then explore their use in the domain of Bayesian Optimization. Gaussian process models are flexible models which allow us to place probability distributions over functions. In Bayesian Optimization, the Gaussian process is used as a surrogate for the process of interest. Rather than directly optimizing the process, the surrogate is optimized. This leads to an efficient approach to optimization in a wide range of physical systems. The seminar will introduce lab classes which will make use of the python software GPy and GPyOpt (https://github.com/sheffieldml/GPy,  https://github.com/sheffieldml/GPyOpt).
This first talk will be an introduction to Gaussian process models and will assume knowledge of probability, linear algebra and the multivariate Gaussian.</description>
        <pubDate>Mon, 06 Feb 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpbo17/introduction-to-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpbo17/introduction-to-gaussian-processes.html</guid>
        
        
        <category>Lawrence-gpbo17</category>
        
      </item>
    
      <item>
        <title>Covariance Functions and the Marginal Likelihood</title>
        <description>In this master class we will give a short introduction to Gaussian process models, and then explore their use in the domain of Bayesian Optimization. Gaussian process models are flexible models which allow us to place probability distributions over functions. In Bayesian Optimization, the Gaussian process is used as a surrogate for the process of interest. Rather than directly optimizing the process, the surrogate is optimized. This leads to an efficient approach to optimization in a wide range of physical systems. The seminar will introduce lab classes which will make use of the python software GPy and GPyOpt (https://github.com/sheffieldml/GPy,  https://github.com/sheffieldml/GPyOpt).
This talk will develop the idea of the covariance function and give intuitions as to how the marginal likelihood can be maximized. Given time we will also develop the idea of multiple-output Gaussian process models.</description>
        <pubDate>Mon, 06 Feb 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpbo17b/covariance-functions-and-the-marginal-likelihood.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpbo17b/covariance-functions-and-the-marginal-likelihood.html</guid>
        
        
        <category>Lawrence-gpbo17b</category>
        
      </item>
    
      <item>
        <title>Personalized Health: Challenges in Data Science</title>
        <description>The promise of personalized health is driven by the wide availability of data, but we need to talk less about where we want to be and more about how we should get there. What are the challenges that need to be bridged technologically to unlock the potential of the much greater availability of data we now have? In this talk we&apos;ll consider three challenges of data science in the context of personalized health, each of which must be bridged to bring the era of true precision, or personalized, medicine within the reach of an affordable health care service.
</description>
        <pubDate>Thu, 12 Jan 2017 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-smpgd17/personalized-health-challenges-in-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-smpgd17/personalized-health-challenges-in-data-science.html</guid>
        
        
        <category>Lawrence-smpgd17</category>
        
      </item>
    
      <item>
        <title>The Data Landscape</title>
        <description>In this talk I&apos;ll give an overview of the challenges in the data landscape, both institutional and societal.</description>
        <pubDate>Thu, 15 Dec 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-defra16/the-data-landscape.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-defra16/the-data-landscape.html</guid>
        
        
        <category>Lawrence-defra16</category>
        
      </item>
    
      <item>
        <title>Personalized Health: Challenges in Data Science</title>
        <description>The promise of personalized health is driven by the wide availability of data, but we need to talk less about where we want to be and more about how we should get there. What are the challenges that need to be bridged technologically to unlock the potential of the much greater availability of data we now have? In this talk we&apos;ll consider three challenges of data science in the context of personalized health, each of which must be bridged to bring the era of true precision, or personalized, medicine within the reach of an affordable health care service.</description>
        <pubDate>Fri, 09 Dec 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ml4hc16b/personalized-health-challenges-in-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ml4hc16b/personalized-health-challenges-in-data-science.html</guid>
        
        
        <category>Lawrence-ml4hc16b</category>
        
      </item>
    
      <item>
        <title>Computational Perspectives: Fairness and Awareness in the Analysis of Data</title>
        <description>What is data science? A new name for something old perhaps. Nevertheless there is something new happening. Data is being acquired in ways that could never have been envisaged 100 years ago. This is presenting new challenges, and ones that no single field is equipped to face. As well as the need for new methodologies and theoretical underpinnings, modern data processing is having a direct effect on our citizens in real time. In this talk I’ll suggest that data science provides a banner under which the computational and statistical sciences can unite to provide a unified response.</description>
        <pubDate>Thu, 27 Oct 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-rss16b/computational-perspectives-fairness-and-awareness-in-the-analysis-of-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-rss16b/computational-perspectives-fairness-and-awareness-in-the-analysis-of-data.html</guid>
        
        
        <category>Lawrence-rss16b</category>
        
      </item>
    
      <item>
        <title>Three Challenges for Open Data Science</title>
        <description>Data science presents new opportunities but also new challenges. In this talk we will focus on three separate challenges for data science: 1. Paradoxes of the Data Society, 2. Quantifying the Value of Data, 3. Privacy, loss of control, marginalization. Each of these challenges has particular implications for data science.  The paradoxes relate to our evolving relationship with data and our changing expectations.  Quantifying value is vital for accounting for the influence of data in our new digital economies and issues of privacy and loss of control are fundamental to how our pre-existing rights evolve as the digital world encroaches more closely on the physical. One of the goals of open data science should be to address these challenges to ensure that we can avoid the pitfalls of the data driven society, allowing us to reap the benefits of data science in applications from personalized health to the developing world.
</description>
        <pubDate>Sat, 08 Oct 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-osdc16/three-challenges-for-open-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-osdc16/three-challenges-for-open-data-science.html</guid>
        
        
        <category>Lawrence-osdc16</category>
        
      </item>
    
      <item>
        <title>The Data Delusion</title>
        <description>The widespread success of deep learning in a variety of domains is being hailed as a new revolution in artificial intelligence. It has taken 20 years to go from defeating Kasparov at Chess to Lee Sedol at Go. But what have the real advances been across this time? The fundamental change has been in terms of data availability and compute availability. The underlying technology has not changed much in the last 20 years. So what does that mean for areas like medicine and health? Significant challenges remain, improving the data efficiency of these algorithms and retaining the balance between individual privacy and predictive power of the models. In this talk we will review these challenges and propose some ways forward. Bio: Neil Lawrence is a Professor of Machine Learning and Computational Biology at the University of Sheffield. His main research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their application. He has a particular focus on applications in personalized health and applications in the developing world. He is well known for his work with Gaussian processes, and has proposed Gaussian process variants of many of the successful deep learning architectures. He is highly active in the machine learning community, most recently Program Chairing the NIPS conference in 2014 and General Chairing (alongside Corinna Cortes) in 2015.</description>
        <pubDate>Thu, 22 Sep 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/the-data-delusion-democratising.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/the-data-delusion-democratising.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Challenges of Data Science</title>
        <description>Data science presents new opportunities but also new challenges. In this talk we will focus on three separate challenges for data science: 1. Paradoxes of the Data Society, 2. Quantifying the Value of Data, 3. Privacy, loss of control, marginalization. Each of these challenges has particular implications for data science. The paradoxes relate to our evolving relationship with data and our changing expectations. Quantifying value is vital for accounting for the influence of data in our new digital economies and issues of privacy and loss of control are fundamental to how our pre-existing rights evolve as the digital world encroaches more closely on the physical. By addressing these challenges now we can ensure that the pitfalls of the data driven society are overcome allowing us to reap the benefits of data science in applications from personalized health to the developing world.</description>
        <pubDate>Wed, 14 Sep 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-enbis16/the-challenges-of-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-enbis16/the-challenges-of-data-science.html</guid>
        
        
        <category>Lawrence-enbis16</category>
        
      </item>
    
      <item>
        <title>Fitting Covariance and Multioutput Gaussian Processes</title>
        <description>In this second session we will talk about fitting covariance matrices and look at multiple output processes.</description>
        <pubDate>Tue, 13 Sep 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpss16b/fitting-covariance-and-multioutput-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpss16b/fitting-covariance-and-multioutput-gaussian-processes.html</guid>
        
        
        <category>Lawrence-gpss16b</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description>In this first session we will introduce Gaussian process models, non-parametric Bayesian models that allow for principled propagation of uncertainty in regression analysis. We will assume a background in parametric models, linear algebra and probability.</description>
        <pubDate>Mon, 12 Sep 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpss16a/introduction-to-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpss16a/introduction-to-gaussian-processes.html</guid>
        
        
        <category>Lawrence-gpss16a</category>
        
      </item>
    
      <item>
        <title>Data Science: Where Computation and Statistics Meet?</title>
        <description>What is data science? A new name for something old perhaps. Nevertheless there is something new happening. Data is being acquired in ways that could never have been envisaged 100 years ago. This is presenting new challenges, and ones that no single field is equipped to face. In this talk we will focus on three separate challenges for data science: 1. Paradoxes of the Data Society, 2. Quantifying the Value of Data, 3. Privacy, loss of control, marginalization. Each of these challenges has particular implications for data science and the interface between computation and statistics. By addressing these challenges now we can ensure that the pitfalls of the data driven society are overcome, allowing us to reap the benefits.</description>
        <pubDate>Tue, 06 Sep 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-rss16a/data-science-where-computation-and-statistics-meet.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-rss16a/data-science-where-computation-and-statistics-meet.html</guid>
        
        
        <category>Lawrence-rss16a</category>
        
      </item>
    
      <item>
        <title>Communicating Machine Learning</title>
        <description>As machine learning approaches become more widely adopted their societal impact is increasing. This raises issues in public understanding of science. In this talk I will give an overview of my own approach to addressing this challenge, mixing thoughts and experience into an approach to communicating machine learning.</description>
        <pubDate>Wed, 31 Aug 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-edinburgh16/communicating-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-edinburgh16/communicating-machine-learning.html</guid>
        
        
        <category>Lawrence-edinburgh16</category>
        
      </item>
    
      <item>
        <title>Variational Compression and Deep Gaussian Processes</title>
        <description>In this fourth session we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy tailed noise distributions). The main challenge is to handle the intractabilities. In this talk we review the variational bounds that are used under the framework of variational compression and give some initial results of deep Gaussian process models.</description>
        <pubDate>Thu, 04 Aug 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss16biv/variational-compression-and-deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss16biv/variational-compression-and-deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-mlss16bIV</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensionality Reduction with Gaussian Processes</title>
        <description>In the third session we will look at latent variable models from a Gaussian process perspective with a particular focus on dimensionality reduction.</description>
        <pubDate>Wed, 03 Aug 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss16biii/probabilistic-dimensionality-reduction-with-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss16biii/probabilistic-dimensionality-reduction-with-gaussian-processes.html</guid>
        
        
        <category>Lawrence-mlss16bIII</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description>In this first session we will introduce Gaussian process models, non-parametric Bayesian models that allow for principled propagation of uncertainty in regression analysis. We will assume a background in parametric models, linear algebra and probability.</description>
        <pubDate>Tue, 02 Aug 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss16bi/introduction-to-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss16bi/introduction-to-gaussian-processes.html</guid>
        
        
        <category>Lawrence-mlss16bI</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes II</title>
        <description>In the second session we will look at how Gaussian process models are related to Kalman filters and how they may be extended to deal with multiple outputs and mechanistic models.</description>
        <pubDate>Tue, 02 Aug 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss16bii/introduction-to-gaussian-processes-ii.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss16bii/introduction-to-gaussian-processes-ii.html</guid>
        
        
        <category>Lawrence-mlss16bII</category>
        
      </item>
    
      <item>
        <title>Privacy and Learning</title>
        <description>Absolute security of information locks it down and exposes it to only those who are granted access. Social privacy can be seen as a continuum where we expose different information to different parties according to levels of trust. In this talk we will briefly introduce our efforts on integrating privacy into learning algorithms to ensure a more equitable and free data society.</description>
        <pubDate>Thu, 14 Jul 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-security16/privacy-and-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-security16/privacy-and-learning.html</guid>
        
        
        <category>Lawrence-security16</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Professions</title>
        <description>As part of the Royal Society Working Group on Machine Learning this talk is a short introduction to machine learning for members of the professions followed by a provocation on what machine learning might mean for the future of the professions.</description>
        <pubDate>Wed, 13 Jul 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-professions16/machine-learning-and-the-professions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-professions16/machine-learning-and-the-professions.html</guid>
        
        
        <category>Lawrence-professions16</category>
        
      </item>
    
      <item>
        <title>New Directions in Data Science</title>
        <description>Data science presents new opportunities for Africa but also new challenges. In this talk we will focus on three separate challenges for data science: 1. Paradoxes of the Data Society, 2. Quantifying the Value of Data, 3. Privacy, loss of control, marginalization. Each of these challenges has particular implications for data science in the developing world. By addressing these challenges now we can ensure that the pitfalls of the data driven society are overcome, allowing us to reap the benefits.</description>
        <pubDate>Fri, 01 Jul 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-dsa16b/new-directions-in-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-dsa16b/new-directions-in-data-science.html</guid>
        
        
        <category>Lawrence-dsa16b</category>
        
      </item>
    
      <item>
        <title>Introduction to Data Science and Machine Learning</title>
        <description></description>
        <pubDate>Mon, 27 Jun 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-dsa16a/introduction-to-data-science-and-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-dsa16a/introduction-to-data-science-and-machine-learning.html</guid>
        
        
        <category>Lawrence-dsa16a</category>
        
      </item>
    
      <item>
        <title>System Zero: What Kind of AI Have We Created?</title>
        <description>Machine learning technologies have evolved to the extent that they are now considered the principal underlying technology for our advances in artificial intelligence. Artificial intelligence is an emotive term, given the implications for replacing qualities that humans consider specific to ourselves. In this talk we’ll consider what kind of artificial intelligence we’ve created and what the possible implications are for our society.</description>
        <pubDate>Thu, 09 Jun 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-futureofhumanity16/system-zero-what-kind-of-ai-have-we-created.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-futureofhumanity16/system-zero-what-kind-of-ai-have-we-created.html</guid>
        
        
        <category>Lawrence-futureofhumanity16</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Future of Work</title>
        <description>Machine learning technologies have evolved to the extent that they are now considered the principal underlying technology for our advances in artificial intelligence. Artificial intelligence is an emotive term, given the implications for replacing qualities that humans consider specific to ourselves. As always, new technology has a significant disruptive effect on existing markets, jobs and economies. In this talk we’ll explore where the advances are coming from and speculate about how our machine learning future is likely to pan out, with a particular focus on work.</description>
        <pubDate>Fri, 27 May 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-futureofwork16/machine-learning-and-the-future-of-work.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-futureofwork16/machine-learning-and-the-future-of-work.html</guid>
        
        
        <category>Lawrence-futureofwork16</category>
        
      </item>
    
      <item>
        <title>What Kind of AI Have We Created?</title>
        <description>There have been fears voiced by Elon Musk and Stephen Hawking about the direction of artificial intelligence research. They worry about the creation of a sentient AI, one that might outwit us. However, the nature of the AI we have actually created is a long way distant from this. In this talk we will try and relate our models of artificial intelligence to models that have been proposed for the way humans think. The AI that Hawking and Musk fear is not yet here, but is the AI we have actually developed more or less disturbing than the vision they project?</description>
        <pubDate>Tue, 24 May 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-pintofscience16/what-kind-of-ai-have-we-created.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-pintofscience16/what-kind-of-ai-have-we-created.html</guid>
        
        
        <category>Lawrence-pintofscience16</category>
        
      </item>
    
      <item>
        <title>Data Efficiency and Machine Learning</title>
        <description>Entropy is a key component of information and probability, and may provide the key to *data efficient* learning. While we’ve seen great success with the AlphaGo computer program and strides forward in image and speech recognition, our current machine learning systems are incredibly data inefficient. Better understanding of entropy within these systems may provide the key to data efficient learning.</description>
        <pubDate>Mon, 23 May 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-entropyday16/data-efficiency-and-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-entropyday16/data-efficiency-and-machine-learning.html</guid>
        
        
        <category>Lawrence-entropyday16</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes II</title>
        <description></description>
        <pubDate>Fri, 13 May 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss16ii/introduction-to-gaussian-processes-ii.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss16ii/introduction-to-gaussian-processes-ii.html</guid>
        
        
        <category>Lawrence-mlss16II</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description></description>
        <pubDate>Thu, 12 May 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss16i/introduction-to-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss16i/introduction-to-gaussian-processes.html</guid>
        
        
        <category>Lawrence-mlss16I</category>
        
      </item>
    
      <item>
        <title>Beyond Backpropagation: Uncertainty Propagation</title>
        <description>Deep learning is founded on composable functions that are structured to capture regularities in data and can have their parameters optimized by backpropagation (differentiation via the chain rule). Their recent success is founded on the increased availability of data and computational power. However, they are not very data efficient. In low data regimes parameters are not well determined and severe overfitting can occur. The solution is to explicitly handle the indeterminacy by converting it to parameter uncertainty and propagating it through the model. Uncertainty propagation is more involved than backpropagation because it involves convolving the composite functions with probability distributions and integration is more challenging than differentiation. We will present one approach to fitting such models using Gaussian processes. The resulting models perform very well in both supervised and unsupervised learning on small data sets. The remaining challenge is to scale the algorithms to much larger data.</description>
        <pubDate>Tue, 03 May 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-iclr16/beyond-backpropagation-uncertainty-propagation.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-iclr16/beyond-backpropagation-uncertainty-propagation.html</guid>
        
        
        <category>Lawrence-iclr16</category>
        
      </item>
    
      <item>
        <title>Machine Learning with Gaussian Processes</title>
        <description>Gaussian processes (GPs) provide a principled probabilistic approach to prior probability distributions for functions. In this talk we will give an overview of some uses of GPs and their extensions. In particular we will introduce mechanistic models alongside GPs and also use GPs within a structured framework of latent variable models.</description>
        <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-amazon16/machine-learning-with-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-amazon16/machine-learning-with-gaussian-processes.html</guid>
        
        
        <category>Lawrence-amazon16</category>
        
      </item>
    
      <item>
        <title>Beyond Backpropagation: Uncertainty Propagation</title>
        <description>Deep learning is founded on composable functions that are structured to capture regularities in data and can have their parameters optimized by backpropagation (differentiation via the chain rule). Their recent success is founded on the increased availability of data and computational power. However, they are not very data efficient. In low data regimes parameters are not well determined and severe overfitting can occur. The solution is to explicitly handle the indeterminacy by converting it to parameter uncertainty and propagating it through the model. Uncertainty propagation is more involved than backpropagation because it involves convolving the composite functions with probability distributions and integration is more challenging than differentiation. We will present one approach to fitting such models using Gaussian processes. The resulting models perform very well in both supervised and unsupervised learning on small data sets. The remaining challenge is to scale the algorithms to much larger data.</description>
        <pubDate>Tue, 26 Apr 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-msrne16b/beyond-backpropagation-uncertainty-propagation.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-msrne16b/beyond-backpropagation-uncertainty-propagation.html</guid>
        
        
        <category>Lawrence-msrne16b</category>
        
      </item>
    
      <item>
        <title>Variational Inference in Deep GPs</title>
        <description></description>
        <pubDate>Thu, 21 Apr 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-msrne16a/variational-inference-in-deep-gps.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-msrne16a/variational-inference-in-deep-gps.html</guid>
        
        
        <category>Lawrence-msrne16a</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensionality Reduction</title>
        <description>In this talk I give a quick overview of probabilistic interpretations of dimensionality reduction, starting with probabilistic principal component analysis and generalising to non-linear approaches such as the Gaussian process latent variable model.</description>
        <pubDate>Thu, 14 Apr 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-facebook16/probabilistic-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-facebook16/probabilistic-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-facebook16</category>
        
      </item>
    
      <item>
        <title>The Data Delusion: Challenges for Democratising Deep Learning</title>
        <description>The widespread success of deep learning in a variety of domains is being hailed as a new revolution in artificial intelligence. It has taken 20 years to go from defeating Kasparov at Chess to Lee Sedol at Go. But what have the real advances been across this time? The fundamental change has been in terms of data availability and compute availability. The underlying technology has not changed much in the last 20 years. So what does that mean for areas like medicine and health? Significant challenges remain, improving the data efficiency of these algorithms and retaining the balance between individual privacy and predictive power of the models. In this talk we will review these challenges and propose some ways forward. Bio: Neil Lawrence is a Professor of Machine Learning and Computational Biology at the University of Sheffield. His main research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their application. He has a particular focus on applications in personalized health and applications in the developing world. He is well known for his work with Gaussian processes, and has proposed Gaussian process variants of many of the successful deep learning architectures. He is highly active in the machine learning community, most recently Program Chairing the NIPS conference in 2014 and General Chairing (alongside Corinna Cortes) in 2015.
</description>
        <pubDate>Thu, 07 Apr 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-deepsummit16/the-data-delusion-challenges-for-democratising-deep-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-deepsummit16/the-data-delusion-challenges-for-democratising-deep-learning.html</guid>
        
        
        <category>Lawrence-deepSummit16</category>
        
      </item>
    
      <item>
        <title>The Data Delusion</title>
        <description>The race is on to develop the next generation of artificially intelligent algorithms, and recent successes in hitherto unmanageable problems have somewhat blinded us to our own capabilities. Despite the commercial success of the current generation of learning algorithms, the time has come for the academic community to take stock. Have we really got the tools in place to solve the next generation of learning problems? Or is our current confidence in our toolsets misplaced? In this talk we&apos;ll explore at least one direction where our capabilities are lacking.
</description>
        <pubDate>Mon, 21 Mar 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mars16/the-data-delusion.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mars16/the-data-delusion.html</guid>
        
        
        <category>Lawrence-mars16</category>
        
      </item>
    
      <item>
        <title>What Kind of AI have we Created?</title>
        <description>There have been fears voiced by Elon Musk and Stephen Hawking about the direction of artificial intelligence research. They worry about the creation of a sentient AI, one that might outwit us. However, the nature of the AI we have actually created is a long way distant from this. In this talk we will try and relate our models of artificial intelligence to models that have been proposed for the way humans think. The AI that Hawking and Musk fear is not yet here, but is the AI we have actually developed more or less disturbing than the vision they project?</description>
        <pubDate>Thu, 17 Mar 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-notre15/what-kind-of-ai-public.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-notre15/what-kind-of-ai-public.html</guid>
        
        
        <category>Lawrence-notre15</category>
        
      </item>
    
      <item>
        <title>What Kind of AI have we Created?</title>
        <description>There have been fears voiced by Elon Musk and Stephen Hawking about the direction of artificial intelligence research. They worry about the creation of a sentient AI, one that might outwit us. However, the nature of the AI we have actually created is a long way distant from this. In this talk we will try and relate our models of artificial intelligence to models that have been proposed for the way humans think. The AI that Hawking and Musk fear is not yet here, but is the AI we have actually developed more or less disturbing than the vision they project?</description>
        <pubDate>Thu, 10 Mar 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-birley15/what-kind-of-ai-have-we-created.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-birley15/what-kind-of-ai-have-we-created.html</guid>
        
        
        <category>Lawrence-birley15</category>
        
      </item>
    
      <item>
        <title>Future Debates: This House Believes an Artificial Intelligence will Benefit Society</title>
        <description>The British Science Association hosts a series of debates to encourage constructive debate about science’s role in people’s lives, the economy and the UK’s future. This debate was hosted by the Sheffield association and was focussed on artificial intelligence. The debate was led by two speakers from Sheffield’s Debating Society, with Tony Dodd supporting the ‘against’ and myself supporting the ‘for’.</description>
        <pubDate>Mon, 29 Feb 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-futuredebates16/future-debates-this-house-believes-an-artificial-intelligence-will-benefit-society.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-futuredebates16/future-debates-this-house-believes-an-artificial-intelligence-will-benefit-society.html</guid>
        
        
        <category>Lawrence-futuredebates16</category>
        
      </item>
    
      <item>
        <title>Machine Learning with Gaussian Processes</title>
        <description>Gaussian processes (GPs) provide a principled probabilistic approach to prior probability distributions for functions. In this talk we will give an overview of some uses of GPs and their extensions. In particular we will introduce mechanistic models alongside GPs and also use GPs within the framework of latent variable models.</description>
        <pubDate>Fri, 29 Jan 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-oxwasp16/machine-learning-with-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-oxwasp16/machine-learning-with-gaussian-processes.html</guid>
        
        
        <category>Lawrence-oxwasp16</category>
        
      </item>
    
      <item>
        <title>What kind of AI have we created?</title>
        <description>The media is full of concerns about our data and how algorithms are affecting us. We worry about personal information becoming public, we worry about what intelligent machines have in store for us. This talk will be about the state of the art in terms of Artificial Intelligence. It will consider what it can do and what it can’t do. We are a long way away from implementing a ‘sentient intelligence’, but what do we have in its place? This talk will explore current technology and speculate on what futures it may lead to.</description>
        <pubDate>Tue, 26 Jan 2016 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-phd16/what-kind-of-ai-have-we-created.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-phd16/what-kind-of-ai-have-we-created.html</guid>
        
        
        <category>Lawrence-phd16</category>
        
      </item>
    
      <item>
        <title>The Open Data Science Initiative</title>
        <description>The Open Data Science Initiative is founded on the idea that a set of core challenges is restricting our ability, as a society, to exploit the large quantity of data we are now generating. In this talk we identify those challenges across industry, science, health and the developing world. We then review the principles of open data science which we hope will address them.</description>
        <pubDate>Wed, 16 Dec 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-odsi15/the-open-data-science-initiative.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-odsi15/the-open-data-science-initiative.html</guid>
        
        
        <category>Lawrence-odsi15</category>
        
      </item>
    
      <item>
        <title>Special Topics: Gaussian Processes</title>
        <description></description>
        <pubDate>Tue, 15 Dec 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/gaussian-processes.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Mechanistic Fallacy and Modelling How We Think</title>
        <description>&lt;p&gt;In this talk we will discuss how our current set of modelling solutions relates to dual process models from psychology. By analogising with layered models of networks we first address the danger of focussing purely on mechanism (or biological plausibility) when discussing modelling in the brain. We term this idea the mechanistic fallacy. In an attempt to operate at a higher level of abstraction, we then take a conceptual approach and attempt to map the broader domain of mechanistic and phenomenological models to dual process ideas from psychology. It seems that System 1 is closer to phenomenological and System 2 is closer to mechanistic ideas. We will draw connections to surrogate modelling (also known as emulation) and speculate that one role of System 2 may be to provide additional simulation data for System 1.&lt;/p&gt;</description>
        <pubDate>Fri, 11 Dec 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mechanistic15/the-mechanistic-fallacy-and-modelling-how-we-think.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mechanistic15/the-mechanistic-fallacy-and-modelling-how-we-think.html</guid>
        
        
        <category>Lawrence-mechanistic15</category>
        
      </item>
    
      <item>
        <title>Logistic Regression and GLMs</title>
        <description>Naive Bayes assumptions allow us to specify class conditional densities through assuming that the data are conditionally independent given parameters. A logistic regression is an approach to classification which extends the linear basis function models we’ve already explored. Rather than modeling the output of the function directly the assumption is that we model the &lt;em&gt;log-odds&lt;/em&gt; with the basis functions.</description>
        <pubDate>Tue, 01 Dec 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/logistic-and-glm.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/logistic-and-glm.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Probabilistic Classification: Naive Bayes</title>
        <description>In the last lecture we looked at unsupervised learning. We introduced latent variables, dimensionality reduction and clustering. In this lecture we’re going to look at classification, specifically the probabilistic approach to classification. We’ll focus on a simple but often effective algorithm known as &lt;em&gt;naive Bayes&lt;/em&gt;.</description>
        <pubDate>Tue, 24 Nov 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/naive-bayes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/naive-bayes.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Information Infrastructure for Health</title>
        <description>In this talk we will address challenges in information infrastructure for health. Personalized health care is one of the promises of the information revolution. However, there are major challenges in the curation, collection and management of the data, and these are not currently being properly addressed. The care.data fiasco demonstrated the high sensitivity of the public to this data regime. Data leaks from the Pentagon, TalkTalk and Carphone Warehouse have demonstrated the inability of major institutions to keep our data secure. Healthcare data is purportedly worth ten times credit card information on international black markets. Machine learning techniques are currently part of the problem, not the solution: they require centralised assimilation of data in a repository that can be easily accessed. A more robust information infrastructure would distribute data and allow a far greater degree of patient control over access. Such user-centric models may offer greater opportunity in terms of obtaining the necessary data liquidity to fulfill the full potential of personalized health in improving individuals’ health outcomes.</description>
        <pubDate>Wed, 18 Nov 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-atiscope15/information-infrastructure-for-health.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-atiscope15/information-infrastructure-for-health.html</guid>
        
        
        <category>Lawrence-atiscope15</category>
        
      </item>
    
      <item>
        <title>Bayesian Regression</title>
        <description>Bayesian formalisms deal with uncertainty in parameters.</description>
        <pubDate>Tue, 03 Nov 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/bayesian-regression.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/bayesian-regression.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Generalization: Model Validation</title>
        <description>Generalization is the main objective of a machine learning algorithm. The models we design should work on data they have not seen before. Confirming whether a model generalizes well or not is the domain of &lt;em&gt;model validation&lt;/em&gt;. In this lecture we introduce approaches to model validation such as hold-out validation and cross-validation.</description>
        <pubDate>Tue, 27 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/generalization.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/generalization.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>What Kind of Artificial Intelligence are we Creating?</title>
        <description>The media is full of concerns about our data and how algorithms are affecting us. We worry about personal information becoming public, we worry about what intelligent machines have in store for us. This talk will be about the state of the art in terms of Artificial Intelligence. It will consider what it can do and what it can’t do. We are a long way away from implementing a ‘sentient intelligence’, but what do we have in its place? This talk will explore current technology and speculate on what futures it may lead to.</description>
        <pubDate>Fri, 23 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-rise15a/what-kind-of-artificial-intelligence-are-we-creating.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-rise15a/what-kind-of-artificial-intelligence-are-we-creating.html</guid>
        
        
        <category>Lawrence-rise15a</category>
        
      </item>
    
      <item>
        <title>Machine Learning Tutorial: Probabilistic Dimensionality Reduction II</title>
        <description>In the second part of this tutorial we will develop non-linear approaches to dimensionality reduction from the probabilistic perspective. Firstly we will briefly review probabilistic perspectives on spectral approaches, and then we will build on the non-linear approaches we derived using Gaussian processes in the first part of the tutorial.</description>
        <pubDate>Wed, 21 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-imperial15b/machine-learning-tutorial-probabilistic-dimensionality-reduction-span-ii-span.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-imperial15b/machine-learning-tutorial-probabilistic-dimensionality-reduction-span-ii-span.html</guid>
        
        
        <category>Lawrence-imperial15b</category>
        
      </item>
    
      <item>
        <title>What Kind of Artificial Intelligence have we Created?</title>
        <description>The media is full of concerns about our data and how algorithms are affecting us. We worry about personal information becoming public, we worry about what intelligent machines have in store for us. This talk will be about the state of the art in terms of Artificial Intelligence. It will consider what it can do and what it can’t do. We are a long way away from implementing a ‘sentient intelligence’, but what do we have in its place? This talk will explore current technology and speculate on what futures it may lead to.</description>
        <pubDate>Tue, 20 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-rise15/what-kind-of-artificial-intelligence-have-we-created.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-rise15/what-kind-of-artificial-intelligence-have-we-created.html</guid>
        
        
        <category>Lawrence-rise15</category>
        
      </item>
    
      <item>
        <title>Basis Functions</title>
        <description>&lt;p&gt;In the last session we explored least squares for univariate and multivariate &lt;em&gt;regression&lt;/em&gt;. We introduced &lt;em&gt;matrices&lt;/em&gt;, &lt;em&gt;linear algebra&lt;/em&gt; and &lt;em&gt;derivatives&lt;/em&gt;.&lt;/p&gt; &lt;p&gt;In this session we will introduce &lt;em&gt;basis functions&lt;/em&gt; which allow us to implement &lt;em&gt;non-linear regression models&lt;/em&gt;.&lt;/p&gt;</description>
        <pubDate>Tue, 20 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/basis-functions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/basis-functions.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Personalised Health and Gaussian Processes</title>
        <description></description>
        <pubDate>Wed, 14 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-benevolent15/personalised-health-and-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-benevolent15/personalised-health-and-gaussian-processes.html</guid>
        
        
        <category>Lawrence-benevolent15</category>
        
      </item>
    
      <item>
        <title>Linear Algebra and Linear Regression</title>
        <description>In this session we combine the objective function perspective and the probabilistic perspective on &lt;em&gt;linear regression&lt;/em&gt;. We motivate the importance of &lt;em&gt;linear algebra&lt;/em&gt; by showing how much faster we can complete a linear regression using linear algebra.</description>
        <pubDate>Tue, 13 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/linear-regression.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/linear-regression.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>The Digital Oligarchy: Information, Knowledge and the Internet Era</title>
        <description>The data revolution is upon us and the technical press is filled with stories of big data and artificial intelligence. What is driving this progress? In this talk we will argue that collection of data on its own is of little utility; it is the interconnection of data that allows information to become knowledge. Businesses need to place data at the core of what they do to benefit from these techniques. The talk will be grounded in academic ideas of what information, knowledge and data are. But these concepts have practical utility that can influence decision making on where data sits within an organisation.</description>
        <pubDate>Thu, 08 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-impact15/the-digital-oligarchy-information-knowledge-and-the-internet-era.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-impact15/the-digital-oligarchy-information-knowledge-and-the-internet-era.html</guid>
        
        
        <category>Lawrence-impact15</category>
        
      </item>
    
      <item>
        <title>Objective Functions: A Simple Example with Matrix Factorisation</title>
        <description>In this session we introduce the notion of objective functions and show how they can be used in a simple recommender system based on &lt;em&gt;matrix factorisation&lt;/em&gt;.</description>
        <pubDate>Tue, 06 Oct 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/matrix-factorization.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/matrix-factorization.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Probability and an Introduction to Jupyter, Python and Pandas</title>
        <description>In this first session we will introduce &lt;em&gt;machine learning&lt;/em&gt;, review &lt;em&gt;probability&lt;/em&gt; and begin familiarization with the Jupyter notebook, Python and pandas.</description>
        <pubDate>Tue, 29 Sep 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/intro-probability.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/intro-probability.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Peer Review and The NIPS Experiment</title>
        <description>The peer review process can be difficult to navigate for newcomers. In this informal talk we will review the results of the NIPS experiment, an experiment on the repeatability of peer review conducted for the 2014 conference. We will try to keep the presentation informal to ensure questions can be asked. With luck it will give more insight into the processes that a program committee goes through when selecting papers.</description>
        <pubDate>Mon, 21 Sep 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-peer15/peer-review-and-the-nips-experiment.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-peer15/peer-review-and-the-nips-experiment.html</guid>
        
        
        <category>Lawrence-peer15</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description></description>
        <pubDate>Thu, 20 Aug 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-harvard15/deep-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-harvard15/deep-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-harvard15</category>
        
      </item>
    
      <item>
        <title>Personalized Health with Gaussian Processes</title>
        <description>Modern data connectivity gives us different views of the patient which need to be unified for truly personalized health care. I’ll give a personal perspective on the type of methodological and social challenges we expect to arise in this domain and motivate Gaussian process models as one approach to dealing with the explosion of data.</description>
        <pubDate>Wed, 19 Aug 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-msrne15/personalized-health-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-msrne15/personalized-health-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-msrne15</category>
        
      </item>
    
      <item>
        <title>Latent Force Models: Bridging the Divide between Mechanistic and Data Modelling Paradigms</title>
        <description></description>
        <pubDate>Tue, 21 Jul 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss15bc/latent-force-models-bridging-the-divide-between-mechanistic-and-data-modelling-paradigm.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss15bc/latent-force-models-bridging-the-divide-between-mechanistic-and-data-modelling-paradigm.html</guid>
        
        
        <category>Lawrence-mlss15bc</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes (Part III)</title>
        <description></description>
        <pubDate>Sat, 18 Jul 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss15biii/gaussian-processes-part-iii.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss15biii/gaussian-processes-part-iii.html</guid>
        
        
        <category>Lawrence-mlss15bIII</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes (Part II)</title>
        <description></description>
        <pubDate>Fri, 17 Jul 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss15bii/gaussian-processes-part-ii.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss15bii/gaussian-processes-part-ii.html</guid>
        
        
        <category>Lawrence-mlss15bII</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes (Part I)</title>
        <description></description>
        <pubDate>Thu, 16 Jul 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss15bi/gaussian-processes-part-i.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss15bi/gaussian-processes-part-i.html</guid>
        
        
        <category>Lawrence-mlss15bI</category>
        
      </item>
    
      <item>
        <title>Panel Discussion</title>
        <description></description>
        <pubDate>Sat, 11 Jul 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/welling-deeppanel15/panel-discussion.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/welling-deeppanel15/panel-discussion.html</guid>
        
        
        <category>Welling-deeppanel15</category>
        
      </item>
    
      <item>
        <title>Large Scale Learning in Gaussian Processes</title>
        <description>Gaussian process models view the kernel matrix as representing the covariance between data points. In a Gaussian process, the RKHS function is the mean of a posterior distribution over possible functions. Gaussian processes sustain uncertainty around this mean, and this leads to a posterior &lt;em&gt;covariance&lt;/em&gt; function (or kernel) associated with the process. A complication for large scale Gaussian process models is the need to sustain the estimate for this covariance function. In this talk we’ll review how this can be done probabilistically through a variational approach we know as ‘variational compression’.</description>
        <pubDate>Sat, 11 Jul 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-largeicml15/large-scale-learning-in-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-largeicml15/large-scale-learning-in-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-largeicml15</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy tailed noise distributions). The main challenge is to handle the intractabilities. In this talk we review the variational bounds that are used under the framework of variational compression and give some initial results of deep Gaussian process models.</description>
        <pubDate>Sat, 11 Jul 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-deepicml15/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-deepicml15/deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-deepicml15</category>
        
      </item>
    
      <item>
        <title>Personalized Health</title>
        <description></description>
        <pubDate>Thu, 18 Jun 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-nyeri15c/personalized-health.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-nyeri15c/personalized-health.html</guid>
        
        
        <category>Lawrence-nyeri15c</category>
        
      </item>
    
      <item>
        <title>Regression</title>
        <description></description>
        <pubDate>Mon, 15 Jun 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-nyeri15b/regression.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-nyeri15b/regression.html</guid>
        
        
        <category>Lawrence-nyeri15b</category>
        
      </item>
    
      <item>
        <title>Introduction to Machine Learning and Data Science</title>
        <description></description>
        <pubDate>Mon, 15 Jun 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-nyeri15a/introduction-to-machine-learning-and-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-nyeri15a/introduction-to-machine-learning-and-data-science.html</guid>
        
        
        <category>Lawrence-nyeri15a</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy tailed noise distributions). The main challenge is to handle the intractabilities. In this talk we review the variational bounds that are used under the framework of variational compression and give some initial results of deep Gaussian process models.</description>
        <pubDate>Tue, 09 Jun 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-edinburgh15/deep-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-edinburgh15/deep-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-edinburgh15</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy tailed noise distributions). The main challenge is to handle the intractabilities. In this talk we review the variational bounds that are used under the framework of variational compression and give some initial results of deep Gaussian process models.</description>
        <pubDate>Mon, 11 May 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-nyu15/deep-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-nyu15/deep-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-nyu15</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy tailed noise distributions). The main challenge is to handle the intractabilities. In this talk we review the variational bounds that are used under the framework of variational compression and give some initial results of deep Gaussian process models.</description>
        <pubDate>Thu, 30 Apr 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-kth15/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-kth15/deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-kth15</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy tailed noise distributions). The main challenge is to handle the intractabilities. In this talk we review the variational bounds that are used under the framework of variational compression and give some initial results of deep Gaussian process models.</description>
        <pubDate>Wed, 29 Apr 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-linkoping15/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-linkoping15/deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-linkoping15</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy tailed noise distributions). The main challenge is to handle the intractabilities. In this talk we review the variational bounds that are used under the framework of variational compression and give some initial results of deep Gaussian process models.</description>
        <pubDate>Wed, 08 Apr 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mascotnum15/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mascotnum15/deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-mascotnum15</category>
        
      </item>
    
      <item>
        <title>Modelling in the Context of Massively Missing Data</title>
        <description>In the age of large streaming data it seems appropriate to revisit the foundations of what we think of as data modelling. In this talk I’ll argue that traditional statistical approaches based on parametric models and i.i.d. assumptions are inappropriate for the type of large scale machine learning we need to do in the age of massive streaming data sets, particularly when we realise that regardless of the size of data we have, it pales in comparison to the data we could have. This is the domain of &lt;em&gt;massively missing data&lt;/em&gt;. I’ll be arguing for flexible non-parametric models as the answer. This presents a particular challenge: non-parametric models require storage of the entire data set, which presents problems for massive, streaming data. I will present a potential solution, but perhaps end with more questions than we started with.</description>
        <pubDate>Wed, 18 Mar 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mpi15/modelling-in-the-context-of-massively-missing-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mpi15/modelling-in-the-context-of-massively-missing-data.html</guid>
        
        
        <category>Lawrence-mpi15</category>
        
      </item>
    
      <item>
        <title>The Data Farm</title>
        <description>Like Hansel and Gretel’s breadcrumbs into the forest, we leave a trail of data-crumbs wherever we go: social networks, mobile phones, hospital visits, credit cards and loyalty cards. Our every move is being watched! The data-crumbs are seeds of information, but what results from them... is it a jungle with dangers lurking or a productive farmyard? And if our data is being farmed, where does all the produce go?

&lt;p&gt;This edition of the talk was given to an age group between 8 and 10.&lt;/p&gt;</description>
        <pubDate>Fri, 13 Mar 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-datafarm15a/the-data-farm.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-datafarm15a/the-data-farm.html</guid>
        
        
        <category>Lawrence-datafarm15a</category>
        
      </item>
    
      <item>
        <title>Data Science: A New Field or Just a Rebadging Exercise?</title>
        <description>Scientific fields don’t necessarily emerge because fundamental new knowledge is being generated, but often because of a shift in the key questions that are facing us and in the tools that we have to answer them. The current information revolution is causing us to reassess our approach to data. Our mathematical and computational toolsets are co-evolving. The potential of very large interconnected data is placing urgent demands on our methodologies. In this talk, inspired by these challenges, I will give a personal perspective on what this means for those of us at the interface of Computer Science/Mathematics and Statistics. I’ll attempt to do this not only in the context of modelling and analysis, but also in the context of how we deploy our conclusions for the benefit of wider society. Many of our current suite of methodologies were motivated by different needs, and I’ll argue that it may now be time to return to the fundamental ideas from which these methodologies were inspired, but with a contemporary slant on the nature of data. My own perspective is that if what I describe *is* data science, then it does not stand as a field alone, but represents a new and pressing set of questions that bridge the computational and mathematical sciences. Regardless of its phylogeny, exploring this interface through these questions will be mutually beneficial.</description>
        <pubDate>Thu, 12 Mar 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-nottingham15/data-science-a-new-field-or-just-a-rebadging-exercise.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-nottingham15/data-science-a-new-field-or-just-a-rebadging-exercise.html</guid>
        
        
        <category>Lawrence-nottingham15</category>
        
      </item>
    
      <item>
        <title>Machine Learning Tutorial: Probabilistic Dimensionality Reduction</title>
        <description>In this tutorial we will present probabilistic approaches to dimensionality reduction based on latent variable models. We will motivate dimensionality reduction and then start with principal component analysis and extend it to include non linear approaches to reducing the dimension of data.</description>
        <pubDate>Wed, 11 Mar 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-imperial15/machine-learning-tutorial-probabilistic-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-imperial15/machine-learning-tutorial-probabilistic-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-imperial15</category>
        
      </item>
    
      <item>
        <title>The Data Farm</title>
        <description>Like Hansel and Gretel’s breadcrumbs into the forest, we leave a trail of data-crumbs wherever we go: social networks, mobile phones, hospital visits, credit cards and loyalty cards. Our every move is being watched! The data-crumbs are seeds of information, but what results from them... is it a jungle with dangers lurking or a productive farmyard? And if our data is being farmed, where does all the produce go?</description>
        <pubDate>Thu, 05 Mar 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-fest15/the-data-farm.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-fest15/the-data-farm.html</guid>
        
        
        <category>Lawrence-fest15</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description></description>
        <pubDate>Sat, 21 Feb 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss15/introduction-to-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss15/introduction-to-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-mlss15</category>
        
      </item>
    
      <item>
        <title>The NIPS Experiment</title>
        <description>The peer review process can be difficult to navigate for newcomers. In this informal talk we will review the results of the NIPS experiment, an experiment on the repeatability of peer review conducted for the 2014 conference. We will try to keep the presentation informal to ensure questions can be asked. With luck it will give more insight into the processes that a program committee goes through when selecting papers.</description>
        <pubDate>Fri, 30 Jan 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-radiant15/the-nips-experiment-examining-the-repeatability-of-peer-review.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-radiant15/the-nips-experiment-examining-the-repeatability-of-peer-review.html</guid>
        
        
        <category>Lawrence-radiant15</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy-tailed noise distributions). The main challenge is to solve these models efficiently for massive data sets. That challenge is in reach through a new class of variational approximations known as variational compression. The underlying variational bounds are very similar to the objective functions for deep neural networks, giving the promise of efficient approaches to deep learning that are constructed from components with very well understood analytical properties.</description>
        <pubDate>Fri, 23 Jan 2015 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-iit15/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-iit15/deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-iit15</category>
        
      </item>
    
      <item>
        <title>Data Science: A New Field or Just a Rebadging Exercise?</title>
        <description>Scientific fields don’t necessarily emerge because fundamental new knowledge is being generated, but often because of a shift in the key questions that are facing us and in the tools that we have to answer them. The current information revolution is causing us to reassess our approach to data. Our mathematical and computational toolsets are co-evolving. The potential of very large interconnected data is placing urgent demands on our methodologies. In this talk, inspired by these challenges, I will give a personal perspective on what this means for those of us at the interface of Computer Science/Mathematics and Statistics. I’ll attempt to do this not only in the context of modelling and analysis, but also in the context of how we deploy our conclusions for the benefit of wider society. Many of our current suite of methodologies were motivated by different needs, and I’ll argue that it may now be time to return to the fundamental ideas from which these methodologies were inspired, but with a contemporary slant on the nature of data. My own perspective is that if what I describe *is* data science, then it does not stand as a field alone, but represents a new and pressing set of questions that bridge the computational and mathematical sciences. Regardless of its phylogeny, exploring this interface through these questions will be mutually beneficial.</description>
        <pubDate>Wed, 26 Nov 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-warwick14/data-science-a-new-field-or-just-a-rebadging-exercise.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-warwick14/data-science-a-new-field-or-just-a-rebadging-exercise.html</guid>
        
        
        <category>Lawrence-warwick14</category>
        
      </item>
    
      <item>
        <title>Statistical Computing: Python</title>
        <description></description>
        <pubDate>Fri, 21 Nov 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-rss14/statistical-computing-python.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-rss14/statistical-computing-python.html</guid>
        
        
        <category>Lawrence-rss14</category>
        
      </item>
    
      <item>
        <title>Approximate Inference in Deep GPs</title>
        <description>In this talk we will review deep Gaussian process models and relate them to neural network models. We will then consider the details of how variational inference may be performed in these models. The approach is centred on &apos;variational compression&apos;, an approach to variational inference that compresses information into an augmented variable space. The aim of the deep Gaussian process framework is to enable probabilistic learning of multi-modal data. We will therefore end by highlighting directions for future research and discussing application of these models in domains such as personalised health.</description>
        <pubDate>Thu, 23 Oct 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ucl14c/approximate-inference-in-deep-gps.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ucl14c/approximate-inference-in-deep-gps.html</guid>
        
        
        <category>Lawrence-ucl14c</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy-tailed noise distributions). The main challenge is to solve these models efficiently for massive data sets. That challenge is in reach through a new class of variational approximations known as variational compression. The underlying variational bounds are very similar to the objective functions for deep neural networks, giving the promise of efficient approaches to deep learning that are constructed from components with very well understood analytical properties.</description>
        <pubDate>Thu, 04 Sep 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ucl14b/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ucl14b/deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-ucl14b</category>
        
      </item>
    
      <item>
        <title>Big Data and Open Data Science</title>
        <description>In this talk we will focus on the challenges that are arising through big data and on potential solutions, both from a methodological side and in terms of the way that statistics and computer science need to respond to the challenges culturally.</description>
        <pubDate>Wed, 02 Jul 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-uclid14/big-data-and-open-data-science.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-uclid14/big-data-and-open-data-science.html</guid>
        
        
        <category>Lawrence-uclid14</category>
        
      </item>
    
      <item>
        <title>Flexible Parametric Representations of Non Parametric Models</title>
        <description>In the age of large streaming data it seems appropriate to revisit the foundations of what we think of as data modelling. In this talk I’ll argue that traditional statistical approaches based on parametric models and i.i.d. assumptions are inappropriate for the type of large scale machine learning we need to do in the age of massive streaming data sets. I’ll be arguing for flexible non-parametric models as the answer. This presents a particular challenge: non-parametric models require storage of the entire data set, which presents problems for massive, streaming data. I’ll argue that recently proposed variational approximations allow us to retain the advantages of both non-parametric and parametric models within a consistent framework that performs an optimal compression of our data from an information gain perspective.</description>
        <pubDate>Mon, 19 May 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-edinburgh14/flexible-parametric-representations-of-non-parametric-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-edinburgh14/flexible-parametric-representations-of-non-parametric-models.html</guid>
        
        
        <category>Lawrence-edinburgh14</category>
        
      </item>
    
      <item>
        <title>Visualizing Biological Data with Gaussian Processes</title>
        <description></description>
        <pubDate>Tue, 13 May 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ebi14b/visualizing-biological-data-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ebi14b/visualizing-biological-data-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-ebi14b</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes for Dynamic Modelling</title>
        <description></description>
        <pubDate>Tue, 13 May 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ebi14a/gaussian-processes-for-dynamic-modelling.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ebi14a/gaussian-processes-for-dynamic-modelling.html</guid>
        
        
        <category>Lawrence-ebi14a</category>
        
      </item>
    
      <item>
        <title>What is Machine Learning? A Probabilistic Perspective (Part II)</title>
        <description></description>
        <pubDate>Sat, 26 Apr 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss14b/what-is-machine-learning-a-probabilistic-perspective-part-ii.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss14b/what-is-machine-learning-a-probabilistic-perspective-part-ii.html</guid>
        
        
        <category>Lawrence-mlss14b</category>
        
      </item>
    
      <item>
        <title>What is Machine Learning? A Probabilistic Perspective (Part I)</title>
        <description></description>
        <pubDate>Sat, 26 Apr 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlss14/what-is-machine-learning-a-probabilistic-perspective-part-i.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlss14/what-is-machine-learning-a-probabilistic-perspective-part-i.html</guid>
        
        
        <category>Lawrence-mlss14</category>
        
      </item>
    
      <item>
        <title>Flexible Parametric Representations of Non Parametric Models</title>
        <description>In the age of large streaming data it seems appropriate to revisit the foundations of what we think of as data modelling. In this talk I’ll argue that traditional statistical approaches based on parametric models and i.i.d. assumptions are inappropriate for the type of large scale machine learning we need to do in the age of massive streaming data sets. I’ll be arguing for flexible non-parametric models as the answer. This presents a particular challenge: non-parametric models require storage of the entire data set, which presents problems for massive, streaming data. I’ll argue that recently proposed variational approximations allow us to retain the advantages of both non-parametric and parametric models within a consistent framework that performs an optimal compression of our data from an information gain perspective.</description>
        <pubDate>Thu, 03 Apr 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-smile14/flexible-parametric-representations-of-non-parametric-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-smile14/flexible-parametric-representations-of-non-parametric-models.html</guid>
        
        
        <category>Lawrence-smile14</category>
        
      </item>
    
      <item>
        <title>Applications of Gaussian Processes in Computational Biology</title>
        <description>In this talk we will give a brief overview of Gaussian processes and a quick review of how they can be applied to solve questions in computational biology. In particular we will show how we can construct covariance functions to solve simple tasks (like differential expression) or more complex tasks (like unpicking regulatory networks).</description>
        <pubDate>Thu, 03 Apr 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-curie14/applications-of-span-g-span-aussian-processes-in-computational-biology.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-curie14/applications-of-span-g-span-aussian-processes-in-computational-biology.html</guid>
        
        
        <category>Lawrence-curie14</category>
        
      </item>
    
      <item>
        <title>Modelling with Massively Missing Data</title>
        <description>Supervised deep learning techniques now dominate in terms of performance for complex classification tasks such as ImageNet. For these, the set of inputs (features) and targets (labels) are typically well defined in advance. However, for many tasks in artificial intelligence the questions that need to be answered evolve, alongside the features that we can acquire. For example, imagine we wish to infer the health status of individuals by building population scale models based on clinical data. For most people in the population most of the data will be missing because clinical tests are not applied to patients as a matter of course. Indeed, some of the features we may wish to use in our model may not even exist when our model is first designed (e.g. emerging clinical tests and treatments). We refer to this scenario as ’massively missing data’. It is a scenario humans are faced with every day. Almost all of the time we are missing almost all of the data. And yet we have no difficulty assimilating disparate pieces of information from a wide range of sources to draw inferences about our world. Implementing machine learning systems that can replicate this characteristic requires model architectures that can be adapted at ’runtime’ as the data evolves; we don’t want to be limited by decisions made at ’design time’, when perhaps a more limited feature set existed. This poses particular challenges that we will address in this talk.</description>
        <pubDate>Thu, 20 Mar 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-facebook14/modelling-with-massively-missing-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-facebook14/modelling-with-massively-missing-data.html</guid>
        
        
        <category>Lawrence-facebook14</category>
        
      </item>
    
      <item>
        <title>Flexible Parametric Representations of Non Parametric Models</title>
        <description>In the age of large streaming data it seems appropriate to revisit the foundations of what we think of as data modelling. In this talk I’ll argue that traditional statistical approaches based on parametric models and i.i.d. assumptions are inappropriate for the type of large scale machine learning we need to do in the age of massive streaming data sets. I’ll be arguing for flexible non-parametric models as the answer. This presents a particular challenge: non-parametric models require storage of the entire data set, which presents problems for massive, streaming data. I’ll argue that recently proposed variational approximations allow us to retain the advantages of both non-parametric and parametric models within a consistent framework that performs an optimal compression of our data from an information gain perspective.</description>
        <pubDate>Wed, 26 Feb 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ucl14/flexible-parametric-representations-of-non-parametric-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ucl14/flexible-parametric-representations-of-non-parametric-models.html</guid>
        
        
        <category>Lawrence-ucl14</category>
        
      </item>
    
      <item>
        <title>Personalized Health with Gaussian Processes</title>
        <description>Modern data connectivity gives us different views of the patient which need to be unified for truly personalized health care. I’ll give a personal perspective on the type of methodological challenges we expect to arise in this domain and motivate Gaussian process models as one approach to dealing with the explosion of data.</description>
        <pubDate>Wed, 19 Feb 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-manizales14/personalized-health-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-manizales14/personalized-health-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-manizales14</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we will introduce deep Gaussian process (GP) models. Deep GPs are a deep probabilistic model based on Gaussian process mappings. The data is modelled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GPLVM). We will motivate these models by considering applications in personalized health.
We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples. In the seminar we will briefly review dimensionality reduction via Gaussian processes, before showing how this framework can be extended to build deep models.
</description>
        <pubDate>Thu, 06 Feb 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-oxford14/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-oxford14/deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-oxford14</category>
        
      </item>
    
      <item>
        <title>New Perspectives on Variational Approximations in Gaussian Processes: Modelling Data</title>
        <description>In this talk I’ll introduce new perspectives on variational approximations. Many of the ideas may be widely applicable, but we will try to instantiate them in the context of Gaussian process models.

Although the variational material itself is reasonably technical, I’ll try to start the talk by making general statements about data modelling. Then, in an effort to make the talk seem coherent, I’ll claim that the technical material which follows was inspired by the wider perspective I’ve given. Of course in practice, the technical material really emerged across a number of years during discussions with many people, and the general perspective has been retrofitted. Still, I’ll be giving the talk amongst friends, so no one will mind too much if the story doesn’t really fit together, and in fact it might be a good trigger for discussion. Speaking of which, I’ll be looking forward to lots of audience participation, and such participation may take the talk in previously unplanned directions.

The talk will be given without the use of electronic aids.</description>
        <pubDate>Tue, 21 Jan 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cued14/new-perspectives-on-variational-approximations-in-span-g-span-aussian-processes-modelli.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cued14/new-perspectives-on-variational-approximations-in-span-g-span-aussian-processes-modelli.html</guid>
        
        
        <category>Lawrence-cued14</category>
        
      </item>
    
      <item>
        <title>Latent Variable Models with Gaussian Processes</title>
        <description></description>
        <pubDate>Wed, 15 Jan 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpwsthree14/latent-variable-models-with-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpwsthree14/latent-variable-models-with-gaussian-processes.html</guid>
        
        
        <category>Lawrence-gpwsThree14</category>
        
      </item>
    
      <item>
        <title>Fitting Covariance and Multi-output Gaussian Processes</title>
        <description></description>
        <pubDate>Tue, 14 Jan 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpwstwo14/fitting-covariance-and-multi-output-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpwstwo14/fitting-covariance-and-multi-output-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-gpwsTwo14</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description></description>
        <pubDate>Mon, 13 Jan 2014 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpwsone14/introduction-to-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpwsone14/introduction-to-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-gpwsOne14</category>
        
      </item>
    
      <item>
        <title>Unravelling the Big Data Revolution</title>
        <description>Modern data connectivity gives us massive uncurated data sets which present enormous challenges for modelling and inference. I’ll review where I think this is taking mathematics and speculate on the methodological and social challenges that this revolution will entail, with some final reflections on how it might affect the teaching curriculum.</description>
        <pubDate>Wed, 18 Dec 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-leeds13/unravelling-the-big-data-revolution.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-leeds13/unravelling-the-big-data-revolution.html</guid>
        
        
        <category>Lawrence-leeds13</category>
        
      </item>
    
      <item>
        <title>Unravelling the Data Revolution with Machine Learning</title>
        <description>Modern data connectivity gives us different views of the patient which need to be unified for truly personalized health care. I’ll review where I think this is taking medicine and speculate on the methodological and social challenges that this revolution will entail.</description>
        <pubDate>Thu, 14 Nov 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-necs13/unravelling-the-data-revolution-with-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-necs13/unravelling-the-data-revolution-with-machine-learning.html</guid>
        
        
        <category>Lawrence-necs13</category>
        
      </item>
    
      <item>
        <title>Personalized Health with Gaussian Processes</title>
        <description>Modern data connectivity gives us different views of the patient which need to be unified for truly personalized health care. I’ll give a personal perspective on the type of methodological challenges we expect to arise in this domain and motivate Gaussian process models as one approach to dealing with the explosion of data.</description>
        <pubDate>Mon, 04 Nov 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-leahurst13/personalized-health-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-leahurst13/personalized-health-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-leahurst13</category>
        
      </item>
    
      <item>
        <title>Deep Health: Machine Learning for Personalized Medicine</title>
        <description>I’ll give an overview of the methodological challenges we see arising in personalized medicine. These are associated with the explosion of data giving us different views of the patient which need to be unified for truly personalized health care.</description>
        <pubDate>Thu, 03 Oct 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-e4l13/deep-health-machine-learning-for-personalized-medicine.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-e4l13/deep-health-machine-learning-for-personalized-medicine.html</guid>
        
        
        <category>Lawrence-e4l13</category>
        
      </item>
    
      <item>
        <title>Probabilistic Approaches for Computational Biology and Medicine</title>
        <description>In this talk I’ll discuss some of the challenges in personalized medicine and consider some of the implications for machine learning models. I’ll introduce the probabilistic approach to machine learning, with a particular focus on Gaussian models. Giving some examples of applications I’ll discuss Bayesian approaches to regression modelling and lead into Gaussian process models.</description>
        <pubDate>Wed, 25 Sep 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlpm13/probabilistic-approaches-for-computational-biology-and-medicine.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlpm13/probabilistic-approaches-for-computational-biology-and-medicine.html</guid>
        
        
        <category>Lawrence-mlpm13</category>
        
      </item>
    
      <item>
        <title>A Unifying Probabilistic Perspective on Spectral Approaches to Dimensionality Reduction</title>
        <description>Spectral approaches to dimensionality reduction typically reduce the dimensionality of a data set through taking the eigenvectors of a Laplacian or a similarity matrix. Classical multidimensional scaling also makes use of the eigenvectors of a similarity matrix. In this talk we introduce a maximum entropy approach to designing this similarity matrix. The approach is closely related to maximum variance unfolding. Other spectral approaches such as locally linear embeddings and Laplacian eigenmaps also turn out to be closely related. Each method can be seen as a sparse Gaussian graphical model where correlations between data points (rather than across data features) are specified in the graph. This also suggests optimization via sparse inverse covariance techniques such as the graphical LASSO. The hope is that this unifying perspective will allow the relationships between these methods to be better understood and will also provide the groundwork for further research.</description>
        <pubDate>Thu, 05 Sep 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-msr13b/a-unifying-probabilistic-perspective-on-spectral-approaches-to-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-msr13b/a-unifying-probabilistic-perspective-on-spectral-approaches-to-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-msr13b</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we will introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GPLVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples. In the seminar we will briefly review dimensionality reduction via Gaussian processes, before showing how this framework can be extended to build deep models.</description>
        <pubDate>Tue, 03 Sep 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-msr13a/deep-span-gaussian-span-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-msr13a/deep-span-gaussian-span-processes.html</guid>
        
        
        <category>Lawrence-msr13a</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we will introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GPLVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples. In the seminar we will briefly review dimensionality reduction via Gaussian processes, before showing how this framework can be extended to build deep models.</description>
        <pubDate>Thu, 04 Jul 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ncaf13/deep-span-gaussian-span-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ncaf13/deep-span-gaussian-span-processes.html</guid>
        
        
        <category>Lawrence-ncaf13</category>
        
      </item>
    
      <item>
        <title>Deep Health</title>
        <description></description>
        <pubDate>Mon, 17 Jun 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-manchester13/deep-health.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-manchester13/deep-health.html</guid>
        
        
        <category>Lawrence-manchester13</category>
        
      </item>
    
      <item>
        <title>Latent Force Models: Introduction</title>
        <description></description>
        <pubDate>Thu, 13 Jun 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-lfmintro13/latent-force-models-introduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-lfmintro13/latent-force-models-introduction.html</guid>
        
        
        <category>Lawrence-lfmIntro13</category>
        
      </item>
    
      <item>
        <title>Unsupervised Learning with Gaussian Processes</title>
        <description></description>
        <pubDate>Wed, 12 Jun 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpssthree13/unsupervised-learning-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpssthree13/unsupervised-learning-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-gpssThree13</category>
        
      </item>
    
      <item>
        <title>Multioutput Gaussian Processes</title>
        <description></description>
        <pubDate>Tue, 11 Jun 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpsstwo13/multioutput-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpsstwo13/multioutput-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-gpssTwo13</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description></description>
        <pubDate>Mon, 10 Jun 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpssone13/introduction-to-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpssone13/introduction-to-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-gpssOne13</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we will introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GPLVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples. In the seminar we will briefly review dimensionality reduction via Gaussian processes, before showing how this framework can be extended to build deep models.</description>
        <pubDate>Wed, 01 May 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cambridge13/deep-span-gaussian-span-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cambridge13/deep-span-gaussian-span-processes.html</guid>
        
        
        <category>Lawrence-cambridge13</category>
        
      </item>
    
      <item>
        <title>How the Planets Affect Our Daily Lives: A Brief History of Uncertainty</title>
        <description>Within the last 400 years scientists became able to predict the future. Crystal balls were replaced with computation. Uncertainty met mathematics. This talk gives a brief history of uncertainty and prediction. You will find out how planets affect who your Facebook friends are.</description>
        <pubDate>Thu, 21 Mar 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-scienceweek_edwards13/how-the-planets-affect-our-daily-lives-a-brief-history-of-uncertainty.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-scienceweek_edwards13/how-the-planets-affect-our-daily-lives-a-brief-history-of-uncertainty.html</guid>
        
        
        <category>Lawrence-scienceweek_edwards13</category>
        
      </item>
    
      <item>
        <title>Deep Learning: What is it and What are We doing About it?</title>
        <description>In November last year, deep learning algorithms made the front page of the New York Times. What’s special about these learning algorithms? What are they being used for and how are we using them in Sheffield? In this talk I’ll explain what deep learning is, why it’s considered exciting, and what the success stories are. I’ll also explain what the problems with these learning systems are and how we are trying to address them with our own class of deep architectures being developed in our group in Sheffield.</description>
        <pubDate>Wed, 20 Mar 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-sheffield13/deep-learning-what-is-it-and-what-are-we-doing-about-it.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-sheffield13/deep-learning-what-is-it-and-what-are-we-doing-about-it.html</guid>
        
        
        <category>Lawrence-sheffield13</category>
        
      </item>
    
      <item>
        <title>How the Planets Affect Our Daily Lives: A Brief History of Uncertainty</title>
        <description>Within the last 400 years scientists became able to predict the future. Crystal balls were replaced with computation. Uncertainty met mathematics. This talk gives a brief history of uncertainty and prediction. You will find out how planets affect who your Facebook friends are.</description>
        <pubDate>Tue, 19 Mar 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-scienceweek_wilfrids13/how-the-planets-affect-our-daily-lives-a-brief-history-of-uncertainty.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-scienceweek_wilfrids13/how-the-planets-affect-our-daily-lives-a-brief-history-of-uncertainty.html</guid>
        
        
        <category>Lawrence-scienceweek_wilfrids13</category>
        
      </item>
    
      <item>
        <title>How the Planets Affect Our Daily Lives: A Brief History of Uncertainty</title>
        <description>Within the last 400 years scientists became able to predict the future. Crystal balls were replaced with computation. Uncertainty met mathematics. This talk gives a brief history of uncertainty and prediction. You will find out how planets affect who your Facebook friends are.</description>
        <pubDate>Mon, 18 Mar 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-scienceweek_birley13/how-the-planets-affect-our-daily-lives-a-brief-history-of-uncertainty.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-scienceweek_birley13/how-the-planets-affect-our-daily-lives-a-brief-history-of-uncertainty.html</guid>
        
        
        <category>Lawrence-scienceweek_birley13</category>
        
      </item>
    
      <item>
        <title>Variational Gaussian Processes</title>
        <description>In this talk we will review the variational approximation to Gaussian processes which enables Bayesian learning of latent variables. We will focus in particular on a new explanation of the variational approach that also leads to stochastic variational inference for GPs.</description>
        <pubDate>Mon, 11 Mar 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tuebingen_var13/variational-span-gaussian-span-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tuebingen_var13/variational-span-gaussian-span-processes.html</guid>
        
        
        <category>Lawrence-tuebingen_var13</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we will introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GPLVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples. In the seminar we will briefly review dimensionality reduction via Gaussian processes, before showing how this framework can be extended to build deep models.</description>
        <pubDate>Mon, 11 Mar 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tuebingen13/deep-span-gaussian-span-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tuebingen13/deep-span-gaussian-span-processes.html</guid>
        
        
        <category>Lawrence-tuebingen13</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we will introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GPLVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples. In the seminar we will briefly review dimensionality reduction via Gaussian processes, before showing how this framework can be extended to build deep models.</description>
        <pubDate>Wed, 30 Jan 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ucl13/deep-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ucl13/deep-gaussian-processes.html</guid>
        
        
        <category>Lawrence-ucl13</category>
        
      </item>
    
      <item>
        <title>Deep Gaussian Processes</title>
        <description>In this talk we will introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GPLVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples. In the seminar we will first review dimensionality reduction via Gaussian processes, before showing how this framework can be extended to build deep models.</description>
        <pubDate>Thu, 24 Jan 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-aalto13/deep-span-gaussian-span-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-aalto13/deep-span-gaussian-span-processes.html</guid>
        
        
        <category>Lawrence-aalto13</category>
        
      </item>
    
      <item>
        <title>Reproducible Research: Lessons from Machine Learning</title>
        <description></description>
        <pubDate>Tue, 15 Jan 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-reproducible13/reproducible-research-span-lessons-span-from-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-reproducible13/reproducible-research-span-lessons-span-from-machine-learning.html</guid>
        
        
        <category>Lawrence-reproducible13</category>
        
      </item>
    
      <item>
        <title>Machine Learning and the Life Sciences: from Modelling to Medicine</title>
        <description></description>
        <pubDate>Fri, 11 Jan 2013 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-infection13/machine-learning-and-the-life-sciences-from-modelling-to-medicine.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-infection13/machine-learning-and-the-life-sciences-from-modelling-to-medicine.html</guid>
        
        
        <category>Lawrence-infection13</category>
        
      </item>
    
      <item>
        <title>Life, the Universe and Machine Learning</title>
        <description>What is Machine Learning? Why is it useful for us? Machine learning algorithms are the engines that are driving forward an intelligent internet. They are allowing us to uncover the causes of cancer and helping us understand the way the universe is put together. They are suggesting who your friends are on Facebook, enabling driverless cars and flagging potentially fraudulent transactions on your credit card. To put it simply, machine learning is about understanding data. In this lecture I will try to give a sense of the challenges we face in machine learning, with a particular focus on those that have inspired my research. We will look at applications of data modelling from the early 18th century to the present, and see how they relate to modern machine learning. There will be a particular focus on dealing with &lt;i&gt;uncertainty&lt;/i&gt;: something humans are good at, but an area where computers have typically struggled. We will emphasize the role of uncertainty in data modelling and hope to persuade the audience that correct handling of uncertainty may be one of the keys to intelligent systems.</description>
        <pubDate>Thu, 06 Sep 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-inaugural12/life-the-universe-and-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-inaugural12/life-the-universe-and-machine-learning.html</guid>
        
        
        <category>Lawrence-inaugural12</category>
        
      </item>
    
      <item>
        <title>Model Based Target Identification from Expression Data</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. mRNA production is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high.

Typically researchers either use a data driven approach (such as clustering) or a model based approach (such as differential equations). In this talk we advocate hybrid techniques which have aspects of both the mechanistic and data driven models. We combine simple differential equation models with Gaussian process priors to make probabilistic models with mechanistic underpinnings. We show applications in target identification from mRNA measurements.</description>
        <pubDate>Fri, 27 Jul 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ucla12b/model-based-target-identification-from-expression-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ucla12b/model-based-target-identification-from-expression-data.html</guid>
        
        
        <category>Lawrence-ucla12b</category>
        
      </item>
    
      <item>
        <title>A Brief Introduction to Gaussian Processes</title>
        <description>Gaussian processes are non-parametric probabilistic models for function representation. In this tutorial we give a brief introduction to Gaussian process models. Using simple examples we show how, with particular choices for covariance functions (analogous to a kernel matrix in kernel methods), we can perform inference about functions using only data sampled from those functions. We give an overview of how the probabilistic interpretation allows us to fit the parameters of the covariance function.</description>
        <pubDate>Fri, 27 Jul 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ucla12a/a-brief-introduction-to-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ucla12a/a-brief-introduction-to-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-ucla12a</category>
        
      </item>
    
      <item>
        <title>Bridging the Gap Between Computational Biology and Systems Biology</title>
        <description></description>
        <pubDate>Wed, 04 Jul 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-pathsoc12/bridging-the-gap-between-computational-biology-and-systems-biology.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-pathsoc12/bridging-the-gap-between-computational-biology-and-systems-biology.html</guid>
        
        
        <category>Lawrence-pathSoc12</category>
        
      </item>
    
      <item>
        <title>Kernels for Vector Valued Functions</title>
        <description>In this talk we review kernels for vector valued functions from the perspective of Gaussian processes. Deriving a multiple output Gaussian process from the perspective of a linear dynamical system (Kalman Filter) we introduce the Intrinsic Coregionalization Model and the Linear Model of Coregionalization. We discuss how they relate to multi-task learning with GPs and the Semi Parametric Latent Factor model. Finally, we will introduce convolutional process models from the perspective of the latent force model.</description>
        <pubDate>Sat, 30 Jun 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-icmlvector12/kernels-for-vector-valued-functions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-icmlvector12/kernels-for-vector-valued-functions.html</guid>
        
        
        <category>Lawrence-icmlVector12</category>
        
      </item>
    
      <item>
        <title>Everything You Want to Know About Gaussian Processes: Gaussian Process Regression</title>
        <description></description>
        <pubDate>Sat, 16 Jun 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cvpr12_1/everything-you-want-to-know-about-span-g-span-aussian-processes-span-g-span-aussian-pro.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cvpr12_1/everything-you-want-to-know-about-span-g-span-aussian-processes-span-g-span-aussian-pro.html</guid>
        
        
        <category>Lawrence-cvpr12_1</category>
        
      </item>
    
      <item>
        <title>Everything You Want to Know About Gaussian Processes: Multioutput Covariances and Mechanistic Models</title>
        <description></description>
        <pubDate>Sat, 16 Jun 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cvpr12_2/everything-you-want-to-know-about-span-g-span-aussian-processes-multioutput-covariances.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cvpr12_2/everything-you-want-to-know-about-span-g-span-aussian-processes-multioutput-covariances.html</guid>
        
        
        <category>Lawrence-cvpr12_2</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes in Computational Biology Tutorial: Session 2</title>
        <description></description>
        <pubDate>Tue, 12 Jun 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-biopredyn12_2/span-g-span-aussian-processes-in-computational-biology-tutorial-session-2.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-biopredyn12_2/span-g-span-aussian-processes-in-computational-biology-tutorial-session-2.html</guid>
        
        
        <category>Lawrence-biopredyn12_2</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes in Computational Biology Tutorial: Multioutput Gaussian Processes and Mechanistic Models</title>
        <description></description>
        <pubDate>Tue, 12 Jun 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-biopredyn12_1/span-g-span-aussian-processes-in-computational-biology-tutorial-multioutput-span-g-spa.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-biopredyn12_1/span-g-span-aussian-processes-in-computational-biology-tutorial-multioutput-span-g-spa.html</guid>
        
        
        <category>Lawrence-biopredyn12_1</category>
        
      </item>
    
      <item>
        <title>Latent Force Models: Bridging the Divide between Mechanistic and Data Modelling Paradigms</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning and statistical approaches are typically data driven—perhaps through regularized function approximation. These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends to a spectrum of approaches we might take. In this talk we introduce latent force models. Latent force models are a new approach to data representation that model data through unknown forcing functions that drive differential equation models. By treating the unknown forcing functions with Gaussian process priors we can create probabilistic models that exhibit particular physical characteristics of interest, for example, in dynamical systems resonance and inertia. This allows us to perform a synthesis of the data driven and physical modeling paradigms.</description>
        <pubDate>Wed, 02 May 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-liverpool12/latent-force-models-bridging-the-divide-between-mechanistic-and-data-modelling-paradigm.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-liverpool12/latent-force-models-bridging-the-divide-between-mechanistic-and-data-modelling-paradigm.html</guid>
        
        
        <category>Lawrence-liverpool12</category>
        
      </item>
    
      <item>
        <title>What is Machine Learning?</title>
        <description></description>
        <pubDate>Sun, 15 Apr 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlssfour12/what-is-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlssfour12/what-is-machine-learning.html</guid>
        
        
        <category>Lawrence-mlssFour12</category>
        
      </item>
    
      <item>
        <title>Nonlinear Probabilistic Dimensionality Reduction</title>
        <description></description>
        <pubDate>Fri, 13 Apr 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlssthree12/nonlinear-probabilistic-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlssthree12/nonlinear-probabilistic-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-mlssThree12</category>
        
      </item>
    
      <item>
        <title>Spectral Approaches to Dimensionality Reduction</title>
        <description></description>
        <pubDate>Thu, 12 Apr 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlsstwo12/spectral-approaches-to-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlsstwo12/spectral-approaches-to-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-mlssTwo12</category>
        
      </item>
    
      <item>
        <title>Dimensionality Reduction: Motivation and Linear Models</title>
        <description></description>
        <pubDate>Wed, 11 Apr 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlssone12/dimensionality-reduction-motivation-and-linear-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlssone12/dimensionality-reduction-motivation-and-linear-models.html</guid>
        
        
        <category>Lawrence-mlssOne12</category>
        
      </item>
    
      <item>
        <title>Latent Force Models: Combining the Mechanistic and Data Driven Modelling Paradigms</title>
        <description>The main focus of machine learning is to combine data with assumptions that reflect our belief about the regularity of the world. This, then, allows us to generalize and make new predictions for ‘test data’. Relative to other modelling paradigms, such as those found in physics, that are based on mechanistic understandings of the world, models in machine learning typically make only weak assumptions about data.

In this talk, we argue that these weak assumptions are also mechanistic in nature. In particular, a very common assumption is smoothness, which can arise through the heat equation or other models of diffusion. Our assumption of smoothness reflects our belief in an underlying physical world in which smoothness is the norm. Strong mechanistic models, such as those used in computational fluid dynamics or climate modelling, typically impose much more rigid constraints on the data and are often inappropriate for machine learning tasks, where the model needs to be adaptive and should still perform well even when our mechanistic assumptions are not completely fulfilled. These strong mechanistic frameworks can, however, incorporate regularities beyond smoothness. Systems with inertia exhibit resonance and oscillation, and these can easily be incorporated with strong mechanistic assumptions.

We believe that the area between the strong and weak mechanistic paradigms should be a focus for much more research. For many interesting datasets we need adaptive models which include mechanistic assumptions. The latent force modelling paradigm is one way of approaching this, relying on the combination of differential equation systems which are driven, or have their initial or boundary conditions set, by Gaussian processes. The Gaussian processes provide the necessary adaptability and the differential equation encodes mechanistic assumptions. In this talk we introduce the model and demonstrate results in motion capture data and, given time, computational biology.</description>
        <pubDate>Wed, 28 Mar 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-rank12/latent-force-models-combining-the-mechanistic-and-data-driven-modelling-paradigms.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-rank12/latent-force-models-combining-the-mechanistic-and-data-driven-modelling-paradigms.html</guid>
        
        
        <category>Lawrence-rank12</category>
        
      </item>
    
      <item>
        <title>Latent force models: Combining Probabilistic and Mechanistic Modelling</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Statistical and machine learning approaches are typically data driven, perhaps through regularized function approximation.

These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take. Physics based approaches can be seen as strongly mechanistic: the mechanistic assumptions are hard-coded into the model. Data-driven approaches do incorporate assumptions that might be seen as being derived from some underlying mechanism, such as smoothness. In this sense they are weakly mechanistic.

In this talk we introduce latent force models, a new approach to data representation that models data through unknown forcing functions that drive differential equation models. By placing Gaussian process priors over the unknown forcing functions we can create probabilistic models that exhibit particular physical characteristics of interest, for example resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms: a moderately mechanistic approach. We show an application in modelling of human motion capture data.</description>
        <pubDate>Mon, 13 Feb 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-oxfordlatent12/latent-force-models-combining-probabilistic-and-mechanistic-modelling.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-oxfordlatent12/latent-force-models-combining-probabilistic-and-mechanistic-modelling.html</guid>
        
        
        <category>Lawrence-oxfordLatent12</category>
        
      </item>
    
      <item>
        <title>A Unifying Review of Spectral Methods for Dimensionality Reduction</title>
        <description>In this tutorial we will review spectral approaches to dimensionality reduction, introducing a unifying probabilistic perspective. Our unifying perspective is based on the maximum entropy principle, and the resulting probabilistic models are based on Gaussian random fields (GRFs). We will review maximum variance unfolding, Laplacian eigenmaps, locally linear embeddings and Isomap. Under this framework, these approaches can be divided into those that preserve local distances and those that don’t. For two small data sets we show that local distance preserving methods tend to perform better. Finally we use the unifying framework to relate these approaches to the Gaussian process latent variable model.</description>
        <pubDate>Mon, 13 Feb 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-oxfordunify12/a-unifying-review-of-spectral-methods-for-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-oxfordunify12/a-unifying-review-of-spectral-methods-for-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-oxfordUnify12</category>
        
      </item>
    
      <item>
        <title>Model Based Target Identification from Expression Data</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. mRNA production is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high.

Typically researchers either use a data driven approach (such as clustering) or a model based approach (such as differential equations). In this talk we advocate hybrid techniques which have aspects of both the mechanistic and data driven models. We combine simple differential equation models with Gaussian process priors to make probabilistic models with mechanistic underpinnings. We show applications in target identification from mRNA measurements.</description>
        <pubDate>Mon, 06 Feb 2012 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cruk12/model-based-target-identification-from-expression-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cruk12/model-based-target-identification-from-expression-data.html</guid>
        
        
        <category>Lawrence-cruk12</category>
        
      </item>
    
      <item>
        <title></title>
        <description></description>
        <pubDate>Mon, 12 Dec 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-biopredyn11/.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-biopredyn11/.html</guid>
        
        
        <category>Lawrence-BioPreDyn11</category>
        
      </item>
    
      <item>
        <title>A Maximum Entropy Perspective on Spectral Dimensionality Reduction</title>
        <description>Spectral approaches to dimensionality reduction typically reduce the dimensionality of a data set through taking the eigenvectors of a Laplacian or a similarity matrix. Classical multidimensional scaling also makes use of the eigenvectors of a similarity matrix. In this talk we introduce a maximum entropy approach to designing this similarity matrix. The approach is closely related to maximum variance unfolding. Other spectral approaches, e.g. locally linear embeddings, turn out to also be closely related. These methods can be seen as a sparse Gaussian graphical model where correlations between data points (rather than across data features) are specified in the graph. The hope is that this unifying perspective will allow the relationships between these methods to be better understood and will also provide the groundwork for further research.</description>
        <pubDate>Wed, 16 Nov 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cambridge11/a-maximum-entropy-perspective-on-spectral-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cambridge11/a-maximum-entropy-perspective-on-spectral-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-cambridge11</category>
        
      </item>
    
      <item>
        <title>Model Based Target Identification from Expression Data</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. mRNA production is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high.

Typically researchers either use a data driven approach (such as clustering) or a model based approach (such as differential equations). In this talk we advocate hybrid techniques which have aspects of both the mechanistic and data driven models. We combine simple differential equation models with Gaussian process priors to make probabilistic models with mechanistic underpinnings. We show applications in target identification from mRNA measurements.</description>
        <pubDate>Wed, 12 Oct 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-liverpool11/model-based-target-identification-from-expression-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-liverpool11/model-based-target-identification-from-expression-data.html</guid>
        
        
        <category>Lawrence-liverpool11</category>
        
      </item>
    
      <item>
        <title>Between Systems and Data-driven Modeling for Computational Biology: Target Identification with Gaussian Processes</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. mRNA production is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high.

Typically researchers either use a data driven approach (such as clustering) or a model based approach (such as differential equations). In this talk we advocate hybrid techniques which have aspects of both the mechanistic and data driven models. We combine simple differential equation models with Gaussian process priors to make probabilistic models with mechanistic underpinnings. We show applications in target identification from mRNA measurements.</description>
        <pubDate>Sat, 10 Sep 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-abcd11/between-systems-and-data-driven-modeling-for-computational-biology-target-identificatio.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-abcd11/between-systems-and-data-driven-modeling-for-computational-biology-target-identificatio.html</guid>
        
        
        <category>Lawrence-abcd11</category>
        
      </item>
    
      <item>
        <title>Latent Force Models</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Statistical and machine learning approaches are typically data driven—perhaps through regularized function approximation.

These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take. Physics based approaches can be seen as strongly mechanistic: the mechanistic assumptions are hard-coded into the model. Data-driven approaches do incorporate assumptions that might be seen as being derived from some underlying mechanism, such as smoothness. In this sense they are weakly mechanistic.

In this talk we introduce latent force models, a new approach to data representation that models data through unknown forcing functions that drive differential equation models. By placing Gaussian process priors over the unknown forcing functions we can create probabilistic models that exhibit particular physical characteristics of interest, for example resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms: a moderately mechanistic approach. We show an application in modelling of human motion capture data.</description>
        <pubDate>Tue, 06 Sep 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-bayes11/latent-force-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-bayes11/latent-force-models.html</guid>
        
        
        <category>Lawrence-bayes11</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes and Probabilistic Models for Dimensionality Reduction</title>
        <description>In this talk we present an overview of probabilistic approaches to dimensionality reduction and probabilistic interpretations of dimensionality reduction. We start by reviewing spectral methods and then turn to probabilistic PCA and the Gaussian process latent variable model.</description>
        <pubDate>Thu, 25 Aug 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-dagstuhl11/gaussian-processes-and-probabilistic-models-for-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-dagstuhl11/gaussian-processes-and-probabilistic-models-for-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-dagstuhl11</category>
        
      </item>
    
      <item>
        <title>Model Based Target Identification from Expression Data</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. mRNA production is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high.

Typically researchers either use a data driven approach (such as clustering) or a model based approach (such as differential equations). In this talk we advocate hybrid techniques which have aspects of both the mechanistic and data driven models. We combine simple differential equation models with Gaussian process priors to make probabilistic models with mechanistic underpinnings. We show applications in target identification from mRNA measurements.</description>
        <pubDate>Tue, 07 Jun 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-krebs11/model-based-target-identification-from-expression-data.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-krebs11/model-based-target-identification-from-expression-data.html</guid>
        
        
        <category>Lawrence-krebs11</category>
        
      </item>
    
      <item>
        <title>A unifying probabilistic perspective on spectral approaches to dimensionality reduction</title>
        <description>Spectral approaches to dimensionality reduction typically reduce the dimensionality of a data set through taking the eigenvectors of a Laplacian or a similarity matrix. Classical multidimensional scaling also makes use of the eigenvectors of a similarity matrix. In this talk we introduce a maximum entropy approach to designing this similarity matrix. The approach is closely related to maximum variance unfolding. Other spectral approaches such as locally linear embeddings and Laplacian eigenmaps also turn out to be closely related. Each method can be seen as a sparse Gaussian graphical model where correlations between data points (rather than across data features) are specified in the graph. This also suggests optimization via sparse inverse covariance techniques such as the graphical LASSO. The hope is that this unifying perspective will allow the relationships between these methods to be better understood and will also provide the groundwork for further research.</description>
        <pubDate>Tue, 31 May 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-bonn11/a-unifying-probabilistic-perspective-on-spectral-approaches-to-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-bonn11/a-unifying-probabilistic-perspective-on-spectral-approaches-to-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-bonn11</category>
        
      </item>
    
      <item>
        <title>Advanced Use of Gaussian Processes</title>
        <description></description>
        <pubDate>Thu, 07 Apr 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-siena11b/advanced-use-of-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-siena11b/advanced-use-of-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-siena11b</category>
        
      </item>
    
      <item>
        <title>Introduction to Gaussian Processes</title>
        <description></description>
        <pubDate>Wed, 06 Apr 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-siena11a/introduction-to-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-siena11a/introduction-to-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-siena11a</category>
        
      </item>
    
      <item>
        <title>Latent Force Models</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning approaches are typically data driven—perhaps through regularized function approximation. These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take. In this talk we introduce latent force models, a new approach to data representation that models data through unknown forcing functions that drive differential equation models. By placing Gaussian process priors over the unknown forcing functions we can create probabilistic models that exhibit particular physical characteristics of interest, for example resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms. We will show applications of these models in systems biology and (given time) modelling of human motion capture data.</description>
        <pubDate>Wed, 16 Mar 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-exeter11/latent-force-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-exeter11/latent-force-models.html</guid>
        
        
        <category>Lawrence-exeter11</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensional Reduction with the Gaussian Process Latent Variable Model</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. Having introduced the GP-LVM we will review extensions to the algorithm. Given time we will review dynamical extensions, Bayesian approaches to dimensionality determination, and learning of large data sets. We will demonstrate the application of the model and its extensions to a range of data sets, including human motion data, speech data and video.</description>
        <pubDate>Wed, 09 Mar 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-loughborough11/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-loughborough11/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</guid>
        
        
        <category>Lawrence-loughborough11</category>
        
      </item>
    
      <item>
        <title>A Unifying Probabilistic Perspective on Spectral Approaches to Dimensionality Reduction</title>
        <description>Spectral approaches to dimensionality reduction typically reduce the dimensionality of a data set through taking the eigenvectors of a Laplacian or a similarity matrix. Classical multidimensional scaling also makes use of the eigenvectors of a similarity matrix. In this talk we introduce a maximum entropy approach to designing this similarity matrix. The approach is closely related to maximum variance unfolding. Other spectral approaches such as locally linear embeddings and Laplacian eigenmaps also turn out to be closely related. Each method can be seen as a sparse Gaussian graphical model where correlations between data points (rather than across data features) are specified in the graph. This also suggests optimization via sparse inverse covariance techniques such as the graphical LASSO. The hope is that this unifying perspective will allow the relationships between these methods to be better understood and will also provide the groundwork for further research.</description>
        <pubDate>Tue, 01 Mar 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-edinburgh11/a-unifying-probabilistic-perspective-on-spectral-approaches-to-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-edinburgh11/a-unifying-probabilistic-perspective-on-spectral-approaches-to-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-edinburgh11</category>
        
      </item>
    
      <item>
        <title>Between Systems and Data-driven Modeling for Computational Biology: Target Identification with Gaussian Processes</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. mRNA production is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high.

Typically researchers either use a data driven approach (such as clustering) or a model based approach (such as differential equations). In this talk we advocate hybrid techniques which have aspects of both the mechanistic and data driven models. We combine simple differential equation models with Gaussian process priors to make probabilistic models with mechanistic underpinnings. We show applications in target identification from mRNA measurements.</description>
        <pubDate>Thu, 27 Jan 2011 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-smpgd11/between-systems-and-data-driven-modeling-for-computational-biology-target-identificatio.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-smpgd11/between-systems-and-data-driven-modeling-for-computational-biology-target-identificatio.html</guid>
        
        
        <category>Lawrence-smpgd11</category>
        
      </item>
    
      <item>
        <title>A Probabilistic Perspective on Spectral Dimensionality Reduction</title>
        <description>Spectral approaches to dimensionality reduction typically reduce the dimensionality of a data set through taking the eigenvectors of a Laplacian or a similarity matrix. Classical multidimensional scaling also makes use of the eigenvectors of a similarity matrix. In this talk we introduce a maximum entropy approach to designing this similarity matrix. The approach is closely related to maximum variance unfolding. Other spectral approaches such as locally linear embeddings and Laplacian eigenmaps also turn out to be closely related. Each method can be seen as a sparse Gaussian graphical model where correlations between data points (rather than across data features) are specified in the graph. This also suggests optimization via sparse inverse covariance techniques such as the graphical LASSO. The hope is that this unifying perspective will allow the relationships between these methods to be better understood and will also provide the groundwork for further research.</description>
        <pubDate>Sat, 11 Dec 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-nipsw10/a-probabilistic-perspective-on-spectral-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-nipsw10/a-probabilistic-perspective-on-spectral-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-nipsw10</category>
        
      </item>
    
      <item>
        <title>A Probabilistic Perspective on Spectral Dimensionality Reduction</title>
        <description>Spectral approaches to dimensionality reduction typically reduce the dimensionality of a data set through taking the eigenvectors of a Laplacian or a similarity matrix. Classical multidimensional scaling also makes use of the eigenvectors of a similarity matrix. In this talk we introduce a maximum entropy approach to designing this similarity matrix. The approach is closely related to maximum variance unfolding. Other spectral approaches such as locally linear embeddings and Laplacian eigenmaps also turn out to be closely related. Each method can be seen as a sparse Gaussian graphical model where correlations between data points (rather than across data features) are specified in the graph. This also suggests optimization via sparse inverse covariance techniques such as the graphical LASSO. The hope is that this unifying perspective will allow the relationships between these methods to be better understood and will also provide the groundwork for further research.</description>
        <pubDate>Thu, 11 Nov 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-aaai10/a-probabilistic-perspective-on-spectral-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-aaai10/a-probabilistic-perspective-on-spectral-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-aaai10</category>
        
      </item>
    
      <item>
        <title>Latent Force Models</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning approaches are typically data driven—perhaps through regularized function approximation. These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take. In this talk we introduce latent force models, a new approach to data representation that models data through unknown forcing functions that drive differential equation models. By placing Gaussian process priors over the unknown forcing functions we can create probabilistic models that exhibit particular physical characteristics of interest, for example resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms. We will show applications of these models in systems biology and (given time) modelling of human motion capture data.</description>
        <pubDate>Thu, 04 Nov 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-lfmsheffield10/latent-force-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-lfmsheffield10/latent-force-models.html</guid>
        
        
        <category>Lawrence-lfmSheffield10</category>
        
      </item>
    
      <item>
        <title>A Probabilistic Perspective on Spectral Dimensionality Reduction</title>
        <description>Spectral approaches to dimensionality reduction typically reduce the dimensionality of a data set by taking the eigenvectors of a Laplacian or a similarity matrix. Classical multidimensional scaling also makes use of the eigenvectors of a similarity matrix. In this talk we introduce a maximum entropy approach to designing this similarity matrix. The approach is closely related to maximum variance unfolding, and other spectral approaches such as locally linear embeddings and Laplacian eigenmaps also turn out to be closely related. Each method can be seen as a sparse Gaussian graphical model where correlations between data points (rather than across data features) are specified in the graph. This also suggests optimization via sparse inverse covariance techniques such as the graphical LASSO. The hope is that this unifying perspective will allow the relationships between these methods to be better understood and will also provide the groundwork for further research.</description>
        <pubDate>Wed, 20 Oct 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-aalto10/a-probabilistic-perspective-on-spectral-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-aalto10/a-probabilistic-perspective-on-spectral-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-aalto10</category>
        
      </item>
    
      <item>
        <title>Bayesian approaches to Transcription Factor Target Identification</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. Production of mRNA is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high.

In this lecture we introduce Bayesian approaches to target identification which make use of sampling approaches to rank candidate lists of targets. We will begin with an introduction to the target identification problem and an overview of the power of Bayesian approaches in solving it. We will then consider how probabilistic models such as Gaussian processes can be used for ranking potential targets of a transcription factor. These models are simple enough to allow genome-wide target identification, but rich enough to encode dynamical behavior, allowing us to identify putative targets even when decay rates are low.</description>
        <pubDate>Sun, 10 Oct 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-eurogene10/bayesian-approaches-to-transcription-factor-target-identification.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-eurogene10/bayesian-approaches-to-transcription-factor-target-identification.html</guid>
        
        
        <category>Lawrence-eurogene10</category>
        
      </item>
    
      <item>
        <title>Making Implementations Available for the Research Community</title>
        <description>Machine learning research is either inspired by a particular application, or by a general desire to make technology more “intelligent”. In modern machine learning most methodological development is mathematically inspired and results in an algorithm for optimization or fitting of a model to data. Design choices in implementation of an algorithm can have a significant effect on the quality of results. Decisions such as model initialization and data pre-processing are all part of the implementation. Necessarily, space constraints sometimes mean that such details are not included in the associated paper. It seems clear that the paper only tells part of the story. Implementations need to be made available at the time of submission of the paper, so that the full story may be followed. In our research group we have done this since 2001. In this talk I will make the arguments in favour of doing this universally and give personal experiences of the results.</description>
        <pubDate>Wed, 06 Oct 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-validation10/making-implementations-available-for-the-research-community.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-validation10/making-implementations-available-for-the-research-community.html</guid>
        
        
        <category>Lawrence-validation10</category>
        
      </item>
    
      <item>
        <title>Latent Force Models</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning approaches are typically data driven—perhaps through regularized function approximation. These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends to a spectrum of approaches we might take. In this talk we introduce latent force models. Latent force models are a new approach to data representation that model data through unknown forcing functions that drive differential equation models. By treating the unknown forcing functions with Gaussian process priors we can create probabilistic models that exhibit particular physical characteristics of interest, for example, in dynamical systems resonance and inertia. This allows us to perform a synthesis of the data driven and physical modeling paradigms. We will show applications of these models in systems biology and (given time) modelling of human motion capture data.</description>
        <pubDate>Mon, 27 Sep 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-phylogenetics10/latent-force-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-phylogenetics10/latent-force-models.html</guid>
        
        
        <category>Lawrence-phylogenetics10</category>
        
      </item>
    
      <item>
        <title>PRIB Tutorial: Gaussian Processes and Gene Regulation</title>
        <description>Computational biology models are often missing information, such as the concentration of biochemical species of interest. One approach to dealing with this missing information is to place a probabilistic prior over the missing data. One possible choice for such a prior is a Gaussian process.

In this tutorial we will give an introduction to Gaussian processes. We will give simple examples of Gaussian processes in regression and interpolation. We will then show how Gaussian processes can be incorporated with differential equation models to give probabilistic models for transcription. Such models can then be used to rank potential targets of given transcription factors.</description>
        <pubDate>Wed, 22 Sep 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tutorialprib10/span-prib-span-tutorial-span-g-span-aussian-processes-and-gene-regulation.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tutorialprib10/span-prib-span-tutorial-span-g-span-aussian-processes-and-gene-regulation.html</guid>
        
        
        <category>Lawrence-tutorialPRIB10</category>
        
      </item>
    
      <item>
        <title>Between Systems and Data-driven Modeling for Computational Biology: Target Identification with Gaussian Processes</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. Production of mRNA is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high. Typically researchers either use a data driven approach (such as clustering) or a model based approach (such as differential equations). In this talk we advocate hybrid techniques which have aspects of the mechanistic and data driven models. We combine simple differential equation models with Gaussian process priors to make probabilistic models with mechanistic underpinnings. We show applications in target identification from mRNA measurements.</description>
        <pubDate>Tue, 27 Jul 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ibsb10/between-systems-and-data-driven-modeling-for-computational-biology-target-identificatio.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ibsb10/between-systems-and-data-driven-modeling-for-computational-biology-target-identificatio.html</guid>
        
        
        <category>Lawrence-ibsb10</category>
        
      </item>
    
      <item>
        <title>Latent Force Models</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning approaches are typically data driven—perhaps through regularized function approximation.

These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take.

In this talk we introduce latent force models. Latent force models are a new approach to data representation that model data through unknown forcing functions that drive differential equation models. By treating the unknown forcing functions with Gaussian process priors we can create probabilistic models that exhibit particular physical characteristics of interest, for example, resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms. We will show applications of these models in systems biology and modelling of human motion capture data.</description>
        <pubDate>Mon, 01 Mar 2010 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-inference10/latent-force-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-inference10/latent-force-models.html</guid>
        
        
        <category>Lawrence-inference10</category>
        
      </item>
    
      <item>
        <title>Transfer Learning and Multiple Output Kernel Functions</title>
        <description>A standard Bayesian approach to transfer learning is to construct hierarchical probabilistic models. Learning tasks are typically related in the model through conditional independencies of the variables/parameters. Many of the variables are unobserved. Marginalization of the unobserved variables and Bayesian treatment of parameters induces structure and correlations between the tasks. Gaussian processes are prior distributions over functions: kernel functions are the covariances associated with these priors. A Gaussian process can be set up to have multiple outputs. However, for these outputs to have correlation between them a covariance function that models correlations between outputs is required. Equivalently we need to develop multiple output kernel functions (also known as multitask kernel functions, or structured output kernels). In this talk we will briefly review work in creating multiple output kernels before focusing on models represented by convolution processes. We will arrive at convolution processes through physical interpretations of our models. We will try to illustrate these models with a range of real world examples of both transfer learning and other applications.</description>
        <pubDate>Sat, 12 Dec 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tlsd09/transfer-learning-and-multiple-output-kernel-functions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tlsd09/transfer-learning-and-multiple-output-kernel-functions.html</guid>
        
        
        <category>Lawrence-tlsd09</category>
        
      </item>
    
      <item>
        <title>Latent Force Models</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning approaches are typically data driven—perhaps through regularized function approximation.

These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take.

In this talk we introduce latent force models. Latent force models are a new approach to data representation that model data through unknown forcing functions that drive differential equation models. By treating the unknown forcing functions with Gaussian process priors we can create probabilistic models that exhibit particular physical characteristics of interest, for example, resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms. We will show applications of these models in systems biology and modelling of human motion capture data.</description>
        <pubDate>Wed, 25 Nov 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-kcl09/latent-force-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-kcl09/latent-force-models.html</guid>
        
        
        <category>Lawrence-kcl09</category>
        
      </item>
    
      <item>
        <title>Nonlinear Response in Gaussian Process Models of Transcription</title>
        <description></description>
        <pubDate>Thu, 29 Oct 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tigem09/nonlinear-response-in-span-g-span-aussian-process-models-of-transcription.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tigem09/nonlinear-response-in-span-g-span-aussian-process-models-of-transcription.html</guid>
        
        
        <category>Lawrence-tigem09</category>
        
      </item>
    
      <item>
        <title>Model Based Target Identification from Gene Expression with Gaussian Processes</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. Production of mRNA is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high. In this talk we advocate model based target identification. We develop simple probabilistic models of transcription (and translation) which encode mRNA (or transcription factor) production and decay. Our models are simple enough to allow genome-wide target identification, but rich enough to encode dynamical behavior, allowing us to identify putative targets even when decay rates are low.</description>
        <pubDate>Wed, 28 Oct 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-napoli09/model-based-target-identification-from-gene-expression-with-span-g-span-aussian-process.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-napoli09/model-based-target-identification-from-gene-expression-with-span-g-span-aussian-process.html</guid>
        
        
        <category>Lawrence-napoli09</category>
        
      </item>
    
      <item>
        <title>Latent Force Models</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning approaches are typically data driven—perhaps through regularized function approximation.

These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take.

In this talk we introduce latent force models. Latent force models are a new approach to data representation that model data through unknown forcing functions that drive differential equation models. By treating the unknown forcing functions with Gaussian process priors we can create probabilistic models that exhibit particular physical characteristics of interest, for example, resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms. We will show applications of these models in systems biology and modelling of human motion capture data.</description>
        <pubDate>Fri, 23 Oct 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-nyu09/latent-force-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-nyu09/latent-force-models.html</guid>
        
        
        <category>Lawrence-nyu09</category>
        
      </item>
    
      <item>
        <title>Latent Force Models</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning approaches are typically data driven—perhaps through regularized function approximation.

These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take.

In this talk we introduce latent force models. Latent force models are a new approach to data representation that model data through unknown forcing functions that drive differential equation models. By treating the unknown forcing functions with Gaussian process priors we can create probabilistic models that exhibit particular physical characteristics of interest, for example, resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms. We will show applications of these models in systems biology and modelling of human motion capture data.</description>
        <pubDate>Wed, 21 Oct 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-google09/latent-force-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-google09/latent-force-models.html</guid>
        
        
        <category>Lawrence-google09</category>
        
      </item>
    
      <item>
        <title>Model Based Target Identification from Gene Expression with Gaussian Processes</title>
        <description>A simple approach to target identification through gene expression studies has been to cluster the expression profiles and look for coregulated genes within clusters. Within systems biology, mechanistic models of gene expression are typically constructed through differential equations. Production of mRNA is taken to be proportional to transcription factor activity (with the proportionality given by the sensitivity) and the mRNA is assumed to decay at a particular rate. The assumption that coregulated genes have similar profiles is equivalent to assuming both the decay and the sensitivity are high. In this talk we advocate model based target identification. We develop simple probabilistic models of transcription (and translation) which encode mRNA (or transcription factor) production and decay. Our models are simple enough to allow genome-wide target identification, but rich enough to encode dynamical behavior, allowing us to identify putative targets even when decay rates are low.</description>
        <pubDate>Mon, 19 Oct 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-jhu09/model-based-target-identification-from-gene-expression-with-span-g-span-aussian-process.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-jhu09/model-based-target-identification-from-gene-expression-with-span-g-span-aussian-process.html</guid>
        
        
        <category>Lawrence-jhu09</category>
        
      </item>
    
      <item>
        <title>Latent Force Modelling with Gaussian Processes</title>
        <description>Physics based approaches to data modeling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning typically focuses on data driven approaches—perhaps through regularized function approximation.

These two approaches to data modeling are often seen as polar opposites, but in reality they are two different ends of a spectrum of approaches we might take.

In this talk we introduce latent force models. Latent force models are a new approach to data representation that model data through unknown forcing functions that drive differential equation models. By treating the unknown forcing functions with Gaussian process priors we can create probabilistic models that exhibit particular physical characteristics of interest, for example, resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data driven and physical modeling paradigms. We will show applications of these models in systems biology and modelling of human motion capture data.</description>
        <pubDate>Fri, 09 Oct 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-newcastle09/latent-force-modelling-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-newcastle09/latent-force-modelling-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-newcastle09</category>
        
      </item>
    
      <item>
        <title>Latent Force Models with Gaussian Processes</title>
        <description></description>
        <pubDate>Thu, 24 Sep 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-inspire09/latent-force-models-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-inspire09/latent-force-models-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-inspire09</category>
        
      </item>
    
      <item>
        <title>Efficient Multiple Output Convolution Processes for Multiple Task Learning</title>
        <description></description>
        <pubDate>Wed, 16 Sep 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-warwick09/efficient-multiple-output-convolution-processes-for-multiple-task-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-warwick09/efficient-multiple-output-convolution-processes-for-multiple-task-learning.html</guid>
        
        
        <category>Lawrence-warwick09</category>
        
      </item>
    
      <item>
        <title>Dealing with High Dimensional Data with Dimensionality Reduction</title>
        <description></description>
        <pubDate>Sun, 06 Sep 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-interspeech09/dealing-with-high-dimensional-data-with-dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-interspeech09/dealing-with-high-dimensional-data-with-dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-interspeech09</category>
        
      </item>
    
      <item>
        <title>Latent Force Models and Multiple Output Gaussian Processes</title>
        <description>We are used to dealing with the situation where we have a latent variable. Often we assume this latent variable to be independently drawn from a distribution, e.g. probabilistic PCA or factor analysis. This simplification is often extended for temporal data where tractable Markovian independence assumptions are used (e.g. Kalman filters or hidden Markov models). In this talk we will consider the more general case where the latent variable is a forcing function in a differential equation model. We will show how for some simple ordinary differential equations the latent variable can be dealt with analytically for particular Gaussian process priors over the latent force. We will introduce the general framework and present results in systems biology and motion capture.</description>
        <pubDate>Thu, 23 Jul 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-lfm_slim09/latent-force-models-and-multiple-output-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-lfm_slim09/latent-force-models-and-multiple-output-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-lfm_slim09</category>
        
      </item>
    
      <item>
        <title>Latent Force Models with Gaussian Processes</title>
        <description>We are used to dealing with the situation where we have a latent variable. Often we assume this latent variable to be independently drawn from a distribution, e.g. probabilistic PCA or factor analysis. This simplification is often extended for temporal data where tractable Markovian independence assumptions are used (e.g. Kalman filters or hidden Markov models). In this talk we will consider the more general case where the latent variable is a forcing function in a differential equation model. We will firstly give a brief introduction to Gaussian processes, then we will show how for some simple ordinary differential equations the latent variable can be dealt with analytically for particular Gaussian process priors over the latent force. We will introduce the general framework and present results in systems biology and motion capture.</description>
        <pubDate>Mon, 13 Jul 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-lfm_cagliary09/latent-force-models-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-lfm_cagliary09/latent-force-models-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-lfm_cagliary09</category>
        
      </item>
    
      <item>
        <title>Non-linear Matrix Factorization with Gaussian Processes</title>
        <description>A popular approach to collaborative filtering is matrix factorization. In this talk we consider “probabilistic matrix factorization” and, by taking a latent variable model perspective, we show its equivalence to Bayesian PCA. This inspires us to consider probabilistic PCA and its non-linear extension, the Gaussian process latent variable model (GP-LVM), as an approach for probabilistic non-linear matrix factorization. We apply our approach to benchmark movie recommender data sets. The results show better than previous state-of-the-art performance.</description>
        <pubDate>Fri, 03 Jul 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-emmds09/non-linear-matrix-facorization-with-span-g-span-aussian-proceses.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-emmds09/non-linear-matrix-facorization-with-span-g-span-aussian-proceses.html</guid>
        
        
        <category>Lawrence-emmds09</category>
        
      </item>
    
      <item>
        <title>An Introduction to Systems Biology from a Machine Learning Perspective II</title>
        <description></description>
        <pubDate>Tue, 23 Jun 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tutii09/an-introduction-to-systems-biology-from-a-machine-learning-perspective-span-ii-span.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tutii09/an-introduction-to-systems-biology-from-a-machine-learning-perspective-span-ii-span.html</guid>
        
        
        <category>Lawrence-tutII09</category>
        
      </item>
    
      <item>
        <title>An Introduction to Systems Biology from a Machine Learning Perspective</title>
        <description></description>
        <pubDate>Mon, 22 Jun 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tut09/an-introduction-to-systems-biology-from-a-machine-learning-perspective.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tut09/an-introduction-to-systems-biology-from-a-machine-learning-perspective.html</guid>
        
        
        <category>Lawrence-tut09</category>
        
      </item>
    
      <item>
        <title>Non-linear Matrix Factorization with Gaussian Processes</title>
        <description></description>
        <pubDate>Tue, 14 Apr 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-learning09/non-linear-matrix-factorization-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-learning09/non-linear-matrix-factorization-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-learning09</category>
        
      </item>
    
      <item>
        <title>Estimation of Multiple Transcription Factor Activities using ODEs and Gaussian Processes</title>
        <description></description>
        <pubDate>Wed, 01 Apr 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-licsb09/estimation-of-multiple-transcription-factor-activities-using-odes-and-span-g-span-aussi.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-licsb09/estimation-of-multiple-transcription-factor-activities-using-odes-and-span-g-span-aussi.html</guid>
        
        
        <category>Lawrence-licsb09</category>
        
      </item>
    
      <item>
        <title>Python in Machine Learning</title>
        <description>Is Python a viable replacement for MATLAB?

We are incredibly reliant on MATLAB, but should we be looking elsewhere for our ML programming needs? In this ML lunch I will try and share my recent experiences with Python and machine learning: good and bad. The main questions I think we should be considering are:

Should we be trying to move to Python for our research?

Should we be using Python in our teaching?

I don’t know the answer, but I’ll try and use this MLO lunch to start the debate!</description>
        <pubDate>Wed, 25 Mar 2009 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-python09/python-in-machine-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-python09/python-in-machine-learning.html</guid>
        
        
        <category>Lawrence-python09</category>
        
      </item>
    
      <item>
        <title>GP-LVM for Data Consolidation</title>
        <description></description>
        <pubDate>Sat, 20 Dec 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpdc08/span-gp-lvm-span-for-data-consolidation.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpdc08/span-gp-lvm-span-for-data-consolidation.html</guid>
        
        
        <category>Lawrence-gpdc08</category>
        
      </item>
    
      <item>
        <title>Latent Force Models with Gaussian Processes</title>
        <description>We are used to dealing with the situation where we have a latent variable. Often we assume this latent variable to be independently drawn from a distribution, e.g. probabilistic PCA or factor analysis. This simplification is often extended for temporal data where tractable Markovian independence assumptions are used (e.g. Kalman filters or hidden Markov models). In this talk we will consider the more general case where the latent variable is a forcing function in a differential equation model. We will firstly give a brief introduction to Gaussian processes, then we will show how for some simple ordinary differential equations the latent variable can be dealt with analytically for particular Gaussian process priors over the latent force. We will introduce the general framework and present results in systems biology.\
\
Joint work with Magnus Rattray, Mauricio Álvarez, Pei Gao, Antti Honkela, David Luengo, Guido Sanguinetti and Michalis K. Titsias.</description>
        <pubDate>Thu, 16 Oct 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-bristol08/latent-force-models-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-bristol08/latent-force-models-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-bristol08</category>
        
      </item>
    
      <item>
        <title>Inference in Ordinary Differential Equations with Latent Functions through &lt;span&gt;G&lt;/span&gt;aussian Processes</title>
        <description>A key problem in biochemical interaction networks is the estimation of the structure and parameters of the genetic, metabolic and protein interaction networks that underpin all biological processes, where many of the key chemical species are unobserved. We present a framework for Bayesian marginalisation of these latent chemical species through Gaussian process priors. We demonstrate our general approach on three different biological examples of single input motifs, including both activation and repression of transcription. We focus in particular on the problem of inferring transcription factor activity when the concentration of active protein cannot easily be measured. The uncertainty in the inferred transcription factor activity can be integrated out in order to derive a likelihood function that can be used for the estimation of regulatory model parameters.</description>
        <pubDate>Wed, 08 Oct 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-rss08/inference-in-ordinary-differential-equations-with-latent-functions-through-span-g-span.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-rss08/inference-in-ordinary-differential-equations-with-latent-functions-through-span-g-span.html</guid>
        
        
        <category>Lawrence-rss08</category>
        
      </item>
    
      <item>
        <title>Dynamics with Gaussian Processes</title>
        <description>We are used to dealing with the situation where we have a latent variable. Often we assume this latent variable to be independently drawn from a distribution, *e.g.* probabilistic PCA or factor analysis. This simplification is often extended for temporal data where tractable Markovian independence assumptions are used (*e.g.* Kalman filters or hidden Markov models). In this talk we will consider the more general case where the latent variable is a forcing function in a differential equation model. We will firstly give a brief introduction to Gaussian processes, then we will show how for some simple ordinary differential equations the latent variable can be dealt with analytically for particular Gaussian process priors over the latent force. We will introduce the general framework and present results in systems biology.\
\
Joint work with Magnus Rattray, Mauricio Álvarez, Pei Gao, Antti Honkela, David Luengo, Guido Sanguinetti and Michalis K. Titsias.</description>
        <pubDate>Wed, 10 Sep 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ncaf08/dynamics-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ncaf08/dynamics-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-ncaf08</category>
        
      </item>
    
      <item>
        <title>Ambiguity Modelling in Latent Spaces</title>
        <description>We are interested in the situation where we have two or more representations of an underlying phenomenon. In particular we are interested in the scenario where the representations are complementary. This implies that a single individual representation is not sufficient to fully discriminate a specific instance of the underlying phenomenon; it also means that each representation is an ambiguous representation of the other complementary spaces. In this paper we present a latent variable model capable of consolidating multiple complementary representations. Our method extends canonical correlation analysis by introducing additional latent spaces that are specific to the different representations, thereby explaining the full variance of the observations. These additional spaces, which explain the representation-specific variance, separately model the variance in each representation that is ambiguous to the others. We develop a spectral algorithm for fast computation of the embeddings and a probabilistic model (based on Gaussian processes) for validation and inference. The proposed model has several potential application areas; we demonstrate its use for multi-modal regression on a benchmark human pose estimation data set.</description>
        <pubDate>Mon, 08 Sep 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mlmi08/ambiguity-modelling-in-latent-spaces.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mlmi08/ambiguity-modelling-in-latent-spaces.html</guid>
        
        
        <category>Lawrence-mlmi08</category>
        
      </item>
    
      <item>
        <title>Latent Force Models with Gaussian Processes</title>
        <description>We are used to dealing with the situation where we have a latent variable. Often we assume this latent variable to be independently drawn from a distribution, *e.g.* probabilistic PCA or factor analysis. This simplification is often extended for temporal data where tractable Markovian independence assumptions are used (*e.g.* Kalman filters or hidden Markov models).\
\
In this talk we will consider the more general case where the latent variable is a forcing function in a differential equation model. We will show how for some simple ordinary differential equations the latent variable can be dealt with analytically for particular Gaussian process priors over the latent force. In this talk we will introduce the general framework, present results in systems biology and preview extensions.\
\
Joint work with Magnus Rattray, Mauricio Álvarez, Pei Gao, Antti Honkela, David Luengo, Guido Sanguinetti and Michalis K. Titsias.</description>
        <pubDate>Sat, 06 Sep 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-bark08/latent-force-models-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-bark08/latent-force-models-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-bark08</category>
        
      </item>
    
      <item>
        <title>Dimensionality Reduction the Probabilistic Way</title>
        <description>&lt;p&gt;ICML Tutorial 2008&lt;/p&gt;</description>
        <pubDate>Sat, 05 Jul 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/notes/dimensionality-reduction-the-probaiblistic-way.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/notes/dimensionality-reduction-the-probaiblistic-way.html</guid>
        
        
        <category>notes</category>
        
      </item>
    
      <item>
        <title>Statistical inference in systems biology through Gaussian processes and ordinary differential equations</title>
        <description>In this talk we will summarise recent work from our group in Manchester on inferring latent biochemical species in biological systems using Gaussian processes and differential equations. A key problem in biological data is when particular biochemical species of interest are not directly measurable. We will show how the framework of Gaussian processes can be brought to bear on the problem and values of latent chemical species can be inferred given data and a differential equation model.</description>
        <pubDate>Tue, 17 Jun 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-warwick08/statistical-inference-in-systems-biology-through-span-g-span-aussian-processes-and-ordi.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-warwick08/statistical-inference-in-systems-biology-through-span-g-span-aussian-processes-and-ordi.html</guid>
        
        
        <category>Lawrence-warwick08</category>
        
      </item>
    
      <item>
        <title>Statistical Inference in Systems Biology through Gaussian Processes and Ordinary Differential Equations</title>
        <description>In this talk we will summarise recent work from our group in Manchester on inferring ‘latent biochemical species’ in biological systems using Gaussian processes and differential equations. A key problem in biological data is when particular biochemical species of interest are not directly measurable. We will show how the framework of Gaussian processes can be brought to bear on the problem and values of latent chemical species can be inferred given data and a differential equation model.</description>
        <pubDate>Wed, 07 May 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-sysbiointrob08/statistical-inference-in-systems-biology-through-span-g-span-aussian-processes-and-ordi.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-sysbiointrob08/statistical-inference-in-systems-biology-through-span-g-span-aussian-processes-and-ordi.html</guid>
        
        
        <category>Lawrence-sysbioIntroB08</category>
        
      </item>
    
      <item>
        <title>An Introduction to Systems Biology from a Machine Learning Perspective</title>
        <description>In this talk we will introduce some of the challenges in systems biology and discuss the efforts being made to address them using statistical inference. General biological background will be interlaced with case studies that illustrate the salient issues in systems biology from the perspective of a machine learning researcher.</description>
        <pubDate>Mon, 05 May 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-sysbiointroa08/an-introduction-to-systems-biology-from-a-machine-learning-perspective.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-sysbiointroa08/an-introduction-to-systems-biology-from-a-machine-learning-perspective.html</guid>
        
        
        <category>Lawrence-sysbioIntroA08</category>
        
      </item>
    
      <item>
        <title>Inferring Latent Functions with Gaussian Processes in Differential Equations</title>
        <description>In this talk we will present recent work from Manchester on inference of latent functions in differential equations. Simple computational models for systems biology make use of ordinary differential equations that are driven by an often unobserved input function. We will describe how probabilistic inference over these latent functions may be performed through Gaussian process prior distributions. We will describe the algorithms and show results on toy problems and real biological systems.</description>
        <pubDate>Wed, 30 Apr 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-latentfunc08/inferring-latent-functions-with-span-g-span-aussian-processes-in-differential-equations.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-latentfunc08/inferring-latent-functions-with-span-g-span-aussian-processes-in-differential-equations.html</guid>
        
        
        <category>Lawrence-latentFunc08</category>
        
      </item>
    
      <item>
        <title>Learning and Inference with Gaussian Processes: An Overview of &lt;span&gt;B&lt;/span&gt;ayesian Inference and &lt;span&gt;G&lt;/span&gt;aussian Processes</title>
        <description></description>
        <pubDate>Tue, 01 Apr 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gpbayes08/learning-and-inference-with-span-g-span-aussian-processes-an-overview-of-span-b-span-ay.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gpbayes08/learning-and-inference-with-span-g-span-aussian-processes-an-overview-of-span-b-span-ay.html</guid>
        
        
        <category>Lawrence-gpbayes08</category>
        
      </item>
    
      <item>
        <title>Human Motion Modelling with Gaussian Processes</title>
        <description></description>
        <pubDate>Thu, 07 Feb 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-newton08/human-motion-modelling-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-newton08/human-motion-modelling-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-newton08</category>
        
      </item>
    
      <item>
        <title>Human Motion Modelling through Dimensional Reduction with Gaussian Processes</title>
        <description></description>
        <pubDate>Tue, 29 Jan 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-human08/human-motion-modelling-through-dimensional-reduction-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-human08/human-motion-modelling-through-dimensional-reduction-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-human08</category>
        
      </item>
    
      <item>
        <title>TP1: Leveraging Complex Prior Knowledge in Learning</title>
        <description></description>
        <pubDate>Mon, 28 Jan 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-thematic08/span-tp1-span-leveraging-complex-prior-knowledge-in-learning.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-thematic08/span-tp1-span-leveraging-complex-prior-knowledge-in-learning.html</guid>
        
        
        <category>Lawrence-thematic08</category>
        
      </item>
    
      <item>
        <title>Dimensionality Reduction</title>
        <description>We approach dimensionality reduction from the perspective of multidimensional scaling. Starting from the basics, we draw the relationship between multidimensional scaling and principal component analysis. From this background we briefly review kernel PCA and Isomap. Finally, we consider the problem of model selection using Gaussian processes.</description>
        <pubDate>Thu, 24 Jan 2008 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-data08/dimensionality-reduction.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-data08/dimensionality-reduction.html</guid>
        
        
        <category>Lawrence-data08</category>
        
      </item>
    
      <item>
        <title>Exploiting Dimensional Reduction in Modelling of High Dimensional Distributions</title>
        <description></description>
        <pubDate>Sat, 08 Dec 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/exploiting-dimensional-dreduction-in-modelling-of-high-dimensional-distributions.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/exploiting-dimensional-dreduction-in-modelling-of-high-dimensional-distributions.html</guid>
        
        
      </item>
    
      <item>
        <title>Latent Variables, Differential Equations and Gaussian Processes</title>
        <description>We are used to dealing with the situation where we have a latent variable. Often we assume this latent variable to be independently drawn from a distribution, e.g. probabilistic PCA or factor analysis. This simplification is often extended for temporal data where tractable Markovian independence assumptions are used (e.g. Kalman filters or hidden Markov models). In this talk we will consider such models in the context of a biological problem: inferring transcription factor activities in simple transcription networks. We will extend the simpler formalisms described above to consider the case where the latent variable is a ‘latent function’ and the relationship with the observed data is described by a linear differential equation. Through the use of a Gaussian process prior over the latent function we can perform inference tractably and learn parameters of interest in the system.</description>
        <pubDate>Mon, 12 Nov 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-msr07/latent-variables-differential-equations-and-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-msr07/latent-variables-differential-equations-and-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-msr07</category>
        
      </item>
    
      <item>
        <title>Modelling Transcriptional Regulation with Gaussian Processes</title>
        <description></description>
        <pubDate>Wed, 07 Nov 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-param07/modelling-transcriptional-regulation-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-param07/modelling-transcriptional-regulation-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-param07</category>
        
      </item>
    
      <item>
        <title>Towards Computational Systems Biology with a Statistical Analysis Pipeline for Microarray Data</title>
        <description>Since the human genome project began, mathematical models have become an integral part of biological data analysis. The growth in data availability has necessitated their use in summarization of the data (e.g. *statistical* approaches such as hierarchical clustering). Simultaneously, as more has become understood about the mechanisms underpinning particular pathways, *mechanistic* models of interactions have become more widespread.\
\
The data-driven statistical approach and the mechanistic model approach each have their advantages. Data-driven models can be used in genome-wide analyses to ‘fish’ for genes that were not known to be relevant but provide a critical role in a pathway. Mechanistic models make real predictions about how systems will respond given particular interventions. The two approaches have interacted only loosely, often not through interaction between the ‘mathematicians’ but through indirect interaction via the biologists.\
\
In this talk we will describe a statistical analysis ‘pipeline’ for microarray data which handles the noise in the data. As we proceed down the pipeline we will come closer to mechanistic models of systems. We will finish with some general thoughts about the contribution that a combined statistical/mechanistic modelling approach can make.</description>
        <pubDate>Wed, 31 Oct 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mbb07/towards-computational-systems-biology-with-a-statistical-analysis-pipeline-for-microarr.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mbb07/towards-computational-systems-biology-with-a-statistical-analysis-pipeline-for-microarr.html</guid>
        
        
        <category>Lawrence-mbb07</category>
        
      </item>
    
      <item>
        <title>Latent Variable Modelling with Gaussian Processes</title>
        <description>In this talk we will briefly describe the Gaussian process latent variable model, an approach to probabilistic modelling of data through non-linear dimensional reduction. The model takes a dual approach to statistical inference and can be shown to generalise PCA. We will briefly introduce the model and quickly show some example applications.</description>
        <pubDate>Thu, 13 Sep 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-inverse07/latent-variable-modelling-with-gaussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-inverse07/latent-variable-modelling-with-gaussian-processes.html</guid>
        
        
        <category>Lawrence-inverse07</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensional Reduction with the Gaussian Process Latent Variable Model</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. Having introduced the GP-LVM we will review extensions to the algorithm, including dynamics, learning of large data sets and back constraints. We will demonstrate the application of the model and its extensions to a range of data sets, including human motion data, a vowel data set and a robot mapping problem.</description>
        <pubDate>Sat, 07 Jul 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-uc3mgplvm07/probabilistic-dimensional-reduction-with-the-gaussian-process-latent-variabl.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-uc3mgplvm07/probabilistic-dimensional-reduction-with-the-gaussian-process-latent-variabl.html</guid>
        
        
        <category>Lawrence-uc3mgplvm07</category>
        
      </item>
    
      <item>
        <title>Probabilistic Inference for Modelling of Transcription Factor Activity</title>
        <description>Accurate modelling of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. In practice many of them are difficult to measure in vivo. For example, it is very hard to measure the active concentration levels of the transcription factor proteins that drive the process.\
\
In this talk we will show how, by making use of structural information about the interaction network (e.g. arising from ChIP-chip data), transcription factor activities can be estimated using probabilistic inference. We propose two different probabilistic models: a simple linear model with Kalman filter based dynamics for genome/transcriptome wide studies and a differential equation based Gaussian process model with a more physically realistic parameterisation for smaller interaction networks.</description>
        <pubDate>Thu, 05 Jul 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-uc3tfa07/probabilistic-inference-for-modelling-of-transcription-factor-activity.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-uc3tfa07/probabilistic-inference-for-modelling-of-transcription-factor-activity.html</guid>
        
        
        <category>Lawrence-uc3tfa07</category>
        
      </item>
    
      <item>
        <title>Fast Sparse Gaussian Process Methods: The Informative Vector Machine</title>
        <description>Gaussian processes are a non-parametric approach to learning regression models. In this talk we will give a brief review of the use of Gaussian processes for regression. We will then introduce the informative vector machine approach to learning Gaussian processes for classification on large-scale data sets. We will show extensions of the method including multi-task learning, semi-supervised learning and learning invariances.</description>
        <pubDate>Tue, 03 Jul 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-uc3mivm07/fast-sparse-gaussian-process-methods-the-informative-vector-machine.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-uc3mivm07/fast-sparse-gaussian-process-methods-the-informative-vector-machine.html</guid>
        
        
        <category>Lawrence-uc3mivm07</category>
        
      </item>
    
      <item>
        <title>Hierarchical Gaussian Process Latent Variable Models</title>
        <description>The Gaussian process latent variable model (GP-LVM) is a powerful approach for probabilistic modelling of high dimensional data through dimensional reduction. In this paper we extend the GP-LVM through hierarchies. A hierarchical model (such as a tree) allows us to express conditional independencies in the data as well as the manifold structure. We first introduce Gaussian process hierarchies through a simple dynamical model; we then extend the approach to a more complex hierarchy which is applied to the visualisation of human motion data sets.</description>
        <pubDate>Fri, 22 Jun 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-icml07/hierarchical-span-g-span-aussian-process-latent-variable-models.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-icml07/hierarchical-span-g-span-aussian-process-latent-variable-models.html</guid>
        
        
        <category>Lawrence-icml07</category>
        
      </item>
    
      <item>
        <title>Probabilistic Inference for Modelling of Transcription Factor Activity</title>
        <description>Accurate modelling of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. In practice many of them are difficult to measure in vivo. For example, it is very hard to measure the active concentration levels of the transcription factor proteins that drive the process.\
\
In this talk we will show how, by making use of structural information about the interaction network (e.g. arising from ChIP-chip data), transcription factor activities can be estimated using probabilistic inference. We propose two different probabilistic models: a simple linear model with Kalman filter based dynamics for genome/transcriptome wide studies and a differential equation based Gaussian process model with a more physically realistic parameterisation for smaller interaction networks.</description>
        <pubDate>Wed, 13 Jun 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gatsby07/probabilistic-inference-for-modelling-of-transcription-factor-activity.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gatsby07/probabilistic-inference-for-modelling-of-transcription-factor-activity.html</guid>
        
        
        <category>Lawrence-gatsby07</category>
        
      </item>
    
      <item>
        <title>Gaussian Processes for Inference in Biological Interaction Networks</title>
        <description>In many biological applications key functions of interest, such as chemical species concentrations, are unobserved. In this talk we will briefly introduce Gaussian processes, which are probabilistic models of functions. We will show how they can be used, in combination with a simple differential equation model, to estimate the concentration of a transcription factor in a simple single input module network motif.</description>
        <pubDate>Wed, 04 Apr 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-mathbio07/span-g-span-aussian-processes-for-inference-in-biological-interaction-networks.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-mathbio07/span-g-span-aussian-processes-for-inference-in-biological-interaction-networks.html</guid>
        
        
        <category>Lawrence-mathbio07</category>
        
      </item>
    
      <item>
        <title>Modelling Transcriptional Regulation with Gaussian Processes</title>
        <description>Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour suppressor p53 and obtaining results in good accordance with recent biological studies.</description>
        <pubDate>Wed, 28 Mar 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-pesb07/modelling-transcriptional-regulation-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-pesb07/modelling-transcriptional-regulation-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-pesb07</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensional Reduction with the Gaussian Process Latent Variable Model</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. Having introduced the GP-LVM we will review extensions to the algorithm, including dynamics, learning of large data sets and back constraints. We will demonstrate the application of the model and its extensions to a range of data sets, including human motion data, a vowel data set and a robot mapping problem.</description>
        <pubDate>Fri, 09 Mar 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ncrg07/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ncrg07/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</guid>
        
        
        <category>Lawrence-ncrg07</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensional Reduction with the Gaussian Process Latent Variable Model</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. Having introduced the GP-LVM we will review extensions to the algorithm, including dynamics, learning of large data sets and back constraints. We will demonstrate the application of the model and its extensions to a range of data sets, including human motion data, a vowel data set and a robot mapping problem.</description>
        <pubDate>Mon, 12 Feb 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-google07/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-google07/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</guid>
        
        
        <category>Lawrence-google07</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensional Reduction with the Gaussian Process Latent Variable Model</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. Having introduced the GP-LVM we will review extensions to the algorithm, including dynamics, learning of large data sets and back constraints. We will demonstrate the application of the model and its extensions to a range of data sets, including human motion data, a vowel data set and a robot mapping problem.</description>
        <pubDate>Fri, 09 Feb 2007 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-csail07/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-csail07/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</guid>
        
        
        <category>Lawrence-csail07</category>
        
      </item>
    
      <item>
        <title>Learning and Inference with Gaussian Processes: An Overview of Gaussian Processes and the GP-LVM</title>
        <description></description>
        <pubDate>Fri, 03 Nov 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-manchesterguest06/learning-and-inference-with-span-g-span-aussian-processes-an-overview-of-span-g-span-au.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-manchesterguest06/learning-and-inference-with-span-g-span-aussian-processes-an-overview-of-span-g-span-au.html</guid>
        
        
        <category>Lawrence-manchesterGuest06</category>
        
      </item>
    
      <item>
        <title>Learning and Inference with Gaussian Processes</title>
        <description>Many application domains of machine learning can be reduced to inference about the values of a function. Gaussian processes are powerful, flexible, probabilistic models that enable us to efficiently perform inference about functions in the presence of uncertainty. In this talk I will introduce Gaussian processes and review a few standard applications of these models. I will then show how Gaussian processes can be used to solve important and diverse real-world problems, including inference of the concentration of transcription factors which regulate gene expression and creating probabilistic models of human motion for animation and tracking.</description>
        <pubDate>Mon, 21 Aug 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-intel06/learning-and-inference-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-intel06/learning-and-inference-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-intel06</category>
        
      </item>
    
      <item>
        <title>PUMA: Propagation of Uncertainty in Microarray Analysis</title>
        <description></description>
        <pubDate>Wed, 02 Aug 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tuebingen06/puma-propagation-of-uncertainty-in-microarray-analysis.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tuebingen06/puma-propagation-of-uncertainty-in-microarray-analysis.html</guid>
        
        
        <category>Lawrence-tuebingen06</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensional Reduction with the Gaussian Process Latent Variable Model</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. Having introduced the GP-LVM we will review extensions to the algorithm, including dynamics, learning of large data sets and back constraints. We will demonstrate the application of the model and its extensions to a range of data sets, including human motion data, a vowel data set and a robot mapping problem.</description>
        <pubDate>Tue, 11 Jul 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-erice06/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-erice06/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</guid>
        
        
        <category>Lawrence-erice06</category>
        
      </item>
    
      <item>
        <title>Local Distance Preservation in the GP-LVM through Back Constraints</title>
        <description></description>
        <pubDate>Tue, 27 Jun 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-icml06/local-distance-preservation-in-the-gp-lvm-through-back-constraints.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-icml06/local-distance-preservation-in-the-gp-lvm-through-back-constraints.html</guid>
        
        
        <category>Lawrence-icml06</category>
        
      </item>
    
      <item>
        <title>Learning and Inference with Gaussian Processes</title>
        <description>Many application domains of machine learning can be reduced to inference about the values of a function. Gaussian processes are powerful, flexible, probabilistic models that enable us to efficiently perform inference about functions in the presence of uncertainty. In this talk I will introduce Gaussian processes and review a few standard applications of these models. I will then show how Gaussian processes can be used to solve important and diverse real-world problems, including inference of the concentration of transcription factors which regulate gene expression and creating probabilistic models of human motion for animation and tracking.</description>
        <pubDate>Thu, 22 Jun 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-manchester06/learning-and-inference-with-span-g-span-aussian-processes.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-manchester06/learning-and-inference-with-span-g-span-aussian-processes.html</guid>
        
        
        <category>Lawrence-manchester06</category>
        
      </item>
    
      <item>
        <title>A probabilistic dynamical model for quantitative inference of the regulatory mechanism of transcription</title>
        <description></description>
        <pubDate>Mon, 10 Apr 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/sanguinetti-masamb06/a-probabilistic-dynamical-model-for-quantitative-inference-of-the-regulatory-mechanism.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/sanguinetti-masamb06/a-probabilistic-dynamical-model-for-quantitative-inference-of-the-regulatory-mechanism.html</guid>
        
        
        <category>Sanguinetti-masamb06</category>
        
      </item>
    
      <item>
        <title>Probabilistic Dimensional Reduction with the Gaussian Process Latent Variable Model</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. Having introduced the GP-LVM we will review extensions to the algorithm, including dynamics, learning of large data sets and back constraints. We will demonstrate the application of the model and its extensions to a range of data sets, including human motion data, a vowel data set and a robot mapping problem.</description>
        <pubDate>Tue, 07 Mar 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-cued06/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-cued06/probabilistic-dimensional-reduction-with-the-span-g-span-aussian-process-latent-variabl.html</guid>
        
        
        <category>Lawrence-cued06</category>
        
      </item>
    
      <item>
        <title>Computer Vision Reading Group: The Gaussian Process Latent Variable Model</title>
        <description>The Gaussian process latent variable model (GP-LVM) is a recently proposed probabilistic approach to obtaining a reduced dimension representation of a data set. In this tutorial we motivate and describe the GP-LVM, giving a review of the model itself and some of the concepts behind it.</description>
        <pubDate>Fri, 27 Jan 2006 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-oxford06/computer-vision-reading-group-the-gaussian-process-latent-variable-model.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-oxford06/computer-vision-reading-group-the-gaussian-process-latent-variable-model.html</guid>
        
        
        <category>Lawrence-oxford06</category>
        
      </item>
    
      <item>
        <title>High Dimensional Probabilistic Modelling through Manifolds</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. We will demonstrate the application of the model to a range of data sets, but with a particular focus on human motion data. We will show some preliminary work on facial animation and make use of a skeletal motion capture data set to illustrate differences between our model and traditional manifold techniques.</description>
        <pubDate>Wed, 14 Dec 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence--uw05/high-dimensional-probabilistic-modelling-through-manifolds.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence--uw05/high-dimensional-probabilistic-modelling-through-manifolds.html</guid>
        
        
        <category>Lawrence--uw05</category>
        
      </item>
    
      <item>
        <title>High Dimensional Probabilistic Modelling through Manifolds</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. We will demonstrate the application of the model to a range of data sets, but with a particular focus on human motion data. We will show some preliminary work on facial animation and make use of a skeletal motion capture data set to illustrate differences between our model and traditional manifold techniques.</description>
        <pubDate>Mon, 12 Dec 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence--msrred05/high-dimensional-probabilistic-modelling-through-manifolds.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence--msrred05/high-dimensional-probabilistic-modelling-through-manifolds.html</guid>
        
        
        <category>Lawrence--msrred05</category>
        
      </item>
    
      <item>
        <title>High Dimensional Probabilistic Modelling through Manifolds</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. We will demonstrate the application of the model to a range of data sets, but with a particular focus on human motion data. We will show some preliminary work on facial animation and make use of a skeletal motion capture data set to illustrate differences between our model and traditional manifold techniques.</description>
        <pubDate>Fri, 02 Dec 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence--ubc05/high-dimensional-probabilistic-modelling-through-manifolds.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence--ubc05/high-dimensional-probabilistic-modelling-through-manifolds.html</guid>
        
        
        <category>Lawrence--ubc05</category>
        
      </item>
    
      <item>
        <title>High Dimensional Probabilistic Modelling through Manifolds</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. We will demonstrate the application of the model to a range of data sets, but with a particular focus on human motion data. We will show some preliminary work on facial animation and make use of a skeletal motion capture data set to illustrate differences between our model and traditional manifold techniques.</description>
        <pubDate>Tue, 29 Nov 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence--columbia05/high-dimensional-probabilistic-modelling-through-manifolds.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence--columbia05/high-dimensional-probabilistic-modelling-through-manifolds.html</guid>
        
        
        <category>Lawrence--columbia05</category>
        
      </item>
    
      <item>
        <title>High Dimensional Probabilistic Modelling through Manifolds</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. We will demonstrate the application of the model to a range of data sets, but with a particular focus on human motion data. We will show some preliminary work on facial animation and make use of a skeletal motion capture data set to illustrate differences between our model and traditional manifold techniques.</description>
        <pubDate>Mon, 28 Nov 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence--ibm05/high-dimensional-probabilistic-modelling-through-manifolds.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence--ibm05/high-dimensional-probabilistic-modelling-through-manifolds.html</guid>
        
        
        <category>Lawrence--ibm05</category>
        
      </item>
    
      <item>
        <title>High Dimensional Probabilistic Modelling through Manifolds</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. We will demonstrate the application of the model to a range of data sets, but with a particular focus on human motion data. We will show some preliminary work on facial animation and make use of a skeletal motion capture data set to illustrate differences between our model and traditional manifold techniques.</description>
        <pubDate>Wed, 16 Nov 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-gatsby05/high-dimensional-probabilistic-modelling-through-manifolds.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-gatsby05/high-dimensional-probabilistic-modelling-through-manifolds.html</guid>
        
        
        <category>Lawrence-gatsby05</category>
        
      </item>
    
      <item>
        <title>High Dimensional Probabilistic Modelling through Manifolds</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. We will demonstrate the application of the model to a range of data sets, but with a particular focus on human motion data. We will show some preliminary work on facial animation and make use of a skeletal motion capture data set to illustrate differences between our model and traditional manifold techniques.</description>
        <pubDate>Wed, 02 Nov 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-idiap05/high-dimensional-probabilistic-modelling-through-manifolds.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-idiap05/high-dimensional-probabilistic-modelling-through-manifolds.html</guid>
        
        
        <category>Lawrence-idiap05</category>
        
      </item>
    
      <item>
        <title>High Dimensional Probabilistic Modelling through Manifolds</title>
        <description>Density modelling in high dimensions is a very difficult problem. Traditional approaches, such as mixtures of Gaussians, typically fail to capture the structure of data sets in high dimensional spaces. In this talk we will argue that for many data sets of interest, the data can be represented as a lower dimensional manifold immersed in the higher dimensional space. We will then present the Gaussian Process Latent Variable Model (GP-LVM), a non-linear probabilistic variant of principal component analysis (PCA) which implicitly assumes that the data lies on a lower dimensional space. We will demonstrate the application of the model to a range of data sets, but with a particular focus on human motion data. We will show some preliminary work on facial animation and make use of a skeletal motion capture data set to illustrate differences between our model and traditional manifold techniques.</description>
        <pubDate>Mon, 31 Oct 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-epfl05/high-dimensional-probabilistic-modelling-through-manifolds.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-epfl05/high-dimensional-probabilistic-modelling-through-manifolds.html</guid>
        
        
        <category>Lawrence-epfl05</category>
        
      </item>
    
      <item>
        <title>Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models</title>
        <description></description>
        <pubDate>Mon, 15 Aug 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-tuebingen05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-tuebingen05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</guid>
        
        
        <category>Lawrence-tuebingen05</category>
        
      </item>
    
      <item>
        <title>Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models</title>
        <description>It is known that principal component analysis (PCA) has an underlying probabilistic representation based on a latent variable model. PCA is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach of integrating out the parameters and optimising with respect to the latent variables also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space, which we refer to as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner product kernel as a covariance function we can develop a non-linear probabilistic PCA. In the talk we will introduce the GPLVM and illustrate its application on a range of high dimensional data sets including motion capture data, handwritten digits, a medical diagnosis data set and images.</description>
        <pubDate>Wed, 11 May 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-soton05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-soton05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</guid>
        
        
        <category>Lawrence-soton05</category>
        
      </item>
    
      <item>
        <title>Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models</title>
        <description>It is known that principal component analysis (PCA) has an underlying probabilistic representation based on a latent variable model. PCA is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach of integrating out the parameters and optimising with respect to the latent variables also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space, which we refer to as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner product kernel as a covariance function we can develop a non-linear probabilistic PCA. In the talk we will introduce the GPLVM and illustrate its application on a range of high dimensional data sets including motion capture data, handwritten digits, a medical diagnosis data set and images.</description>
        <pubDate>Tue, 15 Mar 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-msr05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-msr05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</guid>
        
        
        <category>Lawrence-msr05</category>
        
      </item>
    
      <item>
        <title>Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models</title>
        <description>It is known that principal component analysis (PCA) has an underlying probabilistic representation based on a latent variable model. PCA is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach of integrating out the parameters and optimising with respect to the latent variables also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space, which we refer to as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner product kernel as a covariance function we can develop a non-linear probabilistic PCA. In the talk we will introduce the GPLVM and illustrate its application on a range of high dimensional data sets including motion capture data, handwritten digits, a medical diagnosis data set and images.</description>
        <pubDate>Wed, 09 Mar 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-manchester05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-manchester05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</guid>
        
        
        <category>Lawrence-manchester05</category>
        
      </item>
    
      <item>
        <title>Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models</title>
        <description>It is known that principal component analysis (PCA) has an underlying probabilistic representation based on a latent variable model. PCA is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach of integrating out the parameters and optimising with respect to the latent variables also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space, which we refer to as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner product kernel as a covariance function we can develop a non-linear probabilistic PCA. In the talk we will introduce the GPLVM and illustrate its application on a range of high dimensional data sets including motion capture data, handwritten digits, a medical diagnosis data set and images.</description>
        <pubDate>Tue, 01 Mar 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-edinburgh05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-edinburgh05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</guid>
        
        
        <category>Lawrence-edinburgh05</category>
        
      </item>
    
      <item>
        <title>Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models</title>
        <description>It is known that Principal Component Analysis has an underlying probabilistic representation based on a latent variable model. Principal component analysis (PCA) is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach of integrating out the parameters and optimising with respect to the latent variables also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space, which we refer to as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner product kernel as a covariance function we can develop a non-linear probabilistic PCA. In the talk we will introduce the GPLVM and illustrate its application on a range of high-dimensional data sets including motion capture data, handwritten digits, a medical diagnosis data set and images.</description>
        <pubDate>Mon, 21 Feb 2005 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-oxford05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-oxford05/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</guid>
        
        
        <category>Lawrence-oxford05</category>
        
      </item>
    
      <item>
        <title>Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models</title>
        <description>It is known that Principal Component Analysis has an underlying probabilistic representation based on a latent variable model. Principal component analysis (PCA) is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach of integrating out the parameters and optimising with respect to the latent variables also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space, which we refer to as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner product kernel as a covariance function we can develop a non-linear probabilistic PCA.</description>
        <pubDate>Thu, 09 Sep 2004 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-smlwgplvm03/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-smlwgplvm03/probabilistic-non-linear-component-analysis-through-span-g-span-aussian-process-latent.html</guid>
        
        
        <category>Lawrence-smlwgplvm03</category>
        
      </item>
    
      <item>
        <title>Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models</title>
        <description>It is known that Principal Component Analysis has an underlying
probabilistic representation based on a latent variable model. PCA
is recovered when the latent variables are integrated out and the
parameters of the model are optimised by maximum likelihood. It is
less well known that the dual approach of integrating out the
parameters and optimising with respect to the latent variables also
leads to PCA.  The marginalised likelihood in this case takes the
form of Gaussian process mappings, with linear covariance functions,
from a latent space to an observed space, which we refer to as a
Gaussian Process Latent Variable Model (GPLVM) [@Lawrence:gplvm03]. 
It is straightforward to *non-linearise* this model by
substituting the linear covariance function for a non-linear
one. The result is a non-linear probabilistic PCA model. In this
talk we will present a practical algorithm for optimising the latent
variables in a non-linear GPLVM and discuss some relations with
other models. Finally we will present results from a SIGGRAPH paper
which uses the GPLVM to learn styles in an inverse kinematics
problem [@Grochow:styleik04].
</description>
        <pubDate>Thu, 06 May 2004 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-ucbgplvm03/probabilistic-non-linear-component-analysis-through-gaussian-process-latent.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-ucbgplvm03/probabilistic-non-linear-component-analysis-through-gaussian-process-latent.html</guid>
        
        
        <category>Lawrence-ucbgplvm03</category>
        
      </item>
    
      <item>
        <title>Bayesian Processing of cDNA Microarray Images through the Variational Importance Sampler</title>
        <description>Each cell in the human body contains the same basic code in the form of the genome; however, cells have differentiated roles which come about through different cells ‘expressing’ different genes. Key insights into gene interactions can be gained by measuring the level of expression of each gene at different times. Gene expression levels can be obtained from cDNA microarray experiments through the extraction of pixel intensities from a scanned image of a slide. In this talk we will start by briefly reviewing cDNA microarray technology. We will then focus on one problem that arises when processing these images: human error in locating the position of the spots can lead to variabilities in the extracted expression levels. We will present a Bayesian approach to the image processing which alleviates this problem. Our approach makes use of a novel combination of importance sampling and variational approximations. Finally, if there is time, we will briefly show some examples of the variational importance sampler applied to visual tracking problems.</description>
        <pubDate>Thu, 04 Dec 2003 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-msrb03/bayesian-processing-of-span-cdna-span-microarray-images-through-the-variational-importa.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-msrb03/bayesian-processing-of-span-cdna-span-microarray-images-through-the-variational-importa.html</guid>
        
        
        <category>Lawrence-msrb03</category>
        
      </item>
    
      <item>
        <title>Bayesian Processing of cDNA Microarray Images</title>
        <description>Gene expression levels are obtained from microarray experiments through the extraction of pixel intensities from a scanned image of the slide. It is widely acknowledged that variabilities can occur in expression levels extracted from the same images by different users with the same software packages. These inconsistencies arise due to differences in the refinement of the placement of the microarray ‘grids’. We introduce a novel automated approach to the refinement of grid placements that is based upon the use of Bayesian inference for determining the size, shape and positioning of the microarray ‘spots’, capturing uncertainty that can be passed to downstream analysis. Our experiments demonstrate that variability between users can be significantly reduced using the approach. The automated nature of the approach also saves hours of researchers’ time normally spent in refining the grid placement.</description>
        <pubDate>Fri, 20 Jun 2003 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-sussex03/bayesian-processing-of-span-cdna-span-microarray-images.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-sussex03/bayesian-processing-of-span-cdna-span-microarray-images.html</guid>
        
        
        <category>Lawrence-sussex03</category>
        
      </item>
    
      <item>
        <title>Bayesian Processing of cDNA Microarray Images</title>
        <description>Gene expression levels are obtained from microarray experiments through the extraction of pixel intensities from a scanned image of the slide. It is widely acknowledged that variabilities can occur in expression levels extracted from the same images by different users with the same software packages. These inconsistencies arise due to differences in the refinement of the placement of the microarray ‘grids’. We introduce a novel automated approach to the refinement of grid placements that is based upon the use of Bayesian inference for determining the size, shape and positioning of the microarray ‘spots’, capturing uncertainty that can be passed to downstream analysis. Our experiments demonstrate that variability between users can be significantly reduced using the approach. The automated nature of the approach also saves hours of researchers’ time normally spent in refining the grid placement. A MATLAB implementation of the algorithm is available from &lt;http://inverseprobability.com/vis&gt;.</description>
        <pubDate>Wed, 21 May 2003 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/lawrence-manchester03/bayesian-processing-of-span-cdna-span-microarray-images.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/lawrence-manchester03/bayesian-processing-of-span-cdna-span-microarray-images.html</guid>
        
        
        <category>Lawrence-manchester03</category>
        
      </item>
    
      <item>
        <title>Particle Filters, Variational methods and Importance Sampling</title>
        <description>Particle filters allow tracking of systems with highly non-linear,
multi-modal posterior distributions; however, they are prone to
failure when model likelihoods are sharply peaked or state spaces
are high dimensional. This failure is caused by a mismatch between
the proposal distribution and the true posterior. The number of
samples (particles) then required to accurately represent the
posterior increases dramatically, and with it the computational
demands of the algorithm. By formulating the problem within the
framework of variational inference we derive an algorithm in which
the proposal naturally adapts to more accurately reflect the true
posterior.  This is achieved by replacing intractable moment
evaluations, arising from the highly non-linear nature of the
likelihood functions, with sample based approximations.  In this
talk we shall first introduce the approach in a static setting:
Bayesian processing of cDNA microarray images. We will then add
dynamics to the model and demonstrate a marked improvement over
standard approaches on both synthetic and real-world tracking
examples.
</description>
        <pubDate>Mon, 24 Mar 2003 00:00:00 +0000</pubDate>
        <link>http://inverseprobability.com/talks/particle-filters-variational-methods-and-importance-sampling.html</link>
        <guid isPermaLink="true">http://inverseprobability.com/talks/particle-filters-variational-methods-and-importance-sampling.html</guid>
        
        
      </item>
    
  </channel>
</rss>
