
VibeSafe

at Trent.AI on Jan 16, 2026
Neil D. Lawrence

Abstract

When working with AI coding assistants, the traditional cost model inverts: generating documentation becomes cheap while debugging misimplementation becomes expensive. VibeSafe is a framework that forces intent to be explicit before implementation. The aim is to catch AI misinterpretation when it costs editing a markdown file, not unwinding code changes.

This talk introduces VibeSafe’s philosophy and core components (CIPs, Backlog, Tenets, Requirements) and explores how they create shared understanding between humans and AI systems. We’ll discuss the workflow, benefits, trade-offs, and gather your insights on applying these practices to real-world engineering teams.

The AI Development Paradox


You’re all experienced engineers. You’ve been using AI coding assistants such as GitHub Copilot, Cursor, Claude Code, maybe others. And you’ve seen the promise: code generation that’s faster than anything we’ve had before.

But you’ve probably also experienced the flip side: debugging code that an AI generated based on a misunderstanding of your intent. Maybe it was a subtle architectural assumption. Maybe it interpreted “user authentication” differently than you meant. And you discovered it late, after it was wired through multiple files.

The Cost Model Has Inverted

In traditional development, writing code was the expensive part because it consumed scarce human engineering time. Documentation was often deferred or skipped.

With AI assistance, this inverts. The AI can generate documentation quickly. But if the AI misunderstands your intent and implements the wrong thing, you discover it late. The cost isn’t in writing the code; it’s in unwinding the wrong implementation after it’s integrated into your system.

A Natural Reaction

When you first see VibeSafe, a natural reaction is: “This looks like a lot of paperwork.” And you’d be absolutely right to think that … if we were still in the traditional cost model where human time writing code was the bottleneck.

But we’re not in that model anymore. The bottleneck has shifted.

VibeSafe’s Philosophy


VibeSafe’s core philosophy is simple: force intent to be explicit before implementation. When correcting a misunderstanding costs editing a markdown file instead of unwinding code changes, you want to have that conversation early.

This isn’t about creating overhead; it’s about creating a checkpoint where humans can catch AI misinterpretation when it’s cheap to fix.

Human-AI Collaboration

AI coding assistants are powerful … like the BFG 9000 … but maybe a little too powerful. When you underspecify … or even when you correctly specify … they can bake implicit or incorrect assumptions into what they build.

The challenge is creating shared context: ensuring the AI understands not just what you said, but what you meant. VibeSafe provides structures for making that context explicit. This isn’t just good for you and the current AI; it’s good for the rest of the team … and future AIs.

From Intent to Documentation

VibeSafe structures development as a flow from principles to implementation to documentation:

  • WHY (Tenets): Your project’s guiding principles
  • WHAT (Requirements): Desired outcomes, not methods
  • HOW (CIPs): Design decisions and architectural choices
  • DO (Backlog): Specific implementation tasks
  • DOCUMENT (Compression): Distilling the development history into formal docs

Each level is explicit, reviewable, and provides a checkpoint for ensuring shared understanding.
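
In a repository, this flow might map onto a layout like the following (a sketch; the directory names are illustrative rather than a fixed VibeSafe prescription):

```text
project/
├── tenets/          # WHY: guiding principles
├── requirements/    # WHAT: desired outcomes
├── cip/             # HOW: design decisions (CIPs)
├── backlog/         # DO: implementation tasks
└── docs/            # DOCUMENT: compressed formal documentation
```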

Core Components


Tenets: Your Project’s Principles

Tenets are your project’s guiding principles, ideally five to seven of them: enough to cover key decisions but few enough to actually remember and apply.

They’re not rigid rules. When tenets conflict, you need judgment. For example, VibeSafe has a tenet of “User Autonomy Over Prescription” (let users configure things) that can conflict with “Simplicity at All Levels” (don’t overwhelm with options). The resolution: sensible defaults with configuration options.
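
Because tenets are plain markdown, they are cheap to write down and review. A sketch of how the tenet above might be recorded (wording illustrative):

```markdown
## Tenet: User Autonomy Over Prescription

Let users configure VibeSafe to their context rather than
prescribing a single workflow.

Tension: can conflict with "Simplicity at All Levels".
Resolution: sensible defaults with configuration options.
```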

Requirements: What, Not How

Requirements define what needs to be true, not how to make it true.

“Users can install VibeSafe with a single command” is a requirement: it describes an outcome. “Create an install-minimal.sh script” is implementation: it prescribes a method.

This separation is crucial because it lets the AI (or you) explore multiple approaches to achieving the requirement. Maybe a shell script is best. Maybe it’s a Python package. Maybe something else. The requirement stays constant while implementations can evolve.
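
As a sketch, a requirement might be captured as a small markdown file with YAML frontmatter; the field names here are illustrative, not the canonical VibeSafe schema:

```markdown
---
id: REQ-007
status: Active
---

# Single-command installation

Users can install VibeSafe with a single command.

Notes: the method is deliberately unspecified; a shell
script, a Python package, or something else entirely could
all satisfy this requirement.
```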

CIPs: Design Before Implementation

Code/Capability Improvement Plans (CIPs) are where you document design decisions before implementing them.

Each CIP includes:

  • What problem it solves (Motivation)
  • How it solves it (Detailed Description)
  • Implementation plan with checkpoints
  • Testing strategy
  • Links to the requirements it addresses

The key: you review and refine the CIP before writing code. When you discover the AI misunderstood something, you edit markdown, not code.
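
Putting the sections listed above together, a minimal CIP skeleton might look like this (a sketch: the frontmatter fields and headings are illustrative, not the canonical template):

```markdown
---
id: CIP-0042
title: Single-command installer
status: Proposed
requirements: [REQ-007]
---

## Motivation
What problem this solves, and why now.

## Detailed Description
How it solves it; alternatives considered and trade-offs.

## Implementation Plan
- [ ] Checkpoint 1: ...
- [ ] Checkpoint 2: ...

## Testing Strategy
How we will know it works.
```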

Backlog: Execution Tasks

The backlog contains specific implementation tasks. Critically, you create backlog tasks only when a CIP moves from “Proposed” to “Accepted.”

Why wait? Because you don’t want to create detailed implementation tasks for a design that might change or be rejected. This avoids wasted effort and keeps the backlog focused on approved work.
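
Once a CIP is accepted, each backlog item can be a small markdown file that points back at its CIP (again a sketch with illustrative fields):

```markdown
---
id: TASK-0101
cip: CIP-0042
status: Ready
priority: High
---

# Write install-minimal.sh

Implement the installer design agreed in CIP-0042.
```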

Everything is Markdown + YAML

A key design decision: everything is stored as standard markdown files with YAML frontmatter. No proprietary formats. No platform lock-in.

This means VibeSafe works with Cursor, GitHub Copilot, Claude Code, Codex, or any other AI assistant that can read project files. Following VibeSafe’s own tenets: user autonomy over prescription.

The Workflow in Practice


Example: Adding Authentication

Let’s walk through a concrete example: adding authentication to your application.

First, you check your project’s tenets. Maybe you have a tenet about “Security Without Friction” that guides this decision.

Then you write a requirement: “Users must authenticate securely with support for single sign-on.” This is the WHAT: the outcome you need.

Next, you create a CIP to explore the HOW: JWT tokens? Session-based auth? OAuth2 integration? The CIP documents your reasoning, trade-offs, and chosen approach. You review this with your team (human or AI) before implementing.
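
The heart of such a CIP might be a short options section like this (content purely illustrative):

```markdown
## Options Considered

1. Session-based auth: simple, but complicates single sign-on.
2. JWT tokens: stateless, but revocation is harder.
3. OAuth2/OIDC integration: meets the SSO requirement
   directly, at the cost of an external dependency.

## Decision

Option 3, pending team review.
```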

AI Natural Breakpoints

VibeSafe defines natural breakpoints in the workflow where AI assistants should pause for human review:

  1. After creating a CIP (status: Proposed), let the human(s) review the design.
  2. After a CIP is accepted, ask whether backlog tasks should be created now.
  3. After implementation, let the human test and validate.

These aren’t arbitrary; they’re the points where human judgment is most valuable. The AI can generate the content, but humans verify it matches intent.

The What’s Next Script

VibeSafe includes a “What’s Next” script that summarizes project status:

  • Current git branch and recent commits
  • CIPs by status (proposed, accepted, in progress, closed)
  • Backlog items by priority
  • Recommended next steps

Both humans and AI assistants use this to quickly understand “Where are we? What should I work on next?”

It’s especially useful at the start of a session: an AI assistant or human gets immediate context without reading through dozens of files.
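
As a rough illustration (hypothetical output, not verbatim from the actual script), a run might report:

```text
Branch: main (3 recent commits)

CIPs:     2 proposed | 1 in progress | 14 closed
Backlog:  3 high priority | 7 medium | 12 low

Recommended next steps:
  1. Review CIP-0042 (status: Proposed)
  2. Pick up TASK-0101 (priority: High)
```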

Documentation Compression

After implementation is complete and a CIP is closed, there’s a final step: compression.

You’ve now got the development history: why you made certain decisions, what alternatives you considered, how you implemented it. This is valuable, but future developers (human or AI) don’t need to read all of it to understand the current state.

Compression means distilling the closed CIPs into streamlined formal documentation (like Sphinx docs). The history is preserved for those who need it, but the primary docs stay clean and focused.

Benefits and Trade-offs


What You Gain

What do you gain from this approach?

Early detection: You catch AI misunderstandings when they’re cheap to fix, i.e. editing a CIP instead of unwinding code.

Shared context: When a new team member joins (human or AI), they can read your tenets, requirements, and CIPs to understand not just what the code does, but why it does it that way.

Traceability: You can trace from implementation back through CIPs to requirements to tenets. “Why did we choose this architecture?” has a documented answer.

Better AI interactions: By making context explicit, AI assistants give better suggestions. They understand your project’s principles and constraints.

What It Costs

What does it cost?

Upfront time: Yes, you spend more time documenting before implementing. In the old cost model, this was pure overhead. In the AI-assisted model, it’s front-loading the verification work.

Learning curve: Your team needs to learn the VibeSafe workflow. This takes time and adjustment, especially for engineers used to moving straight to implementation.

Maintenance: More files to keep updated. CIPs need status updates. Requirements need validation. Tenets need to be actually applied, not just written once and forgotten.

When Does This Make Sense?

When does VibeSafe make sense?

It’s most valuable when:

  • You’re actively using AI coding assistants
  • You have complex systems where misunderstandings are expensive
  • Multiple engineers need shared context
  • Your codebase will live for years, not months

It’s probably overkill for:

  • Small scripts or one-off tools
  • Solo weekend projects
  • Throwaway prototypes

The question: Is the cost of potential misimplementation higher than the cost of upfront documentation? For many production systems with AI assistance, the answer is yes.

The Real Question

But here’s the real question for you, as experienced engineers: Does this match how you actually work with AI assistants? Or how you want to work?

We’re still early in understanding how to collaborate effectively with AI in software development. VibeSafe is one approach, born from practice. But it’s not the only approach, and it may not be the right approach for your context.

That’s why we’re here, to get your input.

Discussion


Questions for You

I’d like to hear from you:

Your experiences: Have you hit cases where an AI assistant misunderstood your intent and generated plausible but wrong code? How did you discover it? What was the cost?

Current practices: How do you currently ensure shared understanding between team members and AI assistants? Do you have patterns that work well?

Improvements: Looking at VibeSafe, what would make it more useful for your team? What seems like unnecessary overhead? What’s missing?

Open Questions

Some open questions we’re exploring:

Granularity: What’s the right level of detail for a CIP? Too detailed and it’s busywork. Too high-level and it doesn’t catch misunderstandings.

Tool integration: How should VibeSafe integrate with issue trackers, CI/CD, code review? What would make it feel more natural in your workflow?

Legacy systems: Most of us aren’t starting greenfield. How do you introduce VibeSafe practices to an existing codebase? Where do you start?

Team adoption: If we wanted to try this with the team, what would be the barriers? What would help?

Try It Yourself

If you’re interested in trying VibeSafe, it’s a one-line install:

```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/lawrennd/vibesafe/main/scripts/install-minimal.sh)"
```

It’s on GitHub at lawrennd/vibesafe. MIT licensed. Works with Cursor, Copilot, Claude Code, or any AI assistant that can read markdown files.

I’d love to hear how it works (or doesn’t work) for your projects.

Questions?

Thank you. Let’s open it up for discussion. Your experience and insights will be invaluable in understanding whether this approach has legs, what needs to change, and how it might fit into real engineering practice.

Thanks!
