Lock-in Lab
RESEARCH 2026
Three papers defining Personal Context Management — a new category for the agent era.

Introduction

This collection presents the research foundation for Personal Context Management (PCM) — a category we believe is the successor to Personal Knowledge Management for the age of AI agents. Each paper addresses a different dimension of the problem.

  • Paper 01. Personal Context Management: Defining the Category (25 min read)
  • Paper 02. Context as Code: Agent Runtime from Plain Files (10 min read)
  • Paper 03. Context as Property: The Ownership Thesis (16 min read)

About Lock-in Lab

Lock-in Lab is a research institute exploring the future of human productivity, creativity, and entrepreneurship in the age of AI agents. Founded at Network School (Forest City, Malaysia) by Ben Flint, the lab produced the ALIVE Context System — the first Personal Context Manager.

These papers are the theoretical foundation. The implementation — a Claude Code plugin with 14 hooks, 15 skills, and 9,754 lines of configuration — has over 100 users in its first two weeks, with adoption spreading organically across the Claude Code, Hermes, and OpenClaw ecosystems.

PAPER 01
Personal Context Management
Defining the Category
LOCK-IN LAB — 2026


Abstract

We introduce Personal Context Management (PCM) as a distinct category of software and practice, separate from Personal Knowledge Management (PKM). Where PKM systems organize knowledge for human retrieval, PCM systems provision context for agent action. We survey 27 years of PKM — from Frand & Hixon's 1999 coining through PARA, Zettelkasten, and the current $2.45 billion tool landscape — and identify a structural gap: PKM captures the what but loses the why. The contextual wrapper — who, when, why, what-for — that makes knowledge actionable decays on capture and is absent from every major PKM tool.

We formalize PCM as the practice of managing this contextual wrapper: the decisions, relationships, temporal state, and domain knowledge that enable AI agents to continue work across sessions without re-explanation. We present a reference implementation (the Alive Context System) and argue that PCM is not an evolution of PKM but a successor category — one designed for the agent era, where the primary consumer of your knowledge is no longer you, but your AI.

1. The State of Things

1.1 Twenty-Seven Years of Filing


In 1999, Dr. Jason Frand and Carol Hixon at UCLA's Anderson School of Management coined the term "Personal Knowledge Management" in a working paper designed for MBA students. The question was simple: how do individuals manage their own knowledge growth? The answer they proposed — a seven-skill framework spanning retrieval, evaluation, organization, analysis, presentation, security, and collaboration — launched a field that would eventually produce a $2.45 billion software market.

Twenty-seven years later, the question has changed. The answer hasn't.

Every major PKM tool built since Frand & Hixon — from Evernote's web clippers to Obsidian's graph view — answers the same fundamental question their 1999 paper posed: where do I put this so I can find it later? The tools got better. The question stayed the same.

Vannevar Bush saw it coming in 1945. His Memex — a hypothetical machine for storing all books, records, and communications — was conceived as "an enlarged intimate supplement to his memory." Bush understood that the human mind "operates by association" and envisioned trails of connected thought. Eighty-one years later, we have exactly the trails he described. Obsidian calls them backlinks. Roam calls them bidirectional links. Logseq calls them block references. They all do the same thing Bush imagined: connect information by association.

What Bush couldn't anticipate — what none of them anticipated — was that the primary consumer of your organized knowledge would become non-human.

1.2 The Tools We Built

The PKM tool landscape in 2026 is vast, sophisticated, and fundamentally homogeneous.

Obsidian (1.5 million users, 22% year-over-year growth) positions itself as a local-first "second brain" built on plain Markdown files. Privacy-first, no telemetry, 1,000+ community plugins. Its graph view visualizes connections between notes. It is, at its core, a filing system with a beautiful map of the filing cabinets.

Notion offers an all-in-one workspace combining notes, databases, kanban boards, and wikis. It is team-first, cloud-dependent, and optimized for collaboration. Its users frequently report "reinventing pages because they cannot remember where something was stored."

Roam Research pioneered bidirectional linking and "networked thought" before declining from its 2020 viral peak. Many users migrated to cheaper alternatives. Its pricing remains among the highest for PKM tools. Its performance degrades with large databases.

Tana ($25 million in funding, 160,000+ waitlist) is the most AI-native of the current generation, with "supertags" modeled on object-oriented programming that transform unstructured information into structured data. It is cloud-dependent and still maturing.

Logseq, Capacities, Anytype, Mem, Reflect — each brings a variation on the theme. Local vs cloud. Blocks vs pages. AI-assisted vs manual. Open-source vs proprietary.

What they share is more important than what differentiates them. Every one of these tools is a storage system optimized for human retrieval. They help you file things and find things. They are, in Bush's terms, elaborate Memexes — supplements to your memory.

None of them ask: what does your agent need to know right now?

1.3 The Frameworks We Followed

The tools were shaped by frameworks, and the frameworks share the same assumption.

PARA (Tiago Forte, ~2017) organizes files into Projects, Areas, Resources, and Archives — a storage system based on "actionability." The question it answers: which folder does this note go in?

Zettelkasten (Niklas Luhmann, 1950s) produces atomic notes with numbered relationships. Luhmann described his system as a "communication partner" and attributed his prolific output — 70 books, 400+ scholarly articles — to the 90,000 cards he accumulated. The question it answers: how do I connect this idea to other ideas?

GTD (David Allen, 2001) separates actionable from non-actionable information. Allen's insight — "your mind is for having ideas, not holding them" — launched an industry. But he acknowledged that "a good general-reference file" remained "one of the biggest bottlenecks" in the system. The question GTD answers: what do I do next?

Evergreen Notes (Andy Matuschak) argues that notes should be atomic, concept-oriented, densely linked, and written for yourself. The question it answers: how do I think better?

LYT/ACE (Nick Milo) emphasizes linking over categorizing — "creating a web of knowledge where notes are connected based on context and relevance." The question: how do I find the connection?

Each framework is valuable. Each framework is a filing strategy. Each framework answers a variation of: how does a human organize and retrieve knowledge?

None of them answer: how does an agent receive the context it needs to continue your work?

That's a different question. It requires a different category.

1.4 A $2.45 Billion Filing Cabinet

The PKM software market reached $2.45 billion in 2024 and is projected to grow at 15.8% CAGR to $9.12 billion by 2033. It is growing fast. But growing toward what?

Consider the evidence: 68% of PKM tool adopters abandon their systems within six months. Evernote — the first-wave PKM tool that defined the category — saw downloads collapse 82%, from 9.6 million in 2017 to 1.7 million in 2023. Roam Research went from viral sensation to "legacy powerhouse" in under three years. The market grows while individual systems fail.

The churn is the market. People buy into the promise of organized knowledge, build elaborate systems, watch them decay, blame themselves, switch tools, and start over. The tool-hopping cycle — try Tool A, find it inadequate, migrate to Tool B, lose context in the migration, try Tool C, start from zero — costs 2-3 months of productivity per iteration.

A $2.45 billion market built on a foundation of recurring failure is not a healthy market. It's a signal that the category itself is wrong.


2. The Gap

2.1 Knowledge vs Context


Michael Polanyi drew a line in 1958 that the PKM industry has spent sixty-eight years ignoring.

In Personal Knowledge and later The Tacit Dimension (1966), Polanyi distinguished between knowledge-that — explicit facts and propositions — and knowledge-how — the tacit, embodied, contextual understanding that makes facts actionable. "We can know more than we can tell," he wrote. The knowledge that matters most is the knowledge we cannot easily articulate.

PKM tools capture knowledge-that. Your note says what happened. Your bookmark saves the article. Your highlight preserves the paragraph. These are facts — explicit, articulable, filing-cabinet-ready.

Context is everything else. Why you captured that article — because you were evaluating vendors for a project that was behind schedule and your co-founder was nervous about the budget. Who was involved — the three people in the meeting who disagreed about the approach, and the one who changed their mind. When it mattered — not just the date, but the state of the project, the phase of the work, the emotional temperature of the team. What it served — the decision it informed, the outcome it shaped, the next step it unlocked.

Context is the wrapper that makes knowledge actionable. Strip the wrapper and you have inert data — accurate, well-organized, and useless.

PKM systems strip the wrapper.

2.2 Context Collapse in Knowledge Systems

"Context collapse" was coined to describe how social media platforms flatten multiple audiences into a single undifferentiated context — the near impossibility of managing different identities in a space where everyone sees everything.

Knowledge systems suffer an analogous collapse. When you capture a note, you capture the what. The why evaporates. The who fades. The when becomes a datestamp with no surrounding state. The note remains. The meaning dies.

This is not a failure of discipline or methodology. It is a structural property of systems designed for storage and retrieval. If your system's unit of organization is a note — a discrete artifact of captured knowledge — then the contextual wrapper is, by definition, not part of the system. The note is the content. The context is invisible, uncaptured, and decaying.

Within one hour, 50% of learned information is forgotten. Within 24 hours, 70%. Within one month, up to 90% is gone without reinforcement. This is Ebbinghaus's forgetting curve, replicated by Murre and Dros in 2015. It applies not just to the knowledge itself but to the context around the knowledge — why you saved it, what it was for, where it fit.

The note survives. The context that made the note meaningful does not.

2.3 The Note Graveyard

The evidence is not theoretical. It is personal, widespread, and emotionally devastating.

"I wrote constantly. I read almost never." A developer who deleted — not archived, not reorganized, actually deleted — their entire Zettelkasten described it as "write-only memory." The notes accumulated. Retrieval never happened. "The act of writing the note was the value. The note itself was theater."

"You don't have a second brain. You have a primary brain which thinks, processes, and a filing cabinet where curiosity goes to die." A psychology student who spent 200+ hours building a second brain in Obsidian called it "a graveyard where good ideas go to be embalmed." He was spending half his study time on formatting and reorganization rather than synthesis. "I felt like a scammer scamming myself."

"Every attempt, over decades, from pen and paper to Obsidian to Notion, has ended in a mess that becomes unusable." A user with ADHD described having "hundreds or even thousands of transcriptions that are effectively meaningless." The volume accumulated. The value didn't. "Sometimes I feel like all this effort to manually do what my brain cannot is futile."

The pattern recurs across forums, blog posts, and confessional essays. One user documented 3,871 notes and 30,555 unprocessed files, deleted more than half "without hesitation," and missed 1%. Obsidian Forum threads discuss "pornographic productivity" — the phenomenon where building and maintaining the system feels like doing the work, replacing the actual work with its aesthetic simulation.

Even the PKM thought leaders acknowledge the failure mode. Andy Matuschak, the most intellectually rigorous voice in the space, observes that "people who write extensively about note-writing rarely have a serious context of use." The people selling PKM systems use them primarily to produce more content about PKM systems. "Luhmann, by contrast, barely wrote about his Zettelkasten: he focused on his prolific research output."

This is not a tool problem. Better search, better linking, better AI tagging — none of these address the structural issue. The system captures artifacts of knowledge. The context that makes those artifacts meaningful is not captured, not maintained, and not available for retrieval.

2.4 The Half-Life Problem

Knowledge decays. The human mind forgets more than half of what it encounters within two days. But PKM systems have no mechanism for decay. They are permanence machines in a world that requires freshness.

The more notes you keep, the harder it becomes to find and use what matters. "200 notes on a topic become clutter, and there's no way to distill it." Research describes this as the fundamental paradox of knowledge systems: "People overestimate the future value of what they save, leading to bloated systems."

Technical knowledge has a half-life of 6-18 months. Business contexts shift quarterly. Relationships evolve. Decisions are superseded. A note from six months ago that says "we're going with vendor X" is not just outdated — it is actively misleading if vendor X was dropped three months later and you don't have the context of why.

PKM systems have no mechanism for staleness, no mechanism for pruning, no mechanism for distinguishing what's current from what's historical. They treat all knowledge as equally permanent. This is a feature, not a bug — in a filing system, permanence is the point. But context is temporal. It moves. It expires. It needs to know what time it is.


3. The Shift

3.1 Filing vs Provisioning

The category shift is a single sentence:

PKM is filing for humans to retrieve later. PCM is provisioning for agents to act now.

This is not a feature upgrade. This is a paradigm change. The primary consumer of your organized knowledge is no longer you sitting at a desk, searching your notes, trying to remember where you put something. It is an AI agent that needs to understand your world in order to act on your behalf.

The shift from retrieval to provision changes everything about how context should be organized.

In PKM, you do the work of retrieval. You search. You browse. You follow links. You remember (or don't remember) where you filed something. The system optimizes for your ability to find things.

In PCM, the system provisions context to the agent. The agent doesn't search your notes. It receives the scoped context it needs — identity, history, active work, domain knowledge — injected into its context window before it speaks. The system optimizes for the agent's ability to continue your work.

This is the difference between a library and a briefing. A library organizes all the books and trusts you to find the right one. A briefing gives you exactly what you need for the meeting you're about to walk into.

PKM builds libraries. PCM delivers briefings.

3.2 Why This Isn't Just "PKM + AI"

The temptation is to frame PCM as "PKM with AI features bolted on." Tana adds AI tagging. Obsidian gets MCP bridges. Mem auto-organizes. Reflect's AI "understands the entire note graph."

But bolting AI onto a storage system doesn't change the paradigm. The agent still inherits the flat structure, the missing context, the note graveyard. You've given a robot the keys to a filing cabinet.

As one researcher put it: "Unstructured notes are almost as useless to AI as having no notes at all." AI agents need consistent types with defined properties, logical organization, structured metadata, interconnected links, and machine-readable format. They need context provisioned to them — scoped, current, and relevant to the work at hand.

Sebastien Dubois presents 8 Levels of AI Context Management, from generic AI (Level 1) to AI-ready knowledge systems (Level 8). Most users are stuck at Levels 2-3. The gap between "I have notes" and "my agent has context" is structural, not incremental.

You can't fix a provision problem with a better retrieval tool.


4. Personal Context Management: The Definition

4.1 What PCM Is


Personal Context Management (PCM) is the practice of capturing, structuring, and provisioning the contextual wrapper around personal knowledge — the decisions, relationships, temporal state, domain insights, and intentional metadata — such that AI agents can continue work across sessions without loss of continuity.

A PCM system:

  • Captures context, not just content. Not "what happened" but "why it happened, who was involved, what it served, and what state the work was in."
  • Provisions context to agents. Doesn't wait for a search query. Injects the right context, at the right scope, at the right time.
  • Maintains temporal state. Knows what's current, what's stale, and what's archived. Flags decay. Supports pruning.
  • Scopes context to prevent cross-pollination. A walnut for each domain. A bundle for each workstream. Context stays in its lane.
  • Links context across domains without merging it. Connections exist. Boundaries hold. You can follow a link without loading everything.
  • Enables pruning without breaking relationships. Remove a node without collapsing the graph.
  • Compounds across sessions. Each session builds on every session before it. Nothing is lost. Everything progresses.
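As a concrete sketch of the provisioning behavior in the list above, assume a walnut directory whose kernel holds key.md, log.md, and insights.md (the layout the reference implementation describes later in this paper). The `provision` function, its parameters, and its section headings are illustrative inventions, not part of any shipped API:

```python
from pathlib import Path

# Hypothetical sketch of context provisioning: assemble a scoped briefing
# for an agent from one walnut's kernel files, rather than asking the
# agent to search a note archive. File names follow the reference
# implementation described in this paper; everything else is illustrative.

def provision(walnut: Path, recent_log_lines: int = 20) -> str:
    """Build the context block injected before the agent speaks."""
    identity = (walnut / "key.md").read_text()        # what it is (rarely changes)
    insights = (walnut / "insights.md").read_text()   # standing domain knowledge
    history = (walnut / "log.md").read_text().splitlines()
    recent = "\n".join(history[-recent_log_lines:])   # only the freshest history

    # Scoped and ordered: identity first, then knowledge, then recent state.
    return "\n\n".join([
        "## Identity\n" + identity,
        "## Domain knowledge\n" + insights,
        "## Recent history\n" + recent,
    ])
```

The point of the sketch is the inversion: the agent never searches; it receives a bounded, ordered block scoped to one domain.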

4.2 What PCM Is Not

PCM is not a note-taking app. It is not a "second brain" — your brain is no longer the primary consumer. It is not PKM with AI features. It is not a task manager, a search engine, or a chatbot memory system. The memory your AI platform provides — fragments, disconnected facts, no structure, no history, no portability — is a platform's approximation of what PCM provides natively.

4.3 PCM vs PKM vs PIM vs KM

                 PIM                PKM                   KM                PCM
Scope            Personal info      Personal knowledge    Organizational    Personal context
Consumer         Human              Human                 Team              Agent
Operation        Store & retrieve   Organize & connect    Share & codify    Provision & compound
Unit             File               Note                  Document          Context unit (walnut)
Failure mode     Lost files         Note graveyard        Siloed teams      Context collapse
Optimizes for    Finding            Connecting            Distributing      Continuing
Question         Where is it?       How does it relate?   Who needs it?     What does my agent need?

4.4 The Ancestry

PCM's intellectual lineage traces a clear arc:

Year    Thinker            Contribution                                        Question Answered
1945    Vannevar Bush      Memex — associative retrieval                       How do I supplement my memory?
1950s   Niklas Luhmann     Zettelkasten — communication partner                How do I think in partnership with a system?
1958    Michael Polanyi    Tacit Knowledge — "we know more than we can tell"   What knowledge can't be captured?
1962    Doug Engelbart     Augmenting Human Intellect                          How do computers extend cognition?
1999    Frand & Hixon      Coined PKM                                          How do individuals manage knowledge?
2001    David Allen        GTD — externalize the actionable                    What do I do next?
~2017   Tiago Forte        PARA/CODE — organize by actionability               Which folder does this go in?
~2018   Andy Matuschak     Evergreen Notes — atomic, linked, personal          How do I think better?
2020    Nick Milo          LYT/ACE — link, don't categorize                    How do I find the connection?
2025    Anthropic          Context Engineering — enterprise discipline         How do we curate tokens for agents?
2025    Sebastien Dubois   Agentic Knowledge Management                        How does AI interact with PKM?
2026    Ben Flint          Personal Context Management                         How do individuals manage their context?

The through-line: each step moved closer to the insight that context, not knowledge, is what needs to be managed. PCM is where the line was always heading.


5. Context Engineering: The Enterprise Cousin

5.1 What Anthropic Defined


In September 2025, Anthropic's Applied AI team published "Effective Context Engineering for AI Agents," formalizing a discipline that had been emerging across the industry. Their definition:

"Context engineering: the set of strategies for curating and maintaining the optimal set of tokens during LLM inference, including all the other information that may land there outside of the prompts."

The distinction from prompt engineering is significant. Prompt engineering is writing good instructions. Context engineering is managing the entire information state. As the paper puts it: "Building with language models is becoming less about finding the right words and phrases for your prompts, and more about answering the broader question of 'what configuration of context is most likely to generate our model's desired behavior?'"

The paper introduces key concepts: the attention budget (every token depletes the model's ability to attend to other tokens), context rot (recall accuracy decreases as context length increases), and the four strategy pillars of system prompts, tools, examples, and message history. It documents techniques for long-horizon work: compaction, structured note-taking, and sub-agent architectures.

Andrej Karpathy amplified the framing: "Context engineering is the delicate art and science of filling the context window with just the right information for the next step." Tobi Lutke's endorsement reached 1.9 million views. Gartner identified context engineering as a top emerging technology skill for 2026 and predicts that by 2028, 80% of AI tools will include context engineering features.

The term stuck. The discipline is real. The question is: who does it apply to?

5.2 The Enterprise Claim

DataHub's co-founder Shirshanka Das drew a further distinction between context engineering (single-application) and context management (enterprise-wide):

"Context engineering solves the problem within a single application — it's the techniques and tools one team uses to fill their agent's context window effectively. It ends up being artisanal, bespoke, and isn't well set up to scale across an organization."

"Context management is what gives it those superpowers by solving this across your entire enterprise. It's systematic, governed, and built for scale."

DataHub positions context management as a $9 billion platform category. Their CONTEXT 2025 conference drew 1,500+ data leaders. Apple demonstrated agentic workflows. Netflix presented metadata infrastructure. The enterprise layer is being claimed.

5.3 The Personal Gap

The context stack now has four layers:

Prompt Engineering        → technique    (deprecated)
Context Engineering       → discipline   (Anthropic, Karpathy, Gartner)
Context Management        → enterprise   (DataHub, $9B market)
Personal Context Mgmt     → individual   (UNCLAIMED)

Anthropic tells developers how to curate tokens for agents. DataHub tells enterprises how to govern context at scale. But who tells individuals how to structure their personal context so that any AI agent they interact with can serve them effectively?

The answer, today, is nobody.

Anthropic's own paper acknowledges the gap without naming it: "This approach mirrors human cognition: we generally don't memorize entire corpuses of information, but rather introduce external organization and indexing systems like file systems, inboxes, and bookmarks to retrieve relevant information on demand." These "external organization and indexing systems" are precisely what PCM provides — but for personal context, not enterprise data.

DataHub's analogy is equally revealing: "Context engineering is like each development team writing their own authentication system. Context management is like implementing enterprise SSO." They solve for enterprises. Individuals don't get SSO. Individuals get a flat MEMORY.md file.

Dubois sees it most clearly: "Your AI is only as good as the context you give it. Most people give it almost nothing." He proposes Agentic Knowledge Management as the solution — but frames it as an evolution of PKM, not a new category. The tools stay the same. The paradigm stays the same. AI is added on top.

PCM is the missing layer. Not an evolution of PKM. Not enterprise context management scaled down. A distinct category for the distinct problem of managing personal context in the agent era.


6. The Context Crisis

The argument for PCM is not theoretical. Every major platform, every developer tool, and every agent framework is currently failing at personal context. The evidence is systemic.

6.1 Platform Memory: Automatic and Dumb


The four largest AI platforms have each shipped memory features. All four follow the same pattern: the model decides what to save, the user has minimal control, and the result is non-portable.

ChatGPT Memory stores a flat list of short statements on OpenAI's servers. The model decides what to save. Users cannot control what the "Reference Chat History" feature surfaces — it is a black box. MIT's 2025 research found an 83% recall failure rate — two-thirds of users who saw "Memory updated" later found their memories missing or corrupted. In February and November 2025, catastrophic memory wipes erased users' accumulated context without warning. Creative writers lost "entire fictional universes" with 40+ character backstories. Over 300 complaint threads accumulated on r/ChatGPTPro. Users described the system as "far gone in dementia." There is no native API for exporting memories. 900 million weekly active users, 70% with memory enabled, accumulating context they cannot take with them.

Claude's web memory (free tier, March 2026) synthesizes a categorized summary updated every 24 hours. More structured than ChatGPT's flat list — organized by professional domain — but still a single synthesized document. To its credit, Anthropic offers export and import, including from ChatGPT. But the memory remains server-side and model-synthesized.

Google's Personal Intelligence (January 2026) connects Gemini to Gmail, Photos, YouTube, and Search to reason across all your Google data simultaneously. The intelligence is real — and completely non-portable. There is no mechanism to export what Gemini "knows" about you. When Google auto-enabled the feature without asking in October 2025, a class-action lawsuit followed. Leave Google, lose everything.

Microsoft Copilot Recall captures screenshots of your active window every few seconds, runs OCR, and indexes everything you've seen or typed. It is surveillance-as-search, encrypted to a single device's TPM chip, non-portable by design. It captures self-destructing messages from Signal and WhatsApp. It records Zoom calls. It is not a memory system. It is a panopticon with a search bar.

The pattern across all four: context is captured automatically, stored on the platform's terms, structured minimally, and portable not at all. The model decides what matters. The user hopes for the best.

6.2 Developer Context: Manual and Flat

For developers working with AI coding tools, context management is manual practice built on flat files.

MEMORY.md — Claude Code's auto-memory — persists context in local Markdown. The model decides what to save. The first 200 lines are loaded at session start; everything beyond is invisible. There is no scoping, no linking, no temporal state. This is PKM thinking applied to agent context: one file, one place, one undifferentiated list. It works at 10 items. It fails at 200.

CLAUDE.md files offer the most structured approach currently available — a five-level hierarchy from organization policy down to directory-scoped rules, with glob-pattern matching. This is genuine scoping. But CLAUDE.md is project configuration, not personal context. It tells the agent how to behave here, not who you are everywhere.
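Directory-scoped rule resolution of the kind the CLAUDE.md hierarchy enables can be sketched with glob matching. The level numbers and example rules below are invented for illustration; the real loader's behavior may differ:

```python
from fnmatch import fnmatch

# Illustrative sketch of hierarchical, glob-scoped rules. The levels
# (broad organization policy down to narrow directory rules) and the
# rules themselves are invented examples, not Claude Code's actual data.

RULES = [
    # (level, glob pattern, rule): broader scopes first, narrower later
    (1, "*",            "org: never commit secrets"),
    (3, "src/*",        "project: run the linter before suggesting edits"),
    (5, "src/api/*.py", "dir: all endpoints need a docstring"),
]

def rules_for(path: str) -> list[str]:
    """Collect every rule whose glob matches the path, broadest scope first."""
    return [rule for level, pattern, rule in sorted(RULES)
            if fnmatch(path, pattern)]
```

A file deep in the tree inherits every matching scope above it; a file at the root sees only the broadest rules. That is genuine scoping, but it is still static project configuration, not personal state.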

AGENTS.md (60,000+ repositories, 20+ supporting tools) standardizes project-level agent instructions. But a March 2026 ETH Zurich paper found that AGENTS.md files may actually hinder agent performance. Cursor rules, Windsurf rules, and other tool-specific configurations follow the same pattern: static files that shape per-turn behavior but maintain no state, no history, no compounding.

The tools are getting better at reading context. Nobody is getting better at structuring it.

6.3 RAG: The Default That Isn't Enough

Retrieval Augmented Generation claimed the "context engine" slot by default. The chunk-embed-retrieve pipeline is now standard infrastructure for giving agents access to documents. Four out of five organizations increased AI investment in 2026, and most of that investment flows through RAG.
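To make the pipeline's shape concrete, here is a deliberately toy chunk-embed-retrieve loop, with a bag-of-words counter standing in for a real embedding model:

```python
import math
from collections import Counter

# Toy chunk-embed-retrieve sketch. A bag-of-words Counter stands in for a
# real embedding model; the shape of the pipeline is what matters here.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

Nothing in this loop knows who you are, what you are working on, or what time it is. It can only rank document chunks by similarity to a query, which is exactly the limitation the rest of this section describes.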

But RAG answers "what's in the documents?" not "what does my agent need right now?" It has no mechanism for personal context, preferences, relationships, or project state. It has no temporal awareness. Google Research found RAG paradoxically increases hallucination confidence — the additional context makes the model more sure of wrong answers.

The industry's own verdict: "Cannot live without RAG, yet remain unsatisfied." Gartner predicts 60% of AI projects abandoned without AI-ready data practices. RAG is necessary infrastructure. It is not a context management solution.

6.4 The Emerging Alternatives

OpenClaw (247,000+ GitHub stars, 2 million monthly active users) has the architecture right: a pluggable context engine slot with four lifecycle hooks — Ingest, Assemble, Compact, After Turn. Developers can swap in custom context strategies. But only RAG-style engines currently fill the slot. OpenClaw built the socket. Nobody has built the plug for personal context.
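The four hooks named above suggest a pluggable interface roughly like the following. The method names come from the text; the signatures and semantics are assumptions for illustration, not OpenClaw's actual API:

```python
from typing import Any, Protocol, runtime_checkable

# Hypothetical shape of a pluggable context engine built around the four
# lifecycle hooks named in the text (Ingest, Assemble, Compact, After
# Turn). Signatures are illustrative assumptions.

@runtime_checkable
class ContextEngine(Protocol):
    def ingest(self, event: dict[str, Any]) -> None:
        """Record new material (messages, files, tool output)."""

    def assemble(self, budget_tokens: int) -> str:
        """Build the context block for the next turn, within the token budget."""

    def compact(self) -> None:
        """Summarize or prune when the window fills up."""

    def after_turn(self, transcript: str) -> None:
        """Persist what the turn changed before the next one begins."""
```

A RAG engine fills this slot with similarity search; a personal context engine would fill it with scoped, temporal provisioning. The socket is the same; the plug is what's missing.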

Hermes Agent (Nous Research, February 2026) ships persistent memory in local Markdown — the right instinct. The implementation: MEMORY.md gets 2,200 characters. USER.md gets 1,375 characters. Total: 3,575 characters. Two paragraphs. For everything an agent knows about you and your work. Memory updates aren't even visible in the current session — only on restart.

6.5 The Evidence

Every approach demonstrates the same structural failure. Agent context today is either:

  • Automatic and dumb — the platform captures fragments, decides what matters, stores it non-portably, and fails 83% of the time
  • Manual and flat — the developer writes flat files, hopes the agent reads them, has no scoping, and hits limits at 200 lines or 3,575 characters
  • Retrieval-only — RAG finds similar document chunks but knows nothing about you, your state, your relationships, or your work in progress

None of these are PCM. None provision scoped, structured, temporal personal context to agents. The slot is open. The category is empty.

6.6 Siloed but Linked


The PCM architecture principle is the opposite of flat: context should be scoped (siloed) but connected (linked). Not merged. Not flattened. Not tagged-and-searched.

In the reference implementation (the Alive Context System), a walnut holds context for a single domain — a venture, an experiment, a person, a life area. Each walnut has a kernel of three source files: identity (what it is), history (where it's been), and knowledge (what's known). A generated projection — a snapshot of current state — is rebuilt on every save.

Bundles of work grow inside walnuts. Each bundle is a self-contained unit with its own manifest, tasks, observations, and source material. Bundles can contain sub-bundles. The tree can nest without limit.

The key property: each node in the tree has its own manifest. You can scan 50 bundles by reading 50 manifests — 50 lines of YAML — without loading any content. You can prune a bundle without touching its siblings. You can graph the connections between walnuts without loading them all into memory.
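The scan-without-loading property can be sketched in a few lines. This is a minimal illustration, not the Alive schema: the directory layout, `manifest.yaml` filename, and fields are hypothetical.

```python
import os
import tempfile

# Hypothetical bundle tree: each bundle directory carries a small
# manifest; content files are never opened during a scan.
root = tempfile.mkdtemp()
for name in ("landing-page", "pricing-test", "investor-update"):
    bundle = os.path.join(root, name)
    os.makedirs(bundle)
    with open(os.path.join(bundle, "manifest.yaml"), "w") as f:
        f.write(f"title: {name}\nstatus: active\n")
    with open(os.path.join(bundle, "notes.md"), "w") as f:
        f.write("...large content that a scan never reads...\n")

def scan(root):
    """Index the tree by reading only the manifests."""
    index = {}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry, "manifest.yaml")
        with open(path) as f:
            index[entry] = dict(
                line.split(": ", 1) for line in f.read().splitlines()
            )
    return index

index = scan(root)
print(sorted(index))                      # all bundles, no content loaded
print(index["pricing-test"]["status"])    # 'active'
```

The same shape supports pruning (delete one directory, siblings untouched) and graphing (walk manifests only).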

Compare this to MEMORY.md. To understand what's in MEMORY.md, you read all of MEMORY.md. To prune MEMORY.md, you read all of MEMORY.md and decide what to remove. To find connections in MEMORY.md, you... read all of MEMORY.md.

The file system is the methodology. Scoping is the feature. The tree structure prevents the entropy that flat files guarantee.

6.7 The Three Files That Replace Everything

A walnut's kernel contains three source files:

  • key.md — what it is. Identity, people, connections, goal. Rarely changes.
  • log.md — where it's been. Append-only history. Signed entries. Immutable.
  • insights.md — what's known. Standing domain knowledge. Confirmed evergreen.

Plus one generated projection:

  • now.json — where it is right now. Current state, active work, next action. Regenerated from scratch on every save.

Four artifacts. That is the entire state of any context unit.
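The projection discipline — now.json is always regenerated, never hand-edited — can be sketched as follows. The file contents and field names here are hypothetical stand-ins; the real kernel formats are richer.

```python
import json
import time

# Hypothetical kernel contents for one walnut.
key_md = "# Walnut: lockin-lab\ngoal: define the PCM category\n"
log_md = "2026-01-10 started paper 01\n2026-01-12 drafted abstract\n"
insights_md = "- flat memory files do not scale\n"

def project(key_md, log_md, insights_md):
    """Rebuild the now.json projection from scratch on every save:
    current state is derived from the three source files."""
    log_lines = [l for l in log_md.splitlines() if l.strip()]
    return {
        "identity": key_md.splitlines()[0].lstrip("# "),
        "last_entry": log_lines[-1],
        "insight_count": len([l for l in insights_md.splitlines() if l.strip()]),
        "generated_at": time.strftime("%Y-%m-%d"),
    }

now = project(key_md, log_md, insights_md)
print(json.dumps(now, indent=2))
```

Because the projection is disposable, it can never drift from its sources: delete it and the next save recreates it.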

Compare to PARA's "figure out which folder" or Zettelkasten's "number your cards" or GTD's "is this actionable?" Context management is simpler than knowledge management because it is scoped by design. The scope is the structure. The structure is the methodology.


7. The Reference Implementation

The Alive Context System is the first PCM. It is a Claude Code plugin — 15 skills, 14 hooks, plain Markdown files on the user's machine — with over 100 users in its first two weeks of public release. It has been adopted across the Claude Code, Hermes, and OpenClaw ecosystems, with 257 unique cloners and organic international reach.

It is not the only possible PCM. It is the proof that PCM works — that context can be structured to compound across sessions, that agents can be provisioned rather than searched, that the contextual wrapper can be preserved rather than lost.

A detailed technical analysis of the implementation — including the "Context as Code" pattern of building an agent runtime from hooks and plain files — is the subject of a companion paper.


8. Implications

8.1 For Tool Builders

Every PKM tool is one paradigm shift away from becoming a PCM tool. The features they need: scoped context units (not flat note vaults), agent provisioning (not just human search), temporal state (not just timestamps), manifest-driven indexing (not just tags), and session continuity (not just storage).

Obsidian is closest — local-first, plain files, extensible. An Obsidian plugin that adds walnut-style scoping and agent provisioning would transform it from the best PKM tool into a PCM tool.

8.2 For AI Agent Developers

Your agent is only as good as the context provisioned to it. MCP gives you connectivity — 97 million monthly SDK downloads. PCM gives you the context worth connecting. Without structured personal context, MCP is a highway with no cars on it.

8.3 For Individuals

You own your context. Not the platform. Not the model provider. You.

The memory your AI platform offers — ChatGPT's fragments, Gemini's Personal Intelligence, Copilot's recall — is platform-locked, non-portable, and structurally incapable of compounding. It is, as the whitepaper describes, "a shitty photocopy of your memory." They keep the original. You rent the copy.

PCM means your context lives on your machine, in your files, in a structure you control. When you switch models, your context travels with you. When you switch tools, your context survives. When you stop paying for a subscription, your context remains.

This is not a feature. It is a property right. The argument for context sovereignty — that personal context is property, not platform data — is the subject of a companion paper.


9. Conclusion

Frand and Hixon asked "how do individuals manage their knowledge?" in 1999. Twenty-seven years later, the answer is: badly. The tools are sophisticated. The frameworks are elegant. The market is $2.45 billion. And 68% of users abandon their systems within six months.

The problem is not the tools. The problem is the question.

The successor question is: how do individuals manage their context?

Context is not knowledge. Knowledge is the what. Context is the why, the who, the when, and the what-for. Knowledge can be filed. Context must be provisioned. Knowledge is permanent. Context is temporal. Knowledge is retrieved by humans. Context is consumed by agents.

Personal Context Management is the category that answers the successor question. It is not an evolution of PKM — it is a replacement for it, as surely as PKM replaced general-purpose filing. The agent era does not need better filing cabinets. It needs context infrastructure.

The age of the second brain is over. The age of the living context has begun.


References

Academic & Foundational

  • Frand, J. & Hixon, C. (1999). "Personal Knowledge Management: Who, What, Why, When, Where, How?" UCLA Anderson School of Management.
  • Polanyi, M. (1958). Personal Knowledge. University of Chicago Press.
  • Polanyi, M. (1966). The Tacit Dimension. Doubleday.
  • Bush, V. (1945). "As We May Think." The Atlantic.
  • Luhmann, N. (1981). "Kommunikation mit Zettelkästen" ("Communicating with Slip Boxes").
  • Ebbinghaus, H. (1885). Memory: A Contribution to Experimental Psychology. Replicated by Murre & Dros (2015), PMC.

Frameworks

  • Forte, T. (2022). Building a Second Brain. Atria Books.
  • Ahrens, S. (2017). How to Take Smart Notes. CreateSpace.
  • Allen, D. (2001, 2015). Getting Things Done. Penguin.
  • Matuschak, A. "Evergreen Notes." notes.andymatuschak.org.
  • Milo, N. "Linking Your Thinking." linkingyourthinking.com.

Context Engineering

  • Rajasekaran, P. et al. (2025). "Effective Context Engineering for AI Agents." Anthropic.
  • Das, S. (2025-2026). "Context Management: The Missing Piece for Agentic AI." DataHub.
  • Willison, S. (2025). "Context Engineering." simonwillison.net.
  • Gartner (2026). "Context Engineering Replacing Prompt Engineering."
  • Chen et al. (2025). "A Survey of Context Engineering for LLMs." arXiv:2507.13334.

PKM Critique

  • Matuschak, A. "People who write extensively about note-writing rarely have a serious context of use."
  • Chapin, S. "Notes Against Note-Taking Systems." Substack.
  • Nussenbaum, M. (2022). "Don't Take Notes." Candy for Breakfast.
  • Dubois, S. "Your AI Doesn't Know You." dsebastien.net.
  • Dubois, S. "Agentic Knowledge Management." dsebastien.net.
  • Tietze, C. "The Collector's Fallacy." zettelkasten.de.

Platform Memory & Context Systems

  • Willison, S. (2025). "I really don't like ChatGPT's new memory dossier." simonwillison.net.
  • WebProNews (2025). "ChatGPT's Fading Recall: Inside the 2025 Memory Wipe Crisis."
  • Anthropic (2026). "Use Claude's Chat Search and Memory." Claude Help Center.
  • Google (2026). "Personal Intelligence." Google AI Blog.
  • Beaumont, K. (2025). "Testing Recall Security and Privacy Implications." DoublePulsar.
  • Claude Code Docs (2026). "How Claude Remembers Your Project."
  • RAGFlow (2025). "From RAG to Context: Year-End Review."
  • OpenClaw Docs (2026). "Context Engine." docs.openclaw.ai.
  • Nous Research (2026). "Hermes Agent: Persistent Memory."
  • ETH Zurich (2026). "New Research Reassesses the Value of AGENTS.md Files."

Market Data

  • DataIntelo (2024). "Personal Knowledge Management Software Market Report."
  • Electroiq (2023). "Evernote Business Statistics."
  • DataHub (2026). "State of Context Management Report."
  • DemandSage (2026). "ChatGPT Statistics 2026."

Appendix A: PKM Tool Landscape (2026)

[To be populated with full tool comparison matrix from pkm-landscape survey]

Appendix B: Framework Comparison Matrix

[PARA vs Zettelkasten vs GTD vs LYT vs ALIVE/PCM — detailed feature and philosophy comparison]

Appendix C: MEMORY.md Teardown

[Side-by-side analysis: what MEMORY.md looks like at 50/100/200 entries vs what a bundle tree looks like at the same scale]

Cite this work

Flint, B. (2026). "Personal Context Management: Defining the Category." Lock-in Lab Research.
PAPER 02
Context as Code
Agent Runtime from Plain Files
LOCK-IN LAB — 2026

Context as Code

Agent Runtime from Plain Files. We define Context as Code — a design pattern in which structured context injection into a stateless language model creates persistent agent runtime behavior without training, fine-tuning, or custom infrastructure. We show that this pattern is supported by converging evidence: Anthropic's persona vectors research proves context injection creates measurable neural activation patterns; in-context learning closes the performance gap with fine-tuning to 3%; fine-tuning degrades safety alignment while context injection preserves it; and three independent billion-dollar-scale agent systems have converged on plain files as their primary behavioral specification.

Abstract

We define Context as Code — a design pattern in which structured context injection into a stateless language model creates persistent agent runtime behavior without training, fine-tuning, or custom infrastructure. We show that this pattern is supported by converging evidence: Anthropic's persona vectors research proves context injection creates measurable neural activation patterns; in-context learning closes the performance gap with fine-tuning to 3%; fine-tuning degrades safety alignment while context injection preserves it; and three independent billion-dollar-scale agent systems have converged on plain files as their primary behavioral specification.

We survey the landscape of agent configuration (AGENTS.md, Cursor rules, MCP servers, Codified Context) and find that while individual components of Context as Code are increasingly common, the integrated pattern — hooks as lifecycle, rules as behavior, skills as operations, files as state — is undocumented in any of the 1,400+ papers surveyed. We coin the term and define the architecture.

A companion paper documents a reference implementation that has run daily for eight months (the ALIVE Technical Whitepaper). This paper defines the pattern it implements.

1. Why This Pattern Exists

This paper exists because of a problem identified in a companion paper: Personal Context Management.

The PCM paper argues that personal knowledge management (PKM) solves the wrong problem — it files knowledge for human retrieval when what the agent era requires is provisioning context for agent action. PCM systems need to capture context (not just content), maintain temporal state, scope context to prevent cross-pollination, compound across sessions, and provision context to agents at the right scope and time.

A third companion paper argues that personal context is property — exportable, inheritable, deletable with certainty, owned by the individual.

These two theses — context should be provisioned, and context should be owned — create a design constraint: the system that manages personal context must be built from sovereign, portable, human-readable infrastructure. No platform dependency. No proprietary format. No cloud service requirement.

Context as Code is the pattern that meets this constraint. It is the engineering answer to a philosophical and categorical question.


2. The Inversion

2.1 Configuration vs Runtime


Every existing approach to agent behavior treats context as configuration for an engineered runtime. AGENTS.md configures Copilot. CLAUDE.md configures Claude Code. Cursor rules configure Cursor. The tool is the runtime. The file is its settings.

Context as Code inverts this. The files are not configuration. They are the runtime. There is no engineered system underneath — only a foundation model and a context window. Structure the context precisely enough, inject it at the right lifecycle boundaries, and runtime behavior emerges from the model's interpretation.

The model is the execution engine. The context is the program.

2.2 Infrastructure as Code: The Precedent


This inversion has a fifteen-year precedent.

Before Infrastructure as Code, engineers manually configured servers. Terraform changed the paradigm: declare the desired state in version-controlled files, let the system converge. GitOps extended it: the repo is the source of truth.

Context as Code makes the same move for agent behavior. Files on disk declare the desired behavioral state. The model converges toward it. Change a markdown file, change the behavior. Version the context, version the behavior.

| GitOps Principle | Context as Code Equivalent |
| --- | --- |
| Declarative specification | Rules declare constraints, not per-turn instructions |
| Version-controlled source of truth | Context files in Git with full diff history |
| Automated reconciliation | Agent reads files each session, converges to spec |
| Observable state | Human-readable plain text |
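The "automated reconciliation" principle can be sketched as a session-start assembly step. The file names (`identity.md`, `rules.md`) and the comment-marker format are hypothetical, not a documented Claude Code or OpenClaw mechanism.

```python
from pathlib import Path
import tempfile

# Hypothetical context repo: version-controlled files declare the
# desired behavioral state; each session re-reads them from disk.
repo = Path(tempfile.mkdtemp())
(repo / "rules.md").write_text("- never edit log.md directly\n")
(repo / "identity.md").write_text("You assist Ben with Lock-in Lab.\n")

def assemble_session_context(repo):
    """Converge to spec: the injected context is whatever the files
    say right now, so editing a file edits the behavior."""
    parts = []
    for name in ("identity.md", "rules.md"):   # deterministic order
        parts.append(f"<!-- {name} -->\n" + (repo / name).read_text())
    return "\n".join(parts)

context = assemble_session_context(repo)
print(context)
```

Versioning follows for free: `git diff` on the repo is a diff of the agent's behavior.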

2.3 Why Now

Three capabilities converged: lifecycle hooks (injection points at session boundaries), long context windows (room for a runtime alongside actual work), and plugin architectures (structured capability injection). None individually enables Context as Code. Together, they make it inevitable.


3. The Mechanism: Model Wrangling

3.1 Four Ways to Shape Behavior

| Approach | Mechanism | Persistence | Cost | Safety | Reversibility |
| --- | --- | --- | --- | --- | --- |
| Training | Weight modification | Permanent | $millions | Depends | Irreversible |
| Fine-tuning | Weight adjustment | Semi-permanent | $thousands | Degrades (10 examples can break alignment) | Retrain |
| Prompt engineering | Per-turn instructions | Ephemeral | Fractions of a cent | Preserved | Instant |
| Context injection | Structured context per session | Session-persistent, file-permanent | ~$0.15–0.50/session | Preserved | Edit a file |

Context injection uniquely combines fine-tuning's persistence with prompting's reversibility and safety preservation.

3.2 The Neural Evidence

Anthropic's persona vectors research (2025) proved that context injection creates measurable neural activation patterns: "the persona vector activates before the response — it predicts the persona the model will adopt in advance." This is not instruction-following. It is architectural behavioral shaping at the representation level.

Research on in-context representation learning confirms that "when supplied with structured in-context examples, transformers dynamically reorganize the geometry of their latent representation space." The model's internal state physically reconfigures based on the structure of provided context.

3.3 The Power Law

Anthropic's many-shot research (NeurIPS 2024) proved that behavioral change from context injection follows a power law. More context, more behavioral change, on a predictable curve. Crucially: "larger models show greater susceptibility." Context as Code becomes more effective as models become more capable.

3.4 The Safety Argument

Fine-tuning with ten harmful examples is sufficient to "undermine safety guardrails substantially" (ICLR 2024). Context injection operates within existing safety boundaries because no weights are modified. Anthropic's preventative steering via context caused "little-to-no degradation in model capabilities."

This is not a minor advantage. It is the difference between a behavioral system that respects its safety training and one that has been partially untrained.

3.5 The Performance Gap

Stanford's TART closed the ICL-fine-tuning gap to 3%. Google DeepMind found ICL achieves better generalization than fine-tuning in data-matched settings. Within 3% of fine-tuning. Better generalization. No safety degradation. Instantly reversible. Orders of magnitude cheaper.

3.6 The Attention Budget

Context injection is not "dump everything in." Chroma Research found that every model degrades as input length increases. A 1M-token window "still rots at 50K tokens." The "Lost in the Middle" finding (TACL 2024) showed performance drops of more than 30% when relevant information sits in the middle of the context window.

The governing principle (Anthropic): "find the smallest possible set of high-signal tokens that maximize the likelihood of some desired outcome." A 200-token skill file replacing 50,000 tokens of MCP context is the exemplar. Structure matters more than quantity.


4. The Architecture Pattern


A Context as Code system has four layers. The specific implementation varies; the pattern is constant.

4.1 The Hook Layer

Hooks inject context at lifecycle boundaries. The minimum viable hook set:

  • Session start: inject behavioral rules, user identity, and current state
  • Tool interception: enforce invariants the model must not violate (infrastructure-tier constraints)
  • Context monitoring: re-inject rules as the context window fills (graceful degradation)
  • Compaction recovery: restore full behavioral context after context window compression

The critical architectural insight is dual-tier enforcement:

  • Infrastructure tier: Hook guards that MECHANICALLY block violations. The model cannot bypass them regardless of its instructions. Log immutability, file protection, deletion prevention.
  • Context tier: Injected rules that the model INTERPRETS and follows. Persona, voice, decision-making style, energy matching.

Rules that must be absolutely enforced → hook guards. Rules that benefit from judgment → context injection. This split is what separates Context as Code from "a really long system prompt."
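An infrastructure-tier guard can be sketched as a pre-tool-call check. The `tool_call` payload shape, tool names, and protected file list below are all hypothetical; real hook payloads depend on the host (Claude Code's hook JSON, for instance, differs from this).

```python
# Infrastructure-tier guard: inspects a proposed tool call before
# execution and blocks it mechanically. The model cannot bypass this:
# it runs in the hook process, outside the context window.
PROTECTED = ("log.md", "rules.md")

def pre_tool_guard(tool_call):
    """Return (allow, reason) for a proposed file operation."""
    if tool_call["tool"] in ("write_file", "edit_file"):
        path = tool_call["args"]["path"]
        if path.endswith(PROTECTED):
            return False, f"{path} is append-only / protected"
    return True, "ok"

allowed, reason = pre_tool_guard(
    {"tool": "edit_file", "args": {"path": "walnut/kernel/log.md"}}
)
print(allowed, reason)   # blocked: log.md is protected
```

Context-tier rules (voice, judgment, persona) never pass through a guard like this; they are injected as text and interpreted.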

4.2 The Rules Layer

Declarative behavioral constraints in plain markdown. Not per-turn instructions — persistent behavioral constitution. Injected at every session start and re-injected at context thresholds.

A well-structured rules layer covers:

  • Relationship contract: How the agent relates to the human (surface options, don't decide; read before speaking; hold position when pushed)
  • Operating instincts: Behavioral instructions that run every session without being asked
  • Communication constraints: Voice, tone, banned phrases, energy matching
  • Structural standards: How files are named, formatted, signed

These are not suggestions. They are the operating system's kernel. English, not code.

4.3 The Skills Layer

Procedural knowledge encoded as markdown files, loaded when invoked. Skills are step-by-step protocols for complex operations — a save protocol, a context loading sequence, a search procedure.

Skills are the API surface. Rules define behavior. Hooks manage lifecycle. Skills define operations.

4.4 The State Layer

Plain files on disk. No database. No service. The filesystem is the database. YAML for session state. Markdown for content. JSON for computed projections.

The key property: every piece of state is human-readable, version-controllable, and portable. Copy the files, move the runtime.


5. The Landscape

5.1 What Exists

A survey of 1,400+ papers on context engineering found no reference to "Context as Code" as a named pattern. The individual components are increasingly common:

  • Hooks for lifecycle: Claude Code, Windsurf Cascade, and several open-source projects use hooks for automation and enforcement
  • Markdown for context: AGENTS.md (60,000+ repos), CLAUDE.md, Cursor rules — all shape agent behavior through markdown files
  • Session persistence: MCP-based systems, Hermes Agent, and various tools provide some cross-session memory
  • Subagent dispatch: Claude Code natively supports spawning background agents

5.2 What Doesn't Exist

No documented system combines all four layers into an integrated runtime:

| System | Hooks | Rules-as-Behavior | Skills-as-Operations | Files-as-State | Integrated Runtime |
| --- | --- | --- | --- | --- | --- |
| AGENTS.md | No | Static config | No | No | No |
| Cursor rules | No | Scoped config | No | No | No |
| Hermes Agent | No | No | No | 3,575 chars total | No |
| Codified Context (paper) | Retrieval | Via domain agents | No | Partial | Partial |
| Context as Code pattern | Yes | Yes | Yes | Yes | Yes |

The closest conceptual articulation — "Markdown as an Operating System" (LeverageAI) — describes the idea without implementing it. The closest academic work — "Codified Context" (arXiv:2602.20478) — uses a more static, retrieval-based architecture without lifecycle hooks or behavioral emergence.

5.3 Convergent Evolution

Three independent billion-dollar-scale systems converged on plain files:

Manus (acquired ~$2B) chose context engineering over fine-tuning: "improvements in hours instead of weeks." Treats "the file system as the ultimate context."

OpenClaw (247K+ GitHub stars) uses markdown files as primary memory and exposes a pluggable context engine slot.

Claude Code uses CLAUDE.md and MEMORY.md as its behavioral specification.

Three teams. Three architectures. Same answer: plain files.


6. Emergent Properties

The defining characteristic of Context as Code is that the combination of layers produces behavior that no individual layer specifies.

6.1 Persistence Without State

The model has no memory between sessions. But the hook system creates effective persistence: session records on disk, state snapshots regenerated on save, rules re-injected on every context reset. The model "forgets" but the system remembers.

6.2 Self-Protection

Infrastructure-tier hooks prevent the model from modifying its own rules. The behavioral constitution is tamper-proof from inside. Customizations flow through a separate override channel.

6.3 Graceful Degradation

As the context window fills, monitoring hooks re-inject rules at progressive thresholds. The model's behavioral consistency is actively maintained rather than silently eroding.
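Progressive re-injection can be sketched as threshold tracking. The 60/80/95% values are illustrative only, not the system's actual thresholds.

```python
# Graceful degradation sketch: fire a rules re-injection each time
# context usage crosses a new threshold. Thresholds are illustrative.
THRESHOLDS = (0.60, 0.80, 0.95)

def due_reinjections(used_fraction, already_fired):
    """Return thresholds crossed that have not yet fired."""
    return [t for t in THRESHOLDS
            if used_fraction >= t and t not in already_fired]

fired = set()
for used in (0.30, 0.65, 0.85):   # simulated usage over a session
    for t in due_reinjections(used, fired):
        fired.add(t)
        print(f"context {used:.0%} full -> re-inject rules "
              f"(threshold {t:.0%})")
```

Tracking which thresholds have fired keeps re-injection idempotent: the rules are refreshed at most once per threshold, not on every check.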

6.4 Multi-Session Awareness

File timestamps and session records enable concurrent sessions to detect each other's activity — primitive multi-agent coordination using the filesystem as a shared channel.
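One way this coordination can work is a heartbeat file per session, with peers detected by modification time. The directory layout, file naming, and freshness window below are hypothetical.

```python
import json
import os
import tempfile
import time

sessions = tempfile.mkdtemp()   # hypothetical shared session-record dir

def heartbeat(session_id):
    """Each session periodically rewrites its own record file."""
    with open(os.path.join(sessions, f"{session_id}.json"), "w") as f:
        json.dump({"id": session_id, "ts": time.time()}, f)

def active_peers(session_id, window=300):
    """A peer is 'active' if its record was touched within the window."""
    now, peers = time.time(), []
    for name in os.listdir(sessions):
        sid = name[:-len(".json")]
        if sid == session_id:
            continue
        mtime = os.path.getmtime(os.path.join(sessions, name))
        if now - mtime < window:
            peers.append(sid)
    return sorted(peers)

heartbeat("session-a")
heartbeat("session-b")
print(active_peers("session-a"))   # ['session-b']
```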

6.5 Crash Recovery

Continuous state checkpointing means any session can recover from a crash, compaction, or abrupt termination. The next session reads the recovery state and continues.
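The checkpoint-and-recover cycle can be sketched as follows. The file name and state fields are hypothetical; the key idea is the atomic write, so a crash mid-save never corrupts the last good state.

```python
import json
import tempfile
from pathlib import Path

state_dir = Path(tempfile.mkdtemp())
checkpoint = state_dir / "session.json"

def save_checkpoint(step, payload):
    """Write to a temp file, then atomically rename over the
    checkpoint, so readers only ever see a complete state."""
    tmp = checkpoint.with_suffix(".tmp")
    tmp.write_text(json.dumps({"step": step, "payload": payload}))
    tmp.replace(checkpoint)

def recover():
    """Next session resumes from the last durable checkpoint."""
    if checkpoint.exists():
        return json.loads(checkpoint.read_text())
    return {"step": 0, "payload": None}

save_checkpoint(1, "drafted intro")
save_checkpoint(2, "added references")
# -- simulated crash: the process dies here; files remain on disk --
resumed = recover()
print(resumed["step"], resumed["payload"])   # 2 added references
```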

None of these are "features" in the traditional sense. They are emergent properties of correctly structured context.


7. Implications

7.1 For Agent Architecture

Context as Code suggests a different design philosophy. Instead of engineering runtime systems and configuring them, structure context so precisely that runtime behavior emerges. The engineering effort moves from system design to context design.

7.2 For Model Providers

If context injection achieves within 3% of fine-tuning while preserving safety, the economics change. You don't need a custom model. You need structured context. The moat moves from weights to context.

Anthropic's own research supports this: system prompts produce "stronger behavioral patterns" than user prompts, and larger models follow them more faithfully. Context as Code becomes more powerful as models improve.

7.3 For Platform Context Engines

OpenClaw's pluggable context engine slot is the right architecture. But the engines that fill it should not just be RAG retrievers. They should be Context as Code runtimes — structured behavioral contexts that create agent identity, enforce constraints, and compound across sessions.

7.4 The Open Question: Scale

Does Context as Code scale beyond a single user? Subagent dispatch, brief packs, and tiered loading suggest it can. The proof at enterprise scale does not yet exist.


8. Conclusion

Context as Code is the engineering pattern that makes Personal Context Management implementable and Context as Property achievable. It is the answer to a constraint: how do you build an agent runtime that is sovereign, portable, human-readable, and requires no platform infrastructure?

The answer: you don't build a runtime at all. You structure context so precisely that a runtime emerges.

The mechanism is supported by neural evidence (persona vectors), performance data (3% gap), safety research (fine-tuning breaks, injection preserves), and convergent evolution (three independent billion-dollar systems chose the same approach).

A reference implementation — the Alive Context System — has run daily for eight months, documented in a companion whitepaper. But the pattern is not the implementation. Anyone with a foundation model, a hook system, and a text editor can build a Context as Code runtime.

Context is not configuration for a runtime. Context is the runtime.


References

[Same references as draft-02, with additions for companion paper cross-references]

Companion Papers

  • Flint, B. (2026). "Personal Context Management: Defining the Category." Lock-in Lab.
  • Flint, B. (2026). "Context as Property." Lock-in Lab.
  • Flint, B. (2026). "The Alive Context System: Technical Whitepaper." Lock-in Lab.

Cite this work

Flint, B. (2026). "Context as Code: Agent Runtime from Plain Files." Lock-in Lab Research.
PAPER 03
Context as Property
The Ownership Thesis
LOCK-IN LAB — 2026

Context as Property

The Ownership Thesis. The most valuable thing an AI will ever know is you. The question of who owns that knowledge is the defining property rights question of the AI age.

Abstract

The most valuable thing an AI will ever know is you. The question of who owns that knowledge is the defining property rights question of the AI age.

This paper argues that personal context — the accumulated decisions, relationships, domain knowledge, and temporal state that make AI useful to you specifically — is property. Not platform data. Not training material. Not a feature of your subscription. Property you own, control, export, inherit, and delete.

We survey what platforms built (and credit them for it), examine the enshittification cycle now beginning in AI memory, trace the bundling/unbundling pattern through four historical parallels, and argue that the technology for sovereign context already exists. It is called files.

Companion papers define the category (Personal Context Management) and document the engineering (Context as Code). This paper makes the ownership argument.

1. The Observation

Here's something worth noticing: the most valuable AI feature isn't intelligence. It's memory.

An AI that remembers your project is categorically better than one that doesn't. An AI that knows your preferences saves you fifteen minutes of re-explanation per session. An AI that understands your domain — the decisions you've made, the people involved, the history of how you got here — can continue your work instead of starting from scratch.

Memory is what turns a chat window into an assistant. And right now, you don't own yours.

Your ChatGPT memories live on OpenAI's servers, in a format you can't export, structured in a way you can't control, subject to terms you didn't negotiate. In February 2025, hundreds of users lost their accumulated memories overnight in a catastrophic wipe — years of curation, gone. OpenAI's response took ten to twelve days. There is no native API for exporting your memories.

Your Claude context is synthesized into a server-side summary updated every 24 hours. Your Gemini context is inseparable from your Google account. Your Copilot context is encrypted to a single device's TPM chip.

This paper is about a simple proposition: that personal context should be property, with all the rights that word implies. Exportable. Inheritable. Deletable with certainty. Yours.


2. What Platforms Built

Before making the property argument, an honest acknowledgment.

Platforms built something real. They proved a concept that deserves credit.

ChatGPT Memory proved that persistent AI context is transformatively valuable. Before February 2024, every AI conversation started from zero. You re-explained your job, your preferences, your projects, every time. Memory proved that even crude persistence — a flat list of key-value pairs — creates enormous user value. Hundreds of millions of people experienced, for the first time, an AI that knows them. That's a genuine achievement.

Google's Personal Intelligence proved that cross-application reasoning is powerful. When Gemini can see your email, calendar, photos, and search history simultaneously, it makes connections you'd never make manually. The insight — that context spans applications — is correct and important.

Obsidian proved that people will invest enormous effort in organizing their knowledge. 1.5 million users, 22% year-over-year growth, a thriving community of people who genuinely believe that structured knowledge changes their lives. And Obsidian proved something else that matters: local-first, plain-file knowledge management works. People prefer owning their data.

Notion proved that structured, relational data is valuable for individuals, not just enterprises. The idea that a person would build a relational database of their own knowledge — and find it useful — seemed unlikely. Notion proved it.

These are real contributions. The people who built them deserve recognition. The question is not whether AI context is valuable — that's settled. The question is whether the next decade of AI context will be owned by the platforms that proved the concept, or by the people whose lives generate the context.

History has a clear answer to this question. But first, the pattern.


3. The Pattern


3.1 Enshittification


Cory Doctorow described a cycle in 2023 that is now beginning in AI memory.

"Here is how platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die."

The mechanism that enables each phase transition is lock-in — the ability to raise switching costs so users can't leave even as the experience degrades.

ChatGPT Memory is in Phase 1. The memory is free. It's useful. It feels generous. Users accumulate context — preferences, project history, relationship notes, domain knowledge — and each memory stored increases the cost of switching. This is the attraction phase. OpenAI is spending its investor capital to acquire your lock-in.

Phase 2 is visible in the pricing tiers. Plus subscribers already get 25% more memory capacity. Enterprise customers get API access. The feature that was free becomes the feature you pay to keep. Your accumulated context becomes the subscription's moat.

Phase 3 is predictable because it is always predictable. The platform's understanding of you — built from your conversations, your decisions, your preferences — becomes an asset the platform monetizes. Training data. Behavioral targeting. Insight resale. The memory that was supposed to serve you begins to serve the platform.

The antidote, as Doctorow has argued consistently, is interoperability and data portability. If you can leave, the platform has to stay good. If you can't, the platform doesn't have to stay anything.

3.2 Bundling and Unbundling

Jim Barksdale, CEO of Netscape, reportedly told his board: "Gentlemen, there's only two ways I know of to make money: bundling and unbundling."

The observation is that industries oscillate. A company bundles many services together. Customers pay for the bundle because it's convenient. The bundler captures value through convenience premiums and switching costs. Then technology makes it possible to access individual components separately — often better and cheaper. Startups pick off the most valuable parts of the bundle. The bundle collapses. Eventually, a new bundler emerges, and the cycle repeats.

Platforms are currently bundling your context. Your ChatGPT conversations, your Google search history, your Notion workspace, your calendar, your email — each platform holds a piece of your context, and none has the whole picture. Google's Personal Intelligence is the ultimate bundling move: aggregate context across all Google services into a single intelligence layer. The more services you use, the smarter Gemini gets, the harder it is to leave.

Every unbundling happens when two conditions are met: technology makes the bundle unnecessary, and the bundle's extraction becomes intolerable. Both conditions are now met for personal context. Local AI and edge computing mean you don't need a cloud platform for persistent context. And platforms are visibly beginning to use your context against you — premium-tier gatekeeping of your own history, auto-enrollment without consent, catastrophic data losses with weeks-long response times.

The historical parallels are precise:

AT&T bundled the phone, the line, the switching, and the long distance. You rented your phone from them. The Carterfone decision of 1968 — you can attach any device to the network — and the 1984 breakup unbundled the stack. Innovation exploded. The PCM argument is the Carterfone argument: you should be able to attach any context to any AI.

Cable bundled 200 channels to sell you the 12 you wanted. Streaming unbundled it. Then streaming re-bundled (Disney+/Hulu/ESPN). The cycle continues. Currently, platforms bundle all your context to sell you the AI features you want. PCM lets you selectively share context with any AI.

Banks bundled checking, savings, loans, investments, and payments into one institution. Switching was nearly impossible. Then Stripe unbundled payments, Robinhood unbundled investing, Venmo unbundled transfers, and Plaid unbundled data access. The plumbing for financial unbundling was APIs and open data standards. The plumbing for context unbundling is plain files and open formats.

We are at the unbundling moment for personal context.

3.3 The Context Debt Cycle

Ray Dalio describes economic cycles driven by debt accumulation — short-term cycles of expansion and contraction, and long-term cycles where debt builds across multiple short-term cycles until it becomes unsustainable, forcing a painful deleveraging.

Personal context follows the same pattern.

The short-term context cycle: you adopt a platform. You build context. The platform leverages your context to increase switching costs. You consider leaving. The switching cost is too high. You stay. The cycle repeats with deeper lock-in each time.

The long-term context cycle: over ten to twenty years, you accumulate context across many platforms. Gmail has your email context. Google has your search context. ChatGPT has your conversation context. Notion has your project context. Your calendar has your time context. LinkedIn has your professional context. Each platform has a piece. None has the whole picture. Your context is fragmented, siloed, and increasingly leveraged against you.

The deleveraging moment is when you consolidate your context into a system you own. That's the PCM moment. It is painful in the same way Dalio's deleveragings are painful — you leave behind accumulated context, you rebuild in a new system, you accept short-term loss for long-term sovereignty. But the alternative is continued debt accumulation until the system becomes unsustainable.


4. The Framework

4.1 Three Tests for Context Property

A simple framework for determining whether your context is property or leverage:

Test 1: Can you export it? If you cannot receive your personal context in a structured, machine-readable format and transmit it to another system, you do not own it. You are renting access to your own history.

Test 2: Can you inherit it? If your context dies with your subscription — if your family cannot receive your accumulated understanding after you're gone — it is not property. Property survives its owner. Platform features do not.

Test 3: Can you delete it with certainty? If you cannot remove specific context and be confident it is gone — not retained in backups, not embedded in model weights, not preserved in aggregate analytics — you are not in control. The "right to be forgotten" is meaningful only if forgetting is technically achievable.

Personal context that fails all three tests is not property. It is leverage — held by the platform, used for the platform's benefit, inaccessible to you when the relationship ends.

How do current platforms score?

Platform             | Export                  | Inherit           | Delete with certainty
ChatGPT Memory       | No native API           | No provision      | Uncertain (training data)
Claude Web           | Export supported        | No provision      | Reset only (all or nothing)
Gemini               | Zero portability        | No provision      | Disconnect services only
Copilot Recall       | TPM-bound to device     | Not transferable  | Local deletion possible
Obsidian vault       | Fully portable (files)  | Files in estate   | Delete the file
PCM (files on disk)  | Inherent                | Files in estate   | Delete the file

Plain files pass all three tests by default. They are exportable because they are files. They are inheritable because they are estate assets. They are deletable because deletion means deletion.
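
The framework reduces to a simple conjunction. A minimal sketch in Python — the class, names, and boolean scores here are illustrative stand-ins, not part of any PCM tooling:

```python
from dataclasses import dataclass

@dataclass
class ContextStore:
    """A place where personal context lives, judged by the three tests."""
    name: str
    exportable: bool   # Test 1: structured, machine-readable export
    inheritable: bool  # Test 2: survives the owner and the subscription
    deletable: bool    # Test 3: deletion is verifiable and complete

    def is_property(self) -> bool:
        # Context is property only if it passes ALL three tests;
        # failing any one of them leaves it as platform leverage.
        return self.exportable and self.inheritable and self.deletable

# Illustrative scores, mirroring the table above
stores = [
    ContextStore("ChatGPT Memory", exportable=False, inheritable=False, deletable=False),
    ContextStore("Obsidian vault", exportable=True, inheritable=True, deletable=True),
    ContextStore("PCM (files on disk)", exportable=True, inheritable=True, deletable=True),
]
for s in stores:
    print(f"{s.name}: {'property' if s.is_property() else 'leverage'}")
```

The point of the conjunction is that the tests are not additive: two out of three is still leverage, because the failing test is the one the platform will use.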

4.2 What Context Property Means

If personal context is property, several things follow:

Portability is a right, not a feature. You should be able to move your context between AI providers the way you move your phone number between carriers. GDPR's Article 20 — the right to data portability — already gestures at this, requiring data controllers to provide personal data in a "structured, commonly used and machine-readable format." The question is whether AI-generated inferences about you (the model's understanding of you, not just the facts you provided) qualify as your personal data. Under any reasonable reading, they should.

Inheritance is natural. When you die, your context — your decisions, your domain knowledge, your understanding of how the world works — should pass to your heirs as naturally as your library does. A folder of Markdown files is inheritable with zero legal friction. Your executor copies the folder. No lawyers, no platform negotiations, no ticking clock before the account is deleted.

Sovereignty is the default. Your context lives on your machine, synced how you choose, shared when you decide. No platform has access unless you grant it. No model provider trains on your context unless you opt in. This is not a radical position. It is the default state of files on a computer. The radical position is what platforms have normalized: that your cognitive history is their asset.


5. The Precedent

5.1 Protocols Over Platforms

The most durable technologies are protocols, not platforms.

HTTP turned thirty-five this year. HTML turned thirty-three. Email — SMTP, IMAP — turned forty-four. RSS turned twenty-seven. These protocols are still the most reliable, most interoperable, most resilient ways to share information ever built.

The key property they share: no one owns them. Gmail can email Outlook, and Outlook can email Protonmail. Any browser can read any website. Any feed reader can subscribe to any RSS feed. The protocols are substrate. Applications are built on top. The substrate endures; the applications come and go.

The open web was built on plain, readable formats. HTML is human-readable. JSON is human-readable. RSS is XML, which is human-readable. The formats are simple, durable, and universally parseable.

Personal context should be built the same way. Markdown is human-readable, machine-parseable, supported by every editor on every platform. A Markdown file written in 2014 is still perfectly readable in 2026. Can you say the same about a Notion database? An Evernote export? A ChatGPT memory that was silently wiped?

The argument is simple: when people ask "what format should personal context be stored in?", the answer has been obvious for thirty years. The same formats the web runs on. The same formats every tool can read. The same formats that will still work in 2050. Plain files.

5.2 Git as Precedent

In 2005, Linus Torvalds built Git because BitKeeper — a proprietary version control system — revoked the Linux kernel project's free license. The most important open-source project in history was held hostage by a proprietary tool's business decision.

Torvalds' response was not to negotiate with BitKeeper. It was to build a system where no platform could ever hold their work hostage again.

Git's architecture is the precedent for sovereign context:

  • Every clone is a full copy. No single point of failure.
  • No central server required. Two developers can sync peer-to-peer.
  • History is immutable. Every change is tracked, attributed, and permanent.
  • GitHub is optional. GitHub is a convenience layer on top of Git. Git works without GitHub. GitHub cannot work without Git.

The mapping to personal context:

Git               | PCM
Repository        | Walnut (context unit)
Commit            | Save (immutable snapshot)
Branch            | Bundle (scoped workstream)
Clone             | Install on new device
Remote            | walnut.world (optional hub)
.gitignore        | Privacy controls
Plain text diffs  | Context changes are human-readable

Git proved that distributed ownership of source code was not only possible but superior to centralized control. Twenty-one years later, it is the universal standard. The same architecture applies to personal context. Your context should be a repository you own, with optional remotes for sharing, full history, and the ability to work offline.

Tim Berners-Lee saw this a decade ago and proposed Solid — personal data pods that you control, with applications requesting access. His vision was right. The implementation was too complex for mainstream adoption. PCM achieves the same goals — sovereignty, interoperability, user control — with the simplest possible implementation: files on your disk.

5.3 The Technology Already Exists

This is the part that should feel almost anticlimactic. The technology for sovereign personal context is not futuristic. It is not complex. It is not expensive. It already exists, it is already proven, and it has been working for decades.

  • Markdown for human-readable, machine-parseable content.
  • YAML for structured metadata.
  • JSON for state snapshots and projections.
  • Files on disk for storage that doesn't require a running service, an internet connection, or an active subscription.
  • iCloud / Google Drive / Dropbox for sync across devices (optional, user-chosen).
  • Git for version control and collaboration (optional).
  • MCP for connecting context to AI agents (97 million monthly SDK downloads).
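
None of this requires special tooling. A sketch, assuming a hypothetical context file with simple `---`-delimited frontmatter — `parse_context_file` is illustrative, not an API of any existing PCM system, and it only needs the standard library:

```python
import json

SAMPLE = """\
---
title: Q3 planning
created: 2026-01-12
tags: work, decisions
---
Chose Supplier B over Supplier A: better lead times, despite higher cost.
"""

def parse_context_file(text: str) -> dict:
    """Split '---'-delimited frontmatter from the Markdown body.
    Handles only flat 'key: value' metadata -- a sketch, not a YAML parser."""
    _, frontmatter, body = text.split("---\n", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return {"meta": meta, "body": body.strip()}

doc = parse_context_file(SAMPLE)
print(json.dumps(doc["meta"], indent=2))  # structured metadata, machine-readable
print(doc["body"])                        # human-readable content, untouched
```

Twenty lines of stdlib code can read the format; so can any editor, any agent, and any tool written in 2050.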

The most sophisticated context management system in existence — the one described in the companion paper "Context as Code" — is built from bash hooks and markdown files. 9,754 lines of configuration, zero trained weights, running daily for eight months. The substrate is files. Everything else is optional.


6. The Rights

6.1 Where the Law Is

GDPR's Article 20 establishes a right to data portability: you can receive your personal data in a "structured, commonly used and machine-readable format" and transmit it to another controller. California's CCPA/CPRA adds rights to deletion and rights to know what data companies hold about you.

These laws were written for the database era. They assume data is discrete and locatable — a row in a table, a file in a folder. AI context is diffuse and emergent — embedded in model weights, conversation histories, and inference patterns. When ChatGPT generates an inference about you ("Ben prefers direct communication and is building a context management system"), is that inference your data (you generated the conversations it's based on) or OpenAI's data (they built the model that made the inference)?

Under any reasonable reading of privacy law's intent, inferences derived from your personal data are your personal data. But "reasonable reading" and "current enforcement" are different things.

6.2 Where the Law Is Going

The EU AI Act (2024) adds AI-specific regulations. India's Digital Personal Data Protection Act (2023) builds domestic data infrastructure to reduce platform dependence. The indigenous data sovereignty movement — the CARE Principles for Indigenous Data Governance — argues that communities should control data about them.

The direction is clear: more sovereignty, more portability, more individual control. The legal frameworks are converging toward the proposition that your data — including your AI context — is yours.

6.3 The Practical Shortcut

But here is the thing: if your context lives in files on your disk, the legal question is moot.

You don't need GDPR to export a Markdown file. You don't need CCPA to delete a folder. You don't need a data portability regulation to copy your files to a new machine. You don't need a lawyer to include a directory in your will.
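
The whole argument fits in a few lines of standard-library code — a sketch of export and verifiable deletion against a stand-in folder (in real use, your actual context directory), with no API and no data-subject request:

```python
import shutil
import tempfile
from pathlib import Path

def demo_export_and_delete() -> tuple[bool, bool]:
    """Export = copy the folder; delete = remove the folder.
    Returns (export_matches, deletion_verified) for a stand-in context dir."""
    root = Path(tempfile.mkdtemp())
    try:
        context = root / "context"
        context.mkdir()
        (context / "decisions.md").write_text("# Decisions\nChose plain files.\n")

        # "Export": copying the folder IS the export -- no portability request
        backup = root / "export-for-new-machine"
        shutil.copytree(context, backup)
        export_matches = (
            (backup / "decisions.md").read_text() == "# Decisions\nChose plain files.\n"
        )

        # "Delete with certainty": removing the folder IS the deletion
        shutil.rmtree(context)
        deletion_verified = not context.exists() and backup.exists()
        return export_matches, deletion_verified
    finally:
        shutil.rmtree(root)  # clean up the demo directory

print(demo_export_and_delete())  # → (True, True)
```

Compare this to a GDPR Article 20 request: a statutory right, a one-month response window, and a format of the controller's choosing — versus `copytree` and `rmtree`, which complete in milliseconds and answer to no one.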

The property rights argument is important for the platforms. They need laws to force them to treat your context as yours. But for individuals who adopt PCM, the legal fight is already won — because there is no platform to fight. Your context is files. Files are property. The end.


7. The Exit

Balaji Srinivasan draws a distinction between voice and exit. Voice is trying to change a system from within — petitioning, protesting, negotiating. Exit is leaving the system entirely and building an alternative.

Voice says: "OpenAI, please make ChatGPT memories exportable." Exit says: "I'll keep my context in files. Any AI can read them."

Voice says: "Google, please don't auto-enroll me in Personal Intelligence." Exit says: "My context isn't in Google. It's on my machine."

Voice says: "Platforms, please treat our data as our property." Exit says: "It's already our property. It's a folder."

PCM is an exit technology. It does not ask platforms to be better. It makes platforms optional. When your context lives in files you own, the platform becomes a service provider — one you can replace — rather than a landlord holding your cognitive history hostage.

This changes the power dynamic entirely. A platform that knows you can leave has to compete on quality. A platform that knows you can't leave competes on lock-in. The history of technology is the history of reducing lock-in: number portability for phones, open banking for finance, data portability for personal information. Context portability is next.


8. The Invitation

This paper is not a manifesto. It is not a call to burn down platforms or boycott AI tools. Platforms will continue to build valuable AI features. Some people will prefer the convenience of platform-managed context. That's fine.

This paper is an invitation to consider a different default.

The default today: your context is scattered across platforms, in formats you can't control, subject to terms you didn't negotiate, at risk of wipes you can't prevent. You hope the platforms stay good. You have no plan for when they don't.

The alternative default: your context lives on your machine, in plain files, in a structure that any AI can read. You share what you choose. You keep what you keep. When you switch models, your context travels. When you switch tools, your context survives. When you stop paying a subscription, your context remains. When you die, your context passes to your family like any other possession.

The technology for this exists today. It is not complex. It is not expensive. It is not theoretical. It is files.

Xerox proved the GUI. Apple owned it. Netscape proved the browser. The open web won. Platforms proved AI context. Ownership is next.


References

Enshittification & Platform Theory

  • Doctorow, C. (2023). "The Enshittification of TikTok." Pluralistic / Wired.
  • Thompson, B. (2015-2026). "Aggregation Theory." Stratechery.
  • Barksdale, J. (quoted widely). "There's only two ways to make money: bundling and unbundling."

Data Sovereignty & Privacy Law

  • GDPR Article 20 — Right to Data Portability.
  • GDPR Article 17 — Right to Erasure.
  • CCPA/CPRA — California Consumer Privacy Act.
  • EU AI Act (2024).
  • Nissenbaum, H. "Contextual Integrity." As framework for information flow norms.
  • Weyl, G. & Lanier, J. "Data as Labor." Economic argument for data ownership.
  • CARE Principles for Indigenous Data Governance.

Open Web & Protocols

  • Berners-Lee, T. Solid Project. solidproject.org.
  • Torvalds, L. (2005). Git. Distributed version control.
  • Raymond, E. (1997). "The Cathedral and the Bazaar."

Thesis Structure & Style

  • Dalio, R. (2017). Principles. Simon & Schuster.
  • Srinivasan, B. (2022). The Network State. 1729.
  • Graham, P. "Do Things That Don't Scale." paulgraham.com.

AI Memory & Context Systems

  • OpenAI (2024). "Memory and New Controls for ChatGPT."
  • Anthropic (2026). "Claude Memory." Claude Help Center.
  • Google (2026). "Personal Intelligence." Google AI Blog.
  • Willison, S. (2025). "I really don't like ChatGPT's new memory dossier."

Market Data

  • DemandSage (2026). "ChatGPT Statistics."
  • DataIntelo (2024). "PKM Software Market Report."
  • DataHub (2026). "State of Context Management Report."

Companion Papers

  • Flint, B. (2026). "Personal Context Management: Defining the Category." Lock-in Lab.
  • Flint, B. (2026). "Context as Code: Building an Agent Runtime from Plain Files." Lock-in Lab.

Cite this work

Flint, B. (2026). "Context as Property: The Ownership Thesis." Lock-in Lab Research.