
Personal Context Management: Defining the Category

Ben Flint
Lock-in Lab
March 2026
Abstract

We introduce Personal Context Management (PCM) as a distinct category of software and practice, separate from Personal Knowledge Management (PKM). Where PKM systems organize knowledge for human retrieval, PCM systems provision context for agent action. We survey 27 years of PKM — from Frand & Hixon's 1999 coining through PARA, Zettelkasten, and the current $2.45 billion tool landscape — and identify a structural gap: PKM captures the what but loses the why. The contextual wrapper — who, when, why, what-for — that makes knowledge actionable decays on capture and is absent from every major PKM tool.

We formalize PCM as the practice of managing this contextual wrapper: the decisions, relationships, temporal state, and domain knowledge that enable AI agents to continue work across sessions without re-explanation. We present a reference implementation (the Alive Context System) and argue that PCM is not an evolution of PKM but a successor category — one designed for the agent era, where the primary consumer of your knowledge is no longer you, but your AI.

1. The State of Things

1.1 Twenty-Seven Years of Filing

In 1999, Dr. Jason Frand and Carol Hixon at UCLA's Anderson School of Management coined the term "Personal Knowledge Management" in a working paper designed for MBA students. The question was simple: how do individuals manage their own knowledge growth? The answer they proposed — a seven-skill framework spanning retrieval, evaluation, organization, analysis, presentation, security, and collaboration — launched a field that would eventually produce a $2.45 billion software market.

Twenty-seven years later, the question has changed. The answer hasn't.

Every major PKM tool built since Frand & Hixon — from Evernote's web clippers to Obsidian's graph view — answers the same fundamental question their 1999 paper posed: where do I put this so I can find it later? The tools got better. The question stayed the same.

Vannevar Bush saw it coming in 1945. His Memex — a hypothetical machine for storing all books, records, and communications — was conceived as "an enlarged intimate supplement to his memory." Bush understood that the human mind "operates by association" and envisioned trails of connected thought. Eighty-one years later, we have exactly the trails he described. Obsidian calls them backlinks. Roam calls them bidirectional links. Logseq calls them block references. They all do the same thing Bush imagined: connect information by association.

What Bush couldn't anticipate — what none of them anticipated — was that the primary consumer of your organized knowledge would become non-human.

1.2 The Tools We Built

The PKM tool landscape in 2026 is vast, sophisticated, and fundamentally homogeneous.

Obsidian (1.5 million users, 22% year-over-year growth) positions itself as a local-first "second brain" built on plain Markdown files. Privacy-first, no telemetry, 1,000+ community plugins. Its graph view visualizes connections between notes. It is, at its core, a filing system with a beautiful map of the filing cabinets.

Notion offers an all-in-one workspace combining notes, databases, kanban boards, and wikis. It is team-first, cloud-dependent, and optimized for collaboration. Its users frequently report "reinventing pages because they cannot remember where something was stored."

Roam Research pioneered bidirectional linking and "networked thought" before declining from its 2020 viral peak. Many users migrated to cheaper alternatives. Its pricing remains among the highest for PKM tools. Its performance degrades with large databases.

Tana ($25 million in funding, 160,000+ waitlist) is the most AI-native of the current generation, with "supertags" modeled on object-oriented programming that transform unstructured information into structured data. It is cloud-dependent and still maturing.

Logseq, Capacities, Anytype, Mem, Reflect — each brings a variation on the theme. Local vs cloud. Blocks vs pages. AI-assisted vs manual. Open-source vs proprietary.

What they share is more important than what differentiates them. Every one of these tools is a storage system optimized for human retrieval. They help you file things and find things. They are, in Bush's terms, elaborate Memexes — supplements to your memory.

None of them ask: what does your agent need to know right now?

1.3 The Frameworks We Followed

The tools were shaped by frameworks, and the frameworks share the same assumption.

PARA (Tiago Forte, ~2017) organizes files into Projects, Areas, Resources, and Archives — a storage system based on "actionability." The question it answers: which folder does this note go in?

Zettelkasten (Niklas Luhmann, 1950s) produces atomic notes with numbered relationships. Luhmann described his system as a "communication partner" and attributed his prolific output — 70 books, 400+ scholarly articles — to the 90,000 cards he accumulated. The question it answers: how do I connect this idea to other ideas?

GTD (David Allen, 2001) separates actionable from non-actionable information. Allen's insight — "your mind is for having ideas, not holding them" — launched an industry. But he acknowledged that "a good general-reference file" remained "one of the biggest bottlenecks" in the system. The question GTD answers: what do I do next?

Evergreen Notes (Andy Matuschak) argues that notes should be atomic, concept-oriented, densely linked, and written for yourself. The question it answers: how do I think better?

LYT/ACE (Nick Milo) emphasizes linking over categorizing — "creating a web of knowledge where notes are connected based on context and relevance." The question: how do I find the connection?

Each framework is valuable. Each framework is a filing strategy. Each framework answers a variation of: how does a human organize and retrieve knowledge?

None of them answer: how does an agent receive the context it needs to continue your work?

That's a different question. It requires a different category.

1.4 A $2.45 Billion Filing Cabinet

The PKM software market reached $2.45 billion in 2024 and is projected to grow at 15.8% CAGR to $9.12 billion by 2033. It is growing fast. But growing toward what?

Consider the evidence: McKinsey reports that knowledge workers waste 9.3 hours per week searching for information — nearly a quarter of their working time lost to retrieval. Evernote — the first-wave PKM tool that defined the category — saw downloads collapse 82%, from 9.6 million in 2017 to 1.7 million in 2023. Roam Research went from viral sensation to "legacy powerhouse" in under three years. The market grows while individual systems fail.

The churn is the market. People buy into the promise of organized knowledge, build elaborate systems, watch them decay, blame themselves, switch tools, and start over. The tool-hopping cycle — try Tool A, find it inadequate, migrate to Tool B, lose context in the migration, try Tool C, start from zero — costs 2–3 months of productivity per iteration.

A $2.45 billion market built on a foundation of recurring failure is not a healthy market. It's a signal that the category itself is wrong.

2. The Gap

2.1 Knowledge vs Context

Michael Polanyi drew a line in 1958 that the PKM industry has spent sixty-seven years ignoring.

In Personal Knowledge and later The Tacit Dimension (1966), Polanyi distinguished between knowledge-that — explicit facts and propositions — and knowledge-how — the tacit, embodied, contextual understanding that makes facts actionable. "We can know more than we can tell," he wrote. The knowledge that matters most is the knowledge we cannot easily articulate.

PKM tools capture knowledge-that. Your note says what happened. Your bookmark saves the article. Your highlight preserves the paragraph. These are facts — explicit, articulable, filing-cabinet-ready.

Context is everything else. Why you captured that article — because you were evaluating vendors for a project that was behind schedule and your co-founder was nervous about the budget. Who was involved — the three people in the meeting who disagreed about the approach, and the one who changed their mind. When it mattered — not just the date, but the state of the project, the phase of the work, the emotional temperature of the team. What it served — the decision it informed, the outcome it shaped, the next step it unlocked.

Context is the wrapper that makes knowledge actionable. Strip the wrapper and you have inert data — accurate, well-organized, and useless.

PKM systems strip the wrapper.

2.2 Context Collapse in Knowledge Systems

"Context collapse" was coined to describe how social media platforms flatten multiple audiences into a single undifferentiated context — the near impossibility of managing different identities in a space where everyone sees everything.

Knowledge systems suffer an analogous collapse. When you capture a note, you capture the what. The why evaporates. The who fades. The when becomes a datestamp with no surrounding state. The note remains. The meaning dies.

This is not a failure of discipline or methodology. It is a structural property of systems designed for storage and retrieval. If your system's unit of organization is a note — a discrete artifact of captured knowledge — then the contextual wrapper is, by definition, not part of the system. The note is the content. The context is invisible, uncaptured, and decaying.

Within one hour, 50% of learned information is forgotten. Within 24 hours, 70%. Within one month, up to 90% is gone without reinforcement. This is Ebbinghaus's forgetting curve, replicated by Murre and Dros in 2015. It applies not just to the knowledge itself but to the context around the knowledge — why you saved it, what it was for, where it fit.

The note survives. The context that made the note meaningful does not.

2.3 The Note Graveyard

The evidence is not theoretical. It is personal, widespread, and emotionally devastating.

"I wrote constantly. I read almost never." A developer who deleted — not archived, not reorganized, actually deleted — their entire Zettelkasten described it as "write-only memory." The notes accumulated. Retrieval never happened. "The act of writing the note was the value. The note itself was theater."

"You don't have a second brain. You have a primary brain which thinks, processes, and a filing cabinet where curiosity goes to die." A psychology student who spent 200+ hours building a second brain in Obsidian called it "a graveyard where good ideas go to be embalmed." He was spending half his study time on formatting and reorganization rather than synthesis. "I felt like a scammer scamming myself."

"Every attempt, over decades, from pen and paper to Obsidian to Notion, has ended in a mess that becomes unusable." A user with ADHD described having "hundreds or even thousands of transcriptions that are effectively meaningless." The volume accumulated. The value didn't. "Sometimes I feel like all this effort to manually do what my brain cannot is futile."

The pattern recurs across forums, blog posts, and confessional essays. One user documented 3,871 notes and 30,555 unprocessed files, deleted more than half "without hesitation," and later missed only 1% of them. Obsidian Forum threads discuss "pornographic productivity" — the phenomenon where building and maintaining the system feels like doing the work, replacing the actual work with its aesthetic simulation.

Even the PKM thought leaders acknowledge the failure mode. Andy Matuschak, the most intellectually rigorous voice in the space, observes that "people who write extensively about note-writing rarely have a serious context of use." The people selling PKM systems use them primarily to produce more content about PKM systems. "Luhmann, by contrast, barely wrote about his Zettelkasten: he focused on his prolific research output."

This is not a tool problem. Better search, better linking, better AI tagging — none of these address the structural issue. The system captures artifacts of knowledge. The context that makes those artifacts meaningful is not captured, not maintained, and not available for retrieval.

2.4 The Half-Life Problem

Knowledge decays. The human mind forgets more than half of what it encounters within two days. But PKM systems have no mechanism for decay. They are permanence machines in a world that requires freshness.

The more notes you keep, the harder it becomes to find and use what matters. "200 notes on a topic become clutter, and there's no way to distill it." Research describes this as the fundamental paradox of knowledge systems: "People overestimate the future value of what they save, leading to bloated systems."

Technical knowledge has a half-life of 6–18 months. Business contexts shift quarterly. Relationships evolve. Decisions are superseded. A note from six months ago that says "we're going with vendor X" is not just outdated — it is actively misleading if vendor X was dropped three months later and you don't have the context of why.

PKM systems have no mechanism for staleness, no mechanism for pruning, no mechanism for distinguishing what's current from what's historical. They treat all knowledge as equally permanent. This is a feature, not a bug — in a filing system, permanence is the point. But context is temporal. It moves. It expires. It needs to know what time it is.

3. The Shift

3.1 Filing vs Provisioning

The category shift is a single sentence:

PKM is filing for humans to retrieve later. PCM is provisioning for agents to act now.

This is not a feature upgrade. This is a paradigm change. The primary consumer of your organized knowledge is no longer you sitting at a desk, searching your notes, trying to remember where you put something. It is an AI agent that needs to understand your world in order to act on your behalf.

The shift from retrieval to provision changes everything about how context should be organized.

In PKM, you do the work of retrieval. You search. You browse. You follow links. You remember (or don't remember) where you filed something. The system optimizes for your ability to find things.

In PCM, the system provisions context to the agent. The agent doesn't search your notes. It receives the scoped context it needs — identity, history, active work, domain knowledge — injected into its context window before it speaks. The system optimizes for the agent's ability to continue your work.

This is the difference between a library and a briefing. A library organizes all the books and trusts you to find the right one. A briefing gives you exactly what you need for the meeting you're about to walk into.

PKM builds libraries. PCM delivers briefings.
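The library-vs-briefing distinction can be sketched in a few lines. This is a minimal illustration, not the reference implementation; the names (`ContextUnit`, `provision`) and fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ContextUnit:
    scope: str       # e.g. "venture:acme", "person:dana" — which silo this belongs to
    summary: str     # current-state projection, not the full history
    updated: str     # ISO date; provisioning cares what time it is

def provision(units: list[ContextUnit], task_scopes: set[str], budget: int) -> str:
    """Assemble a briefing for one task: scoped, current, and bounded.

    Retrieval (PKM) would hand the agent a search box over everything;
    provisioning hands it only the units in scope, newest first,
    truncated to a character budget (a stand-in for the token budget).
    """
    relevant = sorted(
        (u for u in units if u.scope in task_scopes),
        key=lambda u: u.updated,
        reverse=True,
    )
    briefing, used = [], 0
    for u in relevant:
        if used + len(u.summary) > budget:
            break  # respect the attention budget; stale tail is dropped first
        briefing.append(f"[{u.scope} @ {u.updated}] {u.summary}")
        used += len(u.summary)
    return "\n".join(briefing)
```

The design choice worth noting: scope filtering happens before anything is loaded into the briefing, which is the inversion of search-then-read.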

3.2 Why This Isn't Just "PKM + AI"

The temptation is to frame PCM as "PKM with AI features bolted on." Tana adds AI tagging. Obsidian gets MCP bridges. Mem auto-organizes. Reflect's AI "understands the entire note graph."

But bolting AI onto a storage system doesn't change the paradigm. The agent still inherits the flat structure, the missing context, the note graveyard. You've given a robot the keys to a filing cabinet.

As one researcher put it: "Unstructured notes are almost as useless to AI as having no notes at all." AI agents need consistent types with defined properties, logical organization, structured metadata, interconnected links, and machine-readable format. They need context provisioned to them — scoped, current, and relevant to the work at hand.
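What "consistent types with defined properties" might look like in practice can be sketched as front matter on a context unit. All field names below are illustrative, not part of any tool's schema:

```yaml
# Hypothetical front matter for one context unit.
# Every field name here is an assumption for illustration only.
scope: venture/acme          # which silo this belongs to
kind: decision               # a consistent type an agent can filter on
status: superseded           # temporal state, not just a timestamp
who: [ben, dana]             # the people the decision involved
why: "Budget pressure after the Q3 slip"
links:
  - venture/acme/vendor-eval # interconnected, machine-followable
updated: 2026-01-14
```

A human skims this; an agent parses it. An unstructured paragraph saying the same thing offers the agent neither the type nor the state.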

Sebastien Dubois presents 8 Levels of AI Context Management, from generic AI (Level 1) to AI-ready knowledge systems (Level 8). Most users are stuck at Levels 2–3. The gap between "I have notes" and "my agent has context" is structural, not incremental.

You can't fix a provision problem with a better retrieval tool.

4. Personal Context Management: The Definition

4.1 What PCM Is

Personal Context Management (PCM) is the practice of capturing, structuring, and provisioning the contextual wrapper around personal knowledge — the decisions, relationships, temporal state, domain insights, and intentional metadata — such that AI agents can continue work across sessions without loss of continuity.

A PCM system captures the contextual wrapper — decisions, relationships, temporal state, domain insights — at the moment of work; structures it into scoped, linked units; keeps it current as circumstances change; and provisions it to agents on demand.

4.2 What PCM Is Not

PCM is not a note-taking app. It is not a "second brain" — your brain is no longer the primary consumer. It is not PKM with AI features. It is not a task manager, a search engine, or a chatbot memory system. The memory your AI platform provides — fragments, disconnected facts, no structure, no history, no portability — is a platform's approximation of what PCM provides natively.

4.3 PCM vs PKM vs PIM vs KM

                 PIM                PKM                   KM               PCM
Scope            Personal info      Personal knowledge    Organizational   Personal context
Consumer         Human              Human                 Team             Agent
Operation        Store & retrieve   Organize & connect    Share & codify   Provision & compound
Unit             File               Note                  Document         Context unit (walnut)
Failure mode     Lost files         Note graveyard        Siloed teams     Context collapse
Optimizes for    Finding            Connecting            Distributing     Continuing
Question         Where is it?       How does it relate?   Who needs it?    What does my agent need?

4.4 The Ancestry

PCM's intellectual lineage traces a clear arc:

Year    Thinker             Contribution                                   Question Answered
1945    Vannevar Bush       Memex — associative retrieval                  How do I supplement my memory?
1950s   Niklas Luhmann      Zettelkasten — communication partner           How do I think in partnership with a system?
1958    Michael Polanyi     Tacit Knowledge — "we know more than we can tell"   What knowledge can't be captured?
1962    Doug Engelbart      Augmenting Human Intellect                     How do computers extend cognition?
1999    Frand & Hixon       Coined PKM                                     How do individuals manage knowledge?
2001    David Allen         GTD — externalize the actionable               What do I do next?
~2017   Tiago Forte         PARA/CODE — organize by actionability          Which folder does this go in?
~2018   Andy Matuschak      Evergreen Notes — atomic, linked, personal     How do I think better?
2020    Nick Milo           LYT/ACE — link, don't categorize               How do I find the connection?
2025    Anthropic           Context Engineering — enterprise discipline    How do we curate tokens for agents?
2025    Sebastien Dubois    Agentic Knowledge Management                   How does AI interact with PKM?
2026    Ben Flint           Personal Context Management                    How do individuals manage their context?

The through-line: each step moved closer to the insight that context, not knowledge, is what needs to be managed. PCM is where the line was always heading.

5. Context Engineering: The Enterprise Cousin

5.1 What Anthropic Defined

In September 2025, Anthropic's Applied AI team published "Effective Context Engineering for AI Agents," formalizing a discipline that had been emerging across the industry. Their definition:

"Context engineering: the set of strategies for curating and maintaining the optimal set of tokens during LLM inference, including all the other information that may land there outside of the prompts."

The distinction from prompt engineering is significant. Prompt engineering is writing good instructions. Context engineering is managing the entire information state. As the paper puts it: "Building with language models is becoming less about finding the right words and phrases for your prompts, and more about answering the broader question of 'what configuration of context is most likely to generate our model's desired behavior?'"

The paper introduces key concepts: the attention budget (every token depletes the model's ability to attend to other tokens), context rot (recall accuracy decreases as context length increases), and the four strategy pillars of system prompts, tools, examples, and message history. It documents techniques for long-horizon work: compaction, structured note-taking, and sub-agent architectures.

Andrej Karpathy amplified the framing: "Context engineering is the delicate art and science of filling the context window with just the right information for the next step." Tobi Lutke's endorsement reached 1.9 million views. Gartner identified context engineering as a top emerging technology skill for 2026 and predicts that by 2028, 80% of AI tools will include context engineering features.

The term stuck. The discipline is real. The question is: who does it apply to?

5.2 The Enterprise Claim

DataHub's co-founder Shirshanka Das drew a further distinction between context engineering (single-application) and context management (enterprise-wide):

"Context engineering solves the problem within a single application — it's the techniques and tools one team uses to fill their agent's context window effectively. It ends up being artisanal, bespoke, and isn't well set up to scale across an organization."

"Context management is what gives it those superpowers by solving this across your entire enterprise. It's systematic, governed, and built for scale."

DataHub positions context management as a $9 billion platform category. Their CONTEXT 2025 conference drew 1,500+ data leaders. Apple demonstrated agentic workflows. Netflix presented metadata infrastructure. The enterprise layer is being claimed.

5.3 The Personal Gap

The context stack now has four layers:

Prompt Engineering        → technique    (deprecated)
Context Engineering       → discipline   (Anthropic, Karpathy, Gartner)
Context Management        → enterprise   (DataHub, $9B market)
Personal Context Mgmt     → individual   (UNCLAIMED)

Anthropic tells developers how to curate tokens for agents. DataHub tells enterprises how to govern context at scale. But who tells individuals how to structure their personal context so that any AI agent they interact with can serve them effectively?

The answer, today, is nobody.

Anthropic's own paper acknowledges the gap without naming it: "This approach mirrors human cognition: we generally don't memorize entire corpuses of information, but rather introduce external organization and indexing systems like file systems, inboxes, and bookmarks to retrieve relevant information on demand." These "external organization and indexing systems" are precisely what PCM provides — but for personal context, not enterprise data.

DataHub's analogy is equally revealing: "Context engineering is like each development team writing their own authentication system. Context management is like implementing enterprise SSO." They solve for enterprises. Individuals don't get SSO. Individuals get a flat MEMORY.md file.

Dubois sees it most clearly: "Your AI is only as good as the context you give it. Most people give it almost nothing." He proposes Agentic Knowledge Management as the solution — but frames it as an evolution of PKM, not a new category. The tools stay the same. The paradigm stays the same. AI is added on top.

PCM is the missing layer. Not an evolution of PKM. Not enterprise context management scaled down. A distinct category for the distinct problem of managing personal context in the agent era.

6. The Context Crisis

The argument for PCM is not theoretical. Every major platform, every developer tool, and every agent framework is currently failing at personal context. The evidence is systemic.

6.1 Platform Memory: Automatic and Dumb

The three largest AI platforms have each shipped memory features. All three follow the same pattern: the model decides what to save, the user has minimal control, and the result is non-portable.

ChatGPT Memory stores a flat list of short statements on OpenAI's servers. The model decides what to save. Users cannot control what the "Reference Chat History" feature surfaces — it is a black box.

MIT Media Lab's 2025 research on cognitive offloading found that 83.3% of users who relied on ChatGPT for writing tasks couldn't recall a single sentence from their own work — compared to just 11.1% in control groups. Platform-dependent AI memory doesn't just fail to remember for you; it makes you unable to remember for yourself.

Meanwhile, two-thirds of users who saw "Memory updated" later found their memories missing or corrupted. In February and November 2025, catastrophic memory wipes erased users' accumulated context without warning. Creative writers lost "entire fictional universes" with 40+ character backstories. Over 300 complaint threads accumulated on r/ChatGPTPro. Users described the system as "far gone in dementia."

There is no native API for exporting memories. 900 million weekly active users, 70% with memory enabled, accumulating context they cannot take with them.

Claude's web memory (free tier, March 2026) synthesizes a categorized summary updated every 24 hours. More structured than ChatGPT's flat list — organized by professional domain — but still a single synthesized document. To its credit, Anthropic offers export and import, including from ChatGPT. But the memory remains server-side and model-synthesized.

Google's Personal Intelligence (January 2026) connects Gemini to Gmail, Photos, YouTube, and Search to reason across all your Google data simultaneously. The intelligence is real — and completely non-portable. There is no mechanism to export what Gemini "knows" about you. When Google auto-enabled the feature without asking in October 2025, a class-action lawsuit followed. Leave Google, lose everything.

Microsoft Copilot Recall captures screenshots of your active window every few seconds, runs OCR, and indexes everything you've seen or typed. It is surveillance-as-search, encrypted to a single device's TPM chip, non-portable by design. It captures self-destructing messages from Signal and WhatsApp. It records Zoom calls. It is not a memory system. It is a panopticon with a search bar.

The pattern across all four: context is captured automatically, stored on the platform's terms, structured minimally, and portable not at all. The model decides what matters. The user hopes for the best.

6.2 Developer Context: Manual and Flat

For developers working with AI coding tools, context management is manual practice built on flat files.

MEMORY.md — Claude Code's auto-memory — persists context in local Markdown. The model decides what to save. The first 200 lines are loaded at session start; everything beyond is invisible. There is no scoping, no linking, no temporal state. This is PKM thinking applied to agent context: one file, one place, one undifferentiated list. It works at 10 items. It fails at 200.
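The failure mode is easy to reproduce. A minimal sketch, assuming a flat memory file loaded with a hard line cap like the 200-line load described above (the function name and cap are illustrative, not any tool's documented behavior):

```python
def load_flat_memory(path: str, line_limit: int = 200) -> list[str]:
    """Mimic a flat memory file loaded with a hard line cap:
    everything past the cap exists on disk but never reaches the agent."""
    with open(path, encoding="utf-8") as f:
        # zip stops at the shorter iterable, so at most line_limit lines are read
        return [line.rstrip("\n") for _, line in zip(range(line_limit), f)]
```

At 10 entries the cap is invisible; at 250 entries, a fifth of the agent's "memory" silently never loads — and nothing in the file format tells you which fifth.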

CLAUDE.md files offer the most structured approach currently available — a five-level hierarchy from organization policy down to directory-scoped rules, with glob-pattern matching. This is genuine scoping. But CLAUDE.md is project configuration, not personal context. It tells the agent how to behave here, not who you are everywhere.

AGENTS.md (60,000+ repositories, 20+ supporting tools) standardizes project-level agent instructions. But a March 2026 ETH Zurich paper found that AGENTS.md files may actually hinder agent performance. Cursor rules, Windsurf rules, and other tool-specific configurations follow the same pattern: static files that shape per-turn behavior but maintain no state, no history, no compounding.

The tools are getting better at reading context. Nobody is getting better at structuring it.

6.3 RAG: The Default That Isn't Enough

Retrieval Augmented Generation claimed the "context engine" slot by default. The chunk-embed-retrieve pipeline is now standard infrastructure for giving agents access to documents. Four out of five organizations increased AI investment in 2026, and most of that investment flows through RAG.

But RAG answers "what's in the documents?" not "what does my agent need right now?" It has no mechanism for personal context, preferences, relationships, or project state. It has no temporal awareness. Google Research found RAG paradoxically increases hallucination confidence — the additional context makes the model more sure of wrong answers.

The industry's own verdict: "Cannot live without RAG, yet remain unsatisfied." Gartner predicts 60% of AI projects abandoned without AI-ready data practices. RAG is necessary infrastructure. It is not a context management solution.

6.4 The Emerging Alternatives

OpenClaw (247,000+ GitHub stars, 2 million monthly active users) has the architecture right: a pluggable context engine slot with four lifecycle hooks — Ingest, Assemble, Compact, After Turn. Developers can swap in custom context strategies. But only RAG-style engines currently fill the slot. OpenClaw built the socket. Nobody has built the plug for personal context.

Hermes Agent (Nous Research, February 2026) ships persistent memory in local Markdown — the right instinct. The implementation: MEMORY.md gets 2,200 characters. USER.md gets 1,375 characters. Total: 3,575 characters. Two paragraphs. For everything an agent knows about you and your work. Memory updates aren't even visible in the current session — only on restart.

6.5 The Evidence

Every approach demonstrates the same structural failure. Agent context today is either automatic but opaque (platform memory), manual and flat (MEMORY.md, CLAUDE.md, AGENTS.md), document retrieval with no personal state (RAG), or an open engine slot with nothing personal to plug into it (OpenClaw, Hermes).

None of these are PCM. None provision scoped, structured, temporal personal context to agents. The slot is open. The category is empty.

6.6 Siloed but Linked

The PCM architecture principle is the opposite of flat: context should be scoped (siloed) but connected (linked). Not merged. Not flattened. Not tagged-and-searched.

In the reference implementation (the Alive Context System), a walnut holds context for a single domain — a venture, an experiment, a person, a life area. Each walnut has a kernel of three source files: identity (what it is), history (where it's been), and knowledge (what's known). A generated projection — a snapshot of current state — is rebuilt on every save.

Bundles of work grow inside walnuts. Each bundle is a self-contained unit with its own manifest, tasks, observations, and source material. Bundles can contain sub-bundles. The tree can nest without limit.

The key property: each node in the tree has its own manifest. You can scan 50 bundles by reading 50 manifests — 50 lines of YAML — without loading any content. You can prune a bundle without touching its siblings. You can graph the connections between walnuts without loading them all into memory.
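The manifest-scan property can be sketched in a few lines — the file name (`manifest.yaml`) and layout here follow the description above but are illustrative, not the actual plugin's schema:

```python
import os

def scan_manifests(root: str) -> list[str]:
    """Walk a bundle tree and read ONLY the manifest at each node.
    Content files (identity, history, knowledge) are never opened,
    so a 50-bundle tree costs 50 small reads, not 50 full loads."""
    manifests = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if "manifest.yaml" in filenames:
            with open(os.path.join(dirpath, "manifest.yaml"), encoding="utf-8") as f:
                manifests.append(f.readline().strip())  # first line, e.g. "name: vendor-eval"
    return manifests
```

The cost of an overview scales with the number of nodes, not with the volume of content inside them — which is exactly the property a flat file cannot have.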

Compare this to MEMORY.md. To understand what's in MEMORY.md, you read all of MEMORY.md. To prune MEMORY.md, you read all of MEMORY.md and decide what to remove. To find connections in MEMORY.md, you… read all of MEMORY.md.

The file system is the methodology. Scoping is the feature. The tree structure prevents the entropy that flat files guarantee.

6.7 The Three Files That Replace Everything

A walnut's kernel contains three source files: identity (what it is), history (where it's been), and knowledge (what's known).

Plus one generated projection: a snapshot of current state, rebuilt on every save.

Four artifacts. That is the entire state of any context unit.

Compare to PARA's "figure out which folder" or Zettelkasten's "number your cards" or GTD's "is this actionable?" Context management is simpler than knowledge management because it is scoped by design. The scope is the structure. The structure is the methodology.
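On disk, a walnut might look like the following sketch. All file and directory names are reconstructed from the descriptions above for illustration; they are not the reference implementation's exact layout:

```text
acme-venture/                # one walnut: one domain
  identity.md                # what it is
  history.md                 # where it's been
  knowledge.md               # what's known
  projection.md              # GENERATED: current-state snapshot, rebuilt on save
  bundles/
    vendor-eval/             # one self-contained unit of work
      manifest.yaml          # one-glance index: name, status, updated
      tasks.md
      observations.md
      source/
      deep-dive/             # sub-bundle: the tree nests without limit
        manifest.yaml
```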

7. The Reference Implementation

The Alive Context System is the first PCM system. It is a Claude Code plugin — 15 skills, 14 hooks, plain Markdown files on the user's machine — with over 100 users in its first two weeks of public release. Adoption has spread organically across the Claude Code, Hermes, and OpenClaw ecosystems, with 257 unique cloners confirmed via the GitHub API and international reach across developer communities.

It is not the only possible PCM. It is the proof that PCM works — that context can be structured to compound across sessions, that agents can be provisioned rather than searched, that the contextual wrapper can be preserved rather than lost.

A detailed technical analysis of the implementation — including the "Context as Code" pattern of building an agent runtime from hooks and plain files — is the subject of a companion paper.

8. Implications

8.1 For Tool Builders

Every PKM tool is one paradigm shift away from becoming a PCM tool. The features they need: scoped context units (not flat note vaults), agent provisioning (not just human search), temporal state (not just timestamps), manifest-driven indexing (not just tags), and session continuity (not just storage).

Obsidian is closest — local-first, plain files, extensible. An Obsidian plugin that adds walnut-style scoping and agent provisioning would transform it from the best PKM tool into a PCM tool.

8.2 For AI Agent Developers

Your agent is only as good as the context provisioned to it. MCP gives you connectivity — 97 million monthly SDK downloads. PCM gives you the context worth connecting. Without structured personal context, MCP is a highway with no cars on it.

8.3 For Individuals

You own your context. Not the platform. Not the model provider. You.

The memory your AI platform offers — ChatGPT's fragments, Gemini's Personal Intelligence, Copilot's recall — is platform-locked, non-portable, and structurally incapable of compounding. It is, as the whitepaper describes, "a shitty photocopy of your memory." They keep the original. You rent the copy.

PCM means your context lives on your machine, in your files, in a structure you control. When you switch models, your context travels with you. When you switch tools, your context survives. When you stop paying for a subscription, your context remains.

This is not a feature. It is a property right. The argument for context sovereignty — that personal context is property, not platform data — is the subject of a companion paper.

9. Conclusion

Frand and Hixon asked "how do individuals manage their knowledge?" in 1999. Twenty-seven years later, the answer is: badly. The tools are sophisticated. The frameworks are elegant. The market is $2.45 billion. And knowledge workers still waste a quarter of their week searching for information their systems were supposed to organize.

The problem is not the tools. The problem is the question.

The successor question is: how do individuals manage their context?

Context is not knowledge. Knowledge is the what. Context is the why, the who, the when, and the what-for. Knowledge can be filed. Context must be provisioned. Knowledge is permanent. Context is temporal. Knowledge is retrieved by humans. Context is consumed by agents.

Personal Context Management is the category that answers the successor question. It is not an evolution of PKM — it is a replacement for it, as surely as PKM replaced general-purpose filing. The agent era does not need better filing cabinets. It needs context infrastructure.

The age of the second brain is over. The age of the living context has begun.

References

Academic & Foundational

Frameworks

Context Engineering

PKM Critique

Platform Memory & Context Systems

Market Data

Appendix A: PKM Tool Landscape (2026)

To be populated with full tool comparison matrix from pkm-landscape survey.

Appendix B: Framework Comparison Matrix

PARA vs Zettelkasten vs GTD vs LYT vs ALIVE/PCM — detailed feature and philosophy comparison.

Appendix C: MEMORY.md Teardown

Side-by-side analysis: what MEMORY.md looks like at 50/100/200 entries vs what a bundle tree looks like at the same scale.
