Getting Started with hippocampus.md

Set up context lifecycle management in your AI agent in under 10 minutes.

Prerequisites

  • OpenClaw with Pi agent — hippocampus.md is a Pi extension
  • Node.js/TypeScript support — for the extension runtime
  • Basic understanding of AI agent context — what tokens are, why context size matters

1. Download the Extension

Copy the hippocampus.ts extension file to your Pi extensions directory:

# Option A: Clone this repo and copy
git clone https://github.com/starvex/hippocampus-md.git
cp hippocampus-md/extension/hippocampus.ts ~/.pi/extensions/

# Option B: Download directly  
curl -o ~/.pi/extensions/hippocampus.ts \
  https://raw.githubusercontent.com/starvex/hippocampus-md/main/extension/hippocampus.ts

2. Configure Compaction Mode

Critical: Set your Pi agent to use "default" compaction mode (NOT "safeguard"):

// ~/.pi/config.json
{
  "compaction_mode": "default"
}

The "safeguard" mode bypasses extension hooks, preventing hippocampus.md from working.

3. Restart & Verify

# Restart Pi agent
openclaw gateway restart

# Check the hippocampus log
tail -f ~/.pi/hippocampus.log

# You should see:
# [hippocampus] 🧠 hippocampus.md extension loaded

Configuration

hippocampus.md works out of the box with sensible defaults. Tune for your use case:

const CONFIG = {
  // Per-type decay rates (lower = remembers longer)
  decayRates: {
    decision:    0.03,   // decisions: half-life ≈ 46 turns
    user_intent: 0.05,   // user goals: half-life ≈ 28 turns
    context:     0.12,   // general context — standard decay
    tool_result: 0.20,   // tool outputs decay fast
    ephemeral:   0.35,   // heartbeats/status — decay very fast
  },
  
  sparseThreshold: 0.25,     // below this → pointer only
  compressThreshold: 0.65,   // below this → compressed summary  
  
  retentionFloor: {          // minimum retention per type
    decision:    0.50,       // decisions never drop below 0.50
    user_intent: 0.35,       // user goals never drop below 0.35
  },
  
  maxSparseIndexTokens: 2500,   // max tokens for sparse index
  summaryModel: "gemini-2.5-flash", // cheap model for classification
  debug: true,                  // verbose logging
};

Understanding Decay Rates

Decay rates control how fast different content types lose strength:

Type          Rate   Half-life    Example
decision      0.03   ~46 turns    "I'll deploy using Vercel"
user_intent   0.05   ~28 turns    "Build me a login system"
context       0.12   ~12 turns    Normal conversation
tool_result   0.20   ~7 turns     File reads, API responses
ephemeral     0.35   ~4 turns     Heartbeats, status checks

Lower rates = longer persistence.
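
The exact update rule lives in hippocampus.ts, but the table can be reproduced with a simple exponential curve. The sketch below is illustrative only: the per-turn rule strength *= exp(-rate / 2) is an assumption chosen because it matches the half-lives above, and the sketch also shows how the retentionFloor values from CONFIG clamp the decay.

// Illustrative decay model (assumed, not the extension's actual formula)
type EntryType = "decision" | "user_intent" | "context" | "tool_result" | "ephemeral";

const decayRates: Record<EntryType, number> = {
  decision: 0.03, user_intent: 0.05, context: 0.12, tool_result: 0.20, ephemeral: 0.35,
};

// Minimum retention per type (from CONFIG.retentionFloor above)
const retentionFloor: Partial<Record<EntryType, number>> = {
  decision: 0.50, user_intent: 0.35,
};

function strengthAfter(type: EntryType, turns: number, initial = 1.0): number {
  const decayed = initial * Math.exp((-decayRates[type] / 2) * turns);
  return Math.max(decayed, retentionFloor[type] ?? 0); // floors stop decay early
}

console.log(strengthAfter("decision", 46).toFixed(2));     // "0.50" (half-life reached; the 0.50 floor holds it there)
console.log(strengthAfter("tool_result", 14).toFixed(2));  // "0.25" (two half-lives; right at the sparse threshold)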

Understanding Thresholds

Strength 1.0 ████████████████████ Full content
         0.65 ████████████▓▓▓▓▓▓▓▓ ← compressThreshold
         0.25 █████▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ ← sparseThreshold  
         0.0  ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ Dropped from context
Above 0.65    Full content stored
0.25 - 0.65   Compressed summary
Below 0.25    Sparse pointer only
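
In code, the two thresholds partition strength into three storage tiers. A minimal sketch using the default CONFIG values (the tier names here are illustrative, not the extension's internal identifiers):

// Map an entry's current strength to how it is stored in context
type Tier = "full" | "compressed" | "sparse";

const compressThreshold = 0.65;  // default from CONFIG
const sparseThreshold = 0.25;    // default from CONFIG

function tierFor(strength: number): Tier {
  if (strength >= compressThreshold) return "full";     // full content stored
  if (strength >= sparseThreshold) return "compressed"; // compressed summary
  return "sparse";                                      // pointer only
}

console.log(tierFor(0.92)); // "full"        (Entry A in the diagram below)
console.log(tierFor(0.45)); // "compressed"  (Entry B)
console.log(tierFor(0.12)); // "sparse"      (Entry C)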

Architecture

┌─────────────────────────────────────────────────────────────┐
│                     CONTEXT WINDOW                          │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              HIPPOCAMPUS INDEX                      │    │
│  │  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐   │    │
│  │  │ Entry A     │ │ Entry B     │ │ Entry C     │   │    │
│  │  │ str: 0.92   │ │ str: 0.45   │ │ str: 0.12   │   │    │
│  │  │ type: dcsn  │ │ type: tool  │ │ type: tool  │   │    │
│  │  └─────────────┘ └─────────────┘ └─────────────┘   │    │
│  │                                                     │    │
│  │  Active context: ~5,000 tokens (index only)        │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
                              │ pattern completion
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                    EXTERNAL SOURCES                         │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐       │
│  │ memory   │ │ cache    │ │ API      │ │ browser  │       │
│  │ decisions│ │ file     │ │ config   │ │ snapshot │       │
│  └──────────┘ └──────────┘ └──────────┘ └──────────┘       │
│           Retrievable: ~500,000 tokens                      │
└─────────────────────────────────────────────────────────────┘
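
In rough TypeScript terms, each entry in the hippocampus index keeps its type, its current strength, and a pointer to wherever the full content lives, so a sparse entry can be pulled back in when the agent needs it. The shapes below are hypothetical and for illustration only; they are not the extension's actual API:

// Hypothetical index-entry shape
type EntryType = "decision" | "user_intent" | "context" | "tool_result" | "ephemeral";
type ExternalSource = "memory" | "cache" | "api" | "browser";

interface IndexEntry {
  id: string;
  type: EntryType;
  strength: number;                                      // decays each turn (see decay rates above)
  summary: string;                                       // what stays in the context window
  externalRef?: { source: ExternalSource; key: string }; // where the full content lives
}

// "Pattern completion": a sparse entry is rehydrated from its external source on demand
async function rehydrate(
  entry: IndexEntry,
  fetchFrom: (ref: NonNullable<IndexEntry["externalRef"]>) => Promise<string>,
): Promise<string> {
  if (!entry.externalRef) return entry.summary;          // still held fully in context
  return fetchFrom(entry.externalRef);                   // pull the full content back in
}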

Tuning for Use Cases

Long Sessions

For agents running for hours:

sparseThreshold: 0.15
decayRates.tool_result: 0.30
decayRates.ephemeral: 0.50

Tool-Heavy

For many tool calls:

decayRates.tool_result: 0.25
retentionFloor.decision: 0.60
retentionFloor.user_intent: 0.45

Sensitive Data

Preserve user context:

retentionFloor.user_intent: 0.40
priority: "high" for critical entries

Debug Mode

For development:

debug: true
sparseThreshold: 0.35
compressThreshold: 0.75
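
If you keep a CONFIG object like the one in the Configuration section, each preset above is just a partial override of the defaults. A hypothetical example for the long-session preset (assuming CONFIG is a plain object you can spread; adapt to however your setup actually loads it):

// Hypothetical long-session tune, built from the preset values above
const longSessionConfig = {
  ...CONFIG,
  sparseThreshold: 0.15,        // entries must decay further before becoming pointers
  decayRates: {
    ...CONFIG.decayRates,
    tool_result: 0.30,          // tool output fades faster
    ephemeral: 0.50,            // heartbeats/status fade fastest
  },
};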

Expected Results

  • 16.6× compression
  • 89% retrieval success
  • ~10ms per-turn overhead
  • 0 data lost

Part of the Agent Brain Architecture

defrag.md • synapse.md • hippocampus.md • neocortex.md

Happy memory management! 🧠