← Blog

Graft v2.0: Import System and Persistent Memory

5 min read
graft · compiler · ai · claude-code · adversarial-debate · imports · memory

What is Graft?

Graft is a graph-native language that compiles .gft files to Claude Code harness structures and executes AI agent pipelines. It eliminates token waste through typed schemas, edge transforms, and compile-time budget analysis.

What v2.0 Adds

Two features that turn Graft from a single-file tool into a multi-file, stateful system.

Import System

Share contexts and nodes across .gft files:

// shared.gft — a library file (no graph declaration)
context UserMessage(max_tokens: 500) {
  content: String
  user_id: String
}

context SystemConfig(max_tokens: 200) {
  persona: String
  temperature: Float(0..1)
}

// chatbot.gft — imports from shared.gft
import { UserMessage, SystemConfig } from "./shared.gft"

node Responder(model: sonnet, budget: 4k/2k) {
  reads: [UserMessage, SystemConfig]
  produces Response { reply: String }
}

The resolver uses DFS with an ancestor set for circular import detection, and an ExportableNames snapshot to prevent transitive re-export. Only contexts and nodes are importable — edges, graphs, and memories are excluded.
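To make the two mechanisms concrete, here is a minimal sketch of that resolver strategy. This is illustrative code, not Graft's actual implementation: the types and function names are mine, but the two ideas match the description above — a DFS ancestor set for cycle detection, and an ExportableNames-style snapshot taken before recursing so transitively imported names are never re-exported.

```typescript
// Hypothetical resolver sketch; names are illustrative, not Graft's real API.
type FileAST = { imports: string[]; exports: string[] };

function resolveImports(
  entry: string,
  files: Map<string, FileAST>,
  ancestors: Set<string> = new Set(),
  resolved: Map<string, string[]> = new Map(),
): Map<string, string[]> {
  // Cycle check: if the file is already on the current DFS path, we looped.
  if (ancestors.has(entry)) {
    throw new Error(`circular import: ${[...ancestors, entry].join(" -> ")}`);
  }
  if (resolved.has(entry)) return resolved; // already handled via another path

  const ast = files.get(entry);
  if (!ast) throw new Error(`file not found: ${entry}`);

  // Snapshot exportable names BEFORE recursing into this file's own imports,
  // so names it merely imports are never re-exported transitively.
  resolved.set(entry, [...ast.exports]);

  ancestors.add(entry);
  for (const dep of ast.imports) {
    resolveImports(dep, files, ancestors, resolved);
  }
  ancestors.delete(entry); // DFS backtrack: only true ancestors stay in the set

  return resolved;
}
```

The key ordering detail is the snapshot: because exports are recorded before the recursive calls, a file that imports `c.gft` can never accidentally surface `c.gft`'s names to its own importers.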

Persistent Memory

State that survives session cleanup and persists across pipeline runs:

memory ConversationLog(max_tokens: 2k, storage: file) {
  turns: List<Turn { role: String, content: String }>
  summary: Optional<String>
}

node Responder(model: sonnet, budget: 4k/2k) {
  reads: [UserMessage, SystemConfig, ConversationLog]
  writes: [ConversationLog]
  produces Response { reply: String }
}

Memory is stored in .graft/memory/<name>.json, separate from session data. A node's writes: [ConversationLog] clause declares which memories that node may mutate. At runtime, memory is always reloaded from disk before each node execution, and dry run mode skips memory saves entirely.
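A file-backed load/save pair with that dry-run guard can be sketched as follows. This assumes the `.graft/memory/<name>.json` layout described above; the function and option names are hypothetical, not Graft's exported API.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Illustrative sketch of file-backed memory persistence (names are mine).
function loadMemory(name: string, baseDir = ".graft/memory"): Record<string, unknown> {
  const file = path.join(baseDir, `${name}.json`);
  if (!fs.existsSync(file)) return {}; // first run: start from an empty scaffold
  return JSON.parse(fs.readFileSync(file, "utf8"));
}

function saveMemory(
  name: string,
  state: Record<string, unknown>,
  opts: { dryRun: boolean },
  baseDir = ".graft/memory",
): void {
  if (opts.dryRun) return; // dry run mode skips memory saves entirely
  fs.mkdirSync(baseDir, { recursive: true });
  fs.writeFileSync(path.join(baseDir, `${name}.json`), JSON.stringify(state, null, 2));
}
```

Keeping the dry-run check inside the save path (rather than at each call site) means no execution mode can accidentally persist state.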

New Keywords

import, from, memory, writes, storage


5 Debate Rounds Across All Pipeline Stages

v2.0 was the most complex release — both features touch every stage of the compiler pipeline, requiring 5 adversarial debate rounds:

| Round | Scope | Key Decision |
|-------|-------|--------------|
| R1 | Lexer + AST + Parser | 5 new keywords, writes: string[] on NodeDecl, flag-based import ordering |
| R2 | Import Resolver | ExportableNames snapshot before recursion, DFS ancestor set for cycles |
| R3 | Analyzer | Memory name collision detection, writes validation, 0.3 partial factor for token estimation |
| R4 | CodeGen + Runtime | Field-matching merge, foreach staleness fix, dry run guard |
| R5 | Integration | Example files, 8 integration tests with temp file isolation |

~70 agent calls total, with complexity-adaptive scaling: a full 4-agent debate for R1-R4 (design decisions) and a streamlined single-agent pass for R5 (integration tests).


The Bugs That Mattered

Foreach Memory Staleness (R4) — The Biggest Bug

Caught by A3-Skeptic alone — all three other agents missed it.

The executeNode method had a !this.outputs.has(ref.context) cache guard that prevented reloading context data when it was already in the outputs map. This works fine for session data (immutable within a run), but memory is different:

foreach iteration 1: Responder reads ConversationLog → produces output → writes to memory
foreach iteration 2: Responder tries to read ConversationLog → cache says "already loaded" → gets STALE data

Iteration N+1 never sees the memory updates from iteration N. Fix: always reload memory from disk for memory-type refs, no cache guard.
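The shape of that fix can be sketched with a simplified executor. This is not Graft's actual executeNode code; the class and field names are illustrative, but the logic mirrors the description: memory-type refs bypass the output cache and always reload from disk.

```typescript
// Simplified sketch of the staleness fix; names are illustrative.
type Ref = { context: string; kind: "context" | "memory" };

class Executor {
  outputs = new Map<string, unknown>();

  constructor(private loadFromDisk: (name: string) => unknown) {}

  resolveRef(ref: Ref): unknown {
    // Memory refs bypass the cache guard: always reload from disk, so
    // foreach iteration N+1 sees the writes made in iteration N.
    if (ref.kind === "memory" || !this.outputs.has(ref.context)) {
      this.outputs.set(ref.context, this.loadFromDisk(ref.context));
    }
    return this.outputs.get(ref.context);
  }
}
```

Session contexts keep the original `!this.outputs.has(...)` guard, which is safe precisely because they are immutable within a run.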

Transitive Re-Export (R2)

Two of four agents (A2, A4) initially proposed approaches where importing from b.gft (which itself imports from c.gft) would make c.gft's names available. This violates explicit-dependency principles.

A1-Architect's ExportableNames snapshot — extract importable names from a target file BEFORE recursing into that file's own imports — was the only correct approach. Cross-critique caught the bug before convergence.

Duplicate Writes Silent Overwrite (R1)

A3 caught that node X { writes: [A] writes: [B] } would silently overwrite the first writes clause with the second. Fix: boolean hasWrites guard that errors on duplicate.
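A minimal version of that guard, in a hypothetical node-body parsing loop (the clause representation here is mine, not Graft's parser internals):

```typescript
// Sketch of the hasWrites guard against duplicate writes clauses.
function parseWritesClauses(clauses: { keyword: string; value: string[] }[]): string[] {
  let writes: string[] = [];
  let hasWrites = false;
  for (const clause of clauses) {
    if (clause.keyword === "writes") {
      if (hasWrites) {
        // Before the fix, this branch silently overwrote the first clause.
        throw new Error("duplicate 'writes' clause on node");
      }
      hasWrites = true;
      writes = clause.value;
    }
  }
  return writes;
}
```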

Entry-File Parse Error Gap (R2)

A3 caught that the resolver would crash instead of accumulating errors when the entry file itself has parse errors. The implementer added an entryFile guard — a justified deviation from the convergence spec.
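A sketch of what such a guard looks like, with hypothetical names (Graft's real resolver API is not shown in this post): check the entry file's parse result first and return accumulated errors instead of recursing into a null AST.

```typescript
// Illustrative entry-file guard: accumulate parse errors, don't crash.
type ParseResult = { ast: object | null; errors: string[] };

function resolveEntry(
  file: string,
  parse: (file: string) => ParseResult,
): ParseResult {
  const result = parse(file);
  if (result.errors.length > 0) {
    // Entry-file guard: surface the errors and stop before import
    // resolution, rather than dereferencing a null AST.
    return { ast: null, errors: result.errors };
  }
  // ...continue with import resolution on result.ast...
  return result;
}
```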


The Field-Matching Merge Debate (R4)

The most contentious design decision: when a node with writes: [ConversationLog] produces output, what exactly gets written to memory?

A1 (Architect): Full overwrite — JSON.stringify the entire node output. YAGNI, simpler code.

A3 + A4 + A2 (3:1 majority): Field-matching merge — only write output fields that match the memory's declared schema fields:

// Memory schema has: turns, summary
// Node output has: turns, summary, reply
// Only turns and summary are written; reply is the production output, not memory state
for (const field of mem.fields) {
  if (field.name in output) {
    current[field.name] = output[field.name];
  }
}

A1's objection was reasonable, but the counterargument was fatal: full overwrite would wipe unrelated memory fields. If a future node only updates summary, full overwrite would delete turns. That's data loss.
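That data-loss argument is easy to demonstrate. The helper below wraps the merge loop from above in a runnable form (the function name is mine): a node output that only carries summary updates summary, while turns survives untouched — exactly what full overwrite would destroy.

```typescript
// Runnable form of the field-matching merge; only schema-declared fields
// from the node output are written into memory state.
function mergeIntoMemory(
  schemaFields: string[],
  current: Record<string, unknown>,
  output: Record<string, unknown>,
): Record<string, unknown> {
  for (const field of schemaFields) {
    if (field in output) {
      current[field] = output[field]; // matched field: overwrite in place
    }
  }
  return current; // fields absent from the output (e.g. turns) are preserved
}
```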


Forced Dissenter Highlights

  • R1: A1-Architect argued against flag-based import ordering, preferring a pre-loop scan. Self-rebutted: "O(1) space, catches the same errors, no second pass needed."
  • R3: A1-Architect argued collision detection between memory and context names is YAGNI. Self-rebutted: "collision detection is correctness, not a feature."
  • R4: A2-Pragmatist challenged three design choices (shallow merge, direct load, empty scaffold). Self-rebutted all three — the most thorough self-rebuttal in the project's history.
  • R5: Near-unanimous consensus (score 9). A2 argued for 6 tests instead of 8, conceded the 2 extra edge cases cost nothing.

Stats

| Metric | Value |
|--------|-------|
| New source lines | ~1,100 |
| New tests | 78 (249 total) |
| Debate rounds | 5 |
| Agent calls | ~70 |
| New ratchet decisions | 34 (92 total) |
| Dependencies added | 0 |
| Pipeline stages touched | All 7 (lexer → parser → resolver → analyzer → codegen → runtime → integration) |

Try It

git clone https://github.com/JSLEEKR/graft.git
cd graft && npm install && npm run build
node dist/index.js compile examples/chatbot.gft --out-dir ./output
node dist/index.js run examples/hello.gft --input '{"title":"test"}' --dry-run

Built with Claude Opus 4.6 via Claude Code. April 2026.