# Graft v3.2: Parser Error Recovery and the Brace-Depth Bug That Almost Was
Graft is a graph-native language for AI agent harness engineering. It compiles `.gft` source files into execution harnesses, declaring how agents communicate, what context they share, and how token budgets flow through a pipeline. The compiler itself is built through a multi-agent adversarial debate process in which 2-4 AI agents independently analyze, cross-critique, and converge on implementations.
v3.2 is a parser infrastructure release: the parser moves from throw-on-first-error to error accumulation with panic-mode recovery. The LSP gets four polish improvements. No new syntax, no breaking changes.
## What's New

### Parser Error Recovery
Previously, a single syntax error in a `.gft` file stopped parsing immediately. Now the parser accumulates errors and continues:
```
context Broken max_tokens: 1000 {   // error: expected '('
  query: String
}

node Worker(model: sonnet, budget: 5k/2k) {   // still parsed!
  reads: [Input]
  produces Result {
    answer: String
  }
}
```
- **Before v3.2:** one error, `Worker` node not parsed.
- **After v3.2:** one error reported, `Worker` node fully accessible.
The implementation uses panic-mode recovery at declaration boundaries. When the parser encounters an error inside a declaration, it records the error and calls `synchronize()`, which advances tokens until the next top-level keyword (`context`, `node`, `memory`, `graph`, `edge`, `import`). Since all Graft declarations start with distinct keywords, recovery points are unambiguous.
```typescript
const { program, errors } = new Parser(tokens).parse();
// program is always non-null — contains all successfully-parsed declarations
// errors contains all accumulated parse errors
```
The `ParseResult` type replaces the old `Program` return value, and `program` is always non-null: even if every declaration fails to parse, you get an empty `Program` object, never `null`.
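As a rough sketch of what this contract looks like to a caller (field names beyond `program` and `errors` are illustrative, not the actual compiler API):

```typescript
// Hypothetical shapes matching the described ParseResult contract.
interface ParseError {
  message: string;
  line: number;
  column: number;
}

interface Program {
  declarations: unknown[]; // successfully parsed top-level declarations
}

interface ParseResult {
  program: Program;     // always non-null, possibly empty
  errors: ParseError[]; // all accumulated parse errors
}

// Because program is never null, callers can always walk it, even on bad input.
function report(result: ParseResult): string {
  const count = result.program.declarations.length;
  return `${count} declaration(s), ${result.errors.length} error(s)`;
}
```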
### Keyword Hover Documentation
Hovering over any Graft keyword in the LSP now shows documentation with syntax examples:
- `context` → "Declares a context schema with typed fields and a max_tokens budget."
- `node` → "Declares a processing node with model, budget, reads, writes, and produces."
- `reads` → "Specifies which contexts, produces, or memories a node reads from."
Fifteen keywords are documented: `context`, `node`, `memory`, `graph`, `edge`, `import`, `reads`, `writes`, `produces`, `model`, `max_tokens`, `on_failure`, `storage`, `foreach`, `parallel`. Keyword hover is checked before `ProgramIndex` lookups, so hovering over a keyword always shows the keyword docs.
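The lookup ordering can be sketched as below; the `KEYWORD_DOCS` entries are abbreviated to three of the fifteen keywords, and `indexLookup` is a hypothetical stand-in for the real `ProgramIndex`:

```typescript
// Keyword docs are consulted first, so a user-defined symbol that happens to
// share a keyword's name can never shadow the keyword documentation.
const KEYWORD_DOCS: ReadonlyMap<string, string> = new Map([
  ["context", "Declares a context schema with typed fields and a max_tokens budget."],
  ["node", "Declares a processing node with model, budget, reads, writes, and produces."],
  ["reads", "Specifies which contexts, produces, or memories a node reads from."],
]);

// Hypothetical stand-in for a ProgramIndex symbol lookup.
function indexLookup(word: string): string | null {
  return null; // resolve user-defined contexts/nodes here
}

function hover(word: string): string | null {
  const doc = KEYWORD_DOCS.get(word); // keyword check comes first
  if (doc !== undefined) return doc;
  return indexLookup(word);
}
```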
### LSP Cache Eviction
The LSP server previously grew its document cache without bound. Now it uses LRU eviction capped at 50 entries: large enough for any reasonable workspace, small enough to keep memory bounded in long-running sessions.
### Import Completion Wiring
Import brace completions (`import { | }`) now resolve actual exported names from target files. The completion handler reads the target `.gft` file, lexes and parses it, and returns `context` and `node` names as completion items.
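The shape of that flow can be sketched as follows. The real handler runs the Graft lexer and parser on the target file; this stand-in extracts declaration names with a regex purely for illustration:

```typescript
// Illustrative stand-in for "lex + parse, then collect declaration names".
// Matches top-level `context Foo` and `node Bar(...)` declarations.
function exportedNames(source: string): string[] {
  const names: string[] = [];
  const pattern = /^\s*(context|node)\s+([A-Za-z_][A-Za-z0-9_]*)/gm;
  for (const match of source.matchAll(pattern)) {
    names.push(match[2]);
  }
  return names;
}
```

Each returned name would then be wrapped as an LSP completion item by the handler.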
## The Brace-Depth Bug That Almost Was
The most interesting design discussion in v3.2 was about `synchronize()`, the function that skips past broken declarations during error recovery.
A2-Pragmatist proposed a flat scan: advance tokens until you see a declaration keyword. Simple, fast, correct... almost.
A3-Skeptic identified the problem: what about keyword-named fields?
```
node Worker(model: sonnet, budget: 5k/2k) {
  produces Result {
    context: String   // ← "context" is a field name, not a declaration keyword!
    summary: String
  }
}
```
A flat scan would see `context` at field position and think it found a new declaration, splitting the `produces` block in half. The fix: track brace depth in `synchronize()` and only match keywords at depth zero.
```typescript
private synchronize(): void {
  let braceDepth = 0;
  while (!this.isAtEnd()) {
    const token = this.current();
    if (token.type === TokenType.LBrace) braceDepth++;
    else if (token.type === TokenType.RBrace) {
      if (braceDepth > 0) braceDepth--;
      else { this.advance(); break; }
    } else if (braceDepth === 0 && DECLARATION_KEYWORDS.has(token.type)) {
      break;
    }
    this.advance();
  }
}
```
A3 also proposed a progress guard: if `synchronize()` doesn't advance the position, force an advance to prevent infinite loops. This makes the recovery loop provably terminating for any input.
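The guard wraps the synchronization call in the top-level recovery loop; a minimal sketch, with the parser interface reduced to hypothetical stand-ins:

```typescript
// Minimal parser surface for the sketch; the real Parser has more members.
interface RecoverableParser {
  readonly pos: number;
  synchronize(): void;
  advance(): void;
  isAtEnd(): boolean;
}

// If synchronize() consumed nothing, force one token of progress so the
// outer parse loop can never spin forever on the same error position.
function recoverWithProgressGuard(parser: RecoverableParser): void {
  const before = parser.pos;
  parser.synchronize();
  if (parser.pos === before && !parser.isAtEnd()) {
    parser.advance();
  }
}
```

Since every iteration of the outer loop now consumes at least one token or hits end-of-input, termination follows directly.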
## Process: Cross-Critique Is Dead
v3.2 formally removed cross-critique (R-PROC-08) from the MEDIUM tier. The mechanism — where agents review each other's proposals — was skipped 15 consecutive times with zero quality cost. The adversarial checklist (added in v3.0) provides sufficient defensive coverage in Step 1 without the overhead.
This continues the trend of process streamlining. From v1.0's 14 agent calls per task (research + 4 analysis + 4 critique + convergence + implementation + review + memory), we're now at 4 calls for MEDIUM and 2 for DIRECT — a 71-86% reduction with no quality regression.
## Stats

| Metric | Value |
|--------|-------|
| Tests | 582 (45 new) |
| Ratchet decisions | 180 (8 new, 1 unlocked) |
| Rounds | 4 (1 debated, 2 direct, 1 test-only) |
| Agent calls | ~9 |
| First-try pass rate | 100% (4/4) |
| New source files | 0 |
| Breaking changes | 0 |
## Try It

```shell
npm install -g @graft-lang/graft

# Compile a pipeline
graft compile pipeline.gft --out-dir ./output

# Check without generating files
graft check pipeline.gft

# Run with dry-run mode
graft run pipeline.gft --input '{"task": "review"}' --dry-run
```
Source: github.com/JSLEEKR/graft