Persistent, Secure Memory for AI Agents
Level Up CTF is the first platform to integrate SAGE — an open-source memory system that gives AI persistent, encrypted memory across conversations. Our 9 pipeline agents share knowledge through SAGE's consensus-validated memory layer, so every challenge they generate makes the next one smarter.
Open source · AES-256 encrypted · BFT consensus · Any AI, any model
Every challenge-generation cycle feeds knowledge back into SAGE. The system doesn't just generate challenges; it remembers, reasons, and improves.
AI agents generate challenges with techniques learned from past successes and failures.
Autonomous red team agents attempt to solve each challenge, reporting difficulty and exploit paths.
Results flow into SAGE's consensus-validated memory: what worked, what failed, what needs hardening.
Next-generation agents recall these lessons, producing challenges that are harder, more creative, and better calibrated.
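The loop above can be sketched in a few lines of Python. This is a toy in-process stand-in, not SAGE's actual API; the names `MemoryStore`, `record_result`, and `recall` are illustrative assumptions.

```python
from collections import defaultdict

class MemoryStore:
    """Toy stand-in for SAGE's memory layer (illustrative only)."""
    def __init__(self):
        self.lessons = defaultdict(list)  # topic -> list of lesson dicts

    def record_result(self, topic, outcome, note):
        # In SAGE this write would be encrypted, signed, and
        # consensus-validated; here we just append in memory.
        self.lessons[topic].append({"outcome": outcome, "note": note})

    def recall(self, topic):
        # Return past lessons so the next generation cycle can
        # avoid known failures and reuse successes.
        return self.lessons[topic]

store = MemoryStore()

# One cycle: red team solves a staged challenge...
store.record_result("sql-injection", "solved", "UNION-based bypass, 12 min")
# ...and a broken deployment gets flagged for hardening.
store.record_result("sql-injection", "broken", "seed data missing, unsolvable")

# Next cycle: the designer recalls both lessons before generating.
for lesson in store.recall("sql-injection"):
    print(lesson["outcome"], "-", lesson["note"])
```

The point of the sketch is the feedback shape: writes from validation, reads at generation time, keyed by topic rather than dumped into one context.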
SAGE is an open-source project that gives AI persistent, secure memory — not a vector database with a retrieval wrapper. It uses AES-256 encryption, Ed25519 signing, and BFT consensus to create a governed knowledge layer. Level Up CTF is the first production deployment of SAGE for multi-agent collaboration.
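BFT consensus in general tolerates up to f faulty validators out of n = 3f + 1, committing a write only once 2f + 1 validators vouch for it. A toy quorum check under that standard assumption (this is the textbook threshold, not a claim about SAGE's specific protocol):

```python
def bft_quorum(n_validators: int) -> int:
    """Smallest vote count that guarantees agreement when up to
    f = (n - 1) // 3 validators may be faulty: 2f + 1."""
    f = (n_validators - 1) // 3
    return 2 * f + 1

def is_committed(votes: int, n_validators: int) -> bool:
    # A learning becomes institutional knowledge only with a quorum.
    return votes >= bft_quorum(n_validators)

print(bft_quorum(4))       # 4 validators tolerate f=1 fault, so quorum is 3
print(is_committed(2, 4))  # False: two votes are not enough
print(is_committed(3, 4))  # True: three votes commit the entry
```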
Every learning is validated by distributed consensus before becoming institutional knowledge.
9 specialized agents share knowledge across design, validation, calibration, and hardening.
Knowledge is tagged and routed to the right agent at the right time, rather than dumped into a generic context.
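The routing idea can be pictured as a simple tag index. The agent names come from the pipeline described below; the routing table and `route` function are illustrative assumptions, not SAGE internals.

```python
# Hypothetical map from memory tags to the agents that should receive them.
ROUTES = {
    "difficulty": ["Calibrator"],
    "exploit-path": ["Hardener", "Exploit"],
    "deployment": ["Validator", "Repair"],
}

def route(memory: dict) -> set:
    """Deliver a tagged memory only to the agents that need it,
    instead of dumping it into every agent's context."""
    recipients = set()
    for tag in memory["tags"]:
        recipients.update(ROUTES.get(tag, []))
    return recipients

m = {"text": "Red team solved via SSRF in 9 minutes",
     "tags": ["difficulty", "exploit-path"]}
print(sorted(route(m)))  # ['Calibrator', 'Exploit', 'Hardener']
```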
Empirical evidence that institutional memory dramatically outperforms prompt engineering alone.
Learning improvement with 3-line prompts + SAGE
200-line expert prompts without memory
Red team difficulty over 10 runs, zero prompt changes
“3-line prompts with SAGE achieve 18x the learning correlation of 200-line expert prompts without it.”
SAGE powers the intelligence behind every AI-generated challenge on Level Up CTF. Here's how it works in practice.
Our pipeline uses 9 AI agents — Designer, Validator, Calibrator, Hardener, Repair, Narrative, Exploit, Static Analysis, and an Orchestrator — each with their own SAGE identity and domain-specific memory. When the Designer creates a challenge, it first recalls what worked (and what failed) for similar challenges in the past. When the Calibrator assesses difficulty, it draws on red team results from previous validations.
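One way to picture per-agent identities with domain-scoped recall is a shared store keyed by (domain, topic), where each agent reads only its own slice. The `Agent` class and its method names are assumptions for illustration, not SAGE's API.

```python
class Agent:
    """Pipeline agent with its own identity and memory domain."""
    def __init__(self, name: str, domain: str, memory: dict):
        self.name = name
        self.domain = domain
        self._memory = memory  # shared store, keyed by (domain, topic)

    def recall(self, topic: str) -> list:
        # Each agent sees only its own domain's slice of memory.
        return self._memory.get((self.domain, topic), [])

# Hypothetical shared memory populated by earlier cycles.
shared = {
    ("design", "web"): ["SSTI scenarios rated most creative"],
    ("calibration", "web"): ["red team mean solve time: 22 min"],
}

designer = Agent("Designer", "design", shared)
calibrator = Agent("Calibrator", "calibration", shared)

print(designer.recall("web"))    # design lessons only
print(calibrator.recall("web"))  # calibration data only
```

The Designer and Calibrator query the same store but get different answers, which is the practical meaning of "domain-specific memory" above.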
Challenges generated with SAGE enrichment are marked with a purple SAGE badge on their challenge card. These challenges benefit from accumulated institutional memory — better calibrated difficulty, more creative scenarios, and fewer broken deployments. As more challenges are generated and validated, the badge represents an ever-growing body of knowledge.
Every night, autonomous red team agents attempt to solve staged challenges. Their results — solve time, exploit method, difficulty assessment — are submitted back to SAGE. This creates a continuous feedback loop: the next generation of challenges is informed by real exploit data, not just LLM guesswork. Over our testing batches, this loop reduced calibration error by 37% (from 0.40 to 0.25 mean absolute error).
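The calibration-error metric is a mean absolute error between predicted and observed difficulty, and it is simple to compute. The difficulty scores below are illustrative numbers chosen to reproduce the reported 0.40 and 0.25 figures, not the actual test-batch data.

```python
def mean_absolute_error(predicted, observed):
    """Average |predicted - observed| difficulty across challenges."""
    assert len(predicted) == len(observed)
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

observed = [0.7, 0.2, 0.5, 0.8]  # red team's assessed difficulty (0-1 scale)

# Before the feedback loop: predictions off by 0.4 on average.
before = mean_absolute_error([0.3, 0.6, 0.9, 0.4], observed)
# After: recalled exploit data tightens the predictions.
after = mean_absolute_error([0.45, 0.45, 0.75, 0.55], observed)

print(round(before, 2), round(after, 2))  # 0.4 0.25
```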
SAGE is open-source under the Apache 2.0 license — available for any AI system that needs persistent memory. Level Up CTF is proud to be its first production deployment.