Launching today

Aura
Semantic version control for AI coding agents on top of Git
99 followers
Legacy Git tracks text; Aura tracks mathematical logic. By hashing your AST instead of lines, Aura provides flawless traceability for AI-generated code. Block undocumented AI commits, surgically rewind broken functions with the Amnesia Protocol, and orchestrate massive code generation—all while saving 95% on LLM tokens. 100% local. Apache 2.0 Open Source.
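The core idea — hashing the syntax tree instead of the text — can be sketched in a few lines of Python. Aura itself uses tree-sitter across languages; this illustration uses Python's built-in `ast` module, and `semantic_hash` is a made-up name, not Aura's API:

```python
import ast
import hashlib

def semantic_hash(source: str) -> str:
    """Hash a normalized AST dump: formatting and comments don't change the hash."""
    tree = ast.parse(source)
    # ast.dump omits whitespace, comments, and (by default) line/column info
    canonical = ast.dump(tree)
    return hashlib.sha256(canonical.encode()).hexdigest()

a = semantic_hash("def add(a, b):\n    return a + b\n")
b = semantic_hash("def add(a,b): return a+b  # reformatted\n")
c = semantic_hash("def add(a, b):\n    return a - b\n")
assert a == b  # same logic, same hash
assert a != c  # changed logic, new hash
```

A line-based diff sees all three versions as different; an AST hash only changes when the logic does.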
@mhdashiquek congrats, very inspiring idea. Can you share any results you have seen in your own teams already? Did it fully replace the manual code reviews by your engineers or operate on a different level?
@mhdashiquek This is awesome - congrats. Would be great to also understand if this works for non-technical team members too who want to understand the quality of what their tech team is producing!
Aura
@kirolus_ghattas
Yes. Aura is built for the world where humans lead the Intent and AI handles the implementation.
Because Aura tracks the 'Why' in plain English rather than just text diffs, non-technical members can use `aura dashboard` to visualize logic quality and `aura prove` to get a mathematical 'Yes/No' on feature completion, without ever reading a line of code.
It's an internal tool we built and decided to open source. Please try it and let us know if anything needs improving, or better yet, contribute.
Aura
@lukaszsagol
Thanks Łukasz!
No, Aura didn’t replace code reviews for us at Naridon, but it changed what we review. Before, engineers spent most of their time just understanding what the AI did. Now Aura blocks undocumented or misaligned AI changes before review. It’s pretty brutal; our AI agents genuinely hate it 😅. After one session I asked Claude Code about it, and it literally said Aura blocks it so often it's "frustratingly annoying". But that’s the point.
When something breaks, we don’t revert whole PRs anymore. We rewind just the one function that caused the issue and move on.
One unexpected win: by using AST-based context (instead of dumping files or chat logs), we saw ~80% to even 93%+ reduction in LLM context size when handing work between agents. Way fewer tokens, way less noise.
So humans still review code, Aura just removes the AI archaeology and keeps things sane.
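The token savings come from handing the next agent structure instead of raw text. A rough Python sketch of the idea (the `signatures_only` helper and the sample source are invented for illustration, not Aura's actual extraction):

```python
import ast

def signatures_only(source: str) -> str:
    """Keep just function signatures as handover context, dropping bodies,
    comments, and docstrings the next agent rarely needs."""
    tree = ast.parse(source)
    sigs = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"def {node.name}({args}): ...")
    return "\n".join(sigs)

source = '''
# lots of comments and implementation detail the next agent rarely needs
def load_config(path):
    """Read a JSON config file from disk."""
    import json
    with open(path) as f:
        return json.load(f)

def validate(config):
    return "api_key" in config and "endpoint" in config
'''

compact = signatures_only(source)
reduction = 1 - len(compact) / len(source)
print(compact)
print(f"context reduced by {reduction:.0%}")
```

Even on this tiny file the compact form is a fraction of the original; on large files with long bodies the reduction grows accordingly.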
@mhdashiquek Hi Muhammed. Can Aura handle multi‑language repos and frameworks with consistent reliability? What kinds of visualisation or tooling does Aura offer to help developers understand semantic diffs?
Aura
@kimberly_ross
Hi Kimberly!
Yes, Aura uses tree-sitter under the hood, which means it parses code down to an Abstract Syntax Tree (AST) rather than reading text. This makes it framework-agnostic. It currently has native, highly reliable support for TypeScript/JS (including React/JSX), Python, and Rust.
For visualization, Aura moves away from traditional red/green text diffs. We offer two main tools:
This is a really interesting direction — moving from text diffs to intent + AST-level tracking makes a lot of sense in an AI-first workflow
Curious — how do you handle cases where the AI’s “intent” is correct at a high level, but the implementation subtly diverges across multiple files?
Does Aura catch cross-file semantic inconsistencies as well or mainly within scoped changes?
Aura
@shrujal_mandawkar1
Great question, Shrujal. This is exactly why we couldn't rely on text diffs! Aura handles cross-file divergence in two specific ways:
1. Global Merkle-Graph (Blast Radius): Aura doesn't just parse isolated files; it builds a mathematical graph of your entire repository locally. If an AI modifies a core function in file_a.ts, Aura's 'Proactive Blast Radius' engine immediately flags downstream functions in file_b.ts and file_c.ts that are now tainted by the change, warning you before you commit.
2. Strict Intent Alignment: If an AI agent refactors 15 logic nodes across 5 different files, Aura mathematically cross-references the AST hashes against the agent's stated intent. If the AI subtly hallucinated and modified a 16th node that it didn't explicitly declare in its reasoning, Aura triggers an 'Intent Mismatch' and halts the commit. For complex end-to-end verification, we also have `aura prove`, which traces the actual execution paths across multiple files to mathematically prove the AI's high-level intent was successfully implemented without breaking connected modules.
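Both mechanisms reduce to simple graph and set operations. A minimal Python sketch of the two checks described above (the dependency graph, file names, and node names are made up; Aura's real graph is a Merkle structure over ASTs):

```python
from collections import deque

# Hypothetical caller graph: edges point from a function to the
# functions "downstream" of it, i.e. those affected by a change.
callers = {
    "file_a.ts::parsePrice": ["file_b.ts::renderCart", "file_c.ts::applyDiscount"],
    "file_b.ts::renderCart": ["file_c.ts::checkout"],
    "file_c.ts::applyDiscount": [],
    "file_c.ts::checkout": [],
}

def blast_radius(changed: str) -> set[str]:
    """All functions transitively tainted by a change to `changed` (BFS)."""
    tainted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for caller in callers.get(node, []):
            if caller not in tainted:
                tainted.add(caller)
                queue.append(caller)
    return tainted

# 1. Blast radius: one change in file_a taints nodes in file_b and file_c.
print(blast_radius("file_a.ts::parsePrice"))

# 2. Intent alignment: the agent declared two nodes but the diff touched
#    three; the undeclared node is what triggers an 'Intent Mismatch'.
declared = {"file_a.ts::parsePrice", "file_b.ts::renderCart"}
actually_modified = declared | {"file_c.ts::checkout"}
undeclared = actually_modified - declared
assert undeclared == {"file_c.ts::checkout"}
```

The point is that both checks run on the graph, not on text, so a "correct-looking" diff that quietly touches an extra node still gets caught.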
The `aura rewind` for single functions is exactly what I need — reverting entire PRs because one AI-generated function broke things has been my biggest pain point with Claude Code. The 93% token reduction on handover is wild if that holds up in practice.
Aura
@letian_wang3
Thanks Letian! That exact pain point with Claude Code was one of the main reasons we built this. Standard git revert is a sledgehammer, but AI hallucinations usually only require a scalpel. Because Aura maps the Abstract Syntax Tree (AST), it knows exactly where a specific function starts and ends, letting you swap out just that broken logic block while keeping the other 500 lines of perfect AI code intact.
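Because the AST records exact function boundaries, a function-level rewind is just a splice. A toy Python version of the idea (`rewind_function` and the sample file are hypothetical; Aura's `aura rewind` works on tree-sitter nodes, not Python source):

```python
import ast

def rewind_function(source: str, name: str, previous: str) -> str:
    """Replace one top-level function with an earlier version,
    leaving every other line of the file untouched."""
    tree = ast.parse(source)
    for node in tree.body:
        if isinstance(node, ast.FunctionDef) and node.name == name:
            lines = source.splitlines(keepends=True)
            start, end = node.lineno - 1, node.end_lineno
            return "".join(lines[:start]) + previous + "".join(lines[end:])
    raise KeyError(f"no top-level function named {name!r}")

broken = '''def total(items):
    return sum(i.price for i in items) * 0  # AI-introduced bug

def greet(name):
    return f"hi {name}"
'''

good = "def total(items):\n    return sum(i.price for i in items)\n"
fixed = rewind_function(broken, "total", good)
assert "* 0" not in fixed   # the bug is gone
assert "greet" in fixed     # the untouched function survives
```

Contrast with `git revert`, which operates on whole commits and drags every unrelated hunk along with the one broken function.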
As for the 93% token reduction with aura handover, it holds up! Instead of dumping raw, unstructured files full of comments and whitespace into the context window, Aura generates a dense XML payload of just the logic node signatures and their dependencies. The LLM gets the exact architectural context it needs, and you save a massive amount of tokens (and money).
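The shape of such a payload can be sketched with Python's stdlib. The tag names, attributes, and sample functions below are invented for illustration; Aura's actual handover format is not shown here:

```python
import ast
import xml.etree.ElementTree as ET

def handover_payload(source: str) -> str:
    """Dense XML of function signatures and the names they call,
    instead of dumping raw source into the context window."""
    tree = ast.parse(source)
    root = ET.Element("context")
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            el = ET.SubElement(root, "node", name=node.name,
                               args=",".join(a.arg for a in node.args.args))
            # Record which other functions this node depends on
            calls = {c.func.id for c in ast.walk(node)
                     if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}
            el.set("calls", ",".join(sorted(calls)))
    return ET.tostring(root, encoding="unicode")

src = '''def fetch(url):
    return download(url)

def sync(urls):
    return [fetch(u) for u in urls]
'''

payload = handover_payload(src)
print(payload)
```

The receiving agent sees the architecture — which nodes exist, their arguments, and who calls whom — in a payload far smaller than the raw files.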