We all want to ship code faster, but working with agents like Claude Code doesn't necessarily make you a 10x dev.
If your agent doesn't follow all your company's rules, your code might go off the rails. That's because the rules in agents.md aren't guaranteed to be picked up. Straion is solving this by helping enterprise teams manage & enforce rules, like insurance for your code.
findable.
Hey makers & creators,
Pete here, founder of @findable and one of the early testers and supporters of Straion.
I’ve been working closely with @lukas_holzer and the team, as I keep seeing the problem of AI coding agents going off the rails.
It doesn't matter if you use Claude Code, Cursor, or Copilot. Yes, they make you faster, but especially in bigger orgs they often create problems.
So instead of just building, you often end up supervising. Correcting. Re-explaining context. Pulling the AI back onto the right path.
That's where Straion comes in, helping engineering teams stick to their organisation's rules.
What impressed me early on is the simplicity of the core idea: give engineering teams a structured way to define “how we build software here,” and make sure AI coding agents actually follow those rules automatically.
Please let us know here in the comments what problems you're facing with AI coding and how we can help.
Happy Sunday, Pete
@lukas_holzer @peterbuch Interesting angle — especially enforcing “how we build here” across AI agents. Curious: are teams adopting this more for code quality, security, or just reducing review overhead? Feels very relevant as AI-generated code scales.
Straion
@katrin_freihofner will tell you more from her product perspective!
Straion
@lukas_holzer @peterbuch @mangal_s07 Great question. We're seeing teams adopt this for all three reasons you mentioned (code quality, security, and review overhead), but review overhead is often the immediate pain, or at least the loudest voice in the room.
As AI coding agents generate more code, engineers increasingly become bottlenecks, spending large chunks of time reviewing instead of building. That’s manageable at small scale, but once output accelerates, the traditional review process just doesn’t keep up.
Security and quality are just as critical, though — especially at scale. As teams grow, “how we build here” (architecture patterns, security constraints, naming conventions, infra standards) becomes part of the company’s operating system. The challenge is that AI doesn’t naturally know those rules, and humans can’t manually enforce them forever.
Straion helps encode and enforce those standards automatically, so teams can scale AI-generated code without sacrificing quality, security, or maintainability.
@lukas_holzer @peterbuch @katrin_freihofner This makes a lot of sense — especially the idea that review overhead becomes the first visible bottleneck as AI output scales. Encoding “how we build here” feels less like a tooling problem and more like preserving institutional memory for AI.
Straion
@mangal_s07 Can you expand a bit on what you mean by encoding "how we build here"? Not sure I got that!
MCP-Builder.ai
Congrats on the launch. Totally see the need, as I am often afraid that my coding assistant is steadily drifting away from our coding guidelines.
Am I also able to set up different coding rules depending on the tech stack of my project and teams? Web, Python, ...?
Straion
@dominik_rampelt Thanks! Yes, this is a common problem we're trying to fix! Sure, you can have as many rules as you want, spanning from infra rules to frontend guidelines. The tech stack doesn't really matter!
They can even be functional rules like behavioural flows!
Straion
@dominik_rampelt Thank you Dominik! Yes, you can have different coding rules depending on the tech stack. Straion will automatically pick the applicable rules based on the task.
Netlify
Hey, this looks amazing! Really useful concept, especially with regard to giving focussed context to an agent and for centralising rules across repos. I'd love to know how the tool selects the right rules to use and if there's any way to see which rules have been selected for a prompt?
Straion
@orinokai We took a completely different route for rule matching than Cursor and others.
Instead of matching rules at the folder level or by file extension, we've trained a machine learning pipeline to do the matching. It's based on a variety of signals: classifications, embeddings, labels, and so on. Basically, we've tried to imitate the human brain! My brain does not locate knowledge based on a directory 😂
That way we can be super agnostic of repos, and developers don't have to recall where the rules they need are located!
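To make the embedding side of this idea concrete, here is a minimal, purely illustrative sketch. It is not Straion's actual pipeline: the rule set, the toy bag-of-words "embedding", and the similarity threshold are all my assumptions. A real system would use a learned embedding model plus the classifiers mentioned above.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would use a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical rule set: each rule id maps to a short description that gets embedded.
RULES = {
    "frontend-naming": "react component naming conventions frontend",
    "infra-secrets": "never commit secrets credentials infrastructure config",
    "api-errors": "api error handling response codes backend",
}

def match_rules(task: str, threshold: float = 0.2) -> list[str]:
    """Return rule ids whose description is semantically close to the task."""
    task_vec = embed(task)
    scored = ((rid, cosine(task_vec, embed(desc))) for rid, desc in RULES.items())
    return [rid for rid, score in scored if score >= threshold]

print(match_rules("add error handling to the backend api"))  # -> ['api-errors']
```

The point is only the shape of the approach: rules are selected by semantic similarity to the task, not by where they live on disk.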
When it comes to visualisation, we currently fall a bit short. We just present the output inside the terminal of Claude Code, Codex, or GitHub Copilot! (You get a kind of validation report.)
But we are planning to implement a dashboard so you can see exactly which rules were applied for each task!
That's how we showcase it currently:
Straion
Hey makers, Lukas here, CEO & Co-Founder of Straion.
We built Straion after repeatedly running into the same issue while working with AI coding agents like Claude Code, Cursor, and Copilot.
They’re powerful, but they don’t naturally understand how your organization builds software. Things like internal standards, architectural decisions, security rules, or simply “how we do things here.” As a result, teams often spend a lot of time reviewing, correcting, and re-guiding the AI.
Straion is our attempt to help with that.
It gives engineering teams a central place to define their rules, and ensures those rules are automatically applied whenever AI generates code.
We have a simple goal: help teams get the speed benefits of AI without losing consistency and control.
We’re still very early, and there’s a lot we need to learn.
If you’re using AI coding tools in your team, we’d genuinely love your feedback: What works, what doesn’t, and where something like Straion could be useful (or not).
Also always happy to jump on a call,
And if you know engineering leaders or teams at larger organizations who are actively using AI for software development, introductions would mean a lot. We’re especially interested in learning from real-world setups + challenges.
Thanks so much for checking out Straion and for any feedback. I’ll be here all day to answer questions and learn from you.
Lukas
Hi, looks awesome @lukas_holzer! Is there any limitation in terms of team size, or can it be used by e.g. a 2-person team and a 30-person team with the same results?
Straion
@bernischaffer Hey, no, there's no limitation in terms of team size; you can use Straion for a small team. But we're focusing on enterprise clients because we've seen that the problems there are of a different magnitude. Not saying small teams don't have those problems, but for a solo developer, managing the rules in an AGENTS.md is doable.
If you work in a large monorepo with multiple services and frontend/backend, though, then it's definitely something you should take a look at!
As the founder of a security consultancy, watching how quickly the AI and agentic movement has taken off has been incredible, but it has also introduced new and interesting challenges in keeping the company safe!
I am super excited to see what Straion can do in keeping engineering teams moving quickly while keeping the codebase clean and company policies met!
Straion
@patrickfarwick Thanks! Yeah, this whole thing is moving at light speed (or even warp speed?)
With Straion we try to help devs avoid having to keep that pace or commit to one technology. We try to be a proxy managing all the rules, so you don't have to think about skills, how to structure .md files so they are picked up best by the latest model, context engineering, etc., or even whether to go with Cursor or Claude Code.
We are provider-agnostic and optimize the rules internally so that they are best picked up by agents!
Straion
We built Straion because AI-generated code is everywhere, but in reality it rarely fits how companies actually build software.
The problem isn’t generating code anymore. It’s alignment. Every company has its own standards for security, privacy, architecture, design systems, and frameworks. Yet AI tools don’t automatically understand those rules. The result? Manual fixes, long review cycles, and wasted time.
We built Straion to change that.
Straion automatically extracts company-specific requirements from sources like wikis, contribution guidelines, and best practices — and translates them into instructions AI agents can actually follow. That way, generated code fits the organization from the start.
This means:
Less manual correction
Fewer review loops
Better security and compliance alignment
Faster, more cost-efficient delivery
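The "extract requirements, translate them into agent instructions" flow above could be sketched roughly like this. Everything here is hypothetical: the rule schema, the scope names, and the rendered markdown shape are my assumptions for illustration, not Straion's actual format.

```python
# Hypothetical schema: rules extracted from wikis and contribution guidelines,
# each tagged with the scope it applies to.
EXTRACTED_RULES = [
    {"scope": "security", "text": "Never log personally identifiable information."},
    {"scope": "architecture", "text": "New services must expose a /healthz endpoint."},
    {"scope": "frontend", "text": "Use the in-house design-system components, not raw HTML."},
]

def render_agent_instructions(rules: list[dict], scopes: set[str]) -> str:
    """Render the rules relevant to a task as a markdown block an agent can follow."""
    selected = [r for r in rules if r["scope"] in scopes]
    lines = ["## Company rules for this task"]
    lines += [f"- ({r['scope']}) {r['text']}" for r in selected]
    return "\n".join(lines)

# For a backend task, only security and architecture rules get injected.
print(render_agent_instructions(EXTRACTED_RULES, {"security", "architecture"}))
```

The design point is that the agent only ever sees the rules relevant to the current task, rendered in a form it can actually follow, instead of one giant static guidelines file.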
Before building, we conducted 100+ interviews with software teams to truly understand their pain points. The result is a product that doesn’t just work technically — it solves a real, expensive problem.
Ultimately, we built Straion so developers can focus on what really matters again: building great software instead of fixing AI output.