
Augment Code
Augment Code – Developer AI for real work
4.6•10 reviews•826 followers

Intent
















Augment Code has been quietly building enterprise-grade coding tools for large engineering teams, and now they've launched Intent: their answer to what comes after the IDE.
According to their announcement:
"The bottleneck has moved. The problem isn't typing code. It's tracking which agent is doing what, which spec is current, and which changes are actually ready to review."

Excited to hunt Intent by Augment Code today.
Intent is a developer workspace where agents coordinate and execute work end-to-end.
This isn’t a coding assistant. It’s an agent-driven development system.
Instead of prompting one agent at a time, you define a spec and a coordinator breaks it into tasks, delegating to specialists (implement, verify, debug, review) running in parallel.
This adds up to:
• Specs that stay alive as work progresses
• Built-in verification loops, not just code generation
• A full workspace (editor, terminal, git)
If you’ve been exploring agentic dev but didn’t want to build the orchestration layer yourself, this is definitely worth a look.
@curiouskitty Thanks for your question, it’s a good one.
Under the hood, Intent gives each task its own workspace backed by a git worktree + branch, so agents get an isolated checkout but share a single .git history for cheap branching and instant sync. The Coordinator turns your spec into a plan with explicit task dependencies, then runs specialist agents in waves: independent tasks in parallel, dependent ones after their predecessors land, all staying aligned via a living spec that updates as work is done. On the back end, Intent has the full git workflow built in (branching, commits, PRs, merge) plus auto-rebase/conflict surfacing, so you can stack or fan out branches without becoming the human traffic cop; you just review grouped changes per task/agent and ship.
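For anyone curious what the worktree mechanics look like, the same layout can be sketched with plain git: each task lives in its own branch and checkout, but all checkouts share one object store. (This is a generic git illustration, not Intent's actual internals; the branch names are made up.)

```shell
# Sketch of the layout described above, using plain git.
git init demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit --allow-empty -m "initial commit"

# One isolated checkout per task, each on its own branch:
git worktree add ../task-implement -b task/implement
git worktree add ../task-verify -b task/verify

# All checkouts share the same .git, so branching is cheap and
# commits made in one worktree are instantly visible to the others.
git worktree list
```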
Wrote up a post on how our teams collaborate within Intent. We've been able to effectively eliminate the designer/developer handoff. More details on the process, screenshots, etc: https://lukew.com/ff/entry.asp?2148
This looks very promising! Unfortunately, I can't test it on Windows yet.
I've been working with Augment in a WebStorm environment for over a year and I'm very happy with it.
However, I have two concerns regarding this next step:
a) How high will the token consumption be? I'm already using up my developer token allowance manually quite a bit. I usually have to top it up several times a month. If I imagine multiple agents working in parallel, orchestrated by even more agents, my token pool will be empty in just a few hours...?
b) I already have to closely monitor/review the activity of my one integrated agent and guide it in the right direction. Here, too, I see the risk that my incomplete/fluid spec will lead to absurdly high token consumption.
So: I think the idea is great, and I also think it will work very well.
But: Is it still affordable?
@daniel_beuter Thanks for this question. You’re right that using coordinators and subagents can introduce some overhead at first glance and use more tokens. That’s why I prepared a detailed post on how to save as many tokens as possible using Intent! It should really help you understand what kind of workflow you might want to use. Here is the post: https://www.reddit.com/r/AugmentCodeAI/comments/1r6ckev/intent_cost_tips_and_tricks/
Another important point is that when you use coordinators + subagents + a verifier, your first prompt will indeed cost more, but it can save you time and reduce the need for reprompts (so overall you’re saving both time and tokens by not having to ask again). Our verifier agents are there to make sure everything is handled correctly on the first try. Nothing is perfect, but we’ve seen better results with this approach internally.
When the flagon of GPT-5.4 was flowing freely, I gave Intent a spin and was deeply impressed. However, in some ways my view has reversed. I ran several projects in parallel with a lot of attention and, uh...intent, and I found the agent roles performed probably better than any other multi-agent harness, skill framework, orchestrator, or whatever other wrapper for the same basic proposition is being peddled around right now. That's not gut feel. I maintain a list of tools in this space (GUI-based agent orchestration) that, excluding the really held-together-by-shoestring, vibe-coded efforts, sits at 70+ examples at the moment.
For reasons of I'm Not Wealthy, I have recently upgraded to the Codex Pro plan, which means I'm not switching between Sonnet/Opus and whatever Chinese model I decide to hammer for cost:rates balance the moment Sonnet/Opus runs out or goes down. What that means is I'm back on the model I was using exclusively with Intent, but I'm not using it with Intent and I have some thoughts.
1) Intent's agents are reliable and persistent. Give them the task and go do something. It's fine. If it's a big task, they will persevere. If you use Antigravity, you will absolutely, without a shadow of a doubt, go through the inconvenience of setting up the least intuitive yolo mode in any piece of software right now, because you will be assaulted with permission prompts every 27 seconds if you do not. Intent, once you set your desired permissions, can just get on with what it needs to. If I approved a big enough plan, and was explicit about how many waves to run through, it honestly felt more like using Hermes to me in some ways than most coding agents (without setting up a Ralph Loop or similar).
2) When given UI-agnostic prompting, each of the 6 projects I ran in parallel, even on fundamentally different frameworks, delivered a consistently styled frontend, and it was ugly as all hell; its layout and user-facing content were not made for human beings. That's a prompting issue, obviously, but something to be aware of, since I don't consider this a model issue (other harnesses have been knocking UI out of the park for me with GPT-5.4). I'd imagine Opus would probably do a lot better, but I'd be tempted to run the same prompts in Intent/CC/OC/whatever to check this. The layouts were so bad they honestly created a lot of extra work for me.
3) There's whispers on the wind/subreddits and a growing body of literature that posits giving agents human roles, layering certain language of that nature into skill.md files or just your regular prompts, and generally anthropomorphising agents has a detrimental effect on their effectiveness in a way that did not used to be the case. The models are now pretty damn good and don't need that stuff.
I've seen the internal prompts from the Claude Code leak (Anthropic is life-coaching their own models), so what do I know, but, dear God, Intent was slow. Not in, like, a tok/ps manner. The agents are available to monitor and interrogate fully. Again, Intent has one of the best experiences for keeping this as [in]visible as is right for you due to a great interface. There is a constant array of spinners and streaming text showing all The Activity, but going through the app's internal chain of incredibly well-constructed agent governance is like amping up Qwen's thinking mode to 11 (a slight exaggeration, since I've received a 6-minute thinking process from Qwen-3.5 before delivering an answer to the prompt 'Hi' on a model that I run at over 100tps).
I'm currently getting much more streamlined execution from Codex with no agent frameworks, no Oh-My-Anythings, Superpowers or personal stable of agents. This is what makes me much more on the fence about recommending Intent to essentially everyone, as I was previously. However, I would hazard that (and this sits nicely with where the product is likely being aimed) this virtuous agentic cycle and internal QA-ing before reaching the human in the loop will sit nicely for enterprise customers. It feels like there is more demonstrable diligence happening in front of your eyes. If your employer is running Intent, you also aren't worrying about the cost of that diligence, since running multiple agents is expensive enough for private individuals without also worrying if they're being too 'conscientious' about their work.
I know Augment is using Opus 4.7 as the default model now, so this isn't my view on how Intent guides a particular model. It's a warning that regular users might want to consider whether multi-agent, parallel workflows are actually the right move for them, regardless of cost.
4) Yes, I'm still bulleting here. The prioritisation and delegation of agents across different tasks is superb. Every tool like this is leveraging worktrees now, but Intent is the only one where I never had to go in and examine merge conflicts, feature collisions and the like.
5) There are many nice touches worth exploring, so I'd encourage you to give Intent a try, even if you think running a bunch of agents isn't for you. You really don't have to think about it in Intent. The living spec is so nicely done. If you've experimented with a bunch of context/memory systems like I have, this is the most sophisticated version of the simplest delivery for this challenge (basically an .md file updating itself as it goes along) due to its consistency and UI.
Wow, I just came here to write "Intent is really good". What happened?
@deejtulleken Thanks for such a detailed and honest write-up! This is extremely helpful.
Very short version of how we see it:
Reliability & permissions
What you liked here is exactly by design: explicit plan + approvals up front, so agents can run long tasks without constantly interrupting you. That’s the “enterprise-grade diligence” we’re aiming for.
Ugly/inefficient UIs
Have you tried the "UI Designer" agent yet? If not, it’s worth a shot, we’d love to hear if you get better results with it.
Human-like roles, slowness, and overhead
We agree that today’s models often don’t need as much anthropomorphic ceremony, and that layered governance can add latency. Intent intentionally leans into internal QA and multi-agent coordination to get better results. We’re continuously working to improve speed, but never at the expense of the quality bar we’re aiming for.
Delegation & worktrees
We’re really glad you called this out. Robust task delegation and conflict-free worktrees are exactly where we’ve invested a lot, so it’s validating to hear that this part “just worked” for you.
Living spec & context
The living spec is meant to be that simple, inspectable “single source of truth” you described. Your reaction here is very aligned with what we’re doubling down on.
One last note: if you want to save time and tokens on smaller tasks, you can skip the orchestrator and use Developer Mode directly, which is essentially the raw access to providers. You can still spawn more agents, but you’ll avoid the orchestration overhead.
@jaysym I'm definitely planning to revisit Intent. The UI Designer agent, I specifically noticed, seemed to rarely be self-invoked in the workflows I was running. Sometimes this wasn't surprising, sometimes it was. Again, this may be down to my planning and prompting. I am still fairly unimpressed with vibe-coded frontends anywhere I see them and I've experimented with a bunch of different solutions in this space, going back to Kombai and the Figma MCP, and now to Pencil, Paper, V0, Variant, Stitch, etc. The 'look' was not as problematic with Intent as the layout structures I was seeing... maybe something to do with not getting that UI Designer pass when it should have.
I threw £100ish into my account for the rest of the month once the free GPT ended, and Intent felt a bit budget-hungry for me to use it any more at that time. Certainly not more than any other tool I've used with frontier models running multiple agents, though. Basically I felt like it warranted more spend to get what I wanted from it, but I'm juggling budget for other things I want to try too, and am sadly not that guy with 12 Claude Max accounts, being between jobs and having a wife that really doesn't want me to talk about AI any more than I already do :)
I saw Developer Mode, but as someone who switched my team from Cursor to Augment already a while back, I was specifically interested in testing Intent's orchestration capabilities at the time. On that topic I was pleased that when I ran out of budget you make it easy to BYOM. The execution flow of Intent is such that I didn't actually want to move those projects to another tool at the time. Setting up a bunch of agents with a range of different models is easy; however, there were a few hiccups that ultimately made me abandon Intent for the time being. Newly-created agents didn't always respect when there was a model override for a specific agent. I speculated that this was down to those models not playing as well in the tool as your first-class supported models.
Again, this feedback is a few weeks old, which is ancient when discussing AI toolsets, but one thing I found was that the default orchestrator agent works extremely well with frontier models as well as upper-tier open weight models. What doesn't work well in either instance is if you try to have one orchestrator pick up where another one left off because you want to switch models, which you can't (or at least couldn't) do once the agent has been initiated.
@tanner_beetge Yes. In Intent, agents are usually hierarchical, not a flat swarm:
A controller agent acts like a tech lead: it understands the main goal, breaks it into subtasks, runs specialist agents, and decides what to accept/merge.
Specialist agents do focused work (code, tests, analysis) and report back; they don’t “vote.” Their outputs are checked against the shared spec + workspace + tests/CI, and the controller, the verifier agent (plus the human) has final say.
So they “value” each other’s work through this structure and verification, not by arguing as equal peers.
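The hierarchy above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration of the controller/specialist/verifier pattern being described, not Intent's actual API; all names are invented.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    done: bool = False

def specialist(task: Task) -> str:
    # Focused work: implement, test, or analyse exactly one task.
    task.done = True
    return f"patch for {task.name}"

def verifier(task: Task, output: str, spec: set) -> bool:
    # Outputs are checked against the shared spec, not voted on by peers.
    return task.name in spec and output.startswith("patch for")

def controller(goal: str, spec: set) -> list:
    # Tech-lead role: plan subtasks, delegate, accept only verified work.
    tasks = [Task(name) for name in sorted(spec)]
    accepted = []
    for task in tasks:
        output = specialist(task)
        if verifier(task, output, spec):
            accepted.append(output)
    return accepted

merged = controller("ship login page", {"form", "auth", "tests"})
print(merged)
```

The key design point is that acceptance flows downward through the controller and verifier rather than sideways between equal peers.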
I like this. It seems very interesting. CLI code understanding and agent-driven development make a lot of sense. I'm just wondering, does it do any browser work? That's where most of my time currently gets sucked: going back to the browser and checking if everything has been implemented okay. I think that would be a great problem to tackle anyway. Love the product, and we'll give it a try. Best of luck.
@bitsandtea We know the pain of switching back and forth between your workspace and browser. That’s why we’ve made it very easy to integrate third-party tools like Playwright. With a simple installation, agents can navigate your app in real time to see if changes worked or if there are any regressions.
You can also use the integrated browser directly in Intent, so you can stay on the same page and handle both coding and testing together.
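For a sense of what that verification loop looks like, here's a minimal stdlib-only sketch: a throwaway HTTP server stands in for your app, and the loop hits each route the spec says should exist and records pass/fail instead of eyeballing the browser. A real setup would swap the urllib check for Playwright to get full browser rendering; the routes and server here are purely illustrative.

```python
import http.server, threading, urllib.request

# Tiny stand-in app: serves "ok" on every route.
class App(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # keep output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), App)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Verification loop: check each route the spec says should exist.
routes = ["/", "/settings", "/reports"]
results = {}
for route in routes:
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}{route}", timeout=5) as r:
            results[route] = (r.status == 200)
    except OSError:
        results[route] = False

server.shutdown()
print(results)
```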




Hey Bartolomeo, sorry to hear that you had a bad experience with our support. When you wrote this feedback, we were in the process of expanding our support team because our popularity grew faster than our support capacity, and that is now resolved. We now have a very responsive support team that is always ready to help! Let us know if you need anything, we will be there.