
Imbue
We build AI that works for humans
133 followers
Imbue develops tools that help people think, create, and build. We believe technology should be loyal to the user and aligned with human goals.
We share many of our tools openly because we believe progress in AI should be collaborative and developer-driven.
This is the 2nd launch from Imbue.
Vet
Launched this week
Vet is a fast and local code review tool open-sourced by the Imbue team. It’s concise where others are verbose, and it catches more relevant issues.
Vet verifies your coding agent's work by considering your conversation history to ensure the agent's actions align with your requests. It catches the silent failures: features half-implemented, tests claimed but never run.
It reviews full PRs too, catching logic errors, unhandled edge cases, and deviations from stated goals.
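The silent-failure check described above — comparing what the agent claimed in conversation against what it actually ran — can be sketched roughly as follows. This is a conceptual illustration only, not Vet's actual implementation; the function and patterns are made up for the example:

```python
import re

def find_unverified_test_claims(conversation, shell_log):
    """Flag agent messages that claim tests passed when no test
    command appears in the recorded shell history.

    conversation: list of agent message strings
    shell_log: list of commands the agent actually ran
    """
    # Messages that assert the tests succeeded.
    claim_pattern = re.compile(r"\btests?\b.*\b(pass(ed|ing)?|green)\b", re.I)
    # Commands that would actually exercise a test suite.
    test_cmd_pattern = re.compile(r"\b(pytest|go test|cargo test|npm test)\b")

    claims = [m for m in conversation if claim_pattern.search(m)]
    ran_tests = any(test_cmd_pattern.search(cmd) for cmd in shell_log)
    # A claim with no matching command is a "silent failure" candidate.
    return [] if ran_tests else claims

flagged = find_unverified_test_claims(
    ["Implemented the parser.", "All tests pass."],
    ["git diff", "git commit -m 'parser'"],  # no test command was run
)
# flagged contains the unverified claim "All tests pass."
```

A real verifier would reason over the full conversation with a language model rather than regexes, but the core idea — cross-checking claims against recorded actions — is the same.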
Free
Launch Team


IFTTT
This is the missing piece in the AI coding workflow. We all got comfortable letting agents write code, but verifying what they produce is still mostly manual eyeballing. Love that it's open source too - makes it way easier to trust and customize for different codebases. What's the performance overhead like on larger repos?
Imbue
@emad_ibrahim Thanks for the kind words! In general, Vet gets slower and more expensive as diffs and codebases grow, up to the point where the input fills the model's context window. The upper bound on cost and time is quite low: I'd expect at most around 15 seconds in the base configuration on the largest diffs and codebases, increasing with the use of agentic verifiers.
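The reply above implies cost plateaus once the input reaches the model's context window. A minimal sketch of why, assuming a simple character-based token heuristic (not Vet's code; names are hypothetical):

```python
def cap_diff_to_budget(diff_text, max_tokens, chars_per_token=4):
    """Truncate a diff to fit a model's context window.

    Uses a rough chars-per-token heuristic; real tools would count
    tokens with the model's tokenizer. Illustrates why cost plateaus:
    input beyond the window is dropped (or summarized), so the model
    never sees more than a bounded amount of text.
    """
    budget_chars = max_tokens * chars_per_token
    if len(diff_text) <= budget_chars:
        return diff_text, False  # fits; nothing dropped
    return diff_text[:budget_chars], True  # capped at the window

# A 10,000-char diff against a 2,000-token budget gets capped.
capped, truncated = cap_diff_to_budget("x" * 10_000, max_tokens=2_000)
```

Beyond that cap, a larger repo costs no more per review, which is why the time and expense have a low upper bound.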
Told
The silent failures framing is sharp — half-implemented features and unclaimed value are the real activation killers in most B2B products, not churn from explicit dissatisfaction. Curious how Vet surfaces these gaps: is it correlating usage data against the expected activation path, or more of a qualitative signal from user sessions? The distinction matters because one tells you what's broken and the other tells you why. Would be interested in how this fits into a team's existing analytics stack — does it layer on top of tools like Mixpanel or Amplitude, or replace them for the activation layer?
The 'catches silent failures' angle is what gets me — half-implemented features and tests that were claimed but never actually run are exactly the kind of things that slip through normal code review because reviewers trust that the agent did what it said. How does it handle situations where the conversation history is ambiguous, or the original request was vague to begin with?
Curious how Vet handles the audit trail when an agent makes changes across multiple repos: do you log at the diff level, or capture the full agent reasoning chain too? Trying to figure out where the boundary between "agent decision" and "human accountability" sits in your model.
I tried this with Clawdbot and it successfully caught a 'silent failure' where the agent skipped a test. I can't get it to call Vet every time, though. Do you know how I can prompt it to always call Vet before reporting a task as complete?
Super interesting! We'll try it out for our vibecoding platform at matterhorn.so!
IFTTT
@abhinavramesh let us know what you think! You’re welcome to also share feedback, raise an issue, or help contribute to the open-source project: https://github.com/imbue-ai/vet
🙌