Forums
Your AI agent just wrote 5,000 lines of code. How do you know it actually works?
Genuinely curious what the community does here.
We've been talking to hundreds of teams building with Cursor, Claude Code, and other agentic tools, and the honest answer from most of them is: "We just run it and hope."
Some do a quick manual click-through. Some write a few spot checks. Some just ship and wait for users to find the bugs.
We built TestSprite to solve exactly this: autonomous testing that runs from your PRD and codebase. But I'm curious what your actual workflow looks like before you merge.
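For concreteness, here's a minimal sketch of the kind of "spot check" mentioned above. It's just pytest, nothing TestSprite-specific, and `apply_discount` is a hypothetical stand-in for a function an agent might have generated; the tests pin down the behavior the PRD actually promises.

```python
import pytest


def apply_discount(price: float, rate: float) -> float:
    """Hypothetical agent-written function under test."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return price * (1.0 - rate)


def test_happy_path():
    # One end-to-end check of the main flow the spec promises.
    assert apply_discount(100.0, 0.25) == 75.0


def test_edge_cases():
    # Edges are where generated code most often breaks.
    assert apply_discount(100.0, 0.0) == 100.0
    assert apply_discount(100.0, 1.0) == 0.0
    with pytest.raises(ValueError):
        apply_discount(100.0, -0.5)  # invalid rate must be rejected
```

Even a handful of checks like this beats "run it and hope," but it clearly doesn't scale to 5,000 generated lines, which is the gap the question is really about.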
What Pain Point Are You Solving, and How Did You Discover It?
We're all builders here, which usually means at some point we looked at something clunky, slow, or frustrating and thought, "there has to be a better way." Most products don't start with a grand vision; they start with irritation, curiosity, or firsthand pain.
I'd love to learn more about how others here have navigated that journey:
How did you uncover the problem you decided to work on?
What signals told you this problem was worth solving?
How did you validate (if at all) whether people would actually pay for a solution?
Has your product stayed true to the original problem, or did it evolve into something different?
What surprised you the most along the way?
Y Combinator offers 10 startup ideas it wants to fund (Spring 2026)
As usual, Y Combinator has come up with segments it considers worth investing in:
1. Cursor for Product Managers
2. AI-Native Hedge Funds
3. AI-Native Agencies
4. Stablecoin Financial Services
5. AI for Government
6. Modern Metal Mills
7. AI Guidance for Physical Work
8. Large Spatial Models
9. Infra for Government Fraud Hunters
10. Make LLMs Easy to Train

