Launching today

Heywa
Tappable visual stories instead of ChatGPT text walls
233 followers
From prompt to visual story in seconds. Heywa dynamically builds the right visual experience around your question, so you can browse, compare, and go deeper - without endless tabs or long chat responses.
Heywa
Hey Product Hunt 👋
I’m Milena, founder of Heywa Labs. I’ve wanted to launch this for a long time, and it's a bit surreal to finally share it here.
The origin story is simple: finding answers online is kind of boring. We spend hours a day in beautifully designed, intuitive mobile apps. They’re visual, responsive, easy to move through. And then the moment we want to learn, decide or scratch the curiosity itch, we’re back to either a list of blue links or a wall of chatbot text. It feels outdated.
Heywa is our attempt to make answering a question feel more like using a great app. You ask something - what to cook tonight, is HIIT actually good for you, what is solipsism - and instead of links or a long essay, you get a visual, structured story you can tap through. It helps you refine, it suggests follow-up actions, it lets you choose if you want to rabbit-hole or decide fast.
We're built for everyday questions. The small stuff. The random curiosity at 11pm. The decision you've been putting off. The idea that's been rattling around in your head.
Under the hood, it’s powered by what we call Generative UX. Not just generated content - the interface itself reshapes around your intent. A travel question looks different from a health question. A comparison behaves differently from open exploration. At Heywa Labs, we think this is where AI products are heading: interfaces that adapt to what you’re trying to do, not static boxes with smarter text inside.
We’re early and very open to feedback. Please drop a question below - Heywa and I are around all day to answer 👇
Milena 💚
Love this @milena_nikolic2! Someone had to change the current standard. Most humans have visual minds, and the current interfaces from the big providers feel like backwards thinking! Good luck!
Heywa
@sean_king5 thanks, really appreciate that! That was exactly the motivation behind Heywa. We spend most of our spare time in beautifully designed apps, but the moment we need an answer or to get things done online we’re back to blue links or walls of text. Hoping Heywa makes this more delightful for all the visual minds out there!!
Heywa
Super excited to be part of this launch!
Heywa is a genuinely interesting and challenging product to work on. One of the biggest technical challenges was orchestrating all the different sources together into an engaging, truthful answer.
A single user query gets decomposed into many parallel sub-queries across multiple retrieval sources, MCP tool integrations, and image sources, then the results get synthesised back into a coherent, enriched answer with relevant images. Not easy to do!
Getting all of that to stream back to the user in real-time while an LLM planner dynamically decides which tools and sources to invoke was a genuinely hard problem. Really excited to finally share what we've been building!
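To make the fan-out/synthesise flow above concrete, here's a minimal sketch in Python. It's purely illustrative: the function names (`search_web`, `fetch_images`) and the hard-coded sub-query decomposition are stand-ins for Heywa's actual planner and retrieval stack, which we haven't published.

```python
import asyncio

# Hypothetical stand-ins for real retrieval sources; in the real system
# an LLM planner decides which tools and sources to invoke.
async def search_web(q: str) -> dict:
    await asyncio.sleep(0)  # placeholder for a real network call
    return {"source": "web", "query": q, "results": [f"result for {q}"]}

async def fetch_images(q: str) -> dict:
    await asyncio.sleep(0)
    return {"source": "images", "query": q, "results": [f"image for {q}"]}

async def answer(user_query: str) -> dict:
    # 1. Decompose the query into sub-queries (hard-coded here; the real
    #    system uses an LLM planner).
    sub_queries = [f"{user_query} overview", f"{user_query} comparison"]

    # 2. Fan out to all sources in parallel.
    tasks = [search_web(q) for q in sub_queries] + [fetch_images(user_query)]
    results = await asyncio.gather(*tasks)

    # 3. Synthesise the results into one structured answer. A production
    #    system would stream cards to the client as each one is ready.
    return {"query": user_query, "cards": results}

result = asyncio.run(answer("best air fryer"))
print(len(result["cards"]))  # 3
```

The key design point is that the sub-queries are independent, so the slowest source sets the latency floor rather than the sum of all sources.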
Congrats on the launch! - qq, how does heywa decide which structure a story should have (cards, comparisons, steps etc) for different types of questions?
Heywa
@tomhennigan thanks, that's a great question - we try to match the structure of the answer to the intent of the query, rather than returning the same format every time.
Roughly speaking, the system first infers what kind of problem the user is trying to solve. For example:
• Decision questions: comparisons (e.g. “Which air fryer should I buy?”)
• How-to questions: step-by-step cards (e.g. “How to make ramen broth”)
• Exploration topics: swipeable collections (e.g. “Best hikes in the Dolomites”)
• Concept questions: structured explainers (e.g. “What is solipsism?”)
Once we detect the intent, we generate a story schema (basically the layout + card types) and then fill it with content. So instead of a single block of text, you get something closer to a mini app tailored to that question.
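As a rough illustration of the intent-to-schema idea, here's a toy sketch. The keyword heuristic and the card-type names are assumptions for the example only; the real system infers intent with an LLM rather than string matching.

```python
# Illustrative intent -> story-schema mapping. Card type names are
# hypothetical, not Heywa's actual schema vocabulary.
INTENT_SCHEMAS = {
    "decision": ["intro", "comparison_table", "verdict"],
    "how_to": ["intro", "step_card", "step_card", "summary"],
    "exploration": ["cover", "swipe_card", "swipe_card", "swipe_card"],
    "concept": ["definition", "explainer_card", "examples"],
}

def detect_intent(query: str) -> str:
    # Toy keyword heuristic standing in for an LLM classifier.
    q = query.lower()
    if "which" in q or " vs " in q or "should i" in q:
        return "decision"
    if q.startswith("how to") or q.startswith("how do"):
        return "how_to"
    if q.startswith("what is") or q.startswith("what's"):
        return "concept"
    return "exploration"

def build_schema(query: str) -> dict:
    intent = detect_intent(query)
    # The schema (layout + card types) is generated first, then each
    # card is filled with content.
    return {"query": query, "intent": intent, "cards": INTENT_SCHEMAS[intent]}

print(build_schema("Which air fryer should I buy?")["intent"])  # decision
print(build_schema("What is solipsism?")["intent"])  # concept
```

Separating "pick the schema" from "fill the schema" is what lets the same pipeline produce a comparison for one query and a step-by-step guide for another.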
We’re still improving this - sometimes we get the structure wrong - but over time the goal is that the interface adapts naturally to the kind of answer you need.
Is it limited to only one picture per answer, or can there be more? Also, it's only pictures, not videos, right?
Heywa
@viktorgems thanks! It’s definitely not limited to one picture. Most stories actually include multiple images across cards, so as you swipe through you get a more visual understanding of the topic (for example different dishes in a recipe list, different travel spots, steps in a process, etc.).
Right now we’re mostly using images, but the format itself isn’t limited to that. We feature some short background videos, as well as relevant TikToks if available in our search index. We are also experimenting with a rotating gallery on relevant cards (e.g. if you search for "top cafes in Camden" it will show multiple images slowly rotating in the background of each place-specific card).
The goal is definitely for the answer to feel more like browsing a small interactive guide than reading a wall of text.
Heywa
Lovely to see our work out in the world :)
From the design perspective, one of the biggest challenges in developing Heywa has been creating a system and logic that lets us tell a good-quality visual story for (almost) any question.
As Milena mentioned, Generative UX is our name for our approach to this. It's about figuring out what the user wants, deciding the best way to tell a story that answers that query, then deciding how to display each step in a way that flows nicely and gets to the point.
We've started out focussing on the story format because it's a constrained canvas. We can refine and improve our approach without getting drowned in the scale and complexity of a full webpage or app (where I think a lot of products are falling down at the mo). Once we've got that nailed, I'm looking forward to introducing more interactivity and variety!
Love this notion that prompt engineering is a UI failure. Especially for a visual user the idea of this is amazing! I can see how this leads to higher conversion and more effective user outcomes. Super excited to see how this evolves 🚀🚀
Heywa
@mariarotilu Thank you! And yes - that idea resonated with us a lot while building Heywa. If you need to learn a new “prompting language” to get a good answer, that’s often a sign the interface isn’t doing enough of the work.
Our goal is that you can just ask naturally, and the system figures out the best way to present the answer - visually, structured, and easy to explore.
Really appreciate the support and curiosity about where this could go 🚀
Messenger Hunt
Congrats on the launch!
Heywa
@ire_aderinokun thanks so much Ire - we couldn't be more excited to share Heywa with the world! Hope you're enjoying it - feedback welcome any time, and let us know what queries you find it most useful for. So far we've heard from early users that they love looking up recipes or exploring history rabbit holes.