Challenges of Building AI Tools That Truly “Understand” User Intent

Tim Liao
In AI product development, interpreting user intent is a critical challenge, especially when users are exploratory or vague. How do you guide users effectively without overwhelming them?

For a long time, most SaaS products have been “expert systems”: the workflows and user inputs were strictly predesigned by developers. If you wanted to tell the system your preferences, you’d fill out a form or select predefined options. Essentially, the user adapted to the tool.

But in the AI era, I see a fundamental shift: the system should actively adapt to each user. Instead of a rigid, form-based flow, AI-driven products can accommodate fluid, natural inputs, letting users express their intent however they like (see the sketch after the questions below). This is a step beyond “user-friendly”; it’s what I call “human-like” design, where the software meets people on their terms.

Join the discussion: I’d love to hear from you! If this perspective resonates with you, feel free to share your thoughts, ideas, or examples in the comments. To spark discussion, here are a few questions for the great makers and marketing minds on Product Hunt that I’m curious about:

- How do you build systems that guide users without overwhelming them, especially when users aren’t sure what they want?
- Have you implemented frameworks or strategies to make AI systems adapt to users in real time? What worked, and what didn’t?
- What’s your approach to validating whether your AI truly captures user intent, instead of forcing users to adapt to preset flows?
- Do you know of any products or tools that have successfully achieved this kind of “human-like” adaptability?
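Here’s a rough sketch of the kind of shift I mean: the same structured preferences a form would collect, extracted from a free-form message instead. The travel-planning fields and the `call_llm` helper are purely hypothetical placeholders:

```python
import json
from dataclasses import dataclass
from typing import Optional


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API the product actually uses."""
    raise NotImplementedError("plug in your model call here")


@dataclass
class TripPreferences:
    """The 'form' the user never has to fill out (field names are hypothetical)."""
    destination: Optional[str] = None
    budget_per_night: Optional[int] = None
    travel_month: Optional[str] = None


EXTRACTION_PROMPT = (
    "Extract the user's travel preferences from the message below. "
    "Return JSON with keys destination, budget_per_night, travel_month; "
    "use null for anything the user did not state.\n\nMessage: {message}"
)


def extract_preferences(message: str) -> TripPreferences:
    # Free-form input goes in; the same structured fields a form would gather come out.
    raw = call_llm(EXTRACTION_PROMPT.format(message=message))
    data = json.loads(raw)
    return TripPreferences(
        destination=data.get("destination"),
        budget_per_night=data.get("budget_per_night"),
        travel_month=data.get("travel_month"),
    )


# The product can then ask only about the fields that are still None,
# instead of walking every user through the full form up front.
```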

Replies

Daniel Abramov
I've run into similar questions while working on my product, where users write a free-form search request and specify any details they need. I had to understand the intention behind each search prompt: which details are present, and which requirements are optional versus necessary.

My approach is that I **do not** try to use the LLM to do all of the analysis. Instead, I use the LLM for what's often called "feature extraction": simply put, it serves as a kind of "smart pattern matching" tool that extracts the information I want to know. Then I typically write my own algorithms/logic on top of that. So in my case, I ended up with a processing pipeline that roughly looks like this: "shallow analysis (LLM)" -> "feature extraction (LLM)" -> "normalization, ranking and processing (custom logic)".

That said, it does not give the universal "human-like" adaptability you mentioned, since the custom processing logic needs to be written for a specific use case.

TL;DR: I personally use the LLM for tasks where it excels, and replace its weakest points ("real reasoning", "thinking") with my own, more deterministic logic/algorithms on top. I believe that for **reliable** (predictable) behavior, LLM results should be post-processed with "classical" machine learning or NLP techniques.
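To make that concrete, here's a rough sketch of the kind of pipeline I mean; the apartment-search domain, the field names, and the `call_llm` helper are hypothetical placeholders, not my actual implementation:

```python
import json
from typing import Any, Dict, List


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API the product actually uses."""
    raise NotImplementedError("plug in your model call here")


def shallow_analysis(query: str) -> bool:
    """Step 1 (LLM): is this a search request we can handle at all?"""
    answer = call_llm(f"Is this an apartment search request? Answer yes or no.\n\n{query}")
    return answer.strip().lower().startswith("yes")


def extract_features(query: str) -> Dict[str, Any]:
    """Step 2 (LLM): 'smart pattern matching' that turns the prompt into structured fields."""
    raw = call_llm(
        "Extract city, max_price and must_have (a list of required features) "
        f"from this request. Return JSON; use null for anything not mentioned.\n\n{query}"
    )
    return json.loads(raw)


def rank_listings(features: Dict[str, Any], listings: List[dict]) -> List[dict]:
    """Step 3 (custom logic): deterministic normalization, filtering and ranking."""
    max_price = features.get("max_price") or float("inf")
    required = set(features.get("must_have") or [])
    candidates = [
        listing for listing in listings
        if listing["price"] <= max_price and required.issubset(listing["features"])
    ]
    return sorted(candidates, key=lambda listing: listing["price"])  # boring, predictable rule


def handle_query(query: str, listings: List[dict]) -> List[dict]:
    if not shallow_analysis(query):
        return []
    return rank_listings(extract_features(query), listings)
```

The last step is where the predictability comes from: the filtering and ranking rules are plain, hard-coded logic, so the same extracted features always yield the same results.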