ChattyUI

Run open-source LLMs locally in the browser using WebGPU

Open-source, feature-rich Gemini/ChatGPT-like interface for running open-source models (Gemma, Mistral, Llama 3, etc.) locally in the browser using WebGPU. No server-side processing: your data never leaves your PC!

Jakob Hoeg
This project is meant to be the closest attempt at bringing the familiarity and functionality of popular AI interfaces such as ChatGPT and Gemini into an in-browser experience. You're more than welcome to contribute: https://github.com/addyosmani/ch...
Addy Osmani
We started this project because we're big fans of fast, privacy-friendly, on-device AI. We think there's a lot of opportunity here for chat interfaces and beyond. Chatty is built using WebLLM, Radix, Xenova Transformers, and Next.js. We're excited for you to try it out and would love to hear any feedback!
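As a rough illustration of the WebLLM piece of that stack, here is a minimal sketch of client-side chat inference, assuming the `@mlc-ai/web-llm` package; the model ID and options are illustrative examples, not Chatty's actual code:

```ts
// Minimal sketch: in-browser chat inference with WebLLM.
// The model ID is illustrative; any prebuilt WebLLM model ID works.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function demo() {
  // Weights are downloaded once and cached by the browser;
  // all inference afterwards runs locally on the GPU via WebGPU.
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // OpenAI-style chat completions API, served entirely client-side.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Explain WebGPU in one sentence." }],
  });
  console.log(reply.choices[0]?.message.content);
}

demo();
```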
Tony Lea
This is impressive, guys. I feel like you should be ranking higher than your current position here on Product Hunt. Is this using WASM under the hood?
Jakob Hoeg
@tonylea thank you for the kind words, Tony. And you’re correct about the WASM part!
Addy Osmani
@tonylea @jakob_hoeg Thank you! Yes, it's using WASM and WebGPU behind the scenes. Hope folks enjoy Chatty!
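For anyone curious how an app can check for that capability before loading a model, here is a small sketch of WebGPU feature detection using the standard `navigator.gpu` API (not taken from Chatty's source; assumes `@webgpu/types` for the TypeScript typings):

```ts
// Sketch: check whether the browser can run models on-device via WebGPU.
async function hasWebGPU(): Promise<boolean> {
  if (!("gpu" in navigator)) return false; // browser lacks the WebGPU API
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null; // null means no usable GPU adapter was found
}

hasWebGPU().then((ok) =>
  console.log(ok ? "WebGPU available: run inference locally"
                 : "No WebGPU: show a fallback message")
);
```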
Toshit Garg
Congrats on the launch of ChattyUI...
Jakob Hoeg
@toshit_garg thank you!
Addy Osmani
@toshit_garg Thanks so much!
Albert
Congrats on the launch. Running LLMs in-browser using WebGPU seems efficient for privacy and performance. How do you see ChattyUI impacting developers who are specifically looking to integrate similar functionality into their projects?