Lora - Integrate a local LLM with one line of code

by Seunghwan

Lora is a local LLM designed for Flutter. It delivers GPT-4o-mini-level performance and is built for seamless integration—call it with just one line of code.

Seunghwan

👋 Hello, Everyone!
We're excited to introduce our new product, “Lora for Flutter”.

🔥 What is Lora?
Lora is an on-device LLM with an SDK that integrates into your Flutter-based app.

🔎 Key Features of Lora
- LLM: on-device LLM with GPT-4o-mini-level performance
- SDK: Integrate seamlessly with just one line of code in Flutter
- Price: $99/month, with unlimited tokens

We’d love your feedback to make Lora a “Wow!” product.
Questions or suggestions? DM me anytime. Thank you!
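
For context, the "one line" integration in Flutter might look roughly like the sketch below. This is a hypothetical illustration only: the package name `lora` and the `Lora.generate` call are assumed names for the sake of the example, not the SDK's documented API.

```dart
// Hypothetical sketch; `package:lora/lora.dart` and `Lora.generate`
// are assumed names, not the documented Lora SDK API.
import 'package:lora/lora.dart';

Future<void> main() async {
  // The advertised one line: prompt in, on-device completion out,
  // with no network access required.
  final reply = await Lora.generate('Write a haiku about Flutter.');
  print(reply);
}
```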

Artem Stenko

Oh guys congratulations on the launch and best wishes for a successful flight!

Seunghwan

@artem_stenko Thanks a lot, Artem! Please try and leave your feedback. It'll be really helpful for growing our product. Have a good one!

Manu Hortet

@seungwhan it's really cool we are getting more local tools like this. Are you guys planning to add a feature to select other models? I'd love to try it with my local R1s.
Congrats on the launch! 🚀

Seunghwan

@sonu_goswami2 sure! It's incredibly cheap, isn't it?

Seunghwan

@manuhortet Thanks a lot, Manu. Do you mean you'd like to use our SDK with your own local R1? We hadn't thought of that, but we'll consider the feature. Thanks for the request.

Aleks 👨‍💻

It looks very promising, but it would be great to have more technical information. Is it really local, or does it need internet access to make requests? How much space does it need? Is performance good on cheap devices?

Woobeen Back

@aleksedtech Hello! Thank you for stopping by. And yes, Lora DOES NOT NEED INTERNET ACCESS to make a request & get a response. About 1.5 GB will be needed for the model. And it shows mesmerizing performance if the device has over 8 GB of RAM :)
Hansol Nam

@justonedev Thank you so much for your valuable feedback! 😊

In addition to the model benchmarks, we’ve documented detailed usage separately, and we’ll work on incorporating it into our website in the future.

Lora is a fully local LLM that works even in airplane mode! ✈️

It looks like my teammate has already shared more details—feel free to check it out! 🚀

Sergei Vorniches

@aleksedtech @woobeen_back But if it's local and doesn't require internet to work, how do you charge monthly? Is it monthly license/key renewal kind of thing?

Junghwan Seo

@aleksedtech @woobeen_back @vorniches Great question! 🤔 We've been putting a lot of thought into our revenue model. The fact that it runs locally is a major security advantage, and we're still exploring how much monitoring is appropriate while maintaining that strength. Balancing security and sustainability is definitely a challenge! 🔐💡

Sergei Vorniches

@aleksedtech @woobeen_back @peekabooooo That doesn't really clear it up :|

Mikita Aliaksandrovich

Congrats on another launch!

Seunghwan

@mikita_aliaksandrovich Thanks a lot, Mikita. I'd love to see your product, and I've set a notification for it. Wish you all the best!

Woobeen Back

@mikita_aliaksandrovich Thank you! Hope you like Lora :) And I can't wait to see your product

Mikita Aliaksandrovich

@woobeen_back Thanks!


Hansol Nam

@mikita_aliaksandrovich Thank you so much for congratulating me :)

Junghwan Seo

@mikita_aliaksandrovich Thanks so much :) Excited to see what you’re building too! 🚀 Looking forward to it! 😊

Muhammad Waseem Panhwar

@hansol_nam You guys have never failed even once—always bringing something unique that meets the current market needs. Congratulations on the launch of your new product!

Seunghwan

@hansol_nam @waseem_panhwer Thanks for the kind comment, Muhammad! I'd love to provide a WOW product to the world. Please stay tuned!

Woobeen Back

@hansol_nam @waseem_panhwer Thank you for your sweet words :) Hope you like it!

Hansol Nam

@waseem_panhwer Wow, that truly means a lot! 😊 Thank you for your kind words and constant support. We always strive to build something meaningful, and hearing this from you makes it all the more rewarding. Excited to keep pushing forward—really appreciate you being part of the journey! 🚀✨

Junghwan Seo

 @waseem_panhwer Wow, that means a lot! 🚀 We always strive to bring something fresh and valuable to the market, so hearing this really motivates us. Thanks for the support! 🙌😊

Michael Vavilov

$99 sounds reasonable! @seungwhan congrats!

Woobeen Back

@seungwhan @michael_vavilov Thank you! Lora will lessen development cost & operation cost a lot!

Seunghwan

Thanks a lot, Michael! Please try and leave your feedback. It'll be really helpful for growing our product. Have a good one!

Hansol Nam

@seungwhan @michael_vavilov Thank you for your kind words about our product that we put so much thought and effort into :) I will definitely work on solving the excessive AI cost issue using Lora!

Evak Chan

Congratulations on your launch again! It’s so cool that Lora’s performance is significantly better than the average. Also, good luck with this launch!

Seunghwan

@evakk Thanks a lot Evak! Please try and leave your feedback. It'd be really helpful for growing our product. Have a nice day!

Junghwan Seo

@evakk Thank you so much! 🎉 Your support means a lot! We're really excited about Lora’s performance and how it’s pushing the boundaries of on-device AI. Appreciate the encouragement—let’s keep innovating! 💡

程

LoRA is an efficient and flexible fine-tuning technique, particularly suitable for environments with limited resources and a need to quickly adapt to new tasks. Although it may not perform as well as full-parameter fine-tuning on certain tasks, its efficiency and usability make it a powerful tool for fine-tuning large pretrained models.

Seunghwan

@lle_crh Sure. Please try and leave your feedback. It'd be really helpful for growing our product.

Promise Uzoechi

Wow, amazing product. Indeed, the integration is one touch.

Seunghwan

@promise Thanks a lot, Promise! Please try and leave your feedback. It'd be really helpful for growing our product. Have a nice day!
Junghwan Seo

@promise Really appreciate you recognizing the thing I’m particular about (Just One Line)! lol Thank you so much! 😊

乐 李

LoRA is an efficient and flexible fine-tuning technique that is particularly suitable for resource-limited environments and scenarios where rapid adaptation to new tasks is required. Although it may not perform as well as full-parameter fine-tuning on some tasks, its high efficiency and ease of use make it a powerful tool for fine-tuning large pretrained models.

Seunghwan

@lle_lile Sure. Please try and leave your feedback. It'd be really helpful for growing our product.

Michael Talreja

@seungwhan @peekabooooo @hansol_nam @woobeen_back congratulations on the launch. This is really taking the LLM experiences and use cases to another level

Seunghwan

@michael_talreja Thanks a lot, Michael! Please try and leave feedback for us. It'd be really helpful. Have a good one!

Junghwan Seo

@seungwhan @hansol_nam @woobeen_back @michael_talreja Thank you! 🚀 We're already working on taking it to an even higher level! Stay tuned and keep cheering us on! 🔥
