🎉 New: Introducing the lmstudio-python and lmstudio-js SDK libraries!

LM Studio

Your local AI toolkit.

Download and run Llama, DeepSeek, Mistral, Phi on your computer.

Beginner Friendly, with Expert Features

Easy to start, much to explore

Discover and download open source models, use them in chats or run a local server

Easily run LLMs like Llama and DeepSeek on your computer. No expertise required
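The local server mentioned above exposes an OpenAI-compatible HTTP API (by default on localhost port 1234, per LM Studio's docs). As a minimal sketch, here is how a chat request body for that API can be built with only the standard library — the model name is a placeholder, not a required value; use whichever model you have loaded:

```python
import json

# Default local endpoint described in LM Studio's docs; adjust if you changed the port.
API_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "llama-3.2-1b-instruct") -> bytes:
    """Encode an OpenAI-style chat completion request body (model name is a placeholder)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")

body = build_chat_request("What is a Capybara?")
# With the server running, POST `body` to API_URL with a
# Content-Type: application/json header to receive the completion.
print(json.loads(body)["messages"][0]["content"])  # → What is a Capybara?
```

The same request works from any OpenAI-compatible client by pointing its base URL at the local server.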

LM Studio app screenshot

Cross-platform local AI SDK

LM Studio SDK: Build local AI apps without dealing with dependencies

The SDK is available for Python and TypeScript, and covers LLM chat, agentic tools, structured output, and model management. Install the Python SDK using pip:

pip install lmstudio

For example, streaming a chat response in Python:
import lmstudio as lms

llm = lms.llm() # Get any loaded LLM

prediction = llm.respond_stream("What is a Capybara?")

for token in prediction:
    print(token, end="", flush=True)
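Structured output, one of the SDK features above, means constraining the model to emit JSON that conforms to a schema you supply. As a server-free sketch of the idea, here is a hand-rolled check that a reply matches a simple schema, using only the standard library — the schema, the sample reply, and the `matches_schema` helper are illustrative, not part of the SDK:

```python
import json

# Illustrative schema: the shape we want the model's reply to take.
book_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "year": {"type": "integer"},
    },
    "required": ["title", "year"],
}

def matches_schema(data: dict, schema: dict) -> bool:
    """Minimal structural check against the schema above (not full JSON Schema)."""
    type_map = {"string": str, "integer": int}
    if schema["type"] == "object" and not isinstance(data, dict):
        return False
    for key in schema.get("required", []):
        if key not in data:
            return False
    for key, spec in schema["properties"].items():
        if key in data and not isinstance(data[key], type_map[spec["type"]]):
            return False
    return True

# A well-formed reply that a schema-constrained model would be forced to produce.
reply = json.loads('{"title": "The Hobbit", "year": 1937}')
print(matches_schema(reply, book_schema))  # → True
```

In the real SDK the schema is passed to the prediction call and enforced during generation, so you never need to validate after the fact; consult the SDK docs for the exact parameter.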

Frequently Asked Questions

Does LM Studio collect any data?

TL;DR: The app does not collect data or monitor your actions. Your data stays local on your machine.

No. One of the main reasons for using a local LLM is privacy, and LM Studio is designed for that. Your data remains private and local to your machine. Visit the Offline Operation page for more.

What are the minimum hardware and software requirements?

LM Studio works on M1/M2/M3/M4 Macs, as well as Windows (x86 or ARM) and Linux PCs (x86) with a processor that supports AVX2. Visit the System Requirements page for the most up-to-date information.

Which models can I run with LM Studio?

You can run any compatible Large Language Model (LLM) from Hugging Face, in GGUF (llama.cpp) format as well as in MLX format (Mac only). You can also run GGUF text embedding models. Some models might not be supported, while others might be too large to run on your machine. Image generation models are not yet supported. See the Model Catalog for featured models.

Is LM Studio open source?

The LM Studio GUI app is not open source. However, LM Studio's CLI (lms), Core SDK, and our MLX inferencing engine are all MIT-licensed and open source. Moreover, LM Studio makes it easy to use leading open source libraries such as llama.cpp without needing the know-how to compile or integrate them yourself.

What is llama.cpp?

llama.cpp is a fantastic open source library that provides a powerful and efficient way to run LLMs on edge devices. It was created and is led by Georgi Gerganov. LM Studio leverages llama.cpp to run LLMs on Windows, Linux, and macOS.

How can I use LM Studio at work?

Please fill out the LM Studio @ Work request form and we will get back to you as soon as we can.

Are you hiring?

Yes! We are always looking for exceptional builders to join our team. If you're interested, please send an email to apply@lmstudio.ai with a blurb about yourself and a relevant project you've worked on.


Get Started with LM Studio