Choose from multiple LLMs
Choose from a range of large language models that work best for you and comply with your organization's AI governance rules. Use cloud models, or run on-device models such as Granite accelerated by your GPU. Easily switch between LLMs mid-conversation as your needs change.
Try Pieces for free
Trusted by teams everywhere
Choose from more than 40 LLMs
Pieces supports more than 40 different large language models, both cloud and on-device, so you can choose the LLM that works best for you. We support the most popular cloud models, including Claude 3.5 Sonnet, OpenAI GPT-4o, and Gemini 1.5 Pro, as well as top local models such as Llama 3, Granite, and Gemma. As new models come out, we bring them to Pieces as quickly as we can.
Run LLMs on device
Pieces supports multiple on-device LLMs, powered by Ollama, so you can make the most of your hardware, including NVIDIA GPUs and Apple Silicon Macs. Use these on-device LLMs to chat with Pieces when you are offline, or in security- or privacy-focused environments.
Switch LLM mid-conversation
Pieces lets you change the LLM mid-conversation as your needs change. Started a chat with Claude, but need to go offline for a flight? Switch to a local LLM and continue the chat with all your conversation history and context intact. Don't like the response from Gemini for your use case? Switch to GPT-4o and compare the responses.
Comply with your organization's AI governance rules
AI governance is gaining importance as organizations look to limit how much of their IP, customer data, or other content is shared with AI models. Because Pieces supports so many cloud and on-device LLMs, it is easy to align your developer needs with the requirements of your organization. Bring-your-own-model support, with models deployed to your own infrastructure, is coming soon.
1 million+ saved materials
17 million+ associated points of context
5 million+ copilot messages
Dive into the Pieces technical documentation to explore everything our platform offers
Explore
Learn how to optimize your workflow with Long-Term Memory, on-device AI, and switching between LLMs
Find solutions to common issues
Access additional tools, SDKs, and APIs for advanced integration
See what else we offer
With hundreds of tools competing for your attention, Pieces is the OS-level AI companion redefining productivity for software development teams.
Frequently asked questions
