Use the LLM of your choice
Choose from a range of LLMs, so you can use the one that works best for you and complies with your organization’s AI governance rules. Use cloud models like GPT-4o, Gemini, or Claude, or on-device models like Llama 3 or Granite, accelerated by your GPU. Easily switch models mid-conversation if your needs change.
Try Pieces for free
Trusted by teams everywhere
Choose from 23 different LLMs
Pieces supports 23 different LLMs, both cloud and on-device, so you can choose the one that works best for you. We support the most popular cloud models, including Claude 3.5 Sonnet, OpenAI GPT-4o, and Gemini 1.5 Pro, as well as the top local models, such as Llama 3, Granite, and Gemma. As new models come out, we bring them to Pieces as quickly as we can.
Run LLMs on device
Pieces supports multiple on-device LLMs, powered by Ollama to make the most of your hardware, including NVIDIA GPUs and Apple Silicon Macs. Use these on-device LLMs to chat with Pieces when you are offline, or in security- and privacy-focused environments.
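If you want to confirm which local models are ready before going offline, a minimal sketch against the Ollama REST API might look like the following. This is not a Pieces API; the port and /api/tags endpoint are standard Ollama defaults, and the script simply lists whatever models have already been pulled on your machine.

```python
import json
from urllib.request import urlopen

# Ollama's local server listens on port 11434 by default and exposes
# GET /api/tags to list the models already pulled onto this machine.
OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"


def list_local_models() -> list[str]:
    """Return the names of on-device models Ollama has available, or [] if unreachable."""
    try:
        with urlopen(OLLAMA_TAGS_URL, timeout=5) as resp:
            data = json.load(resp)
    except OSError:
        # Ollama is not running locally (or is listening on a different address).
        return []
    return [model["name"] for model in data.get("models", [])]


if __name__ == "__main__":
    models = list_local_models()
    if models:
        print("On-device models ready for offline use:", ", ".join(models))
    else:
        print("No local models found; pull one first, e.g. `ollama pull llama3`.")
```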
Switch LLMs mid-conversation
Pieces lets you change LLMs mid-conversation as your needs change. Started a chat with Claude, but need to go offline for a flight? Switch to a local LLM and continue the chat with all of your conversation history and context intact. Don’t like Gemini’s response for your use case? Switch to GPT-4o and compare the two.
Comply with your organization’s AI governance rules
AI governance is gaining importance as organizations look to limit how much of their IP, customer data, and other content is shared with AI models. Because Pieces supports so many cloud and on-device LLMs, it is easy to align your needs as a developer with your organization’s requirements. Bring-your-own-model support, with models deployed to your own infrastructure, is coming soon.
1 million+ saved materials
17 million+ associated points of context
5 million+ copilot messages
Dive into the Pieces technical documentation to explore everything our platform offers
Explore
Learn how to optimize your workflow with Long-Term Memory, on-device AI, and switching between LLMs
Find solutions to common issues
Access additional tools, SDKs, and APIs for advanced integration
See what else we offer
With hundreds of tools competing for your attention, Pieces is the OS-level AI companion redefining productivity for software development teams.