Build Your Own Open Source Copilot with Pieces
Build your own open source copilot with the Pieces OS Client, with all of the context and large language model support you will need.
From last year into the beginning of 2024, copilot functionality and access have increased 10-fold as copilots have become more prevalent across the applications we interact with in our workflows. As each new copilot emerged, we at Pieces noticed a trend: most copilots belong to a specific tool, leading to tedious copying and pasting between them to maintain a unified understanding of your workflow, which adds to the cost of context switching.
We saw a world without any tool that would let users consume local Large Language Models like Mistral and Llama 2, as well as cloud LLMs like GPT-4 and Gemini, all from the same location. Since it’s often unclear which is the best LLM for coding, we wanted to give users options. Pieces works as an ecosystem of plugins (for VS Code, Obsidian, JetBrains, Chrome, and more), all supported by Pieces OS, which handles the logic and language models you consume in Pieces Desktop or the plugins that connect to it.
After developing our proprietary Pieces Copilot, we decided to support an open-source copilot that developers can build on their own to leverage various LLMs, set custom context for more personalized responses, and integrate with their favorite tools. This guide will help you do just that, serving as an open-source alternative to GitHub Copilot and other AI tools.
What is Pieces OS Client?
Pieces OS Client is a flexible database that provides a custom open-source copilot for saving, generating, and iterating on code snippets across your development workflow. It comes with built-in Local Large Language Models that can be used to ask questions and generate powerful code solutions no matter what dev tool you’re working in.
Since Pieces OS contains a large amount of useful tooling, our first step was to create a number of ways for developers to access and use it. To meet that need, we have taken all of the functionality added to Pieces OS over the last two years and bundled it into different language packages.
What Can My Open Source Copilot Do?
Here is a high-level list of some of the features you have access to when building your open-source copilot:
1. Switch between the following Large Language Models for your open-source copilot conversations. You can download these models on-device and use them without an internet connection, or you can consume them in the cloud:
Local: Mistral AI (CPU + GPU)
Local: Llama 2 (CPU + GPU)
Cloud: GPT-3.5 Turbo + GPT-3.5 Turbo 16k + GPT-4 (you can also use your own OpenAI API key)
Cloud: Gemini
Cloud: Chat Bison + Code Chat Bison with PaLM 2
2. Add context to your conversations and questions to give your open-source code copilot additional seeded information to answer your questions without having to train a model on your data
3. Ask questions about a specific code snippet that you have saved in the past
You can perform several other actions through the Pieces OS Client and can discover the other endpoints and SDKs through the Open Source by Pieces Repo.
Building your Open Source Copilot
When first using the SDK, it can be overwhelming to try to understand all of the classes available to you, so here is a brief introduction to some of the more powerful endpoints that are related to creating your own open-source copilot.
Universal Steps
1. Download the SDK in the language that you prefer; this guide follows the TypeScript example using WebSockets and its companion repo
2. Be sure to install Pieces OS and get it running on your machine
Part 1: Creating Your Open Source Copilot
Adding the needed functionality is straightforward here, as you can create your first message without connecting or registering your application. If you want to send a single message and get back a response, you can do so with little to no configuration or data added:
You can see some of the configuration you will need to send the message in the CopilotStreamController on .askQGPT, but you can use the Pieces.QGPTStreamInput to send the message:
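Here is a minimal sketch of that flow, assuming the published TypeScript SDK (@pieces.app/pieces-os-client) and the qgpt/stream WebSocket that Pieces OS serves on localhost:1000; the stream output fields follow the SDK’s generated types, so double-check them against your SDK version:

```typescript
import * as Pieces from '@pieces.app/pieces-os-client';

// Pieces OS serves the copilot stream over a local WebSocket.
const ws = new WebSocket('ws://localhost:1000/qgpt/stream');

ws.onopen = () => {
  // A single question with no added context: `relevant.iterable` stays empty.
  const input: Pieces.QGPTStreamInput = {
    question: {
      query: 'How do I debounce a function in TypeScript?',
      relevant: { iterable: [] },
    },
  };
  ws.send(JSON.stringify(input));
};

ws.onmessage = (event) => {
  // Each frame is a QGPTStreamOutput chunk of the generated answer.
  const output = JSON.parse(event.data) as Pieces.QGPTStreamOutput;
  const text = output.question?.answers?.iterable[0]?.text;
  if (text) console.log(text);
};
```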
You can read more about handleMessages, updating the UI, and generating the response in the full article Build Your Own Copilot in 10 Minutes.
Part 2: Downloading Local Large Language Models like Mistral AI
At the beginning of the article, we mentioned all of the different models that are accessible through Pieces OS Client, and that one of the benefits of using the Client for local LLM management is being able to control downloading, deleting, and using models interchangeably.
In a few steps, you can access the models snapshot and select the model you want, along with all of the model.foundation parameters needed to identify the model during the download process, or the readable name of the model. Here is what that could look like in a file like modelsProgressController.tsx as the constructor:
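A minimal sketch of that constructor, assuming the SDK exposes the models snapshot through ModelsApi().modelsSnapshot() (the progress map is our own illustrative field):

```typescript
import * as Pieces from '@pieces.app/pieces-os-client';

export class ModelProgressController {
  // Resolves to the snapshot of every model Pieces OS knows about,
  // including each model's foundation and readable name.
  models: Promise<Pieces.Models>;

  // Illustrative: model id -> download progress, filled in as downloads run.
  downloadProgress = new Map<string, number>();

  constructor() {
    // Fetch the snapshot once up front; callers await `this.models`
    // whenever they need to identify a model for download.
    this.models = new Pieces.ModelsApi().modelsSnapshot();
  }
}
```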
Then you could properly set, select, and use the models in your main() entry point, passing the model.id into the ModelApi().modelSpecificModelDownload endpoint like the following. If we were going to first get the model information and then set the selected model for download, it would look like this:
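A sketch of that main() flow under the same assumptions; the mistralCpu name is ours, and ModelFoundationEnum.Mistral7B is the foundation value we believe the SDK uses for Mistral, so verify it against your SDK version:

```typescript
async function main() {
  // Get the model information from the snapshot.
  const models = await new Pieces.ModelsApi().modelsSnapshot();

  // Set the selected model: here, the CPU build of Mistral.
  const mistralCpu = models.iterable.find(
    (model) =>
      model.foundation === Pieces.ModelFoundationEnum.Mistral7B && model.cpu
  );
  if (!mistralCpu) throw new Error('Mistral (CPU) not found in the snapshot');

  // Pass the model id into the download endpoint.
  await new Pieces.ModelApi().modelSpecificModelDownload({
    model: mistralCpu.id,
  });
}
```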
Then, after the model has been downloaded, you can pass your model ID into the QGPTStreamInput, now including the model parameter:
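Continuing the sketch above, where mistralCpu is our selected and downloaded model and ws is the open copilot WebSocket:

```typescript
const input: Pieces.QGPTStreamInput = {
  question: {
    query: 'Refactor this loop into a map call.',
    relevant: { iterable: [] },
    // Route this question to the locally downloaded model.
    model: mistralCpu.id,
  },
};
ws.send(JSON.stringify(input));
```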
In this simple example, you can select the model, download it, and then use it in your next query. For a more complete example, see the blog How to Build a Copilot Using Local LLMs to understand more about connecting the UI to these states and showing download progress.
Part 3: Attaching Context to Your Open Source Copilot Queries for Better Responses
In most copilot environments, you cannot attach context without first training the model on your knowledge base. With the Pieces OS Client, attaching context is straightforward and does not require waiting for a model to train. This allows for quick context swapping, whether on a large scale from project to project or on something as small as two separate lines of code.
Normally, adding files would mean adding a file picker package to your project. To speed things up for developers using this SDK, Pieces OS Client comes with a native OS file picker built in that handles the selected paths for you.
You can use that file picker like so:
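A minimal sketch, assuming the SDK exposes the native picker through OSApi().pickFiles (an empty filePickerInput accepts the defaults):

```typescript
// Opens the native OS file picker and resolves to the selected paths.
const paths: string[] = await new Pieces.OSApi().pickFiles({
  filePickerInput: {},
});

console.log('Paths selected as copilot context:', paths);
```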
Now you have created your relevance pipeline to let the user select and send over the first half of the contextual message. But to use context, we have to seed an asset, which requires the application to be created. Here is an example of creating an application and getting all of its values:
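A sketch assuming the SDK’s ConnectorApi().connect registration flow; the name, version, and platform values here are illustrative:

```typescript
// Register this project with Pieces OS; the returned context
// carries the tracked application and all of its values.
const context = await new Pieces.ConnectorApi().connect({
  seededConnectorConnection: {
    application: {
      // Illustrative identity; substitute your own app's values.
      name: Pieces.ApplicationNameEnum.OpenSource,
      version: '0.0.1',
      platform: Pieces.PlatformEnum.Macos,
    },
  },
});

const application = context.application;
console.log('Tracked application id:', application.id);
```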
To see a full breakdown with more code examples and explanations, check out the article on Adding Context to your Local Copilot and try it out for yourself!
Conclusion
Building your very own open-source copilot with Pieces OS Client and the available SDKs should be an easy process; there are notes and code snippets throughout this article series that give a full view of how these key endpoints work and more. Check out all three of the articles here:
Part 1: Build Your Own Copilot - Create a Copilot in less than 10 minutes
Part 2: Downloading Local LLMs and Switching Between Models With Ease
Part 3: Adding Context to Your Local Copilot
Keep an eye out for more articles on using the open-source code copilot, local Large Language Models, and other Pieces Copilot functionality found in Pieces OS Client. If you want to get more involved, check out our Open Source program on GitHub. If you’re interested in Pieces for Developers and want to learn more, check out our documentation or our desktop app to give it a try.