Back
Build Your Own Copilot: Adding Local Context into Your Conversation
Build your own copilot with the Pieces Client, based on how the Pieces Copilot uses context to understand your questions.
Pieces OS Client is a flexible database that exposes a long list of APIs you can use to build your own copilot, one that understands local context and combines it with on-device local LLMs to answer questions and assist developers in their coding activities. In previous articles, we covered how to create your own Pieces Copilot and utilize the features available in the TypeScript SDK on NPM to add this functionality to your own applications and projects.
We also have provided SDKs in Python and Kotlin, and are working hard to bring you a Dart SDK to provide support across a multitude of languages and environments. Each of these articles contains examples on how you can create a Pieces Copilot that fits your needs.
Code Along
Following the previous article and the addition of the local Llama2 (GPU/CPU) models, we want to show the power of using these models for the tasks we previously handled with GPT-4 (or GPT-3.5). We also want to demonstrate what we can accomplish by adding context to get specific results back from our copilot—even in an offline environment.
Each article in this series corresponds to this repository, which contains a starter project for building your own copilot application in a Vanilla JavaScript environment. Use the repo to follow along with this article or to use as a starting point for your own project.
Prerequisites
To get the most out of this article, it's best to first read the two preceding articles on building your own copilot, which cover working with LLMs and sending conversation messages.
If you missed either of them, you can catch up here:
Part 1: Build Your Own Copilot - Create a Copilot in less than 10 minutes
Part 2: Downloading Local LLMs and Switching Between Models With Ease
For following along in your own project, be sure to have Pieces OS downloaded and the appropriate SDK for your language or environment.
Getting Started With Context
In our previous work, we created a few simple HTML elements to display information such as the copilot message that you attach as a query when you send a conversation message. Here we need to create a button to add files to our context, a label (needed for targeting the text area), and a container that will display the selected files for context. You will see these again later as we populate them:
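A minimal sketch of that markup might look like the following. The `add-files-as-context` id is the one the script targets later; the other ids are placeholder assumptions for illustration:

```html
<!-- Button that opens the native file picker -->
<button id="add-files-as-context">Add Files as Context</button>

<!-- Label targeting the copilot text area (id is a placeholder) -->
<label for="user-input">Ask the Copilot</label>

<!-- Container that will display the selected context files (id is a placeholder) -->
<div id="context-files-list"></div>
```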
We shouldn't need to revisit this file at all; instead, we can move over to index.ts and start learning how to create our RelevanceRequest. First, we'll fetch the files that are selected as context along with their associated file paths, then we'll send them over to CopilotStreamController.ts to craft the request for our conversation message with the copilot.
Capturing Relevant Files as Context
When the add-files-as-context button is pressed, we can use a Pieces OS Client API to open a file picker native to your operating system. This is a simple call on OSApi; note the empty filePickerInput: {} object that is passed in, which the SDK requires. Once the call completes (i.e., the files are selected), it resolves to the selected files as an array of strings, which we will use in the next step via .then():
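As a rough sketch of this step, the following models the picker call behind a small local interface. The `OSApiLike` shape and `addFilesAsContext` helper are assumptions standing in for the SDK's `OSApi.pickFiles()`; in the real project the API comes from `@pieces.app/pieces-os-client`:

```typescript
// Structural stand-in (assumption) for the SDK's OSApi shape.
interface OSApiLike {
  // Opens the OS-native file picker and resolves to the selected file paths.
  pickFiles(request: { filePickerInput: object }): Promise<string[]>;
}

// Global list of file paths the user has chosen as context.
const selectedContextFiles: string[] = [];

function addFilesAsContext(osApi: OSApiLike): Promise<string[]> {
  // The empty filePickerInput object is required by the SDK's signature.
  return osApi.pickFiles({ filePickerInput: {} }).then((paths) => {
    // Store the picked paths on the global read later by the relevance request.
    selectedContextFiles.push(...paths);
    return paths;
  });
}
```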
We can attach this action to our button from above once we use document.getElementById("add-files-as-context") and ensure the button is present on the DOM:
The file picker returns an array of strings, each one the file path of a selected file. For each returned path, we create a new child inside the list of selected files displayed in the UI, and we update the selectedContextFiles global variable that is read elsewhere:
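A sketch of that loop, using a structural stub for the list container so the wiring logic stands alone (in the real project this is a real element returned by document.getElementById, and each child would be a created DOM node):

```typescript
// Structural stand-in (assumption) for the UI list that shows context files.
interface ListContainerLike {
  appendChild(child: { textContent: string }): void;
}

// Global list of file paths the user has chosen as context.
const selectedContextFiles: string[] = [];

function renderPickedFiles(paths: string[], list: ListContainerLike): void {
  for (const path of paths) {
    // One child per selected file path, shown in the UI...
    list.appendChild({ textContent: path });
    // ...and recorded on the global read later by the relevance request.
    selectedContextFiles.push(path);
  }
}
```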
Creating Relevance and Adding User Context
Now that our selectedContextFiles global value has been updated, we can head over to CopilotStreamController.ts to use those values and formulate our relevance request. The Pieces.RelevanceRequest object is the first step in a process to gather, seed, and then attach relevance to the conversation message that you send.
Creating the Relevance.Request
When we created the FilePicker functionality earlier using the OSApi.pickFiles() endpoint, we captured each of the file paths (as strings) for the files the user requested to add as context and stored them in the CopilotStreamController.selectedContextFiles variable, which we can now pass in along with our query:
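Roughly, the request shape mirrors what this article describes: a RelevanceRequest wrapping a qGPTRelevanceInput that carries the query and the picked paths. The interfaces below are illustrative assumptions, not the SDK's exact type definitions:

```typescript
// Assumed shapes modeled on the article's description of Pieces.RelevanceRequest.
interface QGPTRelevanceInput {
  query: string;     // the text from the userInput text area
  paths?: string[];  // file paths the user picked as context
}

interface RelevanceRequest {
  qGPTRelevanceInput: QGPTRelevanceInput;
}

function buildRelevanceRequest(query: string, contextFiles: string[]): RelevanceRequest {
  return {
    qGPTRelevanceInput: {
      query,
      // Only attach paths when the user actually selected files.
      paths: contextFiles.length > 0 ? contextFiles : undefined,
    },
  };
}
```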
Remember that query is just the string value that comes from our userInput text area from the earlier articles.
Connecting Your Application
In order to seed the qGPT.RelevanceInput.seed that is needed to use the qGPTApi().relevance endpoint, you will need to connect your application to Pieces OS and effectively authenticate as a registered application.
Below where the relevanceInput is validated and error-handled in our previous step, we can add these two lines:
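In spirit, the two lines fetch the registered application and attach it to the relevance input. The helper below is a hedged sketch: the getApplication callback and the shapes here are assumptions standing in for the connection step shown next, not the SDK's actual types:

```typescript
// Assumed minimal application shape for illustration.
interface Application { id: string; name: string }

// Assumed slice of the relevance input that we attach the application to.
interface RelevanceInputLike {
  qGPTRelevanceInput: { query: string; application?: Application };
}

async function attachApplication(
  input: RelevanceInputLike,
  getApplication: () => Promise<Application>,
): Promise<void> {
  // The two lines in question: fetch the application, then attach it.
  const application = await getApplication();
  input.qGPTRelevanceInput.application = application;
}
```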
Now over in the index.ts file—or in your corresponding entry-point file—we need to define a new function for creating and getting our application value after communicating with Pieces OS. There are some notes on the different values here, but this is a copy-and-paste example of getting a generic application value for an OpenSource project:
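As a rough, hedged sketch of the registration step: the connector-style connect call, its request shape, and the generic values below are assumptions modeled on the flow this article describes, not copied from the SDK. The cached value avoids re-registering on every call:

```typescript
// Assumed application shape returned after registering with Pieces OS.
interface Application { id: string; name: string; version: string; platform: string }

// Structural stand-in (assumption) for the SDK's connector endpoint.
interface ConnectorApiLike {
  connect(req: {
    seededConnectorConnection: { application: Omit<Application, 'id'> };
  }): Promise<{ application: Application }>;
}

let cachedApplication: Application | undefined;

async function getApplication(connector: ConnectorApiLike): Promise<Application> {
  // Only register once; reuse the cached application afterward.
  if (cachedApplication) return cachedApplication;
  const context = await connector.connect({
    seededConnectorConnection: {
      // Generic placeholder values for an open-source project.
      application: { name: 'OPEN_SOURCE', version: '0.0.1', platform: 'UNKNOWN' },
    },
  });
  cachedApplication = context.application;
  return cachedApplication;
}
```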
The application has been created, and now we can add it into our relevanceInput.qGPTRelevanceInput.seeds to get back our relevance.
Building the relevanceInput.qGPTRelevanceInput.seeds
When a relevance call is made via the API, it needs a pre-seeded object that contains the application, the type parameter representing the kind of seed being used, and the userContextInput that is inside our contextInput. If there is no userContextInput, we want to be sure we don't run this step, as it would add additional processing with no supplied relevance. Here is the object in full:
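A hedged sketch of building that seeds object from the pieces named above—the application, a seed type, and the user's context text. The nested shapes and the 'SEEDED_ASSET' marker are assumptions for illustration, not the SDK's exact definitions; the guard reflects the skip-when-empty rule just described:

```typescript
// Assumed minimal shapes for illustration only.
interface Application { id: string; name: string }

interface Seed {
  type: 'SEEDED_ASSET'; // marker for the kind of seed (assumed value)
  asset: {
    application: Application;
    format: { fragment: { string: { raw: string } } };
  };
}

function buildSeeds(
  application: Application,
  userContextInput?: string,
): { iterable: Seed[] } | undefined {
  // Skip seeding entirely when no user context was supplied, to avoid
  // additional processing with no relevance to gain.
  if (!userContextInput) return undefined;
  return {
    iterable: [
      {
        type: 'SEEDED_ASSET',
        asset: {
          application,
          format: { fragment: { string: { raw: userContextInput } } },
        },
      },
    ],
  };
}
```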
Beneath that, we can call the qGPTApi.relevance() endpoint to get our relevance back, passing in the relevanceInput variable and storing its output:
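Sketched with a structural stand-in for the SDK's qGPT API (the interface and output shape here are assumptions), the call itself is a single awaited endpoint invocation:

```typescript
// Assumed shapes for the relevance round trip.
interface RelevanceInput { qGPTRelevanceInput: { query: string } }
interface RelevanceOutput { relevant: { iterable: unknown[] } }

// Structural stand-in (assumption) for the SDK's qGPT API surface.
interface QGPTApiLike {
  relevance(input: RelevanceInput): Promise<RelevanceOutput>;
}

async function getRelevance(
  api: QGPTApiLike,
  relevanceInput: RelevanceInput,
): Promise<RelevanceOutput> {
  // Pass the fully seeded input to the relevance endpoint and keep its
  // output for the final stream request.
  return api.relevance(relevanceInput);
}
```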
Finally, we pass the relevanceOutput into the relevant parameter on the Pieces.QGPTStreamInput object. You may recognize it from when we initially created our question during the first article of this series. Once all of the framing is configured throughout the rest of the project, the final call to ask the question becomes quite simple:
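The final question can be sketched like this, assuming a QGPTStreamInput whose question carries the query plus the relevant output from the previous step (shapes are illustrative assumptions):

```typescript
// Assumed shapes modeled on the article's description of Pieces.QGPTStreamInput.
interface RelevantOutput { iterable: unknown[] }
interface QGPTQuestionInput { query: string; relevant: RelevantOutput }
interface QGPTStreamInput { question: QGPTQuestionInput }

function buildStreamInput(query: string, relevant: RelevantOutput): QGPTStreamInput {
  // Pair the user's query with the relevance gathered from their context files.
  return { question: { query, relevant } };
}
```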
Seeing Relevance/Context in Action
With all parts added, the context functionality will give you the ability to ask questions based on the information that is provided to the copilot. If you are following along and have the repo cloned on your machine and are up and running with the copilot project, you can run your application and test this out on your machine.
Returning to the browser with your copilot, you can attach any file and ask a specific question about information in that file to see the difference context makes between requests. Try adding a JSON file to your context and asking the copilot questions about it, or try it out with the copilot example project.
Ready to Build Your Own Copilot?
Now you have a complete guide to building your own copilot, downloading and using local LLMs, and adding context to your copilot conversation messages. With these tools, you can build your own copilot application or add the functionality of Pieces OS Client into your own applications.
If you would like to get more involved with this project, you can check out our OpenSource Repo or this project on GitHub to download the complete code.
Resources:
Try the Desktop App to see the Power of Pieces OS
Vanilla Copilot Example: full, usable example of the copilot
Open Source Resources: other open source projects and resources currently underway
More articles coming soon around how to use the SDKs and other projects we are working on!