Feb 6, 2024

Build Your Own Open Source Copilot with Pieces

Build your own open source copilot with the Pieces Client with all of the context and large language model support you will need.

Open Source by Pieces.

Over the past year and into the beginning of 2024, copilot functionality and access have increased 10-fold as copilots have become more prevalent across the applications we interact with in our workflows. As each copilot emerged, we at Pieces noticed that many of them belonged to a specific tool, leading to tedious copying and pasting between copilots to maintain a unified understanding of our workflow and adding to the cost of context switching.

At the same time, there were no tools that let users consume local large language models like Mistral and Llama 2 as well as cloud LLMs like GPT-4 and Gemini from a single location. Since it's often unclear which LLM is best for coding, we wanted to give users options. Pieces as an ecosystem works through different plugins (for VS Code, Obsidian, JetBrains, Chrome, and more), all supported by Pieces OS, which handles the logic and language models that you consume in Pieces Desktop or the plugins that connect to it.

After developing our proprietary Pieces Copilot, we decided to support an open-source copilot that developers can build on their own to leverage various LLMs, set custom context for more personalized responses, and integrate with their favorite tools. This guide will help you do just that, serving as an open-source alternative to GitHub Copilot and other AI tools.

What is Pieces OS Client?

Pieces OS Client is a flexible database that provides a custom open-source copilot for saving, generating, and iterating on code snippets across your development workflow. It comes with built-in Local Large Language Models that can be used to ask questions and generate powerful code solutions no matter what dev tool you’re working in.

Since Pieces OS contains a large amount of useful tooling, our first step was to create a number of ways for developers to access and use it. To meet that need, we have taken all of the functionality added to Pieces OS over the last two years and bundled it up in SDK packages for different languages.
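
If you are following along in TypeScript, pulling the SDK into a project is a single import. The package name below (@pieces.app/pieces-os-client) is the name the TypeScript SDK is published under on npm at the time of writing; verify it against the repo if it has since moved:

// Import the generated TypeScript client. The npm package name is taken from the
// SDK repository; double-check it if the package has been renamed.
import * as Pieces from '@pieces.app/pieces-os-client';

// Every Pieces OS surface gets its own API class, for example:
const modelsApi = new Pieces.ModelsApi();
const qgptApi = new Pieces.QGPTApi();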

What Can My Open Source Copilot Do?

Here is a high-level list of some of the features you have access to when building your copilot open source:

1. Switch between the following Large Language Models for your open-source copilot conversations (a short sketch for listing what your install exposes follows this list). You can download these models on-device and use them without an internet connection, or you can consume them in the cloud:

  • Local: Mistral AI (CPU + GPU)

  • Local: Llama 2 (CPU + GPU)

  • Cloud: GPT-3.5 Turbo + GPT-3.5 Turbo 16k + GPT-4 (you can also use your own OpenAI API key)

  • Cloud: Gemini

  • Cloud: Chat Bison + Code Chat Bison with PaLM 2

2. Add context to your conversations and questions to give your open-source code copilot additional seeded information, so it can answer your questions without you having to train a model on your data.

3. Ask questions about a specific code snippet that you have saved in the past.
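
As a quick sanity check of which of these models your local Pieces OS exposes, you can take a snapshot of the models collection (the same ModelsApi call used later in this guide) and print a few of the properties it returns. A minimal sketch:

// List the models that Pieces OS currently exposes. foundation, unique, and cpu
// are the same Model properties used later in this guide.
const models = await new Pieces.ModelsApi().modelsSnapshot();
models.iterable.forEach((model) => {
    console.log(`${model.foundation} (${model.unique}) - ${model.cpu ? 'CPU' : 'GPU or cloud'}`);
});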

You can perform several other actions through the Pieces OS Client and can discover the other endpoints and SDKs through the Open Source by Pieces Repo.

Building your Open Source Copilot

When first using the SDK, it can be overwhelming to try to understand all of the classes available to you, so here is a brief introduction to some of the more powerful endpoints that are related to creating your own open-source copilot.

Universal Steps

1. Download the SDK in the language that you prefer; this guide follows the TypeScript example using WebSockets and this repo:

2. Be sure to install Pieces OS and get it running on your machine (see the health-check sketch below)
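
Before wiring the SDK into anything, it is worth confirming that Pieces OS is actually up. A minimal health check, assuming the default local port (1000) and the /.well-known/health route of a standard install, might look like this:

// Minimal health check against a local Pieces OS instance. The port and path are
// assumptions based on a default local install; adjust them if yours differs.
async function piecesOSIsRunning(): Promise<boolean> {
    try {
        const res = await fetch('http://localhost:1000/.well-known/health');
        return res.ok;
    } catch {
        return false;
    }
}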

Part 1: Creating Your Open Source Copilot

Adding the needed functionality is straightforward, as you can create your first message without connecting or registering your application. If you want to send a single message and get back a response, you can do so with little to no configuration or data added:

// send the user's input to the copilot; setMessage receives the streamed response
CopilotStreamController.getInstance().askQGPT({
    query: userInput,
    setMessage
});
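
The setMessage argument is whatever callback your UI uses to render the copilot's answer as it streams in. A hypothetical handler (the element id is illustrative, not part of the SDK) could be as small as:

// Hypothetical setMessage callback: render the latest copilot text on the page.
// The 'copilot-output' element id is illustrative only.
const outputEl = document.getElementById('copilot-output')!;
function setMessage(message: string) {
    outputEl.innerText = message;
}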


You can see some of the configuration needed to send the message in CopilotStreamController's .askQGPT, which uses a Pieces.QGPTStreamInput to carry the message:

// @query == userInput from the index.ts
const input: Pieces.QGPTStreamInput = {
    question: {
        query,
        relevant: {iterable: []}
    },
};
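
Under the hood, CopilotStreamController serializes this input and streams it over a WebSocket. A stripped-down sketch of that send path is below; the ws://localhost:1000/qgpt/stream endpoint is an assumption taken from the example repo, so check your CopilotStreamController for the exact URL and serialization it uses:

// Rough sketch of the send path inside CopilotStreamController. The endpoint URL
// is an assumption from the example repo; the SDK's generated serializers can be
// used instead of JSON.stringify if the raw field names differ.
const socket = new WebSocket('ws://localhost:1000/qgpt/stream');
socket.onopen = () => socket.send(JSON.stringify(input));
socket.onmessage = (event) => {
    // each message carries another chunk of the streamed answer;
    // parse it (ideally with the SDK types) to pull out the text
    console.log(JSON.parse(event.data));
};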


You can read more about handleMessages, updating the UI, and generating the response in the full article Build Your Own Copilot in 10 Minutes.

Part 2: Downloading Local Large Language Models like Mistral AI

At the beginning of the article, we mentioned all of the different models that are accessible through Pieces OS Client, and that one of the benefits of using the Client for local LLM management is being able to control downloading, deleting, and using models interchangeably.

In a few steps, you can access the models snapshot and select the model you want, along with the model.foundation parameters needed to identify the model during the download process, or the readable name of the model. Here is what that could look like in a file like modelsProgressController.tsx, inside the constructor:

// promise that resolves to the models snapshot from Pieces OS
public models: Promise<Pieces.Models>;

private constructor() {
    // the models snapshot
    this.models = new Pieces.ModelsApi().modelsSnapshot();
    // filtering off our default model here:
    this.models.then((models) => {
        this.initSockets(
            // set the initial model download values
            models.iterable.filter(
                (el) =>
                    el.foundation === Pieces.ModelFoundationEnum.Llama27B &&
                    el.unique !== 'llama-2-7b-chat.ggmlv3.q4_K_M'
            )
        );
    });
}


Then you can set, select, and use the models in your main() entry point, passing model.id into the ModelApi().modelSpecificModelDownload endpoint like the following. If we first get the model information and then set the selected model for download, it would look like this:

// define your model by finding it in the .iterable on models
const mistralcpu = models.iterable.find((model) => model.foundation === ModelFoundationEnum.Mistral7B && model.cpu);
// starts the model download on click
downloadMistralCpuButton.onclick = (e) => {
    new ModelApi().modelSpecificModelDownload({model: mistralcpu!.id});
};
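
Downloads are not instant, so before using the model in a query you will want to wait until it is available locally. The example repo drives this from download-progress WebSockets (via initSockets above), but a naive polling sketch works for illustration; the downloaded flag below is an assumption about the generated Model type, so verify it against the SDK before relying on it:

// Naive polling sketch: re-snapshot the models collection until the chosen model
// reports as downloaded. The 'downloaded' property is an assumption about the
// generated Model type; the example repo listens to progress WebSockets instead.
async function waitForDownload(modelId: string): Promise<void> {
    for (;;) {
        const snapshot = await new Pieces.ModelsApi().modelsSnapshot();
        const model = snapshot.iterable.find((m) => m.id === modelId);
        if (model?.downloaded) return;
        await new Promise((resolve) => setTimeout(resolve, 2000));
    }
}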


Then, after the model has been downloaded, you can pass your model ID into the QGPTStreamInput, which now includes the model parameter:

const input: Pieces.QGPTStreamInput = {
  question: {
     query,
     relevant: {iterable: []},
     // the updated parameter here:
     model: mistralcpu!.id
  },
};


In this simple example, you could select the model, download it, and then use the model in your next query. For a more complete example, see the blog How to Build a Copilot Using Local LLMs to understand more about connecting the UI to these states and showing download progress.

Part 3: Attaching Context to Your Open Source Copilot Queries for Better Responses

In most copilot environments, you cannot attach context to your models without first training them on the knowledge base that should answer your questions. With the Pieces OS Client, attaching context is straightforward and does not require waiting for a model to train. This allows for quick context swapping at any scale, from project to project or between two separate lines of code.

Normally, adding files would mean pulling a file picker package into your project. To speed things up for developers using this SDK, Pieces OS Client comes with a native OS file picker built in that returns the paths the user selects.

You can use that file picker like so:

// opens the native OS file picker and returns an Array<string> of file paths
const files: string[] = [];
addFilesAsContext.onclick = () => {
    new Pieces.OSApi().pickFiles({filePickerInput: {}}).then((paths) => {
        paths.forEach((path) => {
            // @path is the absolute path to a selected file
            files.push(path);
        });
    });
};

// then use relevanceInput to add your selected context as @paths:
const relevanceInput: Pieces.RelevanceRequest = {
    qGPTRelevanceInput: {
        query,
        paths: files,
    }
};


Now you have created your relevance pipeline to let the user select and send over the first half of the contextual message. But to use context, we have to seed an asset, which requires the application to be created. Here is an example of creating an application and getting all of its values:

let application: Pieces.Application;

export async function getApplication() {
    if (application) return application;

    // PlatformEnum corresponds to the current operating system.
    const userAgent = window.navigator.userAgent.toLowerCase();
    const platform: Pieces.PlatformEnum = userAgent.includes('linux')
        ? Pieces.PlatformEnum.Linux
        : userAgent.includes('win')
            ? Pieces.PlatformEnum.Windows
            : Pieces.PlatformEnum.Macos;

    // creating the application here, and setting up parameters
    const context: Pieces.Context = await new Pieces.ConnectorApi().connect({
        seededConnectorConnection: {
            application: {
                name: Pieces.ApplicationNameEnum.OpenSource,
                version: '0.0.0',
                platform,
            }
        }
    });

    application = context.application;
    return application;
}

// Once context is created and configured for our application,
// we can use application in our seed on relevanceInput.
// create your relevance input here:
relevanceInput.qGPTRelevanceInput.seeds = {
    iterable: [
        {
            // the type of relevance input that is being used
            type: Pieces.SeedTypeEnum.Asset,
            asset: {
                // the application we created and registered above
                application: await getApplication(),
                format: {
                    fragment: {
                        string: {
                            // string value that the user supplied
                            raw: userContextInput
                        }
                    }
                }
            }
        }
    ]
};

// creates the relevanceOutput that is needed for the QGPTStreamInput
const relevanceOutput = await new Pieces.QGPTApi().relevance(relevanceInput);

Then the final step is to add that relevance output as the relevant parameter on question:

const input: Pieces.QGPTStreamInput = {
    question: {
        query,
        // the relevance:
        relevant: relevanceOutput.relevant,
        model: mistralcpu!.id
    },
};
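
Putting the three parts together, the full flow is: pick a downloaded model, let the user choose context files, score them through the relevance endpoint, and send the resulting QGPTStreamInput. A condensed sketch built only from the calls shown above (the askCopilot wrapper itself is illustrative, not an SDK function):

// Condensed end-to-end sketch built from the calls shown in Parts 1-3.
// askCopilot is an illustrative wrapper, not part of the SDK.
async function askCopilot(query: string, modelId: string, paths: string[]): Promise<Pieces.QGPTStreamInput> {
    // score the selected files against the query
    const relevanceOutput = await new Pieces.QGPTApi().relevance({
        qGPTRelevanceInput: { query, paths },
    });

    // build the streamed question with the model and relevant context attached;
    // hand this to the transport you set up in Part 1 (e.g. CopilotStreamController)
    return {
        question: {
            query,
            relevant: relevanceOutput.relevant,
            model: modelId,
        },
    };
}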


To see a full breakdown with more code examples and explanations, check out the article on Adding Context to your Local Copilot and try it out for yourself!

Conclusion

Building your very own open-source copilot with Pieces OS Client and the available SDKs should be a straightforward process; the notes and code snippets throughout this article series give a full view of how these key endpoints work and more. Check out all three of the articles linked above.

Keep an eye out for more articles on using the open-source code copilot, Local Large Language Models, and other Pieces Copilot functionality that can be found in Pieces OS Client. If you want to get more involved, check out our Open Source program on GitHub. If you're interested in Pieces for Developers and want to learn more, check out our documentation or our desktop app to give it a try.

Written by The Pieces Team