
Insights

Jan 20, 2025

Be more productive with Pieces and GitHub Copilot

Combine Pieces and GitHub Copilot to build a powerful workflow with long-term memory and AI assisted coding

Millions of developers use GitHub Copilot every day to understand and generate code. But like pretty much every AI developer tool, Copilot can only reason over your current project – it doesn’t have the context of everything else you are doing. This is why adding Pieces with its Long-Term Memory to your GitHub Copilot workflow is a great way to boost your productivity.


What is GitHub Copilot?

GitHub Copilot is an AI pair programmer, with agentic workflows to help you generate code. It started out as a powerful AI autocomplete, where the AI infers the code you need from what you are already typing.

Think of being able to write a comment describing a function and have the autocomplete suggest the entire function for you. It then expanded to include Copilot Chat, and later agentic workflows, so you can ask Copilot to make changes across multiple files.
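For example, here is the kind of comment-driven completion I mean – the comment is yours, and the function body is the sort of thing Copilot might suggest (actual suggestions vary with context):

```python
# Return the nth Fibonacci number, computed iteratively
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```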

Like all AI copilots except Pieces, its context is limited to the current project. You can’t add code from outside the project for Copilot to reason over, and it doesn’t have access to all the activities you are doing on your developer machine.


What is Pieces?

Pieces is Long-Term Memory for your developer workflow – it captures context from everything you are doing, from reading your screen with advanced OCR to letting you save curated code snippets.

The Pieces Copilot can then interact with all that captured context, as well as any files or folders of code you want. This allows you to ask questions like “What was I working on yesterday?”, or add a folder of code and ask “Summarize the discussion in the team chat about the reporting GitHub issue I was just reading, then show how I can implement this in the provided project”.

Pieces is available in a standalone desktop app, as well as in extensions and plugins for all the IDEs you use. This means you can fire up Visual Studio Code for example, and use both Pieces and GitHub Copilot in the same project, taking advantage of the strengths of both.


Pieces + GitHub Copilot

As helpful as AI tools are, the top productivity drains developers face have nothing to do with the creation of code. According to the 2024 Cortex State of Developer Productivity report, the top two detractors of productivity are:

  • Duplicative effort/unsure what already exists

  • Too much time to gather context from disparate sources

It doesn’t matter how powerful the tools in your IDE are – your productivity will take a hit if you spend large chunks of time gathering the information you need before you even think about writing code. Chances are you have already been exposed to information about what already exists, and to some of the context you need. The hard part is remembering this information, knowing where you got it from, and curating it.

If only you had a long-term memory that could help. This is where Pieces shines! Where other AI developer tools focus on understanding and generating code based solely on the open project, Pieces captures and curates all the activities on your developer machine, helping you understand code in the context of your organization and generate the right code.

Pieces and GitHub Copilot are two different and complementary tools that work well together to give an amazing developer experience.

I can use Pieces when researching and gathering project context, reducing the main detractors of productivity by allowing me to mine a long-term memory of everything I have read in documents, emails, team chats, and other places. I can then bring this information together from disparate sources so I have the context I need for every task I am doing.

When I have the context I need, I can leverage the power of GitHub Copilot to make the code changes via its agentic workflows that understand just the project I am working on. As I make changes, if I need to continue my research using other projects, I can open these in my Pieces Copilot without leaving my IDE, ask questions, and use that context with GitHub Copilot.


Use the Pieces Long-Term Memory to get the information you need

Ask Pieces about what already exists

I’ve been working on a fun side project called RemoteLlama – an app that provides a CLI and API experience that looks like Ollama, but redirects all calls to a remote machine, so you can run any app that uses Ollama on one machine while the actual models run on another. This is useful if you have one machine for development and a more powerful, AI-accelerated machine for running models.

It can also redirect one model to another, so an app that expects one model can have Ollama run a different one.
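Conceptually, the core of the idea is just a transparent proxy: listen on Ollama’s default port and forward every call to the real Ollama host. Here’s a minimal sketch of that idea in Python – this is not the actual RemoteLlama code, and REMOTE_HOST is a placeholder:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder address of the machine actually running Ollama
REMOTE_HOST = "http://192.168.1.50:11434"

class OllamaProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the incoming request body and forward it to the remote Ollama
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        req = urllib.request.Request(REMOTE_HOST + self.path, data=body)
        # Error handling and streaming responses omitted for brevity
        with urllib.request.urlopen(req) as resp:
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())

# Listen on Ollama's default port so local apps can't tell the difference
HTTPServer(("localhost", 11434), OllamaProxy).serve_forever()
```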

I wanted to use this with the Pieces local models that are powered by Ollama – partly in the hope that one day I’ll get an NVIDIA Digits, and partly just to see what happens when I redirect to random models that Pieces isn’t optimized for.

To ensure the interface for the RemoteLlama project works with Pieces, I needed to find out what we already have in the way of code to monitor and manage Ollama. I’m sure I’ve discussed this in the past with someone on the engineering team here, so I decided to see how Pieces could help.

Use the Pieces Long-Term Memory to ask about a previous conversation

I launched the Pieces Copilot inside VS Code, made sure the Long-Term Memory context was turned on, and asked:

“Have I discussed the Pieces installation and management of Ollama with anyone?”

The Copilot was able to tell me about a conversation I had with Brian that covered the installation and management of Ollama, listing out the states that Pieces manages. It even helpfully provided a deep link to the chat. That deep link took me right into the conversation where I could see more details, and links to the internal code that manages all this – not shown here for obvious reasons!

This is great information for me – it means that for RemoteLlama to work with Pieces, I have to ensure it looks and acts like a binary installed just like Ollama, not just an API listening on the same port. It needs the same name, and to be installed in the same location.


Have Pieces gather context from disparate sources

As I’m writing RemoteLlama, I need to consider a few things:

  • The CLI needs to match the Ollama CLI, with the addition of commands to set the URL of the real Ollama install, and to change the model

  • The API endpoints need to match the Ollama API, with logic to swap in a different model where needed by rewriting the body passed to certain endpoints (see the sketch below).
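The model-swapping half of that second point boils down to decoding the JSON body and replacing the model field before forwarding the request. A rough sketch, with a made-up redirect mapping:

```python
import json

# Hypothetical example: redirect requests for llama3.2 to a different model
MODEL_REDIRECTS = {"llama3.2": "qwen2.5-coder"}

def rewrite_model(body: bytes) -> bytes:
    """Swap the model in an Ollama request body before it is forwarded."""
    payload = json.loads(body)
    if payload.get("model") in MODEL_REDIRECTS:
        payload["model"] = MODEL_REDIRECTS[payload["model"]]
    return json.dumps(payload).encode()
```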

As I do this, I constantly switch between my code in VS Code, the Ollama README on GitHub that lists the CLI commands, the terminal output from getting help for those commands, and the Ollama API page.

Use the Pieces Long-Term Memory to ask about API documentation

Instead of the constant context switching, I can read the docs once, then ask Pieces. For example, I wanted to know the parameters for the /api/generate endpoint so that I can map the model to another if necessary. To do this, I can ask Pieces for the details from the documentation with the prompt:

“What are the parameters for the Ollama generate endpoint that I was just reading about in my browser?”

Pieces then answers with details of the main and advanced parameters, giving each parameter’s name, whether it is optional or required, and a description.
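For reference, a minimal call to that endpoint only needs the required model parameter plus a prompt – everything else is optional. Something like this (assuming a local Ollama with llama3.2 pulled):

```python
import json
import urllib.request

# Minimal /api/generate call: model is required, prompt and stream are optional
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3.2",
        "prompt": "Why is the sky blue?",
        "stream": False,  # return a single JSON object instead of a stream
    }).encode(),
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```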

Bring multiple sets of information together using the Pieces Long-Term Memory

I also need to tie these parameters back to the ollama run command so I can implement that in my app – ollama run uses the /api/generate endpoint to generate completions. I can get help for this command in my terminal, and ask Pieces to tie the output back to the API. I asked Pieces:

“Based off the output of `ollama run --help` that I just ran in my VS Code terminal, what parameters in the generate endpoint can also come from the run command?”

Although I am using Pieces in the VS Code terminal, I could equally be using it in my macOS terminal, a Warp terminal, or any other terminal – including the VS Code terminal in another window. Pieces captures context from all the foreground applications on your developer machine, not just those running a Pieces extension or plugin.

Pieces takes the information about the /api/generate endpoint, looks at the parameters, and lists the ones that match the options for the ollama run command, bringing together two disparate sources of information.
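From that answer, the obvious overlaps are the flags that feed parameters straight through – for example, from my reading of the docs, `ollama run --format` maps to the format parameter and `--keepalive` maps to keep_alive. In RemoteLlama, that mapping might start as something like this (illustrative, not exhaustive):

```python
# CLI flag on `ollama run`  ->  JSON parameter on /api/generate
RUN_FLAG_TO_GENERATE_PARAM = {
    "--format": "format",         # e.g. `--format json`
    "--keepalive": "keep_alive",  # how long to keep the model loaded
}
```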


Use this information with GitHub Copilot

Now that I’ve done my research, I’m ready to start coding using GitHub Copilot.

Make the app pretend to be Ollama

Based on my conversation with Brian, I need my app to look like a standard install of Ollama. The first thing to do is ensure the binary is named correctly: ollama on Windows and Linux, and ollama-darwin on macOS. I can use this information in the GitHub Copilot chat to update my project file.
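The naming logic itself is trivial – something along these lines as a sanity check (a sketch; the real change goes in the project’s build configuration):

```python
import platform

def expected_binary_name() -> str:
    """The binary name a standard Ollama install uses, per platform."""
    if platform.system() == "Darwin":
        return "ollama-darwin"
    return "ollama"  # Windows and Linux, per my conversation with Brian
```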

Add the run CLI command

Now to add the parameters to the ollama run command. I can first ask Pieces to give me the parameters as if it were showing the help text of the CLI command. This gives me some raw data to use with GitHub Copilot.

Next, I can copy the list of parameters and paste them into a GitHub Copilot chat, asking it to implement them in my run command.
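The result is along these lines – a run subcommand mirroring Ollama’s, with the flags from the help text. A trimmed, hypothetical sketch of the kind of code Copilot generates (the real ollama run has a few more flags):

```python
import argparse

# Trimmed sketch of a `run` subcommand that mirrors Ollama's CLI
parser = argparse.ArgumentParser(prog="ollama")
subparsers = parser.add_subparsers(dest="command")

run = subparsers.add_parser("run", help="Run a model")
run.add_argument("model", help="Name of the model to run")
run.add_argument("prompt", nargs="?", help="Optional one-shot prompt")
run.add_argument("--format", help="Response format (e.g. json)")
run.add_argument("--keepalive", help="How long to keep the model loaded in memory")
run.add_argument("--verbose", action="store_true", help="Show timings for the response")

args = parser.parse_args()
```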


Conclusion

Pieces and GitHub Copilot are two different and complementary tools that work well together to give an amazing developer experience. Pieces tackles the main detractors of productivity: it lets me mine a long-term memory of everything I have read in documents, emails, team chats, and other places, and bring that information together from disparate sources. Once I have the context I need, I can leverage GitHub Copilot’s agentic workflows to make the code changes.

How do you use Pieces? Do you combine it with a tool like GitHub Copilot? Let me know on X, Bluesky, LinkedIn, or our Discord. And don’t forget to try Pieces if you don’t already use it.
