Best practices for prompt engineering with AI Copilots
Check out best practices for prompt engineering with AI Copilots, and learn to write clear, concise prompts that get better responses and improved results.
With copilots like Pieces, the chat is the UI, meaning developers have to learn a new set of skills to interact with the copilot effectively using natural language. This post shows a number of best practices for getting the most out of your chats with the Pieces Copilot, with examples.
For a more theoretical discussion of prompt engineering, check out our practical developer's guide to LLM prompt engineering blog post.
Code generation and manipulation
One of the main LLM use cases for developers is generating code. The training set for LLMs includes a massive amount of code, and some LLMs, such as Code Gemma (an on-device model from Google that is available in Pieces), focus actively on code generation. With the right prompt, you can generate code based on existing, well-established code from product documentation or open source projects, or take your code base and add new features or capabilities to it.
Be descriptive about the outcome including the programming language, libraries, and frameworks
LLMs are trained on a huge amount of code across a multitude of programming languages. When you are asking for code, be descriptive about the outcome you want, including specifying the programming language.
Instead of asking:

"Write a function to sort a list of users by name"

It is better to ask:

"Write a TypeScript function that sorts an array of user objects by their name property"
If there is ambiguity, LLMs usually default to whatever is the largest contributor to their training set, essentially matching the popularity of the language.
For example, if you don’t specify the language, you will probably get code in Python, unless you are asking for web-related projects, where the code will most likely be in JavaScript.
If you are using an existing project as context, then the copilot will detect the language used, and default to that. However, if you have a mixed language project, such as a C# web app with some JavaScript, then it is important to specify which of the languages should be used.
As well as being trained on a multitude of programming languages, LLMs are also trained on a huge amount of libraries and frameworks. Again, be specific about the library or framework you are using.
Instead of asking:

"Create an API endpoint that returns a list of products"

It is better to ask:

"Create a REST API endpoint that returns a list of products as JSON, using Python with Flask and SQLAlchemy"
Remember, you can always use prompt chaining to ‘correct’ the LLM if you are not specific enough.
Provide clear naming, structure, and other guidance
If you want the LLM to generate code, you will get better results if you guide the LLM with class or field names, define the structure you want, and provide other guidance, such as the traits, base class, interface or other capabilities you need.
This is often more effective than asking for each piece step by step using prompt chaining. For example, if you are creating a model class for a user in C# with properties for Id, name, and email that you need to serialize to JSON, and that also needs to support equality, you can request all of this in a single prompt. Listing the instructions as separate steps, in the style of chain-of-thought prompting, is particularly helpful here.
Instead of asking:

"Create a model class for a user in C#"

Followed by a long chain of follow-up questions, it is better to ask:

"Create a C# model class for a user with Id, Name, and Email properties. Add support for serializing to and from JSON, and implement value-based equality"
By providing detail in the steps, everything is considered up front and you get one working code block. If this were spread over multiple prompts, you would need to apply updates piecemeal from the code blocks generated at every step, with the potential for inconsistencies or incompatible code updates.
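To illustrate the kind of result a detailed single prompt can produce, here is a sketch of such a user model class. It is shown in Python rather than C# for brevity, and the class shape and method names are illustrative assumptions, not actual copilot output:

```python
from dataclasses import dataclass, asdict
import json


@dataclass(frozen=True)
class User:
    """A user model with id, name, and email, supporting equality and JSON serialization."""
    id: int
    name: str
    email: str

    def to_json(self) -> str:
        # Serialize the dataclass fields to a JSON string
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, data: str) -> "User":
        # Reconstruct a User from its JSON representation
        return cls(**json.loads(data))
```

Because the prompt named every requirement up front, the properties, serialization, and equality (provided here by the frozen dataclass) all arrive in one consistent block.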
Provide relevant context
LLMs work best when they have as much relevant context, and as little irrelevant context, as possible. If you need to interact with a project, add the folder that the project lives in as context for the conversation. If you only need to interact with a single file, add just that file.
When you add context to a conversation, the Pieces relevancy engine will work out what it thinks is relevant to each question, and send that to the LLM. Each LLM has a context window size that limits the amount of context that can be passed, so the larger the project, the harder it is for the relevancy engine to ensure it is sending the relevant materials.
You can improve this by only adding what you know is relevant. For example, if you have a massive project with millions of lines of code, and you need to add a feature to the models for handling orders, instead of adding the entire project as context, just add the folder that contains the relevant order models. You can add multiple files and folders, so be granular about what you add.
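As a rough mental model of the context window constraint, relevance selection can be sketched as a greedy packing problem. The scoring and token counting here are simplified assumptions for illustration; the actual Pieces relevancy engine is more sophisticated:

```python
def select_context(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily pack the highest-scoring snippets into a fixed token budget.

    snippets: (relevance_score, text) pairs; budget: maximum total tokens.
    Token count is crudely approximated as whitespace-separated words.
    """
    chosen: list[str] = []
    used = 0
    # Highest relevance first, so the most useful context survives the cut
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        tokens = len(text.split())
        if used + tokens <= budget:
            chosen.append(text)
            used += tokens
    return chosen
```

The smaller and more relevant the pool you add, the less likely a useful snippet gets squeezed out of the budget.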
Once your code is added as context, refer to it in your prompts. You can use terms like "this project" when referring to the files or folders added as context as a whole, or reference files, classes, or other components by name.
For example, ask:

"Describe the architecture of this project"

Or:

"Add a CreatedAt property to the User class"
Give guidance on code style and format
A lot of companies and open source projects have code style and formatting guides. Sometimes these are industry standards, like Python's PEP 8; other times they are in-house guides. When creating code, you can prompt the LLM to follow your style or formatting rules. You can also provide instructions around naming conventions, parameter ordering, method positioning in classes, and almost anything else you need. You can do this by providing the rules either inline or by adding a format file as context.
For example, to format some C# code using a company .editorconfig file, you can add the file as context and ask:

"Reformat this code following the rules in the .editorconfig file"
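As a reminder of what such a file contains, here is a minimal .editorconfig fragment with a few common C# formatting rules; the specific values are illustrative:

```ini
# Top-most EditorConfig file
root = true

[*.cs]
indent_style = space
indent_size = 4
end_of_line = lf
# .NET-specific style rules understood by the C# tooling
dotnet_sort_system_directives_first = true
csharp_new_line_before_open_brace = all
```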
Long-term memory
Pieces is the only copilot that provides access to a Long-Term Memory built from context captured across all the activities on your computer. To get the most out of this, there are some different prompting strategies you can use to guide the copilot when using the Long-Term Memory.
Use names or other identifiers to guide memory relevance
Pieces has a relevancy engine that processes your prompt locally to decide what information stored in the Long-Term Memory should be sent to the LLM. To help guide this process, use names, titles, or other unique or specific identifiers. For example, when referring to a specific conversation, use the names of the people involved.
Instead of asking:

"What did we decide about the database schema?"

It is better to ask:

"What did Taylor and I decide about the database schema in our Slack conversation?"
This way the results won’t reference other conversations with other people.
Refer to the applications or window titles when referencing a specific action
If you need Pieces to use context from a specific application, you can mention that by name to guide the relevancy engine. For example, if you have hit an error in VS Code for something you are running, you can ask specifically for the error from VS Code, to avoid getting details of other errors you have been researching in your browser.
Instead of asking:

"What was that error I just hit?"

It is better to ask:

"What was the error I just saw in VS Code?"
Use time-based phrases
The Pieces Long-Term Memory is time-based, so you can interact with it using time-based phrases to narrow down the scope. For example, if you are looking for documentation that you were reading yesterday on one web framework, as opposed to something you were reading today on a different framework, you can specify this in your prompt.
Instead of asking:

"Where is the routing documentation I was reading?"

It is better to ask:

"Where is the web framework routing documentation I was reading yesterday?"
The Pieces Long-Term Memory currently goes back over the last 7 days. When asking questions, use time or day period phrasing, such as "just now", "an hour ago", or "2 days ago", as opposed to dates, like "what was I doing on December 1st", as using dates will also reference content that has the date in it, such as an email dated December 1st that you read on December 2nd.
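One way to see why relative phrases work better: they resolve to an unambiguous time window, while a date string also matches any content that mentions that date. A minimal sketch of resolving phrases to windows (the phrase list and fixed offsets are illustrative assumptions, not the Pieces implementation):

```python
from datetime import datetime, timedelta

# Illustrative mapping from relative phrases to lookback windows
PHRASE_WINDOWS = {
    "just now": timedelta(minutes=5),
    "an hour ago": timedelta(hours=1),
    "yesterday": timedelta(days=1),
    "2 days ago": timedelta(days=2),
}


def window_for(phrase: str, now: datetime) -> tuple[datetime, datetime]:
    """Resolve a relative time phrase to a concrete (start, end) window."""
    delta = PHRASE_WINDOWS[phrase]
    return now - delta, now
```

A phrase like "2 days ago" narrows the search to activity timestamps inside the window, whereas "December 1st" would also surface an email dated December 1st that you opened on a different day.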
Ask for links using unique details
Pieces can provide deep links to websites that you have accessed, which you can retrieve by using prompts with specific details such as the general site, a name or other unique identifier such as an activity you were doing, and a time-based phrase.
This will give deep links to the site in the LLM response. Most developers have a huge number of tabs open and navigate around a lot, so you need to be specific with details of the time range, the general site, and unique details.
Instead of asking:

"Give me a link to that pull request"

It is better to ask:

"Give me a link to the GitHub pull request about the login bug that I was reviewing an hour ago"
Optimizing prompts
This list is far from exhaustive, and there are many more best practices you can follow to not only get the most out of the underlying LLM but also leverage the unique features of Pieces, such as the ability to use any file or folder as context or use context captured by the Long-Term Memory.
Keep an eye out for more posts like this on the Pieces blog, and make sure to follow us on your social platform of choice to hear about more of these posts, along with other tips and tricks.
This article was first published on October 7, 2024, and we’ve updated it as of December 9, 2024, to improve your experience and share the latest information.