Get expert insights and analysis on developer trends and challenges—delivered to you every Monday morning.

All posts

Jul 1, 2025 | by The Pieces Team

Building an AI Agent that thinks and grows with you

Build an AI agent that remembers, reasons, and adapts alongside you. Learn step-by-step workflows, memory architecture, and best-practice tooling to create truly intelligent systems.

Jun 26, 2025 | by The Pieces Team

Review: ChatGPT's Deep Research mode – finally, a tool that respects the rabbit hole

Reviewing ChatGPT’s new Deep Research feature: how it supports thoughtful inquiry, source diversity, and better questions for those tired of shallow answers.

Jun 24, 2025 | by The Pieces Team

Prompt evaluation: the complete guide to assessing and optimizing AI model performance

Learn what prompt evaluation is, why it matters in AI development, and how to systematically assess prompt quality to improve performance, accuracy, and reliability across use cases.

Jun 23, 2025 | by The Pieces Team

What is red teaming in AI: a complete guide to AI security testing

Discover what red teaming in AI really means. This complete guide explains how AI red teaming works, why it's essential for security testing, and how organizations use it to identify vulnerabilities and strengthen AI safety.

Jun 19, 2025 | by The Pieces Team

Investigating LLM jailbreaking: how prompts push the limits of AI safety

Explore the concept of LLM jailbreaking: how users bypass safety guardrails in language models, why it matters for AI safety, and what it reveals about the limits of control in modern AI systems.

Jun 18, 2025 | by The Pieces Team

Claude fine-tuning: a complete guide to customizing Anthropic's AI model

Learn how to fine-tune Claude, Anthropic’s AI model, with this comprehensive guide. Explore customization strategies, use cases, and best practices for tailoring Claude to your organization’s needs.

Load more


our newsletter

Sign up for The Pieces Post

Check out our monthly newsletter for curated tips & tricks, product updates, industry insights and more.
