
AI & LLM

Sep 17, 2025

Prototypes: the glue of Long-Term Memory

Explore how prototypes lay the foundation for long-term memory in AI. Learn why early experiments, iteration, and design “blueprints” are critical for building durable, context-rich intelligence.

This isn’t a new idea. Prototypical networks, the machine learning technique that powers systems like Google Photos, work the same way. Embed a bunch of images of the same person, average them, and you get a prototype vector. That prototype becomes the stable representation, even as the raw inputs vary.
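To make that concrete, here’s a minimal sketch of the averaging step, using NumPy with random vectors standing in for real image embeddings (the 128-dimension size, the names, and the toy data are illustrative, not drawn from any particular system):

```python
import numpy as np

def build_prototype(embeddings: np.ndarray) -> np.ndarray:
    """Average a set of embeddings (one row per observation) into a prototype,
    then L2-normalize so cosine similarity is just a dot product."""
    proto = embeddings.mean(axis=0)
    return proto / np.linalg.norm(proto)

def nearest_prototype(query: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    """Match a new embedding to the closest stored prototype by cosine similarity."""
    query = query / np.linalg.norm(query)
    return max(prototypes, key=lambda name: float(query @ prototypes[name]))

# Toy example: several noisy embeddings of the same person collapse into one stable vector.
rng = np.random.default_rng(0)
alice_photos = rng.normal(loc=1.0, size=(5, 128))   # stand-ins for image embeddings
bob_photos = rng.normal(loc=-1.0, size=(5, 128))
prototypes = {"alice": build_prototype(alice_photos), "bob": build_prototype(bob_photos)}
print(nearest_prototype(rng.normal(loc=1.0, size=128), prototypes))  # -> "alice"
```

The prototype stays put even as individual observations vary, which is exactly the stability the raw inputs lack.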

It’s a simple idea with massive implications:

  • Multimodality strengthens identity. Store text, images, and voice, and you get a far more resilient anchor (sketched in code after this list).

  • Prototypes scale. As your model backbone improves, the prototypes get stronger too.

  • Hard facts matter. Anchoring memory in prototypes creates “ground truth” for the system to lean on when generating.
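As a rough illustration of the multimodal point above, an entity could carry one prototype per modality and score new observations against whichever modalities it has evidence for. The class name, modality keys, and simple score averaging here are assumptions made for the sketch, not a description of any shipping system:

```python
import numpy as np

class EntityPrototype:
    """Hypothetical container: one normalized prototype per modality for a single entity."""

    def __init__(self):
        self.prototypes: dict[str, np.ndarray] = {}   # e.g. "text", "image", "voice"

    def add(self, modality: str, embedding: np.ndarray) -> None:
        self.prototypes[modality] = embedding / np.linalg.norm(embedding)

    def similarity(self, observations: dict[str, np.ndarray]) -> float:
        """Score a new observation against every modality we hold; missing
        modalities simply don't contribute, so identity degrades gracefully."""
        scores = [
            float((vec / np.linalg.norm(vec)) @ self.prototypes[m])
            for m, vec in observations.items()
            if m in self.prototypes
        ]
        return sum(scores) / len(scores) if scores else 0.0
```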

We’ve been exploring similar patterns at Pieces. In some of our OCR cleanup experiments, we’ve paired extraction with narrative generation loops to make reasoning more robust. The parallel here is obvious: if you want reliable long-term context, you need more than lightweight embeddings. You need prototypes.


Why who comes first

Zoom out, and you’ll see why people are the most important entity class. Projects, goals, and blockers matter, but they’re transient. What persists, what shapes the story of any company or product, are the people.

If a system knows who you work with, everything else comes into focus:

  • Projects link to the teammates who drive them

  • Goals are tied to their champions

  • Blockers are often person-to-person misalignments

Founders and builders know this intuitively. Scaling doesn’t happen through abstract systems alone; it happens through people. A memory system that treats who as the primary anchor aligns with how organizations actually operate.
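Concretely, the relationships in the list above might map onto a people-first schema along these lines. The dataclasses and field names are hypothetical, purely to show Person acting as the anchor that projects, goals, and blockers reference:

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    id: str
    name: str
    title: str = ""
    prototype_refs: list[str] = field(default_factory=list)  # stored embeddings, photos, voice samples

@dataclass
class Project:
    id: str
    name: str
    driver_ids: list[str] = field(default_factory=list)      # the teammates who drive it

@dataclass
class Goal:
    id: str
    description: str
    champion_id: str = ""                                     # the person accountable for it

@dataclass
class Blocker:
    id: str
    description: str
    between_ids: tuple[str, str] | None = None                # often a person-to-person misalignment
```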


Where this leads

So what’s the practical takeaway? If you’re working on long-term memory, whether in consumer apps, productivity tools, or AI copilots, don’t just store text and timelines. Persist the prototypes. Start with people.

That might look like:

  • Saving the exact names and titles of collaborators when they appear

  • Capturing profile pictures or voice samples for key entities

  • Anchoring projects or tables with representative artifacts (logos, top images, canonical versions)

From there, build semantic graphs that tie everything together. Let prototypes do the heavy lifting of disambiguation.
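One hedged sketch of what that disambiguation could look like: resolve a new mention against known people by combining a simple name check with prototype similarity. The resolve_mention helper, the 0.5/0.5 weighting, and the 0.6 threshold are all made up for illustration:

```python
import numpy as np

def resolve_mention(
    mention_name: str,
    mention_embedding: np.ndarray,
    people: dict[str, dict],          # person_id -> {"name": str, "prototype": np.ndarray (normalized)}
    threshold: float = 0.6,
) -> str | None:
    """Pick the best-matching known person for a mention, or None if nothing clears the bar."""
    query = mention_embedding / np.linalg.norm(mention_embedding)
    best_id, best_score = None, threshold
    for person_id, person in people.items():
        name_score = 1.0 if mention_name.lower() in person["name"].lower() else 0.0
        proto_score = float(query @ person["prototype"])
        score = 0.5 * name_score + 0.5 * proto_score
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

Returning None instead of forcing a match is what lets the graph grow a new entity rather than silently merging two different people who happen to share a name.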

The irony is that scaling memory isn’t about abstracting further away. It’s about grounding more deeply in the concrete details that persist, and nothing persists more than who.


Closing thought

The ByteDance paper is light on some implementation details, but the direction feels right. If we want long-term memory that feels robust, useful, and human, we need to stop thinking in generic tokens and start thinking in entities.

And the most important entity to start with is the simplest one: the people you work with every day.

Because in “who, what, where, when, why,” the answer always starts with who.
