
Company News

Aug 11, 2025

Visionary AI Investor Flat Capital Invests in Pieces to Accelerate Artificial Memory for Individuals and the Enterprise

We’re thrilled to welcome Flat Capital as a new investor in Pieces. Learn more about this exciting partnership and what it means for the future of local-first AI.


“Pieces feel like autocomplete for my own brain…” - Flat Investment Team

Today, we’re thrilled to join OpenAI, Perplexity, DeepL, ElevenLabs, and xAI as the latest addition to Flat Capital’s industry-transforming AI portfolio.  

Flat Capital AB – the Stockholm-based investment company founded by Klarna's Sebastian and Nina Siemiatkowski – backs dedicated founders who are in it for the long haul, and their decision to join our journey is a vote of confidence in our mission to deliver Long-Term Artificial Memory for everyone.


Why Flat's Partnership Matters

Flat’s “long haul” mindset mirrors the far-reaching conviction at the core of our own ambitions: to develop an artificial extension of human memory – one that silently captures intention, preserves meaningful interaction, and surfaces the exact context you need, in the moments you need it most. Our vision is to make your computer feel less like a tool and more like a trusted partner – freeing users from the burden of digital recollection and keeping their focus on uniquely human, high-value work.


Why Flat Chose Pieces

In our first meeting with Flat, we showed how a marketer could instantly resurface every relevant slide, note, internal discussion, and competitor teardown from the last several weeks when planning an upcoming campaign. That moment of clarity and delight in the Flat Team’s quote above – seeing our technology underpin a whole new way of working – showed us that Flat was the right partner for us in the next phase of our journey.


Standing Shoulder-to-Shoulder with AI Pioneers

Joining a group of innovators that has built over $400B in enterprise value in just the past five years is not only a true honor and deeply humbling, but also an energizing challenge to keep raising the bar on technical excellence and human impact. We're grateful for the validation and see it as a call to action to expand our ambition, refine our craft, and deliver breakthroughs that ensure humans can keep up in a world of AI agents and LLM-first workflows.


What Comes Next for Pieces

This partnership comes at a pivotal time. In the coming weeks, we'll be rolling out two of our biggest advancements yet – innovations that take Pieces' artificial memory to unprecedented heights:

LTM 2.7: DeepStudy Your Own Memories

LTM 2.7 proudly debuts a new flagship capability: DeepStudy. Think of it like the "deep research" tools in cloud AI platforms – except instead of searching the web, it studies the digital touchpoints and context you've interacted with across your entire desktop over the past days, months, and soon years. A Pieces user can ask a seemingly simple question, and DeepStudy combs through all of the relevant artificial memory (all previously formed, stored, and now retrieved privately on your device) to generate a comprehensive, fact-checked report. No other tool in the world has this capability. For example, you could ask it to (a conceptual sketch of the flow follows these examples):

  • "Pull together everything from the meetings, research, and decisions we made about our database strategy last summer, and give me a comprehensive analysis as to why we took the path we took."

  • "Digest all the ML inference papers I read on arXiv and the deep research artifacts scattered across Claude, ChatGPT, and Perplexity from last week, then produce a definitive step-by-step implementation plan."

  • "Recap all my investor meetings, feedback notes, and email exchanges between last November and this past January."

  • "Review all the support tickets I triaged earlier in the day today, and tell me what documentation we should improve to help users find what they're looking for. Also, put together a trend report for the relevant product owners."

LTM 2.5-Local: Your Favorite Features From 2.5, but Without the Cloud

LTM 2.5-Local delivers the extensive value of artificial memory and proactive context capture without the need for cloud LLMs – meaning your local-only experience in Pieces is faster and significantly more accurate, with your data remaining private, secure, and, most importantly, yours.

Early in 2024, we realized that nano-models running on the edge (on your device) were the only way to privately, securely, and cost-effectively process tens of thousands of "memories" per user per day. After nearly 14 months of intense trials, frustrating setbacks, and meticulous breakthroughs, we're excited to be rolling out a custom-built edge ML inference layer capable of dynamically executing twelve lightweight, task-specific encoder–decoder nano models in full 32-bit precision. This new inference layer consumes 75% less memory and achieves 15% lower latency compared to our previous on-device inference stack, ONNX Runtime.
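For readers curious how an on-device layer can juggle a dozen task-specific nano models without ballooning memory, one way to picture it is routing each request to its own small model while bounding how many models stay resident at once. The `NanoModelRouter` below is a hypothetical, dependency-free sketch of that routing-plus-eviction idea – not our actual inference layer, which operates on real encoder–decoder weights rather than the stand-in functions used here.

```python
from collections import OrderedDict
from typing import Callable

ModelFn = Callable[[str], str]  # a loaded "model": text in, text out

class NanoModelRouter:
    """Toy sketch: each task maps to its own lightweight model, and a simple
    LRU policy bounds how many models stay resident in memory at once."""

    def __init__(self, loaders: dict[str, Callable[[], ModelFn]], max_resident: int = 3):
        self._loaders = loaders                          # task name -> model loader
        self._resident: OrderedDict[str, ModelFn] = OrderedDict()
        self._max_resident = max_resident

    def run(self, task: str, text: str) -> str:
        if task not in self._resident:
            if len(self._resident) >= self._max_resident:
                self._resident.popitem(last=False)       # evict least recently used
            self._resident[task] = self._loaders[task]() # lazy-load on first use
        self._resident.move_to_end(task)                 # mark as recently used
        return self._resident[task](text)

# Hypothetical usage with plain functions standing in for nano models:
router = NanoModelRouter({
    "summarize": lambda: (lambda t: t[:60] + "..."),
    "tag":       lambda: (lambda t: ", ".join(sorted(set(t.lower().split()))[:5])),
}, max_resident=2)
print(router.run("summarize", "A long memory captured during this morning's design review about caching."))
print(router.run("tag", "vector search embeddings post filtering"))
```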

To power LTM 2.5-Local, this release also introduces significant improvements to our vector search capabilities across macOS, Linux, and Windows. These enhancements include novel techniques for embedding generation, storage, and retrieval, each built from the ground up to facilitate on-the-fly vector indexing and dynamically scoped search spaces. In academic terms, this approach is referred to as “post-filtering,” and it ensures that search across millions of vectorized memories remains extremely efficient – even on mid-range hardware.
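As a rough illustration of the post-filtering pattern described above, the sketch below ranks a toy in-memory index by cosine similarity first and only then applies a scope predicate (here, a time window) to the ranked hits. The function names and the flat list-based "index" are hypothetical simplifications; the real engine uses proper vector indexes and far more sophisticated scoping.

```python
import math
from datetime import datetime, timedelta

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two (non-zero) vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def post_filtered_search(query_vec, index, in_scope, k=5):
    """Toy post-filtering: rank the whole index by similarity first,
    then drop hits that fall outside the requested scope."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item["vector"]), reverse=True)
    return [item for item in ranked if in_scope(item)][:k]

# Hypothetical usage: restrict results to memories captured in the last 7 days.
index = [
    {"vector": [0.1, 0.9], "captured_at": datetime.now() - timedelta(days=2),  "text": "DB strategy notes"},
    {"vector": [0.8, 0.2], "captured_at": datetime.now() - timedelta(days=40), "text": "Old benchmark run"},
]
last_week = lambda item: item["captured_at"] >= datetime.now() - timedelta(days=7)
print(post_filtered_search([0.2, 0.8], index, in_scope=last_week, k=3))
```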

Looking Ahead: Team Intelligence, Personal Flow

Our users have said it best: scattered knowledge, siloed context, and an endless stream of new information are becoming increasingly difficult to keep up with. With Flat’s backing, we are doubling down on building a seamless, invisible Artificial Memory layer that remembers for you, so you can focus on creating, deciding, and executing.

Next on our roadmap is enabling teams to think, work, and remember as one. Shared Artificial Memory will allow organizations to build a living knowledge base that evolves with every project, decision, and teammate—ensuring that no insight is lost and that collective context is always just a moment away.

For individuals, we're building toward an entirely new layer of flow: a proactive, real-time heads-up display that delivers the right context, exactly when it matters. Think of it as your own on-device assistant—surfacing insights before you go looking, helping you stay focused, make smarter decisions, and never lose the thread. We've already seen early glimpses of this, and yes—it’s as helpful (and as awesome) as you'd hope.

As we look ahead to the exciting opportunities on the horizon, we’re reminded that none of this progress would be possible without the unwavering support of our investors, our dedicated team, our incredible community, and every individual who has been part of this journey with us.

To our community: thank you for the continuous feedback that helps shape every release.

To the Pieces team: thank you for bringing your all every day and for the hard work that makes these visions a reality. This milestone is yours.

And to Flat Capital: Welcome aboard. We're thankful for your support and proud to build alongside the leaders you already champion.

Onward and upward,

Tsavo Knott

Co-Founder & CEO, Pieces
