My chat with Scott Hanselman on AI, context, and what developers still need to learn
Scott Hanselman and Tsavo Knott explore AI’s impact on developers, long-term memory, and why curiosity – not just code – still drives real innovation, at the Pieces AI Productivity Summit.
When I first met Scott Hanselman, Vice President of Developer Community at Microsoft, he called us “a Pokémon he needed to catch.”
That kind of infectious enthusiasm is rare, and it’s part of what made our recent fireside chat at the AI Productivity Summit one of the most honest conversations I’ve had about the current state of AI, developer tools, and the future of work.
AI isn’t magic. It’s math and memory.
One of the big themes Scott and I kept circling back to is the gap between what AI promises and what it actually delivers right now.
Tools like GitHub Copilot, Claude, and, of course, Pieces are changing the way developers work – but not in the way some people think.
Scott put it plainly:
“You can’t just vibe your way into enterprise-grade software.”
The idea that AI removes the need for deep technical understanding is not just misleading – it’s dangerous.
He warned against what he called “expert beginners” – developers who can generate a lot of surface-level code but haven’t built the foundational skills to debug or build something enduring.
And honestly, I couldn’t agree more.
Long-term memory isn’t a feature – it’s a necessity
This idea is core to why we built Pieces: developers don’t just need search.
They need memory.
Your tools shouldn’t just react to the file in front of you – they should remember the Stack Overflow post you found at 2 AM last week, the team chat where someone solved that weird token error, and the snippet you saved six projects ago.
Scott described Pieces as the “calm friend at 2 AM.”
Not a generator, but a tutor. I’ll take that.
In a world flooded with auto-complete and boilerplate, long-term context is the only way to rise above basic prompt engineering. AI without memory is like an intern with no notes from yesterday.
Vibes don’t fix production bugs
We talked about how junior developers are coming into this landscape with powerful tools but not always the mentorship or curiosity to match.
Scott said something that really stuck with me: “AI might make uncurious people even less curious.”
That’s a heavy thought.
It’s on us – builders of tools, team leads, educators – to help people ask better questions.
The way you level up as a developer isn’t by generating more code, but by understanding when to question it, when to go deeper, and when to say, “Hold up, let’s look under the hood.”
Agentic workflows need guardrails
We’re both excited about agent-based tooling, but there’s caution baked into our optimism.
I’ve had voice-based arguments with my own AI agents (“Stop deleting my Dart comments!”), and Scott reminded me of the deeper point: agents are great, but someone still has to steer.
We need better signals, better affordances, and better permission models.
The “camera is on” metaphor matters. In an ideal future, long-term memory won’t just run passively – it’ll show when it’s active, respect when it shouldn’t be, and give you the ability to pause, snooze, or scope it down by task or app.
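To make that concrete, here’s a minimal sketch of what a scoped, visible capture policy could look like. To be clear, none of these types or names come from Pieces or any shipping product – they’re hypothetical, just illustrating the idea that capture should be explicit, pausable, and scoped by task or app.

```typescript
// Hypothetical sketch of a scoped memory-capture policy.
// These types are illustrative, not a real product API; they encode the
// "camera is on" idea: capture is explicit, visible, and scoped.

type CaptureScope =
  | { kind: "app"; appId: string }   // only capture within one application
  | { kind: "task"; taskId: string } // only capture while a named task is active
  | { kind: "global" };              // capture everywhere (requires opt-in)

interface CapturePolicy {
  scope: CaptureScope;
  paused: boolean;     // user hit "pause"
  snoozeUntil?: Date;  // user hit "snooze for an hour"
}

class MemoryCapture {
  constructor(
    private policy: CapturePolicy,
    private indicator: (active: boolean) => void // lights the "red button"
  ) {}

  /** Decide whether an event may be recorded, and update the indicator. */
  shouldCapture(appId: string, activeTaskId?: string): boolean {
    const { paused, snoozeUntil } = this.policy;
    const snoozed = snoozeUntil !== undefined && snoozeUntil > new Date();

    const scope = this.policy.scope;
    let inScope = false;
    switch (scope.kind) {
      case "app":    inScope = scope.appId === appId; break;
      case "task":   inScope = scope.taskId === activeTaskId; break;
      case "global": inScope = true; break;
    }

    const active = inScope && !paused && !snoozed;
    this.indicator(active); // never capture silently
    return active;
  }
}
```

The point isn’t this specific shape – it’s that the user can see, at any moment, whether the light is on, and can narrow the beam without turning the whole system off.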
Local vs. cloud: what belongs where?
Scott’s setup is pretty telling: multiple terabytes of storage, split across his work and play devices. He’s bullish on local-first. So are we.
“I’m being recorded right now,” he said, tapping the glowing red button on his Stream Deck. That metaphor extended perfectly to memory systems.
They shouldn’t run silently.
They should give you control, and they should work offline.
We’re already seeing products like Microsoft’s Recall lean into screenshot-based memory capture. That’s one approach, but it opens the door to serious privacy concerns if it’s not scoped correctly.
What we’re aiming for with Pieces is surgical: context that’s helpful, permissioned, and private by default.
Bigger context isn’t always better context
We ended our conversation exploring the trend toward massive context windows: Llama 4 boasting a 10-million-token window, Google and OpenAI chasing similar numbers. But size isn’t everything.
“There’s still the ‘lost in the middle’ problem,” Scott noted. And he’s right – whether you’re a human or a model, retaining and prioritizing the right memory is hard.
I’m increasingly convinced the future isn’t about stuffing more into memory – it’s about intent-aware retrieval.
Models that understand your goals and bring forward what’s actually relevant, whether it happened a minute ago or nine months back.
We’re already pushing hard in that direction.
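Here’s a toy sketch of what I mean by intent-aware retrieval, assuming you already have embeddings from some model: rank stored memories by how well they match your current goal, not by when you saved them. The `Memory` shape and function names are made up for illustration.

```typescript
// Toy sketch of intent-aware retrieval: rank stored snippets by how well
// they match the user's current goal, not by when they were saved.
// All names here are illustrative, not any real product's API.

interface Memory {
  text: string;
  embedding: number[]; // precomputed when the memory was stored
  savedAt: Date;       // kept for display, deliberately NOT used for ranking
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieveByIntent(
  intentEmbedding: number[], // embedding of "what am I trying to do right now?"
  memories: Memory[],
  k = 5
): Memory[] {
  // A nine-month-old snippet outranks yesterday's noise if it fits the goal.
  return [...memories]
    .sort((a, b) =>
      cosine(intentEmbedding, b.embedding) -
      cosine(intentEmbedding, a.embedding))
    .slice(0, k);
}
```

Real systems layer a lot more on top – recency as a tiebreaker, task metadata, re-ranking – but the core inversion is the same: relevance to the goal beats position in the timeline.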
Developers still need depth
To wrap up, I asked Scott what advice he’d give to new developers navigating this AI-powered world.
His response was spot-on:
“Whatever layer you’re working at – go one level deeper. If you’re a React dev, learn how the rendering engine works. If you’re building backend services, read up on the underlying protocol stack. Just pop the hood and drive stick shift for a day.”
He’s right.
Because the next generation of developers won’t win by being fast typists.
They’ll win by being skilled orchestrators. Strategic thinkers. Problem solvers who understand the terrain better than the agents they deploy.
Final thought
AI won’t replace developers, but developers who understand and embrace AI well just might outpace those who don’t.
The question isn’t whether AI can write the code.
The question is: Do you know why that code matters – and what to do when it breaks?
That’s where memory, context, and curiosity meet. And that’s where the future gets exciting.
