AI on the frontlines: a developer advocate roundtable
Six dev advocates discuss AI tools, skills, and challenges shaping modern developer workflows and collaboration.
At the Pieces AI Productivity Summit, six developer advocates from across the tech industry sat down for a candid panel chat. No keynotes, no slides, just real conversation.
From debugging horror stories to why AI still can't replace the human tone of a blog post, the session offered a front-row seat to the real impact of AI in developer workflows.
Meet the advocates
Francesco (Daily Dev): Developer Advocate, serious-face joker, known for deadpan delivery and surprise punchlines.
Erin (Galileo): Developer Experience Engineer by day, stand-up comic by night. Also, host of a podcast about dev rel and pickles.
Ebony (Block): Developer Advocate and aspiring nail artist. Recently turned her hand to gel polish between GitHub pushes.
Terry (Quadrant): Data Scientist and Developer Advocate who’s also taken his stand-up sets to stages in New Zealand.
Chloe (Google): Developer Relations Engineer, once played Fiona in Shrek: The Musical. No more green face paint, please.
Ali (Pieces): Host and Senior Developer Advocate at Pieces. Into RC cars, AI, and making frontend work… somehow.
Check out the full dev rel panel during the Pieces AI Summit. The panel starts at 2:20:50.
The everyday stack: what they actually use
The panel began with a dive into the tools everyone was using. Gemini, GitHub Copilot, and ChatGPT all made appearances, but it was clear that developer workflows extended well beyond the standard fare.
Pieces came up as part of real workflows, for its code management and its role as a memory assistant.
Goose, a local AI agent, got multiple shoutouts for its role in debugging: no drama, just fast fixes.
Instead of losing hours on Stack Overflow, developers now rely on AI as an invisible partner in problem-solving.
Warp, a smart terminal, and even Vim got some love, highlighting how traditional tools are being modernized with AI smarts layered in.

What stood out wasn’t just which tools were in rotation, but how naturally they were embedded into daily work. One panelist used AI to build a full-featured ticketing page for the summit itself despite having little frontend experience.
AI didn’t replace the developer; it expanded the scope of what they could pull off.
Trust and friction: AI adoption inside teams
While individual workflows were full of AI, the panelists acknowledged that broader team adoption still had friction points.
One of the biggest hurdles was perception: many still viewed AI as a chatbot gimmick.
But once it’s inside the developer environment, acting more like a helpful teammate than a search engine, the value becomes hard to ignore.
There was a shared understanding that the real resistance often didn’t come from developers.
Legal, marketing, and other non-engineering teams sometimes hesitated more.
One story involved a journalist, skeptical at first, who became a convert after using Goose to draft an article. Another panelist mentioned a bail bondsman who built an app with AI, without knowing how to code.
The implication was clear: AI is already practical for people who don't code. The tools are reaching across roles and breaking down technical gatekeeping.
But scaling that across organizations still requires advocacy, education, and trust.
Still learning, still doing – just differently
The inevitable question surfaced: if AI can write the code, is there a point to learning it?
The consensus was nuanced.
Syntax memorization might be losing its edge, but structural thinking remains crucial: breaking down problems into tasks, and understanding data flow, security, and architectural trade-offs.
Knowing the "why" behind the code is becoming more valuable than the exact keystrokes.
Chloe, standing in for the Google perspective, emphasized the increasing importance of guardrails, especially as consumer-facing AI products grow. Tools are powerful, but they need constraints and careful design to ensure they’re safe.
Others pointed to DevOps as the rising skill area: understanding how to test, monitor, and ship AI-driven applications.
With models behaving in non-deterministic ways, traditional QA techniques don’t always apply.
Knowing how systems interconnect is more relevant than obsessing over elegant CSS.
Prompting, orchestration, and tool fluency were also highlighted: not just using AI, but using it well.
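The point about non-deterministic models breaking traditional QA can be made concrete: instead of asserting an exact output string, tests assert properties that any acceptable output must satisfy. A minimal sketch, using a stand-in function (`fake_summarize` is illustrative, not a real model API):

```python
import random

def fake_summarize(text: str) -> str:
    """Stand-in for a non-deterministic model call (illustrative only)."""
    openers = ["In short,", "To summarize,", "Briefly,"]
    return f"{random.choice(openers)} {text.split('.')[0]}."

def check_summary(summary: str, source: str) -> bool:
    """Property-based checks: rather than matching one exact string,
    verify invariants every acceptable summary should satisfy."""
    return (
        len(summary) < len(source) + 20                     # roughly shorter than the source
        and summary.endswith(".")                           # well-formed sentence
        and any(w in summary for w in source.split()[:5])   # grounded in the source text
    )

source = "The panel discussed AI tools. They also covered memory and agents."
for _ in range(5):  # the output varies on each call, but the properties hold
    assert check_summary(fake_summarize(source), source)
```

The same idea scales up to real LLM pipelines: run the model several times and assert invariants (schema validity, length bounds, grounding) rather than string equality.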
Context is king: the memory problem
Every developer has hit the wall of AI context limits.
You switch tools, you lose history.
You rerun prompts, and you get wildly different outputs.
The panelists voiced collective fatigue with re-explaining the same setup across different LLMs.
This sparked discussion around memory and continuity.
Tools like Pieces aim to solve that by tracking past work, enabling queries like “What was I working on last week?” Others talked about semantic caching: recognizing different phrasings of the same question as equivalent queries, saving both time and cost.
The wish list was clear: smarter assistants that don’t just generate outputs, but remember the user’s goals, constraints, and history across sessions and tools.
What’s next: agents, autonomy, and better interfaces
Looking ahead, the panelists shared the trends that had them most excited.
AI agents were a recurring theme, especially those powered by open standards like the Model Context Protocol (MCP), which allow agents to plug into tools like GitHub or Jira without reinventing the wheel.
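The appeal of a standard like MCP is that tools describe themselves in a common shape, so any agent can discover and invoke them without bespoke glue code per integration. A minimal sketch of that pattern (the `ToolRegistry` class and method names are illustrative, not the actual MCP spec or SDK):

```python
from typing import Callable

class ToolRegistry:
    """Illustrative registry: tools share one interface, so an agent can
    discover and call any of them uniformly."""
    def __init__(self):
        self.tools: dict[str, dict] = {}

    def register(self, name: str, description: str,
                 handler: Callable[[dict], str]) -> None:
        self.tools[name] = {"description": description, "handler": handler}

    def list_tools(self) -> list[str]:
        """Roughly what an agent sees when it asks a server what it offers."""
        return sorted(self.tools)

    def call(self, name: str, arguments: dict) -> str:
        """Uniform invocation: every tool takes a dict of arguments."""
        return self.tools[name]["handler"](arguments)

registry = ToolRegistry()
registry.register("create_issue", "File a tracker issue",
                  lambda args: f"Issue created: {args['title']}")
print(registry.list_tools())                                  # ['create_issue']
print(registry.call("create_issue", {"title": "Fix login bug"}))
```

Real MCP layers this idea over a wire protocol, so the agent and the tool server can live in different processes or services.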
More than one panelist celebrated content summarization as an unexpected win.
From Slack rants trimmed into polite responses, to transcripts turned into outlines, AI was quietly smoothing over communication hiccups and saving relationships.
But alongside the enthusiasm came caution.
Several speakers warned against stacking too many tools without understanding what each one was doing. One compared it to grabbing every new Lego brick without building a coherent structure.
The goal isn’t more tools – it’s smarter, leaner systems that amplify creativity and reduce friction.
Where AI still falls short
To wrap things up, the group reflected on the uniquely human parts of developer advocacy that AI still can’t touch.
Brand tone and voice came up immediately. Drafting copy is one thing, but knowing what feels “on-brand” is still a gut check that no LLM can replicate.
Understanding the emotional temperature of a community or deciding how to respond to a weird DM?
That’s still squarely human work.
Test coverage was another area where AI helped, but didn’t fully replace judgment.
Generating boilerplate tests is easy; knowing what’s worth testing isn’t.
And then there’s the logistical sprawl of DevRel itself: event planning, content production, community feedback, workshops.
Some panelists have started cloning parts of themselves, literally using AI to replicate workshop scripts or generate drafts, but acknowledged that the spark, the improvisation, still needs to come from a person.
One particularly memorable anecdote involved someone getting caught using an AI clone of themselves to run a 101 workshop. It almost worked. Almost.
Real humans, real tools, real adaptation
The panel didn’t wrap with dramatic pronouncements or starry-eyed predictions.
Instead, it closed on something simpler: this group of developers is adapting – quickly, creatively, and without pretending they have all the answers.
They’re using AI because it’s useful, not because it’s trendy.
They’re honest about what works, what doesn’t, and what still feels weird.
And they’re building systems where humans are still at the center, even as the machines get louder, faster, and smarter.
