Generative AI vs LLMs: Key differences explained
Generative AI encompasses various forms of creative AI, including text, images, and music, while LLMs specifically focus on language, meaning all LLMs are generative AI, but not all generative AI is classified as LLMs.
Imagine this: You're scrolling through Reddit, reading posts on X (aka Twitter), or talking to your coworkers, and words like "Generative AI," "LLM," and "machine learning" keep coming up.
You nod along, but deep down, you might be wondering, What do these words actually mean? And why does everyone seem so excited—or nervous—about them?
If you've ever felt this way, you're definitely not alone (I know I’ve been there 😅). In the last few years, AI has become a hugely popular topic (especially on social media).
As a result, many people seem to be throwing around these “AI words” as if they all mean the same thing.
But what do these terms really mean, and how are they connected? Let’s start with the basics.
What is generative AI?
You can think of Generative AI (GenAI) as an umbrella term. It’s all about creating something new—hence the “generative” in its name.
Whether it’s images, music, text, or code, GenAI tools are designed to take what they’ve learned and generate original content.
This means they’re not just copying what they’ve seen—they’re creating something entirely new that’s based on the prompt you provide (which is why learning a bit about prompt engineering can really help you get the results you want! 😉).
Here are a few examples of GenAI tools you might recognize:
DALL-E and Midjourney are both great tools for generating images! In the past, I’ve used both to generate images for websites I was working on and for social media banners and posts. For example, my banner on X was generated by Midjourney! (Though personally, I think DALL-E is the easiest to use when it comes to writing prompts, so that’s what I use now.)
SoundRaw is a nice choice for creating royalty-free background music for shorts or videos. I don’t have the best ear for music, but even I can use this tool to make something that sounds decent!
ChatGPT and Claude are excellent for writing and editing text. Basically anything “important” that I write nowadays, I’ll send over to Claude for some feedback. Unfortunately, Claude’s free tier has some pretty strict rate limits (only around 40 messages per day unless you subscribe to a paid plan). That’s why I’d rather just access Claude through Pieces!
Grok is amazing when it comes to generating images, especially images based on your X profile! Recently there have been all kinds of trends like “Draw me as an anime character”, “Generate what you think my room looks like based on my X profile”, or “Generate me as a Lego character”.
Here’s how Pieces looks as a Lego:
If you end up generating an image based on your X profile, we’d love to see it! Just tag us @getpieces
But how does GenAI actually create (or generate) these things? 🤔
To put it simply, it’s trained on massive amounts of data to recognize patterns found in that data. Once it learns those patterns, it can then generate similar (but unique) outputs.
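If you like seeing ideas in code, here’s a tiny (and very simplified) Python sketch of that “learn patterns, then generate” loop. To be clear, this is just a toy word-pair model I made up for illustration, nothing like the neural networks behind real GenAI tools, but the two steps are the same:

```python
# Toy illustration only: "learn" which word tends to follow which word,
# then generate new text from those learned patterns.
import random
from collections import defaultdict

training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# "Training": count which words follow each word in the data.
patterns = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    patterns[current_word].append(next_word)

# "Generation": start from a prompt word and keep picking a likely next word.
def generate(prompt_word, length=8):
    output = [prompt_word]
    for _ in range(length):
        options = patterns.get(output[-1])
        if not options:
            break
        output.append(random.choice(options))
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat the cat ate"
```

Real models learn patterns across billions of examples (and far richer patterns than word pairs), but the basic idea of learning from data and then generating something new from those patterns is the same.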
Now that we’ve covered GenAI, let’s talk about LLMs (Large Language Models).
How does an LLM even work?
LLMs are a specific type of generative AI that focuses on text. They're trained on datasets that usually include books, articles, and code (although the type of data really depends on the purpose of the LLM).
Once trained, they can use what they’ve learned to respond to prompts in a way that feels almost... kinda sorta... human.
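Under the hood, most LLMs do this by predicting the next word (or “token”) over and over until a full response takes shape. If you want to see that in action, here’s a minimal Python sketch using the Hugging Face transformers library and GPT-2 (a much smaller, older model than ChatGPT or Claude, but it works the same basic way); it assumes you’ve installed transformers and its dependencies:

```python
# A minimal sketch of prompting a small, local language model.
# GPT-2 just keeps predicting the next token to continue your prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Large language models are", max_new_tokens=25)
print(result[0]["generated_text"])
```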
Here are a few LLMs you might’ve heard of:
GPT (Generative Pre-trained Transformer): Great for help with coding, writing docs, and brainstorming ideas. Some examples are GPT-4, GPT-4o (my personal favorite in the ChatGPT family), and most recently o1.
LLaMA (Large Language Model Meta AI): Built for research. Some examples are Llama 2 7B and Llama 3 8B (both local models that you can access through Pieces). One thing that’s cool about these local models is that you can use them even when you’re not connected to Wi-Fi!
Claude: Known for long-form writing and creative tasks. Claude 3.5 Sonnet is probably my favorite LLM, just because of the way it writes compared to other models.
Since LLMs are focused on language-based data and tasks, they’re ideal for anything text-related (think editing articles, code generation, content creation, language learning, etc.).
Fun fact: Pieces for Developers gives you access to 25 distinct LLMs 🤯.
Don’t like the output one LLM gives you? Just switch to another (you can even switch mid-conversation!).
With Pieces, you’re able to easily move between models like ChatGPT, Claude, and Gemini until you find the response that works best for you.
What are the limitations of GenAI & LLMs?
While GenAI and LLMs can be super useful, they’re not perfect.
They rely on the data they’re trained on, so if that data has biases or incorrect/outdated information, it can affect their output (or the responses you get back).
Another problem is that if you ask them a hard question, or just a question they’re unsure about, they might not admit they don’t know and could simply make up an answer instead. This can be especially annoying when you’re trying to learn something new and end up going down the wrong path because of these “hallucinations”.
What’s the difference between generative AI vs large language models?
Now we know a bit about GenAI and LLMs, but what's the real difference?
Generative AI is the umbrella term covering everything from text generation to images, music, and more. LLMs are a specific part of GenAI focused on creating and understanding language.
Here's a simple way to think about it: LLMs are generative AI, but generative AI is not always an LLM. Generative AI is the broad category for any AI that creates something new, while LLMs are a type of generative AI focused specifically on text.
That’s why earlier in the article we mentioned ChatGPT and Claude in both the GenAI and LLM sections. ChatGPT is both Generative AI and an LLM. On the other hand, DALL-E, which is used for image generation, is Generative AI but not an LLM.
What is the hierarchy of artificial intelligence?
Now that we understand a bit about Generative AI and LLMs, how do they fit into the bigger picture of “AI” as a whole?
To make sense of it, think of AI as a set of layers (like an onion!), each getting more specific as you get deeper.
The image above breaks it down like this:
Artificial Intelligence (AI)
This is the broadest category and includes anything that mimics human intelligence. This could include conversational AI (ChatGPT), image generation (DALL-E), the AI that powers self-driving cars, and so much more! This is where it all starts.
Machine Learning (ML)
An essential part of AI that helps systems learn from data and get better over time. The more data the system processes, the more accurate its predictions and decisions become.
Deep Learning (DL)
A type of Machine Learning that uses neural networks to recognize patterns and handle complex tasks.
Generative AI (GenAI)
Built on Deep Learning, this is where AI starts creating new things, like images, music, text, and code.
LLMs (Large Language Models)
A specific type of Generative AI focused on language. LLMs power tools like ChatGPT, Claude, and Pieces!
Integrating generative AI and LLMs in your workflow
If you’ve made it this far, you’re well on your way to understanding AI (congrats!). But you might be thinking, how can I actually use AI in my work or daily life? 🤔
Here are some practical ways to get started:
Create engaging content
Tools like ChatGPT or Claude help you come up with ideas, draft blog posts, and create outlines quickly.
If you enjoy writing your articles/notes in Obsidian, the Pieces Obsidian plugin is super useful. Its long-term memory engine keeps your workflow context accessible, while the Pieces Copilot is there to answer any questions as you write.
Leverage developer tools
Use code generation tools to save code snippets and generate code directly in tools like VS Code, Chrome, or Obsidian.
Combine Pieces with GitHub Copilot—GitHub Copilot autocompletes your code while Pieces has access to all of your workflow context to answer any questions that you might have.
Generate unique images
Tools like DALL-E, Midjourney, or Grok let you create custom images for presentations or social media.
If the AI-generated graphics aren’t perfect (it is pretty difficult to get them exactly the way you want them to look), you can use Canva to edit them (they even have some AI-powered editing tools available now).
Build interactive chatbots
Chatbots can act as your coding assistant, answer questions about your portfolio, act as a virtual therapist or astrologer, or even help diagnose medical issues (which is probably a terrible idea!).
You can integrate chatbots into your coding projects using APIs like OpenAI’s API (for models like ChatGPT) or Anthropic’s Claude API. Or, you can use one of our SDKs to call your preferred LLM through Pieces OS—for free.
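As a rough starting point, here’s a minimal sketch of a terminal chatbot using OpenAI’s Python SDK. It assumes you’ve run pip install openai and set an OPENAI_API_KEY environment variable; the model name is just one option, and you could swap in Anthropic’s SDK (or a Pieces SDK) instead:

```python
# A minimal sketch of a chatbot loop using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful coding assistant."}]

while True:
    user_message = input("You: ")
    if user_message.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_message})

    # Send the whole conversation so the model has context for its reply.
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content

    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```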
Hopefully, now you have some ideas on how you can integrate AI into your workflow! Honestly, whatever you need “AI help” with, there’s probably something out there for you (if not, then you could even build it yourself).
That said, GenAI and LLMs have quite literally changed everything. And not just for developers.
People who in the past could barely draw stick figures are now creating amazingly detailed works of art (if you don’t believe me, take a look at Midjourney’s Explore page).
And those who never thought they’d try coding are now building their own simple websites and games.
These are just a few examples. GenAI and LLMs are making it easier than ever for anyone to create, code, and bring their ideas to life. The tools are here – what will you do with them?