🟠 🔵 🟣 Can't Decide Between Claude, ChatGPT, and Gemini?
My Advice? Don't.


I mentioned the other day that I like to work with three of the largest generative AI LLMs simultaneously. It raised some questions.
Most people feel they need to pick their AI platform. They try a few, find one that clicks, and commit.
Generally, I’m an advocate of using fewer tools—and using them well. But I find myself working across multiple platforms more often, because I've realized something: choosing between them assumes they're interchangeable. They're not.
Each brings different strengths. And when I'm working on something complex—a research project, a tricky concept, writing that needs to hold up—I want multiple perspectives, not just one.
Note: There are many different models available, some trained in specific areas, so don’t limit your choices to these three. These are the ones I happen to use.
Why They're Not Interchangeable
Each major AI platform has been trained differently, using different datasets, algorithms, and design choices. These aren't small technical details. They shape how each platform approaches problems and formulates responses.
Think of it like consulting three different colleagues, each with their own expertise, worldview, and thinking style.
Gemini is my deep research partner, bringing systematic thoroughness that helps me ground abstract concepts in solid evidence. When I need to chase down connections across disparate sources or validate emerging patterns, Gemini's strength in structured exploration proves invaluable.
Claude has become my primary thinking partner. There's a compatibility here that's hard to articulate—perhaps it's the way Claude holds space for nonlinear thinking, or how it maintains nuance without flattening complexity. When I'm developing frameworks or wrestling with concepts that resist neat categorization, Claude stays with me through the messiness.
ChatGPT, my least-used collaborator, brings a specific marketing sensibility to communications. When I need to refine how I express ideas to different audiences or tighten phrasing for clarity without losing substance, ChatGPT's understanding of contemporary communication patterns proves helpful.
Your Experience Will Differ From Mine
No two individuals’ platform experiences will be the same, because AI is responsive to the way you work with it. Its responses coalesce around your patterns, questions, phrasing, and style of thinking. It shapes what it says next based on the signals you send. Think of it like a cognitive fingerprint. Each one is different.
The roles these platforms play in my workflow may differ from how they work for you. You might find ChatGPT brilliant for deep research, while I use it for communication polish. Or Gemini might become your thinking partner while it's my deep researcher.
Working across platforms yields higher-quality, more well-rounded outputs. You can even ask one platform to critique another, and they will do so, usually in an acceptably objective manner.
How This Plays Out for Me
When I start a complex report or research project, I run my initial queries across Claude and Gemini. Each one surfaces different angles, makes different connections, and brings different evidence. It's not about redundancy—it's about seeing the whole picture.
Then, when I'm synthesizing and developing the ideas, I work primarily with Claude. I'll cross-check with Gemini to make sure my thinking holds water. Sometimes I'll run a pass through ChatGPT for a third perspective.
The one constant across all these interactions? Us. Our ideas. Our knowledge. Our critical thinking. Our instinct and experience. Without these vital ingredients, we may get three different versions of workslop.
Should You Pay for All Three?
I choose to use paid versions of these platforms. The newer models typically provide enhanced capabilities, including better reasoning, longer context windows, memory features that maintain continuity across conversations, and more nuanced responses. Whether those features matter depends on your work and how you use them.
But you certainly don't need to pay. Free versions are increasingly sophisticated and may well be just fine for your needs.
You might pay for the one you use most heavily and use free versions of the others for specific tasks. Or maybe you already have Gemini integrated into your Google Workspace—so exploring Claude or ChatGPT separately gives you additional perspectives without doubling up on costs. Perhaps your organization provides a single platform, and you supplement it with free versions of others for personal projects.
And of course, each platform also offers capabilities beyond conversation: image and video creation, advanced formatting, coding support, and more—which is likely to factor into your choices too.
Writing as Conveyance vs. Writing as Craft
Here's something I've been reflecting on as I listen to polarized reactions to AI—the fierce advocates and the equally fierce critics.
I am a writer. But the craft of writing isn't my primary skill or my ultimate goal.
I write to communicate concepts, strategies and ideas. The purpose of my writing is to convey my thinking. Of course I want my words to be eloquent, clear, and engaging—they won't accomplish their purpose if they're not. But even as an author, the craft of writing itself is not my priority. The framing, sharpening, and articulation of my ideas is.
This distinction matters because it reframes what AI partnership means for someone like me versus a career writer. I'm using technology to bridge the gap between the complexity of my thinking and the clarity of my expression. But for an individual for whom wordcraft is the very heart of their skillset, AI presents a significant challenge.
But what extraordinary times to live in, where, with a few keystrokes and some conversation, we can tap into the vast ecosystem of these big brains.

AVAILABLE NEXT WEEK!
The AI@Work Brief
A regular snapshot of AI in the workplace
1. The "AI-Driven Layoff" Is No Longer Theoretical
Companies are now explicitly linking massive job cuts to AI-driven efficiencies, with October layoffs hitting a two-decade high for the month.
The abstract fear of AI-driven job loss became a hard reality in the past few weeks. US companies announced over 153,000 cuts in October, a 20-year high for the month, according to data from Challenger, Gray & Christmas. Unlike previous announcements, leaders are being direct. Salesforce's CEO, for example, confirmed a 4,000-person cut in customer support, stating bluntly, "I need less heads" due to AI. This follows similar, large-scale announcements from Amazon (14,000 corporate roles) and UPS (48,000) where automation and AI were cited as key factors in restructuring.
2. The "Entry-Level Squeeze" Hits Gen Z
New research from Stanford reveals that while experienced workers remain stable, AI is disproportionately affecting the hiring of recent graduates.
The data is in: AI is hitting entry-level, white-collar jobs first. A new study from the Stanford Digital Economy Lab found that since 2022, employment for workers aged 22-25 in "AI-exposed" jobs (like software development and junior analysis) has fallen by 13%. The researchers suggest AI is particularly good at replacing the "codified knowledge" or "book-learning" tasks that are the traditional starting point for a new graduate's career, putting new pressure on how companies must onboard and train young talent.
3. Rise of the "AI-Adjacent" Job: The New Opportunity
A "silent restructuring" is creating a boom in new roles for non-coders, focusing on human expertise to train and validate AI systems.
While some roles are shrinking, a new "AI-adjacent" job market is booming—and it doesn't require a computer science degree. As companies race to build accurate models, they are desperately seeking domain experts to act as AI trainers and evaluators. New, in-demand roles include "Legal Evaluators" (paralegals fact-checking AI-generated contracts), "Medical Annotators" (nurses labeling clinical data), and "Language & Cultural Experts" (linguists ensuring an AI understands cultural nuance). This trend moves human value from doing the task to judging the task.
4. The Great Workplace AI Disconnect
A new EY survey shows employees are eager to use AI, but a lack of leadership and training is creating widespread anxiety and security risks.
There is a massive gap between employee enthusiasm and leadership execution. A new survey from Ernst & Young (EY) found that 84% of employees are eager to embrace "agentic AI" tools. However, 85% of them are self-taught, learning about AI outside of work. This disconnect is creating anxiety (56% worry about their job security) and risk. A separate Gallup poll confirms the leadership gap, finding that the top barrier to adoption is an "unclear use case," and that only 28% of employees feel their manager actively supports their use of AI.
5. From Tool to Teammate: The "Agentic AI" Shift
The conversation has moved from AI as a simple tool (like ChatGPT) to AI as an autonomous "agent" that can manage entire workflows.
The latest "State of AI in 2025" report from McKinsey highlights that the new frontier is "agentic AI." This is the shift from a passive tool that answers a prompt to an autonomous agent you can delegate tasks to—for example, "Book me the most cost-effective travel to our client in Chicago for next Tuesday, ensuring it aligns with our T&E policy." McKinsey's survey shows 88% of organizations now use AI in at least one function, and the race is on to implement these agent-based systems to handle complex, end-to-end business processes.
6. The AI Rulebook Is Here (and It's Banning Your HR Bots)
The EU's landmark AI Act is now in force, and it specifically bans certain AI applications in the workplace, with others now classified as "High-Risk."
After years of debate, the world's most comprehensive AI regulation is here and has immediate "human impact." The EU AI Act, which will affect any company with EU customers or employees, places an outright ban on using AI for "emotion recognition in the workplace." It also places any AI used in hiring—such as CV-sorting software or automated interview bots—into the "High-Risk" category. This means companies using these tools will face strict new compliance, transparency, and human-oversight requirements, effectively ending the era of the "black box" hiring algorithm.
The newsletter for professionals who realize we're on the cusp of something far greater than prompts and productivity hacks.