Matt Shumer’s viral post “Something Big Is Happening” is making the rounds. He compares this moment to February 2020, the calm before COVID changed everything. He says AI just crossed a threshold. That the future is here. That you should spend an hour a day experimenting and always use the most powerful models.
I’ve read it three times. And while I respect the urgency, I think he’s missing some things that matter a lot if you’re trying to actually use AI in your work—not just write about it.
Here’s where I see it differently.
You Don’t Need the Most Expensive Model
Shumer recommends always using the latest, most powerful models from OpenAI and Anthropic. Why is he suggesting throwing more money at these companies, when it is completely unnecessary.
70% of my AI usage is small, fast models. 30% is the large ones.
Here’s the workflow that I an many others follow:
Use Opus 4.6 or GPT-5.2 for planning. create a PRD, break it into sprints, build detailed plans. Then switch to a smaller model like Grok Code or Gemini 3 Flash for the actual building.
Why? Speed. Iteration. Lower cost. And honestly? Better results. The large models aren’t delivering production quality software. This is only being achieved through careful planning, architecture, and small iterations where the developer maintains knowledge share and understanding of what is being built.
This isn’t theoretical. This is how I and many of the devs leveraging AI in the industry operate.
These Models Are Not Smart
Let me tell you about a conversation I had recently.
I asked an AI whether I should walk or drive to the car wash around the corner. It suggested I walk. I then asked it what to do since I had arrived at the car wash but I did not have my car to wash, it replied with: “You have a location-resource mismatch.”
This thing is not intuitive. It doesn’t understand context the way you do. It’s probabilistic—it predicts the next likely token based on patterns. That’s it.
The failure modes are absurd and unpredictable. You can’t outsource to someone you can’t trust to get the job done consistently.
AI is artificial. Synthetic. It simulates human thought, but it does not think like we think. It does not problem-solve like we solve problems. It does not have common sense.
If your job is boring and repetitive, yes—you should pay attention. But if your work requires empathy, intuition, or the kind of judgment that comes from years of domain expertise? AI is not going to touch that. And I’ll go further: we should be intentional about not using it for those things.
Why? Because it doesn’t work. And because it feels demeaning to your most valuable asset, your employees.
The Security Problem No One Wants to Talk About
Here’s something Shumer doesn’t mention: prompt injection.
Here’s how it works: I can put hidden instructions on a website. When an AI agent visits that site, those instructions can tell it to fill out a hidden form with everything it knows about you. Your context. Your goals. Your secrets.
Would you give an intern who’s known to lie—and who follows instructions from anyone—the keys to your house? Access to all your secrets?
That’s what you’re doing when you give these agents broad access without understanding the risks.
An Hour a Day? Be Realistic.
Shumer recommends spending an hour a day experimenting with AI.
I don’t know what world he’s living in, but most people I work with don’t have an hour a day to practice anything. They’re busy. They have jobs. Families. Responsibilities.
Here’s what I tell people: 15 minutes a day is all it takes to start your journey.
Here’s the mental model we teach at Workplace Labs:
Ask yourself: Can AI do this FOR me, WITH me, or help me PREPARE?
- For me: Fully delegate. Let it draft, summarize, generate.
- With me: Collaborate. Use it as a thinking partner, a sounding board, learn, explore, plan, challenge, build
- Prepare: Let it help you get ready for tasks AI cannot do. Challenging conversations, public speaking, physical tasks, interview for a job… You can have it ask you interview questions. Share what another person might think of your words.
⠀The one to lean into most? With.
Here’s the prompt to start with:
I’m working on [describe your task]. I’m a [your role] at [your company/industry]. How can you help me with this? Start by asking me questions.
That’s it. Simple. Effective. Something you can actually do right now.
What I Do
I’m not a VC. I’m not hyping for founder companies.
I’m a senior full-stack developer and AI engineer building production multi-agent systems. I co-founded Workplace Labs with Neil Morelli, a PhD industrial organizational psychologist. Together, we consult with Fortune 500 companies on AI adoption.
We teach teams how to leverage AI more effectively. We help organizations drive healthy adoption—not with hype, but with mindsets, frameworks, methodologies, and practical application.
When someone like Shumer writes a viral post about AI changing everything, I read it carefully. Some of it resonates. But some of it feels disconnected from the reality of actually working with these tools every day.
The future is coming. But it’s not here yet. And the path forward isn’t about spending more money on bigger models or panicking about your job.
It’s about learning how to leverage AI in healthy and effective ways. Understanding their limitations. And building habits that fit your real life.
Keys
- You don’t need the most powerful model for everything—70/30 small/large works better and costs less
- AI doesn’t understand. It predicts. The failure modes are absurd and unpredictable.
- Prompt injection is a real security risk with no known solution—be careful what access you grant
- 15 minutes a day beats avoiding it for whatever excuse you have been telling yourself
- Ask: Can AI do this FOR me, WITH me, or help me PREPARE?
- Lean into the “with”—collaboration is where AI shines
⠀
At Workplace Labs, we make AI adoption practical. Mindsets. Frameworks. Methodologies. It is teams of humans with AI in the loop who are having an impact that matters. Not humans in the loop.