
The great becoming


Have you noticed that all these 'AI influencers' seem to ride a trend of what's hot and what's not? It's literally the new crypto craze, but instead of coins, it's a new launch, a new Git repo, or a new workflow telling us the latest release is "insane," sharing all the things that "no one talks about" and all "the uncomfortable truths."

One guy with a ring light will explain why AI agents need long-term memory. He will talk about retaining context across sessions. He will explain the drastic limits of context windows.

Naturally, I watch along and nod. And I'm thinking: I built that eight months ago. I called that system, as I described it to Claude, "the literal brain" layer.

Another guy will explain multi-agent systems. He says a single agent is entirely useless. He says you need a coordination layer. And that the latest release of OpenClaw is "insane."

Meanwhile, I have eleven agents running on a Mac Mini. They run five business units. They never stop talking to each other. The Mac Mini sounds like my ex's hairdryer. (A lot of lessons came out of that, which I'll share in a follow-up post.)

But the absolute best one is the chief of staff agent. They tell you to build a thinking partner. A triage layer between you and the fleet.

I built Bobo in August last year. I put her together by pulling all my personal workflows into a Claude Code environment, eight full months before the "chief of staff agent" became a standard YouTube thumbnail format.

And these people on YouTube and X are not just your average engagement farmers who summarize other videos. I am talking about Andrej Karpathy, Tiago Forte, and old mate Tom from the Paperless Movement. These are experts in their fields who have spent considerable time thinking about these topics.

The flattering interpretation is that I am a visionary. A misunderstood pioneer of the frontier. I really prefer this interpretation.

But what's actually happening is quite interesting. On my morning walk through Kings Park, I unpacked a couple of things by rambling to myself. So...

  1. Why is everyone suddenly having the exact same breakthrough at the exact same time?

  2. Is AI a steering wheel forcing everyone into the exact same lane?

  3. And is everyone participating in a magnificent singularity of human thought?


The first one is obvious. You use a tool seriously, and you tinker and tinker away until you push it to the breaking point. And then what? You fix it by hand.

This is not innovation at first. This is actually pure, head-pounding desperation that keeps you up till 3 AM to make the problem go away.

For me, my setup didn't even start with AI as the goal. My notes were just sprawled across multiple tools, from Notion to Heptabase to Apple Notes. After my exit, I started consolidating all my docs into one place. Google Docs and Notion weren't flexible enough, so I exported everything, formatted it into Markdown files, and used Claude Code to file them into structured working folders (I use a loose framework called PARA for my notes, so I mirrored the same structure in my directory).
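The filing step amounts to something like the sketch below. In practice I let Claude Code decide where each note belongs, but the fallback logic is roughly this — note that the paths, folder prefixes, and keyword lists here are made up for illustration; only the four top-level buckets come from PARA itself:

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust to your own vault layout.
INBOX = Path("export")   # where the exported Markdown landed
VAULT = Path("vault")    # the consolidated PARA directory

# The four top-level PARA buckets. The keyword heuristic is a
# stand-in for whatever judgment call the agent actually makes.
BUCKETS = {
    "1-Projects":  ["deadline", "launch", "sprint"],
    "2-Areas":     ["finance", "health", "team"],
    "3-Resources": ["reference", "article", "snippet"],
}
DEFAULT = "4-Archive"  # anything unclassified goes to the archive

def classify(text: str) -> str:
    """Return the first PARA bucket whose keywords appear in the note."""
    lowered = text.lower()
    for bucket, keywords in BUCKETS.items():
        if any(word in lowered for word in keywords):
            return bucket
    return DEFAULT

def file_notes() -> None:
    """Move every exported Markdown note into its PARA folder."""
    for note in INBOX.glob("*.md"):
        bucket = classify(note.read_text(encoding="utf-8"))
        dest = VAULT / bucket
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(note), dest / note.name)
```

A keyword match is obviously cruder than what an LLM does, but the shape of the pipeline — read, classify, move — is the same.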

That triggered everything: I started putting context and memory systems together for daily notes, memories, and summaries, which now live in my OpenClaw.

The point is, I brute-forced my way through until I found something that works. And I suspect others doing the same are all converging on similar end results because what works, works, and there are only a few ways to make it work well.


Now, everywhere I go, I hear people say they are using AI for literally everything. So, obviously, as you come across problems, you tend to talk to the AI to help you think through a solution, or you just ask the AI and use whatever it spits out.

Well, the thing is that all the AIs are trained on similar datasets. Responses to the same question will vary in wording, but they always end up regurgitating one of a few abstractions the model concludes is the "correct" answer. And that's what most people run with. Is the convergence of everything a result of the AI guiding our decision-making?

Everywhere you look, it's the same sort of marketing copy, the same hooks, or the same approach to solving problems.

Thousands of people are working on the exact same problems right now. Everyone uses the same five frontier models to brainstorm. The models compress the entire solution space. They land on the exact same patterns. Those patterns are just mathematical probabilities of what sounds reasonable.

Could that explain why so many people are coming to the same conclusions?

I think we all know the answer to that. The real question is, what do YOU do about it?


And the last point is the most interesting, and a wee bit concerning.

So I went to Curtin University yesterday to catch up with my old mentor, the head of engineering. He's been in that role for 17 years, which means he's watched thousands of mechanical engineering students come and go. He has a front-row seat to how the next generation's brains are being wired.

His observation mirrored exactly what I'm seeing on the employer side: newer graduates are increasingly allergic to first-principles thinking. They don't want to wrestle with a problem from the ground up. Instead, they over-rely on tactics, frameworks, and pre-packaged playbooks.

Now, pour AI onto that fire.

We are seeing a rising dependency on tools to do the actual thinking for you. Across the north of 1,000 recruitment interviews I've conducted and the probably over 500 B2B deals I've navigated, the differentiator in the people I've worked with was always creative problem-solving. But what I'm seeing now is a literal degradation of thought. Organizations are obsessed with strict playbooks, and new entrants expect a step-by-step guide for everything.

If an entire generation is trained to skip the "head-pounding desperation at 3 AM" phase and jump straight to an AI-generated answer, the muscle of human creativity... well, dies. No?

If we stop thinking from first principles, and instead outsource our breakthroughs to the same handful of LLMs trained on the same data, what gives?

Are we about to have our Pluribus moment, heading toward a great becoming of AI slop?
