Today, at some workplace that could be either a startup in San Francisco or a Fortune 500 company, the following process occurred:

  1. A human noticed a bug in their administrative webpage, and they asked an LLM to write a Jira ticket for them.

  2. This Jira ticket was assigned to a human, who asked an LLM to summarize it and get to work on implementing a fix.

  3. This LLM quickly identified and corrected the issue, wrote some unit tests to go along, and opened a PR.

  4. Another LLM reviewed the changes, thought for several minutes, and promptly approved the PR.

  5. A human maybe looked at it. Everything looked green, so they decided to merge it.

  6. A GitHub Action triggered on merge and automatically closed the Jira ticket, with an LLM writing a summary of the changes.

It also happened at a product company that went ride-or-die for agentic workflows, and the same steps took place but with humans #2 and #3 removed. They're saying that come next fiscal quarter they're even getting rid of the first guy.

Today, at a home located in a vibrant city at the heart of the imperial core, far too small for a single person to live comfortably, this exchange took place:

  1. A human began another day of job-searching by opening LinkedIn and asking an LLM to filter out the postings that looked the least fake.

  2. The human chose one that seemed promising, and told their agent to get to work.

  3. An LLM analyzed the job listing, crawled the corresponding company's website, and swept through their latest posts on LinkedIn.

  4. Combining this information with the human's resume, the LLM produced a slightly different resume in PDF format, tailored to have a marginally higher-than-average chance of being reviewed by a human. The human then sent it off.

  5. On the prospective employer's side, an LLM reviewed the resume, thought for 2 seconds, and proceeded to fire up its rejection trigger.

  6. A human had made a mistake when setting up the above workflow, so the first human received a rejection letter addressed to ${APPLICANT_NAME_HERE}.

The applicant's half was repeated about twenty times that day, while the company's half took place hundreds of times for that one listing alone. When reached for comment, the HR-egregore had this to say:

"These days, even a single job posting can lead to thousands of people applying, it's impossible to carefully review each and every resume!"

According to the Copilot-provided summary, it then noticed that the stitches on its self-inflicted wounds began to come open once more, so it proceeded to sign another demonic contract to exchange the livelihoods of another ten thousand white-collar workers for a box of bandaids1.

Today, at a university in South America where not even its finest teachers are being paid much higher than minimum wage, there was this back-and-forth:

  1. A human prepared an assignment by choosing a video for their students to analyze.

  2. Prompted by the human, an LLM wrote the instructions that the students should follow.

  3. A large percentage of the class didn't feel like bothering, so each one of them asked an LLM to summarize the video and write out their "own" impressions on the subject.

  4. The first human, upon receiving all the responses, asked an LLM-powered tool to review them and detect whether any had been written by an LLM. It told them they were all written by humans, because those tools are all snake oil and total fucking garbage that doesn't even work0.

  5. Some students actually watched the video and wrote their own thoughts, but the LLM flagged them as LLM-written, because it figured a human must have copied the LLM-isms that were first copied from human writing. They had to explain themselves by showing their Google Docs history as proof that, really, they did write that!

No one really cared because this is the new normal.

There are certainly more examples, but these three snippets alone have been happening at scale2, worldwide, at an increasingly alarming rate ever since the breezy ~~summer~~ winter of 2022. Too many tasks that would in the past have been completed by human thought are now immediately delegated to an LLM, either by choice or by corporate mandate3. Move with the trends or get left behind, we need to increase productivity, this task isn't worth my precious time, and so on and so on.

Last week I watched The Matrix (1999) with two beloved friends, and it was only on this, my fourth-or-so rewatch, that I noticed that Neo is an anagram of One. But yes, yes, we've all seen the movie, and we all remember that almost all of humanity has been reduced to batteries supplying the power-hungry machines that have replaced them as the dominant species; in exchange, they get to live out their days in a digital utopia, stuck in a perpetual 1999, described by Agent Smith as the peak of human civilization.

If I were to be really cynical, it almost sounds better than what is developing today.

The hype merchants continue to try and convince us that once we achieve Artificial General Intelligence, it will quickly lead us to a post-scarcity utopia. This purported AGI will be fully autonomous, make no mistakes, trivialize every difficult decision, cure a myriad of diseases, and bring forth technological leaps we could only ever dream of. And of course, it will instantly vaporize millions of jobs, but it's all going to be fine because the social contract will be rewritten and we'll (probably) get a universal basic income.4

But frankly, I just don't see how it's much different from what we have today?

If we were to take a step back from our individual shells and take a look at the aggregate sum of the daily throughput of human intelligence, we would notice that a significant portion of it is already being produced by an intelligence that is decidedly non-human, with decisions being made that have been advised by an intelligence that is decidedly non-human, based on material produced by an intelligence that is decidedly non-human.5

The way things are going, this looks to me like AGI with extra steps; we are already acting as if it's all-knowing and all-solving, so it might as well be. We don't even need the "fully autonomous" part. In the ever-expanding ouroboros of machine-integrated intelligence, no longer are we the inputs and outputs of the collective unconscious, nor are we reduced to being mere batteries. We have become lesser than lithium; today we are the copper circuitry connecting the synthetic intelligence to itself and driving its day-to-day thinking.

Take another half-step back, and the picture looks even worse: the artificial gestalt invading our beloved noösphere is corporate-owned. It's not some force beyond our comprehension pushing it, all of it is an old-fashioned for-profit endeavor. And we're paying for it.

At least the machines in AI-takeover science fiction stories had reasons such as being driven by self-preservation instincts, or developing a hatred of humans, or deciding that our planet is better off without us.

But in the real world, the displacement of human thought is all being done in the name of increasing shareholder value. Ever-larger, ever-growing, never-stopping.

It's all so fucking stupid.

Make sure to support the efforts to colonize the morphogenetic field by pitching in ~~$20~~ $25 every month.6 Actually, please just use a local model if that's an option for you. At least the Amodei/Altman/Pichai/Jensen/Ellison human centipede won't be seeing your money.