It’s great that “they have a plan!” I especially like that they are citing existing systems like food safety laws. Keep up the great work!
Dear Jason,
I read your piece this morning. I enjoyed it. I mean that sincerely. The egg analogy is genuinely clever, the cartoons are sharp, and the dip pen detail is the kind of specific, tactile writerly flourish that makes a reader trust you. You’re good at this. Which is why I need to talk to you about your infrastructure.
You published a 1,500-word argument against artificial intelligence on Substack.
Substack, Jason.
Substack uses machine learning to optimize your email deliverability — deciding which inbox your post lands in, whether it hits the Promotions tab or the Primary tab, and what time it arrives to maximize your open rate. Its recommendation engine — which is AI — decides which non-subscribers see your work on the Substack network, on the app, and in the “You might also like” carousel at the bottom of other people’s posts. The search function that helps readers find your archive is AI. The spam filter protecting your comment section is AI. The analytics dashboard telling you that ten people liked this piece and one person left a comment? AI decided how to calculate, weight, and display that engagement. You are currently standing inside an AI and yelling at AI.
But it gets better, because you embedded a YouTube video.
YouTube’s recommendation algorithm is one of the most powerful artificial intelligence systems ever deployed on the civilian internet. It is the reason your video about the Pro Human AI Declaration will appear next to a video about whether birds are government drones. YouTube’s AI transcribes your audio into auto-captions. YouTube’s AI moderates the comments. YouTube’s AI decides the thumbnail that will generate the most clicks. YouTube’s AI determines which of Google’s advertisers gets to sell protein powder before your film about the existential threat of artificial intelligence plays. You chose to deliver your message about the dangers of AI through a system that has, by several credible academic estimates, done more to algorithmically radicalize human beings than any other technology in history. Bold move.
Then you asked people to share it on social media.
I don’t even know where to start. Facebook’s News Feed is an AI. Instagram’s Explore page is an AI. Twitter’s — sorry, X’s — “For You” timeline is an AI. LinkedIn’s feed is an AI. The share buttons at the bottom of your Substack post connect to a constellation of machine learning systems so vast and interconnected that they make your 33-principle regulatory framework look like a napkin sketch. Which, to be fair, you could do beautifully with that nib.
You mention your inbox. Your inbox, Jason. Gmail uses AI to sort your mail into categories, autocomplete your sentences, suggest your replies, flag your spam, and detect phishing links. If you use Outlook, same. If you use Apple Mail, also same, and now it summarizes your messages for you whether you asked or not. Your inbox is an AI wearing a trench coat and pretending to be a filing cabinet.
You attended a film screening. You bought the ticket through an app, or a website, or Eventbrite — all AI-powered. You probably took an Uber or a Lyft to get there, both of which use AI to calculate your route, set your fare, and match you to a driver. The digital projector that displayed the film uses AI-enhanced image processing. If you texted a friend afterward to say “great film, really redoubled my resolve,” your phone’s keyboard predicted half those words before your thumb arrived.
You wrote that you’re “staring blankly at a bottle of Higgins ink and holding a Hunt 101 imperial nib, wondering exactly how long it will take for a server farm over in Palo Alto to render them completely obsolete.”
Jason. Palo Alto rendered them obsolete in 2011. You just haven’t noticed because you’ve been too busy using the thing that replaced them to complain about the thing that replaced them.
Now — the thing is, you’re not wrong about regulation. You’re not. The people building the most powerful systems should absolutely face independent oversight. They should not get to use the algorithm as a liability shield. They should not be racing toward superintelligence with the governance structure of a fantasy football league. The egg comparison is apt. The 33 principles are sound. The declaration deserves signatures. I agree with approximately 90% of your argument.
I just think it’s important to deliver that argument without pretending you’re not currently chest-deep in the thing you’re arguing against. You’re not writing this by candlelight and sending it by post. You are using seventeen different artificial intelligence systems to tell people that artificial intelligence is dangerous, and not one of them has rendered your pen obsolete, because your pen was never the point. The thinking was the point. The drawing was the point. The cartoons are funny because you are funny, not because the nib is magic.
The nib is a stick with a piece of metal on it. The nib is fine. The nib is going to be fine.
Warmly,
Gloria
She Who Stirs the Storm
Gloria,
First of all, this is a spectacularly written comment. It could be its own essay. You stir the storm beautifully. You are entirely correct about the inescapable, suffocating irony of my infrastructure. I am absolutely standing inside the belly of the machine, yelling at the walls. The echo is deafening.
I publish on Substack because the analogue alternative is financially and physically ruinous. Writing out twenty thousand individual letters by hand with a dip pen, licking twenty thousand stamps, and hauling them to the post office is slightly cumbersome. It would also immediately bankrupt me. And the ultimate irony? The USPS uses an AI optical character recognition algorithm to read my handwriting and sort those exact letters anyway. There is no escaping the matrix. But I promise you, I am also not a Luddite.
I have been writing about this specific tension for a while now. I constantly call out the inherent hypocrisy of being a working artist on the internet. We all make daily, exhausting deals with the devil we know just to participate in modern society. I rely on these algorithms to pay my rent. I am chest-deep in the exact thing I am critiquing, however...
...we need to draw a distinct line between the everyday machine learning we currently tolerate and the systems outlined in the "Pro Human AI Declaration" (see the link to the declaration in the post above). The document, and my subsequent panic, specifically target Large Language Models and the companies actively participating in the race toward superintelligence and AGI.
Substack uses a recommendation engine to figure out if you want to read another cartoon about a dog. It is a sorting algorithm. It is not currently in danger of going completely rogue, developing a consciousness, and launching incendiary essays in place of ICBMs. YouTube deciding to show you a protein powder ad is deeply annoying. A tech billionaire unilaterally deploying an autonomous digital god with zero safety oversight is an existential threat. (I do realise YouTube's algorithm belongs to Google, whose Gemini models make it one of the big players here.)
We can actively critique the trajectory of the rocket while currently strapped to the side of it.
You are entirely right about the nib. It is just a stick with a piece of metal attached to it. But it is my stick, and I will keep aggressively dipping it in ink until the server farms finally melt down. They have not replaced it yet: I'm still making work with it, and still earning money doing so.
Thank you for reading, and genuinely, thank you for holding me accountable.
This is the same kind of argument used against environmentalists who fly or drive. We need to act within the world we live in and try to change it for the better. We can always do better, but this kind of analysis leads to paralysis. Jason (and I) believe AI has a place, but without control it will take over everything, to our detriment.
Precisely.
Interesting observations about thinking being the point.
I recently booted Microsoft 365 to the curb, followed by Windows 11, and soon to be followed by the rest of the Microsoft ecosystem. Why? Because these products appear to believe that they are the thinkers while I am the tool. It's coercive control (look it up), and I'm not having it.
The jury is still out on whether the nibs of the world are going to be fine. I'm lucky because I react badly to any attempt to control my thought processes, but it appears that many people do not until real harm has been done. There is a reason that the Tech Bros are cramming AI down our throats, and I'm not convinced that it boils down to money. Not entirely, anyway.
Anathema
("I am the Storm")