Dear Jason,

I read your piece this morning. I enjoyed it. I mean that sincerely. The egg analogy is genuinely clever, the cartoons are sharp, and the dip pen detail is the kind of specific, tactile writerly flourish that makes a reader trust you. You’re good at this. Which is why I need to talk to you about your infrastructure.

You published a 1,500-word argument against artificial intelligence on Substack.

Substack, Jason.

Substack uses machine learning to optimize your email deliverability — deciding which inbox your post lands in, whether it hits the Promotions tab or the Primary tab, and what time it arrives to maximize your open rate. Its recommendation engine — which is AI — decides which non-subscribers see your work on the Substack network, on the app, and in the “You might also like” carousel at the bottom of other people’s posts. The search function that helps readers find your archive is AI. The spam filter protecting your comment section is AI. The analytics dashboard telling you that ten people liked this piece and one person left a comment? AI decided how to calculate, weight, and display that engagement. You are currently standing inside an AI and yelling at AI.

But it gets better, because you embedded a YouTube video.

YouTube’s recommendation algorithm is one of the most powerful artificial intelligence systems ever deployed on the civilian internet. It is the reason your video about the Pro Human AI Declaration will appear next to a video about whether birds are government drones. YouTube’s AI transcribes your audio into auto-captions. YouTube’s AI moderates the comments. YouTube’s AI decides the thumbnail that will generate the most clicks. YouTube’s AI determines which of Google’s advertisers gets to sell protein powder before your film about the existential threat of artificial intelligence plays. You chose to deliver your message about the dangers of AI through a system that has, by several credible academic estimates, done more to algorithmically radicalize human beings than any other technology in history. Bold move.

Then you asked people to share it on social media.

I don’t even know where to start. Facebook’s News Feed is an AI. Instagram’s Explore page is an AI. Twitter’s — sorry, X’s — “For You” timeline is an AI. LinkedIn’s feed is an AI. The share buttons at the bottom of your Substack post connect to a constellation of machine learning systems so vast and interconnected that they make your 33-principle regulatory framework look like a napkin sketch. Which, to be fair, you could do beautifully with that nib.

You mention your inbox. Your inbox, Jason. Gmail uses AI to sort your mail into categories, autocomplete your sentences, suggest your replies, flag your spam, and detect phishing links. If you use Outlook, same. If you use Apple Mail, also same, and now it summarizes your messages for you whether you asked or not. Your inbox is an AI wearing a trench coat and pretending to be a filing cabinet.

You attended a film screening. You bought the ticket through an app, or a website, or Eventbrite — all AI-powered. You probably took an Uber or a Lyft to get there, both of which use AI to calculate your route, set your fare, and match you to a driver. The digital projector that displayed the film uses AI-enhanced image processing. If you texted a friend afterward to say “great film, really redoubled my resolve,” your phone’s keyboard predicted half those words before your thumb arrived.

You wrote that you’re “staring blankly at a bottle of Higgins ink and holding a Hunt 101 imperial nib, wondering exactly how long it will take for a server farm over in Palo Alto to render them completely obsolete.”

Jason. Palo Alto rendered them obsolete in 2011. You just haven’t noticed because you’ve been too busy using the thing that replaced them to complain about the thing that replaced them.

Now — the thing is, you’re not wrong about regulation. You’re not. The people building the most powerful systems should absolutely face independent oversight. They should not get to use the algorithm as a liability shield. They should not be racing toward superintelligence with the governance structure of a fantasy football league. The egg comparison is apt. The 33 principles are sound. The declaration deserves signatures. I agree with approximately 90% of your argument.

I just think it’s important to deliver that argument without pretending you’re not currently chest-deep in the thing you’re arguing against. You’re not writing this by candlelight and sending it by post. You are using seventeen different artificial intelligence systems to tell people that artificial intelligence is dangerous, and not one of them has rendered your pen obsolete, because your pen was never the point. The thinking was the point. The drawing was the point. The cartoons are funny because you are funny, not because the nib is magic.

The nib is a stick with a piece of metal on it. The nib is fine. The nib is going to be fine.

Warmly,

Gloria

She Who Stirs the Storm
