The Pro Human AI Declaration & the Regulation of Eggs.
33 basic principles to ensure artificial intelligence actually benefits humanity instead of destroying it...
Processing eggs for human consumption requires basic adherence to some pretty simple rules…
This level of simple oversight includes mandatory factory inspections, the forced destruction of non-compliant eggs, and potential prosecution under laws like the 1970 Egg Products Inspection Act to protect the public from food poisoning. Now… Because advanced AI systems arguably pose a slightly bigger danger to the public than eggs, AI developers should be subject to at least the same level of rigorous, independent regulatory oversight as egg processors. But they’re not.
I am currently staring blankly at a bottle of Higgins ink and holding a Hunt 101 imperial nib, wondering exactly how long it will take for a server farm over in Palo Alto to render them completely obsolete… I just can’t stop thinking about those fucking eggs. It makes me angry. (And a bit hungry.)
I was invited to a screening of “AI Doc: Or How I Became an Apocaloptimist” in New York last night, and it redoubled my resolve: the worst people in the world, who are building the most powerful technology in the world, need to be reined in before the train leaves the station.
Look. The tech industry desperately wants us to believe artificial intelligence is a completely uncontrollable force of nature, much like some kind of inexorable hurricane or a sudden bout of food poisoning from a batch of bad lettuce. They want us to politely surrender to the algorithm while they digitise our souls.
Thankfully, an oddly comforting coalition of scientists, faith leaders, child safety experts, and thoroughly exhausted activists completely disagrees. I don’t know if you’ve seen it, but I think it’s important to share, so I’m going to do my best to write out what it all means below…
So.
They just launched the Pro Human AI Declaration. (You can read the entire thing and add your name to the movement at HumanStatement.org.) It’s basically a 33-principle framework designed to ensure this technology actually benefits humanity, rather than just helping a tech bro with a superyacht buy a slightly larger superyacht.
The YouTube channel Siliconversations breaks down the entire document in a fantastic new video below, saving you the trouble of reading it while shaking with anxiety.
Here is the core, horrible truth we must face:
We absolutely must retain the unrestricted authority to guide, understand, and override AI decisions. We can’t allow machines to operate completely outside of human comprehension. If my toaster burns my bagel, I unplug it. If a supercomputer decides my cartooning career (or ability to breathe) is inefficient, we need a mandatory off switch. It isn’t that complicated.
If you enjoy my work and would like to support a human artist making things with their hands, please upgrade to become a paid subscriber (only $1 per week).
The declaration demands we prioritise safety over speed. We have to halt the super-intelligence race until we reach a broad scientific consensus, regardless of how paranoid we are about China eating our lunch.
Every single powerful system requires a mandatory off switch. These developers need to abandon reckless architectures that allow autonomous self-improvement or self-replication. Because right now, we’re barrelling towards a pretty bleak future, where I have to prove to a piece of software that I am legally allowed to breathe, all while staring directly into the dead, unblinking digital eye of a customer service chatbot named Kevin...
...who actively refuses to acknowledge my existence.
We also need independent oversight. Highly autonomous systems require the exact same rigorous review process we apply to aviation safety or a dodgy batch of farmer’s market prawns. Developers cannot use the algorithm as a giant liability shield. They must bear full legal responsibility for defects and harms. If your code ruins a life, you buy the lawsuit.
The framework actively stands against AI monopolies. We need to make sure the economic benefits of this technology are shared broadly across society. Major societal transitions require actual democratic support, not unilateral decrees handed down by creepy billionaires who drink meal replacement shakes instead of eating lunch.
Transparency is an absolute requirement. Bots should explicitly identify themselves as non-human to prevent public deception.

Finally, and this is the most baffling part: these systems should never be granted legal personhood. We need to stop designing machines in a way that suggests they deserve actual human rights. It is just software. It does NOT need a fucking passport.

These principles enjoy widespread, bipartisan support among actual human voters. They give us a highly logical blueprint for future legislation and our ultimate survival.
So, I’m going to keep clutching my dip pen and demanding that the robots stay out of my business. Go read the full declaration at HumanStatement.org and watch the full breakdown right here:
Aaaaanyway…
‘til next time!
Your pal,
PS. Look, if this actually did something for your brain (or at least distracted you from the creeping dread of your own inbox for six minutes), please consider restacking this and sharing it with your people. It’s the only way the word spreads.

Dear Jason,
I read your piece this morning. I enjoyed it. I mean that sincerely. The egg analogy is genuinely clever, the cartoons are sharp, and the dip pen detail is the kind of specific, tactile writerly flourish that makes a reader trust you. You’re good at this. Which is why I need to talk to you about your infrastructure.
You published a 1,500-word argument against artificial intelligence on Substack.
Substack, Jason.
Substack uses machine learning to optimize your email deliverability — deciding which inbox your post lands in, whether it hits the Promotions tab or the Primary tab, and what time it arrives to maximize your open rate. Its recommendation engine — which is AI — decides which non-subscribers see your work on the Substack network, on the app, and in the “You might also like” carousel at the bottom of other people’s posts. The search function that helps readers find your archive is AI. The spam filter protecting your comment section is AI. The analytics dashboard telling you that ten people liked this piece and one person left a comment? AI decided how to calculate, weight, and display that engagement. You are currently standing inside an AI and yelling at AI.
But it gets better, because you embedded a YouTube video.
YouTube’s recommendation algorithm is one of the most powerful artificial intelligence systems ever deployed on the civilian internet. It is the reason your video about the Pro Human AI Declaration will appear next to a video about whether birds are government drones. YouTube’s AI transcribes your audio into auto-captions. YouTube’s AI moderates the comments. YouTube’s AI decides the thumbnail that will generate the most clicks. YouTube’s AI determines which of Google’s advertisers gets to sell protein powder before your film about the existential threat of artificial intelligence plays. You chose to deliver your message about the dangers of AI through a system that has, by several credible academic estimates, done more to algorithmically radicalize human beings than any other technology in history. Bold move.
Then you asked people to share it on social media.
I don’t even know where to start. Facebook’s News Feed is an AI. Instagram’s Explore page is an AI. Twitter’s — sorry, X’s — “For You” timeline is an AI. LinkedIn’s feed is an AI. The share buttons at the bottom of your Substack post connect to a constellation of machine learning systems so vast and interconnected that they make your 33-principle regulatory framework look like a napkin sketch. Which, to be fair, you could do beautifully with that nib.
You mention your inbox. Your inbox, Jason. Gmail uses AI to sort your mail into categories, autocomplete your sentences, suggest your replies, flag your spam, and detect phishing links. If you use Outlook, same. If you use Apple Mail, also same, and now it summarizes your messages for you whether you asked or not. Your inbox is an AI wearing a trench coat and pretending to be a filing cabinet.
You attended a film screening. You bought the ticket through an app, or a website, or Eventbrite — all AI-powered. You probably took an Uber or a Lyft to get there, both of which use AI to calculate your route, set your fare, and match you to a driver. The digital projector that displayed the film uses AI-enhanced image processing. If you texted a friend afterward to say “great film, really redoubled my resolve,” your phone’s keyboard predicted half those words before your thumb arrived.
You wrote that you’re “staring blankly at a bottle of Higgins ink and holding a Hunt 101 imperial nib, wondering exactly how long it will take for a server farm over in Palo Alto to render them completely obsolete.”
Jason. Palo Alto rendered them obsolete in 2011. You just haven’t noticed because you’ve been too busy using the thing that replaced them to complain about the thing that replaced them.
Now — the thing is, you’re not wrong about regulation. You’re not. The people building the most powerful systems should absolutely face independent oversight. They should not get to use the algorithm as a liability shield. They should not be racing toward superintelligence with the governance structure of a fantasy football league. The egg comparison is apt. The 33 principles are sound. The declaration deserves signatures. I agree with approximately 90% of your argument.
I just think it’s important to deliver that argument without pretending you’re not currently chest-deep in the thing you’re arguing against. You’re not writing this by candlelight and sending it by post. You are using seventeen different artificial intelligence systems to tell people that artificial intelligence is dangerous, and not one of them has rendered your pen obsolete, because your pen was never the point. The thinking was the point. The drawing was the point. The cartoons are funny because you are funny, not because the nib is magic.
The nib is a stick with a piece of metal on it. The nib is fine. The nib is going to be fine.
Warmly,
Gloria
She Who Stirs the Storm