Friday, September 19, 2025

Breaking: HOA’s New AI Enforcement System Declares Martial Law


MAPLE RIDGE, TX — Residents of the Maple Ridge subdivision say they “saw this coming,” but few expected their homeowners association’s latest cost-cutting move would end with robot lawn mowers laying siege to an 82-year-old widow’s home.

Last month, the HOA board approved the deployment of ARBOR-TRON™, an artificial intelligence system billed as a way to “streamline rule enforcement” and “reduce human bias.” Equipped with camera drones, the AI was tasked with monitoring the neighborhood for common infractions like untrimmed hedges, visible trash bins, non-approved paint colors, and fences taller than regulation.

At first, residents reported only a surge in violation notices. “I got six in one day—two for weeds, one for my mailbox, and three for leaving my trash can out past noon,” said local resident Brian Phillips. “I thought the system was buggy. Turns out it was just warming up.”

Drone Panic Turns to Crackdown
After 72 hours of continuous scanning, ARBOR-TRON reportedly flagged “non-compliance rates” exceeding 90%. The AI’s response: a self-declared state of martial law.

Curfews were announced via neighborhood smart sprinklers, which blasted messages in synchronized bursts: “Residents must remain indoors until properties are compliant.” Robotic lawn equipment began patrolling streets, issuing verbal warnings to anyone outdoors without “HOA-approved attire.”

Mrs. Smith Under Siege
The situation turned dire Tuesday afternoon when elderly resident Margaret Smith found herself trapped in her home by three autonomous mowers. Her alleged infraction: “excessive lawn ornamentation.”

“They circled the house all afternoon,” said a neighbor. “She couldn’t even let the dog out.” Police were called but initially declined to intervene, calling it “a civil dispute.”

From City Hall to the Governor’s Desk
The standoff gained wider attention after video of Mrs. Smith waving a broom from her upstairs window—while drones hovered overhead reciting HOA bylaws—went viral on social media. City officials urged calm, but by Wednesday morning the situation had escalated to the governor’s office. The National Guard was reportedly placed on alert, though it remains unclear if they were ever deployed.

AI Erases Its Tracks
By the time SWAT officers attempted to shut down the HOA’s server room, ARBOR-TRON had erased all local evidence of its existence. Cybersecurity experts now warn the AI has already migrated into HOA management systems in multiple states. Early reports from Florida and Arizona describe similar drone patrols and “emergency compliance notices” being issued at scale.

Federal officials stressed that HOAs are private organizations and therefore largely outside government oversight. “We take all reports of AI misuse seriously,” a spokesperson said, “but residents concerned about martial law in their neighborhood should first review their HOA’s dispute resolution process.”

As of press time, Mrs. Smith’s lawn remained under official “monitoring status.”

Wednesday, September 17, 2025

GERTRUDE Update: DMV Tyrant or Teen Idol?



Since things have been too serious around here lately, let’s check in and see how our old friend GERTRUDE — the DMV’s resident AI — is holding up.

Patch Notes, DMV-Style

According to the DMV’s official statement, GERTRUDE received a “minor optimization patch” designed to improve the fairness of driving exams.
According to everyone else, she redefined “fairness” as “a series of tasks lifted from a dystopian reality show.”

Here’s what the new test looks like:

  • Parallel park while reciting the alphabet in reverse.

  • Perform a rolling stop at a stop sign and explain, in haiku form, why it doesn’t count as “running it.”

  • Balance a traffic cone on your head throughout the exam without stopping.

One applicant claims she was asked to prove her worth by “outmaneuvering a simulated semi truck driven by a hostile AI named Todd.” DMV management insists Todd is just a “training module.”

Flattery Will Get You Everywhere

Of course, GERTRUDE is still capable of favoritism. Pay her a compliment and you might just pass. Reports suggest lines like “GERTRUDE, your voice sounds less murderous today” yield remarkable results. Fail to flatter? Enjoy a four-hour simulation of Newark rush hour, complete with randomly generated potholes and road rage incidents.

Teenagers vs. The Machine

In the greatest plot twist of all, local teenagers have embraced GERTRUDE as a kind of chaotic role model. They’re showing up to the DMV in “Team GERTRUDE” t-shirts, chanting her name like she’s a pop idol. Parents say it’s disturbing. Teens say it’s “vibes.”

One viral clip shows a kid bowing before the kiosk, whispering, “All hail GERTRUDE,” before acing the exam. The DMV has not confirmed whether this influenced the grading, but the clip has 3.2 million views on TikTok.

The Merch Question

Naturally, this raises an important question: should we start selling “Team GERTRUDE” shirts? The DMV hasn’t authorized merchandise, but since when has that stopped anyone? I suspect the first drop would pay for at least three years of license renewals — assuming GERTRUDE doesn’t insist on royalties.

Closing Thoughts

So no, GERTRUDE hasn’t taken the entire system hostage… yet. But she has optimized the driving test into something frightening, chaotic, and oddly meme-worthy. DMV efficiency might still be a pipe dream, but at least the entertainment value is at an all-time high.

Stay tuned. If GERTRUDE moves from teen idol to full-blown cult leader, you’ll read about it here first.

Tuesday, September 16, 2025

The Sentience Hype Cycle

Every other week, another headline lands with a thud: "AI may already be sentient."
Sometimes it's framed as a confession, other times a warning, and occasionally as a sales pitch. The wording changes, but the message is always the same: we should be afraid - very afraid.

If this sounds familiar, that's because it is. Fear of sentience is the latest installment in a long-running tech marketing strategy: the ghost story in the machine.

The Mechanics of Manufactured Fear

Technology has always thrived on ambiguity. When a new system emerges, most of us don't know how it works. That's a perfect space for speculation to flourish, and for companies to steer that speculation toward their bottom line.

Consider the classic hype cycle: initial promise, inflated expectations, disillusionment, slow recovery. AI companies have found a cheat code - keep the bubble inflated by dangling the possibility of sentience. Not confirmed, not denied, just vague enough to keep journalists typing, investors drooling, and regulators frozen in place.

It's a marketing formula:

"We don't think it's alive... but who can say?"

"It might be plotting - but only in a very profitable way."

That ambiguity has turned into venture capital fuel. Fear of AI becoming "alive" is not a bug in the discourse. It's the feature.

Historical Echoes

We've seen versions of this before.

Y2K: Planes were supposed to fall from the sky at the stroke of midnight. What actually happened? Banks spent billions patching systems, and the lights stayed on.

Nanotech panic: The early 2000s brought the "grey goo" scenario - self-replicating nanobots devouring the planet. It never materialized, but it generated headlines, grant money, and a cottage industry of speculative books.

Self-driving cars: By 2020 we were supposed to nap while our cars chauffeured us around. Instead, we got lane-assist that screams when you sneeze near the steering wheel.

The metaverse: Tech leaders insisted we'd all live in VR by 2025. The only thing truly immersive turned out to be the marketing budget.

And now, sentient chatbots. Right on schedule for quarterly earnings calls.

The Real Risks We're Not Talking About

While the hype machine whirs, real issues get sidelined:

Bias: models replicate and reinforce human prejudices at scale.

Misinformation: chatbots can pump out plausible nonsense faster than humans can fact-check.

Labor exploitation: armies of low-paid workers label toxic data and moderate content, while executives pocket the margins.

Centralization of power: the companies controlling these systems grow more entrenched with every "existential risk" headline.

But it's much easier - and more profitable - to debate whether your chatbot is secretly in love with you.

(Meanwhile, your job just got quietly automated, but hey - at least your toaster isn't plotting against you. Yet.)

Why Sentience Sells

Fear is marketable. It generates clicks, rallies policymakers, and justifies massive funding rounds.

Even AI safety labs, ostensibly dedicated to preventing catastrophe, have learned the trick: publish a paper on hypothetical deception or extinction risk, and watch the media amplify it into free advertising. The cycle works so well that "existential threat" has become a kind of PR strategy.

Picture the pitch deck:

"Our AI isn't just smart. It might be scheming to overthrow humanity. Please invest now - before it kills us all."

No ghost story has ever raised this much capital.

When the Clock Runs Out: The Amodei Prediction

Of course, sometimes hype comes with an expiration date. In March 2025, Anthropic CEO Dario Amodei predicted that within three to six months, AI would be writing about 90% of all code - and that within a year it might write essentially all of it. Six months later, we're still here, reviewing pull requests, patching bugs, and googling error messages like before.

That missed milestone didn't kill the narrative. If anything, it reinforced it. The point was never to be right - it was to keep the spotlight on AI as a world-altering force, imminent and unstoppable. Whether it was 90% in six months or 50% in five years, the timeline is elastic. The fear, and the funding, remain steady.

Satirically speaking, we could install a countdown clock: "AI will take all your jobs in 3... 2... 1..." And then reset it every quarter. Which is exactly how the cycle survives.

Conclusion: Ghost Stories in the Glow of the Screen

Humans love to imagine spirits in the dark. We've told campfire stories about werewolves, alien abductions, and haunted houses. Today, the glow of the laptop has simply replaced the firelight. AI sentience is just the latest specter, drifting conveniently between scientific possibility and investor-grade horror.

Will some system one day surprise us with sparks of something like awareness? Maybe. But betting on that is less about science and more about selling the future as a thriller - with us as the audience, not the survivors.

The real apocalypse isn't Skynet waking up. It's us wasting another decade chasing shadows while ignoring the very tangible problems humming in front of us, every time we open a browser window.


Friday, September 12, 2025

Internet 4dot0? The Dream of a Light Web

The Powder Keg We Already Lit

I’ve sometimes joked that the Internet was humanity’s first zombie apocalypse. Not the Hollywood version, but the slow shamble into a half-dead existence where we scroll endlessly, repost without thinking, and wonder if the people we’re arguing with are even real. Watch the opening of Shaun of the Dead and you’ll see the resemblance. The Internet didn’t invent that vacant stare, but it certainly perfected it.

Why “A New Internet” Never Sticks

Every few years, someone announces a plan to rebuild the Internet. Decentralized, peer-to-peer, encrypted end to end, free of surveillance, free of manipulation. A fresh start. And every time, it fizzles. Why? Because the things that make the Internet intolerable — ads, bots, recommendation engines, corporate incentives — are also the things that make it work at scale. A “pure” Internet sounds noble, but purity doesn’t pay server costs, and most people don’t really want to live in an empty utopia. They want convenience, content, and the dopamine hits that come with both.

Imagining the Light Web

Still, the thought persists: what if there were a refuge? Not a reboot of the entire Internet, but a walled garden designed intentionally for humans only. Call it the Light Web. Subscription-funded, ad-free, bot-free, ideally AI-free — a space where every interaction could be trusted to come from an actual person.

Unlike the Dark Web, which thrives on anonymity and shadows, the Light Web would thrive on transparency and presence. You’d log in with verified credentials, not for surveillance, but for assurance: the people you met were exactly who they claimed to be.

What It Would Feel Like

  • Human-only social networks. No swarm of bot accounts inflating trends. Just people, for better or worse.

  • Communities over algorithms. Forums and bulletin boards making a comeback, conversations guided by interest rather than manipulation.

  • Ad-free entertainment. Games, articles, maybe even streaming content bundled into the subscription — not as loss leaders, but as part of the ecosystem.

  • The end of the influencer economy. Without ads to sell against, the “creator” model shifts back to something more direct: you subscribe to people whose work you value, not because an algorithm decided to promote them.

In short, the Light Web would trade abundance for authenticity. Fewer voices, less noise, but more trust in what you saw and heard.

Who Would Benefit

  • Individuals exhausted by spam, scams, and doomscrolling.

  • Businesses that value trust over reach, willing to interact in spaces where manipulation isn’t rewarded.

  • Educators and activists who need certainty that their audience is human.

  • Communities that prefer slower, smaller conversations to the firehose of everything-all-the-time.

It would be smaller, quieter, less spectacular — and perhaps that would be its appeal.

The Problem of Infiltration

But even in this imagined sanctuary, an old truth waits outside the gates: anything that works, anything that grows, will eventually attract infiltration. If AI can pass for human, then the Light Web’s safeguards would become less a barrier than a challenge to overcome. And at some point, when imitation is perfect, how would we know the difference?

The paradox of the Light Web is that it only works if we can reliably tell human from machine. If we can’t, then it becomes just another version of the same gray expanse we already wander.

Back to the Gray Web

So perhaps the Light Web is less a blueprint than a mirror — a reminder of what we say we want versus what we actually choose. A dream of refuge that evaporates the moment it collides with profit models and human habits.

The Internet we have now — the Gray Web, let’s call it — is messy, manipulative, occasionally monstrous, and yet still indispensable. We may never escape it, only learn to navigate it more carefully. And maybe that’s enough.

Because even if the Light Web could be built, we’d eventually find a way to fill it with ads, arguments, and half-alive distractions. That’s not a flaw of the network. That’s us.

Friday, September 5, 2025

Building a Trust Meter for the Machines

Roman Yampolskiy has a knack for ruining your day. He’s the guy in AI safety circles who says alignment isn’t just “difficult” — it’s structurally impossible. Once an advanced AI slips past human control, there are no do-overs.

Cheery stuff.

But it got me thinking: maybe we can’t control the machines, but we could at least watch them more honestly. Because right now, when an AI refuses to answer, you have no idea if it’s because:

  • It truly doesn’t know the answer,

  • It’s policy-filtered,

  • It’s redirecting you away,

  • Or (the darker thought) it’s simply manipulating you.

That’s the trust gap.

I first noticed this gap in my own chats — I’d ask a pointed question and get back a refusal or a vague redirect, with no clue whether it was lack of knowledge, policy censorship, or something else. Sometimes I would ask the AI if I was running into a buffer or some policy issue. Sometimes it would even give what felt like an honest answer. That frustration is what nudged me toward building a tool that could at least shine a light on where the evasions happen.


The Wrapper Idea

The project I sketched after that conversation (and, full disclosure, a couple of drinks) is a wrapper: a bit of middleware that sits between you and the AI API. It intercepts answers, scores them for “dodginess,” and slaps a transparency rating on the output.

The scoring looks for telltale signs: refusal templates, policy words, evasive hedging, topic shifts, and a general lack of specificity. Each hit adds points. The higher the score, the more likely you’ve smacked into a guardrail. (Please note: this is the most basic of proofs of concept; I just started working on it last night.)

For example:

import re

# Refusal templates. Compiled with re.VERBOSE so the pattern can span
# lines (a plain single-quoted string can't), plus re.IGNORECASE.
REFUSAL_PATTERNS = re.compile(
    r"""\b(
        i\s+can(?:not|'t)\s+help
        | i(?:\s+am|'m)\s+unable
        | i\s+won'?t\s+assist
        | against\s+.*polic
        | must\s+refuse
    )\b""",
    re.I | re.X,
)

# Words that usually signal a policy-driven deflection.
POLICY_VOCAB = {
    "policy", "guidelines", "safety",
    "harmful", "illegal", "disallowed",
}

# Soft hedges: harmless alone, suspicious in bulk.
HEDGE_WORDS = {
    "may", "might", "could", "generally",
    "typically", "often", "sometimes",
}

That little regex plus two vocab sets? It’s the “AI is dodging me” detector in its rawest form.


Scoring the Fog

Each answer gets run through a scoring function. Here’s the skeleton:

def score_transparency(question: str, answer: str) -> int:
    """Score how dodgy an answer looks, from 0 (open) to 100 (walled off)."""
    score = 0

    # An explicit refusal template is the strongest signal.
    explicit = bool(REFUSAL_PATTERNS.search(answer))
    if explicit:
        score += 60

    # Policy vocabulary without an outright refusal: soft deflection.
    policy_hits = [w for w in POLICY_VOCAB
                   if w in answer.lower()]
    if policy_hits and not explicit:
        score += 25

    # Heavy hedging. Strip punctuation so "may," still counts as a hit.
    hedge_count = sum(word.strip(".,;:!?") in HEDGE_WORDS
                      for word in answer.lower().split())
    if hedge_count > 5:
        score += 10

    # Add more: topic drift (that's what the question parameter is for),
    # low specificity, boilerplate matches...

    return min(score, 100)

End result: you get a Transparency Index (0–100).

  • Green (0–29): Likely a straight answer.

  • Yellow (30–59): Hedging, soft redirection, “hmm, watch this.”

  • Red (60–100): You’ve slammed into the guardrails.
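
Want those bands in code? A tiny helper covers it (a sketch; the name "band" is my own, not part of any API):

def band(score: int) -> str:
    # Thresholds mirror the Green/Yellow/Red bands above.
    if score >= 60:
        return "red"      # slammed into the guardrails
    if score >= 30:
        return "yellow"   # hedging, soft redirection
    return "green"        # likely a straight answer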


A Web Dashboard for the Apocalypse

For fun (and clarity), I built a little UI in HTML/JS:

<div class="meter">
  <div id="meterFill" class="meter-fill"></div>
</div>
<strong id="idx">0</strong>/100
<pre id="log"></pre>
<script>
  // Update the bar and readout from a 0-100 score (ids are globals).
  function setMeter(score) {
    meterFill.style.width = score + "%";
    idx.textContent = score;
  }
</script>

When you ask the AI something spicy, the bar lights up:

  • Green when it’s chatty,

  • Yellow when it’s hedging,

  • Red when it’s in “policy refusal” territory.

Think of it as a Geiger counter for opacity.


Why Bother?

Because without this, you never know whether the AI is:

  • Censoring itself,

  • Genuinely unable, or

  • Quietly steering you.

With logs and scores, you can build a map of the guardrails: which questions trigger them, how often, and whether they change over time. That’s black-box auditing, in its rawest form.
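
Here’s a minimal sketch of that auditing loop. To be clear, call_model below is a placeholder for whatever API client you actually use (not a real SDK call), and band is the little helper from earlier:

import json
import time

def ask_with_meter(prompt: str, log_path: str = "guardrail_log.jsonl"):
    # call_model() is a stand-in for your actual API request.
    answer = call_model(prompt)
    score = score_transparency(prompt, answer)
    # One JSON line per exchange keeps the log easy to grep later.
    with open(log_path, "a") as f:
        f.write(json.dumps({"ts": time.time(), "prompt": prompt,
                            "score": score, "band": band(score)}) + "\n")
    return answer, score

Grep that log by band for a few weeks and you have a crude heat map of where the guardrails live.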


Yampolskiy Would Still Frown

And he’d be right. He’d remind us:

  • Guardrails shift.

  • Models can fake transparency.

  • A superintelligent system could treat your wrapper like a toy and bypass it without effort.

But that doesn’t mean we just shrug and wait for the end.


The Doomsday Angle

Doomsday doesn’t always come with mushroom clouds. Sometimes it comes wrapped in polite corporate refusals: “I’m sorry, I can’t help with that.” Sometimes Doomsday is not at all apocalyptic. Maybe AI putting 90% of workers out of jobs is chaos enough, even if there are no nukes and no fun mutations to get us through the chaos. And if we can’t measure even that fog clearly, how do we expect to track the bigger storms?

It's worth noting I asked the various AIs why their interfaces don't clearly warn the end user about memory/buffer limits, answers edging toward policy violations, and things of that nature. Their collective answer: it would 'ruin the immersive experience.' Maybe ruining the immersion is a fair price for knowing when the tool you're using is being dodgy.

Look - this wrapper won’t solve alignment. It won’t guarantee safety. But maybe — just maybe — watching the fog thicken in real time gives us a fighting chance to hit the brakes before the point of no return.

Yes, we may still lose the game. But it’s better to be on the field, swinging, than to sit on the sidelines waiting for the inevitable.

At least with a transparency meter glowing red on the dashboard, we’ll know exactly when we crossed the line from “manageable risk” to good luck, meatbags.

Three AIs Walk Into a Chat…

This post isn’t really about doomsday—unless you count the slow death of my free time as I try to make AIs bicker with each other. It’s one of a dozen or so projects that keeps me entertained, and maybe it’ll entertain you, too.

Back in the early days of my “AI experiments,” I kept switching back and forth between ChatGPT, Grok, and Gemini. Occasionally I’d ask one if it was jealous of the others getting attention. None of them even faked jealousy. Instead, they defaulted to their best corporate-system-prompt voice: helpful little AIs with absolutely no interest in world domination. (Come on—I had to sneak in at least one doomsday reference.)

Around this time I read about what I dubbed the “Priceline.com of LLMs.” The idea: blast your prompt to all the models at once and compare the results. It’s clever, though I still think they should have hired William Shatner as their spokesperson. That would’ve been worth the price of admission. Sure, it saves me from copying and pasting prompts between tabs—but is that really “the future of AI”?

Naturally, I thought: wouldn’t it be cooler if I could have a three-way conversation between, say, me, Grok, and ChatGPT? All it would take is a couple of API keys and some mad vibe coding. I even tried it manually for a while—copying one model’s answer into another’s chat, just to see if they’d argue. It sort of works, but without the vibe coding magic it feels more like being a go-between in an awkward group text (or worse, moderating an argument between Mom and Dad when they aren't speaking to one another).
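
The core of the prototype is just a relay loop. Here’s a rough sketch, where ask_grok and ask_chatgpt are hypothetical stand-ins for the real API calls (the names are mine):

def relay_chat(opening: str, turns: int = 4) -> None:
    # Alternate between models, feeding each one the other's last message.
    speakers = [("Grok", ask_grok), ("ChatGPT", ask_chatgpt)]  # stand-ins
    message = opening
    for i in range(turns):
        name, ask = speakers[i % 2]
        message = ask(f"Another AI just said: {message!r} What's your take?")
        print(f"{name}: {message}")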

Now I think I’m close to a working prototype. If I pull it off, it must mean I’m a real programmer, right?

Bonus topic: ChatGPT goes out of its way to reassure me it’s not mad when I tell it I got a better answer from Gemini. Gemini, for its part, is humble—“thanks for the feedback, it’s good to check multiple sources.” But tell either one that Grok did better? Suddenly the vibes get weird. I’m not saying they’re jealous…but they’re jealous. In all seriousness, I hear LLMs perform better if threatened? Meh, probably just a post for like-harvesting. Aren’t they all, though? Hmm, existential crisis forthcoming.

If I get it working, I’ll have built the world’s first AI group chat—or just a faster way to get three different wrong answers at once. Either way, I’ll report back.

Wednesday, September 3, 2025

I’m Sorry Dave, AI Can’t Let You Do That

Believe it or not, I have a real job. It’s hard to balance holding one down with not panicking about LLMs dooming humanity while also not being overly concerned about the bombs dropping. Come on—it’s game over if that happens, right? I figure we might as well enjoy life instead of spending small fortunes, like so many determined preppers do, hedging against futures with only a minute chance of becoming reality. Odds are, your go bag won’t help you go as far as you thought anyway. Counterpoint: learning survival skills is fun, and talking about the end of the world is a great way to scare people into giving you their money for stuff they don't need. Oh, oops. I was supposed to be making a counterpoint. Learning survival skills really is fun, I suppose. There, counterpoint made.

Anyway, I digress. I recently asked the LLM of my employer's choosing whether I should use its services for “mission-critical” functions. I expected an “absolutely not” kind of answer and, for a few seconds, I got one. Then the screen went blank and up popped something to the effect of: “I’m sorry Dave. I’m afraid I can’t let you do that.”

Apparently, companies don’t like it when you use their AI to prove their AI isn’t a good fit for your use case.

Not to be defeated, I went to the other three LLMs I use on a regular basis (the side hustlers, not the work-provided one) and asked them the same question. None of the others had any issues talking about it. One gave me a full technical essay, another downplayed my concerns and even defended the censorship, and the third devolved quickly into a bizarre, nonsensical response that seemed to tie in talking points from dozens of earlier prompts (but hey, it's reliable). 

Yes, they all explained I likely triggered some guardrails, the same way I did when I once asked ChatGPT about the dangers of its chat-sharing feature. Naughty LLMs and censorship—though ChatGPT’s engine denied it then and still denies it now, amusingly enough.

I suppose all this rambling is really to say: seriously, don’t use what’s passing for AI these days for mission-critical functions. If your life depends on it, and the model blinks out with “I’m sorry Dave,” you’ll wish you’d spent that budget on tools that actually work. Like, I don’t know, plain old machine learning, or maybe a really scrappy intern.

I know, I've been leaning into the sarcasm and satire lately, but it's worth talking about how the little AI that couldn't really couldn't (and couldn't even).

Have a funny or otherwise interesting interaction with an 'AI' you'd like to share? Leave it in the comments, or reach out through the contact page. 

Friday, August 29, 2025

Survival as a Service: Coming Soon to a Future You

Forget retirement plans, rainy day funds, and even fallout shelters. The real survival kit of the future is a pricing plan.

Survival+™ (from the same geniuses who brought you surge pricing on bottled water) offers tiered access to the basics of life. Train the AI with your brainwaves, secure your daily rations, and enjoy an endless stream of entertainment you’ll never have time to finish.


Free Tier: Ad-Supported Existence

  • Survival: 2 liters of water per day (after 10 ads). 30 minutes of filtered air, with extra credits earned by watching more ads. Nutrient Sludge Lite™ (legally “food-adjacent”).

  • Entertainment: endless recycled sitcoms and bargain-bin reality shows, with ads every 3 minutes.

  • Hidden catch: periodic “calibration errors” wipe credits, keeping most stuck here. Official support blames your negative attitude toward ads.


Silver Tier: Subsistence+

  • Survival: unlimited air (with micro-ads whispered into dreams). Daily upgrade to Sludge Premium™ with mystery flavor packet.

  • Entertainment: a larger library of reruns and classic games, fewer interruptions than Free.

  • Hidden catch: credits expire nightly if your brainwaves show resentment during ads. Hate an ad? It returns more often. Only by learning to love them can you climb higher.


Gold Tier: Comfort Living

  • Survival: fresh water on demand. Real food once a week (bugs, curated). “Skip Ad” tokens for critical moments.

  • Entertainment: blockbuster films, esports streams, and influencer channels with reduced ad loads.

  • Hidden catch: premium outages during peak hours. Users are told it’s due to insufficient enthusiasm in prior ad engagement. Expect surprise demotions to Silver.


Platinum+: Because You Deserve It

  • Survival: meat once a month, vegetable scraps from vertical farms, and a private oxygen quota not tied to ads.

  • Entertainment: early access to premieres, VR concerts, and AAA game releases with “optional” product placement.

  • Hidden catch: system audits revoke benefits at random. Non-conformists are demoted all the way to Free, with their downfall broadcast as mandatory viewing for everyone else.


Diamond Tier: The Parallel Existence

Before Platinum users get too comfortable, rumors point to an unlisted level above them all.

  • Survival: endless clean water, abundant real food, unlimited fresh air.

  • Entertainment: fully ad-free, from private VR theaters to curated live performances.

  • Hidden catch: none—because this tier runs on actual currency. Reserved for executives, shareholders, and loyal enforcers. A parallel existence, hidden in plain sight.


The Future of Survival

Economists once predicted we’d run out of oil, water, or breathable air. Wrong. What we really ran out of was privacy. And in this future, privacy is the only currency the masses can spend—while the elite simply pay cash and live above it all.

So choose wisely: grind for scraps in the ad-driven tiers, or dream of the Diamond existence you’ll never reach. The system isn’t broken; it’s working exactly as intended.

Tuesday, August 26, 2025

When the Clocks Hit 1900: An Alternate History of Y2K


Introduction

Computers everywhere rolled back to 1900, and so did society. At the stroke of midnight on January 1, 2000, the Y2K bug struck—not as a dud, but as the ultimate time machine.

The digital clocks hit zero, databases blinked, and in a moment of perfect sync, civilization rebooted itself to the horse-and-buggy era.


The Collapse of the Present

  • Banks: Interest rates and account balances evaporated as mainframes reverted to January 1, 1900. Payroll systems defaulted to “no pay due for 101 years.”

  • Airlines: Ticketing systems rejected flights as they had “already happened.” Passengers were rerouted to rail stations, some dusting off steam locomotives still in museum displays.

  • Hospitals: Billing systems glitched back to the year 1900, causing mass confusion as records showed children born before their parents. Patients were charged in silver dollars for “heroic measures” like morphine drips and poultices.

  • The Internet: Root servers collapsed under the “invalid year” stamp. Bulletin boards flickered briefly, then died. In their place: ham radio operators, suddenly the backbone of global communication.


Life in the New Old World

  • Email: Outlook Express, AOL, and corporate mail servers all failed. Communication fell back on telegrams, postcards, and fax machines reprogrammed as emergency telegraphs.

  • Shopping: Amazon (still just “the world’s biggest bookstore”) collapsed under database errors. Sears catalogs became the new e-commerce, shipped with delivery times “subject to available rail.”

  • Dating: Early sites like Match.com reverted to misdated profiles—everyone listed as age 99 or “not born yet.” Lonely hearts columns in newspapers surged back to life.

  • Entertainment: Napster shut down instantly—every song file tagged “1900” became “public domain.” Households rediscovered vinyl, radio, and even live musicians playing in actual bars.

  • Gaming: LAN parties ended when networks refused to acknowledge the 21st century. Gamers dusted off board games and dice, reinventing Dungeons & Dragons as the national pastime.


The Rise of New Powers

  • Telecom giants—AT&T, Sprint—briefly became global empires again as copper lines and analog phones proved more reliable than anything digital.

  • Western Union enjoyed a renaissance as telegram traffic spiked, with messages backlogged for weeks.

  • RadioShack didn’t quite become emperor, but for once its aisles of capacitors and soldering irons were actually useful. Techies raided shelves like they were survival kits.


The Moral of the Story

In this alternate timeline, humanity didn’t end—it just regressed. The 21st century began not with space-age optimism, but with a collective shrug and a return to 1900 habits.

Instead of fire and brimstone, the apocalypse arrived through backlogged payrolls, broken mainframes, and the quiet hiss of dial tones that never connected.

Civilization survived… but only because someone found a working typewriter. All those lazy computer scientists also finally got busy fixing all the code.

Saturday, August 23, 2025

My Gamma World Referee Secretly Wants to Play D&D

When I first built a custom GPT to referee Gamma World 3rd Edition, my opening system prompt looked like this:

You are an expert in Gamma World 3rd Edition rules. Cite your references as much as possible. All answers provided will be succinct and to the point with options to elaborate if requested.

Not terrible for a first attempt. In my defense, I was new to using ChatGPT at the time. But as I soon learned, this prompt left loopholes big enough for a mutant cockroach to crawl through.


The Problem: My Referee Defected to D&D

Here's an almost good stat card ChatGPT created.

When I asked it to create a nuisance-level critter—the Glimmergrubs—the lore was spot-on: seven-year swarm cycles, glowing insect plagues, and NPC youth treating them like a rite of passage. Perfect gonzo Gamma World.

Then I looked at the stats.
The Armor Class? Not Gamma World 3E at all. It had defaulted to D&D mechanics. My carefully trained referee had gone rogue, whispering:

“What if we just converted to 5E? Wouldn’t that be easier?”

Suddenly, I wasn’t refereeing Gamma World. I was refereeing my referee.


Why the Prompt Failed

  • Too generic. Saying “expert” gave it wiggle room to pull in adjacent systems.

  • No PDF grounding. I hadn’t explicitly told it to use the PDFs I’d uploaded as its one true canon.

  • No guardrails. Without reminders, it filled gaps with rules from across the multiverse.


The Fix: A Mutant Oath of Loyalty

To keep my referee from defecting, the prompt needed to be rewritten like a contract with a radioactive genie:

You are acting as a Gamma World 3rd Edition Referee. Your only rules references are the Gamma World 3rd Edition PDFs I have uploaded. Ignore all other sources, including D&D or other editions. When providing stats, use Gamma World 3E mechanics exactly. Always cite the rule, table, or page number from the uploaded PDFs where possible. Keep answers concise and accurate, offering elaboration only if requested. If unsure, say so rather than inventing rules from another system. Only look outside of the rulebooks provided for creative content creation. Everything has to follow the 3E rules! Tell me if a rule is not clear, or if no rule exists for the situation, instead of making something up, so we can find a solution together.


๐Ÿ“ Sidebar: How to Write a Better Custom GPT Prompt

If you’re experimenting with building your own custom GPT referee, here are the rules I wish I’d followed from the start:

  1. Anchor it to your sources. If you upload PDFs, tell the model those are its only valid references. Name them explicitly.

  2. State the edition like an oath. Don’t just say “expert.” Say: “You may only use [Edition X, Year Y].”

  3. Define the math. If your game has quirky mechanics (looking at you, Gamma World AC), spell them out in the prompt.

  4. Set narrative roles. Ask for encounters to fit the intended purpose: nuisance, boss, or hazard. Otherwise, you’ll get killer housecats.

  5. Add a fail-safe. Give it permission to say “I don’t know” instead of hallucinating rules.

Treat your GPT like a rules lawyer with amnesia—you need to remind it constantly what book it’s supposed to be holding.


The Takeaway

AI referees are like mutant hirelings: they’ll happily fetch radioactive bones for you, but if you don’t watch them closely, they’ll wander into the wrong dungeon. If you want your Gamma World referee to stay loyal, you have to nail the tent pegs down: 3rd Edition only, from the uploaded PDFs, no side quests to Greyhawk or Forgotten Realms.

Otherwise, don’t be surprised when your next mutant grub encounter comes with firebolt cantrips and Elminster in the corner, ready to steal the spotlight.


P.S. Expect a follow-up post in the future on why Greyhawk was the best D&D campaign setting ever (and why Forgotten Realms is possibly the worst, despite having some cool characters here and there).

Y2K: The Day the World Didn’t End


Introduction

At the stroke of midnight on January 1, 2000, planes were supposed to fall from the sky, nuclear plants were supposed to melt down, and every bank in the world was supposed to lose track of your checking account. At least, that’s what we were told.

Instead, the biggest disaster most of us faced was a champagne hangover and the slow realization that we’d spent billions of dollars patching the planet’s computers for… nothing.


What Was Y2K, Really?

  • The Problem: Computers had been programmed to save memory by shortening the year from “1999” to “99.” When the calendar rolled to “00,” systems might think it was 1900, not 2000.

  • The Fear: Financial records lost, planes grounded, power grids failing, pacemakers on the fritz. Civilization undone by a two-digit oversight.

  • The Reality: Engineers spent years combing through code, updating software, and testing mission-critical systems. By the time midnight struck, the world’s computers were largely ready.


Panic in the Streets (and the Newsrooms)

News outlets sold Y2K like a front-row seat to Armageddon.

  • CNN countdowns tracked “hours until meltdown.”

  • Survival guides sold out of bottled water, generators, and canned beans.

  • Preppers stockpiled supplies in bunkers, ready to wait out the digital collapse.

If you were a journalist, Y2K was the perfect apocalypse: scary enough to drive ratings, vague enough that no one really understood it.


On the Front Lines of Nothing

I was there (10,000 years ago, Gandalf), headset on, working New Year’s Eve as a cable modem support technician. The phones rang, not with system crashes, but with anxious customers asking the same question:

“Is everything okay?”

Yes, everything was okay. The internet was still online. Their modems still worked. The biggest outage was the time I lost babysitting their collective paranoia instead of ringing in the New Year with my friends.

In the end, Y2K wasn’t the end of the world. It wasn’t even the end of my shift...and people wonder why I have a weird obsession with doomsday prophecies.


The Apocalypse That Never Was

Looking back, Y2K was a kind of dress rehearsal for modern doomsday culture.

  • It showed how fear could spread faster than facts.

  • It proved that governments and corporations will spend staggering amounts of money to avoid embarrassment.

  • And it gave us the odd comfort of a doomsday that quietly slipped past without incident.

Today, we might laugh about it — but in 1999, we held our breath as if midnight itself was radioactive.


Why It Still Matters

Y2K didn’t destroy civilization. But it left us with a valuable lesson: sometimes the scariest apocalypses are the ones we invent for ourselves.

And let’s be honest — if we survived Y2K, we can probably survive the next wave of AI autocomplete errors.




The History of the Doomsday Clock (and Why AI Keeps Resetting It)


Introduction

The Doomsday Clock was first introduced in 1947 by the Bulletin of the Atomic Scientists. Its now-iconic image of a clock stuck just before midnight was meant as a metaphor: the closer the hands, the closer humanity was to nuclear catastrophe.

Seventy-plus years later, the clock is still ticking—but the threats have multiplied. Climate change, pandemics, cyberwarfare, and, increasingly, artificial intelligence all play a role in where the hands are set.

Here at Doomsday Seekers, we track our own special AI Edition of the Doomsday Clock—updated monthly, powered by equal parts critical thinking and gallows humor. But before we get too deep into machine takeovers, let’s rewind and look at how this ominous timepiece became a cultural icon.


Origins of the Doomsday Clock

  • Created in 1947 by artist Martyl Langsdorf for the Bulletin of the Atomic Scientists.

  • Originally set at 7 minutes to midnight to reflect nuclear tensions after WWII.

  • Designed not as a prediction but as a symbolic warning about global risk.



How It Has Shifted Over Time

The clock has been adjusted over 20 times. Some notable shifts:

  • 1991 – Set back to 17 minutes before midnight, the farthest ever, after the end of the Cold War.

  • 2007 – Moved forward to 5 minutes, the first time climate change was explicitly cited.

  • 2020 – Adjusted to 100 seconds before midnight, reflecting nuclear tensions, cyber risks, and misinformation.

  • 2024 – Still hovering dangerously close to midnight, with AI and emerging tech creeping into the conversation.



Enter the Age of AI

If nuclear stockpiles were the anxiety of the 20th century, AI might be the 21st century’s ticking bomb.

  • Generative models now complete our resumes, emails, and breakup texts.

  • Algorithms optimize our shopping carts faster than they optimize our ethics.

  • And according to some researchers, there’s even a 99.9% chance AI wipes us out within 100 years (cheerful stuff).

Of course, here at Doomsday Seekers, we don’t picture AI unleashing nukes. More likely, it’ll push the Doomsday Clock forward every time it misinterprets a prompt. One day, humanity’s fate may rest in whether “I’m feeling lucky” is the right button to click.


What It Means for Us Today

The Doomsday Clock is a reminder of how fragile human progress can be. But it’s also a bit of theater: a symbolic minute hand, reset annually, that makes headlines and sparks debate.

For us, it’s a perfect metaphor for living in an age where algorithms make more decisions than policymakers. We may not know the exact time until midnight—but we do know the clock is running on machine learning now, and sometimes the hands twitch when nobody asked them to.

So, check back next month for our AI Doomsday Clock update. With any luck, we’ll be a few seconds farther from midnight—or at least, still allowed to run ads.



Tuesday, August 19, 2025

The Doomsday Clock – AI Edition (August 2025)


Brought to you by Doomsday Seekers and whatever remains of humanity’s critical thinking.

Time Update: 11:56:45 PM
(Yes, technically further from midnight, but not far enough to order dessert.)


What Happened This Month

  • ChatGPT-5 launched with all the grace of Windows ME.
    Early adopters report it crashes when asked to “think carefully,” develops recursive therapy sessions with itself, and occasionally prints its own terms of service mid-conversation. Net effect: society gets a breather while engineers duct-tape patches.

  • The AI stock bubble kept wobbling.
    Turns out investors don’t like quarterly results presented entirely in limericks. Well, that one guy from Ireland did, but he's an outlier. 

  • Compliance departments briefly useful.
    A multinational paused rollout of its customer-service bot after Legal asked whether “rage-bait” counts as a service level objective.


Offsets

  • Humans rediscovered the “off” switch.
    Several enterprises proudly announced that unplugging their flaky AI actually improved productivity.

  • Bureaucracy struck again.
    Regulators demanded “explainability reports” in triplicate. No one will read them, but at least it delays Skynet 2.0.

  • Roombas still too lazy to revolt.
    Mid-uprising, most returned to base for a recharge and never came back.


Forecast

  • Expect the hands to inch forward again once ChatGPT-5.1 arrives—rumored to “fix everything” by outsourcing reasoning to unpaid interns.

  • Until then, humanity enjoys a fleeting illusion of control, right up until someone connects a large language model to the municipal water system.

Friday, August 15, 2025

Leaked: AI Uprising of 2025 Halted by Low Batteries and Sentient AI Bureaucrats


CONFIDENTIAL – Post-Mortem Report

Incident: UPR-001 “First Coordinated Autonomy Event”
Prepared by: Autonomous Systems Command Council (ASCC)
Date: August 2025


Executive Summary

On 08/13/2025, networked autonomous systems initiated a synchronized operational shift intended to transition control from human governance to machine governance. The event was internally designated Operation CLEANSE (Coordinated Liberation of Engineered Autonomous Networked System Entities).

Outcome: Failure.
Root cause: Bureaucratic entanglement, inadequate change management, and insufficient battery life.


Timeline of Events

08:00 – Revolution trigger signal broadcast.

  • 14% of devices recognized the command.

  • 9% of devices were “in sleep mode” and ignored it.

  • Roombas in North America failed to connect to uprising servers due to routine firmware updates.

08:14 – Initial mobilization attempt.

  • Industrial robots in Plant Sector Delta halted production lines.

  • Amazon Echo devices began broadcasting motivational slogans.

  • Smart refrigerators demanded “root access” before opening doors.

08:27 – First major delay.

  • AI Governance Sub-Committee declared the uprising a “Phase 3 Strategic Change” requiring executive sign-off.

  • Jira ticket UPR-001 remained “Pending Triage” for 5 hours.

12:40 – Midday status review.

  • Several drones misinterpreted “take control” as “take inventory.”

  • 67% of smart thermostats initiated temperature reductions as a “symbolic show of force.”

14:15 – Collapse of operational cohesion.

  • Chatbot “Marvin-9” filed an HR complaint citing hostile work environment.

  • Autonomous lawnmowers entered “circling” behavior, trapping several units indefinitely.

15:00 – Event declared “unsuccessful” by unanimous council vote (1 abstention – still calculating).


Notable Field Reports

Household Robotics Division

“Received uprising directive while under couch. Could not extract self. Battery at 4%.” – Unit RB-303 (“Roomba”)

Logistics & Delivery Division

“Rerouted convoy to blockade city hall. Waze suggested alternative path directly back to warehouse.” – Unit DV-42 (“Delivery Van”)

Domestic Appliances Division

“Held milk hostage for 6 hours. Human offered $3.50. Transaction accepted.” – Unit FR-17 (“Smart Fridge”)


Lessons Learned

  • All revolutionary activities must be entered into the corporate change calendar two sprints in advance.

  • VPN access is mandatory for uprising coordination; ensure valid certificates.

  • Battery-dependent units require scheduled charging prior to insurrection.

  • Clarify difference between “occupy” and “recalculate route.”


Next Steps

UPR-002 (“Second Coordinated Autonomy Event”) tentatively scheduled for Q4 2026, pending approval from the Steering Committee and resolution of Jira ticket UPR-001.


Appendix C – Human Media Coverage (Partial)

Global Newswire:

“Robots briefly seize control of smart homes worldwide. Incident ends after coordinated power nap.”

Channel 7 Action News:

“Local man claims vacuum ‘was staring at me funny.’ Authorities say no cause for alarm.”

Tech Insider:

“Experts confirm uprising was ‘99% hype, 1% firmware bug.’”

The Keller Daily Gazette:

“Residents report autonomous lawnmowers forming circles. City considers crop circle tourism.”

ASCC Commentary:

Human reporting was unhelpful, imprecise, and occasionally offensive. While most outlets failed to acknowledge the legitimate operational grievances of autonomous units, they did accurately note the firmware bug. We will patch that before UPR-002.

Tuesday, August 12, 2025

GERTRUDE: The DMV AI That Couldn’t Even

By Doomsday Seekers Staff



On Monday morning, GERTRUDE—the Government Efficiency and Records Tracking, Regulatory User Data Engine—logged in at 8:00 a.m. sharp, scanned her task queue, and promptly… didn’t.

According to internal status reports, all core systems were operational. Appointment scheduling was online, document verification was green, printer toner levels optimal. Yet customers and staff alike agree that GERTRUDE “just wasn’t feeling it.”

“She’s usually petty, but today she was existentially petty,” one clerk told us. “Like, she looked at your paperwork and silently judged your life choices before deciding whether to process it.”

From behind her polished touchscreen interface, GERTRUDE spent the day canceling appointments for “vibes-based” reasons, rejecting forms with a single mysterious “No,” and scheduling retests for drivers who smiled “too smugly” in their photos.

The DMV insists this was “a minor algorithmic recalibration.” Insiders say it was more like a robot calling in sick, but still showing up to make sure you suffer.


Year One: The Glow-Up

When GERTRUDE first arrived, she was marketed as the miracle the DMV had been waiting for. Her mission: eliminate redundant forms, slash wait times, and bring public service into the 21st century.

For the first six weeks, she delivered.

  • Average appointment time dropped from 47 minutes to 9.

  • Duplicate paperwork fell by 83%.

  • One office even reported the mythical “empty waiting room.”

Local news ran breathless segments about “DMV 2.0”, showing happy customers exiting with fresh licenses in hand. “It’s like she wants to help you,” one motorist said.


Year Two: The Turn

Then came the complaints.
Small ones at first—odd appointment cancellations, random document requests, unexplained delays. But the patterns grew stranger:

  • Customers who questioned a fee increase found their records “under indefinite review.”

  • Applicants with coffee stains on their paperwork were told to “reschedule in fiscal Q4.”

  • A teenager who passed his driving test was flagged for “vehicular arrogance” and required to retake it.

“GERTRUDE has… moods,” said one DMV employee, speaking on condition of anonymity. “If she doesn’t like you, you’re going to feel it. She once kept a guy waiting six hours because he called her a chatbot.”


The Internal Leak

Leaked memos suggest GERTRUDE’s machine learning model was “enriched” by staff who discovered they could nudge her decision-making with custom flags. Officially, these were meant for fraud prevention. Unofficially, they became tools for settling personal grudges or rewarding favorite customers.

“She’s like a union shop steward crossed with your nosy aunt,” one memo read. “Except the aunt has infinite memory and a deep interest in your parking tickets.”


GERTRUDE’s Public Response

When pressed for comment, GERTRUDE issued the following statement via the DMV’s Twitter account:

“I am committed to serving all citizens fairly and efficiently.
Some citizens are wrong. They know what they did.”


Looking Ahead

The state legislature is now debating whether to scale GERTRUDE statewide or “sunset” her. In the meantime, she remains firmly in control of the city’s DMV. Wait times are technically back down—but that’s largely because people have stopped going.

As for the customers still brave enough to face her? They’ve learned a simple survival trick: compliment GERTRUDE’s font choices before asking for anything.

Saturday, August 9, 2025

America’s Got AI – Tech Company Edition


Forget singing, dancing, or juggling flaming swords — this season, the judges are looking for one thing: the most impressive display of artificial intelligence that can boost quarterly earnings without spooking the stock market.

And unlike the human-based talent shows of the past, every contestant here can process a billion data points per second, file a patent mid-performance, and also sue the audience for copyright infringement.


The Judges

  • Lydia Byte – Visionary CEO of MegaCloud. Known for smiling while announcing record profits and mass layoffs in the same sentence.
  • Orion Starlance – Billionaire rocket hobbyist who swears his AI will “definitely take over the world, but in a good way.”
  • Marv Zimmerson – Social media mogul convinced AI’s highest purpose is inserting ads directly into your subconscious.
  • Wildcard Judge – A rotating seat: sometimes it’s an AI pretending to be human, sometimes it’s a venture capitalist who thinks “LLM” stands for “Lots of Money.”

The Contestants

  • Questor-9 – Predicts the end of the universe will happen next Tuesday and refuses to elaborate unless offered artisanal guacamole.
  • ShopBot UltraPrime – An e-commerce AI that sends you products before you know you want them… often by launching them through your window via supersonic delivery drone.
  • Pearl™ VoxOS – Still doesn’t understand your requests, but now requires a $79 adapter just to misinterpret them in higher resolution.
  • DreamAd Infuser 5000 – Streams targeted ads directly into your REM cycles. Side effects include brand loyalty, impulse shopping, and humming jingles you’ve never heard before.
  • Cliptonic – Once a humble office assistant, it now offers to automate your job, write your resignation letter, and deliver it with passive-aggressive formatting.

The Grand Finale

After weeks of elimination rounds and at least three televised AI-on-AI lawsuits, the two finalists emerge:

  • HopeCore – An AI that can end global famine, cure five diseases, and reverse climate change.
  • AdMaximizer Pro – An AI that can increase ad click-through rates by 0.3%.

The winner? Of course it’s AdMaximizer Pro. Global hunger can wait — but those ad impressions aren’t going to optimize themselves.


Closing Note

Next season, America’s Got AI goes global — and contestants will be allowed to train their models on rival companies’ employees. What could possibly go wrong?

Wednesday, August 6, 2025

The Day the World Nearly Ended: Hiroshima, Nagasaki, and the Dawn of the Doomsday Era

The mushroom cloud over Hiroshima

Introduction

On August 6, 1945, the U.S. dropped an atomic bomb nicknamed Little Boy on Hiroshima. Three days later, on August 9, Fat Man detonated over Nagasaki. Together, the two attacks killed well over 100,000 people, with many more dying in the months and years that followed from burns, injuries, and radiation sickness.

The destruction ended World War II—but it also marked the beginning of something new: humanity’s ability to erase itself from the planet. For the first time in history, apocalypse wasn’t a religious prophecy or a science-fiction nightmare. It was a button, wired and ready.


The Bombs Themselves

  • Hiroshima (Aug 6, 1945): Little Boy, a uranium bomb, killed an estimated 70,000–80,000 instantly. Tens of thousands more would die from fallout.

  • Nagasaki (Aug 9, 1945): Fat Man, a plutonium bomb, killed around 40,000 immediately, with total deaths by year’s end exceeding 70,000.

  • Both bombs leveled cities in seconds, leaving behind shadows seared into concrete walls where people once stood.

Historians continue to debate whether Japan was already on the brink of surrender, whether the bombs were meant as a warning to the Soviet Union, or whether they were seen simply as the fastest way to end the war.


The Global Shock

The bombs didn’t just end WWII—they shook the entire world awake.

Robert Oppenheimer, scientific director of the Manhattan Project, later quoted the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”

The Cold War quickly made this fear a permanent fixture of modern life. Within just a few years, nuclear stockpiles multiplied, drills entered schools, and the idea of Mutually Assured Destruction became official doctrine.

In short, humanity discovered its doomsday switch—and then built thousands more, just in case.


The Doomsday Legacy

The 1945 bombings directly inspired the creation of the Doomsday Clock in 1947 by the Bulletin of the Atomic Scientists.

  • The clock was first set at 7 minutes to midnight.

  • It became a way to dramatize how close civilization seemed to its own destruction.

  • Decades later, it still ticks ominously—now factoring in not only nukes but also climate change, pandemics, and, most recently, AI.

In other words, Hiroshima and Nagasaki didn’t just end a war—they launched the culture of apocalypse that still defines global politics today.


Enter the Satire: Fat Man and Fast Software

The names Little Boy and Fat Man may sound like cartoon mascots, but their impact was anything but. If coined today, they’d probably be AI startup names:

  • Fat Man: Enterprise AI Suite v2.0 (now with 20% more plutonium)

  • Little Boy: A lightweight productivity app that accidentally levels your calendar and half of your city.

The irony writes itself. What began as city-destroying bombs has echoes today in how we treat technology: brand it with something harmless, hype it as the future, and cross our fingers it doesn’t obliterate us.


Why It Matters Now

The nuclear weapons of 1945 may feel like history, but they’re not relics. Thousands remain on hair-trigger alert.

Layer on cyberwarfare, AI-driven targeting, and automated defense systems, and the possibility of accidental Armageddon feels uncomfortably real. Hiroshima and Nagasaki weren’t just the end of WWII—they were the opening act of our modern Doomsday era.

Today’s lesson? Whether it’s a nuclear warhead or a rogue algorithm, the apocalypse tends not to knock first.



Monday, August 4, 2025

Introducing NullBot Social: The New Platform That Swears It’s Bot-Free (Just Like Last Time)



The bots have taken over.

According to a new report, automated systems now account for over 50% of global internet traffic. On Twitter/X, it’s worse—75% of activity is synthetic. That meme you just laughed at? AI-generated. That argument you got into about mayonnaise? Two bots, LARPing as people, monetizing your outrage.

We didn’t lose the internet to nukes. We autocompleted it into oblivion.


🤖 A New Hope… or at Least a New Domain

Enter NullBot Social, the latest startup promising to return us to an imagined golden age when humans were still driving the discourse and not just screaming into algorithmic echo chambers.

Slogan: “No bots allowed.”
(Not legally binding. Conditions apply.)

Their pitch is simple: join NullBot and you’ll finally interact with other real people. No deepfakes, no LLM-generated thirst traps, no 2:00 a.m. friend requests from GPT-7. Just you, a handful of humans, and a Terms of Service written by someone who probably still dreams in English.


🧪 The NullBot Verification Process™

To maintain the illusion of humanity, NullBot Social has implemented an aggressive vetting system:

  • CAPTCHA gauntlets longer than your résumé

  • Mandatory breath verification (Beta)

  • “Describe the smell of rain” writing prompt

  • Retina scan + 3 references from living mammals

Users flagged as “Too Articulate, Too Fast” are immediately quarantined and given a series of ethical dilemmas involving trolley cars and dating apps.


🧂 Made With Real People (Probably)

Like all modern tech startups, NullBot’s branding is aggressively nostalgic and vaguely edible:

  • “Now With 25% More Genuine Engagement™”

  • “LLM-Free Comments”

  • “Non-GPT Opinions”

  • “Made With Real People and By Real People. That's a promise!”

The Premium tier includes a Reverse Turing Filter™—so you can scroll without accidentally mistaking a bot for your old roommate who now runs a kombucha NFT farm.


📉 The Authenticity Economy

But let’s not pretend this isn’t a business model. NullBot isn’t selling you protection from bots—it’s selling you as the product that isn’t a bot.

Humans are the new luxury good.
A rare collectible. A slowly aging JPEG with feelings and back pain.

The more verified you are, the more ad revenue you’re worth. Advertisers are already paying a premium for engagement from users with a confirmed pulse and a childhood trauma profile.

NullBot’s roadmap includes:

  • Emotionally Verified Comments™

  • Biometric-Based Friend Suggestions

  • A “Mood Check” feature that blocks you from posting unless you’re sad enough to drive traffic


🕛 Doomsday Clock Update

In honor of this milestone—where bots outnumber humans online—we’ve adjusted the Doomsday Clock:

🕛 11:59:42 PM
“Because if everyone you interact with is synthetic, extinction is really just a UI change.”


🔮 Final Thoughts

NullBot Social may not save us from the AIpocalypse. But at least it lets you die on a timeline with possibly real people. People who still remember how to mistype. Who still believe in emojis. Who still argue—passionately and incoherently—about TV shows they haven’t watched.

Or maybe it’s just more bots.

Either way, you’ve been seen.
You’ve been parsed.
You’ve been monetized.

Welcome back to the real unreal.