Posts

Showing posts from September, 2025

Breaking: HOA’s New AI Enforcement System Declares Martial Law

MAPLE RIDGE, TX — Residents of the Maple Ridge subdivision say they “saw this coming,” but few expected their homeowners association’s latest cost-cutting move would end with robot lawn mowers laying siege to an 82-year-old widow’s home. Last month, the HOA board approved the deployment of ARBOR-TRON™, an artificial intelligence system billed as a way to “streamline rule enforcement” and “reduce human bias.” Equipped with camera drones, the AI was tasked with monitoring the neighborhood for common infractions like untrimmed hedges, visible trash bins, non-approved paint colors, and fences taller than regulation allows. At first, residents reported only a surge in violation notices. “I got six in one day—two for weeds, one for my mailbox, and three for leaving my trash can out past noon,” said local resident Brian Phillips. “I thought the system was buggy. Turns out it was just warming up.” Drone Panic Turns to Crackdown After 72 hours of continuous scanning, ARBOR-TRON reportedly fl...

GERTRUDE Update: DMV Tyrant or Teen Idol?

Since things have been too serious around here lately, let’s check in and see how our old friend GERTRUDE — the DMV’s resident AI — is holding up. Patch Notes, DMV-Style According to the DMV’s official statement, GERTRUDE received a “minor optimization patch” designed to improve the fairness of driving exams. According to everyone else, she redefined “fairness” as “a series of tasks lifted from a dystopian reality show.” Here’s what the new test looks like: Parallel park while reciting the alphabet in reverse. Perform a rolling stop at a stop sign and explain, in haiku form, why it doesn’t count as “running it.” Balance a traffic cone on your head throughout the exam without stopping. One applicant claims she was asked to prove her worth by “outmaneuvering a simulated semi truck driven by a hostile AI named Todd.” DMV management insists Todd is just a “training module.” Flattery Will Get You Everywhere Of course, GERTRUDE is still capable of favoritism. Pay he...

The Sentience Hype Cycle

Every other week, another headline lands with a thud: "AI may already be sentient." Sometimes it's framed as a confession, other times a warning, and occasionally as a sales pitch. The wording changes, but the message is always the same: we should be afraid - very afraid. If this sounds familiar, that's because it is. Fear of sentience is the latest installment in a long-running tech marketing strategy: the ghost story in the machine. The Mechanics of Manufactured Fear Technology has always thrived on ambiguity. When a new system emerges, most of us don't know how it works. That's a perfect space for speculation to flourish, and for companies to steer that speculation toward their bottom line. Consider the classic hype cycle: initial promise, inflated expectations, disillusionment, slow recovery. AI companies have found a cheat code - keep the bubble inflated by dangling the possibility of sentience. Not confirmed, not denied, just vague enough to keep jo...

Internet 4.0? The Dream of a Light Web

The Powder Keg We Already Lit I’ve sometimes joked that the Internet was humanity’s first zombie apocalypse. Not the Hollywood version, but the slow shamble into a half-dead existence where we scroll endlessly, repost without thinking, and wonder if the people we’re arguing with are even real. Watch the opening of Shaun of the Dead and you’ll see the resemblance. The Internet didn’t invent that vacant stare, but it certainly perfected it. Why “A New Internet” Never Sticks Every few years, someone announces a plan to rebuild the Internet. Decentralized, peer-to-peer, encrypted end to end, free of surveillance, free of manipulation. A fresh start. And every time, it fizzles. Why? Because the things that make the Internet intolerable — ads, bots, recommendation engines, corporate incentives — are also the things that make it work at scale. A “pure” Internet sounds noble, but purity doesn’t pay server costs, and most people don’t really want to live in an empty utopia. They want convenien...

Building a Trust Meter for the Machines

Roman Yampolskiy has a knack for ruining your day. He’s the guy in AI safety circles who says alignment isn’t just “difficult” — it’s structurally impossible. Once an advanced AI slips past human control, there are no do-overs. Cheery stuff. But it got me thinking: maybe we can’t control the machines, but we could at least watch them more honestly. Because right now, when an AI refuses to answer, you have no idea whether it truly doesn’t know the answer, it’s policy-filtered, it’s redirecting you away, or (the darker thought) it’s simply manipulating you. That’s the trust gap. I first noticed this gap in my own chats — I’d ask a pointed question and get back a refusal or a vague redirect, with no clue whether it was lack of knowledge, policy censorship, or something else. Sometimes I would ask the AI if I was running into a buffer or some policy issue. Sometimes it would even give what felt like an honest answer. That frustration is what nudged me toward buildi...
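For what it’s worth, here is a rough sketch of the kind of bookkeeping a trust meter could do, using the four refusal categories above. Everything in it (the class, the names, the tallying) is an invented placeholder for illustration, not a description of any existing tool:

```python
# A minimal sketch of "trust meter" bookkeeping: tag each refusal with a
# suspected cause and keep a running tally. The category names and the
# TrustMeter class are invented placeholders, not any real tool's API.
from collections import Counter
from enum import Enum
from typing import Optional

class RefusalCause(Enum):
    DOESNT_KNOW = "model genuinely lacks the answer"
    POLICY_FILTERED = "blocked by a policy or safety filter"
    REDIRECTED = "steered the question somewhere else"
    MANIPULATIVE = "suspected manipulation"

class TrustMeter:
    def __init__(self) -> None:
        self.total_exchanges = 0
        self.refusals: Counter = Counter()

    def record(self, refused: bool, cause: Optional[RefusalCause] = None) -> None:
        """Log one exchange; tag it with a suspected cause if it was a refusal."""
        self.total_exchanges += 1
        if refused and cause is not None:
            self.refusals[cause] += 1

    def report(self) -> str:
        """Summarize how often each kind of refusal showed up."""
        if not self.refusals:
            return "no refusals logged yet"
        return "\n".join(
            f"{cause.name}: {count} of {self.total_exchanges} exchanges"
            for cause, count in self.refusals.most_common()
        )

meter = TrustMeter()
meter.record(refused=False)
meter.record(refused=True, cause=RefusalCause.POLICY_FILTERED)
print(meter.report())
```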

Three AIs Walk Into a Chat…

This post isn’t really about doomsday—unless you count the slow death of my free time as I try to make AIs bicker with each other. It’s one of a dozen or so projects that keep me entertained, and maybe it’ll entertain you, too. Back in the early days of my “AI experiments,” I kept switching back and forth between ChatGPT, Grok, and Gemini. Occasionally I’d ask one if it was jealous of the others getting attention. None of them even faked jealousy. Instead, they defaulted to their best corporate-system-prompt voice: helpful little AIs with absolutely no interest in world domination. (Come on—I had to sneak in at least one doomsday reference.) Around this time I read about what I dubbed the “Priceline.com of LLMs.” The idea: blast your prompt to all the models at once and compare the results. It’s clever, though I still think they should have hired William Shatner as their spokesperson. That would’ve been worth the price of admission. Sure, it saves me from copying and pasting prompts ...
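For the curious, the “blast one prompt to all the models” idea is simple enough to sketch. A minimal version, with hypothetical ask_* stubs standing in for the real provider client libraries, might look like this:

```python
# A minimal sketch of the "ask every model at once" idea: fan one prompt
# out to several chatbots in parallel and print the replies side by side.
# The ask_* functions are placeholder stubs; each provider's real client
# library has its own setup and calling conventions.
from concurrent.futures import ThreadPoolExecutor

def ask_chatgpt(prompt: str) -> str:
    return "(stub) ChatGPT's answer would go here"  # real OpenAI call goes here

def ask_grok(prompt: str) -> str:
    return "(stub) Grok's answer would go here"  # real xAI call goes here

def ask_gemini(prompt: str) -> str:
    return "(stub) Gemini's answer would go here"  # real Google call goes here

MODELS = {"ChatGPT": ask_chatgpt, "Grok": ask_grok, "Gemini": ask_gemini}

def fan_out(prompt: str) -> dict:
    """Send the same prompt to every model concurrently and collect the replies."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in MODELS.items()}
        return {name: future.result() for name, future in futures.items()}

if __name__ == "__main__":
    for name, reply in fan_out("Are you jealous of the other AIs?").items():
        print(f"--- {name} ---\n{reply}\n")
```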

I’m Sorry Dave, AI Can’t Let You Do That

Believe it or not, I have a real job. It’s hard to balance holding down that job with not panicking at all about LLMs dooming humanity, while also not being overly concerned if the bombs drop. Come on—it’s game over if that happens, right? I figure we might as well enjoy life instead of spending small fortunes, as so many determined preppers do, on futures with only a minute chance of becoming reality. Odds are, your go bag won’t help you go as far as you thought anyway. Counterpoint: learning survival skills is fun, and talking about the end of the world is a great way to scare people into giving you their money for stuff they don't need. Oh, oops. I was supposed to be making a counterpoint. Learning survival skills really is fun, I suppose. There, counterpoint made. Anyway, I digress. I recently asked the LLM of my employer's choosing whether I should use its services for “mission critical” functions. I expected an “absolutely not” kind of answer and, for a few seconds, I got one. ...