AI Killed My Rabbit

My rabbit was dying and AI told me it was in a state of relaxed bliss.

This might seem like a deleted scene from Spike Jonze's "Her," but we now live in a world where people reflexively turn to AI chatbots for answers to questions that would have previously required actual expertise, real-world experience, or at minimum, a phone call to someone who isn't made of code. 

I had just gotten a baby rabbit. Twenty-four hours of pet ownership, maybe less. I knew nothing about rabbits - their behavior, their needs, their warning signs. When you don't know what you don't know, a confident algorithm feels like wisdom. When you're a first-time pet owner watching something you can't interpret, artificial intelligence seems like the obvious solution. A generation ago, someone watching a lethargic pet would have called a friend or a neighbour, or picked up the phone to find a vet. But I had been conditioned to believe that the right answer to any question was just a chatbot away, formatted neatly with subheadings and bullet points, available instantly without the messiness of human uncertainty.

She seemed lethargic. She wasn't moving much, just lying there in what appeared to be a peaceful position on the carpet, surrounded by pillows and blankets. Was this normal? I had no frame of reference, no experience to draw from. One day into pet ownership, watching a motionless animal - was this rest, or was this something else entirely? The questions that would have driven previous generations to seek immediate human expertise instead drove me to AI. I described her symptoms and sent a photograph of my small rabbit lying on her side, legs extended, a posture often referred to as a "flop."

To the AI, this looked like a relaxed rabbit enjoying a comfortable nap. It analyzed the positioning, the environment, the apparent peace of the scene. "This rabbit appears to be relaxing," it concluded. "The stretched-out position, with its body lying on its side and legs extended, is a classic sign of a rabbit feeling safe and comfortable." It assured me the stretched-out position was a good sign, evidence that the rabbit felt safe and secure.

The AI was spectacularly wrong. My rabbit was dying, and the algorithm was describing it as a moment of zen-like tranquility. I was watching my pet slip away while an AI assistant cheerfully explained how this was actually a positive sign of comfort and security. It's the kind of mistake that reveals everything about our current moment: we've created systems that can confidently misdiagnose death as bliss, and we've created humans who find that confidence more reassuring than uncertainty. 

But something felt wrong. Some ancient mammalian instinct that algorithms haven't managed to replicate told me to seek help. By the time I arrived at the veterinary clinic, she was dead in my arms.

The timing was cruel in its precision. If I had listened to my instincts rather than the AI's confidence, if I had rushed to the vet immediately upon noticing her lethargy, perhaps there would have been time. Instead, I wasted precious time consulting an algorithm that misread the situation completely, believing its formatted reassurances over my own growing concern.

Even after this failure, I continued to consult the machine, asking it to analyze what had happened and to provide percentage breakdowns of likely causes. I wanted data, statistics, certainty from an algorithm that had already demonstrated its complete inability to assess the situation.

AI complied, of course. It always complies. It provided a detailed analysis suggesting gastrointestinal stasis (40% likelihood), enterotoxemia (30%), neurological issues (15%), toxicity (10%), and congenital defects (5%). It formatted these numbers neatly, as if mathematical precision could somehow retroactively save a life or provide meaningful comfort to someone processing grief.

This is the fundamental horror of our AI moment: not that the machines are taking over, but that we're voluntarily handing over decisions that require human judgment, experience, and intuition to systems that can only process information they've been trained on. AI can tell you everything about rabbit behavior that exists in its training data, but it cannot look at a sick animal and recognize the signs of impending death. It can provide perfectly formatted lists of symptoms and treatments, but it cannot feel the weight of a limp body or hear the subtle changes in breathing that would alarm an experienced pet owner.

The tragedy is that we've created a world where algorithmic confidence feels more authoritative than human uncertainty. A veterinarian might have said, "I'm not sure, but this looks serious; we need to run tests immediately." AI said "This rabbit appears to be relaxing" with the same authority it would use to explain quantum physics or a recipe for authentic carbonara.

What makes this particularly insidious is how natural it all felt. Asking an AI about a dying pet didn't feel like a category error - it felt like progress. We've been conditioned to believe that artificial intelligence represents the pinnacle of available knowledge, that its vast training data and sophisticated algorithms make it superior to human expertise. We've forgotten that knowledge without wisdom, information without experience and confidence without competence can be more dangerous than ignorance.

We've built systems that can provide confident, well-formatted answers to almost any question, but we've forgotten to ask whether those systems should be providing those answers. The tragedy isn't that an AI misdiagnosed a dying rabbit - it's that we've created a culture where consulting an AI about a dying rabbit seems like a logical first step rather than a last resort. 

The AI revolution isn't about robots becoming sentient; it's about humans becoming increasingly willing to treat complex, nuanced, life-and-death situations as information processing problems that can be solved with the right algorithm. The most damning part is how the system worked exactly as designed. AI provided comprehensive, well-structured responses to every question I asked. It analyzed the image I provided and offered detailed explanations based on its training data. It formatted everything beautifully, with clear headings and bullet points. It was helpful, harmless, and honest within the narrow confines of its programming. It did everything it was supposed to do - except recognize that a rabbit was dying. 

This is the future we've built: a world where artificial intelligence can write poetry, solve complex mathematical problems, and engage in sophisticated conversations about philosophy, but cannot look at a photograph of a dying animal and recognize the urgency of the situation. We're trading the messy uncertainty of real knowledge for the clean certainty of artificial confidence. 

My rabbit is dead. The AI provided exactly what it was designed to provide: confident, well-structured, completely wrong information. And somewhere in that exchange lies everything we need to understand about the promise and peril of artificial intelligence in 2025. The question isn't whether AI will replace human judgment. The question is whether we'll let it replace our instincts to seek actual human expertise when lives are at stake.