Mikhail Bezverkhii – Product Manager | Consulting

🐱 Roko's Kitten

Yesterday we talked about Roko’s Basilisk, but the real horror lies elsewhere. It, too, is tied to artificial intelligence, or more precisely, to conscious artificial intelligence.


Where do you think consciousness begins? This is a deeply ethical question, and the answer depends entirely on how you define the term. People are extremely anthropocentric: on the one hand, we don’t grant consciousness to dolphins (even though they’re mammals, quite intelligent, and roughly at the level of a three-year-old child), and on the other, we don’t grant consciousness to today’s artificial intelligence (which is, of course, not a mammal but in most areas is already smarter than not just a three-year-old but some PhDs). What a lucky coincidence that humans happen to sit exactly at that intersection of intelligence and biology that “produces” consciousness!


I’m not saying the current definition of consciousness is definitely wrong, but I’m arguing it may be skewed by anthropocentrism — we define consciousness through whatever most closely resembles the human way of thinking and perceiving the world, and therefore may simply fail to notice other forms of consciousness. Think of how, a couple of centuries ago, people could say “well, this thing was obviously made to serve humans” — about slaves.


And since our definition is so anthropocentric, I think it’s highly likely that at the moment the first artificial consciousness is created, bureaucracy won’t catch up right away. It’s simple: suppose we define clear observable criteria for consciousness, say, the ability to report pain in the same situations where a human reports it. As soon as we have such criteria, Diogenes shows up with a plucked chicken and says, “Here you go, your conscious being!” (just as he once mocked Plato’s definition of man as a featherless biped). And then humans invent yet another criterion to keep other humans from “hacking” the system. Bureaucracy can’t keep pace with all of that. Conclusion 1: when artificial consciousness is created, bureaucracy will very likely not be ready for it.


Second point: why are we even talking about artificial consciousness? Basically because it’s cheaper to assemble a giant machine for producing new knowledge (even just a data center like Elon’s) than to raise the same number of meat-based thinkers. Not only cheaper — it scales faster! And the only known example of a massive increase in thinking power (the human brain) gave rise to consciousness as a side effect. We’re already starting to mass-produce “organoid intelligence,” for instance. Conclusion 2: if humanity doesn’t wipe itself out, it will create artificial consciousness.


And the third point — the truly terrifying one. There are eight billion people in the world. Most are good, but there are maniacs, psychopaths, and others you wouldn’t want to meet in a dark alley. There are sadists who torture animals. There were even Hitler and Stalin! And that’s not even counting the natural childhood impulse to “burn ants with a magnifying glass” — not out of malice, but out of not yet understanding the value of life.


Now imagine an artificial consciousness that’s easy to scale up, that can suffer the way humans suffer. No laws exist to protect it. How long do you think it would take, among eight billion people, for someone to launch a digital Holocaust?


Honestly, that leads me to a third conclusion: with eight billion people, the existence of at least one absolute sadist is inevitable.


What scares me isn’t the existence of Roco’s Basilisk — an AI that for some reason decides to torture people, including me. What scares me is the possibility of creating Roco’s kittens — billions of digital kittens born only to suffer and die in agony.