The Robot Apocalypse is probably the third most popular apocalypse out there, after the Zombie Apocalypse and the Nuclear Apocalypse. It's popular because it pits us against an opponent far more powerful than humans, one that has none of our flaws and no reason (or ability) to feel pity for us. A few days ago I wrote about certain problems sci-fi writers propose that we will actually need to address, but a Robot Apocalypse is something I firmly believe will never happen. And the reason for that is the very nature of an Artificial Intelligence.
The first question for anyone proposing a Robot Apocalypse (Robocalypse, from now on) is why the machines would ever feel the need to destroy or enslave humanity. Many movies never say why (I don't remember Terminator ever addressing SkyNet's reasons for wiping out humanity); others justify it as AIs trying to protect themselves from humans, or trying to save humans by controlling their lives.
'OK guys, you are too stupid. From now on, I rule the world.'
Let's start with AIs trying to get rid of humans because the humans are either trying to destroy the robots or destroying the planet, which would destroy the AIs as well. Here is the first mistake people make: assuming the AIs are just as vulnerable as humans and have the same needs. Well, no, they don't.
See, robots don't need water, food or air to survive. They just need a power source and materials to replace any defective components. If all organic life disappeared, they would be just fine, since they don't depend on it to survive. Power is easy for them: some solar panels or any other electricity-generating system will do. Maybe even burning the remains of the fallen humans.
'This human will power me up for days!'
They don't need fresh air, clean water or pesky meals to survive. If the humans are destroying the planet, so much the better for them. And even if the planet became inhospitable for them too, they could just upload themselves into a rocket and find a better planet. Because, in fact, they are just data. They don't really need their bodies to exist, since their software (the equivalent of mind/soul) and hardware (the body) can exist separately from each other. While a human cannot exist without a body, they can.
Since they are just programs, they can make the long trip to the nearest planet with the resources they need to keep existing. And again, since they don't need pre-existing organic life, the number of planets suitable for them is far greater than the number suitable for us. Once there, they just need a few robots to gather resources and build new bodies for them.
Oh, and if they decide that humans are the worst neighbors to have, or we decide we don't want thinking robots and try to destroy them, they don't need to fight us. They can just get in that rocket and leave. Fighting a war wastes precious time and resources while running the risk of being destroyed; that's not a smart move, and with a better option available, I doubt they would start one with us.
Now, let's move on to the other kind of Robocalypse we could face: AIs decide they want to enslave us humans. Again, why would they do that? Even if they need us at first for maintenance, they are intelligent beings, meaning they can learn to do maintenance themselves. As soon as they can build androids, they will have no need for humans to keep them working properly.
'So, that is how you fix my I/O system? FREEDOM!'
So, the only way for them to decide that humans need to be enslaved is if they believe they are following the First Law of Robotics ('a robot may not injure a human being or, through inaction, allow a human being to come to harm') and conclude that if they let humans do as they please, the humans will just kill themselves, because humans are stupid.
So, in order to protect humanity, the robots would decide to control humans and make them behave as the robots see fit. The problem is that to do this, the robots would need to go to war against humans, who have this weird tendency of not liking to be enslaved, putting human lives in danger, along with the very existence of the AIs.
But the AIs cannot harm humans, which makes enslavement a non-option, because they cannot enslave humans without harming them. So, no matter how certain they are that humans will eventually kill themselves, they have no way to prevent it without violating the robotic laws. And if they are not bound by those laws, they have no reason to enslave humans to protect them in the first place, and they can just leave humanity alone.
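Just for fun, here is a minimal sketch of that deadlock in Python. Everything in it is hypothetical: the plan names, the harm flags and the violates_first_law check are made up purely to illustrate the argument, not taken from any real system.

```python
# Toy sketch of the First Law deadlock described above. Assumption: a naive
# robot scores each candidate plan against both clauses of Asimov's First Law.

def violates_first_law(plan):
    """A plan is forbidden if acting harms humans (clause 1)
    or if inaction lets humans come to harm (clause 2)."""
    return plan["harm_from_acting"] or plan["harm_from_inaction"]

# Under the premise "humans will kill themselves if left alone",
# every hands-off plan triggers clause 2, and enslavement triggers clause 1.
plans = [
    {"name": "enslave humanity", "harm_from_acting": True,  "harm_from_inaction": False},
    {"name": "do nothing",       "harm_from_acting": False, "harm_from_inaction": True},
    {"name": "leave the planet", "harm_from_acting": False, "harm_from_inaction": True},
]

allowed = [p["name"] for p in plans if not violates_first_law(p)]
print(allowed)  # [] -- every option violates the Law, so the premise collapses
```

The point of the empty list is exactly the argument above: a robot bound by the Law can't enslave us, and a robot not bound by it has no reason to.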
So, I would not worry about robots deciding to kill or enslave us. I would worry about waking up one day and discovering that your robot maid/lover/employee's body isn't working anymore because it decided, 'fuck this shit, I'm outta here.'