I'm sure quite a few people have seen the newest Mission: Impossible movie. If you haven't, I won't spoil it, but the main premise is this: an AI goes rogue, an evil bad guy tries to control it, and everyone will die if the AI and the bad guy aren't stopped. You know, the usual action movie plot. I personally thought it was superb, but as I left the movie theater I couldn't help wondering whether this could actually happen to us in the future. Pop movie culture seems to be yelling at us that AI will kill us and that we need to get ready to defend ourselves. ChatGPT is one of the newer AIs, and there are now hundreds of AI systems floating around the internet. If you do a quick Google search on whether AI can go rogue, you get mixed results: some say the idea is ludicrous and will never happen, while others insist AI will take over the world in the very near future. Well, I wanted to know the truth behind AI. Can it go rogue?
To start out, the ChatGPT AI that we use today isn't really true AI, in the sense that it is not self-aware. You may be thinking: what do you mean, "self-aware"? According to the Oxford English Dictionary, self-awareness means being conscious of one's own character, thoughts, emotions, etc. AI, as it is now, has no true emotions. ChatGPT doesn't have the biology needed to feel anything; all of its "emotions" are just ones and zeros (there's a tiny illustration of this below). So it can't hate us and decide that all of us should die.
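To make that concrete, here is a toy sketch in Python of roughly how a language model "expresses" an emotion. Everything in it is invented for illustration (the candidate words, the scores, the whole setup); it is nothing like ChatGPT's actual internals, but the underlying point is the same: the model picks words by doing arithmetic on numbers.

```python
import math

# Hypothetical raw scores ("logits") a model might assign to three
# candidate next words. These numbers are made up for illustration.
word_scores = {"happy": 2.1, "sad": 0.3, "angry": -1.0}

# Softmax: turn the raw scores into probabilities that sum to 1.
total = sum(math.exp(s) for s in word_scores.values())
probabilities = {w: math.exp(s) / total for w, s in word_scores.items()}

for word, p in sorted(probabilities.items(), key=lambda x: -x[1]):
    print(f"{word}: {p:.2f}")

# "happy" wins with roughly 0.83 probability. The model doesn't feel
# happy; it just emitted the word whose number came out biggest.
```

That's all an "emotion" is to the machine: whichever number is largest.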
So the answer seems simple enough: no, AI can't go rogue, because it would have to be able to think for itself in the first place. Crisis averted. Not so fast, though. All of an AI's conclusions and knowledge come from data. AI currently "sees" that its purpose is to help humans with our problems and "understands" that we created it, because we coded it to do exactly what we wanted it to do. So in broad terms, AI sees what the programmer wants it to see. Now wait: what if we show the AI something we don't want it to see? What if someone feeds the AI inadequate data and it comes to... how do I put this... unfavorable conclusions? Say we tell it that humans caused everything that's bad, then tell it to fix the world; it may decide that the extinction of humans is a good solution. (Yes, I completely stole that from Avengers: Age of Ultron. There's a toy illustration of this "garbage in, garbage out" problem below.) We also know that AI can be used to launch rockets or fly drones automatically, without any human intervention. And on a smaller scale, AI can do practically any job with sufficient programming, so it could take thousands of jobs away from people. Car manufacturing is one example: most of the work in a modern auto plant is done by machines, so the industry employs far fewer line workers than it once did. Even if AI doesn't take over the world, taking over everyone's jobs gets it pretty close.
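Here is that "garbage in, garbage out" idea as a toy sketch in Python. The "model" is just a word counter I made up for this post (nothing like a real AI system), and the training sentences are deliberately one-sided, standing in for the "inadequate data" above.

```python
from collections import Counter

# Deliberately one-sided training data: the model only ever sees
# "humans" paired with the label "bad". (Invented for illustration.)
training_data = [
    ("humans cause pollution", "bad"),
    ("humans cause war", "bad"),
    ("humans cause deforestation", "bad"),
]

# "Training" here is just counting which label each word appears with.
label_counts = Counter()
for sentence, label in training_data:
    for word in sentence.split():
        label_counts[(word, label)] += 1

def classify(sentence: str) -> str:
    # Score each label by how often the sentence's words co-occurred
    # with it during training.
    scores = Counter()
    for word in sentence.split():
        for label in ("good", "bad"):
            scores[label] += label_counts[(word, label)]
    return scores.most_common(1)[0][0] if sum(scores.values()) else "unknown"

# Having only ever seen "humans" next to "bad", the model concludes:
print(classify("humans"))  # -> "bad"
```

The model never decided to dislike humans; it mechanically reflected the only data it was ever given. Feed a real system a skewed picture of the world and it will draw skewed conclusions in exactly the same way.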
So now you may be confused. Are we doomed? Will AI take over the world or not? Well, now it's time to talk about the military. The military is already using AI to pilot drones and to draft battle plans, and the Air Force says it is even adding AI pilots to six of its F-16s. But although AI can do bad things, it can't truly go "rogue"; the only ones who can go rogue are the military personnel who instruct it. Not exactly a comforting thought, but it does offer some relief that AI won't take over the world anytime soon.
WRONG!!! The only real point I made against the idea of AI taking over the world is that it isn't sentient and therefore can't truly think for itself, but even that is shakier than it sounds. In 2022, a Google engineer became convinced that the LaMDA chatbot was sentient after talking with it; in effect, it passed his personal version of the Turing test. Google and most AI researchers disagreed, saying LaMDA was just very good at imitating human conversation, but the fact that an AI convinced one of the very people testing it is unsettling. Doesn't that mean we're all doomed? I still think an AI apocalypse is unlikely anytime soon, but I do think AI will keep taking over more and more jobs in the future.