If you try to advocate for firearms regulation in America, you will hear that “Guns don’t kill. People do!”
Firearms supporters will bring up tons of examples of how anything can be good or bad; it all depends on how you use it. 99% of people will use their kitchen knives to cut their potatoes. Only a sick 1% will use such tools to stab their family members.
Should that peaceful 99% be deprived of an essential tool to prevent the madness of that insane 1%? Of course not!
However, this argument always fails to address an essential question in determining whether something should be regulated or even banned: what was that tool designed for?
The reason why I’ve always been in favour of regulating or even banning guns for private citizens is that guns were designed specifically to kill. Then, since humans are very creative, secondary reasons emerged to make sense of gun possession: sport, collecting, or hunting, which is killing “just” animals, creatures our species still regards with Nazi-like disdain.
However, despite these creative alternative uses of firearms, their existence is motivated purely by humans’ need to kill other humans. The conflict in Ukraine is here to remind us of that.
If we apply the same reasoning to AI, we get another dire picture.
What was AI designed for?
Well, AI got immediate traction in marketing. What better application than intelligent data harvesting to find out what consumers want?
Then AI soon moved on to finding patterns in how people behave in public spaces. The official excuse: improving public safety. The real reason: mass surveillance. That’s how China began to invest billions in AI.
Finally, the other big use of AI shares the same fate as every new technology: the military. Just like everything else, AI is being used to kill.
The rest are just petty uses that do more harm than good anyway. From writing books, to painting, to sketching, to composing music, better and better, to the point that human artists won’t be needed anymore. Awesome!
Already DALL-E, by the notorious startup OpenAI, promises to create amazing photorealistic images from nothing more than a typed sentence. You describe what you want and DALL-E visualizes it in seconds! No more need for your fellow artist on Fiverr. Soon, millions of artists unemployed overnight.
And what about deepfakes? The confidence that video evidence used to give has vanished into the haze forever. Deepfake, the magic tool for creating realistic videos of people who never shot them. Deepfake, the invention nobody needed. Deepfake, the creation that gave too many losers the chance to produce whatever revenge porn they wanted.
How can humans be so self-destructive? I can understand alcohol, cigarettes, maybe even drugs, but I don’t understand this compulsive development of AI!
When I hear keynotes from AI entrepreneurs, I hear an unsettling sadistic enthusiasm in their voices. They sound like they are looking forward to destroying our careers, our freedoms, our lives.
If you listen to AI fans, they will tell you that AI is inevitable “because it makes so much money!”
I just wonder what we will do with all of that money in the hands of a sick few. Because AI as it is really risks digging an insurmountable ditch between the rich and the poor. A ditch that will stand there forever, through the eons. Just a few ultra-rich controlling the best algorithms, and billions of poor with no chance to rebel against the new regime. Because the new regime will know everything about the poor. It will even know how they think!

Modern society will be wiped out. The Age of Enlightenment, which began in Paris in the 18th century, will end. Technology will bring us back to the Ancien Régime, when a narrow noble class ruled over everyone by divine right. In the near future, those rights will be conferred by techno-deities.

After all, tech companies nowadays look more and more like churches than proper businesses, and their customers more and more like fanatics. Just look at Apple: Cupertino makes products without any practical use, yet they are worshipped like relics by Apple fans. Don’t you dare criticize their apple-god!
Yes, of course, we can find countless good uses for AI, like aiding (not replacing) doctors in the diagnostic process, or recognizing from someone’s tone of voice whether they are about to commit suicide, but these are just a tiny fraction compared to all of the other, destructive use cases for AI.
And guess what? The vast majority of investments go to the destructive cases, not the good ones. Helping doctors spot early-stage cancer? Nah! That’s for compassionate losers! Building the ultimate pilotless military aircraft? That’s stuff for real men, buddy! Because nobody wants to lose the I-have-it-longer game. The gun, I mean…
Alex Joonto (on LinkedIn, Alessandro Giuntini) is a chameleon who works for a fantastic crypto company called SwissBorg during the day. During the night, he works on writing articles, conceiving ideas, and preparing to deliver his first book: Thank You, President Corona! You can follow him on LinkedIn only. No, he doesn’t have any Instagram or TikTok…