We should take a step back from Artificial Intelligence

Ethan Warble, Staff Writer


Should society take a step back from Artificial Intelligence development?

It might seem like a simple question to ask, and for many fans of science fiction a clear and resounding “YES!” would ring out.

Geoffrey Hinton, often called the “Godfather of AI,” would, perhaps surprisingly, agree with you.

Hinton, a 75-year-old cognitive psychologist and computer scientist, has spoken out against the technology he helped create, despite his work in the 2010s laying the foundation for the AI advances of recent years. According to comments he made to the New York Times, he quit his job at Google so that he could speak more freely against the technology.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said to the Times.

In a separate interview with the BBC, he called AI “quite scary” and said that, in his eyes, it is more dangerous than climate change.

AI is already woven into our daily lives. Siri, Bixby, Google Assistant, and even simple tools like predictive text on our keyboards or Spotify’s music recommendations are things we use every day. All of this technology is trained on the data we feed it, through neural networks, to give us a more convenient experience.

To be clear, today’s AI is not like the AI depicted in old science fiction media, the omniscient, egotistical machines that view themselves as gods, like XANA from “Code Lyoko,” SHODAN from the “System Shock” franchise or Skynet from “Terminator.”

Still, AI is built to be smarter and more capable than humans at certain tasks, and it is getting scary how much better it is becoming.

Take art as an example. Think about the practical applications of cheap, easy art that is tailored to whatever tags you throw at it and applicable to any situation. Or an AI that can write almost anything simply by being asked to craft it in front of you. The same applies to schoolwork and education with programs like ChatGPT.

Actors are even being asked to sign away the rights to their own voices so their characters can be played forever, even after death.

This sounds super convenient and totally not dystopian, at all.

Some industry surveys project that nearly all organizations will be using AI in some form by 2025, and the market for AI will only continue to grow.

So, why not adopt AI completely right now?

It’s complicated.

AI could devalue human creations in any field, and it is frightening to consider how quickly that could happen.

Think about something as abstract as art. How would you define it? If you scribbled lines on a piece of paper, that would be art, right? So what separates AI art from real art?

An AI like Midjourney, for example, can generate a beautiful portrait of someone from nothing but a text prompt.

That has to be art, right? Well, I don’t think so.

Even the most basic art pieces, however crude they look, still carry a sense of creativity and, most importantly, originality.

AI art cannot actually be copyrighted, because that “art” is assembled from countless drawings scraped from real artists and smashed together to produce a desired output. Many artists are unhappy about this and have organized protests on sites like ArtStation, a popular portfolio website that both artists and AI training datasets draw upon. This devalues real artists and their work, and the data behind it is frequently collected in bad faith, often without consent.

Most of the recent contention over AI centers on ChatGPT, developed by OpenAI and released publicly in November 2022. The program has taken the world by storm because of how easy it is to use and how much one can get out of it. OpenAI now has the backing of technology giant Microsoft, which has poured billions of dollars into supporting OpenAI’s research and development, including funding a supercomputer to help power ChatGPT.

Teachers, understandably, are against AI usage as an increasingly large share of students use it for homework.

Other concerns include AI systems spreading false and harmful information, and the fact that the data they are trained on can itself contain bias. This algorithmic bias mirrors existing human bias and can undermine the very applications AI is supposed to improve.

So should we welcome our new AI overlords?

Well, thousands of experts, among them Emad Mostaque, founder of Stability AI; Steve Wozniak, co-founder of Apple; and Elon Musk, head of Tesla and Twitter, have signed an open letter calling for a six-month pause on the development of systems deemed more powerful than current consumer-released AIs.

I firmly believe that AI is an important part of future technology, but to ensure it does not replace anything important, we should take a step back and put failsafes in place so that musicians, artists, writers, actors, teachers and other vulnerable groups are not harmed in the process.

If the proposed six-month pause on AI development goes through, it could demonstrate that restraint is still possible.

Science fiction stories are supposed to be cautionary tales, warnings written for humanity about the dangers of technology. We should treat them that way.