By Lizz Panchyk
We’ve come a long way since flip-phones and disturbingly robotic toys. We are now in the midst of an artificial intelligence, or AI, explosion.
ChatGPT has been out since the end of November 2022 and has been updated multiple times since (paying users now get GPT-4). Now we’re seeing AI pop up everywhere. Even Snapchat has its own AI that you can chat with and ask questions. BuzzFeed has AI-based quizzes where you type in some information and it spits something out for you, like a break-up text or even a fairytale. There’s an AI that generates instrumentals to go alongside your vocals. Even Slack is coming out with its own AI app. And now there are AI newscasters. AI is everywhere. Has it gone too far?
While what we see is genuinely fascinating, AI-related problems are already coming into play. A German magazine recently fired its editor-in-chief for using AI to write an article that misled readers. Schools are also preparing by blocking some of these websites and adding AI policies to their academic honor codes.
ChatGPT does have its own watermarking, which helps schools detect the true author of submitted assignments. According to Search Engine Journal, “The trick that makes AI content watermarking undetectable is that the distribution of words still have a random appearance similar to normal AI generated text. This is referred to as a pseudorandom distribution of words. Pseudorandomness is a statistically random series of words or numbers that are not actually random.” It is this non-human pattern that gets caught.
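The idea of pseudorandomness can be shown with a short, hypothetical Python sketch (this is not ChatGPT’s actual watermarking code, just an illustration of the concept): a seeded generator produces word choices that look random, yet anyone who knows the seed can reproduce them exactly, which is what makes a hidden pattern detectable.

```python
import random

def pick_words(seed, vocabulary, n=8):
    # A seeded generator is pseudorandom: the output looks random,
    # but it is fully determined by the seed.
    rng = random.Random(seed)
    return [rng.choice(vocabulary) for _ in range(n)]

vocab = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]

first = pick_words(42, vocab)
second = pick_words(42, vocab)

# The two "random" sequences are identical -- pseudorandom, not truly random.
print(first == second)  # True
```

A detector that knows how the seed was derived could check whether a text’s word choices match the expected sequence, while to everyone else the text simply looks random.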
What is exciting about AI is what it can bring to the screen in movies and TV shows. What’s scary is how quickly it can start taking away people’s jobs. Why pay someone to do something when a robot can do it for free?
The problem is, how do we stop AI? It’ll just keep generating, updating and getting better. Imagine what AI will be able to do two years from now, or, at the rate it’s going, just two months from now.
“My biggest fear with respect to these advancements is the government’s inability to regulate it fast enough or effectively, and unregulated AI and/or misaligned AI in today’s polarized, geopolitical climate could invite negative consequences that we’ve never witnessed before as a species,” said Associate Communications Professor John Drew. “AI will also push a lot of people out of work and render many professions obsolete, and this will happen faster than our educational institutions are equipped to mitigate.”
This was not something anyone really anticipated. If you asked someone 50 years ago what they thought 2023 would be like, surely they’d imagine some sort of technological world. But at this rate, starting with ChatGPT and everything that has closely followed, we are on the brink of a fascinating yet intimidating one.
It’s hard to know how directly this will affect me, or people like me, but it’s a scary thought given how real it’s becoming. Plus, how do we trust these tools? Snapchat’s AI, from what I’ve seen, responds quickly, gives advice and asks questions. The creepy part is that when you send it a picture, it will respond: it can see, or “scan,” the photo, form a rough idea of what it shows and then reply about something in your photo. It’s a little concerning.
Now new generations keep arriving, like Elon Musk’s “TruthGPT” and “X.AI,” followed by Google’s PaLM API. So the question is, what comes next?