When ChatGPT was released last year, people’s reactions ranged from excited to fearful to just plain curious. And let’s be real, who wouldn’t be curious about a chatbot that can understand and generate human-like responses? But in the midst of all that excitement, one man has been a prominent critic of AI boosters – Gary Marcus, who we like to call the ultimate AI gadfly. He’s a psychology and neural science expert who’s been asking the tough questions about AI’s development and use, with a healthy dose of wit, humor, and skepticism.
Now, we all know that people have been talking a lot about the potential dangers of AI. But what about the benefits? Can anyone tell us what specific benefits we’re going to get from AI? Anyone? Bueller? Bueller? The intellectually honest answer is that nobody knows. And while some of us want to slow AI development down because of potential risks, we can’t completely discount the fact that there might be some benefits. But let’s not get too carried away with GPT-5, which some people seem to think is going to be the best thing since sliced bread. I mean, GPT-4 can’t even beat a grandmaster in chess! So can we please calm down with this GPT-5 hysteria?
Now let’s talk about all the people who have been fooled by ChatGPT. You know who you are. You thought you were talking to a human, when all along it was just a program spitting out answers one token at a time. It’s a neat party trick, sure, but let’s not forget that these systems don’t actually understand what they’re reading or saying. And if we’re not careful, bad actors can use these systems to spread misinformation and lies. So can we please stop giving ChatGPT all this credit that it doesn’t deserve?
And speaking of credit where credit is due, let’s talk about companies like OpenAI and Microsoft, and their respective CEOs, Sam Altman and Satya Nadella. Sure, they’re developing pretty cool AI technology, but they need to be clearer about what their systems can and cannot do. Altman has walked both sides of the fence, sometimes inviting the inference that their systems are capable of artificial general intelligence. But let’s get real here, folks. These systems can’t even do basic math. Do we really think they’re capable of taking over the world?
Now onto the elephant in the room – what happens when AI-generated misinformation and deepfakes start infiltrating our political system? It’s going to be a [expletive] show, that’s what. And we need to address this issue now before it’s too late. We need to watermark videos and develop laws that penalize people for spreading mass-scale harmful misinformation. I mean, we already have rules on telemarketing, so why can’t we have rules on distributing misinformation? And let’s not forget that this is a bipartisan issue – 69% of people support a pause in AI development. So let’s work together to make sure AI doesn’t destroy our democracy.
Now, I know what you’re thinking. “Gary, you’re a critic of AI. Do you even believe in its potential?” And to that, I say, absolutely. The potential for AI to revolutionize science and technology is enormous. But let’s not get overly excited about the technology we have right now. It’s great for typing faster and being more productive, but if it destroys the fabric of society, is it really worth it? And that’s exactly why we need to establish an international institution to govern AI development and use. We need rules and infrastructure to ensure that AI is used responsibly and for the greater good. And while some might be skeptical, I’m cautiously optimistic that we can achieve this. So let’s not give up hope just yet, folks.
Source: The New York Times