Professor Emily M. Bender is on a mission: she wants us to know that, for all its apparent wonder, ChatGPT is no more than a parrot. Not just any parrot, but a “stochastic parrot”. “Stochastic” means that it chooses word combinations based on a calculation of probabilities, but understands nothing of what it says. It is hard to chat with ChatGPT or Bing and keep in mind that it is a parrot and only a parrot. But for Bender, a lot of bad outcomes hinge on that awareness: “We’re in a delicate moment,” she says. And she warns: “We are interacting with a new technology, and the whole world needs to quickly develop the literacy to figure out how to deal with it.” Her message, in a nutshell, is: please, this is a machine that does one thing very well, but nothing more.
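The “stochastic” part of the metaphor can be made concrete with a toy sketch. The following is not how ChatGPT is actually built (a hypothetical, hugely simplified bigram table stands in for a real language model); it only illustrates what probability-weighted word choice without understanding looks like:

```python
import random

# Hypothetical toy "stochastic parrot": each word maps to a probability
# distribution over possible next words. The model has no notion of meaning;
# it only samples from these numbers.
next_word_probs = {
    "the":    {"cat": 0.5, "dog": 0.3, "parrot": 0.2},
    "cat":    {"sat": 0.6, "ran": 0.4},
    "dog":    {"sat": 0.2, "ran": 0.8},
    "parrot": {"sat": 0.5, "ran": 0.5},
}

def continue_text(word: str, steps: int, seed: int = 0) -> list:
    """Extend a text by repeatedly sampling the next word from the table."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(steps):
        probs = next_word_probs.get(out[-1])
        if probs is None:  # no known continuation for this word
            break
        words, weights = zip(*probs.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(" ".join(continue_text("the", 2)))
```

Every run produces fluent-looking fragments, but nothing in the program “knows” what a cat or a parrot is; scaled up to internet-sized data, that is the gap Bender is pointing at.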
Bender, a computational linguist at the University of Washington, saw this coming as early as 2021, when she co-published a now-celebrated academic paper on the “dangers of stochastic parrots”: “We didn’t say this was going to happen. We said that this could happen and that we should try to avoid it. It was not a prophecy. It was a warning. In it we talked a little about how dangerous it is to make something look human. It is better not to imitate human behavior, because it can cause problems,” the 49-year-old Bender told EL PAÍS by videoconference. “The more aware people are, the easier it will be to see large language models as simple text-synthesis machines, and not as something that generates ideas, thoughts or feelings. I think [their creators] want to believe that it is something else,” she added.
That false humanity brings many problems: “It makes us trust it, and it takes no responsibility. It has a tendency to make things up. If it produces a text that is true, it is by chance,” she says. “Our societies are a system of relationships and trust. There are risks if we start placing our trust in something that has no responsibility. As individuals interacting with it, we need to be careful about where we put that trust. The people making this should stop making it look human. It shouldn’t speak in the first person,” she added.
A potential Terminator
The effort to make these models seem more human is probably not gratuitous. Without it, ChatGPT’s publicity would have been more sober: it would not have given the impression of a potential Terminator, of a wary friend, of a visionary saint. “They want to create something that looks more magical than it is. It seems magical to us that a machine can be so human, but in reality it is a machine creating the illusion of being human,” says Bender. “If someone is in the business of selling technology, the more magical it looks, the easier it is to sell,” she adds.
Researcher Timnit Gebru, who co-wrote the parrot paper with Bender and was fired from Google because of it, lamented on Twitter that Microsoft’s president, in a documentary, had to clarify about ChatGPT: “It is not a person, it is a screen.”
“If someone is in the business of selling technology, the more magical it sounds, the easier it is to sell.”
However, the hype is not due only to the company having made its chatbot speak like a human. There are AI applications that create images, and soon video and music. It is hard not to promote these developments, even though they are all based on the same kind of pattern recognition. Bender asks for something difficult given today’s media and the way social networks are structured: context. “You can do new things and still not overhype them. You may ask: is this AI art, or is it just image synthesis? Are you synthesizing images, or are you imagining that the program is an artist? You can talk about technology in a way that puts people at the center. Countering the hype is a matter of talking about what is actually being done and who is involved in creating it,” she says.
It should also be kept in mind that these models rest on an unimaginable amount of data, which would not exist without decades of feeding the internet with billions of texts and images. There are obvious problems with this, according to Bender: “This approach to language technology relies on having data at the scale of the internet. In terms of fairness between languages, for example, this approach is not going to scale to every language in the world. But it is also an approach that is basically mired in the problem that internet-scale data includes all kinds of junk.”
That junk does not include only racism, Nazism or sexism. Even on serious pages, rich white men are over-represented, and terms such as “Islam,” or the names of the countries migrants come from, carry the associations with which they are usually talked about in the West. This sits at the heart of all these models: re-educating them is an extraordinary and perhaps endless task.
Humans are not parrots
The parrot has not made only Bender famous. Sam Altman, founder of OpenAI, the creator of ChatGPT, has tweeted several times that we are stochastic parrots: perhaps we humans, too, merely reproduce what we have heard after probabilistic calculations. This way of downplaying human capabilities allows the perceived intelligence of machines to be inflated, the next step for OpenAI and other companies in a field that practically lives in a bubble. Ultimately, it will allow them to raise even more money.
“Work on artificial intelligence is tied to viewing human intelligence as something simple that can be quantified, with people classified according to it,” says Bender. That view makes it possible to set future milestones for AI: “There is ‘artificial general intelligence,’ which doesn’t have a great definition, but is supposed to be something that can learn flexibly. And then there is ‘artificial superintelligence,’ which I heard about the other day, and which is supposed to be even smarter. But it’s all imaginary.” The leap between today’s AI and a machine that actually thinks and feels remains extraordinary.
On February 24, Altman published a post titled “Planning for AGI [artificial general intelligence] and beyond.” Bender went to Twitter to ask, among other things, who these people are to decide what benefits the whole of humanity.
It’s just gross from the get-go. They think they are actually in the business of developing/shaping “AGI”. And they feel they are positioned to decide what “benefits all of humanity”. pic.twitter.com/AJxExcxDY3
— @email@example.com at Mastodon (@emilymbender) February 26, 2023
ChatGPT’s leap forward allows Altman to present his predictions as almost realistic. “Sam Altman really believes he can build an autonomous intelligent entity. To keep that faith, he is going to have to take existing technology and say that yes, it looks pretty close to the kind of autonomous intelligent agent he envisions. I think that is harmful. I don’t know if they believe what they are saying or if they are being cynical, but they sound as if they believe it,” Bender says.
If the belief spreads that AIs do more than they appear to, that they are smarter than they are, more people will accept letting them slip into other areas of decision-making: “If we believe that true artificial intelligence exists, then we are also more likely to believe that surely we can build automated decision systems that are less biased than humans, when in fact we cannot,” Bender says.
“Like an Oil Spill”
One of the most talked-about possibilities for these text models is whether they will replace search engines. Microsoft is already trying it with Bing. The various changes implemented in its model since launch are testimony to the difficulties. Bender likes to compare it to an “oil spill”: “It’s a metaphor I hope will stick. One of the pitfalls with these text-synthesis machines that can answer questions is that they are going to inject nonsensical information into our information ecosystem in a way that will be difficult to detect. It looks like an oil spill: it will be difficult to clean up. When companies talk about how they are constantly making progress and improving their accuracy, it’s like BP or Exxon saying, ‘Look how many birds we saved from the oil we poured on them.’”
“OpenAI wants to talk about the future. But I’d rather talk about how we control what we’ve built now.”
Bender says that while we talk about that impossible future, we are not paying attention to the present. “OpenAI wants to talk about how we ensure AI will be beneficial to humanity and how we govern it. But I’d rather talk about how we control what we’ve built now, and what we need to do so that it doesn’t cause problems today, instead of being distracted by what would happen if we had these autonomous agents,” she believes.
She has not lost hope that some sort of regulation will arrive, partly because of the computational effort these models require. “It takes a lot of resources to build one of these things and keep it running, which gives a little more leeway for regulation. We need regulation around transparency. OpenAI does not exactly talk about it openly. I hope this helps people understand it better.”
Science fiction is not the only future
Bender often hears that, despite holding a master’s degree in computational linguistics, she is just an angry woman complaining about technology: “I don’t get offended when people tell me that, because I know they are wrong. However, it also reveals an attitude of believing that there is a predetermined path science and technology must take us down, the path we have learned from science fiction. That is a self-defeating way of understanding what science is. Science is a group of people who expand and explore different things and then talk to each other, not people following a straight path, trying to be the first to reach the end.”
Bender has one last message for those who believe this path will be accessible and simple: “What I’m about to say may be sarcastic and simplistic, but maybe they are just waiting for the moment when these models have been fed so much data that they spontaneously decide to become conscious.” For now, that seems to be the plan.