Created by nonprofit AI research company OpenAI (whose backers include Tesla CEO Elon Musk and Microsoft), the text-generating system can write page-long responses to prompts, mimicking everything from fantasy prose to fake celebrity news stories and homework assignments. It builds on an earlier text-generating system the company released last year.
Researchers have used AI to generate text for decades with varying levels of success. In recent years, the technology has gotten particularly good. OpenAI's initial goal was for the system to come up with the next word in a sentence by considering the words that came before it. To make this possible, it was trained on 8 million web pages.
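To make that next-word objective concrete, here is a minimal sketch of the prediction loop described above. It assumes the smaller released model can be loaded through the Hugging Face transformers library and PyTorch; the article names neither, so the library, the "gpt2" model name, and the generation settings are illustrative assumptions rather than OpenAI's own code.

# A minimal sketch, not OpenAI's code: load a small GPT-2-style model and
# predict the next word, assuming the Hugging Face "transformers" library
# and its public "gpt2" weights (names not taken from this article).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Legolas and Gimli advanced on the orcs, raising their weapons"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    # Score every candidate next token given the words that came before it.
    next_token_logits = model(input_ids).logits[0, -1]
    next_token = torch.argmax(next_token_logits).item()
print(tokenizer.decode([next_token]))  # the single most likely next word

# Repeating that step word by word yields a full continuation of the prompt:
continuation = model.generate(input_ids, max_length=60, do_sample=True, top_k=50)
print(tokenizer.decode(continuation[0], skip_special_tokens=True))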
A handful of resulting demos that OpenAI posted online last week show just how convincing (and, at times, creepy) computer-written text can be. In many ways, they sound like the written version of deepfakes, which are persuasive — but fake — video and audio files created with AI.
For instance, OpenAI researchers fed the following Lord of the Rings-style prompt to the system: "Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry."
The computer composed this appropriately violent addition: The orcs' response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. "You are in good hands, dwarf," said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night.
"It's quite uncanny how it behaves," OpenAI policy director Jack Clark told CNN Business.
While the technology could be useful for a range of everyday applications — such as helping writers pen crisper copy or improving voice assistants in smart speakers — it could also be used for potentially dangerous purposes, like creating convincing but false news stories and social-media posts.
OpenAI typically releases its research projects publicly. But in a blog post about the text generator, the researchers said they would not make it publicly available due to "concerns about malicious applications of the technology." Instead, the company released a technical paper and a smaller AI model — essentially a less capable version of the same text generator — that other researchers can use.
The company's decision to keep it from public use is the latest indication of growing unease, both within the tech community and about it, over building cutting-edge technology, AI in particular, without setting limits on how it can be deployed.
Amazon and Microsoft in particular have voiced their support for legislation to regulate how facial recognition technology can and can't be used. And Amazon investors and employees (as well as dozens of civil rights groups) have urged the company to stop selling its facial-recognition technology, Rekognition, to government agencies due to concerns it could be used to violate people's rights.
And a couple of examples posted by OpenAI hint at how its text-generation system could be put to ill use.
For instance, one prompt read as follows: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown."
The AI response was a completely plausible-sounding news story that included details about where the theft occurred ("on the downtown train line"), where the nuclear material was from ("the University of Cincinnati's Research Triangle Park nuclear research site"), and a fictitious statement from a nonexistent US Energy Secretary.
OpenAI's decision to keep the AI to itself makes sense to Ryan Calo, a professor at the University of Washington and co-director of the school's Tech Policy Lab, especially in light of a fake face-generating website that began circulating in mid-February. Called thispersondoesnotexist.com, the site produces strikingly realistic pictures of fictional people using a machine-learning technique known as a generative adversarial network (GAN), in which two neural networks are pitted against each other: one generates images while the other tries to tell them apart from real ones.
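For readers curious about that adversarial setup, the following is a schematic toy sketch of a GAN training loop in PyTorch. It is not the StyleGAN system behind thispersondoesnotexist.com; the layer sizes, learning rates, and stand-in data are illustrative assumptions only.

# Schematic toy GAN: a generator and a discriminator trained against each other.
# Illustrative sizes and random stand-in data, not the real face-generating model.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed, illustrative dimensions

# Generator: turns random noise into a fake "image" vector.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())

# Discriminator: scores how likely an input is to be a real image.
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, image_dim)   # stand-in for a batch of real images
    fake = G(torch.randn(32, latent_dim))

    # Discriminator learns to label real images 1 and generated ones 0.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator learns to fool the discriminator into outputting 1 for fakes.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()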
Combining text that reads as though it could have been written by a person with a realistic picture of a fake person could lead to credible-seeming bots invading discussions on social networks or leaving convincing reviews on sites like Yelp, he said.
"The idea here is you can use some of these tools in order to skew reality in your favor," Calo said. "And I think that's what OpenAI worries about."
Not everyone is convinced that the company's decision was the right one, however.
"I roll my eyes at that, frankly," said Christopher Manning, a Stanford professor and director of the Stanford Artificial Intelligence Lab.
Manning said that while we shouldn't be naïve about the dangers of artificial intelligence, there are already plenty of similar language models publicly available. He sees OpenAI's system, while better than previous text generators, as simply the latest in a parade of similar efforts that came out in 2018 from OpenAI itself, Google, and others.
"Yes, it could be used to produce fake Yelp reviews, but it's not that expensive to pay people in third-world countries to produce fake Yelp reviews," he said.