"Sexy banana." "The Haunting of Flesh Show."
These are just a few of the many quirky things Janelle Shane, an optics research scientist in Louisville, Colorado, has trained artificial intelligence to come up with in her free time. She collects lists, such as Halloween costume ideas and horror-movie titles, and uses them to teach an algorithm to generate its own examples.
Although her experiment is just for fun, it's a reminder that humans are still a key part of the AI training process, and it highlights the limitations and potential biases that come along with that.
Shane, 34, posts the wacky creations on her blog, AI Weirdness, and has a growing following online among the AI community, including Google's AI head Jeff Dean.
Companies already use artificial intelligence in many ways, from understanding what you're saying to Alexa when you talk to your smart speaker to detecting credit-card fraud.
Increasingly, companies are using AI for more creative endeavors, such as a series of Toyota car ads made by working with IBM's Watson AI system. Meanwhile, Burger King released a series of funny-sounding ads with phrases it claimed were whipped up by AI, including "The Whopper lives in a bun mansion, just like you," as part of a marketing stunt to poke fun at the technology.
For Shane's AI creations, she uses a type of machine-learning algorithm known as a neural network, which is modeled after the way neurons work in a brain.
She gives the neural network a long list of what she wants it to imitate, such as names of snakes. It pores over this training data to figure out, for example, the order of letters and words that typically show up in a snake name. The algorithm compares its predictions about what a snake name might look like with the data Shane gave it, and uses that feedback to improve over time, getting closer to plausible snake names.
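To make that process concrete, here is a minimal sketch of the idea, assuming PyTorch is installed; the four-name training list, the model sizes, and the step count are toy stand-ins for illustration, not Shane's actual setup.

```python
# A minimal character-level text generator in the spirit of Shane's
# experiments. It learns to predict the next character in a list of
# names, then samples new names one character at a time.
import torch
import torch.nn as nn

names = ["garter snake", "corn snake", "king cobra", "rat snake"]  # toy data
text = "\n".join(names) + "\n"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharModel(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, 16)
        self.rnn = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

model = CharModel(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
data = torch.tensor([stoi[c] for c in text])
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)

# Train the model to predict each next character from the ones before it.
for step in range(300):
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample new "snake names" one character at a time.
out, state, idx = [], None, torch.tensor([[stoi["\n"]]])
for _ in range(40):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[0, -1], dim=-1)
    idx = torch.multinomial(probs, 1).unsqueeze(0)
    out.append(itos[idx.item()])
print("".join(out))
```

With a list this small, the samples mostly remix fragments of the training names, which is exactly where the "Texan farter snake"-style near-misses come from.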
Results can be chuckle-worthy -- take "Texan farter snake" -- but also show many of AI's strengths and weaknesses. A neural network can quickly learn about a simple concept, but it is dependent on the data that we humans feed it, for better or worse.
For the past two years, Shane has trained a neural network to come up with Halloween costume suggestions. It picked up on the word "sexy" both times because the term was mentioned repeatedly in the lists it was trained on, including "sexy nurse."
The algorithm's interpretation of good sexy costumes, including "sexy lamp" and "sexy pumpkin pirate," might not be what you would expect to see at a costume store.
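A toy illustration of why that happens: if one word dominates a training list, any model fitted to that list will reproduce the skew. The five-costume list below is made up for illustration.

```python
# Count word frequencies in a (made-up) costume list to show the skew
# a model would pick up on during training.
from collections import Counter

costumes = ["sexy nurse", "sexy pirate", "sexy pumpkin", "vampire", "ghost"]
word_counts = Counter(word for costume in costumes for word in costume.split())
print(word_counts.most_common(3))
# [('sexy', 3), ('nurse', 1), ('pirate', 1)] -- the imbalance the model learns
```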
"It's not solving the question of, 'What is the best answer?'" she said. "It's solving the question of, 'What did the humans in my data set do?'"
It also highlights the importance of keeping an eye out for biases in data and AI -- something Shane agrees is tricky because "bias is everywhere." Biased training data is harmless in a hobby like Shane's, but it can have serious implications in the real world: AI-assisted decisions about whom to offer a mortgage or a job could discriminate against certain groups of people if the data used to train the algorithm is skewed.
Tech companies have faced criticism in recent years for bias issues related to their AI. In 2015, Google apologized after a black software developer tweeted that Google Photos labeled images of him and his friend as gorillas. More recently, Google, Facebook, Microsoft and others have said that they're working to spot and remove bias in AI by setting standards for how to develop the technology and rolling out tools to measure an algorithm's bias.
To get a sense for how well a neural network could write tech and business headlines, I gave Shane over 8,000 CNN Business headlines published over the past year. She used them to train her algorithm to come up with other suggestions.
The results reflect the reality of tech and business in 2018 with some weird twists. There were imaginary stock-market moves ("Premarket stocks surge on report of Philadelphia Starbucks Starbucks Starbucks"), big-name companies' nonsense business dealings ("Facebook is buying a big big deal"), and plenty of how-to pieces ("How to buy a nightmare").
Arjun Chandrasekaran, a graduate student at Georgia Tech who studies AI and humor, likens Shane's work to an AI form of Mad Libs. It highlights an important point about current AI capabilities, he said: the humor comes from the mistakes the neural network makes while attempting a real task, such as generating plausible snake names, rather than from results that are meant to be funny. Researchers are studying that kind of computational humor, he points out, but figuring out what makes something funny is genuinely hard.
Shane's hand-picked examples also show that, even as AI rapidly improves and fears rise that the technology will supplant jobs, it still has a lot to learn.
"These kinds of algorithms that today we're calling artificial intelligence are so simple and limited and hyper-focused on one task," she said. "They're nothing like these science fiction AIs that react to the world more like humans and can hold complex thoughts and react to new situations and things. We're miles and miles away from that."
This was particularly evident when Shane attempted to get a neural network to come up with its own jokes.
"How many engineers does it take to change a light bulb?" it asked.
The AI-generated answer? "A star, an alligator and because they are bees."
It's very clear this neural network still has a lot of work to do before it can take its act on the road.