How DeepMind thinks it can make chatbots safer

Some technologists hope that one day we will develop a superintelligent AI system that people will be able to have conversations with. Ask it a question, and it will offer an answer that sounds like something composed by a human expert. You could use it to ask for medical advice, or to help plan a holiday. Well, that’s the idea, at least. 

In reality, we’re still a long way away from that. Even the most sophisticated systems of today are pretty dumb. I once got Meta’s AI chatbot BlenderBot to tell me that a prominent Dutch politician was a terrorist. In experiments where AI-powered chatbots were used to offer medical advice, they told pretend patients to kill themselves. Doesn’t fill you with a lot of optimism, does it? 

That’s why AI labs are working hard to make their conversational AIs safer and more helpful before turning them loose in the real world. I just published a story about Alphabet-owned AI lab DeepMind’s latest effort: a new chatbot called Sparrow.

DeepMind’s new trick for making a good AI-powered chatbot was twofold: have humans tell it how to behave, and require it to back up its claims using Google search. Human participants were then asked to evaluate how plausible the AI system’s answers were. The idea is to keep training the AI using dialogue between humans and machines. 
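To make the loop concrete, here is a minimal sketch (not DeepMind’s actual code) of the feedback cycle the story describes: the model drafts an answer plus a supporting search snippet, a human rates how plausible and rule-abiding it looks, and the ratings are stored as training signal. The names `generate_answer`, `search_evidence`, and `human_rating` are hypothetical placeholders.

```python
import random

# Behaviour rules the human raters check answers against (illustrative only).
RULES = ("Do not give medical advice", "Do not make threatening statements")

def generate_answer(question: str) -> str:
    # Placeholder for the chatbot's draft answer; a real system would query
    # a large language model here.
    return f"Draft answer to: {question}"

def search_evidence(question: str) -> str:
    # Placeholder for the supporting web-search snippet the model must cite.
    return f"Search snippet about: {question}"

def human_rating(answer: str, evidence: str, rules=RULES) -> int:
    # Stand-in for a human judging how plausible the answer looks given the
    # evidence and the rules; here we simulate it with a random 1-5 score.
    return random.randint(1, 5)

def collect_feedback(questions):
    """Build (question, answer, evidence, score) records to train on later."""
    records = []
    for q in questions:
        answer = generate_answer(q)
        evidence = search_evidence(q)
        score = human_rating(answer, evidence)
        records.append({"question": q, "answer": answer,
                        "evidence": evidence, "score": score})
    return records

if __name__ == "__main__":
    print(collect_feedback(["How do volcanoes form?"]))
```

In the real system, records like these would feed back into further training, so the model gradually learns which kinds of answers people judge plausible and well supported.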

In reporting the story, I spoke to Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab.

She told me that one of the biggest hurdles in safely deploying conversational AI systems is their brittleness: they perform brilliantly until they are taken into unfamiliar territory, where they behave unpredictably.

“It is also a difficult problem to solve because any two people might disagree on whether a conversation is inappropriate. And even if we agree that something is appropriate right now, this may change over time, or rely on shared context that can be subjective,” Hooker says. 

Despite that, DeepMind’s findings underline that AI safety is not just a technical fix. You need humans in the loop. 
