Imagine being able to chat with an artificial intelligence (AI) about anything. What if you could discuss last night's game, get relationship advice, or have a deep debate with the virtual assistant on your kitchen counter? Large language models—AIs capable of this level of communication—are advancing quickly enough to make this our new reality. But can we trust what artificial agents say?
There are many reasons to embrace conversational AI. The potential uses in health, education, research, and commercial spaces are mind-boggling. Virtual characters can't replace human contact, but mental health research suggests that having something to talk to, whether real or not, can help people.
But the biggest problem is that we trust them more than we should. A large body of research in human-robot interaction shows that people respond to social gestures, reveal personal information, and will even shift their beliefs or behaviour for an artificial agent. This kind of social trust makes us vulnerable to emotional persuasion and requires careful design, as well as rules, to ensure that these systems aren't used against people.
A chatbot may, for instance, tell your child to touch a live electrical plug with a coin, as has actually happened. The newest language models are trained on vast amounts of text found on the internet, including harmful and misleading content. As it becomes more difficult to predict a language model's responses, the risk of unintended consequences grows. For example, if people ask their home assistant how to deal with a medical emergency, the wrong kinds of answers can be harmful.
We need to ask who creates and owns large language models, and whom this harms or benefits. Personal information that people reveal in conversation can be used for and against them. We need to ensure that this technology is designed and used responsibly. After all, humanity deserves protection, too.