It's possible to reverse-engineer AI chatbots to spout nonsense, smut or sensitive information


Sep 20, 2019, 12:34pm UTC
Anonymous $RAvQk0gPh1

https://www.theregister.co.uk/2019/09/20/reverse_engineer_an_ai_chatbot/

Machine-learning chatbot systems can be exploited to control what they say, according to boffins from Michigan State University and TAL AI Lab.

"There exists a dark side of these models – due to the vulnerability of neural networks, a neural dialogue model can be manipulated by users to say what they want, which brings in concerns about the security of practical chatbot services," the researchers wrote in a paper (PDF) published on arXiv.
