Today, there are systems that can fool artificial intelligence, and a team at MIT has developed one of them. The researchers built a text-based system that managed to deceive Google's artificial intelligence.
Artificial intelligence has the potential to be one of the biggest breakthroughs in human history and is still under active development, yet there are already examples of systems that can deceive it. Targets have included the image-recognition AI developed by Google and the system developed by Jigsaw to detect harmful comments.
Researchers at the MIT Computer Science and Artificial Intelligence Laboratory have developed a system called TextFooler. With it, artificial intelligence that relies on natural language processing, such as Alexa and Siri, can be deceived.
Deceiving artificial intelligence:
TextFooler is a system designed to attack natural language processing models in order to expose their vulnerabilities. To do this, it adjusts the input sentence by swapping words without changing the sentence's grammatical structure or its meaning. The system then feeds the modified text to the natural language processing model to see how its classification holds up.
Of course, it is quite difficult to change words without altering a text's meaning. To manage this, TextFooler first identifies the words that carry the most weight in the model's classification, and then looks for synonyms that fit naturally into the sentence.
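The idea can be sketched in a few lines of Python. This is only a toy illustration of the word-substitution approach described above, not TextFooler's actual implementation: the tiny keyword classifier, the hand-made synonym table, and the deletion-based importance score are all stand-ins for the real components.

```python
def classify(words):
    """Toy stand-in for an NLP model: counts positive keywords."""
    positive = {"great", "excellent", "wonderful", "fine"}
    return "positive" if any(w in positive for w in words) else "negative"

# Hand-made synonym table standing in for TextFooler's
# embedding-based candidate search (illustrative only).
SYNONYMS = {
    "great": ["fine", "grand"],
    "excellent": ["superb"],
    "movie": ["film"],
}

def word_importance(words):
    """Rank word positions by whether deleting the word flips the label."""
    base = classify(words)
    scored = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        scored.append((classify(reduced) != base, i))
    # Positions whose removal changes the prediction come first.
    return [i for flips, i in sorted(scored, reverse=True)]

def attack(sentence):
    """Greedily swap important words for synonyms until the label flips."""
    words = sentence.split()
    original = classify(words)
    for i in word_importance(words):
        for candidate in SYNONYMS.get(words[i], []):
            trial = words[:i] + [candidate] + words[i + 1:]
            if classify(trial) != original:
                return " ".join(trial)
    return None  # no adversarial example found
```

Calling `attack("the movie was great")` first tries the synonym "fine", which the toy classifier still rates as positive, and then "grand", which flips the prediction while leaving the sentence grammatical, which is the kind of minimal, meaning-preserving edit TextFooler searches for.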
The researchers who developed the system stated that they successfully deceived three existing models, including BERT, an open-source language model developed by Google.
“If these tools are vulnerable to malicious attacks, the consequences can be disastrous,” said Di Jin, an author of the TextFooler research. “These tools need efficient defense approaches in order to protect themselves.” The MIT team believes that TextFooler can be used on text-based models such as those for spam filtering and for detecting hate speech or sensitive political discourse.