Recently, researchers at the University of Helsinki, Finland, developed a computer system capable of modeling what a person is visualizing by monitoring their brain signals. The technology works through a process somewhat like imagination, translating the recorded data into entirely new images that have never been seen before.
The method, called neuroadaptive generative modeling, is based on a novel brain-computer interface (BCI) and allows bidirectional communication between the human brain and an external device. To test it, the researchers recruited 31 volunteers, who viewed hundreds of different portraits of people created by artificial intelligence (AI) while their brain activity was monitored.
The scientists asked the participants to focus on certain facial features in the portraits, such as faces that looked older or that were smiling. The brain signals recorded during this viewing were fed to a neural network, which learned to match the participants' brain responses to the images they were attending to.
As a result, the neural network was able to infer which types of faces the participants were thinking about, and this inference served as the basis for the computer model's output. At the end of the tests, the images generated by the system were compared against the facial features the participants had focused on, and they matched with 83% accuracy, confirming the effectiveness of the method.
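The core idea of this feedback loop can be illustrated with a toy sketch. This is not the Helsinki team's actual pipeline: the 2-D "latent space", the simulated brain response, and the weighting scheme are all hypothetical stand-ins, chosen only to show how attention-correlated signals recorded while viewing generated images could be averaged into an estimate of the attended feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "portrait" is a point in a 2-D latent space,
# where dimension 0 encodes apparent age and dimension 1 encodes smiling.
n_images = 300
latents = rng.uniform(-1.0, 1.0, size=(n_images, 2))

# Simulated brain response: stronger when the viewed face matches the
# attended feature (here "older", i.e. a high value on dimension 0),
# plus measurement noise standing in for EEG variability.
attended_dim = 0
response = latents[:, attended_dim] + rng.normal(0.0, 0.3, n_images)

# Neuroadaptive step: weight each image's latent code by the positive
# part of the brain response and average, estimating the attended region
# of the latent space, from which a new face could then be generated.
weights = np.clip(response, 0.0, None)
estimate = (weights[:, None] * latents).sum(axis=0) / weights.sum()

# The estimate leans strongly toward "older" (positive on dimension 0)
# and stays near neutral on the unattended "smiling" dimension.
print(estimate)
```

In a real system, the latent codes would come from a generative face model and the responses from recorded brain signals, but the averaging step above captures the principle of steering generation with neural feedback.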