In the past, the actions machines performed depended entirely on commands written by humans, but new developments in the field keep proving that such limitations are becoming a thing of the past. A team of researchers from Zhejiang University, China, this week reported remarkable advances in robotics achieved with neural networks and artificial intelligence, presenting a robot dog that taught itself to withstand outside disturbances.
To illustrate the complexity involved, consider that a child does not learn to walk just by following movement instructions; what determines success is constant trial and error. Slippery floors, tangled carpets, uneven ground, everything hinders the process, but, step by step, the child gets there. That analogy captures the team's big leap.
Jueying, the name of the device, thanks to virtual training across countless situations, manages to recover from a fall even when kicked or pushed, regardless of the terrain it crosses or the cause of the imbalance, and without thousands or even millions of lines of code having to be written in advance.
Zhibin Li, one of those responsible for the project, explains that the computational models used were based on reward systems. Eight skills make up the project; once refined, they were integrated with one another, enabling a dialogue of experiences applied to the situations the robot dog faces and generating a kind of brain.
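The idea of reward-driven learning that selects among separate skills can be sketched in miniature. The snippet below is purely illustrative and not the team's actual code: the skill names, the three toy states, and the reward function are all assumptions. It uses a tabular epsilon-greedy learner that, through trial and error, figures out which skill earns reward in which situation.

```python
import random

# Hypothetical sketch: each "skill" is assumed to be a pre-trained behavior,
# and a simple reward-driven learner picks which skill to invoke per state.
SKILLS = ["trot", "turn", "balance_recovery", "fall_recovery",
          "stand_up", "pace", "steer", "stop"]  # eight illustrative skills

def reward(state, skill):
    """Toy reward signal: +1 when the chosen skill fits the situation."""
    appropriate = {"fallen": "fall_recovery",
                   "tilting": "balance_recovery",
                   "upright": "trot"}
    return 1.0 if skill == appropriate.get(state) else 0.0

def train_selector(episodes=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy trial and error: estimate each skill's value per state."""
    rng = random.Random(seed)
    states = ("fallen", "tilting", "upright")
    q = {s: {a: 0.0 for a in SKILLS} for s in states}
    counts = {s: {a: 0 for a in SKILLS} for s in states}
    for _ in range(episodes):
        state = rng.choice(states)
        if rng.random() < epsilon:          # explore a random skill
            skill = rng.choice(SKILLS)
        else:                               # exploit the best-known skill
            skill = max(q[state], key=q[state].get)
        r = reward(state, skill)
        counts[state][skill] += 1
        # incremental mean update of the value estimate
        q[state][skill] += (r - q[state][skill]) / counts[state][skill]
    return q

q = train_selector()
policy = {s: max(q[s], key=q[s].get) for s in q}
print(policy)
```

After enough attempts, the learned policy maps each toy state to the skill that was rewarded there, which is the reward-system intuition in its simplest form; the real system learns continuous motor policies with neural networks rather than a lookup table.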
“The AI approach is very different in that it captures hundreds of thousands or even millions of attempts,” says Li. “So, in the simulated environment, I can create all possible scenarios. I can create different environments or different configurations. For example, the robot can start in a different posture, like lying on the floor, standing, falling, and so on.”
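Li's point about creating "all possible scenarios" in simulation can be sketched as randomizing the starting conditions of each training episode. The snippet below is a minimal illustration under assumed parameters (the posture list follows Li's examples; the friction and push-force ranges are invented for the sketch, not taken from the project).

```python
import random

# Illustrative only: each simulated episode starts from a randomly sampled
# posture and environment, so the policy experiences many scenarios.
POSTURES = ["lying_on_floor", "standing", "falling"]

def sample_initial_state(rng):
    """Draw one randomized starting configuration for a training episode."""
    return {
        "posture": rng.choice(POSTURES),
        "floor_friction": rng.uniform(0.2, 1.0),  # slippery through grippy
        "push_force_newtons": rng.uniform(0.0, 50.0),  # external disturbance
    }

rng = random.Random(42)
episodes = [sample_initial_state(rng) for _ in range(5)]
for ep in episodes:
    print(ep)
```

Varying the start state like this is what lets a simulated robot rack up the hundreds of thousands of attempts Li describes without ever repeating the exact same situation.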