Microsoft announced on Tuesday (September 1) a new tool called Video Authenticator, capable of identifying manipulated videos known as deepfakes. The tool analyzes each frame of a video and generates a manipulation score, a percentage indicating the likelihood that the media has been altered.
The service is part of Microsoft's Defending Democracy Program and was developed by the company's R&D teams, which have a partnership with the AI Foundation, using public data from FaceForensics++. The purpose of the tool is to defend democracy against threats fueled by disinformation. The announcement came just ahead of the United States presidential election, which takes place on November 3, but the intention is to keep improving the tool for long-term use.
Video Authenticator can display a real-time confidence percentage for each frame of a video, detecting subtle editing artifacts, such as color fading and grayscale elements, "which cannot be detected by the human eye."
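Microsoft has not published the internals of its detector, but the idea of emitting one confidence value per frame can be illustrated with a toy heuristic. The sketch below is purely hypothetical: it scores each frame by how "washed out" its colors are, a stand-in for the fading and grayscale cues the article mentions, and is in no way the actual model.

```python
import numpy as np

def frame_scores(frames):
    """Toy per-frame manipulation score (illustrative only, not
    Microsoft's method): frames whose colors look faded or near-
    grayscale receive a higher suspicion score between 0 and 1."""
    scores = []
    for f in frames:
        # saturation proxy: per-pixel spread between max and min channel
        sat = f.max(axis=-1) - f.min(axis=-1)
        # low average saturation -> higher "suspicion" score
        score = 1.0 - sat.mean() / 255.0
        scores.append(round(float(score), 3))
    return scores

# synthetic 4x4 RGB frames: one strongly colored, one near-grayscale
colorful = np.zeros((4, 4, 3), dtype=np.uint8)
colorful[..., 0] = 255                          # pure red frame
grayish = np.full((4, 4, 3), 128, dtype=np.uint8)  # flat gray frame

print(frame_scores([colorful, grayish]))  # → [0.0, 1.0]
```

A real detector would of course use a trained model rather than a color heuristic, but the output shape is the same: one score per frame, which a UI can render as a running confidence percentage.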
Microsoft acknowledges that deepfake creation methods are growing more sophisticated and that detection methods still have failure rates, so it expects to keep improving the technology and, in the long run, to "look for stronger methods to maintain and certify the authenticity" of online publications.
"There are few tools today to help assure readers that the media they are viewing came from a reliable source and has not been changed," the company said in a statement. One of the new technologies is a browser extension that checks certificates and matches hashes to tell the reader whether the content is authentic or has been altered.
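The hash-matching step the extension performs can be sketched in a few lines. This is an assumption about the general technique, not Microsoft's actual code: the reader's side recomputes a cryptographic hash of the media and compares it with the hash the producer published.

```python
import hashlib

def verify_media(media_bytes: bytes, published_hash: str) -> bool:
    """Recompute the media's SHA-256 hash and compare it with the
    hash the producer published; any mismatch means the bytes were
    altered somewhere between publisher and reader."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest == published_hash

# hypothetical media payload and its published hash
original = b"frame data of the original video"
published = hashlib.sha256(original).hexdigest()

print(verify_media(original, published))         # → True (untouched)
print(verify_media(original + b"!", published))  # → False (altered)
```

Because even a single flipped byte changes the hash completely, this check detects tampering but cannot say *what* was changed; that is why it is paired with certificates that vouch for who published the hash.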
Another system will allow content producers to add hashes and digital certificates to their media, acting as a digital watermark stored in the metadata.
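The producer-side half of that scheme can also be sketched. In this hypothetical example, an HMAC with a demo key stands in for a real certificate-backed signature; the hash and signature are stored as metadata traveling with the media, and a checker recomputes both to validate it.

```python
import hashlib
import hmac

# stands in for the private key behind a producer's digital certificate
PRODUCER_KEY = b"demo-signing-key"

def watermark(media_bytes: bytes) -> dict:
    """Build metadata carrying a hash of the media and a signature
    over that hash, mimicking the digital-watermark idea."""
    media_hash = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(PRODUCER_KEY, media_hash.encode(),
                         hashlib.sha256).hexdigest()
    return {"hash": media_hash, "signature": signature}

def check(media_bytes: bytes, metadata: dict) -> bool:
    """Recompute the watermark and compare signatures in constant time."""
    expected = watermark(media_bytes)
    return hmac.compare_digest(expected["signature"], metadata["signature"])

video = b"...video bytes..."
meta = watermark(video)
print(check(video, meta))         # → True  (metadata matches media)
print(check(video + b"x", meta))  # → False (media was modified)
```

In a production system the signature would come from an asymmetric key pair so readers could verify it with the producer's public certificate without holding any secret; the symmetric HMAC here just keeps the sketch self-contained.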