In a rare story these days about non-generative artificial intelligence, the French National Assembly has approved the use of AI to assist with video surveillance of the 2024 Olympic Games in Paris. The move came despite opposition from human rights groups, who say the technology is a potential violation of civil liberties and paves the way for invasive, algorithm-based video surveillance across Europe.
According to The Register, the French government has adopted Article 7 of the pending law on the 2024 Olympic and Paralympic Games, allowing automated analysis of surveillance video from fixed cameras and drones.
The system is claimed to detect certain suspicious events in public places, such as abnormal behavior, predetermined events, and surges of people.
While the AI surveillance plan may yet be challenged before France's constitutional court, the country hopes to become the first in the European Union to deploy such a system.
France appears to have ignored the warnings of 38 civil society organizations that expressed their concerns about the technology in an open letter. They argue the proposed surveillance measures violate international human rights law: they contradict the principles of necessity and proportionality, and they pose unacceptable risks to fundamental rights such as the right to privacy, freedom of assembly and association, and the right to non-discrimination.
The letter warns that introducing such an artificial intelligence system would set a precedent for unjustified and disproportionate surveillance in public places.
“If the purpose of algorithm-controlled cameras is to detect specific suspicious events in public places, they will necessarily capture and analyze the physiological features and behavior of people present in these spaces, such as their body position, gait, movements, gestures or appearance,” the open letter says. “Isolating individuals from the background, without which it would be impossible to achieve the system’s goal, would amount to ‘unique identification.’”
As is often the case with AI-driven surveillance, there are concerns about discrimination. “The use of algorithmic systems to fight crime has led to over-policing, structural discrimination in the criminal justice system and the over-criminalization of racial, ethnic and religious minorities,” the groups add.
Mher Hakobyan, Amnesty International’s advisor on artificial intelligence regulation, said the decision risks permanently turning France into a dystopian surveillance state.
France’s data protection regulator, the National Commission on Informatics and Liberty (CNIL), supported the bill on the condition that no biometric data be processed, but privacy advocates doubt this is possible.
Daniel Leifer, a policy adviser at the digital rights organization Access Now, said: “You can do two things: object detection or human behavior analysis; the latter is biometric data processing.”