How Russia Built Its Digital Gulag
Digital Conflicts is a bi-weekly briefing on the intersections of digital culture, AI, cybersecurity, digital rights, data privacy, and tech policy with a European focus.
Brought to you with journalistic integrity by Guerre di Rete, in partnership with the University of Bologna's Centre for Digital Ethics.
New to Digital Conflicts? Subscribe for free to receive it by email every two weeks.
N.12 - 16 July 2024
Author: Andrea Daniele Signorelli
In this issue:
The Russian Digital Gulag
The European Digital Identity
The European Union against Elon Musk and X
How Russia Built Its Digital Gulag
Andrei Chernyshov had just entered the Moscow metro, on his way to a May 2022 protest against the war in Ukraine. Within minutes, the 51-year-old – who had participated in another demonstration just a week before – was detained by law enforcement officials, who took him to a police station without further explanation.
Chernyshov wasn’t followed or even recognized by any police officers. Instead, he was identified and reported by Sfera, a facial recognition system used throughout the Moscow metro network.
According to OVD-Info, a Russian human rights group, more than 140 people were arrested in 2022 alone as a result of alerts from Sfera, which captures images of metro passengers as they pass through the turnstiles and compares them with those in its database. When the system finds a match, it alerts the police, who can intervene immediately.
Launched in September 2020, Sfera was introduced as a tool to quickly identify known criminals and thieves. In an authoritarian state like Russia, few were surprised when – as the digital rights group Roskomsvoboda explained to Wired USA – law enforcement began uploading photos of opposition leaders, prominent activists, and, over time, many people who had simply participated in protests unfavorable to the government. A tool promoted as a deterrent to crime quickly became a digital battering ram of repression, used to arrest or simply intimidate opponents of the Putin regime.
The facial recognition system, active across the entire Moscow metro network, is just one part of a vast and ever-expanding surveillance network known as SafeCity. The origins of this system, which we will discuss shortly, date back to the early 2010s. It was during the major protests of 2011-12, which were largely organized online, that the Kremlin decided to take seriously the threat posed by digital communications.
Soon, laws were enacted to block undesirable websites, while other laws required telephone companies and internet service providers to store phone calls and messages sent over their networks and to share that information with the police upon request. In 2014, new and stricter "anti-extremism" laws came into force, targeting social media users based on the content they posted, shared, or even just liked.
During this period, algorithms capable of quickly identifying undesirable content on online platforms began to be deployed, while Facebook, Instagram, Twitter and the Russian social network VKontakte were used by law enforcement to collect photographs of activists and undesirables. These images are likely part of the databases used for facial recognition today.
In addition, in 2016 NTech launched FindFace, an app that allowed anyone to identify a face photographed with their smartphone by matching it against images scraped from VKontakte. The disturbing app, downloaded by more than a million people within a few months and no longer active, proved above all to be a useful tool for training the algorithm now used in facial recognition systems across Russia.
Building on these early experiments, the city of Moscow announced the launch of SafeCity in 2017, initially equipped with 160,000 cameras (a figure that has since grown to around 250,000), more than 3,000 of them with facial recognition capabilities (a number that has since nearly doubled). SafeCity, of which the Ministry of Transport-run Sfera is only one component, was launched in time for the 2018 FIFA World Cup in Russia and was further expanded in 2020, ostensibly to protect public health by quickly identifying lockdown violators during the Covid-19 pandemic.
Surveillance cameras, however, are not the only element. According to the Moscow municipality's website, SafeCity collects data from 169 different digital sources, using voice recognition devices, data collection systems based on the geolocation of mobile phones, automatic license plate recognition, and much more.
The technological infrastructure was designed – according to Wired – by Russian companies such as the aforementioned NTech, Tevian, Rostec, and VisionLabs, but components for SafeCity were also supplied by US giants such as Nvidia and Intel (which have stated that they stopped exporting to Russia after the invasion of Ukraine), South Korean companies such as Samsung (which also stopped exporting), and Chinese companies such as Hikvision.
The work of Russian companies involved in SafeCity has been richly rewarded: in 2022 their business volume grew by 30-35 percent over the previous year, partly due – as Kommersant reports – to trade agreements with Middle Eastern, Southeast Asian and South American countries. Those companies can also count on numerous subsidies and tax exemptions guaranteed by the "Strategic Plan for Artificial Intelligence".
Meanwhile, SafeCity continues to grow and expand. In 2020, the government announced its intention to invest $1.3 billion to create similar systems across Russia, starting with cities like St. Petersburg, Novosibirsk and Kazan. Another, more recent, goal is to centralize video feeds from all cameras across Russian territory in order to monitor people of interest at the national level, partly to identify people trying to avoid conscription.
In addition, Rostec is reportedly developing an algorithm capable of analyzing video footage and information circulating in the media and on social networks to predict the formation of anti-government demonstrations. "Russian authorities should halt the expansion of their irresponsible and unregulated facial recognition systems," said Hugh Williamson, Europe and Central Asia director at Human Rights Watch, in a report. "Privacy concerns outweigh the purported security benefits. Fundamental rights must also be protected from the Kremlin's technological abuses."
These words are met with total indifference by the Kremlin: dissidents who in the past tried to appeal to the European Court of Human Rights are now faced with Russia's decision to withdraw from this institution, while the law known as "Experiments with Artificial Intelligence" allows the use of these technologies without complying with personal data regulations. Opposition lawmakers who have requested more information about the operation, databases and effectiveness of these systems (which have already led to the arrest of people mistaken for others) have encountered a wall of secrecy.
Thus, this opaque network of control based on artificial intelligence algorithms continues to restrict the operating space of opponents and dissidents, both online and in the physical world, creating what is increasingly being called a "digital gulag."
The European Digital Identity
By 2030, every European Union citizen should have access to a digital identity: a smartphone wallet for storing ID cards, driving licenses, and health cards. The launch is planned for 2026, but many obstacles remain, as evidenced by a call for new projects that may overlap with ongoing pilots ending in 2025. Then there is the issue of security: the more personal documents are stored on smartphones, the more important it becomes to ensure they are adequately protected. According to experts, the technical standards adopted so far are not sufficient, and the risk is that the European digital identity could become a nightmare for our privacy.
The European Union against Elon Musk and X
Elon Musk's social media site X has been accused by the European Union of violating its rules on online content. The EU's tech watchdog highlighted the potential for "verified" blue tick accounts to deceive users, as anyone can pay for verification, leading to abuse by malicious actors. The investigation, under the Digital Services Act (DSA), could result in X being fined up to 6% of its global revenue and forced to change its operations in the EU. Musk and X CEO Linda Yaccarino defended the platform, arguing that they’ve implemented a democratized verification system. The regulator found that the blue tick system misled users about the authenticity of accounts. X can now defend itself or make changes to comply with the DSA. EU Commissioner Thierry Breton emphasized the need for trustworthy verification, while the Commission denied Musk's censorship claims, stating that its goal is to ensure a safe and fair online environment.