Digital Conflicts is a bi-weekly briefing on the intersections of digital culture, AI, cybersecurity, digital rights, data privacy, and tech policy with a European focus.
Brought to you with journalistic integrity by Guerre di Rete, in partnership with the University of Bologna's Centre for Digital Ethics.
New to Digital Conflicts? Subscribe for free to receive it by email every two weeks.
N.6 - 10 April 2024
Authors: Carola Frediani and Andrea Daniele Signorelli
Index:
The role of AI systems in the Gaza War
France, the Olympics and the all-seeing electronic eye
AI-generated garbage books
Transparent methods to build reader trust
AI AND WAR
The role of AI systems in the Gaza War
Following the Hamas attack on 7 October and the start of the war in Gaza, the Israel Defense Forces (IDF) accelerated the identification of targets. To meet the demand for new targets to strike – according to a lengthy report by The Guardian, largely based on an investigation published by two Israeli media outlets critical of the current government, Local Call and +972 Magazine – the IDF allegedly relied on Lavender, a system that generates “a database of individuals judged to have the characteristics” of Hamas or Palestinian Islamic Jihad (PIJ) militants.
According to the investigation, Lavender played a central role in the bombings of Palestinians, especially in the early stages of the war. Lavender’s influence on military operations was such that the military treated the AI system’s decisions "as if it were a human decision", according to +972 Magazine. The investigation was authored by Yuval Abraham, who previously co-authored the documentary "No Other Land" (which won an award at the Berlin International Film Festival).
"During the first weeks of the war, the army almost completely relied on Lavender, which clocked as many as 37,000 Palestinians as suspected militants — and their homes — for possible air strikes", +972 Magazine writes, claiming that the army authorized officers to adopt target lists identified by Lavender without any obligation to thoroughly review the reasons why the system made those choices or examine the raw intelligence data on which they were based.
The Lavender system complements another artificial intelligence system, known in the media as "Habsora/Gospel," which has been discussed previously. A key difference between the two lies in how they define targets: while Gospel identifies buildings and structures from which militants allegedly operate, Lavender identifies individuals and places them on a kill list. The investigation also claims there is an additional system used to identify and strike targets once they have returned home.
In response to the investigation, the IDF issued a press release stating that its operations comply with the rules of proportionality under international law. Lavender, according to the IDF, is only a database used "to cross-reference intelligence sources" and is not "a list of confirmed military operatives eligible to attack". The IDF also stated that it does not use "an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist", and it outright rejected the claim that there is any policy to kill tens of thousands of people in their homes.
However, if confirmed in detail, this investigation raises enormous and troubling questions about the role that AI systems are playing or could play in warfare. These systems tend to be black boxes in how they are designed and operate. Especially in conflict scenarios, they become even more opaque, with no external controls or audits. We know that, on many occasions, claims about the accuracy of these systems have been refuted as soon as they have been subjected to independent verification.
All of this has caught the attention of the online tech community. Some have called for immediate attention to the use of AI in the military. Others demand information about the IDF’s vetting process for these systems. Brian Merchant, on the other hand, comments that "AI is not terrifying because it's too powerful, but because it lets operatives defer responsibility to the system, and lets leaders use it to justify nearly any level of violence they already desired to undertake".
"AI as a pretext for deadly violence", linguist and critic of AI hype Emily Bender also commented. Claudio Agosti, from the non-profit AI Forensics, also warns against falling into the narrative that "the AI is doing the job, not your fault". Likewise, for Meredith Whittaker (Signal), "we MUST ensure that AI is not used to facilitate computational escape from culpability".§
The use of AI technology, according to an analysis by the Washington Post, is “still only a small part of what has troubled human rights activists about Israel’s conduct in Gaza. But it points to a darker future”.
Mona Shtaya, a non-resident fellow at the Tahrir Institute for Middle East Policy, told The Verge that "the Lavender system is an extension of Israel’s use of surveillance technologies on Palestinians in both the Gaza Strip and the West Bank". Shtaya, who is based in the West Bank, says these tools are particularly troubling in light of reports that Israeli defense startups are hoping to export their battle-tested technology abroad.
The Stop Killer Robots coalition (which calls for an international law on autonomous weapon systems and for maintaining human control over the use of force) stated that the "reports of Israeli use of target recommendation systems in the Gaza strip (are) deeply concerning from a legal, moral and humanitarian perspective. Although the Lavender system, like the Habsora/Gospel system, is not an autonomous weapon, both raise serious concerns over the increasing use of artificial intelligence in conflict, automation bias, digital dehumanization, and loss of human control in the use of force".
AI AND SURVEILLANCE
France, the Olympics and the all-seeing electronic eye
The Olympic Torch will arrive in Marseille on 8 May. From there, it will begin to cross the entire French territory (including overseas territories) to arrive in Saint-Denis, the Paris suburb where the Olympic Village is located, on 26 July. On the same day, the Paris 2024 Opening Ceremony will take place on the banks of the Seine: a ceremony that has always celebrated brotherhood and peace between peoples.
But there is little hope that peace will take center stage this time. On the contrary, the prevailing feeling is that the two Olympic weeks will unfold in a climate of tension that is already beginning to permeate the French capital.
The tension is exacerbated by France's demographic make-up. France is the European nation with the largest number of Muslim inhabitants (5.7 million, over 8% of the population) and is also home to Europe's largest Jewish community (750,000 people). "Certain groups could use (the ceremony) to send a message", Lukas Aubin, a sports geopolitics expert, told Politico. "It’s going to be a very tense moment".
Nor will it be the only tense moment, considering that Paris will be the center of the world for those weeks, with an estimated 3.5 million visitors expected.
Authorities have announced that Paris will be subject to "unprecedented" security measures. During the Olympics, 30,000 police officers will be deployed in the French capital, joined by 20,000 soldiers and as many private security guards. In some areas of the city, it will be necessary to show a QR code in order to move freely.
The city will also be kept under control by the surveillance tool in which the institutions are placing their greatest hopes: artificial intelligence. Specifically, Paris will be constantly monitored by hundreds of electronic eyes, equipped with image recognition software capable of alerting law enforcement in case of anything out of the ordinary.
These software programs – developed by companies such as Videtics, ChapsVision, and Wintics – have been trained to detect eight types of events: vehicles traveling in the opposite direction of traffic, the presence of people in prohibited areas, abandoned packages, sudden movements of crowds, the use of firearms, overcrowding, fires, and bodies lying on the ground.
Thanks in part to pressure from civil rights groups, facial recognition will not be used. Because of its invasion of privacy, frequent errors, and potential for abuse, facial recognition is considered a red line that should not be crossed. Indeed, it was supposed to be banned by the nascent AI Act (the European law on artificial intelligence) before exceptions were introduced, including for the "prevention of terrorist attacks".
France was among those requesting these exceptions, raising concerns that crossing the "red line" of facial recognition is only a matter of time: "Software that enables AI-powered video surveillance can easily enable facial recognition. It's simply a configuration choice", explained Katia Roux of Amnesty International France.
France seems unwilling to deprive itself of this technology, especially as it appears to enjoy popular support. A recent survey – reported by the Washington Post – found that 74% of French people support the use of AI-based surveillance tools even on the streets, a figure that rises to 89% in the case of stadiums.
Once the network of smart cameras is in place, it is highly unlikely that the French authorities will dismantle it after the big Olympic event is over. Activists fear that, regardless of whether facial recognition is used, surveillance tools with an excessive impact on civil liberties will be irreversibly deployed. In a country like France, which has suffered numerous serious terrorist attacks and knows the chaos unleashed by the Gilets Jaunes (Yellow Vests), the temptation to exploit the Olympics and citizens' fears to introduce extensive digital surveillance may be too strong to resist.
AI AND GOOGLE BOOKS
AI-generated garbage books
After ads on Amazon, academic papers, and online articles, AI-generated texts are now filling Google Books, the tool Google developed to allow text searches within digitized books, both old and currently on the market. This was noticed by the 404Media outlet, which conducted a series of searches using a very simple method: it searched Google Books for the phrase "As of my last knowledge update", which is typical of answers generated by ChatGPT. The search returned dozens of books containing that phrase, most of which were generated by AI.
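For readers who want to try something similar, here is a minimal sketch of how such a phrase search could be reproduced programmatically via Google's public Books API. This is purely illustrative and is not 404Media's actual method (they used the regular Google Books interface); the function name and parameters are our own.

import requests

PHRASE = "As of my last knowledge update"
API_URL = "https://www.googleapis.com/books/v1/volumes"

def find_suspect_books(phrase, max_results=20):
    # Illustrative sketch: query the public Google Books API for an exact
    # phrase that often betrays ChatGPT-generated text, and return basic
    # metadata for the matching volumes.
    params = {"q": f'"{phrase}"', "maxResults": max_results}
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()
    books = []
    for item in response.json().get("items", []):
        info = item.get("volumeInfo", {})
        books.append({
            "title": info.get("title"),
            "authors": info.get("authors", []),
            "published": info.get("publishedDate"),
        })
    return books

if __name__ == "__main__":
    for book in find_suspect_books(PHRASE):
        print(book)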
In addition to flooding the internet with garbage, one side effect of this phenomenon – according to 404Media – is that it could affect the Google Ngram Viewer, a research tool used to track the frequency of words or phrases over time in the books Google has scanned, published between 1500 and 2019. In practice, it is a tool that academics use to study culture and human language.
"If AI-generated books start informing Ngram viewer results in the future, the meaning of these results will change entirely. Either they will be unreliable for teaching us about human-made culture, or they say something perhaps more bleak: that human-made culture is being replaced by AI-generated content", the author writes.
JOURNALISM
Transparent methods to build reader trust
The Reuters Institute has published an interview with Julia Angwin (formerly of ProPublica and now founder of Proof News) about journalism in this period of intense technological change: "I wanted to put the methods – and not the topic – first", says Angwin, "because the methods are increasingly important in an era where no one trusts journalism. It's just not enough to say ‘I'm a tech journalist, and you should trust what I say’".
Making methods transparent is something we need, the renowned investigative journalist continues, "in an era where AI is creating all kinds of plausible text, where we have all sorts of misinformation purveyors, and where some newsrooms are writing press releases as news articles. This information landscape is very polluted, so focusing on the methods is a way to build trust with the audience. (...) the default position for smart people these days is not to trust anything unless they can find a way to trust it”.
Angwin also argues something I strongly agree with: that journalism should focus less on witnessing and more on analysis.
In her founding letter, she writes: “Awash in information, people need help making sense of all this witnessing and storytelling. Are the stories on their news feeds actually representative of what is happening in the world? Are they outliers being blown out of proportion? Analysis is particularly important in today’s world where power is so often cloaked in opaque and complex systems that require hard work to unravel”.
Regarding the lawsuit filed by The New York Times against Microsoft and OpenAI (creator of ChatGPT) for alleged copyright infringement, Angwin says: "I really appreciate the Times’ lawsuit. It’s a move that will benefit everyone in journalism, if they set a precedent and there’s not a settlement. Other outlets just sold off their archive for money. If you think about the benefits these companies are going to get from years and years worth of journalistic labour and effort, I don’t think it’s worth it. So I’m happy the Times is doing that, because somebody needed to take a stand. Back in September I wrote a guest essay in the New York Times about this. I'm really concerned about the commons being overgrazed by rapacious tech companies, stealing everything that’s in the public sphere. I’m worried about what this will actually mean – that there’ll be no incentive to put anything into the public sphere”.
Unless, of course, they reach a settlement.