“A threat to democracy”: what we know about the Italian database scandal
Influence Operations Using ChatGPT. The EU Trusted Flaggers.
Digital Conflicts is a bi-weekly briefing on the intersections of digital culture, AI, cybersecurity, digital rights, data privacy, and tech policy with a European focus.
Brought to you with journalistic integrity by Guerre di Rete, in partnership with the University of Bologna's Centre for Digital Ethics.
New to Digital Conflicts? Subscribe for free to receive it by email every two weeks.
N.16 - 29 October 2024
Authors: Carola Frediani and Andrea Daniele Signorelli
In this issue:
What we know about the Italian mega hack
Influence Operations Using ChatGPT
EU Trusted Flaggers
CYBER ESPIONAGE
“A threat to democracy”: what we know about the Italian database scandal
The Milan prosecutor's office used very strong language, calling it "a threat to the security of democracy in Italy". It's hard to imagine that investigators are exaggerating the seriousness of what happened, considering that – according to the ongoing investigation, which has so far led to four arrests and 60 suspects – the most sensitive and confidential databases of the Italian State were hacked for profit, extortion, and political and business influence.
This scandal has exposed the vulnerability of the Italian state's information systems, which, as the public prosecutor Francesco De Tommasi has noted in official documents, allow "the indiscriminate circulation of sensitive, confidential and secret information" aimed at keeping both citizens and institutions under control.
For these purposes, some companies (which we will discuss shortly) are accused of having exfiltrated information from government databases such as SDI (Sistema di Indagine, a system containing citizens' criminal records, investigations, and alerts used by law enforcement), Serpico (a system that processes tax data to detect potential tax evasion), and at least three other databases containing records of suspicious financial transactions, personal data, and pension information.
According to prosecutors, the goal of these data thefts – which involved hundreds of thousands of accesses to documents, including some from Italian intelligence – was threefold: the information could be extracted for sale to customers, used for blackmail, or used to influence high-level political and business appointments or harm rivals.
Among the suspects are the two partners of Equalize, a Milan-based investigative firm that allegedly organized the data thefts. Equalize is 95% owned by Enrico Pazzali, president of Fiera di Milano Foundation (majority owner of Fiera di Milano, Italy's leading trade show and convention operator), and 5% owned by former police officer Carmine Gallo, known for leading major investigations (including the Gucci murder case).
According to the investigation, while Gallo sought to profit from the sale of confidential information, Pazzali's primary goal was political, allegedly attempting to damage rivals of politically connected figures or influence government appointments.
According to the prosecutors, Equalize accessed government databases with the assistance of law enforcement personnel who directly stole information. Other accesses were made through Trojans that targeted databases using information provided by IT technicians who maintain these systems. In some cases, computers and smartphones belonging to individuals under surveillance were also hacked.
Other companies involved in the investigation, and currently under seizure, include Mercury Advisor (private investigations), Develop and Go, and others specializing in intercepting and hacking devices.
Among those monitored are Senate President Ignazio La Russa (along with his son Geronimo), former Eni president and current AC Milan president Paolo Scaroni, as well as bankers, journalists and celebrities. Equalize's clients – those accused of having paid for access to confidential information – include executives from the energy company Erg and from Barilla (the Italian pasta giant). Among those under investigation is Leonardo Maria Del Vecchio, son of Luxottica's founder.
The leaders of what the Milan prosecutor's office suspects is a "criminal association aimed at unauthorized access to computer systems" reportedly enjoy "high-level support in various sectors, including organized crime and even foreign intelligence services". According to documents filed by Anti-Mafia Prosecutor Francesco De Tommasi, "the suspects often boasted of their ability to intervene in investigations and legal proceedings".
The prosecutor also states that the group linked to Equalize operated a "cluster structure" in which each "member" and "collaborator" maintained "contacts within law enforcement and other public administration offices" in order to "illicitly obtain data". Milan prosecutors will also investigate whether sensitive data and information were sold abroad and, if so, where they ended up.
AI AND INTELLIGENCE
Influence Operations Using ChatGPT and Similar Tools
One of the recurring themes regarding the potential risks of generative AI is that it could facilitate the influence and disinformation campaigns that have been active for years and have been thoroughly documented and dissected by platforms like Facebook, which publish detailed reports on such operations. The concern now is that the ease of generating content (text, images, video) could increase the scale and effectiveness of these campaigns. But is this really the case? Is it already happening?
OpenAI, the company behind the best-known generative AI tool, ChatGPT, recently released a report on the deceptive use of its tools: "Since the beginning of the year," the report states, "we've disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models. To understand the ways in which threat actors attempt to use AI, we've analyzed the activity we've disrupted, identifying an initial set of trends that we believe can inform debate on how AI fits into the broader threat landscape".
So, what does this mean? There are indeed state and commercial entities that use tools like ChatGPT to create comments for platforms like X (formerly Twitter) and Facebook – used, for example, to support the image of the government of Azerbaijan. AI is also used to create fake profiles and images on social media, write biographies, analyze posts and comments, and draft and correct responses in multiple languages, which are then posted on platforms such as X or Facebook.
In one case, OpenAI shut down a series of ChatGPT accounts linked to an Iranian threat actor (previously identified by Microsoft and Meta) that were generating long articles and short comments in English and Spanish about the U.S. election.
But have they had a real impact? In reality, their impact appears to be relatively small. To measure this, the researchers use a scale, the Breakout Scale, which assesses whether an influence operation remains on a single platform or spreads across multiple platforms, including traditional media and political debate (a critical factor in assessing the success of an operation, which in turn raises the question of the accountability of news outlets and politicians). It also measures whether the influence remains within a single community or spreads across multiple communities.
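To make the scale more concrete, here is a minimal Python sketch of how its logic might be expressed, based on a simplified reading of the two dimensions described above. The field names and category boundaries are our illustrative assumptions, not the scale's official definitions, and the top category (which involves a policy response or a risk of violence) is left out.

```python
from dataclasses import dataclass

# Illustrative sketch of the Breakout Scale's logic. The fields and
# thresholds below are simplified assumptions for this example, not
# the official definitions of the scale's categories.

@dataclass
class InfluenceOperation:
    name: str
    platforms: int                     # distinct platforms where content appeared
    communities: int                   # distinct audience communities reached
    picked_up_by_media: bool = False   # crossed over into traditional media
    amplified_by_elites: bool = False  # repeated by politicians or celebrities

def breakout_category(op: InfluenceOperation) -> int:
    """Map an operation to an approximate Breakout Scale category (1-5)."""
    if op.amplified_by_elites:
        return 5  # amplified by high-profile figures
    if op.picked_up_by_media:
        return 4  # broke out of social media into the press
    if op.platforms > 1 and op.communities > 1:
        return 3  # spread across both platforms and communities
    if op.platforms > 1 or op.communities > 1:
        return 2  # spread along one dimension only
    return 1      # confined to one platform and one community

# Example: a single-platform comment campaign that never spread.
op = InfluenceOperation("comment farm", platforms=1, communities=1)
print(breakout_category(op))  # -> 1
```

The point of the exercise is that "breakout" measures spread, not volume: an operation that posts thousands of AI-generated comments on one platform to one community still sits at the bottom of the scale, consistent with the report's finding that the impact of these operations has so far been small.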
In any case, according to Ben Nimmo, co-author of the report, "AI companies, and thus AI investigators, sit in a unique niche in the information space—midway between upstream providers of things like emails, and downstream distribution platforms like social media. Final point: none of the operations we identified so far looked like it achieved viral breakout or audience engagement because of its use of AI. Social media are a tough environment".
A critical question arises: why is social media a tough environment for influence operations? Perhaps these bad actors haven't made a serious investment yet, and we're still in the testing phase. Or perhaps, after years of public hearings in various parliaments, some of these social media platforms, despite recent staff cuts, have taken steps to monitor and curb these abuses.
Answering this question is key to understanding how "bad actors" (mainly, but not exclusively, state actors) can be controlled in the AI era, where content distribution remains in the hands of social platforms. In general, such campaigns have a greater impact when they amplify existing problems rather than creating phenomena from scratch. But even in these cases, assessing their real impact remains the most difficult and elusive question.
DIGITAL SERVICES ACT
The EU Trusted Flaggers
Germany and Romania are the latest countries to appoint content-flagging organizations under the EU's Digital Services Act (DSA), the bloc's law on platform responsibility. These organizations will help report illegal content on online platforms.
Under the DSA, regulators in each EU member state must appoint so-called “trusted flaggers” to point out content that is illegal or that violates intermediaries' terms of service.
The number of countries that have appointed trusted flaggers under the DSA has risen to six: Austria, Denmark, Finland, Sweden, Germany and Romania. A few days ago, Germany's Federal Network Agency (Bundesnetzagentur) granted the country's first trusted flagger status to the REspect! reporting centre, which focuses on identifying hate speech, terrorist content and other violent content published in German, English or Arabic. Around the same time, Romania's National Authority for Management and Regulation in Communications (ANCOM) granted its first trusted flagger certificate to Save the Children.
Trusted flaggers, the EU Commission website explains, must publish easily understandable and detailed annual reports. These must include information on the notices they submitted, the types of illegal content reported, and the actions taken by the online platforms.
The DSA became fully applicable last February, and the EU Commission has already opened probes into TikTok, Facebook, Instagram, X and AliExpress.