The AI regulation battle
The EU AI Act is almost here. US and China regulations. Spyware and ransomware.
Digital Conflicts is a bi-weekly briefing on the intersections of digital culture, AI, cybersecurity, digital rights, data privacy, and tech policy with a European focus.
Brought to you with journalistic integrity by Guerre di Rete, in partnership with the University of Bologna's Centre for Digital Ethics.
New to Digital Conflicts? Subscribe for free to receive it by email every two weeks.
N.2 - February 7, 2024
Authors: Carola Frediani and Andrea Daniele Signorelli
Index:
- The AI Act is almost here
- The EU “AI factories”
- White House and tech lobbyists clash over AI regulation
- China, 40 AI models for public use, over 200 awaiting approval
- The social impact of ransomware
- Pro-Ukraine hackers erasing Russian research center data
- In brief
THE AI REGULATION BATTLE
EU
The AI Act is almost here
The AI Act, the EU Regulation on Artificial Intelligence, was unanimously approved by the representatives of all 27 member states (Coreper) on Friday, February 2. This approval was the decisive and most delicate step after the political agreement reached in December.
On January 24, the Belgian presidency of the EU Council presented the final draft of the text during a technical meeting. The reservations of some member states were ultimately resolved on February 2 with the adoption of the AI law by the Committee of Permanent Representatives, as reported by Euractiv. The three governments that had shown some discontent – especially France and Germany, with Italy being less vocal – gradually softened their stance towards the end, thanks in part to disagreements within their own governments.
After Coreper's approval, the text (you can find it here and below) will be examined by the MEPs (Members of the European Parliament) of the two parliamentary committees responsible for reviewing AI law proposals on behalf of the Parliament: the Committee on Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs. Finally, the text will be put to a vote of all MEPs in a plenary session of the Parliament by April, before the Council formally votes on whether to adopt it.
At this point, "most people who worked on the AI Act within Parliament are confident that the law will pass with no change," writes Politico, specifying that the Commission has reassured the more uncertain countries by promising to take into account some of the issues raised, which will be considered in the implementation of the law.
Comment from the AI Act co-rapporteur, Brando Benifei
Brando Benifei, the Italian MEP and co-rapporteur of the AI Act, attended a conference on AI organized by the University of Genoa on the same Friday the AI Act was approved. Digital Conflicts was also present, allowing us to report his remarks and answers directly.
Regarding the timeline, "the plenary vote of the Parliament could take place between March and April," said the MEP, specifying that he had written to the President of the EU Parliament, Roberta Metsola, asking to prioritize the issue in order to be able to vote in March. The idea is to speed up the timeline, which, according to Benifei, would be as follows: six months for the bans to take effect ("if approved in March-April, the bans will come into effect around October"); one year for transparency obligations; two years for conformity assessments on high-risk systems (in this case, standards that have yet to be developed will be required, explained the co-rapporteur).
Since the European elections will take place in June, the EU will propose "an early voluntary compliance procedure, with incentives, to start applying the rules," said Benifei. This is the AI Pact, not to be confused with the AI Act.
Benifei then revisited some of the prohibited uses, such as predictive policing. "Some governments did not want it to be banned, and it was the last point we agreed on. We established that individual people cannot be identified as potential criminals; for us, the presumption of innocence is essential. However, we do not ban crime analytics. Obviously, crime analytics falls under the systems categorized by the AI Act as high-risk [so it must follow the required procedures. Ed.]."
For “high-impact general-purpose AI models with systemic risk”, more stringent obligations are foreseen, such as model evaluations, systemic risk assessments, and security tests (you can find them here).
Benifei confirms that the obligations "will apply to GPT-4 and Gemini, and also to European models once they meet the requirements. However, many aspects still need to be determined by the new AI Office under the supervision of Parliament, especially regarding safety principles. The basis is a certain level of FLOPs [a measure of computational complexity. Ed.], but there will also be other criteria, such as the number of users or parameters".
Regarding open source and research, Benifei says that "the Regulation does not apply to research and development (nor to national security, which is outside its scope), and it provides a partial exemption for open source. Therefore, it does not apply to development and distribution if there is no commercial intent. In practice, the open source developer bears no responsibility. However, those who use the system for a high-risk application (the provider) will be held responsible. This protects the open source developer, but we also aim to avoid excluding open source software used by companies such as Meta".
Here is the final text of the AI Act.
“A blueprint for AI regulation”, says the Ada Lovelace Institute
Just before Friday's approval, the Ada Lovelace Institute, a prestigious British research institute founded in collaboration with cultural institutions including the Alan Turing Institute, the Royal Society, and the British Academy, commented: "The Act was created with the aim of protecting health, safety, and fundamental rights, and providing legal certainty and clarity. While not perfect, the final text represents a workable, pragmatic compromise that, if properly implemented, can help achieve these objectives. The legislation could support EU leadership by becoming an important and influential global blueprint for AI regulation”.
AI factories: the EU Commission's industrial strategy
In the meantime, the European Commission is developing a strategy on artificial intelligence to create "AI factories". "The strategy can be seen as a first step toward an industrial policy specific to Artificial Intelligence, while on the regulatory side, the EU is approaching the formal adoption of the AI Act," reports Euractiv.
At the center of the strategy are the so-called "AI factories," defined as "open ecosystems formed around European public supercomputers and bringing together key material and human resources needed for the development of generative AI models and applications". Dedicated AI supercomputers and nearby or well-connected "associated" data centers will constitute the physical infrastructure.
OPENAI AND GDPR
Italy's privacy watchdog strikes back
The Italian Privacy Authority has notified OpenAI (the company behind ChatGPT) of breaches of personal data protection regulations. "Following the temporary ban imposed on OpenAI by the Garante [the Privacy Authority or DPA] on 30 March of last year, and based on the outcome of its fact-finding activity, the Italian DPA concluded that the available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR," says the Authority.
"The authority's complaint falls within the context of the 2023 allegations," writes Wired Italy, "and therefore concerns the legal basis for the processing of personal data; 'hallucinations,' i.e., inaccurate responses produced by the chatbot that can spread misleading information; transparency; and minors. The letter was sent to Ireland, where OpenAI's legal representative is based. (...) While the emergency measure [from March 2023] was closed once the company adopted the necessary countermeasures (...) the analysis by the Supervisor's offices has not stopped."
OpenAI will have 30 days to submit its defense in relation to the alleged infringements.
USA
White House and tech lobbyists clash over AI regulation
The Biden administration's activism on AI is drawing resistance from the technology lobby, GOP lawmakers, and conservative activists. “A group tied to the conservative Koch network has peppered the Commerce Department with information requests and a lawsuit. And tech lobbyists have indicated they could mount a legal challenge once the Commerce Department begins exercising its newfound AI authority at the end of January”, reports Politico.
These groups oppose the new requirements for private companies introduced in the October executive order, arguing that they would stifle innovation in the sector. The same argument was heard many times in relation to the European AI Act.
US Secretary of Commerce Gina Raimondo also confirmed that the “Commerce Department will soon implement another requirement of the October executive order, requiring cloud computing providers such as Amazon, Microsoft, or Google to inform the government when a foreign company uses their resources to train a large language model. Foreign projects must be reported when they cross the same initial threshold of 100 septillion FLOPS”, writes Wired US.
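For scale, the "100 septillion FLOPS" figure cited above is 1e26 operations. A minimal sketch of that conversion (the variable names and the sample training-run figure are hypothetical, chosen only for illustration):

```python
# "100 septillion" operations, as cited by Wired US: 100 * 10^24 = 10^26.
THRESHOLD_FLOPS = 100 * 10**24  # equals 1e26

# Hypothetical example: total compute of a foreign training run (not a real figure).
example_run_flops = 3.2e25

# Reporting would be triggered only at or above the threshold.
must_report = example_run_flops >= THRESHOLD_FLOPS
print(must_report)  # False: 3.2e25 is below the 1e26 reporting threshold
```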
Read: Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order
CHINA AND AI
40 models for public use, over 200 awaiting approval
According to Reuters, China has given the green light to more than 40 artificial intelligence models for public use in the first six months since authorities began the approval process. Last summer, the China Electronic Standardization Institute was tasked with implementing a national standard for large language models (LLMs).
The latest round came a few days ago, when the authorities gave the OK to a total of 14 large language models for public use, reported Securities Times, a Chinese state-owned newspaper. Xiaomi and 01.AI are among the beneficiaries.
Approved AI applications include a resume selection tool from Chinese online recruitment platform Zhaopin and a chatbot from e-commerce services company Beijing Zhidemai Tech, reports The South China Morning Post.
Robin Li Yanhong, founder and CEO of tech giant Baidu, said that as of October there were 238 large language models in the country, most of which had yet to be approved by the government. Baidu's version of ChatGPT, Ernie Bot, has 100 million users, according to the company's CTO.
Read: The Promise and Perils of China's Regulation of Artificial Intelligence - a recent paper by Angela Huyue Zhang -The University of Hong Kong - Faculty of Law
CYBERWARFARE
Pro-Ukraine hackers erase data from a Russian center
The Main Intelligence Directorate of Ukraine's Ministry of Defense claims that pro-Ukrainian hacktivists breached the Russian Center for Space Hydrometeorology, aka "Planeta" (планета), and wiped 2 petabytes of data, writes Bleeping Computer.
Planeta is a state research center that uses satellite and ground data to make forecasts on weather, climate, and natural disasters. The attackers claimed to have destroyed 280 servers and two petabytes of information, including meteorological and satellite data, as well as "unique research". "The work of the supercomputers – each worth $350,000 – has been paralyzed and cannot be fully restored," the attackers claimed, reports The Record.
According to Ukrainian intelligence, the attack on Planeta – which also provides services to the military – was carried out by a group of volunteers known as BO Team (apparently not well-known until now).
RANSOMWARE
Psychological and social impact of ransomware on vulnerable groups
Ransomware attacks have a significant psychological impact, and their economic impact is not fully understood. Current estimates of the economic damage caused by attacks likely do not include the cost of long-term and indirect financial damages. These findings come from a report titled "The Scourge of Ransomware" by the Royal United Services Institute (RUSI), a British think tank. While the report focuses primarily on the UK scenario, many of the analyses and conclusions are likely valid for other countries.
The authors state: "While reputation damage resulting from a ransomware attack is a valid concern for some companies, especially those whose clients expect a higher level of privacy (such as legal or financial services clients), the danger of reputational damage is often overestimated by victims. Similarly, the feared impact of exfiltrated data causing further harm through financial fraud or other crimes has not been confirmed by respondents. On the other hand, already vulnerable groups, such as subsidy recipients or healthcare patients, are disproportionately affected by ransomware damages”.
In conclusion, the authors write that government responses to ransomware attacks must focus on preventing societal harm.
IN BRIEF
SPYWARE
New US restrictions on individuals involved in the misuse of spyware
The US announced new global visa restrictions on individuals who have been involved in the misuse of commercial spyware, in a move that could affect major US allies including Israel, India, Jordan and Hungary. The Guardian
Google’s report on the rise of commercial surveillance vendors
In the meantime, Google published a report saying that governments should take more aggressive steps to combat the growth of commercial spyware (Cyberscoop). Google’s Threat Analysis Group said it is currently tracking roughly 40 commercial spyware vendors.
CYBER (OR META) CRIME
Metacrimes in the Metaverse
Interpol, the organization that enables police forces from different countries to collaborate against international crime, released a white paper on crimes in the metaverse. The document identifies current and potential “metacrimes," such as grooming, radicalization, identity theft, violation of private virtual spaces, assault, harassment, and robbery. The paper includes a comprehensive list of crimes along with the challenges and difficulties for investigators. These include the lack of standardization and interoperability, the extension of virtual worlds across multiple jurisdictions, and the difficulty in extracting data.
POPE AND AI
Pope Francis also talks about AI
"So, some questions arise spontaneously: how to protect the professionalism and dignity of workers in the field of communication and information, along with that of users worldwide?"
Pope Francis' message on artificial intelligence.
AI
Growing electricity demand
According to the forecasts of the International Energy Agency (IEA), the global demand for electricity from data centers, cryptocurrencies, and artificial intelligence could more than double in the next three years, adding the equivalent of Germany's entire energy needs, reports Bloomberg.
CYBER
Cyber Transparency Value Chain - a report by the Cyberpeace Institute and The Hague Centre for Strategic Studies (HCSS)
SOCIAL MEDIA
“It Was Very Hard for Me to Keep Doing That Job”: Understanding Troll Farm’s Working in the Arab World - a paper on troll farms