Poland continues to investigate use of spyware
Managing the risks of generative AI. The European AI startups.
Digital Conflicts is a bi-weekly briefing on the intersections of digital culture, AI, cybersecurity, digital rights, data privacy, and tech policy with a European focus.
Brought to you with journalistic integrity by Guerre di Rete, in partnership with the University of Bologna's Centre for Digital Ethics.
New to Digital Conflicts? Subscribe for free to receive it by email every two weeks.
N.8 - 14 May 2024
Authors: Carola Frediani and Andrea Daniele Signorelli
In this issue:
- Poland continues investigation into alleged Pegasus abuse
- Managing the risks of generative AI
- The European AI startups
- In Brief
EUROPE/SPYWARE
Poland continues investigation into alleged Pegasus abuse
They reported abuses by their military police superiors and were placed under surveillance. In Poland, prosecutors investigating the use of Pegasus spyware have summoned the first 31 people believed to have been spied on to testify.
The first group includes two former military police officers, Joanna Jałocha (sub-lieutenant) and Karolina Marchlewska (corporal), writes the Polish news outlet Onet. Both have been summoned as witnesses by the prosecutor's office in its case concerning abuse of power by public officials in the use of Pegasus.
"Our lives and health were destroyed. For seven years, Joanna and I were targeted, harassed, slandered, deprived of the opportunity to serve in the military, which was our passion. And today we find out that we were under surveillance using Pegasus. This must be explained and the perpetrators punished", said Karolina Marchlewska.
The new government of Prime Minister Donald Tusk, which took office in December, has pledged to investigate the alleged misuse of Pegasus. As a result, a special parliamentary committee was set up in February to investigate the use of spyware. The following month, the committee called as its first witness Jarosław Kaczyński, leader of PiS, the party in power during the alleged abuses.
Among the first 31 alleged victims surveilled with Pegasus is Krzysztof Brejza, a member of Tusk's Civic Platform (PO) party who was targeted while leading the 2019 election campaign of PO, then the main opposition to the PiS government, Notes from Poland writes.
There are nearly 600 people in Poland who are believed to have been under surveillance with Pegasus between 2017 and 2022 under the previous government, according to the current government's Justice Minister. The minister in charge of security services added that while many legitimate targets were monitored, there were "too many cases" where Pegasus was used against people who were simply considered "inconvenient" to the previous government.
According to the Polish newspaper Gazeta Wyborcza, the spyware was acquired by Poland's Central Anticorruption Bureau (CBA) in the autumn of 2017. The newspaper and the Polish broadcaster TVP report that "the CBA bought the spy software for 33.4 million zlotys (€7.84 million) from the Polish Matic company, which in turn acquired it from the Israeli NSO Group for 25 million zlotys (€5.86 million). Matic had an Interior Ministry license for IT services and arms dealing".
According to an AP investigation in December 2021 (based on research by Citizen Lab), just before the 2019 European Parliament and Polish parliamentary elections, Pegasus was allegedly used to spy on the phone of Senator Brejza.
With the European elections approaching, it is easy to see why the issue of spyware use in Europe has become particularly heated.
US restrictions on individuals involved in the sale/development of commercial spyware
Meanwhile, "as part of U.S. efforts to counter the ongoing proliferation and misuse of commercial spyware", the U.S. State Department will impose visa restrictions on 13 individuals who have been involved in the development and sale of commercial spyware, or who are close associates of those involved. "These individuals have facilitated or derived financial benefit from the misuse of this technology, which has targeted journalists, academics, human rights defenders, dissidents and other perceived critics, and U.S. Government personnel".
Visa restrictions are part of a broader US government initiative to counter the abuse of commercial spyware and other surveillance tools.
In fact, American tech companies such as Salesforce, Microsoft, Zoom, Dell and Intel recently cut commercial ties with Sandvine, a Canadian network intelligence company. This is because the US Department of Commerce placed the company on its Entity List (a blacklist that imposes various restrictions) in February, penalizing it for providing "mass web monitoring and censorship technology" to the Egyptian government. "The designation effectively banned Sandvine from obtaining US technology", writes Bloomberg.
GENAI RISKS
Managing the risks of generative AI
NIST (the National Institute of Standards and Technology, the US agency that develops technical standards) has released a document titled Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.
It is a preliminary text, subject to change, that defines the risks created or exacerbated by the use of generative AI. The text also provides a definition of Generative AI (GAI): "The class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content".
Most notably, the text lists and defines the risks: "Importantly, some GAI risks are unknown, and are therefore difficult to properly scope or evaluate given the uncertainty about potential GAI scale, complexity, and capabilities. Other risks may be known but difficult to estimate given the wide range of GAI stakeholders, uses, inputs, and outputs. Challenges with risk estimation are aggravated by a lack of visibility into GAI training data, and the generally immature state of the science of AI measurement and safety today."
Among the risks analyzed are:
CBRN information
Lowering the barriers to accessing dangerous information about chemical, biological, radiological, or nuclear (CBRN) weapons.
Confabulation
The production of content that is incorrect or false but presented with confidence (commonly known as "hallucinations" or "fabrications"). According to the NIST document, these are a by-product of how generative AI is pre-trained, which involves predicting the next word.
"We note that the terms 'hallucination' and 'fabrication' can anthropomorphize GAI, which itself is a risk related to GAI systems as it can inappropriately attribute human characteristics to non-human entities", the authors write. Furthermore, while research suggests that confabulated content is abundant, its full extent and its downstream effects (on the processes and applications built on top of it) are still difficult to estimate.
Dangerous or violent recommendations
GAI systems may produce results or recommendations that incite, radicalize, threaten, or glorify violence. In addition, the document notes, a significant number of users talk to chatbots about mental health issues, which current systems are not equipped to address adequately, nor are they able to direct these users to the help they may need.
Data privacy
GAI systems pose numerous privacy risks. Models can leak, generate, or correctly infer sensitive information about individuals, such as biometric, health, location, or other personally identifiable information (PII). In some attacks, for example, large language models (LLMs) have revealed private or sensitive information that was included in their training data, a problem that has been described as "data memorization".
The problem is also that, as the document states, "most model developers do not disclose specific data sources (if any) on which models were trained. Unless training data is available for inspection, there is generally no way for consumers to know what kind of PII or other sensitive material may have been used to train GAI models. These practices also pose risks to compliance with existing privacy regulations".
Moreover, GAI models may be able to correctly infer personal information that was not present in the training data and that was not disclosed by the user, piecing together data from a variety of different sources.
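As a rough illustration of the kind of probe used to look for memorized training data (a minimal sketch under our own assumptions, not code from the NIST document: it relies on the Hugging Face transformers library and the public gpt2 checkpoint, and the prompt below is a placeholder, not real personal data), one can feed a causal language model a prefix and inspect whether its most likely continuation reproduces text it may have seen during training:

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # any public causal language model works the same way

def greedy_completion(prefix: str, max_new_tokens: int = 20) -> str:
    """Return the model's most likely continuation of the given prefix."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    inputs = tokenizer(prefix, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,                      # greedy decoding, no randomness
        pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad warning
    )
    # Keep only the newly generated tokens, not the prompt itself.
    new_ids = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_ids, skip_special_tokens=True)

if __name__ == "__main__":
    # Placeholder prefix: in a real extraction study the prefix would come
    # from a suspected training document, and the test would be whether the
    # completion reproduces the rest of that document verbatim.
    prefix = "Contact the office at the following e-mail address:"
    print(repr(greedy_completion(prefix)))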
Environment
The document states: "Estimates suggest that training a single GAI transformer model can emit as much carbon as 300 round-trip flights between San Francisco and New York. In a study comparing energy consumption and carbon emissions for LLM inference, generative tasks (e.g., text summarization) were found to be more energy and carbon intensive than discriminative or non-generative tasks". In short, generative AI appears to be more energy-intensive than other types of AI.
For the complete document, follow this link.
AI AND EUROPE
The European AI startups
"The numbers are crystal clear. In the decade from 2013 to 2022, 4,643 startups active in the field of artificial intelligence were founded in the United States. In China, the number reaches 1,337, while the United Kingdom, France, and Germany together barely exceed a thousand.
Investment figures are even more telling: in the past decade, the United States raised $249 billion in investment, China raised $95 billion, while the three main European nations reached only $32 billion (leaving crumbs for the others). Other unflattering figures are reported by the specialized site Sifted, according to which just three Californian universities (Stanford, USC, and the University of California) have produced a number of founders of artificial intelligence startups (175) practically equal to that of the top ten European universities (177).
In short, it's clear that Europe is not at the center of the artificial intelligence boom, especially when we consider that, according to Crunchbase data, OpenAI alone has raised more total funding than all European startups combined.
Yet, if we step out of the impossible comparison with the United States and take a closer look at Europe itself, a lively and growing landscape emerges. In 2023, European startups active in the field of generative artificial intelligence raised $1.5 billion in funding, almost three times the amount reached in the previous year".
Europe's pursuit of AI is a challenging one. The key players, the EU Commission's projects, and the other incentives available: read the full article on the Guerre di Rete website (Italian only).
RIGHT TO REPAIR
EU Parliament enhances consumer access to repair services
On Tuesday, April 23, members of the European Parliament voted in favor of the Right to Repair Directive, which aims to improve consumer access to repair services and reduce waste. The directive, proposed by the Commission in March 2023, is meant to support the Green Deal by making repair a simpler and more attractive option for consumers than buying a replacement. It also simplifies repairs, outlines manufacturers' obligations, and creates an online platform to help consumers locate repair shops and sellers of refurbished goods.
Swappie, a company that refurbishes used iPhones and sells them at a lower price than new ones, told Euractiv that the directive provides for an obligation to repair even outside the warranty.
This creates "greater opportunities to request a repair for cases not covered by warranty", such as a cracked screen, but "clearly anticipates that consumers should be able to choose to turn to any repair provider, regardless of whether it is affiliated with the original manufacturer", Swappie explained.
The company added that this provision is considered essential to promote fair competition among repair providers and to strengthen consumer trust in independent repair services.
Technology companies, and Apple in particular, have often been criticized for repair policies that are said to make life difficult for independent repairers.
IN BRIEF
AI & WORK
A survey by the Society of Authors suggests that a third of translators and a quarter of illustrators could lose their jobs due to AI - SoA
TECH IN EUROPE
The race between London and Paris to become Europe's tech capital - Sifted