
The EU’s AI Act and its Impact on Electoral Processes

10 September 2024

Brussels, Vienna, 10 September 2024: Ahead of the June 2024 European Parliament (EP) elections, the EU passed several landmark legislative acts to regulate the digital space surrounding elections, including the AI Act, the Digital Services Act (DSA), the Digital Markets Act (DMA), the European Media Freedom Act (EMFA) and the Regulation on the transparency and targeting of political advertising (TTPA), enhancing the broader framework of fundamental rights and safeguards.

In this practitioners' paper, European Partnership for Democracy (EPD) and Election-Watch.EU focus on the implications of the new rules for the integrity of electoral processes and assess how the EU intends to regulate AI systems that pose risks to elections. The authors explore which AI systems would fall within the scope of the AI Act under the high-risk category as ‘AI systems intended to be used for influencing the outcome of an election or referendum’, and assess the main risks posed to freedom of information, privacy rights, the independence and secrecy of the vote and, overall, the integrity of elections.

The findings build on the EU-wide Election Assessment Mission (EAM) of Election-Watch.EU's citizen observer network and on a European Partnership for Democracy workshop on identifying AI systems that pose risks to election integrity and related mitigation measures under the AI Act.

This paper addresses the key question of how the EU's AI Act can be implemented to protect the integrity of elections, privacy rights and freedom of expression against interference, especially malicious interference, by AI-supported actors and systems. Its purpose is to provide policy guidance to the European Commission (EC) and European legislators by proposing mitigation measures for the main risks identified.

Recommendations for the European Commission:

➔     Consider a moratorium on the use of AI systems in electoral campaigning to better understand their societal and political impact, given the rise of political forces questioning and/or undermining key democratic principles of established democracies.

➔    Draft provisions and guidelines on fundamental rights impact and risk assessments of the use of AI in electoral processes, to assess potential individual and societal harm.

➔     Elaborate on the definition of individual and societal harm in elections, given that a single vote can make a difference in an election.

➔     Provide a definition and/or examples of the ‘significant harm’ concept, as per Articles 5.1a and 5.1b (see Annex), taking into consideration societal harm and financial loss.

➔     Clarify the link between the AI Act provisions and the DSA and GDPR, and the potential added value of the AI Act.

➔     Evaluate whether AI systems could be prohibited ex post. The European Commission should also define ‘intentionality’ under Annex III 8b (see Annex) and clarify whether it stems from the producer or from the user. In considering high-risk AI systems ‘intended’ to be used to influence elections, intentions should be inferred from consequences, with a broad understanding of intention as due diligence rather than strict intentionality.

➔     The European Commission should further examine specific AI applications in light of Annex III 8b (e.g. microtargeting and ad delivery techniques) and related approaches for assessing election-related impacts.

Recommendations for Civil Society Organisations:

➔     Provide the EC with examples and case studies to inform the guidelines being drafted on prohibited and high-risk systems. In particular, consider bringing forward examples of past incidents in which AI led to real harm, to help demonstrate ‘significant harm’ under Articles 5.1a and 5.1b.

➔     Obtain more evidence on how certain potentially prohibited AI systems influence voting behaviours.

➔     Consider opportunities to feed into the debate, practice and body of knowledge around risk assessments and standards for high-risk AI systems related to elections.

➔     Develop and test good practice templates and examples of fundamental rights impact/risk assessment for the usage of AI in elections to protect and uphold the democratic process.
