
Social media monitoring of Austria’s general election campaign

23 Jan 2025

Vienna, Berlin, 23 January 2025: Election-Watch.EU (wahlbeobachtung.org), together with other researchers, monitored social media during the 2024 Austrian electoral campaign. The study examines toxicity, hate speech, and extremism on Austrian online and social media platforms, as well as on Telegram channels, in the run-up to the elections. Findings are presented in graphs for better understanding, and the report also includes a disinformation case study.

Final Report: “From Hashtags to Votes” – Social Media Monitoring of the Austrian General Elections 2024

Key findings:

  • Controversial or emotionally charged topics such as migration often provoke polarised reactions and provide fertile ground for hate speech, a pattern consistent with broader findings in social media research.
  • Offensive content, particularly content classified as hate speech, toxic, or extremist, receives significantly more views and reactions than non-offensive material.
  • Disinformation surrounding alleged election fraud via postal voting highlights the role of alternative media platforms, particularly AUF1, in spreading unverified claims.

Recommendations:

1. The European Commission (EC) must actively fulfil its DSA oversight responsibilities for VLOPs and VLOSEs, while the Austrian government, Digital Services Coordinator (DSC), and civil society organisations (CSOs) should advocate for platforms to publish algorithmic impact assessments and risk mitigation reports. These should include measures to down-rank offensive content as part of their systemic risk mitigation strategies.

2. The EC and DSC should establish fully independent third-party audits of algorithms to enhance the detection and prevention of bias. As mandated by the DSA, platforms must contract audit firms and assess and mitigate systemic risks, including those arising from the algorithmic amplification of harmful content, to ensure compliance and transparency.

3. The DSC and CSOs must prioritise strengthening digital literacy initiatives, particularly targeting younger demographics on platforms such as Instagram, Telegram, and TikTok, aligning these efforts with users’ digital habits. This is in line with the DSA’s emphasis on supporting public awareness campaigns to counter disinformation and harmful content.

4. The DSC should formally notify platforms of the need to enhance their content moderation and monitoring mechanisms, especially for high-risk channels (e.g., AUF1 and OliverJanich) identified as sources of hate speech. The DSC should also require regular reporting on compliance measures to ensure accountability.

5. The EC and DSC should collaborate with platforms to integrate advanced multilingual AI models, as encouraged by the DSA. Particular attention should be paid to training these models to recognise and address content in different dialects, argots, jargon, and slang, to improve moderation accuracy.

6. The EC must ensure researchers’ access to VLOPs’ and VLOSEs’ data, in compliance with DSA Article 40. This includes introducing standardised guidelines for platforms to facilitate data access for systemic risk research and enforcing penalties for noncompliance with transparency and data-sharing obligations.

7. The EC and DSC should empower CSOs to contribute to systemic risk assessments by providing them with the necessary tools, training, and resources to monitor offensive content effectively and report their findings to the DSC and platforms.
