Defining hate speech and disinformation in a complex and dynamic digital landscape

Hate speech and disinformation are evolving rapidly across digital (as well as offline) spaces. This short article explores why clear, operational definitions are crucial, and how the ECLIPSE project addresses this challenge.

Hate speech and disinformation are not static phenomena. They are highly dynamic, mutating in response to political contexts, technological advancements, and cultural practices, and they often blur into one another. To respond effectively, policymakers and practitioners require shared, operational definitions that reflect these complexities. This article presents key insights from ECLIPSE Deliverable D2.1, which lays the conceptual and methodological foundation for the project’s work as a whole.

Why definitions matter

Efforts to prevent and combat hate speech and disinformation often stumble over a fundamental problem: there is no single, universally accepted definition of either phenomenon. Legal frameworks [1], platform policies, academic research, and public discourse frequently use the same terms to describe different forms of harm. Treating these different scenarios uniformly complicates enforcement and undermines the development of effective technological tools for addressing them.

Deliverable D2.1 addresses this challenge by synthesising legal, sociological, criminological, and technological perspectives to propose operational definitions that are both context-sensitive and practically usable. Rather than seeking rigid or universal formulas, the report emphasises definitions that can adapt across jurisdictions, languages, and digital environments.

Hate speech and disinformation as overlapping phenomena

Although hate speech and disinformation are – of course – conceptually distinct, they are deeply interconnected [2]. Disinformation is often used to justify or amplify hateful narratives, while hate speech frequently relies on distorted or selective representations of reality. In practice, the boundary between the two is shaped by context, intent, and impact rather than by content alone.

The report highlights how “selective truths”, humour, irony, and satire can simultaneously mislead audiences and reinforce hostile stereotypes. This overlap makes classification difficult, particularly when harmful content does not rely on explicit slurs or “falsehoods” but instead operates subtly, through implication, insinuation, or emotional appeal.

From memes to emojis: evolving forms of harm

One of the most significant insights of ECLIPSE’s recent publication on definitions and methodologies (Deliverable D2.1) is that contemporary hate speech and disinformation are increasingly multimodal. Images, memes, GIFs, emojis, and hybrid visual-textual formats play a central role in how harmful narratives are created and circulated. These forms often rely on cultural (or subcultural) knowledge, allowing messages to remain opaque to outsiders while being clearly understood by their intended audiences [3].

Memes, for example, can compress complex ideological messages into easily shareable formats, blending humour with hostility. Emojis can function as coded substitutes for words, helping users evade moderation while normalising discriminatory or – very often – violent ideas. Such practices challenge both human moderation and automated detection systems, which are often designed primarily for text-based analysis.
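To make this concrete, here is a minimal, purely illustrative Python sketch (the blocklist, messages, and emoji substitution are hypothetical placeholders, not material from Deliverable D2.1, and this is not how ECLIPSE performs detection): a simple keyword rule catches an explicit phrasing but misses the same message once the flagged word is swapped for an emoji the intended audience still understands.

```python
# Illustrative only (not the ECLIPSE approach): a naive keyword rule
# misses a message once the flagged word is replaced by an emoji that
# the intended audience still reads the same way.

# Hypothetical blocklist for a text-only moderation rule; "<slur>" is a
# placeholder rather than a real term.
BLOCKED_TERMS = {"<slur>"}

def naive_keyword_filter(message: str) -> bool:
    """Return True if the message contains a blocked term verbatim."""
    tokens = message.lower().split()
    return any(term in tokens for term in BLOCKED_TERMS)

explicit = "they are <slur> and should leave"
coded = "they are 🐍🐍 and should leave"  # emoji stands in for the blocked word

print(naive_keyword_filter(explicit))  # True  - caught by the text rule
print(naive_keyword_filter(coded))     # False - coded variant slips through
```

Real coded usage is of course far subtler than this caricature, which is precisely why the deliverable calls for multimodal, context-aware methods rather than purely text-based rules.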

Methodological challenges and technological implications

Detecting hate speech and disinformation is methodologically demanding. Existing approaches range from manual moderation to machine learning, deep learning, and Natural Language Processing (NLP). However, limited datasets, rapidly evolving language, and cultural variations reduce the reliability of purely automated systems [4]. At the same time, Artificial Intelligence-based moderation raises ethical concerns related to bias, transparency, privacy, and freedom of expression.
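As a purely illustrative sketch of the kind of text-based NLP baseline referred to above (not the ECLIPSE pipeline; the toy dataset, labels, and example messages below are invented for demonstration), the following Python snippet trains a simple TF-IDF and logistic regression classifier and hints at why surface-level models miss implicit or coded content.

```python
# A minimal sketch of a text-only NLP classifier of the kind discussed above.
# Illustrative only: the tiny dataset and labels are invented, and this is
# not the ECLIPSE system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, tiny training set: 1 = harmful, 0 = benign.
texts = [
    "they do not belong here and should be driven out",   # harmful
    "that group is ruining this country",                  # harmful
    "looking forward to the neighbourhood festival",       # benign
    "great match yesterday, well played by both teams",    # benign
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear classifier: a common baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Predictions are driven by surface-level word overlap, so implicit, ironic,
# or emoji-coded messages with few shared tokens are easily missed.
print(model.predict(["they should be driven out"]))             # likely flagged
print(model.predict(["so 'welcoming' of them, as always 🙃"]))   # likely missed
```

Even this caricature exposes the core limitation: the model only sees word overlap, so irony, insinuation, imagery, and emoji-coded phrasing fall largely outside what it can represent, alongside the dataset, language, and bias issues noted above.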

Deliverable D2.1 argues for context-aware, multimodal, and ethically grounded approaches that combine technological innovation with human expertise. It emphasises that effective responses must integrate insights from social and behavioural sciences alongside computational methods, particularly when dealing with subtle, implicit, or emerging forms of harm.

Clear and operational definitions are not merely academic exercises; they are a prerequisite for effective action. By grounding its work in a nuanced understanding of hate speech and disinformation, the ECLIPSE project establishes a shared conceptual foundation for research, technology development, capacity-building, and policy recommendations. In an evolving digital landscape, such clarity is essential to building resilient, inclusive, and informed societies.

 

Note: This article is based on ECLIPSE Deliverable D2.1 “Definition and Methodologies”.

Authors: Parisa Diba and Georgios A. Antonopoulos

References

[1] United Nations Strategy and Plan of Action on Hate Speech (2020). Detailed Guidance on Implementation for United Nations Field Presences. [Online]. Available at: UN Strategy and PoA on Hate Speech_Guidance on Addressing in field.pdf [Accessed 4 November 2025].

[2] Wardle, C. (2024). ‘A conceptual analysis of the overlaps and differences between hate speech, misinformation and disinformation’. Department of Peace Operations (DPO), Office of the Special Adviser on the Prevention of Genocide (OSAPG), United Nations. [Online]. Available at: Report – A Conceptual Analysis of the Overlaps and Differences between Hate Speech, Misinformation and Disinformation (June 2024) [Accessed 6 November 2025].

[3] Becker, M.J., Scheiber, M. and Jensen, U. (Eds.) (2025). Imagery of Hate Online. Cambridge: Open Book Publishers.

[4] Fortuna, P. and Nunes, S. (2018). A survey on automatic detection of hate speech in text. ACM Computing Surveys, 51(4), pp. 1–30. 

Links

www.eclipsehorizon.eu/ 

Keywords

hate speech, disinformation, misinformation, digital platforms, memes, emojis, artificial intelligence, ECLIPSE project

Cover Image credit:

tete_escape / Shutterstock