Early Warning in Atrocity Scenarios Must Account for the Effects of Technology, Good or Bad

(Editor’s Note: This article is part of the Just Security symposium “Thinking Beyond Risks: Tech and Atrocity Prevention,” organized with the Programme on International Peace and Security (IPS) at the Oxford Institute for Ethics, Law and Armed Conflict.)

The potential impact of technology on mass atrocity scenarios has long raised questions for experts and policymakers in the field of atrocity prevention. In the two decades since the adoption of the Responsibility to Protect norm and the emergence of atrocity prevention as a discipline, developments in the “digital revolution” and the advent of the “information age” have influenced atrocity scenarios in countless ways, both positively and negatively.

Yet, despite its clear and growing importance, the subject remains underexplored and poorly understood. While the field’s understanding of how different technologies relate to atrocities has improved, a lack of systematic engagement with the topic and its many nuances still leaves major knowledge gaps, making it difficult to fully and constructively harness technology for prevention or to mitigate its harmful effects.

This is especially evident in atrocity early warning, which largely lacks a tech perspective. Many – though not all – of the frameworks that experts regularly rely on for early warning (such as the 2014 U.N. Framework of Analysis for Atrocity Crimes) were developed before the advent and widespread adoption of technologies now commonly present in atrocity scenarios. As technology becomes ever more pervasive, understanding its impact on atrocity dynamics is urgent.

Crucially, technology cuts both ways in atrocity scenarios. While it can play an important role in supporting prevention and response – particularly in monitoring and accountability efforts – technology can also increase risks, including by enhancing perpetrators’ capacity to commit atrocity crimes, creating the enabling circumstances for atrocities, and amplifying possible trigger factors. Given these dynamics, accounting for the impact of technology in early warning frameworks is crucial for a nuanced understanding of atrocity risks.

Technology’s Potential for Prevention and Response

Various forms of widely available technology have become essential to prevention and response efforts, particularly by civil society groups. These include geospatial intelligence, remote sensing, online open-source intelligence (OSINT), and documentation technologies ranging from simple cameras to specialized documentation software. These technologies are often used today to assist with monitoring, documentation, and accountability efforts, alongside other investigative methods such as human testimony, forensics, social network analysis, and big data analytics.

Such technologies have already proven their worth. They have helped to identify the construction and use of military installations (as well as political prison camps and mass detention facilities) and facilitated tracking the movements of military convoys, weapons, militias, and wartime smuggling operations. Moreover, they have helped document and geolocate violent incidents, identify perpetrators and evidence of intent, and demonstrate patterns in attacks against protected groups. In some cases, technology has even been able to track the “mood” of specific groups, predicting with remarkable accuracy the outbreak and location of identity-based protests and other risk factors.

Clearly, then, the adoption of new technologies presents major opportunities to help prevent atrocity crimes and hold perpetrators accountable. Yet, the more widespread such tools become, the greater the need for practical guidance on how to use them ethically and lawfully. Efforts such as the Berkeley Protocol (among other guidelines and materials) are crucial contributions. But the conversation is just starting, and significant gaps remain – chief among them, governance issues.

Ultimately, technology is but a tool. It is what users make of it that will determine whether its effects are positive or nefarious. That is why technology-driven considerations must be analytically integrated into risk assessments and forecasting if they are to provide a meaningful, nuanced understanding of atrocity dynamics.

Possible Risks of Technology in Atrocity Scenarios

While the presence or use of any technology is unlikely to change the core motivations behind atrocity crimes, such as ideology or past grievances, technology can have a significant impact on how such motivations are framed, how they spread, and how they are exacerbated.

Political and economic cleavages, along with the politicization of past grievances, are risk factors common to all atrocity crimes. As technology becomes increasingly integral to societies worldwide, unequal access to critical technologies like the internet, coupled with differentiated use patterns (often called the “digital divide”), could reinforce pre-existing patterns of marginalization and discrimination. For instance, individuals with limited access to critical technologies may have restricted educational and employment opportunities. Research has already shown how the digital divide exacerbates other divides, leading to unequal levels of participation in society and resource distribution.

Moreover, algorithmic biases can worsen identity-based marginalization. As access to services, resources, and opportunities increasingly depends on artificial intelligence-powered technologies like facial recognition or other biometrics-collection software, a failure to proactively address these biases risks entrenching existing inequalities or, at worst, creating new ones.

In addition to shaping motivations, technology has the potential to negatively impact the dynamics underlying atrocity scenarios. Specifically, it can affect at least three known features of escalatory dynamics: the capacity of perpetrators to commit atrocities, the enabling circumstances for such crimes, and potential trigger factors.

Regarding perpetrators’ capacity, various technologies are already being unlawfully used for surveillance purposes. These include maintaining domestic social control; targeting, tracking, and harassing dissidents, journalists, and activists, including through the use of spyware; and systematically surveilling vulnerable groups on a mass scale based on race or ethnicity. All of this enhances perpetrators’ ability to commit atrocities, with devastating effects. The experiences of the Uyghurs and other Turkic minorities in Xinjiang, as well as the Rohingya and other Muslim minorities in Myanmar, are just two examples.

In terms of enabling circumstances, digital technologies – particularly social media – have proven to be formidable tools for organizing and mobilizing sectors of society. This includes actors seeking to disrupt public order or to seize and consolidate power, often to the detriment of marginalized groups. Equally, social media and the “dark web” have enhanced the ability of violent extremists to “hide” in plain sight, attract new recruits, and radicalize individuals online through disinformation and propaganda.

Finally, “deepfakes” and generative AI can accelerate the spread of disinformation across social media platforms, creating trigger factors with harmful real-world consequences. For example, fabricated depictions of public figures or military attacks can influence election results or cause societal upheaval and other forms of instability that can lead to mass violence. After the fact, actors may also leverage these technologies to contest accusations of crimes by manipulating evidence.

Importantly, the existence, possession, or even prevalence of a technology does not necessarily indicate increased atrocity risks; this depends on context and application. Analytically, then, the precise impact of technology on modern atrocity dynamics is complex and multifaceted. Nuance is key, and it can only be captured by adopting a “tech lens” in early warning efforts. Specifically, this can be achieved by incorporating tech-based indicators of atrocity risks into existing early warning frameworks, enabling users to adopt a tech-sensitive approach to detection.

A Tech-Sensitive Approach to Risk Assessment and Forecasting

Detection refers to the process of identifying and analyzing signs – often called “indicators” – that may signal an increased risk of atrocity crimes in a given context. To enable users to practically identify risk factors, early warning frameworks must offer guidance on what indicators of these risks might be. This includes listing specific signs, as well as offering general guidance for interpreting data and evidence, assessing risk levels based on these indicators, and specifying when early warning actions are warranted.

Instead of completely overhauling existing risk assessment frameworks, a tech-sensitive approach to detection can be achieved by applying a “tech lens” to tools like the U.N. Framework of Analysis and others developed before the widespread adoption of digital and cyber technologies. Applying a tech lens involves systematically examining how technologies may influence or amplify risk factors outlined in that U.N. Framework, both by introducing new threats and by changing the way existing risks manifest. This approach allows users to more accurately assess technology’s impact on other risk factors and indicators, and to identify where new ones may be needed.
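
To make the idea of a “tech lens” audit concrete, the sketch below models risk factors and indicators as simple records and flags any risk factor whose indicator list does not yet address technology. This is a hypothetical illustration: the class names, fields, and paraphrased indicator summaries are assumptions for demonstration, not the U.N. Framework’s actual structure or text.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One early warning indicator (summary paraphrased, not U.N. text)."""
    number: int
    summary: str
    tech_related: bool = False  # does it address technology's role in the risk?

@dataclass
class RiskFactor:
    """A risk factor and its list of indicators."""
    number: int
    name: str
    indicators: list[Indicator] = field(default_factory=list)

    def tech_coverage(self) -> float:
        """Share of this factor's indicators that address technology."""
        if not self.indicators:
            return 0.0
        return sum(i.tech_related for i in self.indicators) / len(self.indicators)

# Illustrative excerpt: Risk Factor 5 with two of its eight indicators.
rf5 = RiskFactor(5, "Capacity to commit atrocity crimes", [
    Indicator(3, "Capacity to encourage or recruit large numbers of supporters"),
    Indicator(6, "Presence of commercial actors or companies as enablers"),
])

# The "tech lens" audit: surface risk factors with no tech-sensitive indicators.
for rf in (rf5,):
    if rf.tech_coverage() == 0.0:
        print(f"Risk Factor {rf.number} ({rf.name}): no tech-sensitive indicators")
```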

Increased Surveillance of Vulnerable Groups

For instance, the increased surveillance of vulnerable groups – often facilitated by technology – can serve as an early warning sign of perpetrators’ increased capacity to commit atrocities (Risk Factor 5 in the U.N. Framework). It can also indicate other risk factors, such as “enabling circumstances or preparatory actions” (Risk Factor 7), “triggering factors” (Risk Factor 8), and “intergroup tensions or patterns of discrimination against protected groups” (Risk Factor 9). However, none of the eight indicators listed under Risk Factor 5, the 14 indicators under Risk Factor 7, or the 12 and six indicators under Risk Factors 8 and 9, respectively, address the role of technology in how these risks might manifest.

Under Risk Factor 5, Indicator 3, which concerns the “capacity to encourage or recruit large numbers of supporters,” and Indicator 6, which addresses the “presence of commercial actors or companies that can serve as enablers by providing goods, services, or other forms of practical or technical support that help sustain perpetrators,” could be expanded to account for the presence of specific technologies that enable recruitment and mobilization, such as social media or the dark web. Similarly, companies may act as enablers of mass surveillance by sharing user data with authorities, often as a condition for accessing their products and services.

Recognizing the potential role of tech companies and specific products or platforms in increasing atrocity risks – particularly in contexts with weak legal protections for users’ data and privacy – would be highly beneficial. Even better would be adding an indicator that addresses the risks of mass surveillance in situations where access to goods and services relies on technology, but legal safeguards for users’ data are absent. Additionally, regarding “enabling circumstances or preparatory actions” under Risk Factor 7, Indicator 6 – which addresses the “imposition of strict control on the use of communication channels, or banning access to them” – could be expanded to include authorities’ use of these channels for mass surveillance rather than just restrictions or bans.
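
The expansions proposed above could likewise be recorded as structured amendments alongside the existing indicator text, keeping the framework intact while making the technology dimension explicit. Continuing the illustrative sketch, with all wording hypothetical:

```python
# Hypothetical amendments pairing existing indicators with proposed
# tech-sensitive language (wording is illustrative, not official text).
proposed_expansions = {
    ("Risk Factor 5", "Indicator 3"):
        "Presence of platforms enabling recruitment and mobilization, "
        "such as social media or the dark web",
    ("Risk Factor 5", "Indicator 6"):
        "Companies enabling mass surveillance by sharing user data with "
        "authorities as a condition of access to products and services",
    ("Risk Factor 7", "Indicator 6"):
        "Authorities' use of communication channels for mass surveillance, "
        "not only restrictions or bans on access",
}

# A wholly new indicator where no existing one fits (equally illustrative).
proposed_additions = [
    ("Risk Factor 5",
     "Mass-surveillance risk where access to goods and services relies on "
     "technology but legal safeguards for users' data are absent"),
]

for (factor, indicator), text in proposed_expansions.items():
    print(f"{factor} / {indicator}: {text}")
for factor, text in proposed_additions:
    print(f"{factor} / new indicator: {text}")
```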

The Spread of Hate Speech and Mis/Disinformation Online

Insights into the spread of hateful rhetoric and incitement online – including by whom, against whom, and through what means – can also indicate several risk factors in the U.N. Framework, such as “intergroup tensions or patterns of discrimination against protected groups” (Risk Factor 9), “signs of an intent to destroy in whole or in part a protected group” (Risk Factor 10), and “signs of a plan or policy to attack any civilian population” (Risk Factor 11). Adding indicators under each of these risk factors to capture how they may manifest through technology could sharpen the ability to detect them.

A tech-sensitive approach to detection also requires differentiating between the impacts of specific tech tools and platforms based on the nature and source of the content. For example, disinformation or hate speech spread by isolated private accounts will have different effects than content spread by government-linked accounts, whether operated by real users or bots. Monitoring and distinguishing account types, such as those linked to State actors or their proxies, and assessing their potential reach can also deepen understanding of how such content spreads. For instance, “influencer” accounts with hundreds of thousands or even millions of followers may spread disinformation either deliberately or unknowingly. Recognizing these dynamics can help identify measures to address or mitigate their impact in escalating situations.
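
As a rough illustration of how such differentiation might be operationalized – a minimal sketch assuming hypothetical account metadata, far simpler than any real monitoring pipeline – accounts could be prioritized for review by combining state linkage, automation signals, and reach:

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Hypothetical account metadata for monitoring prioritization."""
    handle: str
    followers: int
    state_linked: bool  # linked to a State actor or proxy (assumed known)
    likely_bot: bool    # automation signal from an upstream classifier

def review_priority(acct: Account) -> int:
    """Crude ordinal priority: higher means review sooner.

    Weights are illustrative; the point is that source and reach
    matter, not the specific numbers.
    """
    score = 0
    if acct.state_linked:
        score += 3   # coordinated state-linked content has outsized impact
    if acct.likely_bot:
        score += 2   # automated amplification suggests a campaign
    if acct.followers >= 100_000:
        score += 2   # "influencer" reach, deliberate or unwitting
    return score

accounts = [
    Account("@isolated_user", 120, state_linked=False, likely_bot=False),
    Account("@proxy_outlet", 450_000, state_linked=True, likely_bot=False),
]
for a in sorted(accounts, key=review_priority, reverse=True):
    print(a.handle, review_priority(a))
```

The thresholds and weights here are placeholders; the design point is simply that the source and reach of content, not its substance alone, should shape the assessment.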

In all cases, improved detection requires closer examination of patterns in how actors behave and how the use of digital technologies in at-risk societies might interact with other risk factors. By analyzing the relationship between these technologies and other risk factors, analysts can gain insight into patterns of escalation and identify structural weaknesses that may increase risk.

Ultimately, the primary danger of any technology in atrocity scenarios lies not in its mere existence or prevalence, but in the effectiveness of the governance system regulating its misuse. Because atrocity risks tend to increase when actors operate outside apt regulatory frameworks, the absence or dismantling of such frameworks is equally, if not more, indicative of atrocity risk and should be integrated into revised approaches to detection.

Equally important is the need to improve governance of the technology sector and its societal impact, rather than simply monitoring the absence or dismantling of regulatory frameworks. This ensures that legal protections and access to remedies keep pace with technological advancements, preventing tech innovation from outstripping the law’s ability to protect the users it is meant to serve. Achieving this requires increased collaboration among all stakeholders directly and indirectly involved in tech regulation, including legislators, policymakers, academic experts and other civil society groups, and tech companies.

Conclusions

Rapid technological advancements, particularly in the digital and cyber realms, are reshaping the dynamics of atrocity crimes. This requires early warning frameworks to systematically engage with how technology affects the risk factors and indicators commonly used for detection. Analysts using existing frameworks for detection should apply a tech lens to discern how these frameworks currently account for technology’s impact on escalation dynamics and identify where additional indicators are needed to enhance detection capabilities.

While technology is crucial, it operates within a broader context of political, social, and economic factors that influence the nature and onset of atrocity crimes. Thus, the mere presence or prevalence of certain technologies does not inherently indicate increased risk in vulnerable societies. However, technology can, under certain circumstances, heighten specific risks, warranting its incorporation into the indicators leveraged by existing frameworks for detection.

In doing so, analysts should guard against “tech fetishism,” or a novelty-driven urge to overhaul the foundational principles of atrocity prevention and risk assessment. Instead, it is often sufficient to tweak existing indicators within current frameworks to better reflect the nuanced ways in which technology can act as both a tool for de-escalation and a catalyst for specific atrocity risks.

In some instances, improved detection may also require the development of sophisticated, targeted tools by tech companies, including to curb the spread of certain harmful content and better protect users’ data from authorities, particularly where such data collection is a prerequisite for accessing basic goods and services. That, in turn, requires ongoing engagement between tech companies and the broader atrocity prevention field, including diplomatic, policy, legislative, and academic circles. Such partnerships are essential for developing a systematic understanding of how rapid technological changes and specific tools or products may impact atrocity dynamics, both generally and in specific contexts.

Moving forward, establishing new connections between technology and prevention strategies must be part of a broader, multi-stakeholder effort to improve governance of the technology sector and its societal impact, extending well beyond the specific context of atrocity scenarios.

IMAGE: This photo taken on June 4, 2019 shows schoolchildren walking below surveillance cameras in Akto, south of Kashgar, in China’s western Xinjiang region. The recent destruction of dozens of mosques in Xinjiang at the time highlighted the increasing pressure Uighurs and other ethnic minorities face in the heavily-policed region. (Photo GREG BAKER/AFP via Getty Images)
