The September 11 assault on the U.S. consulate in Benghazi, Libya, is many things; a clear course of events is not one of them. As the last month of administration disclosures has demonstrated, the bureaucratic breakdowns, insurgent directives, and diplomatic kerfuffles that preceded the deaths of four U.S. diplomatic officials remain opaque, at best. While critiques of the administration's security procedures may be well-placed, Director of National Intelligence James Clapper's recent statement demonstrates that predicting political events in Libya, as elsewhere, is an inherently contingent exercise.
While the particular outcome of the Benghazi attacks is tangential to human rights advocacy, the administration's embattled warning analysis raises an instructive question for mass atrocities prevention: how can we know when atrocities will occur, and how can we improve the tools we use to predict them?
Over the past decade, "early warning" has become a prominent priority for the atrocities prevention community's tool-strengthening efforts. Early warning functions as both a noun and an adjective: as the former, it denotes a particular process, conducted by members of the intelligence community, diplomatic officials, defense institutions, and various non-state actors; as the latter, it describes a spectrum of policy tools that identify plausible indicators and drivers of mass atrocities.
Early warning relies on myriad analytic frameworks that identify key factors in a state's ability to prevent, mitigate, and respond to mass atrocities, including dangerous speech, mediation capacity, impunity and accountability within state security forces, and the persistence of ethnic, social, political, and economic divisions. That the historical, situational origins of mass atrocities are unique is self-evident; with dozens of potential atrocity situations to monitor, however, a common, unified early warning framework eases the institutional burden of monitoring and analysis.
Despite mounting attention to the strength, utility, and policy relevance of early warning, key gaps restrict these tools' effectiveness. While existing early warning frameworks have identified, categorized, and applied country-level indicators to atrocities-monitoring operations, localized analysis and assessment efforts remain elusive. Unfortunately, the human cost of localized, inter-communal violence has become increasingly apparent: in South Sudan's Jonglei State, Kenya's Tana River Delta, and Nigeria's central Middle Belt region, for example, national-level "violent entrepreneurs" often exploit localized tensions, increasing the possibility of atrocities escalation.
New research on localized conflict drivers, causes, and indicators, however, may offer opportunities for improvement. According to Chris Blattman's field studies on conflict mediation in Liberia, localized conflict prediction is possible and, perhaps more surprisingly, effective. Existing technological resources, such as Ushahidi's crowd-sourcing platform, effectively monitor atrocity events as they unfold, but do little to probe pre-event, structural indicators. Such resources, however, could function as valuable platforms for pre-atrocities analysis, given their proven capacity to process dynamic economic, political, and social data. As the interaction between localized violence and national atrocity events persists, predictive analysis at the local level may be a crucial entry point for strengthening early warning and making the most of limited institutional resources. In this context, the U.S. Holocaust Memorial Museum's nascent public early warning efforts may be a valuable model for further collaboration among public, private, and non-governmental data collection efforts for atrocities early warning. While imperfect information ensures that early warning will remain an inconsistent tool, emerging technologies and new approaches to atrocities data may strengthen predictive efforts.