Categories
Full Text Articles - Audio Posts

US defense chief says alliance with Philippines will transcend administrations


Lloyd Austin’s comments came amid intense speculation over how the incoming Trump administration would steer US military engagements in Asia


Red Hook org Friends of Firefighters launches new program to support children of New York’s Bravest


A new program is supporting the mental health needs of the families and children of New York’s Bravest.

Red Hook-based organization Friends of Firefighters, which advocates for the health and wellness of firefighters, is reaching a new demographic through its “Bravest Children” program, led by counselors Zach Grill and Kia Carbone. 

The pair work with children ages 4 to 17, helping them navigate the stresses of their parents’ dangerous job and of everyday life.

“Bravest Children” counselor Zach Grill. Photo by Lloyd Mitchell

“It’s creating a community for the kids [of] these firefighters [scattered] all across the city,” Grill said. “The kids don’t necessarily meet up with other children of firefighters. So having a sense of community, they can see that the different lifestyle they live by being a child of [a] firefighter does have normalcy.”

So far, twelve families participate in the group, meeting regularly to learn about effective communication, coping skills, and resilience, all while having some fun. 

Grill and Carbone said they want to help the children deal with their struggles, and talk freely about whatever is on their mind. They do that a little differently than they might with adults — Grill connects with the kids through tabletop games like UNO and role-playing games, and Carbone through painting and drawing. There’s also Sadie, aka Boox — a service dog in training, who will someday work with the kids. 

Carbone connects with children through art. Photo by Lloyd Mitchell

“I want them to be able to understand why they are feeling stressed or these emotions,” Grill said.

Carbone said she takes pride in helping the younger generation, who often weren’t taught how to communicate their emotions and struggles. 

“I mean, it was like, ‘Oh yeah, learn how to talk about it,’” she said. “But then, like, not told how to. And when you don’t know how to communicate, you bury things or you lash out, it’s one or the other, and it’s not conducive to anything.” 

Since the COVID-19 pandemic, Carbone said, programs for children have filled up, and there are waitlists for after-school programs and programs like Bravest Children. But those kids need a place to learn and get support. 

Sadie is training to become a service dog for the children. Photo by Lloyd Mitchell

Bravest Children helps children and families put things in perspective, she said. An eight-year-old in the group recently told Carbone they felt that “life feels like it is moving too fast.”

Being able to “figure out what’s going on, communicate that even if you’re communicating it to yourself, that’s something that a lot of us really struggle to do,” she said. “So I think that’s probably the most important thing.”



November 19, 2024 1400 UTC




Communication platforms play a major role in data breach risks


Every online activity or task brings at least some level of cybersecurity risk, but some carry more than others. The Kiteworks Sensitive Content Communications Report found that this is especially true of communication tools.

When it comes to cybersecurity, communicating means more than just talking to another person; it includes any activity that transfers data from one point online to another. Companies use a wide range of tools to communicate, including email, file sharing, managed file transfer, and secure file transfer. There are many others as well, including SMS text, video conferencing, and even web forms. Kiteworks’ research found that, when it comes to security and communication tools, more is not necessarily better.

The survey found that companies with more than seven different communication tools were at significantly higher risk of a data breach — 3.55x higher than average. Only 9% of organizations overall reported more than 10 data breaches, but 32% of companies with more than seven communication tools experienced that many. More communication tools also translate into higher litigation costs: organizations with more than seven tools reported data breach litigation costs 3.25 times higher.
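As a quick sanity check (assuming, as the figures suggest, that the 3.55x multiplier is derived from those two breach-count percentages), the relative risk works out as follows:

```python
# Figures from the Kiteworks survey cited above.
baseline_rate = 0.09    # organizations overall reporting more than 10 breaches
many_tools_rate = 0.32  # same rate among organizations with more than seven tools

# Relative risk of landing in the high-breach group.
relative_risk = many_tools_rate / baseline_rate
print(f"{relative_risk:.2f}x")  # prints "3.56x", in line with the reported 3.55x
```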

Impacts of a data breach

Companies that suffer a high number of data breaches typically see numerous negative impacts, including lost customers, reputation damage, and operational downtime. Many organizations must also add resources after a breach, such as customer service help desks and credit monitoring services. Companies in regulated industries may face fines related to the breach as well.

The 2024 Cost of a Data Breach Report found that the average cost of a data breach jumped to $4.88 million from $4.45 million in 2023, a 10% spike and the highest increase since the pandemic. While the study showed key improvements related to breaches, especially in terms of identifying and containing breaches more quickly, the increased cost of a breach is due to rising business costs.

Read the Cost of a Data Breach Report

Why an increase in communication tools increases risk

With communication and data transfer now central to all industries and most processes, both internal and external, reducing risk starts with understanding why each new tool increases the odds of a breach.

Here are key reasons for the correlation between the number of tools and the risk of a data breach:

Increased attack area

Each time a new tool is added to a process, the organization adds a new entry point for attack every time a user accesses that tool. Suppose, for example, that the marketing department begins using a video conferencing tool different from the one the rest of the company uses. Threat actors can now target both the tool’s users and the meeting recordings stored in the cloud. The information sent through the tool, such as chat comments and shared files, adds still more opportunities for a data breach.

More opportunities for exchanging sensitive data

Kiteworks found that tracking sensitive data is a major issue, with two-thirds of respondents sending sensitive data to more than 1,000 different third parties. Employees also tend to let their guard down when using casual communication tools such as messaging apps and email, which leads to sensitive data being shared and increases breach risk.

More resources are required to govern and monitor

Because communication tools open many avenues for cybersecurity risk, the use of each tool must be carefully monitored under documented processes. This requires more resources, especially for monitoring use for security issues or improper use. With more tools to monitor, it’s easier to overlook a warning sign of a breach.

Increased risk of human error

Communication tools give employees many different ways to make mistakes that lead to a breach, such as falling for a social engineering scheme or sending data over an insecure connection. Employees are also more likely to make compliance errors when juggling more tools, since processes may vary from tool to tool, making it easier to overlook a step.

Reducing risk from communication tools

Reducing breaches often feels like an overwhelming task. By starting with communication tools, organizations can take proactive steps toward reducing their risk.

Take stock of tools in use

Many companies have no idea exactly how many tools they are using. By working with all employees and departments, organizations should create a catalog of all tools currently in use.
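As a rough sketch of what such a catalog can surface (all department, tool, and category names here are hypothetical), grouping tools by the communication task they perform makes redundant tools easy to spot:

```python
from collections import defaultdict

# Hypothetical inventory gathered from an org-wide survey:
# (department, tool, communication task) tuples. All names are illustrative.
inventory = [
    ("marketing", "ZoomX", "video conferencing"),
    ("engineering", "MeetNow", "video conferencing"),
    ("sales", "ZoomX", "video conferencing"),
    ("engineering", "FileDrop", "file sharing"),
    ("finance", "SecureSend", "file sharing"),
    ("hr", "FormsApp", "web forms"),
]

# Group distinct tools by the task they perform.
tools_by_category = defaultdict(set)
for _dept, tool, category in inventory:
    tools_by_category[category].add(tool)

# Categories served by more than one tool are candidates for consolidation.
duplicates = {cat: sorted(tools)
              for cat, tools in tools_by_category.items()
              if len(tools) > 1}
for cat, tools in duplicates.items():
    print(f"{cat}: {len(tools)} tools in use -> {tools}")
```

With the sample data above, the report flags video conferencing and file sharing as tasks covered by two tools each, which feeds directly into the consolidation step described next.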

Eliminate multiple tools used for the same purpose

If your business finds that four different project management tools are in use across the organization, determine which one is the best fit. By helping teams transition to that approved tool, you can reduce your risk of a breach.

Provide employees with the tools that they need

Many employees begin using unapproved tools because the tools the company issues do not work for their tasks. For example, many companies instruct employees to use file-sharing tools that have file size limits. If an employee must transfer a file that is too big for that tool, the only way they can do their job is by using another, unapproved tool. Many organizations find that their high tool count is due to employees improvising to get things done. By ensuring that employees have tools that accomplish their required tasks, you can often quickly reduce the number of tools in use.

Use tools that perform multiple tasks

The number of communication tools often quickly grows when an organization has a separate tool for each different type of communication task. By using platforms that perform multiple functions, such as file sharing, video conferencing and messaging, organizations can significantly decrease the number of communication tools.

It’s easy to look up one day and realize that your company is using many different tools. By making a concerted effort to understand which tools are needed and to provide the right ones, your organization can reduce its risk of a breach.

The post Communication platforms play a major role in data breach risks appeared first on Security Intelligence.


Categories
Newscasts

NPR News: 11-19-2024 9AM EST





AP Headline News – Nov 19 2024 09:00 (EST)




9 AM ET: Ukraine strikes Russia, ceasefire hope, New York stabbing spree & more


Moscow says Ukraine has fired US missiles into Russia, just days after Kyiv was given permission to use longer range American weapons. The Democrats are holding leadership elections in Congress today. There’s hope a ceasefire deal between Israel and Hezbollah could be getting closer. A man has been arrested in New York after three people were stabbed and killed. Plus, we’ll tell you why a former FEMA employee is defending herself.


‘Bomb Cyclone’ Threatens Northern California and Pacific Northwest



SEATTLE — Northern California and the Pacific Northwest are bracing for what is expected to be a powerful storm, with heavy rain and winds set to pummel the region and potentially cause power outages and flash floods.

The Weather Prediction Center issued excessive rainfall risks beginning Tuesday and lasting through Friday as the strongest atmospheric river — a long plume of moisture stretching far over the Pacific Ocean — that California and the Pacific Northwest have seen this season bears down on the region. The storm system has intensified so quickly that it is considered a “bomb cyclone,” explained Richard Bann, a meteorologist with the National Weather Service Weather Prediction Center.


Read More: The Superstorm Era Is Upon Us

The areas that could see particularly severe rainfall as the large plume of moisture heads toward land will likely stretch from the south of Portland, Oregon, to the north of the San Francisco area, he explained.

“Be aware of the risk of flash flooding at lower elevations and winter storms at higher elevations. This is going to be an impactful event,” he said.

In northern California, flood and high wind watches go into effect Tuesday, with up to 8 inches (20 centimeters) of rain predicted for parts of the San Francisco Bay Area, North Coast and Sacramento Valley.

A winter storm watch was issued for the northern Sierra Nevada above 3,500 feet (1,066 meters), where 15 inches (38 centimeters) of snow was possible over two days. Wind gusts could top 75 mph (120 kph) in mountain areas, forecasters said.

“Numerous flash floods, hazardous travel, power outages and tree damage can be expected as the storm reaches max intensity” on Wednesday, the Weather Prediction Center warned.

Meanwhile, Southern California this week will see dry conditions amid gusty Santa Ana winds that could raise the risk of wildfires in areas where crews are still mopping up a major blaze that destroyed 240 structures. The Mountain Fire, which erupted Nov. 6 in Ventura County northwest of Los Angeles, was about 98% contained on Monday.

Winds will calm by the end of the week, when rain is possible for the greater Los Angeles area.

In southwestern Oregon near the coast, 4 to 7 inches (10 to 18 centimeters) of rain is predicted — with as much as 10 inches (25 centimeters) possible in some areas — through late Thursday night and early Friday morning, Bann said.

A high wind warning has been issued for the north and central Oregon coast beginning at 4 p.m. Tuesday with south winds from 25 mph (40 kph) to 40 mph (64 kph), with gusts to 60 mph (97 kph) expected, according to the weather service in Portland. Gusts up to 70 mph (113 kph) are possible on beaches and headlands. Widespread power outages are expected with winds capable of bringing down trees and power lines, the weather service said. Travel is also expected to be difficult.

Washington could also see strong rainfall, but likely not as bad as Oregon and California. From Monday evening through Tuesday, some of its coastal ranges could get as much as 1.5 inches (3.8 centimeters) of rain, Bann said.

The weather service warned of high winds from Tuesday afternoon until early Wednesday for coastal parts of Pacific County, in southwest Washington. With gusts potentially topping 35 mph (56 kph) — and likely faster near beaches and headlands — trees and power lines are at risk of being knocked down, the Pacific County Emergency Management Agency warned.

Washington State Patrol Trooper John Dattilo, a patrol spokesperson based in Tacoma, posted on social media Monday afternoon that people should be prepared for “some bad weather” on Tuesday night. “Stay off the roads if you can!”

A blizzard warning was issued for the majority of the Cascades in Washington, including Mount Rainier National Park, starting Tuesday afternoon, with up to a foot of snow and wind gusts up to 60 mph (97 kph), according to the weather service in Seattle. Travel across passes could be difficult if not impossible.

Outside of this region, the central and eastern Gulf Coast, including the Florida Panhandle, is at risk of flooding on Tuesday, with 2 to 3 inches (5 to 7.6 centimeters) of rainfall in the forecast, according to the weather service. Low-lying and urban areas could see flash floods.

___

Associated Press reporter Lisa Baumann contributed to this report.



Early Warning in Atrocity Scenarios Must Account for the Effects of Technology, Good or Bad


(Editor’s Note: This article is part of the Just Security symposium “Thinking Beyond Risks: Tech and Atrocity Prevention,” organized with the Programme on International Peace and Security (IPS) at the Oxford Institute for Ethics, Law and Armed Conflict.)

The potential impact of technology on mass atrocity scenarios has long raised questions for experts and policymakers in the field of atrocity prevention. In the two decades since the adoption of the Responsibility to Protect norm and the emergence of atrocity prevention as a discipline, developments in the “digital revolution” and the advent of the “information age” have influenced atrocity scenarios in countless ways, both positively and negatively.

Yet, despite its clear and growing importance, the subject remains underexplored and poorly understood. While the field better understands how different technologies relate to atrocities, a lack of systematic engagement with the topic and its many nuances still leaves major knowledge gaps, making it difficult to fully and constructively harness technology for prevention or to mitigate its harmful effects.

This is especially evident in atrocity early warning, which largely lacks a tech perspective. Many – though not all – of the frameworks that experts regularly rely on for early warning (such as the 2014 U.N. Framework of Analysis on Atrocity Crimes) were developed before the advent and widespread adoption of technologies now commonly present in atrocity scenarios. As technology becomes ever more pervasive, understanding its impact on atrocity dynamics is urgent.

Crucially, technology cuts both ways in atrocity scenarios. While it can play an important role in supporting prevention and response – particularly in monitoring and accountability efforts – technology can also increase risks, including by enhancing perpetrators’ capacity to commit atrocity crimes, creating the enabling circumstances for atrocities, and amplifying possible trigger factors. Given these dynamics, accounting for the impact of technology in early warning frameworks is crucial for a nuanced understanding of atrocity risks.

Technology’s Potential for Prevention and Response

Various forms of widely available technology have become essential to prevention and response efforts, particularly by civil society groups. This includes geospatial intelligence, remote sensing, online open-source intelligence (OSINT), and documentation technologies ranging from simple photo cameras to specialized documentation software. In particular, these technologies are often used today to assist with monitoring, documentation, and accountability efforts, along with other investigative methods such as human testimony, forensics, social network analysis, and big data analytics.

Such technologies have already proven their worth. They have helped to identify the construction and use of military installations (as well as political prison camps and mass detention facilities) and facilitated tracking the movements of military convoys, weapons, militias, and wartime smuggling operations. Moreover, they have helped document and geolocate violent incidents, identify perpetrators and evidence of intent, and demonstrate patterns in attacks against protected groups. In some cases, technology has even been able to track the “mood” of specific groups, predicting with remarkable accuracy the outbreak and location of identity-based protests and other risk factors.

Clearly, then, the adoption of new technologies presents major opportunities to help prevent atrocity crimes and hold perpetrators accountable. Yet, the more widespread such tools become, the greater the need for practical guidance on how to use them ethically and lawfully. Efforts such as the Berkeley Protocol (among other guidelines and materials) are crucial contributions. But the conversation is just starting, and significant gaps remain – chief among them, governance issues.

Ultimately, technology is but a tool. It is what users make of it that will determine whether its effects are positive or nefarious. That is why technology-driven considerations must be analytically integrated into risk assessments and forecasting if they are to provide a meaningful, nuanced understanding of atrocity dynamics.

Possible Risks of Technology in Atrocity Scenarios

While the presence or use of any technology is unlikely to change the core motivations behind atrocity crimes, such as ideology or past grievances, technology can have a significant impact on how such motivations are framed, how they spread, and how they are exacerbated.

Political and economic cleavages, along with the politicization of past grievances, are risk factors common to all atrocity crimes. As technology becomes increasingly integral to societies worldwide, unequal access to critical technologies like the internet, coupled with differentiated use patterns (often called the “digital divide”), could reinforce pre-existing patterns of marginalization and discrimination. For instance, individuals with limited access to critical technologies may have restricted educational and employment opportunities. Research has already shown how the digital divide exacerbates other divides, leading to unequal levels of participation in society and resource distribution.

Moreover, algorithmic biases can worsen identity-based marginalization. As access to services, resources, and opportunities increasingly depends on artificial intelligence-powered technologies like facial recognition or other biometrics-collection software, a failure to proactively address these biases risks entrenching existing inequalities or, at worst, creating new ones.

In addition to shaping motivations, technology has the potential to negatively impact the dynamics underlying atrocity scenarios. Specifically, it can affect at least three known features of escalatory dynamics: the capacity of perpetrators to commit atrocities, the enabling circumstances for such crimes, and potential trigger factors.

Regarding perpetrators’ capacity, various technologies are already being unlawfully used for surveillance purposes. These include maintaining domestic social control, targeting, tracking, and harassing dissidents, journalists, and activists – including through the use of spyware – and systematically targeting vulnerable groups by surveilling individuals on a mass scale based on race or ethnicity. All of this enhances perpetrators’ ability to commit atrocities, with devastating effects. The experiences of the Uyghurs and other Turkic minorities in Xinjiang, as well as the Rohingyas and other Muslim minorities in Myanmar, are just a couple of examples.

In terms of enabling circumstances, digital technologies – particularly social media – have proven to be formidable tools for organizing and mobilizing sectors of society. This includes actors seeking to disrupt public order or to grab and consolidate power, often to the detriment of marginalized groups. Equally, social media and the “dark-web” have enhanced the ability of violent extremists to “hide” in plain sight, attract new recruits, and radicalize individuals online through disinformation and propaganda.

Finally, “deep fakes” and generative AI can exacerbate the rapid spread of disinformation across social media platforms, creating triggering risk factors with harmful real-world outcomes. For example, fabricated depictions of public figures or military attacks can influence election results or cause societal upheaval and other forms of instability that can lead to mass violence. After the fact, actors may also leverage these technologies to contest accusations of crimes by manipulating evidence.

Importantly, the mere existence, possession, or prevalence of any technology does not necessarily indicate increased atrocity risks; this depends on context and application. Analytically, then, the precise impact of technology on modern atrocity dynamics is complex and multifaceted. Nuance is key, and it can only be captured by adopting a “tech lens” in early warning efforts. Specifically, this can be achieved by incorporating tech-based indicators of atrocity risks into existing early warning frameworks, enabling users to adopt a tech-sensitive approach to detection.

A Tech-Sensitive Approach to Risk Assessment and Forecasting

Detection refers to the process of identifying and analyzing signs – often called “indicators” – that may signal an increased risk of atrocity crimes in a given context. To enable users to practically identify risk factors, early warning frameworks must offer guidance on what indicators of these risks might be. This includes listing specific signs, as well as offering general guidance for interpreting data and evidence, assessing risk levels based on these indicators, and specifying when early warning actions are warranted.

Instead of completely overhauling existing risk assessment frameworks, a tech-sensitive approach to detection can be achieved by applying a “tech lens” to tools like the U.N. Framework of Analysis and others developed before the widespread adoption of digital and cyber technologies. Applying a tech lens involves systematically examining how technologies may influence or amplify risk factors outlined in that U.N. Framework, both by introducing new threats and by changing the way existing risks manifest. This approach allows users to more accurately assess technology’s impact on other risk factors and indicators, and to identify where new ones may be needed.

Increased Surveillance of Vulnerable Groups

For instance, the increased surveillance of vulnerable groups – often facilitated by technology – can serve as an early warning sign of perpetrators’ increased capacity to commit atrocities (Risk Factor 5 in the U.N. Framework). It can also indicate other risk factors, such as “enabling circumstances or preparatory actions” (Risk Factor 7), “triggering factors” (Risk Factor 8), and “intergroup tensions or patterns of discrimination against protected groups” (Risk Factor 9). However, none of the eight indicators listed under Risk Factor 5, the 14 indicators under Risk Factor 7, or the 12 and six indicators under Risk Factors 8 and 9, respectively, address the role of technology in how these risks might manifest.

Under Risk Factor 5, Indicator 3, which concerns the “capacity to encourage or recruit large numbers of supporters,” and Indicator 6, which addresses the “presence of commercial actors or companies that can serve as enablers by providing goods, services, or other forms of practical or technical support that help sustain perpetrators,” could be expanded to account for the presence of specific technologies that enable recruitment and mobilization, such as social media or the dark web. Similarly, companies may act as enablers of mass surveillance by sharing user data with authorities, often as a condition for accessing their products and services.

Recognizing the potential role of tech companies and specific products or platforms in increasing atrocity risks – particularly in contexts with weak legal protections for users’ data and privacy – would be highly beneficial. Even better would be adding an indicator that addresses the risks of mass surveillance in situations where access to goods and services relies on technology, but legal safeguards for users’ data are absent. Additionally, regarding “enabling circumstances or preparatory actions” under Risk Factor 7, Indicator 6—which addresses the “imposition of strict control on the use of communication channels, or banning access to them”—could be expanded to include authorities’ use of these channels for mass surveillance rather than just restrictions or bans.

The Spread of Hate Speech and Mis/Disinformation Online

Insights into the spread of hateful rhetoric and incitement online – including by whom, against whom, and through what means – can also indicate several risk factors in the U.N. Framework, such as “intergroup tensions or patterns of discrimination against protected groups” (Risk Factor 9), “signs of an intent to destroy in whole or in part a protected group” (Risk Factor 10), and “signs of a plan or policy to attack any civilian population” (Risk Factor 11). Adding indicators under each of these risk factors to capture how they may manifest through technology could sharpen the ability to detect them.

A tech-sensitive approach to detection also requires differentiating between the impacts of specific tech tools and platforms based on the nature and source of the content. For example, disinformation or hate speech spread by isolated private accounts will have different effects than content spread by government-linked accounts, whether those accounts are real users or bots. Monitoring and distinguishing account types, such as those linked to State actors or their proxies, as well as assessing their potential reach, can also deepen understanding of how such content spreads. For instance, “influencer” accounts with hundreds of thousands or even millions of followers may spread disinformation either deliberately or unknowingly. Recognizing these dynamics can help identify measures to address or mitigate their impact in escalating situations.

In all cases, improved detection requires closer examination of patterns in how actors behave and how the use of digital technologies in at-risk societies might interact with other risk factors. By analyzing the relationship between these technologies and other risk factors, analysts can gain insight into patterns of escalation and identify structural weaknesses that may increase risk.

Ultimately, the primary danger of any technology in atrocity scenarios lies not in its mere existence or prevalence, but in the effectiveness of the governance system regulating its misuse. Because atrocity risks tend to increase when actors operate outside apt regulatory frameworks, the absence or dismantling of such frameworks is equally, if not more, indicative of atrocity risk and should be integrated into revised approaches to detection.

Equally important is the need to improve governance of the technology sector and its societal impact, rather than simply monitoring the absence or dismantling of regulatory frameworks. This ensures that legal protections and access to remedies keep pace with technological advancements, preventing tech innovation from outstripping the law’s ability to protect the users such innovation is meant to serve. Achieving this requires increased collaboration among all stakeholders directly and indirectly involved in tech regulation processes, including legislators, policymakers, academic experts, other civil society groups, and tech companies.

Conclusions

Rapid technological advancements, particularly in the digital and cyber realms, are reshaping the dynamics of atrocity crimes. This requires early warning frameworks to systematically engage with how technology affects the risk factors and indicators commonly used for detection. Analysts using existing frameworks for detection should apply a tech lens to discern how these frameworks currently account for technology’s impact on escalation dynamics and identify where additional indicators are needed to enhance detection capabilities.

While technology is crucial, it operates within a broader context of political, social, and economic factors that influence the nature and onset of atrocity crimes. Thus, the mere presence or prevalence of certain technologies does not inherently indicate increased risk in vulnerable societies. However, technology can, under certain circumstances, heighten specific risks, warranting its incorporation into the indicators leveraged by existing frameworks for detection.

In doing so, analysts should guard against “tech fetishism,” or a novelty-driven urge to overhaul the foundational principles of atrocity prevention and risk assessment. Instead, it is often sufficient to tweak existing indicators within current frameworks to better reflect the nuanced ways in which technology can act as both a tool for de-escalation and a catalyst for specific atrocity risks.

In some instances, improved detection may also require the development of sophisticated, targeted tools by tech companies, including to curb the spread of certain harmful content and better protect users’ data from authorities, particularly where such data collection is a prerequisite for accessing basic goods and services. That, in turn, requires ongoing engagement between tech companies and the broader atrocity prevention field, including diplomatic, policy, legislative, and academic circles. Such partnerships are essential for developing a systematic understanding of how rapid technological changes and specific tools or products may impact atrocity dynamics, both generally and in specific contexts.

Moving forward, establishing new connections between technology and prevention strategies must be part of a broader, multi-stakeholder effort to improve governance of the technology sector and its societal impact, extending well beyond the specific context of atrocity scenarios.

IMAGE: This photo taken on June 4, 2019 shows schoolchildren walking below surveillance cameras in Akto, south of Kashgar, in China’s western Xinjiang region. The recent destruction of dozens of mosques in Xinjiang at the time highlighted the increasing pressure Uighurs and other ethnic minorities face in the heavily-policed region. (Photo GREG BAKER/AFP via Getty Images)

The post Early Warning in Atrocity Scenarios Must Account for the Effects of Technology, Good or Bad appeared first on Just Security.

