SIGHT

“Lost memories”: War crimes evidence threatened by AI moderation


AVI ASHER-SCHAPIRO and BAN BARKAWI, of Thomson Reuters Foundation, report on how AI-enabled tools on media websites may be erroneously removing content that could be used to prove rights violations at bodies like the International Criminal Court…

New York, US/Amman, Jordan
Thomson Reuters Foundation

From bombings and protests to the opening of a new health centre, student journalist Baraa Razzouk has been documenting daily life in Idlib, Syria, for years, and posting the videos to his YouTube account. 

But this month, the 21-year-old started getting automated emails from YouTube alerting him that his videos violated its policy, and that they would be deleted. As of this month, more than a dozen of his videos had been removed, he said.

“Documenting the [Syrian] protests in videos is really important. Also, documenting attacks by regime forces,” he told the Thomson Reuters Foundation in a phone interview. “This is something I had documented for the world and now it’s deleted.”


A man rides a motorbike past damaged buildings in the rebel-held town of Nairab, Idlib region, Syria, on 17th April. PICTURE: Reuters/Khalil Ashawi/File photo

YouTube, Facebook, and Twitter warned in March that videos and other content may be erroneously removed for policy violations, as the coronavirus pandemic forced them to empty offices and rely on automated takedown software.

But those AI-enabled tools risk confusing human rights and historical documentation like Razzouk’s videos with problematic material like terrorist content – particularly in war-torn countries like Syria and Yemen, digital rights activists warned. 


“AI is notoriously context-blind,” said Jeff Deutch, a researcher for Syrian Archive, a non-profit which archives video from conflict zones in the Middle East.

“It is often unable to gauge the historical, political or linguistic settings of posts…human rights documentation and violent extremist proposals are too often indistinguishable,” he said in a phone interview.

Erroneous takedowns threaten content like videos that are used as formal evidence of rights violations by international bodies such as the International Criminal Court and the United Nations, said Dia Kayyali of digital rights group Witness.

“It’s a perfect storm,” the tech and advocacy coordinator said.

After the Thomson Reuters Foundation flagged Razzouk’s account to YouTube, a spokesman said the company had deleted the videos in error, even though the removal had not been appealed through its internal process. YouTube has since restored 17 of Razzouk’s videos.

“With the massive volume of videos on our site, sometimes we make the wrong call,” the spokesman said in emailed comments. “When it’s brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it.” 

In recent years social media platforms have come under increased pressure from governments to quickly remove violent content and disinformation from their platforms – increasing their reliance on AI systems. 

With the help of automated software, YouTube removes millions of videos a year, and Facebook deleted more than one billion accounts last year for violating rules like posting terrorist content. 

Last year social media companies pledged to block extremist content after a gunman livestreamed on Facebook as he killed 51 people at two mosques in Christchurch, New Zealand.

Governments have followed suit, with French President Emmanuel Macron vowing to make France a leader in containing the spread of illicit content and false information on social media platforms.

But the country’s top court this week rejected most of a draft law that would have compelled social media giants to remove any hateful content within 24 hours. 

Companies like Facebook have also pledged to remove misinformation about the coronavirus outbreak that could contribute to imminent physical harm.

These pressures, combined with an increased reliance on AI during the pandemic, put human rights content in particular jeopardy, said Kayyali. 


A girl walks with her belongings near Baghouz, Deir al-Zor province, Syria, on 5th March, 2019. PICTURE: Reuters/Rodi Said/File photo.

Social media firms typically do not disclose how frequently their AI tools mistakenly take down content. 

So, the Syrian Archive group has been using its own data to estimate how deletion rates have changed over time for human rights documentation on crimes committed in Syria, a country battered by nearly a decade of war.

The group flags accounts posting human rights content on social media platforms, and archives the posts on its servers. To approximate the rate of deletions they run a script pinging the original post each month to see if it has been removed.
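The monitoring loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Syrian Archive’s actual tooling (which the article does not publish): it re-requests each archived post’s original URL and tallies the share that have disappeared, the figure Deutch cites below.

```python
# Hypothetical sketch of a monthly takedown check: re-request each archived
# post's original URL and record whether it still resolves.
import urllib.request
import urllib.error

def is_removed(url: str, timeout: float = 10.0) -> bool:
    """Best-effort check of whether the original post appears to be gone.

    Note: a real checker would also need platform-specific logic, since
    YouTube can return a 200 "Video unavailable" page for removed videos.
    """
    try:
        req = urllib.request.Request(url, method="HEAD")
        urllib.request.urlopen(req, timeout=timeout)
        return False                       # 2xx response: still live
    except urllib.error.HTTPError as e:
        return e.code in (404, 410)        # gone from the platform
    except urllib.error.URLError:
        return False                       # network error: inconclusive

def takedown_rate(removed_flags: list[bool]) -> float:
    """Share of monitored posts that are gone (e.g. 0.13 rising to 0.20)."""
    if not removed_flags:
        return 0.0
    return sum(removed_flags) / len(removed_flags)
```

Running `takedown_rate` over one month’s flags for every archived post, and comparing the result month to month, yields the kind of trend line the group reports.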

“Our research suggests that since the beginning of the year, the rate of content takedowns of Syrian human rights documentations on YouTube roughly doubled [from 13 to 20 per cent],” said Deutch, calling the increase “unprecedented”.

In May, Syrian Archive detected more than 350,000 videos on YouTube had disappeared – up from 200,000 in May, 2019 – including videos of aerial attacks, protests, and destruction of civilians’ homes in Syria. 

Deutch said he had seen content takedowns in other war-torn countries in the region, including Yemen and Sudan. “Users in conflict zones are more vulnerable,” he said. 

Other groups, including Amnesty International and Witness, have warned of the trend elsewhere, including in sub-Saharan Africa. 

Syrian Archive was not able to test for takedowns at Facebook, because outside researchers are restricted from the platform’s application programming interface. 

But earlier this month Syrians began using the hashtag “Facebook is fighting the Syrian revolution” to flag similar content takedowns on the platform. 

Last month Yahya Daoud, a Syrian humanitarian worker with the White Helmets emergency response group, shared a post and a photo showing a woman who died in a 2012 massacre by the forces of Syrian President Bashar al-Assad in the Houla region. 

By the end of the month Daoud said his account – which he had used since 2011 to document his life in Syria – was automatically deleted without explanation. “I was depending on Facebook to be an archive for me,” he said. 

“So many memories have been lost: the death of my friends, the day I became displaced, the death of my mother,” he said, adding that he had unsuccessfully tried to appeal the decision through Facebook’s automated complaints system. 

Facebook did not respond to requests for comment. 


A damaged ambulance is pictured after an airstrike on the rebel-held town of Atareb, in the countryside west of Aleppo, Syria, on 15th November, 2016. PICTURE: Reuters/Ammar Abdullah/File photo.

Researchers say they are only able to detect a small slice of erroneous content takedowns. 

“We don’t know how many people are trying to speak and we aren’t hearing them,” said Alexa Koenig, director of the University of California Berkeley’s Human Rights Center.

“These algorithms are grabbing the content before we even see it,” said Koenig, whose center uses images and videos posted from conflict zones like Syria to document human rights abuses and build cases. 

YouTube said that 80 per cent of videos flagged by its AI were deleted before anyone had seen them in the second quarter of 2019. 

That concerns Koenig, who worries that the erasure of these videos could jeopardise ongoing investigations around the world.

In 2017 the International Criminal Court issued its first arrest warrant that rested primarily on social media evidence, after video emerged on Facebook of Libyan commander Mahmoud al-Werfalli.

The video purportedly showed him shooting dead 10 blindfolded prisoners at the site of a car bombing in Benghazi. He is still at large. 

Koenig worries this kind of documentation is now under threat: “The danger is much higher than it was just a few months ago,” she said. 

“It’s a sickening feeling, to know we aren’t close to where we need to be in preserving this content.”
