Computerworld

Facebook, Google back ‘Christchurch Call’ in wake of terror attack

Governments, major tech companies sign up to NZ-backed call

Amazon, Facebook, Google, Microsoft and Twitter are among the backers of the ‘Christchurch Call’, which commits its signatories to taking steps to address the uploading and dissemination of “terrorist and violent extremist content”.

Dailymotion, Qwant and YouTube (owned by Google) have also endorsed the statement, along with 17 governments and the European Commission. Australia is a signatory, along with New Zealand, Canada and the UK. The White House has reportedly said it will not endorse the statement, which is an initiative of the NZ government.

Amazon, Facebook, Microsoft, Twitter and Google signed the statement at a meeting hosted by French President Emmanuel Macron and New Zealand’s prime minister, Jacinda Ardern.

“The Christchurch Call announced today expands on the Global Internet Forum to Counter Terrorism (GIFCT), and builds on our other initiatives with government and civil society to prevent the dissemination of terrorist and violent extremist content,” a statement from the five tech companies said.

“Additionally, we are sharing concrete steps we will take that address the abuse of technology to spread terrorist content, including continued investment in technology that improves our capability to detect and remove this content from our services, updates to our individual terms of use, and more transparency for content policies and removals.”

“The events of Christchurch highlighted once again the urgent need for action and enhanced cooperation among the wide range of actors with influence over this issue, including governments, civil society, and online service providers, such as social media companies, to eliminate terrorist and violent extremist content online,” the Christchurch Call states.

The statement includes commitments for governments and online service providers.

Governments signing the call say they will take steps to counter terrorism and violent extremism, enforce laws that prohibit the production or dissemination of terrorist and violent extremist content in “a manner consistent with the rule of law and international human rights law, including freedom of expression”, and encourage ethical standards among media outlets, as well as support industry standards or similar frameworks for reporting on terrorist attacks.

The call also says governments will consider “appropriate action to prevent the use of online services to disseminate terrorist and violent extremist content, including through collaborative actions”. That can include awareness-raising and capacity-building activities for smaller online service providers, developing voluntary frameworks, and regulatory measures “consistent with a free, open and secure internet and international human rights law”.

Online service providers will take “transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal, without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms”.

Potential measures floated include the “expansion and use of shared databases of hashes and URLs, and effective notice and takedown procedures” as well as implementing “immediate, effective measures” to reduce the risk of live-streaming of terrorist and violent extremist content.

Facebook announced this week it would implement restrictions on its ‘Live’ feature, which was used by the Christchurch gunman to film the 15 March attack. Facebook’s vice-president of integrity, Guy Rosen, wrote that the company would implement a ‘one strike’ policy for Live.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense,” Rosen wrote. “For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.”

“We plan on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook,” the Facebook VP wrote.

Other measures outlined in the call include reviewing algorithms that “may drive users towards and/or amplify terrorist and violent extremist content”.

In April the Australian government rushed legislation through parliament that created legal obligations for online service providers to act against “abhorrent violent material” posted using their platforms.