Twitter and YouTube must self-regulate on terrorism

Customers leave the Westgate shopping mall after it reopened on July 18, 2015 in Nairobi, Kenya. Simon Maina / AFP

Nairobi’s Westgate mall reopened on the Eid Al Fitr weekend, 22 months after it was attacked by a terrorist group that had live-tweeted the bloodshed and the drama of those four harrowing days in September 2013.

The reopening was a timely reminder of how social media can be creatively misused by terrorists for propaganda and command and control purposes. And that reminder came as governments urged some of the world’s biggest tech companies to self-regulate and prevent social media platforms from serving as distribution channels for extremist groups.

Clearly, the case for absolute free speech is now in direct conflict with demands for acceptable restraint on jihadi groups’ online activity. Everyone acknowledges that there is a problem, but there is little agreement about where the limits of acceptable speech lie.

Much has transpired in the months since Al Qaeda-linked Somali militant group Al Shabab live-tweeted the massacre at that upscale mall in the Kenyan capital. ISIL has been using social media with devastating effect. Though it hasn’t live-tweeted the beheadings, the burning and the drownings, it has repeatedly posted gruesome YouTube videos slick with production tricks and gloated on Twitter about its brutality. It has incited the online community to kill or maim and share the results.

James Comey, director of the US Federal Bureau of Investigation, recently told Congress that ISIL is reaching out, through Twitter, to about 21,000 English-language followers with a message that basically goes as follows: “‘Come to the so-called caliphate and live the life of some sort of glory or something; and if you can’t come, kill somebody where you are; kill somebody in uniform; kill anybody; if you can cut their head off, great; video­tape it; do it, do it, do it’ … (it’s) a devil on their shoulder all day long, saying ‘kill, kill, kill, kill’.”

Mr Comey’s numbers may sound low, but social media influence is about more than follower counts. A US Homeland Security study said in May that ISIL uses social media in a way that “burnish(es) an image of strength and power” through its visibility on popular platforms and through the sharing, retweeting and online discussion it generates.

Should ISIL be pushed offline then? Some European governments have been asking social media platforms to take responsibility and limit ISIL’s fatal fascination. Just this month, the US Senate Intelligence Committee approved a bill that requires social media companies to alert the authorities to any terrorism-related content on their sites. In June, a UN panel that advises the Security Council on sanctions against extremist groups said that “high-definition digital terror” was proliferating too easily and it demanded to know what Twitter, Facebook, YouTube and others were doing about it.

The answers vary. While ISIL’s online presence grows like a tumour, social media companies flip-flop about the nature of the disease and its cure. They question the rights and wrongs of being turned into regulators of content that they don’t create, but merely make available. This has resulted in patchy policy, if the ad hoc decisions can be called policy. After Westgate, for instance, Twitter was repulsed enough to take down Al Shabab’s accounts, some of which masqueraded as news services offering ghastly real-time updates on body counts. But the company left up ISIL’s boastful claims and blood-curdling threats after last month’s attack on tourists in Sousse, Tunisia.

YouTube appears to follow a similarly conflicted case-by-case logic. In June, some of its senior officials admitted that the company did not want to serve as a “distribution channel for this horrible, but very newsworthy, terrorist propaganda”. This was in reference to a grisly ISIL video that showed three killings – a burning alive in a car, a drowning in a locked cage and a decapitation by necklace bombs. Versions of that video remain available even though Google claims that it removes terrorism-related posts every day, generally after users flag them.

Even before ISIL’s rise, Facebook was the most unyielding of all social networking sites in its purge of terrorism-related content. With content reviewers in four locations around the world, it aggressively takes down posts and photos and blocks accounts. But it is Twitter, ISIL’s social media tool of choice, that has been less persuaded of the need to self-censor what its head of global public policy, Colin Crowell, has called “painful content”. Earlier this year, Mr Crowell explained that Twitter saw its role as “the provider of this open platform for free expression … The platform doesn’t take sides.”

But that’s precisely the problem. Shouldn’t it take sides in the battle between an abhorrent cult and all that is right and good and true about being human?

It’s worth noting that even in the US, where free speech is a constitutional right, federal law is unambiguous about child pornography. It prohibits the production, distribution, reception or possession of any image in that category. Computer scientists have responded with software that identifies so-called unique digital markers in relevant images, allowing companies to delete and report them. Social media relies on this software too. Though it would probably be much harder to identify and delete terrorism-related content – euphemistic words and new images won’t have recognisable digital markers because they would essentially be virgin territory – what about starting with the violent propaganda and graphic visuals already available? As with child pornography, surely every civilised society can agree that ISIL’s sadism and savagery does not deserve a platform? And each time ISIL migrates from one service to another, possibly less popular site, it loses visibility and must be digitally hunted down.
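The matching step that this software performs can be sketched in a few lines. The sketch below is purely illustrative: the blocklist digest and function names are invented for the example, and it uses an exact cryptographic hash, whereas real deployments such as Microsoft’s PhotoDNA use perceptual hashes so that resized or re-encoded copies of a known image still match.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited images.
# (This demo entry is the digest of the empty byte string, used only so
# the example is self-contained and checkable.)
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Flag an upload whose digest matches a known prohibited image."""
    return digest(upload) in KNOWN_BAD_HASHES

print(should_block(b""))   # matches the demo blocklist entry: True
print(should_block(b"x"))  # unknown content: False
```

The weakness the column notes follows directly from this design: matching only works against images already in the blocklist, so novel propaganda – “virgin territory” – slips through until it is flagged, hashed and added.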

There is precedent for limiting expression if it is likely to have an adverse effect on society. When supporters of the Kurdish leader Abdullah Ocalan in Europe began a sweeping campaign of self-immolation in the late 1990s, the BBC, where I then worked, decided to limit coverage of the actual act of people setting themselves on fire. For traditional media, as for social media, perhaps the watchword must be: do as little harm as possible.