Armed police on St Thomas Street, London, Sunday June 4, 2017, near the scene of Saturday night's terrorist incident on London Bridge and at Borough Market. Several people were killed in the terror attack at the heart of London and dozens injured. Prime Minister Theresa May convened an emergency security cabinet session Sunday to deal with the crisis. (Dominic Lipinski/PA via AP)
DETROIT — In the wake of Britain’s third major attack in three months, Prime Minister Theresa May called on governments to form international agreements to prevent the spread of extremism online.
Here’s a look at extremism on the web, what’s being done to stop it and what could come next.
Q. What are technology companies doing to make sure extremist videos and other terrorist content doesn’t spread across the internet?
A. Internet companies use technology plus teams of human reviewers to flag and remove posts from people who engage in extremist activity or express support for terrorism.
Google, for example, says it employs thousands of people to fight abuse on its platforms. Google's YouTube service removes videos that contain hateful content or incite violence, and its software prevents a removed video from being reposted. YouTube says it removed 92 million videos in 2015; 1 percent of those came down for terrorism or hate speech violations.
Facebook, Microsoft, Google and Twitter teamed up late last year to create a shared industry database of unique digital fingerprints for images and videos that are produced by or support extremist organizations.
Those fingerprints help the companies identify and remove extremist content. After the attack on Westminster Bridge in London in March, tech companies also agreed to form a joint group to accelerate anti-terrorism efforts.
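The fingerprint-matching idea can be sketched in a few lines. This toy uses an ordinary SHA-256 hash as the fingerprint; real systems use perceptual hashes that survive re-encoding and cropping, and the function names here are illustrative, not the companies' actual APIs:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Hypothetical helper: a cryptographic hash stands in for the
    # perceptual fingerprints real systems compute from image/video features.
    return hashlib.sha256(data).hexdigest()

# The shared industry database, reduced to a set of known fingerprints.
shared_db = {fingerprint(b"known-extremist-video-bytes")}

def should_block(upload: bytes) -> bool:
    # An upload is flagged if its fingerprint is already in the database.
    return fingerprint(upload) in shared_db
```

Note that changing a single byte defeats a cryptographic hash like the one above, which is why the shared database relies on perceptual fingerprints: they hash visual and audio features so that re-encoded copies of the same video still match.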
Twitter says in the last six months of 2016, it suspended a total of 376,890 accounts for violations related to the promotion of extremism. Three-quarters of those were found through Twitter’s internal tools; just 2 percent were taken down because of government requests, the company says.
Facebook says it alerts law enforcement if it sees a threat of an imminent attack or harm to someone. It also seeks out potential extremist accounts by tracing the “friends” of an account that has been removed for terrorism.
Q. What are technology companies refusing to do when it comes to terrorist content?
A. After the 2015 mass shooting in San Bernardino, California, and again after the Westminster Bridge attack, the U.S. and U.K. governments sought access to encrypted communications (messages scrambled so that only the sender and intended recipient can read them) between the attackers. Apple and WhatsApp refused, although the governments eventually got the information they wanted by working around the companies.
Tech companies say encryption is vital and that weakening it would not hurt only extremists: encryption also protects bank accounts, credit card transactions and all kinds of other information that people want to keep private. But others, including former FBI Director James Comey and Democratic Sen. Dianne Feinstein of California, have argued that the inability to access encrypted data is a threat to security.
Feinstein has introduced a bill to give the government so-called “back door” access to encrypted data.
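What is at stake can be illustrated with a minimal sketch of symmetric encryption, here a toy XOR one-time pad built only from the standard library (production messengers use vetted ciphers such as AES, but the principle is the same): whoever holds the key can read the message, and a mandated "back door" amounts to a second copy of the key existing outside the user's control.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy one-time pad: XOR each plaintext byte with the matching key byte.
    # Without the key, the ciphertext reveals nothing about the message.
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"transfer $500 to savings"
key = secrets.token_bytes(len(message))  # random key as long as the message

ciphertext = encrypt(key, message)
```

If the key is escrowed with a third party so police can read a suspect's messages, that same escrowed key, if leaked or abused, also exposes the bank transfer above; this is the trade-off the companies and Feinstein's bill are arguing over.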
Q. Shouldn’t tech companies be forced to share encrypted information if it could protect national security?
A. Weakening encryption won’t make people safer, says Richard Forno, who directs the graduate cybersecurity program at the University of Maryland, Baltimore County. Terrorists will simply take their communications deeper underground by developing their own cyber channels or even reverting to paper notes sent by couriers, he said.
“It’s playing whack-a-mole,” he said. “The bad guys are not constrained by the law. That’s why they’re bad guys.”
But Erik Gordon, a professor of law and business at the University of Michigan, says society has sometimes determined that the government can intrude in ways it might not normally, as in times of war. He says laws may eventually be passed requiring companies to share encrypted data if police obtain a warrant from a judge.
“If we get to the point where we say, ‘Privacy is not as important as staying alive,’ I think there will be some setup which will allow the government to breach privacy,” he said.
Q. Is it really the tech companies’ job to police the internet and remove content?
A. Tech companies have accepted that this is part of their mission. In a Facebook post earlier this year, CEO Mark Zuckerberg said the company was developing artificial intelligence so its computers can tell the difference between news stories about terrorism and terrorist propaganda. “This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide,” Zuckerberg said.
But Gordon says internet companies may not go far enough, since they need users in order to sell ads.
“Think of the hateful stuff that is said. How do you draw the line? And where the line gets drawn determines how much money they make,” he said.
Others say the focus on tech companies and their responsibilities is misplaced. Ross Anderson, a professor of security engineering at the University of Cambridge, says blaming Facebook or Google for the spread of terrorism is like blaming the mail system or the phone company for Irish Republican Army violence 30 years ago. Instead of working together to censor the internet, Anderson says, governments and companies should work together to share information more quickly.
Former Secretary of State John Kerry also worries about placing too much blame on the internet instead of the underlying causes of violence.
“The bottom line is that in too many places, in too many parts of the world, you’ve got a large gap between governance and people and between the opportunities those people have,” Kerry said Sunday on NBC’s “Meet the Press.”