Facebook said it would stop people who have recently posted or shared terrorist propaganda from broadcasting live video on its service, The Wall Street Journal reported. The move is the company’s most concrete response so far to pressure to rein in the feature after it was used to broadcast the attack that killed 51 people at mosques in Christchurch, New Zealand.
The social media giant said it would impose a “one-strike” rule: people who violate certain Facebook rules, including its restrictions on posting terrorist content without context, will be blocked from using the company’s live-video streaming feature for a limited time, for instance 30 days.
“Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” Guy Rosen, Facebook’s vice president for integrity, said in a blog post.
While the restrictions apply only to the platform’s Live feature, the company said it plans to extend them to other areas over the coming weeks, beginning with barring those who are banned from creating ads on Facebook, ZDNet reported.
“We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook,” the post continued. “Our goal is to minimize risk of abuse on Live while enabling people to use Live in a positive way every day.”
The change comes ahead of a summit in Paris on Wednesday at which companies including Facebook and Alphabet Inc.’s Google, owner of YouTube, are set to join several countries, including France, New Zealand, the U.K. and Jordan, in an initiative called the “Christchurch Call.”
An early draft of the call, viewed by The Wall Street Journal, includes a specific commitment from social-media companies to implement immediate measures to reduce the risk that anyone can use live-streaming to broadcast extremist content.
A Facebook spokesperson said such a restriction would have prevented the alleged shooter from using his Facebook account to live-stream the attack on two mosques in Christchurch in March.
Facebook’s move follows a similar step by YouTube, which restricts some live-streaming features to channels with more than 1,000 subscribers. Live video has been a particular focus of concern because of several recent incidents in which disturbing or extremist content was broadcast live. Tech companies say it is harder for them to detect what is happening in a live stream than in still images or prerecorded video, the Journal adds.
Tech companies are under growing pressure on a number of fronts. The European Union has one of the world’s most comprehensive privacy laws and recently passed a copyright directive that imposes new restrictions and obligations on big internet companies. After several investigations into whether tech giants are violating competition rules, some politicians and others are calling for them to be broken up.
According to the Journal, a number of countries, most recently France, have proposed tough new rules governing the divisive questions of how social-media firms should police hate speech and cyberbullying on their platforms.
Terrorist content, including propaganda, recruitment videos and material depicting attacks, has been less controversial because it is easier to draw a line around what should be removed; Facebook and Google both have automated tools to detect Islamic State content, for instance. Nevertheless, tech companies are under pressure, at the EU, the G-7 and other international venues, to remove that content more quickly, the Journal concludes.