In a blog post, Antigone Davis, Facebook's global head of safety, said the company has also updated its child safety policies to clarify that it will remove Facebook profiles, pages and groups, as well as Instagram accounts, that are dedicated to sharing otherwise innocent images of children with captions, hashtags or comments containing inappropriate signs of affection or commentary about the children depicted.
Davis stated that the company has also started using Google's Content Safety API to "better prioritize" content that may contain child exploitation for its content reviewers to assess.
Facebook has started testing two new tools: one aimed at potentially malicious searches for child exploitative content, and another at the non-malicious sharing of such content.
The first is a pop-up shown to users who search on the company's apps for terms associated with child exploitation. It offers ways to get help from offender diversion organizations and shares information about the consequences of viewing illegal content.
The second is a safety alert that informs users who have shared viral, meme-style child exploitative content about the harm it can cause, warns that it violates the company's policies, and notes that there are legal consequences for sharing such material.
“Accounts that promote this content will be removed. We are using insights from this safety alert to help us identify behavioral signals of those who might be at risk of sharing this material, so we can also educate them on why it is harmful and encourage them not to share it on any surface — public or private,” Davis wrote in the blog post.
The post also stated that content that isn’t explicit and doesn’t depict child nudity is harder to define. “Under this new policy, while the images alone may not break our rules, the accompanying text can help us better determine whether the content is sexualizing children and if the associated profile, page, group or account should be removed,” the post stated.
Davis wrote in the blog post that after consultations with child safety experts and organizations, Facebook has also made it easier to report content for violating its child exploitation policies. “To do this, we added the option to choose ‘involves a child’ under the ‘Nudity & Sexual Activity’ category of reporting in more places on Facebook and Instagram. These reports will be prioritized for review,” the blog post stated.
Facebook said to understand how and why people share child exploitative content on Facebook and Instagram, the company conducted an analysis of the illegal child exploitative content it reported to the National Center for Missing and Exploited Children (NCMEC) in October and November last year. Facebook said it found that more than 90% of this content was the same as or visually similar to previously reported content. And copies of just six videos were responsible for more than half of the child exploitative content it reported in that time period.
The company said it worked with NCMEC and other leading experts on child exploitation to develop a "research-backed taxonomy" to categorize a person's apparent intent in sharing such content. Based on this taxonomy, it evaluated 150 accounts that it reported to NCMEC for uploading child exploitative content in July and August of 2020 and January 2021, and it estimates that more than 75% of these people did not exhibit malicious intent. The company said they appeared to share this content for other reasons, such as outrage or poor humor. “While this study represents our best understanding, these findings should not be considered a precise measure of the child safety ecosystem. Our work to understand intent is ongoing,” Davis wrote in the blog post.
Discussing the new tools and updates in a press briefing, Karuna Nain, Facebook's director of global safety policy, said people won't come to the platform to connect and share if they don't feel safe.
“Safety is core to our mission. We trebled our workforce for safety in the past few years in this space. We have around 35,000 people who focus on safety and security across the company. Around 15,000 of these people focus on our community operations or our review teams. Typically that team reviews about two million pieces of content in a day. We are invested in safety for the long haul. We continue to invest in the space and we need to stay on top of new trends. This is an ongoing piece of work for us,” she added.