- Mike Schroepfer, Facebook's chief technology officer, revealed in a blog post that the company's AI software now detects 94.7% of the hate speech removed from its platform, up from 80.5% a year ago and just 24% in 2017.
- Social media firms such as Facebook, Twitter and TikTok have been criticized for failing to keep hate speech, such as racial slurs and religious attacks, off their platforms.
- Facebook said it has also developed a new tool to detect deepfakes.
Facebook announced Thursday that artificial intelligence software now detects 94.7% of the hate speech that gets removed from its platform.
Mike Schroepfer, Facebook's chief technology officer, revealed the figure in a blog post, adding that it is up from 80.5% a year ago and just 24% in 2017. The figure was also shared in Facebook's latest Community Standards Enforcement Report.
Social media firms such as Facebook, Twitter and TikTok have been criticized for failing to keep hate speech, such as racial slurs and religious attacks, off their platforms.
The companies employ thousands of content moderators around the world to police the posts, photos and videos that get shared on their platforms. On Wednesday, more than 200 Facebook moderators said in an open letter to CEO Mark Zuckerberg that the company has risked their lives by forcing them back to the office during the coronavirus pandemic.
But humans alone aren't enough and the tech giants have become increasingly reliant on a field of AI known as machine learning, whereby algorithms improve automatically through experience.
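To make that idea concrete, here is a minimal, purely illustrative sketch of what "learning from experience" means in practice: a classifier trained on a handful of labeled example posts rather than hand-written rules. The tiny dataset, labels and model below are assumptions made for demonstration and have nothing to do with Facebook's production systems.

```python
# Illustrative only: a classifier that improves from labeled examples
# ("experience") instead of relying on hand-written keyword rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts (1 = violates policy, 0 = benign).
texts = [
    "I hate this weather",                                   # benign despite "hate"
    "group X are subhuman",                                  # attack
    "have a great day everyone",                              # benign
    "all members of group Y should be banned from society",   # attack
]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)  # the model improves as more labeled examples arrive

print(model.predict(["group X ruin everything"]))  # likely flagged: [1]
```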
"A central focus of Facebook's AI efforts is deploying cutting-edge machine learning technology to protect people from harmful content," said Schroepfer.
"With billions of people using our platforms, we rely on AI to scale our content review work and automate decisions when possible," he added. "Our goal is to spot hate speech, misinformation, and other forms of policy-violating content quickly and accurately, for every form of content, and for every language and community around the world."
But Facebook's AI software still struggles to spot some content that breaks the rules. It has a harder time, for example, grasping the intended meaning of images with text overlaid on them, and it doesn't always pick up on sarcasm or slang. In many of these cases, a human reviewer would quickly be able to determine whether the content violates Facebook's policies.
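One way to picture why overlaid text makes things harder: the system has to fuse an image signal and a text signal before it can make a decision. The sketch below is a generic late-fusion classifier with made-up layer sizes, not Facebook's architecture.

```python
# Generic late-fusion sketch: combine an image embedding with an embedding of
# the overlaid (OCR'd) text before classifying. All sizes are illustrative.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=256, hidden=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden),  # concatenate both modalities
            nn.ReLU(),
            nn.Linear(hidden, 2),                     # violates policy vs. benign
        )

    def forward(self, image_features, text_features):
        # image_features: output of an image encoder (e.g. a CNN)
        # text_features: encoding of the text overlaid on the image
        return self.fuse(torch.cat([image_features, text_features], dim=-1))

model = LateFusionClassifier()
logits = model(torch.randn(1, 512), torch.randn(1, 256))  # dummy inputs
print(logits.shape)  # torch.Size([1, 2])
```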
Facebook said it has recently deployed two new AI technologies to help it tackle these challenges. The first, called "Reinforced Integrity Optimizer," learns from real online examples and metrics instead of an offline dataset. The second is an AI architecture called "Linformer," which allows Facebook to use complex language-understanding models that were previously too large and "unwieldy" to work at scale.
"We now use RIO and Linformer in production to analyze Facebook and Instagram content in different regions around the world," said Schroepfer.
Facebook said it has also developed a new tool to detect deepfakes (computer-generated videos made to look real) and made some improvements to an existing system called SimSearchNet, which is an image-matching tool designed to spot misinformation on its platform.
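The image-matching idea can be sketched simply: embed each image as a vector, then compare new uploads against the vectors of images already fact-checked as false. The encoder and similarity threshold below are stand-ins chosen for illustration, not details of SimSearchNet.

```python
# Illustrative image-matching sketch: flag uploads whose embedding is very
# close to that of a known misinformation image. The encoder is a stand-in.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in encoder: in practice this would be a trained neural network."""
    vec = image.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)

def is_near_duplicate(upload, known_bad_embeddings, threshold=0.95):
    """Return True if the upload closely matches a known bad image."""
    query = embed(upload)
    sims = known_bad_embeddings @ query  # cosine similarity (unit vectors)
    return bool(np.max(sims) >= threshold)

# Toy usage with random "images"; a real system would index millions of vectors.
known = np.stack([embed(np.random.rand(32, 32, 3)) for _ in range(100)])
print(is_near_duplicate(np.random.rand(32, 32, 3), known))
```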
"Taken together, all these innovations mean our AI systems have a deeper, broader understanding of content," said Schroepfer. "They are more attuned to things people share on our platforms right now, so they can adapt quicker when a new meme or photo emerges and spreads."
Schroepfer noted the challenges Facebook faces are "complex, nuanced, and rapidly evolving," adding that misclassifying content as hate speech or misinformation can "hamper people's ability to express themselves."