The company will “remove content with false claims or conspiracy theories that have been flagged by leading global health organizations and local health authorities that could cause harm to people who believe them,” according to a blog post published Thursday by Kang-Xing Jin, Facebook’s head of health.
Jin said that includes claims “related to false cures or prevention methods” or “that create confusion about health resources that are available.”
The company also plans to increase its fact-checking and monitoring efforts on Instagram, which it also owns. Jin said users who click on a hashtag related to the coronavirus will now be served with a “pop-up with credible information.”
The social network wants to prioritize legitimate sources of information, Jin said, by letting select organizations run free ads that help educate people about the virus and by boosting posts that fall in line with health experts' guidance to the top of users' Facebook feeds. Facebook did not specify which organizations would be included.
Jin noted in the Thursday blog post that not all the new measures were “fully in place” yet.
“It will take some time to roll them out across our platforms and step up our enforcement methods,” he wrote. “We’re focusing on claims that are designed to discourage treatment or taking appropriate precautions.”
The move is Silicon Valley’s latest attempt to combat misinformation about the outbreak, which has infected more than 9,800 people around the world and killed more than 200 in mainland China. The World Health Organization (WHO) on Thursday declared the outbreak “a public health emergency of international concern.”
Twitter and Google have also stepped up efforts this week to guide their users to verified sources on the subject.
Google (GOOGL) announced Thursday that when people search for information about the coronavirus, it will pull up a special notice with updates from the WHO.
YouTube, which is owned by Google, said it will promote videos from credible sources when people search for clips about the virus. The company said it specifically points to content from trusted users, such as public health experts or news outlets, in search results or panels that suggest which videos to watch next.
Twitter (TWTR) said Wednesday that it would begin prompting users who search for the coronavirus to first visit official channels of information about the illness. In the United States, for example, Twitter directs users to the Centers for Disease Control and Prevention, beneath a bold headline that reads: "Know the facts."
The campaign is running in 15 locations, including the United States, the United Kingdom, Hong Kong, Singapore, and Australia, and “will continue to expand as the need arises,” the company said in a blog post.
Twitter said that, as of Wednesday, it had already seen more than 15 million tweets about the coronavirus over four weeks, "and that trend looks set to continue."
A company spokesperson told CNN Business earlier this week that it had not seen a coordinated increase in disinformation related to the virus, but would “remain vigilant” on the issue.
The moves this week are notable, particularly since social networks have long been criticized for allowing the spread of misinformation.
Instagram last year vowed to block more hashtags that surfaced vaccine misinformation after a CNN Business report found that content promoted by anti-vaccination accounts was still thriving on the platform.
While Facebook frequently touts its army of fact-checkers and reviewers who are paid to debunk false claims, the effectiveness of its policies remains in question.
On Thursday, some people pointed out that a search for “vaccines” on Instagram still brought up several pages linked to dangerous misinformation.
Facebook did not immediately respond to a request for comment when asked about those searches.