After months of struggling with deceptively altered videos, Facebook announced last week that it would ban deepfakes from the platform as part of a new policy on manipulated media. Deepfakes are images and videos manipulated with artificial intelligence tools to depict something fake. Monika Bickert, Facebook’s vice president of global policy management, announced in a blog post on Monday that the new policy adds deepfakes to Facebook’s list of prohibited content categories, alongside nudity, hate speech, and graphic violence.

The ban has been welcomed because it should help curb the spread of false news and disinformation. “While these videos are still rare on the Internet, they present a significant challenge for our industry and society as their use increases,” the company wrote in the blog post.

Contrary to its reputation for acting on issues only after they become widespread, Facebook tackled this problem relatively early. But like other attempts by social media companies to police content on their platforms, Facebook’s new policy drew criticism for not solving the problem as a whole and for leaving too many loopholes.

Before discussing the loopholes, let’s look at the details of the policy. For a video to be banned from the site, it must meet two conditions. One, it must be manipulated “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not say.” Two, it must be “the product of artificial intelligence or machine learning.”
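The key point is that both conditions must hold at once: a misleading edit made without AI, or an AI-generated clip that wouldn’t mislead an average viewer, falls outside the ban. A minimal sketch of that conjunction (hypothetical function and parameter names, not Facebook’s actual moderation code):

```python
def violates_deepfake_policy(misleadingly_manipulated: bool,
                             ai_generated: bool) -> bool:
    """Illustrative only: models the two-condition test described above.

    A video is removed only when BOTH conditions hold:
    1. it is edited in ways not apparent to an average person and would
       likely mislead viewers about what the subject said;
    2. it is the product of artificial intelligence or machine learning.
    """
    return misleadingly_manipulated and ai_generated


# A crudely edited "cheapfake" fails condition 2, so it stays up:
print(violates_deepfake_policy(True, False))   # False
# Only a misleading AND AI-generated video is removed:
print(violates_deepfake_policy(True, True))    # True
```

The conjunction is exactly what critics object to: dropping either condition to `False` is enough to keep a video on the site.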

Facebook’s deepfake ban policy is being criticized for leaving out videos that can be just as misleading as total fakes. Because the policy doesn’t apply to a video “that has been edited solely to omit or change the order of words,” the most common type of editing, the kind used to create funny videos and memes, can still be used to spread disinformation. This means that videos made with low-grade software using simple cuts and effects are exempt from the policy.

Because of this, the widely shared videos of House Speaker Nancy Pelosi, former Vice President Joe Biden, and Mark Zuckerberg will remain on the site, although the platform reportedly will limit their spread.

Facebook’s policy seems to have failed to satisfy political figures, who were supposed to be the people most affected by the videos and the reason for the ban. Quite unsurprisingly, members of Pelosi’s staff expressed concerns about the new rules. Her deputy chief of staff, Drew Hammill, acknowledged the change in a statement to The Verge but said, “Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation.”

Biden’s campaign spokesman, Bill Russo, took to Twitter to say, “Facebook’s announcement today is not a policy meant to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress.”

Many critics and experts have called Facebook’s deepfake ban a weak measure against the spread of fake news because it leaves a large amount of misleading content in place. “I think the new ban on AI-driven deepfakes is a step in the right direction, but it’s disappointing that Facebook’s new policy apparently won’t result in the removal of provably false videos doctored with less advanced means,” said Paul Barrett, the deputy director of NYU’s Center for Business and Human Rights and an expert on political disinformation.

Facebook is banning only the videos and images heavily manipulated by AI, which are easier for automated software to recognize and don’t require human review to decide what should stay up and what shouldn’t. The simpler edits found in commonly doctored videos, by contrast, are harder for those systems to detect. It is clear that Facebook needs to take more serious measures to prevent the spread of false news if it is to truly satisfy politicians and tech critics.