
The move is part of a wider raft of measures the social media giant has taken to tackle the problem of fake news on social media.
The new flagging feature for Instagram users was first introduced in the US in mid-August and has now been rolled out globally.
Users can report potentially false posts by clicking or tapping on the three dots that appear in the top right-hand corner, selecting “report”, “it’s inappropriate” and then “false information”.
Facebook says that if a fact-checker rates a story as false on Facebook and it then appears on Instagram, an extra button can be clicked to flag it there as well.
Stephanie Otway, a spokeswoman for Instagram, said: “This is an initial step as we work toward a more comprehensive approach to tackling misinformation.”
‘Quite a lot of harm’
BBC News spoke with two of Facebook's fact-checking partners, who welcomed the new feature and praised the company's track record of responding to feedback.
But they raised concerns about the additional workload and were unsure how much impact the tool would have on reducing the spread of misinformation on the platform.
“Parts of the [Instagram] tool are less well-defined than the Facebook one. The amount of content that is sent to us is much smaller,” says Aaron Sharockman, the executive director of PolitiFact, a US non-profit that has been using the tool.
Mr Sharockman says he had only been given access to 31 Instagram posts on the day of his phone interview with the BBC.
“They seem to have trouble finding out what constitutes misinformation. I am getting quite a lot of posts that I would not rate as false.
“I want to see more posts and be able to search within posts. Searching based on keywords is difficult on Instagram, especially if those words are in a photo or meme.”
He says he wants to focus on “pure misinformation” relating to politics and the 2020 US election, and especially regarding posts seen by large numbers of people.

“It is very important that platforms help to educate people about misinformation and the risks that exist on the internet and social media,” says Will Moy, director of Full Fact, the UK fact-checking charity that also works with Facebook.
Mr Moy says misinformation about vaccination, 5G phone technology and toxins in food and drink is prevalent on Instagram.
“We have seen enough to know that health misinformation on Instagram is a real danger to people and can potentially cause quite a lot of harm.”
‘Most effective platform’
One report by the research firm New Knowledge said Instagram was “perhaps the most effective platform” for Russian troll farm the Internet Research Agency (IRA), adding: “Instagram engagement outperformed Facebook, which may indicate its strength as a tool in image-centric memetic (meme) warfare.”

Samantha Bradshaw, a researcher for the Computational Propaganda Project, says that viral images and videos are “the future of disinformation and fake news”, making Instagram an ideal platform for such operations.
“No-one has time to read a long piece containing false information anymore. People want a brief, digestible and sometimes humorous image or video carrying a specific political message,” she says.
Flagging tools for users are “positive steps in the right direction”, she says, but do not address the root causes of online disinformation.
“Platforms should think deeply about the broader issues that cause false content to go viral, rather than content that is true. They need to address concerns regarding their algorithms and business models.”
In an example of how misleading images can trick people, Madonna, Leonardo DiCaprio and Cristiano Ronaldo all recently posted what they assumed were images of fires in the Amazon on Instagram, generating over 14 million likes in total.
‘Human judgement’
But experts believe some level of human intervention will always be required to ensure the validity of the process.

“Ultimately, I expect that some aspects of identifying disinformation will be reliably covered by AI,” says Ben Nimmo, head of investigations at Graphika. “But that means training enough analysts in the first instance to be able to create a large and reliable enough dataset, and then training the AI on the datasets and making sure it is up to the task, including by using human checks as a backstop.”
Mr Moy says although AI will certainly help track the dissemination of fake news, it cannot “replace the human judgement in a way that respects the free speech of everyone involved.”
“We could probably get to a place where algorithms and computers can spot fake news with a high degree of certainty,” says Mr Sharockman. “But for people to believe it, we need humans to remain involved in the process.”