Instagram is introducing new anti-grooming safety measures that will prevent adult users from sending direct messages to teenagers unless those teenagers follow them.
The move follows criticism of Facebook, Instagram's parent company, over its record on child grooming, with multiple international statements calling on the company to ensure that young people using its platforms are kept safe.
Of particular concern has been the company’s plans to move all of its platforms to using end-to-end encryption, similar to WhatsApp. Critics allege this would disable Facebook’s own internal monitoring of messages for potential grooming.
Instagram's policies require all of its users to be over 13. People who identify themselves as teenagers are encouraged to make their accounts private, hiding their posts from accounts they don't approve – but these protections depend on the age the user reports.
The company said: “While many people are honest about their age, we know that young people can lie about their date of birth. We want to do more to stop this from happening, but verifying people’s age online is complex and something many in our industry are grappling with.
“To address this challenge, we’re developing new artificial intelligence and machine learning technology to help us keep teens safer and apply new age-appropriate features,” including the restrictions on direct messaging.
The company will prevent people who say they are over 18 from sending messages to people who are under 18 and don’t follow them.
It added: “This feature relies on our work to predict people's ages using machine learning technology, and the age people give us when they sign up.
“As we move to end-to-end encryption, we’re investing in features that protect privacy and keep people safe without accessing the content of DMs.”
Instagram’s move was welcomed by Andy Burrows at the NSPCC, who described it as “correcting a dangerous design decision that should never have been allowed in the first place”.
Mr Burrows added: “There are consistently more grooming offences on Instagram than any other platform. Our latest data shows it is the platform of choice for offenders in more than a third of instances where they target children for sexual abuse.”
But he remained critical of the company’s approach to child safety in general, stating: “Nothing in this package will change the dial on Instagram’s dangerous plans to introduce end-to-end encryption, which will blindfold themselves and law enforcement to abuse and mean that offenders can target children on the site unchecked.”
As reported by Sky News last year, the UK government is considering seeking an injunction to prevent the company from rolling out end-to-end encryption, and is working with allies to build pressure on Facebook.
Chloe Squires, director of national security at the Home Office, provided written testimony to the US Senate in December 2019, complaining about Facebook's move to end-to-end encryption and rallying international support for the UK's position, since the UK's powers alone could have limited impact on the US-based corporation.
Through its own monitoring, Facebook submits thousands of reports to US authorities every year about predators using its platforms to attempt to groom children online, and millions of reports about images and videos featuring child abuse.
US child protection authorities estimate that 70% of Facebook's reports would be lost if the company allows predators and their potential victims to communicate over an end-to-end encrypted service that the company itself can no longer monitor.
The company does not dispute this figure but argues that it can use the same tools it uses with WhatsApp – looking for indications of child abuse in the metadata of messages – to detect and tackle predators.