The American technology giant Apple will deploy a new system that checks photos on iPhones in the US before they are uploaded to its iCloud storage service, looking for matches against known images of child sexual abuse.
Detection of enough matching uploads to guard against false positives will trigger a human review of the user and a report to law enforcement, Apple said. The system is designed to keep false positives to one in one trillion, it added.
The project, detailed on a new “Child Safety” page on Apple’s website, aims to address requests from law enforcement for help in combating child sexual abuse while also respecting the privacy and security practices that are a core objective of the company.
Most leading technology providers, including Google, Facebook and Microsoft, already check images against databases of known child sexual abuse imagery.
“With so many people using Apple products, these new safety measures have the lifesaving potential for children who are being enticed online,” Mr. John Clark, chief executive of the National Center for Missing & Exploited Children, said in a statement.
Law enforcement officials maintain a database of known child sexual abuse images and translate those images into “hashes”: numerical codes that positively identify the images but cannot be used to reconstruct them. Apple has implemented that database using a technology called “NeuralHash”, which is designed to also catch edited images that are similar to the originals. The database will be stored on iPhones.
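The idea behind such perceptual hashes can be illustrated with a toy sketch. The code below is not Apple’s NeuralHash (which uses a neural network); it is a minimal “average hash” over a grid of grayscale values, showing how a hash can identify an image without allowing reconstruction, and how a lightly edited copy still lands within a small Hamming distance of the original.

```python
# Toy perceptual hash (NOT Apple's NeuralHash): an "average hash"
# over a grayscale grid. Small edits flip few or no bits, so
# near-duplicates stay within a small Hamming distance.

def average_hash(pixels):
    """Hash a 2D grid of grayscale values (0-255) into a bit string."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the average;
    # the original pixel values cannot be recovered from these bits.
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming(h1, h2):
    """Count of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 40], [60, 180]]
edited   = [[205, 42], [58, 178]]   # slightly brightened/darkened copy

h_orig, h_edit = average_hash(original), average_hash(edited)
print(hamming(h_orig, h_edit))  # 0 -> edited copy still matches
```

A system comparing such hashes would treat any pair below a small distance threshold as the same image, which is how edited copies of known material can still be caught.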
When a user uploads an image to Apple’s iCloud storage service, the iPhone will create a hash of the image and compare it against the on-device database. Photos stored only on the phone are not checked, and a human review before any report to law enforcement is meant to ensure matches are genuine before an account is suspended, the company said.
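The described flow can be sketched in a few lines. This is assumed logic for illustration only, not Apple’s implementation: the hash database, the match threshold, and the `check_upload` helper are all hypothetical, standing in for the article’s account of matching at upload time and flagging for human review only once enough matches accumulate.

```python
# Sketch of the described upload-time check (hypothetical logic):
# hash each photo at upload, compare against an on-device set of
# known hashes, and flag for human review only after enough matches.

KNOWN_HASHES = {"a1b2", "c3d4"}   # hypothetical on-device hash database
MATCH_THRESHOLD = 2               # hypothetical review threshold

def check_upload(photo_hash, match_count):
    """Return the updated match count and whether review is triggered."""
    if photo_hash in KNOWN_HASHES:
        match_count += 1
    return match_count, match_count >= MATCH_THRESHOLD

count = 0
for h in ["ffff", "a1b2", "c3d4"]:   # simulated upload stream
    count, review = check_upload(h, count)
print(count, review)  # 2 True -> enough matches, queue for human review
```

Requiring multiple matches before any human review is one way a system of this kind could drive its false-positive rate down to the very low level Apple describes.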
One feature that sets Apple’s system apart is that it checks photos on the phone before they are uploaded, rather than after they arrive on the company’s servers. Users who believe their account was improperly suspended can appeal to have it reinstated, Apple added.
However, some privacy and security experts have expressed concerns that the system could eventually be expanded to scan phones more generally for prohibited content or political speech.
“Apple has sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content,” Mr. Matthew Green, a security researcher at Johns Hopkins University, warned.
In a blog post, privacy researchers India McKinney and Erica Portnoy of the Electronic Frontier Foundation wrote that it may be impossible for outside researchers to verify whether Apple keeps its promise to check only a small set of on-device content.