Apple has provided additional information about its forthcoming plan to scan iCloud Photos for child sexual abuse material (CSAM) using customers’ iPhones and iPads. The company has published a new document detailing the measures it hopes will boost user confidence in the programme, including a requirement that a flagged image appear in child safety databases linked to multiple governments, preventing any one country from injecting non-CSAM content into the system.
Apple’s forthcoming iOS and iPadOS updates will automatically match photos in US-based iCloud Photos accounts against a list of known CSAM image hashes compiled by child safety organisations. While many firms scan cloud storage services server-side, Apple’s on-device approach has drawn sharp criticism from several cryptography and privacy experts.
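For readers unfamiliar with hash matching, the sketch below illustrates the basic idea of checking a photo’s fingerprint against a known list. It is a deliberately simplified, hypothetical example: Apple’s actual system uses a perceptual hash (NeuralHash) and cryptographic blinding rather than the plain SHA-256 lookup shown here.

```python
import hashlib

# Simplified, hypothetical illustration of hash matching. Apple's real system
# uses a perceptual hash ("NeuralHash") plus cryptographic protections, not a
# plain SHA-256 lookup; SHA-256 stands in here only to make the idea concrete.
def image_fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_list(image_bytes: bytes, known_hashes: set[str]) -> bool:
    """On-device check: does this photo's fingerprint appear in the known list?"""
    return image_fingerprint(image_bytes) in known_hashes
```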
The document, titled “Security Threat Model Review of Apple’s Child Safety Features,” aims to ease concerns about the rollout’s privacy and security. It builds on an interview that Apple executive Craig Federighi gave to The Wall Street Journal this morning, in which he outlined some of the details.
Apple says in the document that it will not rely on a single government-linked database, such as that of the US-based National Center for Missing and Exploited Children (NCMEC), to locate CSAM. Instead, it will only match images whose hashes appear in the databases of at least two groups operating under different national jurisdictions. Because a hash found in only one database would never be matched, no single government could covertly inject unrelated content for censorship purposes.
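As a rough illustration of that overlap requirement, the Python sketch below builds an on-device list containing only hashes vouched for by at least two independent databases. The database names and hash values are invented for the example; Apple has not published its actual construction process.

```python
# Hypothetical sketch of the overlap requirement; database names and hash
# values are invented, and this is not Apple's actual build process.
from collections import Counter

def build_on_device_list(databases: dict[str, set[str]], min_sources: int = 2) -> set[str]:
    """Keep only hashes that appear in at least `min_sources` databases
    maintained under different national jurisdictions."""
    counts = Counter(h for hashes in databases.values() for h in hashes)
    return {h for h, n in counts.items() if n >= min_sources}

databases = {
    "jurisdiction_a": {"hash_1", "hash_2", "hash_3"},
    "jurisdiction_b": {"hash_2", "hash_3", "hash_9"},  # "hash_9" has one source
}
print(build_on_device_list(databases))  # {'hash_2', 'hash_3'}; 'hash_9' is excluded
```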
Apple has mentioned the possibility of using multiple child safety databases before, but until now it hadn’t described the overlap requirement. On a call with reporters, Apple said it is only naming NCMEC because it hasn’t yet finalised agreements with other groups.
The document backs up a point made by Federighi: initially, Apple will only flag an iCloud account for review if its system detects 30 CSAM images. That threshold was chosen to provide a “drastic safety margin” against false positives, the document says, and once the system’s real-world performance has been evaluated, “we may adjust the threshold.”
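In the simplest terms, the threshold works like a counter that must reach 30 before an account becomes eligible for human review. The snippet below is only a conceptual sketch: Apple’s published design enforces the limit cryptographically, so matches below the threshold cannot even be decrypted by the server.

```python
# Conceptual sketch only: Apple enforces this threshold cryptographically,
# so the server cannot inspect matches below the limit. The constant reflects
# the initial figure cited in the document and may be adjusted later.
MATCH_THRESHOLD = 30

def eligible_for_review(matched_image_count: int, threshold: int = MATCH_THRESHOLD) -> bool:
    """An account is surfaced for human review only once the number of
    matched images meets or exceeds the threshold."""
    return matched_image_count >= threshold

print(eligible_for_review(29))  # False: still inside the safety margin
print(eligible_for_review(30))  # True: crosses the reporting threshold
```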
It also offers more details on the auditing system Federighi described. Apple’s list of known CSAM hashes will be baked into every version of iOS and iPadOS, though the scanning feature will initially be enabled only in the United States. Apple will provide the full list of hashes for auditors to check against the child safety databases, offering another way to verify that it isn’t quietly matching additional images. It also says it will “refuse all requests” for its moderators to report “anything other than CSAM documents” in flagged accounts, addressing fears that the technology could be repurposed for other kinds of surveillance.
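To make the auditing idea concrete, the sketch below shows the kind of check a third party could run if Apple publishes the full on-device hash list: every published hash should be traceable to at least two of the source databases. The function and parameter names are hypothetical, since Apple has not specified the audit tooling.

```python
# Hypothetical auditor-side check; Apple has not published actual audit tooling,
# so the function and parameter names here are illustrative only.
def find_unexplained_hashes(published_hashes: set[str],
                            source_databases: dict[str, set[str]],
                            min_sources: int = 2) -> set[str]:
    """Return published hashes that cannot be traced to at least
    `min_sources` independent child safety databases."""
    unexplained = set()
    for h in published_hashes:
        sources = sum(1 for db in source_databases.values() if h in db)
        if sources < min_sources:
            unexplained.add(h)
    return unexplained  # a non-empty result would point to extra, unvetted hashes
```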
Federighi acknowledged that Apple had created “confusion” with its announcement last week. But the company has defended the update, telling reporters that while it is still finalising and iterating on details, it hasn’t changed its launch plans in response to the recent criticism.