Apple recently announced a new suite of features to combat the spread of child sexual abuse material. Everyone agrees that child safety is important and that companies must do more to protect children online. But no technology is ever neutral — not even with the lofty goal of safeguarding children from predators.
As advocates warned, Apple’s initiative could be the start of a wave of privacy violations, setting a dangerous precedent for the future of digital privacy rights. The fact is that we need more transparency from Apple about how these policies will be put into practice and strong guardrails, both corporate and regulatory, to prevent abuse of these new features.
The initiative comprises three new features, two of which raised red flags for privacy advocates. The first is a tool that automatically scans photos stored on users’ devices before they are uploaded to iCloud, comparing digital fingerprints, or hashes, of those photos against a database of known child sexual abuse images maintained by the National Center for Missing & Exploited Children, a private nonprofit organization established by Congress in 1984. If a match is found, law enforcement may be notified.
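To make the mechanism concrete, here is a minimal, purely illustrative sketch of hash-database matching in Python. It is not Apple’s implementation: Apple has described a perceptual “NeuralHash” computed on the device and compared against fingerprints supplied by the National Center for Missing & Exploited Children, while this toy example uses an ordinary cryptographic hash and an invented KNOWN_HASHES set simply to show the flag-on-match logic.

```python
import hashlib

# Hypothetical stand-in for the database of known-image fingerprints.
# (The single entry below is just the SHA-256 of an empty file, used so the
# demo produces a match; it is not a real database entry.)
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest standing in for a perceptual image hash."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag(image_bytes: bytes) -> bool:
    """Flag a photo only if its fingerprint appears in the known database."""
    return fingerprint(image_bytes) in KNOWN_HASHES

if __name__ == "__main__":
    sample = b""  # empty file whose hash matches the demo entry above
    print(should_flag(sample))  # True -> the photo would be flagged for review
```

The structural point for the privacy debate is that this comparison runs on the user’s own device, against a list of fingerprints the user cannot see or audit.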
The other controversial new feature will flag sexually explicit photos sent or received by minors using the Messages app, with an option for parents to receive notifications if their children view flagged photos. While that sounds like a helpful tool for parents, it may also lead to other risks.
For starters, the flagging feature could have discriminatory effects because of algorithmic bias in how it chooses which images to flag. Instagram, for example, recently faced backlash when it introduced explicit content filters that some argue led to discriminatory blocking or downranking of posts by creators from marginalized racial, gender and other backgrounds.
This feature could even lead to greater abuse of vulnerable children, including potentially outing LGBTQ children to homophobic families. As legal scholar and advocate Kendra Albert put it, “These ‘child protection’ features are going to get queer kids kicked out of their homes, beaten, or worse.” The parental notification feature could also prevent children in abusive homes from sharing photographic evidence of their abuse.
The good news is that the parental notification feature will be available only for children under 13, and minor users will be informed that the notification setting is on. In addition, both the risks and the benefits of the Messages scanning feature may be limited, as users can easily use any number of other messaging apps to evade the content flagging and parental notification.
Regardless, the public deserves more transparency about how Apple plans to identify problematic images and what algorithms it plans to use for this feature.
The device scanning feature is even worse. Many tech platforms already include safety features that monitor content for child sexual abuse material, often reporting directly to the National Center for Missing & Exploited Children. Most of those mechanisms, however, scan content that has been shared to servers or posted to sites. Apple’s safety tool, in contrast, would scan photos saved on users’ devices. Doing so would by definition reduce users’ control over who can access the information stored on their devices, a shift that carries a much higher risk of future abuse.
While Apple may mean well, it’s not hard to imagine how this safety feature could lead to even greater privacy incursions in the future. Today, the justification is child safety. Tomorrow, the justification might be counterterrorism or public health or national security. Once we begin giving up our digital rights, it is hard to turn back the clock and restore past protections.
Even worse, Apple’s device scanning feature could open the door to abuse by other parties, including state actors. Governments around the world have long called for “backdoor” access to applications and devices — built-in access that would get around standard security measures (like encryption) and allow governments to view, manipulate or control the data and devices you own.