Apple reportedly plans to make iOS detect child abuse photos

CyberTech

Level 44
Thread author
Verified
Top Poster
Well-known
Nov 10, 2017
3,250
A security expert claims that Apple is about to announce photo identification tools that would identify child abuse images in iOS photo libraries.

Apple has previously removed individual apps from the App Store over child pornography concerns, but now it's said to be about to introduce such detection system-wide. Using photo hashing, iPhones could identify Child Sexual Abuse Material (CSAM) on the device.

Apple has not confirmed this, and so far the sole source is Matthew Green, a cryptographer and associate professor at the Johns Hopkins Information Security Institute.



The rest
 

Gandalf_The_Grey

Level 83
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 24, 2016
7,264
Apple will inform police if they believe you have abusive images on your iPhone
The Financial Times reports that Apple is working on a system which would scan photos on your iPhone for images of child abuse and then contact police if detected.

The so-called neuralMatch system has been trained on a database from the National Center for Missing and Exploited Children, and photos on your handset and uploaded to iCloud will be scanned continuously.

If an image suggestive of abuse is detected, the image will be referred to a team of human reviewers who will then alert law enforcement if an image is verified.

The system would initially be US-only.

Apple is, of course, not doing anything different from other cloud storage companies, though performing the scan on the device itself is the exception.

In their support document Apple explains the benefits:
  • Apple does not learn anything about images that do not match the known CSAM database.
  • Apple can't access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account.
  • The risk of the system incorrectly flagging an account is extremely low. In addition, Apple manually reviews all reports made to NCMEC to ensure reporting accuracy.
  • Users can't access or view the database of known CSAM images.
  • Users can't identify which images were flagged as CSAM by the system.
The big concern, of course, is false positives and their potential consequences. While Apple says the risk is “extremely low”, at this scale even a tiny rate adds up: if the risk were 1 in 1 million, about 1,000 of Apple’s billion iPhone users could end up having to explain themselves to the police despite not having done anything wrong.
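To put that arithmetic in concrete terms, here is a quick back-of-the-envelope check. Both numbers are the hypothetical figures from the paragraph above, not anything Apple has published:

```python
# Rough estimate of wrongly flagged accounts at scale.
# Both inputs are the post's hypothetical figures, not Apple's published numbers.
false_positive_rate = 1 / 1_000_000    # assumed chance a given account is wrongly flagged
total_accounts = 1_000_000_000         # assumed number of iPhone users

expected_wrongly_flagged = false_positive_rate * total_accounts
print(f"{expected_wrongly_flagged:,.0f} accounts")   # prints: 1,000 accounts
```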

The other, less immediate concern, expressed by the EFF, is that the system may be broadened by mission creep, or by pressure from totalitarian governments, to include other imagery, for example of terrorist acts, symbols used by dissidents, or even LGBT imagery, a favourite target of increasingly right-wing governments in Eastern Europe.

The feature will be rolling out as part of iOS 15.
 

jetman

Level 10
Verified
Well-known
Jun 6, 2017
477
If Apple are going to start scanning every image on every iPhone, then it raises some serious privacy concerns.

Even if the original intention is to search for illegal images, it wouldn't take much tweaking to start searching for other things.
Personally, I think this is open to abuse.
 

CyberTech

Level 44
Thread author
Verified
Top Poster
Well-known
Nov 10, 2017
3,250
Apple's plans to scan users' iCloud Photos libraries against a database of child sexual abuse material (CSAM) to look for matches, and to scan children's messages for explicit content, have come under fire from privacy whistleblower Edward Snowden and the Electronic Frontier Foundation (EFF).

In a series of tweets, the prominent privacy campaigner and whistleblower Edward Snowden highlighted concerns that Apple is rolling out a form of "mass surveillance to the entire world" and setting a precedent that could allow the company to scan for any other arbitrary content in the future.



Snowden also noted that Apple has historically been an industry-leader in terms of digital privacy, and even refused to unlock an iPhone owned by Syed Farook, one of the shooters in the December 2015 attacks in San Bernardino, California, despite being ordered to do so by the FBI and a federal judge. Apple opposed the order, noting that it would set a "dangerous precedent."




Apple today announced a series of new child safety initiatives that are coming alongside the latest iOS 15, iPadOS 15, and macOS Monterey updates and that are aimed at keeping children safer online.

User devices will download an unreadable database of known CSAM image hashes and will do an on-device comparison to the user's own photos, flagging them for known CSAM material before they're uploaded to iCloud Photos. Apple says that this is a highly accurate method for detecting CSAM and protecting children.

CSAM image scanning is not an optional feature and it happens automatically, but Apple has confirmed to MacRumors that it cannot detect known CSAM images if the iCloud Photos feature is turned off.
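As a very rough sketch of the kind of check being described, the logic looks something like the snippet below. This is not Apple's actual implementation: the real system reportedly uses a perceptual NeuralHash, a blinded on-device database, and cryptographic threshold secret sharing, whereas this sketch uses an ordinary cryptographic hash, a plain set with a placeholder digest, and a made-up threshold.

```python
import hashlib
from pathlib import Path

# Stand-ins for illustration only: SHA-256 instead of Apple's perceptual
# NeuralHash, a plain set with a placeholder digest instead of the blinded
# database shipped to devices, and an invented threshold value.
KNOWN_HASH_DATABASE = {
    "0" * 64,  # placeholder digest, not a real entry
}
MATCH_THRESHOLD = 30  # hypothetical value for the example

def photo_hash(path: Path) -> str:
    """Hash a photo's bytes (stand-in for a perceptual hash)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def count_matches(photo_paths: list[Path], icloud_photos_enabled: bool) -> int:
    """Count photos whose hashes appear in the known-hash database.

    Mirrors the behaviour described above: if iCloud Photos is turned off,
    nothing is matched at all.
    """
    if not icloud_photos_enabled:
        return 0
    return sum(1 for p in photo_paths if photo_hash(p) in KNOWN_HASH_DATABASE)

def exceeds_threshold(photo_paths: list[Path], icloud_photos_enabled: bool) -> bool:
    """Only past the match threshold would anything be surfaced for human
    review; in the real design that gate is enforced cryptographically,
    not by a simple counter like this."""
    return count_matches(photo_paths, icloud_photos_enabled) > MATCH_THRESHOLD
```

The point of the sketch is just the shape of the flow: hash locally, compare against an opaque list, and only act once enough matches accumulate.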


One of the new features, Communication Safety, has raised privacy concerns because it allows Apple to scan images sent and received by the Messages app for sexually explicit content, but Apple has confirmed that this is an opt-in feature limited to the accounts of children and that it must be enabled by parents through the Family Sharing feature.

If a parent turns on Communication Safety for the Apple ID account of a child, Apple will scan images that are sent and received in the Messages app for nudity. If nudity is detected, the photo will be automatically blurred and the child will be warned that the photo might contain private body parts.

"Sensitive photos and videos show the private body parts that you cover with bathing suits," reads Apple's warning. "It's not your fault, but sensitive photos and videos can be used to hurt you."
 

CyberTech

Level 44
Thread author
Verified
Top Poster
Well-known
Nov 10, 2017
3,250
WhatsApp won’t be adopting Apple’s new Child Safety measures, meant to stop the spread of child abuse imagery, according to WhatsApp’s head Will Cathcart. In a Twitter thread, he explains his belief that Apple “has built software that can scan all the private photos on your phone,” and said that Apple has taken the wrong path in trying to improve its response to child sexual abuse material, or CSAM.

Apple’s plan, which it announced on Thursday, involves taking hashes of images uploaded to iCloud and comparing them to a database that contains hashes of known CSAM images. According to Apple, this allows it to keep user data encrypted and run the analysis on-device while still allowing it to report users to the authorities if they’re found to be sharing child abuse imagery. Another prong of Apple’s Child Safety strategy involves optionally warning parents if their child under 13 years old sends or views photos containing sexually explicit content. An internal memo at Apple acknowledged that people would be “worried about the implications” of the systems.



And more
 

SpiderWeb

Level 13
Verified
Top Poster
Well-known
Aug 21, 2020
608

This is really worse than anyone can imagine.
  • So the database of CSAM images is file hashes. It's impossible to validate its contents, both by design and for ethical reasons. They can't explain who exactly is supplying this database of hashes; it's just assumed to be one or more US govt agencies.
  • So Apple, even though they said everything on your phone is encrypted, is keeping hashes of all the files on your phone. This allows them to infer and match up what files you have on your phone even if they can't directly look into your phone. So they have been hoarding metadata of all of our data.

You see the problem here? All the government has to do is submit a list of hashes of files they deem a national security threat (it doesn't have to be CSAM) and boom, you're on some list. Apple cannot review the files behind the CSAM list because they only have hashes, and looking at the files directly would be unethical if not highly illegal. This is like having an antivirus where the AV company handed over management of the malware signatures to your government, along with whatever else the government considers malicious... There's nothing stopping the government from secretly adding other types of files into the mix to recognize content that they don't want in circulation. The more information we get about this, the worse it actually is.
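To illustrate that point with a toy example (nothing here is taken from Apple's design; the digests are just SHA-256 over placeholder bytes): once files are reduced to hashes, the device only ever sees opaque digests, so it has no way of telling whether an entry in the list came from abuse imagery or from any other file someone wanted to track.

```python
import hashlib

# Two very different "files": one standing in for abuse imagery, one for a
# politically inconvenient document. Both are placeholder bytes.
csam_stand_in = b"placeholder bytes standing in for an abuse image"
leaflet_stand_in = b"placeholder bytes standing in for a banned pamphlet"

for label, blob in (("entry A", csam_stand_in), ("entry B", leaflet_stand_in)):
    print(label, hashlib.sha256(blob).hexdigest())

# Both entries are just 64 hex characters; nothing in a digest reveals what
# kind of content it was derived from, which is the auditability concern above.
```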
 

Marko :)

Level 23
Verified
Top Poster
Well-known
Aug 12, 2015
1,264
Google does it too btw.

I'm not sure it does. All that website says is they have an API which they use on search and YouTube. I couldn't find anything on what would indicate they scan content of Google Drive, for example. Even their terms of use say you're forbidden from uploading that stuff, and if you find something, that you should report it. Nothing else.

Even if Google scans Drive (which, again, there is no proof of), at least they aren't falsely advertising themselves as a company focused on privacy the way Apple does. You can't say you're a privacy company while going through people's stuff without their knowledge.
 
Mar 7, 2020
84
I guess it's time for users to boycott Apple. Their entire "we value privacy" slogan went down the drain.
They most likely won't back down on this, since they seem to be an incredibly stubborn company. "Only a small part of users had their devices bricked/disagreed/whatever..."

It's highly likely that the alphabet boys will be contacting Apple and wanting to expand it further. In a few years, we might hear of "parasites are doing x and y; we have implemented feature z to stop them, even if it means further invading your privacy", or whatever.
I guess it's time for the U.S. to get its own version of the GDPR, or possibly say goodbye to privacy.
 

SpiderWeb

Level 13
Verified
Top Poster
Well-known
Aug 21, 2020
608
I'm not sure it does. All that website says is they have an API which they use on search and YouTube. I couldn't find anything on what would indicate they scan content of Google Drive, for example. Even their terms of use say you're forbidden from uploading that stuff, and if you find something, that you should report it. Nothing else.

Even if Google scans Drive (which, again, there is no proof of), at least they aren't falsely advertising themselves as a company focused on privacy the way Apple does. You can't say you're a privacy company while going through people's stuff without their knowledge.
They most definitely scan all of our content. Most certainly video content is being scanned against CSAM. And other files are not end-to-end encrypted by design, because Google Drive gives you a prompt when you try to upload a format that they don't officially support; they need to know the basics of that file in order to offer you options to edit it in Google Docs, edit it in Photos, or view it in their video player. I don't believe for one sec that they aren't already doing this.
 

Dave Russo

Level 22
Verified
Top Poster
Well-known
May 26, 2014
1,136
If indeed their motives are right (subjective, not objective), and there's a well-written guideline so everyone can be sure what is definitely inappropriate, maybe it's OK. But when government dictates morality, sooner or later the objective side of human reasoning goes on witch hunts or, as mentioned, slides into police-state tactics or blatant abuse. Maybe Apple is doing this to avoid lawsuits??? If someone thinks this is a good marketing ploy, they might be nuts.
 

CyberTech

Level 44
Thread author
Verified
Top Poster
Well-known
Nov 10, 2017
3,250
Apple has published a FAQ titled "Expanded Protections for Children" which aims to allay users' privacy concerns about the new CSAM detection in iCloud Photos and communication safety for Messages features that the company announced last week.

"Since we announced these features, many stakeholders including privacy organizations and child safety organizations have expressed their support of this new solution, and some have reached out with questions," reads the FAQ. "This document serves to address these questions and provide more clarity and transparency in the process."

Some discussions have blurred the distinction between the two features, and Apple takes great pains in the document to differentiate them, explaining that communication safety in Messages "only works on images sent or received in the Messages app for child accounts set up in Family Sharing," while CSAM detection in iCloud Photos "only impacts users who have chosen to use iCloud Photos to store their photos… There is no impact to any other on-device data."

For more information, read the rest; it's a must-read.
 

show-Zi

Level 36
Verified
Top Poster
Well-known
Jan 28, 2018
2,464
If the announced plan is a way to close the media loophole that bypasses many national regulations, I think it can't be helped, because you can expect an immediate effect. If it can be made clear that this plan will not infringe on individuals' privacy in the future, I agree.

I think the number of children who casually publish naked selfies is increasing. Education for children who put themselves at risk like this is also essential. :unsure:
 
Jun 21, 2020
363
I understand the reasoning, yet I am still against this, whether it be Apple, Microsoft, Google, you name them. Big tech companies have been known not to keep to the privacy policies and TOS they wrote themselves. This is just another way to accommodate the new woke cancel hashtag culture for a few years, and then it's still there for whatever else to be done with it.

The primary issue, in my opinion, is more of a social problem as well as an educational one. Kids will do dumb things if they don't know, or have never had it made clear, that what they are doing has consequences. Casually posting and sending nudes on public platforms such as Twitter and twatbook, doing dangerous challenges while recording oneself (self-inflicted abuse?), and still getting a pat on the back signalling "you're doing good, son"... I can go on and on...

Even more so in recent years, when more and more issues are dragged out from under the rug under the guise of "child abuse", often unjustifiably; it just distracts people from the ones who actually suffer from such abuse and need immediate help.
 

CyberTech

Level 44
Thread author
Verified
Top Poster
Well-known
Nov 10, 2017
3,250
A couple of days ago, Apple announced that it will be rolling out new technology that allows it to scan photos uploaded to iCloud using on-device machine learning, comparing their hashes to hashes of known child sexual abuse material (CSAM) images from the National Center for Missing and Exploited Children's (NCMEC) repository. It also stated that it would inform parents if a child under 13 years of age receives or sends sexually explicit photos.

The move has drawn a lot of attention and criticism, with an open letter protesting it gathering over 5,000 signatories at the time of writing.

The open letter in question can be seen here. It is addressed directly to Apple and says that while the company's moves are well-intentioned because child exploitation is a serious issue, they create a backdoor in the ecosystem which undermines fundamental privacy rights of customers. The document further says that since the methodologies use on-device machine learning, they have the potential to break end-to-end encryption.

It also cites quotations from several organizations and security experts to emphasize that the tech is prone to misuse and undermines privacy. An excerpt reads:

The rest (also read the comments)
 
Jun 21, 2020
363
Let's say it does work according to the policy, no privacy or security is undermined at all, and the system is never misused. That is never going to happen, but let's say that is the situation.

There would still be a big flaw: the hash databases used contain US-only material. So it wouldn't work at anywhere near the scale they portray. Images of missing or abused children from a European country, for example Norway or Germany, would still not be detected by the algorithm, since they are not part of the databases the entire machine learning is based on.
 
