The company is also going to tap not one, but at least two child safety groups operating under different jurisdictions to determine what images of child sexual abuse to flag.
Apple’s upcoming system to detect and stop child sexual abuse images on iCloud has (perhaps inevitably) sparked a wave of alarm that the same system will make mistakes or be abused for wide-scale censorship and surveillance. So on Friday, Apple released more details about how it’ll ensure the technology doesn’t spiral out of control. The company did so by publishing a 14-page document outlining the various safeguards in place for its child sexual abuse material (CSAM) detection system.

One of the big takeaways is that Apple’s detection system will only flag an iCloud Photos account for investigation if at least 30 suspected child sexual abuse images are detected. The threshold is important, since there’s always a possibility Apple’s CSAM system will mistakenly flag an innocuous image uploaded to iCloud as child porn. The company settled on 30 images, describing the number as a “drastic safety margin reflecting a worst-case assumption about real-world performance.”

If you think the threshold is too high or too low, Apple adds that it may change the number after it deploys the detection system with iOS 15 later this year. “But the match threshold will never be lower than what is required to produce a one-in-one trillion false positive rate for any given account,” the company writes in the document.
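For a rough sense of why requiring 30 matches changes the math so dramatically, here is a minimal back-of-the-envelope sketch in Python. It models false matches as independent coin flips across a photo library; the library size of 50,000 photos and the one-in-a-million per-image false-match rate are hypothetical numbers chosen purely for illustration, not figures from Apple’s document.

from math import exp, lgamma, log, log1p

def log10_binom_tail(n, p, threshold, terms=200):
    """log10 of P(X >= threshold) for X ~ Binomial(n, p), computed in log
    space so the astronomically small tail doesn't underflow to zero."""
    def log_pmf(k):
        # log of C(n, k) * p**k * (1 - p)**(n - k)
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log1p(-p))

    # Terms shrink extremely fast for small p, so a few hundred suffice.
    logs = [log_pmf(k) for k in range(threshold, min(threshold + terms, n) + 1)]
    peak = max(logs)
    return (peak + log(sum(exp(v - peak) for v in logs))) / log(10)

# Hypothetical inputs, purely for illustration: 50,000 photos in a library,
# and a one-in-a-million chance that any single innocent photo false-matches.
print(log10_binom_tail(50_000, 1e-6, threshold=1))   # about -1.3  -> roughly a 5% chance
print(log10_binom_tail(50_000, 1e-6, threshold=30))  # about -71   -> effectively never

Under those assumed numbers, a single-match rule would wrongly flag a few percent of accounts, while requiring 30 independent matches drives the probability down to roughly 10^-71; that is the general shape of the argument behind Apple’s one-in-one-trillion claim, though the actual rates in its analysis are its own.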