Apple announced on 5 August that it will introduce a system to identify child sexual abuse material (CSAM) when iPhone users in the United States save images to its cloud service, iCloud.
According to Apple, the system detects whether an image is CSAM before it is stored in iCloud.
It works by comparing the images an iPhone user is about to save against a database of known child abuse images compiled by the National Center for Missing & Exploited Children (NCMEC) and other child safety organisations.
The known CSAM images are converted into numerical codes called "hashes", which the Apple device then matches against the user's images.
Apple says that even if an image has been edited, the system can still detect it if it remains similar to the original.
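As an illustration of why a hash-based match can survive edits, here is a minimal sketch using the classic "average hash" technique. Apple's actual NeuralHash algorithm is not public, so this stands in for the general idea only; the Pillow library, the function names, and the file names are assumptions, not Apple's implementation.

```python
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a simple perceptual 'average hash' of an image.

    The image is reduced to a small grayscale thumbnail first,
    discarding fine detail, which is why minor edits (resizing,
    recompression) tend to leave the resulting bits unchanged.
    """
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Two versions of the same photo should produce near-identical hashes,
# so a small Hamming distance counts as a match:
# hamming_distance(average_hash("a.jpg"), average_hash("a_resized.jpg"))
```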
If an image is judged to be CSAM, a human reviewer will confirm the content and the user will be reported to law enforcement.
However, there are concerns that the technology could be expanded to identify other prohibited content or political speech, infringing on privacy.
Experts worry that authoritarian governments could use the technology to surveil their own citizens.
Apple said the new versions of iOS and iPadOS, due for release later this year, will include new applications of cryptography to help limit the spread of CSAM online while protecting user privacy.
Apple explains, "Before the image is stored in iCloud Photos, the collation process between the image and the known CSAM hash is performed on the device."
The system says, "The probability of incorrect warning about a specific account is extremely high, less than a trillion in one year."
When a suspected match is flagged, Apple manually reviews each report to confirm that it is CSAM, then disables the user's account and reports it to law enforcement.
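To make the device-side flow concrete, the following is a simplified sketch of the checks described above. Apple's published design uses private set intersection and threshold secret sharing, so that neither the device nor Apple learns about individual matches below a threshold; the plain set lookup, the `UploadChecker` class, and the threshold value here are illustrative assumptions, not Apple's implementation.

```python
from dataclasses import dataclass


@dataclass
class UploadChecker:
    """Simplified stand-in for the device-side check before iCloud upload."""
    known_csam_hashes: set   # hash database derived from NCMEC material (assumed plain set)
    threshold: int = 30      # illustrative value; the article does not give a number
    match_count: int = 0

    def check_before_upload(self, image_hash: int) -> bool:
        """Return True once the account should be escalated for human review.

        In Apple's published design this comparison happens under
        cryptographic protection; the direct membership test below is a
        deliberate simplification of that idea.
        """
        if image_hash in self.known_csam_hashes:
            self.match_count += 1
        return self.match_count >= self.threshold


# A reviewer only sees the case once check_before_upload returns True;
# confirmed matches lead to account disabling and a report to law enforcement.
```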
The company says it only learns about a user's photos if the user's iCloud Photos account contains known CSAM, and that the new technology therefore offers "significant" privacy benefits over existing techniques.
Some privacy experts, however, are concerned.
"Regardless of the long -term plan, the company has a very clear signal. It is safe to scan users' mobile phones and detect prohibited content.-The company's (very influential) concept, "said Matthew Green, a security researcher at John's Hopkins University.
"It's almost no problem if it would be a correct answer or a mistake in the future, or that's almost a problem. This will break the dam. That is the point. In the future, the government will introduce this technology to all businesses.You should request it. "