Cyacomb welcomes the recent article from the Information Commissioner’s Office (ICO), ‘Lessons from Innovation in Safety Tech: The Data Protection Perspective’, and is buoyed by the ICO’s supportive comments following our participation in the UK Government’s Safety Tech Challenge.

“The Challenge has been an important phase of industry and regulatory collaboration on safety tech.” - ICO

The Challenge aimed to find innovative solutions to tackle online harm within end-to-end encrypted (E2EE) environments while maintaining user privacy rights. We were extremely grateful to be involved and to be named as one of the five winning proposals. User privacy and data protection were at the heart of our proposed project, which aligned with the ICO’s objectives from the outset.

“The ICO seeks to support the adoption of data protection by design and default within the solutions [proposed in the Safety Tech Challenge] and understand how these proposed proofs of concept operate within the law.” - ICO

Our solution, Cyacomb Safety, employs our patented Contraband Filter™ technology, which is built on principles of privacy-by-design. When integrated into a messaging app, this ground-breaking technology helps find and block known child sexual abuse material (CSAM), preventing it from being shared in real time while maintaining users’ privacy.
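To make the general principle concrete, here is a minimal sketch of a known-content check, under our own assumptions; Cyacomb’s patented Contraband Filter™ would use a far more compact and robust on-device filter, and every name and the digest scheme below are hypothetical:

```python
import hashlib

# Illustrative only: a production filter (e.g. a compact probabilistic
# structure over robust hashes) would not use plain SHA-256 of raw bytes,
# which is trivially defeated by re-encoding an image.

KNOWN_DIGESTS = {
    # Digests of previously assessed known content, supplied by an NGO or
    # law enforcement agency (placeholder value).
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_content(file_bytes: bytes) -> bool:
    """Check an outgoing file against the on-device set of known digests.

    Only a digest is computed, and it is compared locally; the file never
    leaves the device for this check, which is how privacy can be preserved
    inside an E2EE messaging app.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_DIGESTS

def on_send_attempt(file_bytes: bytes) -> bool:
    """Hypothetical hook called by the messaging app before transmission."""
    if is_known_content(file_bytes):
        return False  # block the share in real time
    return True       # allow; nothing is reported or stored for unknown files
```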

For the Challenge, we engaged with the ICO’s Innovation Hub, the IWF, the University of Edinburgh and Crisp Thinking to provide external validation of our technology. As a result of their advice, we identified opportunities for further innovation, built additional new technology, and revised our prototype to incorporate those improvements.

And we’re pleased to read that the ICO recognises our efforts:

“We are encouraged by the suppliers’ efforts on data protection… These efforts to establish ‘data protection by design and default’ are what responsible innovation looks like.” - ICO

We received in-depth written advice from the ICO’s Innovation Hub on our proof-of-concept work during the Safety Tech Challenge Fund. As a result, we believe there are no fundamental data protection barriers to deployment. Our software achieves the standard of ‘effective anonymisation’ of user data and so protects user privacy.

It’s also worth highlighting that our technology focuses on detecting known CSAM content.

From our close working relationships with law enforcement agencies in the UK, USA, Canada and Europe, we understand that 95% of reported CSAM content on social media platforms is known content. Facebook’s findings support this: an investigation into CSAM reports over just two months found that over 90% of the content was the same as, or visually similar to, previously reported content. Even more striking:

“Copies of just six videos were responsible for more than half of the child exploitative content that we reported in that time period.” - Facebook, Online Child Protection Investigation

Mitigating these known risks is the first line of defence. By using credible, previously assessed known content from NGOs or law enforcement agencies as the benchmark, we can achieve much higher rates of effectiveness, leading to far fewer false positives. The extraordinarily low false-positive rates achieved by technologies based on similar principles to Cyacomb Safety are widely recognised, for example in the EU’s technology impact assessments:

“The Impact Assessment Report indicates that technologies to detect known material have an estimated false-positive rate of no more than 1 in 50 billion (i.e., 0,000000002% false-positive rate).” - Ylva Johansson, European Commissioner for Home Affairs
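The quoted figure is internally consistent: one false positive in 50 billion checks is a rate of 2×10⁻¹¹, which is 0.000000002% when expressed as a percentage. A quick check in Python:

```python
# Verify the quoted figure: 1 false positive per 50 billion checks.
rate = 1 / 50_000_000_000
print(f"{rate:.0e}")   # -> 2e-11
print(f"{rate:.9%}")   # -> 0.000000002% (the '%' format multiplies by 100)
```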

Using high-reliability technologies focused on known content avoids the sort of false positives (and negative consequences) that are understandably of such great concern to the public and to those advocating for human rights, including privacy.

A recent New York Times article illustrates the challenges faced by software designed to detect unknown images, and the ICO blog suggests “More work needs to be done by safety tech suppliers producing unknown image detection solutions on false positives to reduce the harm caused by the unwarranted intrusion.”

In the ICO article, the same concerns are not applied to known-image solutions such as Cyacomb Safety, which uses previously validated data to identify known CSAM content. Nor does this type of technology require ‘meaningful human oversight’, which is inherently intrusive of users’ privacy.

Having acknowledged the differences in performance between known and unknown detectors, the ICO points out that any detection must have provision for “obtaining human intervention” when a user believes a mistake has been made. This allows a user to request review by a human, ensuring they are not unfairly victimised by an error in automated technology. This is an extremely useful backstop, and one that Cyacomb has developed additional layers of technology to support. Such human review does intrude on privacy, but it only ever takes place at the request, and with the permission, of the user.
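As a sketch of how such a consent-gated backstop might be wired up (our own assumptions, with illustrative names throughout; this is not a description of Cyacomb’s actual review flow), the essential property is that a disputed item is only disclosed for human review once the user explicitly asks for it:

```python
from dataclasses import dataclass

@dataclass
class BlockedShare:
    """A share the automated filter blocked; nothing has left the device."""
    event_id: str
    disclosed_for_review: bool = False

def request_review(event: BlockedShare, user_consents: bool) -> str:
    """Escalate to a human reviewer only at the user's explicit request.

    Human review is privacy-intrusive, so it is gated on the user both
    disputing the block and consenting to disclosure of the item.
    """
    if not user_consents:
        return "No review: content is never disclosed without the user's permission."
    event.disclosed_for_review = True  # only now does a reviewer see the item
    return f"Event {event.event_id} queued for human review at the user's request."
```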

We’re grateful to the ICO for publishing this blog, which we believe helps make the case that there are technologies that can protect user privacy while finding and stopping a large proportion of the CSAM in circulation.

We will continue our mission to make the online world a safer place: reducing and removing harmful content online, delivering solutions that ensure both privacy and regulatory compliance, and enabling the further development of positive technological advancements.
