Our CEO Ian Stevenson attended the WeProtect Global Summit ‘Turning the Tide on Child Sexual Abuse Online’ in Brussels last week, an inspiring event bringing together world leaders in child safety to share successes, challenge assumptions and drive action.
As part of this, Ian was privileged to take part in a panel discussion on Safety Tech and its implications for “Advocating for Child Rights in the Context of Online Regulation”. Chaired by Vicky Rateau (Oak Foundation), the panel also included Sarah Gardner (Thorn) and Moira O’Neil (Frameworks Institute).
Here he shares some of his reflections on the event:
Having attended very few large-scale events in the last two years due to COVID, I found the WeProtect summit a wonderful return to meeting in person. The keynote speakers, including Ernie Allen, Joanna Shields and Ylva Johansson, were hugely inspiring and energising.
The panel discussion I was part of looked at the world of online regulation, and I’d like to share some of the points I made in my opening remarks.
Preventing Online Child Sexual Abuse and Exploitation is an enormous problem, and it’s easy to feel overwhelmed by the challenge. In fact, it’s entirely rational to think the problem will never be solved in its entirety.
One of the key issues that regulators face in trying to drive progress in tackling the problem is that they can only demand action that is technically feasible.
As a leader of a Safety Tech company and chair of the Online Safety Tech Industry Association (OSTIA), I spend much of my time working on or with solutions. These solutions may not solve the entire problem of child sexual abuse (CSA) images being shared online, but we can serve as a voice of hope and focus on what can be done. With practical solutions available today, Cyacomb and other online safety companies can contribute to improving online safety dramatically.
What do these solutions look like? Good Safety Tech can promote privacy by using “detectors” that trigger only on specific types of harmful content – for example Child Sexual Abuse Material. They can be likened to a metal detector that sounds an alert only when metal passes through it; correctly implemented, they have negligible impact on user privacy.
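To make the metal-detector analogy concrete, here is a minimal sketch of one common approach: matching content hashes against a list of known material. The blocklist and function names are illustrative assumptions, not Cyacomb’s actual technology (the single blocklist entry is the digest of an empty file, included purely so the example runs).

```python
import hashlib

# Illustrative blocklist of SHA-256 digests of known harmful files.
# In a real deployment this would be a curated database maintained
# by an appropriate authority; this entry is a placeholder.
KNOWN_HARMFUL_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def detector_triggers(content: bytes) -> bool:
    """Return True only when content matches a known-harmful digest.

    Like a metal detector, this reveals nothing about content that
    does not match: everything else passes through unexamined.
    """
    return hashlib.sha256(content).hexdigest() in KNOWN_HARMFUL_HASHES

print(detector_triggers(b""))       # True  (digest is on the demo list)
print(detector_triggers(b"photo"))  # False (nothing learned about it)
```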
Good deployments of Safety Tech are about making appropriate and proportionate responses when harmful content is detected. These responses can include warning a user, offering advice, signposting sources of help, blocking content, or reporting content to the appropriate authorities – which is a legal requirement in some situations. The appropriate response to a detector triggering for Child Sexual Abuse Material depends on a number of things, including the context and the performance of the detector. Some detectors are good at detecting a wide range of material; however, they may also produce some false positives. These might be suitable for warning a user that they may be about to do something illegal, or for escalating to a “human in the loop” process. Other technologies are extraordinarily reliable at detecting known abuse, and may be suitable for automated blocking or reporting users to the authorities.
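As a sketch of what “appropriate and proportionate” might look like in code, the policy below maps detector characteristics to responses. The thresholds, class names and categories are hypothetical, chosen only to illustrate the reasoning; real policy would be set with platforms, regulators and legal counsel.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Response(Enum):
    WARN_AND_SIGNPOST = auto()   # warn the user, offer advice and sources of help
    HUMAN_REVIEW = auto()        # escalate to a "human in the loop" process
    BLOCK_AND_REPORT = auto()    # automated blocking / report to the authorities

@dataclass
class Detection:
    matches_known_material: bool  # exact match against verified known material
    precision: float              # estimated fraction of alerts that are correct

def choose_response(d: Detection) -> Response:
    # Hypothetical thresholds: the right response depends on context,
    # legal obligations and the measured performance of the detector.
    if d.matches_known_material:
        return Response.BLOCK_AND_REPORT   # extraordinarily reliable detection
    if d.precision >= 0.9:
        return Response.HUMAN_REVIEW       # strong but fallible classifier
    return Response.WARN_AND_SIGNPOST      # broad detector with false positives

print(choose_response(Detection(matches_known_material=True, precision=1.0)))
print(choose_response(Detection(matches_known_material=False, precision=0.7)))
```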
We rightly talk a lot about privacy in online systems, and it’s important to help people understand what that really means. Good privacy means personal information is protected, and no-one knows anything about us (or our interactions online) that we don’t want them to. Few useful systems offer perfect privacy – even great encrypted messaging systems can allow third parties to find out things like how many messages we send, when, and potentially to whom. Getting this information might require someone to be on the same network as us, to work at a network operator or the messaging company, or to have hacked into one or more systems – so it’s not much of a threat to privacy in practice. In other words, the theoretical availability of very limited information doesn’t necessarily constitute a practical violation of the principles of privacy.
So if privacy isn’t as binary as the public discourse sometimes suggests, how can we explain, in a relatable way, what privacy is really about? We need to think about what information an attacker might be able to access and how that affects privacy, and one way to do this is to consider phone numbers.
If your entire phone number were published, you would clearly suffer a loss of privacy. If just one digit were published, there would probably be no practical loss of privacy, even though some information was revealed. Publication of half the digits poses little real risk that someone could use that information to contact you, but it might enable someone to correlate your phone number with a database obtained through some other means. Safety technology can work with small amounts of information – analogous to perhaps a single digit of a phone number – which present no threat to privacy while enabling huge improvements in online safety.
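A rough way to quantify the analogy is to count how many phone numbers remain consistent with what has been revealed, assuming for illustration a ten-digit number with independent digits:

```python
PHONE_DIGITS = 10  # illustrative ten-digit phone number

def candidates_remaining(digits_revealed: int) -> int:
    """Phone numbers still consistent with the revealed digits."""
    return 10 ** (PHONE_DIGITS - digits_revealed)

for revealed in (0, 1, 5, 10):
    print(f"{revealed:2d} digits revealed -> "
          f"{candidates_remaining(revealed):>14,} candidates")

#  0 digits -> 10,000,000,000 candidates: full privacy
#  1 digit  ->  1,000,000,000 candidates: no practical loss
#  5 digits ->        100,000 candidates: correlation becomes possible
# 10 digits ->              1 candidate:  privacy lost
```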
The challenges seem particularly acute when we look at what can be done in End-to-End Encrypted messaging environments. Even here, Cyacomb and other OSTIA members (including DragonfAI, Yoti and SafeToNet) have solutions that can make a difference. For the UK Safety Tech Challenge Fund, Cyacomb recently demonstrated technology that reliably detects known Child Sexual Abuse Material in messaging environments while protecting user privacy. Other participants in the challenge demonstrated solutions that combine automated age and nudity detection to help prevent young people sharing nudes. These are real technologies, demonstrated and ripe for deployment should policymakers and platforms choose to adopt them, that can work even in this most challenging of environments.
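Cyacomb’s actual technology is proprietary, but the general shape of privacy-preserving detection in an end-to-end encrypted setting can be sketched: the check runs on the sender’s device, before encryption, so no plaintext ever leaves the device and the encryption itself is untouched. Everything below – the blocklist, the helper names, the hash-matching approach – is an assumption for illustration only.

```python
import hashlib

# Illustrative on-device blocklist of digests of known material.
BLOCKLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def send_attachment(attachment: bytes, encrypt_and_send) -> bool:
    """Hypothetical client-side check before end-to-end encryption.

    The plaintext never leaves the device; third parties learn nothing
    beyond whatever proportionate response is taken on a match.
    """
    if hashlib.sha256(attachment).hexdigest() in BLOCKLIST:
        return False  # blocked on-device; reporting may follow where required
    encrypt_and_send(attachment)  # normal E2EE path, unchanged
    return True

# Stand-in for the messaging app's encryption pipeline:
sent = send_attachment(b"holiday photo", lambda ciphertext_input: None)
print(sent)  # True: content not on the blocklist, sent as normal
```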
I’m confident that Safety Tech will continue to develop new and more powerful ways to create safer online experiences, offering a much-needed voice of hope in the fight to (in line with the title of the summit) ‘Turn the Tide on Child Sexual Abuse Online’.
Ian Stevenson
CEO Cyacomb & Chair OSTIA