Blog

Breaking the chain: Inside the technical battle against online child exploitation

7 min read
Last edited: Apr 10, 2025

The internet has fundamentally transformed how people connect and share information globally. While digital connectivity has created unprecedented opportunities, it has also been exploited by criminal networks that leverage technological infrastructure to harm the most vulnerable among us. One of the most disturbing manifestations of this exploitation is the proliferation of child sexual abuse material (CSAM) online, which represents not merely harmful content but documented evidence of criminal acts against children.

In the third installment of our Leadership Link series, Carla Bourque, CEO of Rebrandly, discusses how online businesses and organizations can combat this issue with Dan Sexton, CTO of the Internet Watch Foundation (IWF)—a UK-based nonprofit organization focused on the prevention and removal of CSAM and other abuse-related content from the internet. 

Watch the full conversation, or read the summary below to learn how the IWF and partners like Rebrandly collaborate across industries and geographical locations to combat the spread of harmful content online.

Disrupting the CSAM supply chain through technical intervention

The IWF combats the spread of CSAM online by dismantling the digital infrastructure that powers it, making it harder for bad actors to create, access, share, or profit from abusive content online. They do this by adding friction at key points of the CSAM supply chain.

“One of the ways we introduce friction is by talking to payment providers and notifying them that their payment option is being advertised on a site for the sale of CSAM,” shares Dan. Payment providers can then withdraw their services to avoid an association with harmful materials. 

Other examples of adding friction with technology:

  • Blocking known abusive URLs and registering them with a UK database
  • Flagging cryptocurrency wallets associated with CSAM
  • Encouraging platforms to pre-scan uploaded content for potential abuse
  • Working with global partners to track and tag known abusive material worldwide
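The URL-blocking technique in the list above can be sketched in a few lines. This is a minimal illustration, not the IWF's actual implementation: the blocklist entries, hostnames, and helper names here are hypothetical, and real deployments consume a vendor-supplied list (such as the IWF URL List) under agreement rather than hard-coding entries.

```python
import hashlib
from urllib.parse import urlsplit

# Hypothetical blocklist holding SHA-256 digests of normalized known-abusive
# URLs. Storing digests rather than plaintext URLs means the list itself
# never exposes the addresses it blocks.
BLOCKLIST = {
    hashlib.sha256(b"badhost.example/abuse/page").hexdigest(),
}

def normalize(url: str) -> str:
    """Lowercase the host and drop the scheme so trivial variations of
    the same URL map to the same blocklist entry."""
    parts = urlsplit(url if "://" in url else "//" + url)
    return parts.netloc.lower() + parts.path

def is_blocked(url: str) -> bool:
    """Return True if the URL's digest appears on the blocklist."""
    digest = hashlib.sha256(normalize(url).encode()).hexdigest()
    return digest in BLOCKLIST

print(is_blocked("HTTPS://BadHost.example/abuse/page"))  # True
print(is_blocked("https://example.org/safe"))            # False
```

A link service like Rebrandly would run a check of this shape at link-creation and redirect time, so a blocked destination can never be shortened or resolved.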

A concerted effort must be made to stop the various harm types and user-generated sharing schemes associated with the sale and distribution of abusive content. Increasing visibility into the techniques and methods bad actors use to share abusive content, including the rise of AI-generated CSAM, helps industries worldwide understand the mechanisms at play and proactively develop solutions to combat them.

Rebrandly acts as one of those partners in stopping the CSAM supply chain by actively pre-screening, hashing, logging, dismantling, or taking down malicious links and sharing any potentially abusive content with the IWF, local government, and law enforcement agencies.

“To put it into more familiar terms for some of our audience,” explains Carla, “if you’re trying to reach a market or combat competitive threats, you need to consider the motivators and the systems that drive that criminal behavior and associated outcomes.” Focusing on the technical side of intervention helps shed light on the backend systems that power the infrastructure where CSAM exists.

Partnerships that build a safer online ecosystem

Solving an issue as complex as online CSAM proliferation requires strong collaboration across industries and technical touch points in the content-sharing supply chain. Preventing the spread of abusive materials also needs investment from organizations across the government, law enforcement, and commercial platforms that power the internet.

“A whole systems approach is what we need,” says Dan. “If you get the system working, then you get to a position where CSAM is not a pandemic and is minimized as much as possible online.” While making abusive content disappear entirely will be difficult, coming together to build protections globally can significantly decrease the risk.

The stakes are high. In its most recent annual report, the IWF identified 275,652 URLs containing CSAM, an 8% increase year over year. The production of this material is both a direct result of child abuse and a driver of further abuse.

Unfortunately, some social media platforms have been reducing the safety and moderation features they previously had instead of continuing to strengthen them. “Bad actors will be attracted to platforms with poor content moderation [vs. those with stringent controls] because, if they want to share something and it gets blocked, they’ll eventually stop trying,” says Dan. “And they’ll have to go somewhere else.” Intentional pressure on social media sites pushes CSAM away from more mainstream services and into the corners of the internet, where sharing becomes more difficult.

Regulations such as the UK’s Online Safety Act and Australia’s safety laws show regulatory promise by setting clear expectations for online content moderation and platform responsibility. Increased regulatory pressure helps ensure companies prioritize safety alongside profit and growth. There is some positive movement in this regard—as more consumers become aware of the role they play in the chain of CSAM and focus their attention and spending on ethical services and non-exploitative products, more companies are willing to change.

Everyone has a role in driving change: users by choosing platforms that prioritize safety, investors and advertisers by demanding ethical standards in their growth strategies, and governments by establishing baseline regulations that prevent the spread of child sexual abuse material.

Education, AI, and the future of online safety regulation

Education is vital to combating online abuse but cannot succeed in isolation. Rather than placing the burden solely on parents or victims, effective prevention requires ongoing, evidence-based awareness campaigns that evolve alongside emerging threats.

The proliferation of AI and image manipulation technologies has further complicated this landscape. “There’s a socio-economic context, but also a behavioral context,” notes Carla. “These topics are difficult but important to discuss, especially considering scenarios where children use these technologies to directly harm their peers.”

The era of relying on children’s self-regulation or expecting parents to shoulder the entire responsibility is behind us. Companies now have both ethical and practical obligations to address the structural issues enabling abuse.

Dan and the IWF advocate for government-mandated proactive detection of known CSAM through hashing technology. This approach creates a unique digital “fingerprint” for abusive material that can be tracked across platforms and shared with authorities.

“Proactive pre-scanning of content significantly enhances our effectiveness,” explains Dan. “The system tags images and videos of abuse with a unique identifier, then checks that identifier against a database of known illegal material. If there’s a match, the upload is blocked.”

This technology is readily available and straightforward to implement. However, without regulatory requirements, many businesses lack incentives to adopt these hashing programs—despite their proven ability to prevent revictimization by blocking the redistribution of known material and establishing industry-wide screening standards.
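The pre-scanning flow Dan describes can be sketched as a simple fingerprint-and-match step. This is an illustrative sketch only: the database contents and function names are hypothetical, and production systems typically use perceptual hashes (such as PhotoDNA) that survive re-encoding and resizing, whereas the plain SHA-256 used here for simplicity only matches byte-identical files.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal material,
# of the kind a hash-sharing partner such as the IWF would supply.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-bad-file-bytes").hexdigest(),
}

def fingerprint(data: bytes) -> str:
    """Compute a unique identifier for an uploaded file's contents."""
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> bool:
    """Return True if the upload may proceed; False if it matches a
    known fingerprint and should be blocked and reported."""
    return fingerprint(data) not in KNOWN_BAD_HASHES

print(screen_upload(b"known-bad-file-bytes"))   # False: blocked
print(screen_upload(b"ordinary photo bytes"))   # True: allowed
```

Because the check runs at upload time, a matched file is never stored or redistributed, which is what prevents the revictimization the paragraph above describes.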

A safer internet demands immediate action

Organizations like IWF stand at the frontlines of this critical battle. For three decades, they’ve identified, assessed, and removed CSAM from the internet while building essential partnerships across industries, governments, law enforcement agencies, and educational institutions. Their work extends far beyond simple content removal—it’s a comprehensive strategy to prevent redistribution, block access, and systematically dismantle the infrastructure enabling child exploitation.

Greater transparency around these exploitation systems helps establish new industry standards and equips IWF’s network of 200+ global partners with actionable protocols. The investment required to integrate safety measures into your digital infrastructure is minimal compared to the profound protection it provides to vulnerable children worldwide.

The time for action is now. Every minute without these protections means more children are suffering preventable harm. As technology leaders, we have both the capability and responsibility to implement solutions immediately.

Rebrandly has committed to this mission by embedding IWF’s proactive abuse screening directly into our core product suite. We challenge every technology company to follow suit and prioritize these essential safeguards.

Join us today. Contact the IWF to become a member. Implement hashing protocols in your systems. Make the commitment to build safety into your products from the ground up. Working together, we can create not just a safer internet, but a digital world where exploitation has nowhere to hide.

Read more about Rebrandly’s partnership with IWF, or visit our Trust Center to take the first step toward implementing these crucial protections in your organization.