
Julie Dawson from Yoti, where I sat as an ethical board advisor for six years, and Jarek Sygitowicz from Authologic offered two sides of the same coin. Julie argued that ethical design has to start with dignity, not data. Jarek described how the next generation of e-IDs can prove you’re over 18 without revealing who you are. Both saw the potential for safety and privacy to reinforce each other, not compete.
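It helps to see how little a service actually needs to receive in that model. The sketch below is purely illustrative, not Yoti's or Authologic's real design: it uses an HMAC as a stand-in for the issuer's signature (real schemes rely on digital signatures or zero-knowledge proofs), and the only thing that reaches the website is a signed "over 18" claim, never a name or a birthdate.

```python
# Minimal sketch of selective disclosure for age checks.
# Hypothetical names and keys; the HMAC stands in for a real issuer signature.
import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # illustrative only

def issue_age_credential(date_of_birth: date) -> dict:
    """The issuer sees the full identity record but emits only a yes/no claim."""
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    claim = {"over_18": age >= 18}  # no name or birthdate leaves the issuer
    tag = hmac.new(ISSUER_KEY, json.dumps(claim).encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_credential(credential: dict) -> bool:
    """The relying party learns exactly one bit: is this person over 18?"""
    expected = hmac.new(ISSUER_KEY, json.dumps(credential["claim"]).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"]) and credential["claim"]["over_18"]

# Example: the site checks the issuer's tag and the boolean claim, nothing else.
cred = issue_age_credential(date(2001, 5, 14))
print(verify_age_credential(cred))  # True
```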
Still, I keep hearing policymakers talk about “compliance” like it’s care. It’s not. Real care is making sure that when users of any identity system, especially those most vulnerable to it, report abuse or a data leak, they’re met with empathy, not bureaucracy.
Does ethical tech = safe tech?
When I asked David Babbs, long-time campaigner and civic-tech leader, whether the growing number of “ethical tech” pledges was making any real difference, he smiled and said: “Not if the business model stays the same.” He’s right. For all the glossy sustainability reports and pastel mission statements, most tech platforms still rely on outrage and attention — the very things that make us all feel unsafe online.
Ethical tech is easy to say. Safe tech costs money. And that’s the crux of it: responsibility without redistribution is just PR. David argued that as long as ad-driven profits depend on engagement, tech companies will keep building for addiction, not wellbeing.
That tension came through again in my conversation with the powerhouse writer Lovette Jallow, who spoke about being neurodivergent online. “The digital world is built for speed, not for sensitivity,” she said. “When you’re neurodivergent, the pace of the internet isn’t just overwhelming — it’s alienating.” Her words reminded me that conversations about safety can’t stop at content moderation; they have to include design itself.
So much of what we call “safety” is really about friction — slowing down harmful interactions before they spiral. But in an economy where attention equals revenue, friction is treated like failure. We can co-design safety frameworks around care, not just harm reduction.
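To show what “friction” can mean in concrete product terms, here is a small hypothetical sketch: a reshare flow that pauses and asks for confirmation when a post has been heavily reported, rather than blocking it outright. None of the names, thresholds, or prompts come from a real platform.

```python
import time
from typing import Callable

# Hypothetical friction layer: before resharing a heavily reported post, the
# user is asked to pause and confirm. The goal is to slow the interaction
# down, not to block it. Thresholds and wording are purely illustrative.
FLAG_THRESHOLD = 3      # reports before friction kicks in
COOLDOWN_SECONDS = 10   # deliberate pause before the reshare goes through

def reshare(post: dict, confirm: Callable[[str], bool]) -> bool:
    """Return True if the reshare should proceed after any friction step."""
    if post.get("report_count", 0) >= FLAG_THRESHOLD:
        if not confirm("This post has been reported by other users. Share anyway?"):
            return False
        time.sleep(COOLDOWN_SECONDS)  # slow the spiral before it spreads
    return True

# Example: a flagged post triggers the prompt; an unflagged one sails through.
print(reshare({"id": 1, "report_count": 5}, confirm=lambda msg: True))  # paused, then True
print(reshare({"id": 2, "report_count": 0}, confirm=lambda msg: True))  # True immediately
```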
Online safety isn’t one-size-fits-all
In one of my conversations, Hannah Swirsky from the Internet Watch Foundation told me: “The biggest mistake we make is assuming everyone experiences risk the same way.” She’s right. Safety frameworks tend to flatten differences, turning nuance into checkbox diversity.
Madhuri, co-founder of the content moderation tool Welivedit.AI, expanded on this: “Policies rarely account for sensory overload, language barriers, or the way some of us process harm.” Safety isn’t just about blocking content — it’s about designing systems that don’t exclude people by default.
I’ve been thinking about how movements led by women, queer people, and disabled creators are already modelling this. They build safety through community, not code; through joy, not just justice. Carolina Are called this “hope as infrastructure,” and I love that phrase. Because the most radical thing we can do in tech right now might not be inventing the next algorithm — it might be resourcing the people who keep others safe.
Every conversation I’ve had over the last few months — whether with policy leads, sex workers, technologists or parents — kept coming back to the same truth: online safety isn’t a destination, it’s an evolving relationship. Between users and platforms, between communities and policymakers, between privacy and accountability.
I don’t want to live in a world where safety means surveillance, or where privacy means isolation. As digital citizens, we deserve both. And that’s the future worth fighting for: one where regulation protects without punishing, and connection doesn’t come at the cost of dignity.
Because the truth is, no one wins in the false choice between safety and freedom. But if we centre care, equity, and imagination, we might just build an internet that lets us all breathe easier.


