How tech is weaponised against women
Major digital platform owners pay lip service to supporting female empowerment and furthering women’s rights, but it’s barely a year since Big Tech CEOs were falling over themselves to please US President Donald Trump with overnight U-turns on diversity initiatives. Cue Meta’s Mark Zuckerberg suggesting corporate culture needs more “masculine energy”.
Small surprise, then, that online toxicity and misogyny are at the heart of the latest tech scandals. Elon Musk’s Grok AI tool has been widely used to digitally undress women and girls, while Meta’s Ray-Ban AI glasses have been found to record women in states of undress without their consent.
The European Commission’s recent gender equality report highlighted that women are disproportionately exposed to online gender-based violence, including through harassment, stalking, doxing (exposing personal information), making and sharing non-consensual intimate images, and AI deepfakes, including sexualised or pornified photorealistic imagery.
The Commission’s Gender Equality 2026–2030 Strategy states that 98% of all online deepfakes are of a pornographic nature and 99% of them depict women.
Digital governance rules
Online platforms are regulated in the EU via the Digital Services Act (DSA), which aims to make tech giants responsible for content governance on their platforms. This supposedly includes ensuring that their algorithms, AI tools and viral deepfakes don’t harm women.
But tech companies have a history of creating tools that make online spaces unsafe for women. The DSA hasn’t changed that.
Before Grok, there were the AI avatar generator tools that produced unexpectedly sexualised portraits of women at the touch of a button. Snapchat’s beauty filters apply hyper-feminine standards that can encourage body dysmorphia. Instagram’s algorithm amplifies selfies in swimwear.
While EU tech commissioner Henna Virkkunen insisted that gender equality and protection, both online and offline, “is not optional”, the reality for EU women online falls far short of her rhetoric.
Citing recent examples like Meta’s smart glasses, Greens lawmaker Alexandra Geese argued that “Big Tech has turned against women”. She said that the message Big Tech sends to women is clear: “Leave the political debate. Keep your mouth shut”.
Meanwhile, tech ‘innovation’ marches on.
Smart glasses to record women
AI-powered smart glasses are becoming more pervasive as they’re increasingly discreet enough to pass for regular glasses.
On social media, ‘pick-up artists’ quickly started using them to boost their content. The BBC spoke to two women who were approached in public by a man wearing the glasses. Videos of their interactions, secretly captured via the embedded camera, later circulated on social media, where thousands of users also posted derogatory and sexually explicit comments.
Geese told Euractiv such tools are giving men the “technical possibility to turn women into sexual objects … instead of human beings with equal rights”.
Meta Lab smart glasses. Myung J. Chun / Los Angeles Times via Getty Images
Meta, which is one of the leading tech companies commercialising smart glasses, claims that an LED light on the frame of the device clearly signals when it is recording.
But a recent investigation by Swedish media indicated that Meta’s glasses were recording people without their knowledge or consent. The devices sent footage to a third-party contractor in Kenya, with one staff member telling journalists they had seen people in intimate situations, citing an example where a woman changed clothes in a bedroom where the glasses had been left.
Grok undressing women
Earlier this year, Elon Musk’s controversial AI tool Grok – which is integrated into his social media platform X – caused a global scandal after it flooded the service with sexualised images of women and children.
The Commission responded by launching a DSA investigation into Grok over the spread of pornified images via X. Virkkunen has stressed that “Deepfakes using non-consensual intimate images [are] illegal.” The probe will consider how Grok mitigates risks related to the spread of “manipulated sexually explicit images”.
Yet Grok is just the latest in a string of AI tools that perform nudification, with many AI apps designed for the explicit purpose of digitally undressing women. X labelled the nudifying function “spicy mode” in a bid to normalise the digital abuse.
Such tools remain a live concern for EU legislators, with several MEPs hoping to include a potential ban in another EU rulebook, the AI Act – which does not currently prohibit AI nudification.
But the DSA should be a “front-line” tool to address the spread of non-consensual intimate images, whether AI-generated or not, said Marie Seck, researcher at CDT Europe.
The EU’s online governance rulebook already contains “clear obligations” for online platforms when their “interface is at fault”, she said, suggesting these rules should have prevented the rollout of harmful features like Grok in the first place.
What will the EU do?
The Commission will soon issue guidelines for platforms on so-called trusted flaggers – vetted independent organisations whose reports of illegal content must be prioritised – with the aim of building their “capacity” to flag gender-based cyber violence online, Seck also said.
But Ireland’s watchdog has yet to announce a formal investigation, despite having been in touch with Meta about its smart glasses since September 2021.
(nl, ow)


