In Kenyan newsrooms, artificial intelligence (AI) remains an unspoken subject. Behind closed office doors, some journalists quietly deploy AI tools to research stories, draft headlines and streamline workflows.
Yet openly, the technology is treated as contraband, something vaguely illicit that threatens to erode the craft of journalism itself. Editors warn AI will render reporters lazy and careless.
Journalists speak in hushed tones about their experiments.
There are no newsroom policies to guide the use of AI. No regulatory frameworks exist. No industry standards have been established. Meanwhile, the conversation rarely happens at all.
But outside the newsroom, on the sprawling platforms of social media, AI’s impact on women in Kenya’s media space is impossible to ignore and devastatingly real.
On X (formerly Twitter), TikTok, Facebook and WhatsApp, women journalists, commentators, and public figures face a coordinated weaponisation of AI designed to humiliate, intimidate, and silence them.
Deepfake technology strips women of their dignity. AI chatbots like Grok are explicitly prompted to generate sexually explicit content depicting female journalists and media personalities, creating a stream of abusive material that circulates online.
The technology that could democratise information instead becomes a tool for gendered harassment, a mechanism through which women are systematically dehumanised and driven from spaces where their voices matter.
Experts, feminists and journalists are stuck in a familiar paralysis: Do we need AI? What are the real pros and cons? Is it here to help us or to make life harder for women?
In response, the Africa Women Journalists Project (AWJP), Deutsche Welle Akademie, Odipo Dev, and Africa Uncensored came together to host the Gender and AI 2025 fellowship and an AI in the Newsroom training programme.
The gatherings brought together journalists from across Kenya and AI experts tasked with a singular mission: to debunk myths, address fears, and confront the difficult question that lingers in every newsroom: Is AI here to steal your job?
At Africa Uncensored, a digital news outlet focused on investigative reporting, leadership has begun cautiously experimenting with AI tools. The reception has been mixed.
“The general perception among our staff about AI tools is somewhere between excitement and fear,” says Africa Uncensored Managing Editor Eric Mugendi.
“They are excited about the potential that these tools have, but they are also afraid that they may be training the tools that will take over their jobs.”
This tension between curiosity and anxiety mirrors the broader atmosphere in Kenyan journalism. The misconception that AI will replace journalists remains widespread, even though evidence suggests otherwise. Swapneel Mehta, an AI expert and trainer who has worked with newsrooms across Africa, pushes back against this narrative.
“The biggest misconception is that journalism is going to be replaced by AI. I don’t think this is true. If reporters actually tried out AI tools, they would realise that AI can augment their work, but AI is not in a position to replace their work because a lot of journalism involves critical thinking, perspective taking, and angle selection,” he said.
“The element that involves a human touch to a story is unlikely to be replaced by machine-generated text.”
Yet in the absence of clear policies and open conversation, many journalists remain hesitant. Some newsrooms are beginning to establish frameworks.
Africa Uncensored’s current policy is cautious: no uploading of content to commercial tools, clear disclosure when AI-generated videos are used in stories, and a focus on deploying AI for routine tasks like summarisation and image generation.
The goal is to free journalists from menial work so they can concentrate on substantive reporting.
Formal guidelines
But these are exceptions. Most Kenyan newsrooms operate without formal guidelines. Few have trained their teams. Even fewer have discussed, let alone addressed, the gendered dimensions of AI use in journalism.
Steffen Leidel, an AI trainer and senior consultant at Deutsche Welle Akademie, the media development arm of Germany's international broadcaster, brought his expertise to Kenya through a journalism fellowship designed to address this vacuum.
“We see a lot of change in the journalistic environment due to AI on several levels,” he explains. “In the newsroom, it’s about making work more efficient and faster, and thinking about automation of production for social media. Newsrooms are experimenting right now. It’s still early days in AI, so we really need to gather experiences and learn how to make this use of AI impactful.”
But there is an urgency that extends beyond newsrooms. “On the other side, we see the whole information ecosystem changing, not always towards the best for journalism,” Leidel observes. “We see a lot of AI slop at the moment: harmful information is being flooded into the internet. So we need to make sure that journalism still has a chance to be distributed.”
The fellowship has introduced what Leidel calls an “AI sandbox”, a safe environment where journalists can experiment with new tools and learn before integrating them into daily routines. This approach reflects a broader philosophy: smart experimentation paired with oversight. “It’s important to have spaces for colleagues to try things out, but also to have several experts in the newsroom who give oversight and guidance on how to use these tools responsibly,” Leidel explains.
For Ouma Elvine Tina, a feminist journalist and 2025 AWJP Gender and AI fellow, the stakes are particularly high. She has watched as AI technology transformed from a tool of opportunity into a weapon of harm.
“Kenya’s media is really notorious for sensationalism,” she says, “and that majorly affects women. The Kenyan media is also one of the spaces or platforms that promote technology-facilitated violence against women, particularly women in public life.”

The evidence is mounting. Mwende Mukwanyaga, an AI ethicist and co-convener of the AI Salon Kenya, has documented a troubling trajectory. “As of 2025, the abuse towards women through the implementation and misuse of AI has increased,” she reports.
Recent examples are damning. On the platform X, Grok, an AI tool that generates videos and images, has been weaponised in a coordinated campaign against women.
Men are using the tool to undress women who are fully clothed in their original images. In other cases, they are digitally “covering up” women, imposing their own standards of decency onto female bodies.
“This is going in the direction of people deciding who has body autonomy,” Mukwanyaga explains. “They decide what is decent, how women should appear.”
In Kenya today, almost every woman who goes online has experienced some form of abuse. A staggering 99.3 per cent of women and girls reported facing technology-facilitated violence, whether through harassment, threats, exploitation, or emotional torture.
This is according to the Technology-Facilitated Violence Against Women and Girls (TFVAWG) report launched by the Women Advocates Research and Documentation Centre (WARDC), UN Women, and the Federation of Women Lawyers (FIDA).
The findings paint a grim picture of how the digital revolution has opened new opportunities while also exposing women to risks in virtual spaces. According to the report, 97.6 per cent of women in Kenya had endured psychological and emotional torture online, the most common form of digital abuse.
The problem runs deeper than isolated incidents. The datasets used to train generative AI models are riddled with bias. When asked to generate images of Black African women, many AI tools produce caricatures exaggerating lips, breasts, and buttocks, reflecting the racial stereotypes embedded in their training data.
This is not accidental. It reflects whose data was prioritised during development, whose perspectives were centred, and whose bodies were commodified in the process.
“Data sets that train AI do not fully capture the perspectives and views of women,” Tina notes.
“Access to AI is really limited for all women in Kenya. Not everyone has access. Even people living in urban informal settlements think AI can only be used to write an email or a CV. So the uptake of AI is not as deep as it should have been. And if women are not meaningfully engaged, it’s going to be another avenue or tool that men will use to perpetuate violence and to try to control and harm women.”
Ethical guidelines
The solution requires urgent action on multiple fronts. First, newsrooms must invest in AI literacy. “It’s really important that people first of all understand what the technology is about, how to use it, and what the risks are,” Leidel emphasises.
“The problem with AI tools at the moment is that they are still not 100 per cent reliable. So you always need a human in the loop. You need a policy; you need ethical guidelines to make sure that you don’t have the risk of losing quality in what you cover.”
Second, African newsrooms need to address a critical technical gap. Mehta identifies a structural problem: “The first challenge that African newsrooms face is the lack of support in low-resource languages for modern tools that they may be using elsewhere, and the lack of comprehensive guidelines around AI adoption for local newsrooms. Many newsrooms are likely to face resource constraints and technical challenges when they try to implement AI tools. This compounds the problem and makes a bigger divide between African newsrooms and modern AI tools.”
Solving this requires intentional design. “Solving the problem of AI tools for low-resource languages, as well as accounting for design feedback from local newsrooms, provides a good set of first steps towards designing better tooling for African newsrooms,” Mehta suggests.
“It is important to use verifiers that will allow you to tell whether the response of the AI model meets your basic threshold for publication.”
Third, newsrooms must establish comprehensive safeguards. This means transparent editing processes, human review of all AI-generated claims, and clear disclosure to audiences when AI is used.
For weather and sports reporting, areas where AI agents can generate templated articles, this framework already exists in some outlets. But for more complex journalism, human judgment remains essential.
“You can’t ask these tools to produce an investigative story or to run a newsroom in a meaningful way,” Mugendi notes. “They are trained on existing data, so they lack the kind of imagination that makes storytelling interesting and engaging.”
A crucial challenge remains: how to ensure AI serves journalism without perpetuating bias. Leidel believes journalism itself has a role to play.
“How can AI be used to make journalism more inclusive, not biased? This is a big question and a difficult one. But what we as journalists can do is cover AI and create public pressure and awareness to make sure that people understand that AI models are biased and that the public has a chance to contribute to these models.”
The root of bias lies in data. “It’s all about data,” Leidel explains. “What we see is that data is biased because it’s reflecting the world as it is, and we know that the world has a lot of problems, including gender bias. The datasets are biased, and this influences the design of AI tools. We need to create awareness on how to change it.”
Mukwanyaga is emphatic about what Kenya’s media leadership must do. “Kenya is ready for AI,” she argues.
“Our newsrooms are ready for AI. However, the people at the top need to start making very good decisions, put the policies in place, and make it not shameful. Make it something that’s normalised, something you don’t have to hide in your newsroom.
This is especially important for women so that they don’t feel like they’re going to be undermined for their work because they used AI to edit or make an image. We can’t continue operating the way we were while the entire world is pivoting to another place.”
Leidel’s framework for balancing innovation with ethical responsibility rests on what he calls “smart experimentation”: creating safe environments where journalists can learn, like the AI sandbox introduced through the fellowship, while maintaining oversight from experts within newsrooms.

“Experiment with as much as you can, but have oversight,” he explains. “Have people who understand AI and can guide others on how to use these tools responsibly.”
As elections approach in Kenya, the risks intensify. “Our biggest fear is AI misinformation and AI-generated videos and deepfakes, especially for women,” Mukwanyaga warns. This is not theoretical. It is an immediate threat.
Meanwhile, Kenya is experiencing rapid AI adoption. The country’s 92 per cent smartphone penetration, government-supported digital initiatives and innovative Silicon Savannah community have combined to create Africa’s most fertile ground for AI adoption.
The artificial intelligence market in Kenya is projected to grow at an annual rate of 28.22 per cent between 2024 and 2030, reaching a market value of about $1.07 billion (Sh131 billion) by 2030.
Kenya recently became the 16th African country to launch a national AI strategy, a framework covering 2025 to 2030. It outlines the government’s commitment to positioning the country as a regional leader in AI research and development, innovation and commercialisation.
Yet awareness lags behind adoption. Only 32 per cent of Kenyans are aware of AI, though those most familiar with it use it extensively.
This gap between rapid technological advancement and public understanding creates dangerous conditions, particularly for women navigating spaces where AI is being deployed with little oversight or accountability.