By: Nabeel Odeh
With the rise of generative AI tools such as ChatGPT across the media landscape, reactions inside newsrooms have sharply diverged. Some dismissed these tools outright, seeing them as nothing more than another fleeting tech bubble—much like the metaverse hype that faded a few years ago. Others rushed to embrace them with enthusiasm, deploying AI without a clear editorial vision or a deep understanding of its implications for journalistic integrity.
Between these two extremes, a more fundamental question has begun to surface: What does it mean to be a journalist in an era where algorithms can write, analyze, and edit almost anything in a matter of seconds?
This question is no longer philosophical; it sits at the heart of the profession’s future, defining the new boundaries of credibility, creativity, and human judgment in journalism.
Every journalist who spends most of their time chasing headlines and producing daily reports now finds themselves compelled to rethink the foundations of the profession—not out of fear that AI will replace them, but out of a need to adapt and evolve in order to preserve their ability to craft narratives that are deeper, more critical, and more aware.
AI, as I firmly believe, is neither an adversary nor a substitute for the human journalist. At its core, it is a professional challenge—one that requires practitioners to explore new ways of strengthening journalistic culture amid the rapid, almost explosive transformations we are witnessing. Meeting this moment demands safeguarding the essential journalistic quartet: critique, control, accountability, and transparency.
AI Is Not a Neutral Mirror of the World
At first glance, AI tools—especially generative ones—may appear to be neutral assistants that do not interfere with the content they produce. Yet in practice, and based on my own experience, the reality is far more complex. These models, including ChatGPT, do not generate ideas out of thin air; their responses are built on enormous datasets filled with linguistic, cultural, and historical biases—some visible, many more hidden, layered, and accumulated over time.
These biases are not accidental. They are the natural byproduct of human biases reflected in cultural production shaped by dynamics of power and authority. And no matter how skilled we become at reducing algorithmic bias, the outcome is ultimately a narrowing of the margin—not its elimination.
And when we, as journalists, ask these models to craft a news story or write a headline, they are not “thinking” or “understanding” in any human sense. Instead, they recombine the most statistically probable patterns based on what they have been trained on.
In other words, they simulate language, not reality—and that is precisely why we call them artificial.
This is precisely where the danger of relying on AI to generate journalistic content emerges. These tools can easily slip—unintentionally—into reproducing biased or misleading narratives. Lacking critical judgment and the capacity for verification, AI becomes a potential partner in producing content that appears objective on the surface, yet in essence may recycle entrenched biases or even introduce subtle distortions. Such risks demand a heightened level of editorial oversight.
This is why I consistently emphasize—whether in talks or training programs—the central role of the human editor. My approach is built on the conviction that journalistic production must be grounded in a human–machine team model, where AI and journalists work side by side rather than in competition. Put differently, the journalist should collaborate with AI as part of a unified team, while maintaining their indispensable role in guidance, verification, and editorial judgment—a triad that remains non-negotiable in any serious journalistic practice.
Why Do We Write? And What Does Editing Mean in the Age of the Smart Machine?
Confronted with the technological wave unleashed by rapid advances in AI models—models that can now generate text in a matter of moments—we find ourselves compelled to revisit a fundamental question we once assumed was settled: Why do we write? And what is the purpose of journalistic editing?
In an era where smart machines can produce endless streams of text, the act of writing regains its deeper meaning—not as a mechanical function, but as a human exercise in intention, perspective, and interpretation. The question is no longer about speed or volume; it is about the value that human judgment brings to the text.
In the past, the answers to these questions felt almost self-evident. We wrote because journalism was our window for understanding the world, our tool for interpreting it, and the quiet authority responsible for society’s self-correction. Today, however—now that a machine can generate a report or an article in less than a minute—we face a profound challenge that forces us to reconsider, perhaps even redefine, our answers. We write because journalism is not merely a linguistic act; it is an act of critical thinking. We edit not to polish wording alone, but to uncover what lies beneath the text—to expose what is hidden, to illuminate what has been silenced.
Unlike a large language model, the journalist does far more than gather information and stitch sentences together. They push deeper—into the situational context, the human dimension behind the story, and the power dynamics that shape whose knowledge is elevated and whose is erased. A journalist asks: Who benefits from this narrative? Who will be remembered, and who will be excluded? What is left unsaid—and what must be said? These are questions no machine can generate, because they arise not from computational pattern-matching but from human consciousness—from an alert professional conscience and from a political and social awareness of the immediate and historical contexts that define people’s lives.
AI, no matter how advanced, lacks what we might call an editorial intention. Algorithms do not challenge dominant narratives, do not take the side of justice when it is absent or incomplete, do not question authority, and do not interrogate the prevailing storyline. They simply reproduce patterns—they do not deconstruct them. The paradox is clear: the more capable machines become at generating text, the more they depend on humans to write, analyze, and dismantle narratives so they can learn new patterns to replicate later.
From Experience to Digital Acumen: What Future Awaits Journalists in the Age of the Smart Machine?
Smart digital transformation is no longer a strategic option for media institutions—it has become an unavoidable destiny. The real question is no longer “Should we use AI?” but rather “How do we use it without sacrificing the essence of journalism?” There is a profound difference between a journalist who employs intelligent tools with a sharp, critical eye and a vigilant professional conscience, and another who gradually slips into the role of a mere “quality inspector” for content generated by algorithms.
The real challenge is not to resist artificial intelligence, but to integrate it within a moral and professional framework that redefines journalistic skills in light of new technological capabilities. This raises a set of urgent questions that newsrooms can no longer ignore:
First: Which tasks can be safely delegated to machines without undermining the core mission of journalism—such as sorting, summarizing, archiving, or even suggesting headlines?
Second: What are the red lines that must remain exclusively human—investigative reporting, political analysis, narrative verification, and the shaping of editorial angles?
Third: Does the media institution have a clear editorial policy governing the use of AI? And who holds the authority to define and revise these boundaries?
Finally: How do we maintain transparency and protect media institutions from risking the trust of their audiences?
The shift from experience to digital acumen does not imply abandoning the tools of the past. Rather, it means integrating them into a broader epistemic horizon—one that reorients our compass toward a more conscious understanding of the human role in the age of the machine.
As I often emphasize, technology is not the adversary of the journalist. But it is a continuous test of their professional depth and their ability to redraw the boundaries of their role amid the rapid expansion of algorithmic capabilities and their growing influence on knowledge production.
Training Journalists on AI: From Skill to Awareness
It is not enough for newsrooms to be equipped with the latest AI tools. The issue is not technical capacity alone—it is how these tools are embedded within a conscious editorial framework. Just as we were once trained to verify the credibility of sources, we must now learn to verify the outputs of language models: Is the text balanced? Does it reflect a diversity of perspectives? Does it reinforce exclusionary patterns or dominant narratives without scrutiny? Investigative rigor remains the same in essence, even if its forms and materials evolve.
In other words, using AI requires more than technical proficiency—it requires critical awareness, far removed from superficial fascination with the machine. No matter how “intelligent” the tool appears, it does not know how to question, nor when to remain silent, nor when to apologize.
In my training sessions with journalists, I stress that the journalist in the age of AI must exert more effort, though the nature of that effort has changed. AI will not grant complete comfort in the newsroom; rather, it will impose new responsibilities that demand deeper engagement. The outcome, however, will be markedly different: we will still work hard, yes, but the results will be greater, faster, and more impactful than ever before.
The Elephant in the Newsroom… Impossible to Ignore
At the heart of today’s newsrooms stands a very visible elephant named Artificial Intelligence. Despite its expanding presence and accelerating influence, some media institutions continue to behave as though it simply isn’t there—turning their gaze away and carrying on with business as usual. But pretending that nothing has changed is the worst response journalism can offer in a moment of such profound transformation. Denial does not stop disruption; it only widens the gap between institutions that adapt and those that fall behind.
AI is not a peripheral tool—it is a turning point in the history of the profession. Treating it as a passing trend or a fleeting technological fantasy is a profound mistake. What is required instead is an honest acknowledgment followed by responsible institutional action, aligned with the three integration levels I always highlight at the beginning of every course or lecture:
- Individual Integration – through training and skill-building for journalists themselves.
- Departmental Integration – embedding AI into newsroom workflows, policies, and editorial processes.
- Institutional Integration – restructuring the organization as a whole to harness AI strategically and sustainably.
Only when all three levels are addressed can media institutions navigate this transformation without losing their professional core.
Journalists have adapted to major technological shifts before—from the arrival of the camera in newsrooms, to the explosion of the internet, to the disruptive shock of social media. Today, artificial intelligence stands before them as the most complex and consequential transformation yet. Once again, the question is not whether we will use it, but how we will redefine our professional identity in its presence. This is a question of being, not merely of technique. And to preserve that sense of being, we must awaken our instinct for adaptation—not on the basis of “survival of the strongest,” but survival of the most capable of adapting and creating.