AI in communication: why experimenting isn’t enough anymore
How communication teams can grow toward structural use of AI
TL;DR
Most communication teams are experimenting with AI, but few have embedded it in shared ways of working. Real value comes when AI moves from individual experimentation to collective practice, guided by clear principles and connected to everyday communication processes.
AI has officially entered the communication profession. Not with a big bang, but quietly, through individual experiments, clever shortcuts, and curious colleagues trying things out. Most communication teams are already using AI in some way—editing texts, structuring ideas, summarizing feedback. And yet, in many organizations, AI still lives in the margins: used individually, rarely discussed openly, and hardly ever anchored in shared agreements or workflows.
That creates a familiar in-between phase. We experiment, but we don’t commit. We learn, but mostly alone. AI influences our work, but not always in a way that truly improves it.
The real question, then, is not whether communication departments should use AI. That question has already been answered. The question is how teams can move from scattered experimentation to structural, professional use—without heavy policies or technical roadmaps, but in a way that fits the everyday reality of communication work. This blog explores what that next step can look like, and why now is the moment to take it.
The steps
1. Start with your existing practice
The first step is not technical, but social. In almost every communication team, colleagues are already using AI. One person uses it to refine texts, another to structure ideas, someone else to summarize feedback or notes. By making this existing use visible, you create a much more realistic starting point than if you begin with rules or policies.
Often, a conversation is enough. Who uses AI, for what purpose, and what does it deliver? Just as important is the other side of the question: where do people feel doubts, discomfort, or concerns? Sharing these experiences creates space to learn together.
2. Turn experimentation into a shared learning practice
As long as AI remains something everyone does individually, it stays optional and non-binding. The move toward structural use begins when experimenting becomes part of team practice.
This does not require rigid guidelines, but it does require practical agreements. What do we use AI for? Where are we cautious? And what do we share with each other? By exchanging concrete examples, including things that did not work, the craft develops: not individually, but collectively. In practice, many departments are already doing this in some form.
3. Connect AI to existing communication processes
AI only becomes truly structural when it is embedded in daily work. Not as an extra tool, but as part of existing processes. Think of content development, analysis and monitoring, advisory work, or change communication.
The key question is then no longer which tool you use, but where AI can speed things up, deepen insights, or provide support: developing content from briefing to final editing and translation, analyzing and monitoring signals, advising, building scenarios, reflecting, or designing formats and interventions. When AI is connected to these everyday processes, it shifts from something experimental to something self-evident.
4. Organize ownership, but keep it simple
Working structurally requires clarity. Not everyone needs to be an expert, but someone does need to keep an overview. Who safeguards quality and care? Who collects examples and lessons learned? Who tracks what works? And who keeps an eye on new developments?
This does not have to be a formal role. One or two clearly designated coordinators are often enough. By organizing ownership lightly, AI remains something of the whole team rather than becoming a specialist niche.
5. Anchor AI in principles, not in tools
Tools change quickly. What is useful today may be outdated tomorrow. That is why it makes sense to anchor AI in principles rather than in specific applications.
Principles such as transparency about AI use, careful handling of data and privacy, and the idea that human judgment remains leading provide much more stability. You do not need to start from scratch here. Most communication departments already have guiding principles for internal and external communication, and many discover that these align naturally with AI—think reliability, openness, and quality.
Why internal clients may expect professionalization
Internal clients and colleagues are right to expect communication departments to keep pace with developments that affect the work. Not because AI is hype, but because it influences speed, quality, accessibility, and consistency. As communication professionals, we should reflect on this systematically and continuously.
If we ask others to change, to learn, and to adopt new tools, we should also show what that looks like when done professionally and with care.
In closing
AI is developing faster than our routines, agreements, and role definitions. Teams that remain stuck in individual and implicit use risk letting AI influence the work without actually improving it. That is exactly why now is the moment to take a conscious next step as a communication department.
Share experiences, make choices explicit, and connect AI to the craftsmanship that already exists in your team. Waiting until everything is clear will not work. But continuing to experiment without direction will not either. See it as an ongoing learning process: a green change, in which the team learns its way forward. Action is needed now, not through heavy policy documents, but by learning together and having real conversations about how AI can structurally strengthen your work.
Huib Koeleman