
Knowledge Management in the Age of AI — It’s Time to Upgrade Your Roadmap

May 13, 2026
Guest Blogger Ekta Sachania


We talk a lot about KM implementation. But how many of us have stopped to ask — is our KM framework upgraded and evolved for the AI era?

I created a KM roadmap a while back to help KM folks like me have a structured approach to knowledge management. The six steps — from defining objectives to measuring outcomes — remain as relevant as ever. But times have changed, and so has our KM roadmap.


AI is no longer a future consideration. And if our KM systems are not designed with AI in mind, we are leaving enormous value on the table.

So I went back to my original framework and asked one simple question at every step: where can AI make this smarter, faster, and more impactful?

Here is what that looks like:

When you define objectives, AI can analyse patterns across your organisation to predict which knowledge gaps are causing the most friction — before your customers even tell you.

When you identify knowledge sources, AI can crawl across your systems, documents, and conversations to surface the knowledge that already exists but nobody can find.

When you choose your KMS, look beyond traditional systems. AI-native platforms with smart search, auto-tagging, and content recommendations are now the baseline, not the premium.

When you design your KM plan, let AI do the heavy lifting on categorisation, taxonomy suggestions, and flagging content that has gone stale or outdated.
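The stale-content check in particular is easy to pilot before buying anything. Here is a minimal sketch in Python, assuming each document record carries a last-updated timestamp; the field names and sample data are hypothetical, not from any particular KMS:

```python
from datetime import datetime, timedelta

# Hypothetical document records; in practice these would come from your KMS.
documents = [
    {"title": "Refund policy", "last_updated": datetime(2024, 1, 10)},
    {"title": "Onboarding guide", "last_updated": datetime(2026, 3, 2)},
]

def flag_stale(docs, max_age_days=365, now=None):
    """Return documents not touched within max_age_days."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return [d for d in docs if d["last_updated"] < cutoff]

for doc in flag_stale(documents, now=datetime(2026, 5, 1)):
    print(f"Review needed: {doc['title']}")
```

A real AI-assisted version would add semantic checks (does the content contradict newer documents?), but even a date-based sweep surfaces candidates for review.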

When you train for cultural shift, AI can create personalised learning paths so every team member gets the knowledge most relevant to their role — not a one-size-fits-all training deck.

When you measure and evaluate, AI dashboards can track not just knowledge usage but also real CX outcomes — CSAT, first-contact resolution, average handling time — connecting your KM investment directly to business results.
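To make that measurement step concrete, here is a hedged sketch of the metric layer such a dashboard might compute from raw ticket data. The record fields (handle_minutes, contacts, used_kb_article) are illustrative assumptions, not from any specific product:

```python
# Hypothetical support-ticket records; field names are illustrative only.
tickets = [
    {"handle_minutes": 12, "contacts": 1, "used_kb_article": True},
    {"handle_minutes": 30, "contacts": 3, "used_kb_article": False},
    {"handle_minutes": 8,  "contacts": 1, "used_kb_article": True},
]

def cx_summary(rows):
    """Summarise CX outcomes alongside knowledge usage for one period."""
    n = len(rows)
    return {
        # Average handling time in minutes.
        "aht": sum(r["handle_minutes"] for r in rows) / n,
        # Share of tickets resolved on first contact.
        "fcr": sum(r["contacts"] == 1 for r in rows) / n,
        # Share of tickets where a knowledge-base article was used.
        "kb_usage": sum(r["used_kb_article"] for r in rows) / n,
    }

print(cx_summary(tickets))
```

Tracking kb_usage next to AHT and FCR over time is one simple way to connect KM investment to the CX outcomes the post mentions, without claiming causation from a single number.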

This is not about replacing the human side of knowledge management. It is about amplifying it by using AI as your assistant.

Your people still drive the culture. Your experts still create the insight. AI simply helps you do more with what you already have, automating the routine tasks and freeing your time for the work that matters.

If you are a KM professional thinking about where to focus your energy this year, start here. Not by overhauling everything — but by adding the AI layer, one step at a time.

I would love to know — which of these six steps do you think AI can impact the most in your organisation? Drop your thoughts in the comments.

_______________________________

Conversational Leadership in the Age of AI

May 13, 2026
David Gurteen

Artificial intelligence is reshaping how organizations handle information and influence decisions. Many treat it as a replacement for Knowledge Management, assuming better answers will follow.

The real challenge is how people think, question, and decide together with AI, which makes Conversational Leadership a practical discipline for responsible judgement and action.

Artificial intelligence is reshaping how organizations handle information and what we often call knowledge. It is tempting to see it as a replacement for Knowledge Management, a more capable system that finally delivers what earlier approaches struggled to achieve. In one sense, that is understandable. AI can capture, retrieve, and synthesize information at a scale and speed that traditional repositories, taxonomies, and search tools never managed.

But if that is all we mean by Knowledge Management, then we have reduced it to something quite limited.

The deeper ambition was never just better storage or faster access. It was always about better judgement, better learning, and better decisions in situations that are often messy and uncertain. The challenge was never simply information. It was how we make sense of it together.

AI changes the terrain. It does not just store or retrieve information; it can participate in our flow of thinking. It can reframe questions, suggest connections, and influence what we notice. When we begin to think with AI rather than only use it as a tool, the line between information and knowledge becomes less clear.

AI works with representations of the past. It does not experience the present as we do, and it does not bear responsibility for what follows. That remains with us.

This matters because AI outputs often feel fluent and convincing. The risk is not that we know too little, but that we accept too quickly. We may find ourselves agreeing without fully examining what is being suggested or overlooking what is missing.

As AI strengthens the informational backbone of organizations, the real work shifts. It moves toward interpretation, alignment, and responsible action. It asks more of us in how we question what we see, how we surface assumptions, and how willing we are to stay with uncertainty rather than close things down too quickly.

Conversation becomes central here, but not just any conversation. Many organizational conversations reinforce existing patterns, avoid challenge, or defer to authority. For conversation to be useful in this context, the conditions need to support curiosity, allow for doubt, and enable thinking things through together without rushing to agreement.

This is where Conversational Leadership comes in, not as a role or a position, but as a practice. It is about creating the conditions in which people can think together more carefully, especially when the issues are complex and the answers are not obvious.

In the age of AI, that practice extends to how we engage with the technology itself. If AI becomes part of how thinking happens in organizations, then it also becomes part of the conversation. It needs to be questioned, tested, and worked with, not simply accepted.

Seen this way, AI is not an oracle that provides answers, but a participant in a broader system of sense-making. It can extend our thinking, but it does not replace our responsibility for judgement, ethics, or action.

So, the question is less about what AI can do, and more about how we respond to it. Knowledge Management, in this light, becomes less about systems and more about our collective ability to make sense of things together in environments where AI is always present.

The tools will continue to evolve. The need to think well together, and to take responsibility for what we decide and do, remains a human concern.

____________________________________

AI Outcomes Made Simple: It Starts with Trusted Organisational Knowledge

May 5, 2026

In many discussions about AI literacy, a natural follow-up question quickly appears: What does Knowledge Management literacy mean inside organisations?

The term is often mentioned, but rarely explained in practical terms.

Knowledge Management literacy is not primarily about tools or platforms.

It is about understanding how organisational knowledge is recognised, structured, and validated so that it can reliably support decisions. The simple framework below summarises four practical capabilities that shape how organisations work with knowledge.


These capabilities may appear straightforward. In practice, they often determine whether knowledge supports sound decisions or simply turns into fragmented information.

1. Locate Knowledge

The first capability is the ability to locate where organisational knowledge actually lives.

In many organisations, knowledge is distributed across multiple systems: document repositories, collaboration platforms, shared drives, internal portals, and email archives. Without a clear understanding of this landscape, people often spend a considerable amount of time simply searching for information.

Knowledge Management literacy therefore begins with a basic awareness of the organisation’s knowledge environment: where different types of knowledge are stored and which systems serve which purpose.

Without this basic capability, organisations struggle to use knowledge consistently, whether it is accessed by people or by AI systems.

2. Identify the Authoritative Source

Locating information is not enough. The next step is recognising which version of knowledge can be trusted.

In practice, organisations often operate with multiple versions of the same document, guideline, or procedure. Teams may rely on different sources without knowing which version is officially maintained.

Knowledge Management literacy therefore includes the ability to identify the authoritative source: the version of knowledge that is validated, maintained, and intended to guide decisions.

3. Understand Knowledge Context

Knowledge is never created in isolation. It always emerges in a particular context: a regulatory environment, a project phase, or a specific organisational challenge.

Understanding this context is essential for interpreting knowledge correctly. Without it, documents and guidance may easily be reused in situations where they no longer apply. Knowledge Management literacy therefore involves recognising how and why knowledge was produced, and under which conditions it should be interpreted.

4. Validate Knowledge Before Reuse

Finally, knowledge must be validated before it is reused, shared, or embedded in automated processes.

Organisations evolve, policies change, and procedures are updated. If knowledge is reused without verification, outdated information can easily spread across teams or systems.

Knowledge Management literacy therefore requires the ability to confirm that knowledge remains current and relevant before it is applied again.
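As a thought experiment, this validation step can be expressed as a gate against an authoritative register. Everything below (the register, document IDs, version numbers, review dates) is hypothetical; the point is only that reuse is conditional on source, version, and currency:

```python
from datetime import date

# Hypothetical register of authoritative documents and their review dates.
register = {
    "travel-policy": {"version": "3.2", "next_review": date(2026, 9, 1)},
}

def validate(doc_id, version, today):
    """Confirm a document is the maintained version and still current."""
    entry = register.get(doc_id)
    if entry is None:
        return "unknown source: do not reuse"
    if version != entry["version"]:
        return "superseded version: fetch the authoritative copy"
    if today > entry["next_review"]:
        return "past review date: revalidate before reuse"
    return "ok to reuse"

print(validate("travel-policy", "3.1", date(2026, 5, 5)))
# → superseded version: fetch the authoritative copy
```

The same three questions (known source? current version? still within its review window?) apply whether the reuser is a person, a team, or an AI retrieval pipeline.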

Why these capabilities matter for AI

These four capabilities become particularly important as organisations explore AI-enabled systems.

AI can retrieve, process, and connect information at scale. However, the quality of its outputs depends directly on the structure and reliability of the knowledge it accesses.

If knowledge sources are fragmented, unclear, or outdated, AI may simply accelerate confusion rather than support judgement.

For this reason, developing Knowledge Management literacy is not only a Knowledge Management concern. It is increasingly becoming a foundational capability for organisations seeking to use AI responsibly and effectively.

Future Knowledge Nuggets will explore these capabilities in greater detail and examine how organisations can strengthen them in practice.

Disclaimer: The views expressed in this article are my own and do not represent the position of my employer or any institution I am associated with.

__________________

When Writing Was the New AI

February 21, 2026

Revisiting Plato’s tale of King Thamus and Theuth to understand our concerns about AI

Each new wave of Knowledge Management technology raises familiar questions about what we might lose. Writing once seemed a threat to memory and understanding, much as AI does today. Revisiting Plato’s story helps clarify what changes, what endures, and why conversation still matters in KM.


There is a story of Thamus and Theuth from Plato that is worth returning to. Socrates recounts it in the Phaedrus. The Egyptian god Theuth, inventor of many arts, appears before King Thamus with a new invention: writing.

Theuth claims it will improve memory. People will become wiser. They will be able to record what they know and not forget.

Thamus disagrees. Writing, he says, will weaken memory. People will rely on external marks instead of remembering for themselves. They will read widely but not truly understand. They will possess the appearance of wisdom without its substance.

It is a simple exchange. The inventor sees promise. The ruler sees risk.

Writing did change us. It shifted knowledge beyond the mind and into artefacts. It altered how knowledge travels across time and distance. From a Knowledge Management perspective, it was a foundational technology. It enabled archives, laws, contracts, science, administration. It allowed knowledge to scale.

But it did not destroy thinking. It transformed it.

Now we face a similar moment.

AI systems generate fluent answers, summarise documents, draft reports, analyse patterns. In KM terms, they accelerate the capture, retrieval, and recombination of information. And again, we hear the anxiety. Memory will erode. Thinking will weaken. We will mistake fluency for understanding.

There is substance to that concern. If we outsource judgement, if we stop questioning, if we treat generated output as authoritative, then we do diminish something important.

Yet every stage in the evolution of Knowledge Management has involved externalising knowledge in some way. Writing did it. Printing did it. Databases did it. The question is not whether we externalise knowledge, but how we relate to what we externalise.

This is where Conversational Leadership enters the picture.

Classic Knowledge Management focuses on organising, storing, and sharing knowledge. That work remains necessary. But in conditions of uncertainty, stored knowledge is not enough. We must interpret it, test it, challenge it, and apply it with judgement.

In the age of AI, answers are abundant. Judgement is not. The scarce capability is the ability to think together, to examine assumptions, to surface differences, and to reason in dialogue rather than accept what sounds plausible.

AI can generate text. It cannot take responsibility. It cannot care about consequences. It cannot sit in disagreement and work through it. That remains human work.

The deeper lesson in the story of Thamus and Theuth is not that technology is dangerous, nor that it is liberating. It is that each new knowledge technology reshapes the conditions under which we think. The task for Knowledge Management today is not simply to deploy AI tools, but to strengthen the conversational capacity through which knowledge becomes wise action.

While technology will evolve, the human responsibility to reason together will not disappear.

 

________________________________

Sparking the Knowledge Management Engine with an AI Centre of Excellence

January 31, 2026
Rooven Pakkiri


For the first time in the history of enterprise technology, the people using the technology know more about its potential than the people buying it.

Let that sink in for a moment. Because it inverts everything we know about organizational change management - and it's why your traditional approach to building a Centre of Excellence will fail when it comes to AI.

The ChatGPT Moment

Dr. Debbie Qaqish, in her white paper on AI Centres of Excellence (2024), captures this perfectly. She describes watching every major tech evolution of the past four decades - from rotary phones to smartphones, from dial-up internet to cloud computing, from on-premise servers to SaaS platforms. Nothing, she says, was as earth-shaking as the release of ChatGPT on November 30, 2022.

Why? Because every previous technology came with a predictable evolution path. You could see where it was going. You could plan for it. You could reasonably accurately define use cases upfront and execute against them.

AI shatters that predictability. We are in unknown territory. And that changes everything about how organisations must adapt.

How We've Always Done Tech Implementation

Let me show you what I mean with a concrete example.

Think about a CRM rollout in the 2010s - let's say Salesforce:

  • Leadership identifies the problem: "Our sales pipeline visibility is terrible; deals are falling through cracks"
  • Leadership selects the solution: They evaluate vendors and choose Salesforce
  • Leadership defines the use cases: Lead tracking, opportunity management, forecasting reports - all documented upfront in requirements
  • Workers execute the plan: Sales reps get trained on defined fields, follow mandatory processes, use standardized reports
  • Knowledge flows DOWN: "Here's how you'll use it, here's the dashboard you'll look at, here's the fields you'll fill in"

The Centre of Excellence's role in this world? Implementation, training, and optimisation of those predetermined use cases.

This model worked beautifully for decades. The technology was stable. The use cases were knowable. The path was clear.

Enter AI - And Everything Breaks

Now let me show you what's actually happening with AI in organisations today.

I recently worked with a European Customer Support team on AI integration. Here's what we discovered:

Support agents started using AI to draft responses. Nothing revolutionary there - that was the planned use case. But then something interesting happened. Agents began noticing that the AI was identifying sentiment patterns they had never formally tracked. One agent said, "Wait - this AI noticed that customers who use certain phrases are actually asking about X, not Y."

Then they discovered the AI could predict escalation risk based on subtle language cues that nobody had ever documented. These weren't use cases we planned for. These were discoveries made by front-line workers experimenting with the technology.

The knowledge didn't flow down. It flowed up.

The AI CoE's role became capturing these emergent insights and scaling them across teams. Not training people on predetermined workflows but harvesting what workers discovered about AI's capabilities.

The Tacit Knowledge Goldmine

But here's where it gets really interesting - where AI and knowledge management converge in a way that's never been possible before.

Consider financial advisors. I recently delivered a customised program for an insurance client, working with their nationwide team of advisors. These senior advisors hold extraordinary tacit knowledge - the kind that traditional technology could never capture:

Pattern Recognition: "I can tell from a conversation if someone's underinsured." That's not in any manual. That's 20 years of experience reading between the lines.

Client Psychology: "How to explain complex coverage in simple terms. When to push and when to back off. How to have difficult conversations about underinsurance." No CRM workflow can teach this. It's intuitive, contextual emotional intelligence built over thousands of client interactions.

Local/Regional Expertise: Understanding flood zones, weather patterns, crime rates, local business ecosystems, community relationships and networks. This is place-based tacit knowledge that exists in advisors' heads, not in databases.

Claims Wisdom: How to guide clients through claims processes, what to document at the scene, how to advocate for clients with claims teams. Real-world responses to "that's too expensive." How to explain the value of coverage.

Creative Problem-Solving: Which products naturally go together, how to package solutions for different life stages, creative solutions for unique client situations. Each client is different. Senior advisors have a mental library of "I once had a client who..." scenarios that saved the day.

Underwriting Judgment: When to escalate a risk versus handle it, how to present a borderline risk to underwriters, what information underwriters really need.

The traditional tech approach would have built workflows for standard cases, created dropdown menus for common scenarios, documented "best practices" in a manual nobody reads - and missed 80% of the actual value in those advisors' heads.

But here's what we discovered with AI:

When advisors start experimenting with AI in Communities of Practice, something remarkable can happen: the AI helps them articulate their tacit knowledge. A veteran advisor might say: "The AI just explained the pattern I've been following unconsciously for 15 years. I never knew how to teach this to newer advisors, but now I can see it."

AI becomes the externalisation engine - converting "I just know" into "Here's why I know."

And the AI CoE's role in this brave new world? Systematically capturing these discoveries flowing UP from practitioners and scaling them across the entire advisor network.

This Is Pure SECI in Action

If you're familiar with knowledge management theory, you'll recognize Nonaka's SECI model at work:

  • Socialisation: Practitioners in Communities of Practice sharing "hey, I tried this with AI and it worked"
  • Externalisation: The CoE capturing those tacit discoveries and converting them into documented use cases
  • Combination: The CoE synthesising patterns across experiments into frameworks and best practices
  • Internalisation: Organisation-wide learning and capability building

The AI Centre of Excellence becomes the knowledge conversion engine - transforming frontline tacit knowledge about AI's emergent capabilities into organisational strategic advantage.

This has never been possible before. Traditional technology couldn't access tacit knowledge. It could only automate explicit processes. AI can help surface, articulate, and scale what people know but couldn't explain.

Why AI CoEs Are Completely Different

Dr. Qaqish identifies three key differences that make AI Centres of Excellence unlike any CoE you've built before:

1. Continuous big changes vs. step-change improvement

Traditional tech followed a "pilot, test, deploy, optimise" model. You implemented once, then made incremental improvements. AI doesn't work that way. It requires ongoing adaptation to rapid, sometimes disruptive changes. Your CoE isn't optimising a stable platform - it's managing continuous experimentation and change.

2. Bottom-up vs. top-down

This is the game-changer. Because nobody can predict AI's evolution, initiatives must come from hands-on users experimenting and learning, not from leadership defining use cases upfront. The insights flow up from practitioners, not down from executives.

This inverts traditional change management. Your workers know more about AI's potential applications than your leadership does. The CoE's job is to harvest that knowledge and convert it into organisational capability.

3. Requires more leadership, resourcing, and budget

Unlike other technology CoEs that could operate as "nice to have" side projects staffed by people in their free time, the AI CoE needs dedicated time, real budget, executive clout, new incentives, and structured support.

Why? Because this isn't about implementing a predetermined solution. It's about creating an organisational learning system that can adapt at the speed of AI evolution.

The Two Functions Your AI CoE Must Integrate

Some frameworks separate the AI Council (governance, risk, compliance) from the AI Centre of Excellence (innovation, experimentation, capability building). I've found this creates unnecessary friction and slows everything down.

Your AI CoE needs to integrate both functions:

Governance Function: Policy development, risk assessment, ethical frameworks, compliance. The "don't screw up" guardrails.

Innovation Function: Managed experimentation, capability building, training, best practices. The "make cool stuff happen" engine.

Why keep them together? Because effective experimentation requires governance guardrails. You can't separate "try new things" from "do it safely" without creating either chaos or paralysis. One integrated team moves faster than two teams coordinating.

What This Means For Your Organization

The implications are profound:

Traditional tech CoE role: Train people to use the platform as designed.

AI CoE role: Harvest what people discover about AI's capabilities and convert it into strategic advantage.

Traditional knowledge flow: Leadership → "Here's the system" → Workers use it

AI knowledge flow: Workers → "Here's what we discovered" → CoE → Organisational transformation

Traditional CoE success metric: Adoption rates, process compliance, efficiency gains

AI CoE success metric: Rate of knowledge capture, speed of capability scaling, tacit knowledge externalisation

Companies that treat their AI CoE like a traditional implementation team will lose to companies that treat it like a knowledge creation system.

Getting Started

If you're building or reimagining your AI Centre of Excellence, here's where to focus:

1. Establish Communities of Practice - Create structured spaces for hands-on workers to experiment and share discoveries. This is your knowledge generation engine.

2. Build knowledge capture systems - Don't just let experiments happen. Systematically document what's being learned, especially tacit knowledge that AI helps surface.

3. Ensure executive clout - Your CoE leader needs power to move quickly on discoveries. When front-line workers find a game-changing application, you need to scale it fast.

4. Resource it properly - This isn't a side project. People need dedicated time to experiment, reflect, and collaborate. Budget for tools, training, and incentives.

5. Integrate governance and innovation - Don't separate them. Build one CoE that can experiment safely and scale learnings responsibly.

The Bottom Line

For the first time in enterprise technology history, the knowledge about what's possible flows from the bottom up, not the top down. Your front-line workers, experimenting with AI in their daily work, are discovering capabilities and applications that leadership couldn't have predicted.

The AI Centre of Excellence isn't about deploying technology. It's about harvesting tacit knowledge, converting discoveries into capabilities, and building organisational learning systems that can adapt at the speed of AI evolution.

This is where AI and knowledge management meet. And it changes everything about how we think about Centres of Excellence.

The question isn't whether to build an AI CoE. The question is: Are you building a traditional implementation team or a knowledge conversion engine?

Because only one of those will succeed in the AI era.

 ______________________________________________________