The 4 Steps to Designing an Effective Taxonomy: Step #2 Make Sure Your Facets Are Consistent

September 26, 2016

Taxonomy is not as daunting as it seems. In this blog series, one of EK’s taxonomy experts, Ben White, provides 4 practical steps to designing and validating a user-centric taxonomy.

Step #2: Make Sure Your Facets Are Consistent

In the first blog of my series, “The 4 Steps to Designing an Effective Taxonomy,” I spoke about the importance of designing a user-centric taxonomy. Indeed, developing an understanding of how people think about the content in question allows a taxonomist to design a clear and consistent taxonomy, enabling site visitors to find what they need. Though this may be the first step, it’s hardly the last. Once you have completed an initial taxonomy design, it’s essential to remain consistent with faceted classification when tagging your content, which is the subject of today’s blog.

For intranets and websites, the cost of an ill-considered taxonomy is lost efficiency. Creating a truly successful taxonomy design involves breaking down the content by its attributes and organizing those attributes in an easily understandable classification scheme. During this process, the taxonomist will develop multiple taxonomies related to several different categories, or facets. This method is known as faceted classification.

The end result of a faceted classification system is a faceted search capability. Faceted search is a technique that allows users to explore a collection of information by applying multiple filters. This enables users to practice a hybrid of search and browse to find content. Because users expect navigation systems to behave rationally, the terms found in the faceted classification system should describe the body of content using common and naturally occurring descriptors.
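
To make the mechanics concrete, here is a minimal sketch in Python of how a faceted filter combines facet selections. The documents, facet names, and values are invented for illustration; a real system would index facets in a search engine rather than filter a list in memory.

```python
# A minimal sketch of faceted filtering. The documents, facet names
# and values are invented for illustration; a real system would index
# facets in a search engine rather than filter a list in memory.

documents = [
    {"title": "2023 Benefits Overview", "topic": "HR", "format": "PDF", "audience": "All Staff"},
    {"title": "Quarterly Sales Deck", "topic": "Sales", "format": "Slides", "audience": "Managers"},
    {"title": "Onboarding Checklist", "topic": "HR", "format": "PDF", "audience": "New Hires"},
]

def faceted_search(items, **selected):
    """Return the items matching every selected facet value (facets combine with AND)."""
    return [
        item for item in items
        if all(item.get(facet) == value for facet, value in selected.items())
    ]

# A user narrows the collection by applying two facets at once:
for doc in faceted_search(documents, topic="HR", format="PDF"):
    print(doc["title"])  # -> 2023 Benefits Overview, Onboarding Checklist
```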

Although there is no universal set of facets that can be used across information environments, we have found there are several common facets:

  • Topic/Subject
  • Document/Product Type
  • Format
  • Audience
  • Geography

Of course this list is not exhaustive, but it’s an excellent place to start when designing a faceted classification system. A few additional tips:

  • Ensure that the terms that fall beneath each of these facets are mutually exclusive and clearly communicate the universe of content they describe.
  • Choose a list of preferred terms that reduces confusion.
  • Identify terms that speak the same language as the information environment’s users while accurately describing the content.
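
As a companion to these tips, here is a hedged sketch of a simple "lint" check for a draft facet scheme. It flags terms that appear under more than one facet, breaking mutual exclusivity, including terms that differ only by case; the facet lists are made up for illustration.

```python
# A hedged sketch of a facet "lint" check: flag terms that appear under
# more than one facet (breaking mutual exclusivity), including terms
# that differ only by case. The facet lists are made up for illustration.

facets = {
    "Topic": ["Health & Wellness", "Finance", "Travel"],
    "Format": ["PDF", "Video", "Spreadsheet"],
    "Audience": ["Managers", "New Hires", "finance"],  # collides with Topic's "Finance"
}

def lint_facets(facet_terms):
    seen = {}  # lowercased term -> (facet, original spelling) of first occurrence
    problems = []
    for facet, terms in facet_terms.items():
        for term in terms:
            key = term.lower()
            if key in seen:
                other_facet, other_term = seen[key]
                problems.append(f"'{term}' in {facet} collides with '{other_term}' in {other_facet}")
            else:
                seen[key] = (facet, term)
    return problems

for problem in lint_facets(facets):
    print(problem)  # -> 'finance' in Audience collides with 'Finance' in Topic
```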

So, if we know that inconsistent terms can create ambiguity and decrease efficiency, what can we do to address these challenges? In Information Architecture for the World Wide Web, Peter Morville outlines several guidelines for designing effective labels. These are applicable to taxonomy design as well. As Morville discusses, in order to ensure consistency it is important to pay close attention to:

  • Syntax – Verb-based terms (e.g. run) and noun-based terms (e.g. health & wellness) are often mixed together in a single faceted taxonomy. Choosing a single syntactic approach can improve consistency within the faceted search system.
  • Granularity – Within a faceted classification system, choosing terms that are approximately equal in specificity can reduce confusion and improve consistency. For example, placing "Stool", "Table", "Bergere", and "Caquetoire" at the same level in the classification system will cause confusion among users when searching and browsing, since the last two are far more specific than the first two.
  • Audience – When choosing preferred terms within a faceted classification system, it is imperative that you choose the terminology most commonly used by the audience. For instance, using the colloquial "Cute Puppies" and the scientific "Felis Catus" in the same classification system can confuse users when searching and browsing for information.

By being aware of syntax, granularity, and audience, the taxonomist can take steps to create a meaningful and consistent taxonomy that reduces confusion and increases efficiency. This benefits all users by increasing usability and findability.

Once you’ve established a taxonomy that is both user-centric and consistent with faceted classification, you’ll be ready for my next blog, which describes how to validate your taxonomy. Stay tuned! 

Design a User-Centric Taxonomy

September 7, 2016

Taxonomy is not as daunting as it seems. In this blog series, one of EK’s taxonomy experts, Ben White, provides 4 practical steps to designing and validating a user-centric taxonomy.

Step #1: Design a User-Centric Taxonomy

When most individuals hear the term "taxonomy design," the initial reaction may be to disregard the practice as too technical or complex. Yet in reality, all that taxonomy design entails is collecting the information that is already available, then organizing it to help your end users find and use the correct information efficiently and effectively. The end product, a taxonomy, is a standardized list of terms or controlled vocabulary, which can be applied to product categorization, website structure, and faceted navigation.
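
To picture what that end product looks like in practice, here is a minimal sketch of a controlled vocabulary at work: variant terms map to a single preferred term, so content tagged by different people still lands under one label. All of the terms are invented for illustration.

```python
# A minimal sketch of a controlled vocabulary: variant terms map to one
# preferred term, so content tagged by different people still lands
# under a single label. All terms here are invented for illustration.

preferred_terms = {
    "hr": "Human Resources",
    "human resources": "Human Resources",
    "personnel": "Human Resources",
    "it": "Information Technology",
    "tech support": "Information Technology",
}

def normalize_tag(raw_tag):
    """Map a free-text tag to its preferred term; pass unknown tags through unchanged."""
    return preferred_terms.get(raw_tag.strip().lower(), raw_tag)

print(normalize_tag("Personnel"))     # -> Human Resources
print(normalize_tag("tech support"))  # -> Information Technology
```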

Regardless of the way you choose to use a taxonomy, it is important to understand the tried and true principles that allow us to design for success.  At Enterprise Knowledge, we use traditional information science principles together with core usability concepts to enhance information retrieval in diverse information environments.  

When searching for information, it is common for users to jump from page to page or document to document. This allows users to discover more about the information they are seeking and, as a result, refine their search. This behavior is known as "berrypicking", a model developed by Marcia Bates at the University of California, Los Angeles. Berrypicking results in multiple searches before a user finds the appropriate set of information. However, when designing a taxonomy to aid in search retrieval, we should always strive to help users find information faster and limit berrypicking. This is a difficult task, as search behavior varies from user to user. Despite user differences, there are a number of key factors that influence the way users search for information. Some of these factors include:

  • Technical Proficiency: How familiar users are with a specific subject area
  • User Goals: What users are looking to achieve in the information environment
  • Query Formulation: Terms used for searching

Technical Proficiency
Levels of subject knowledge and field proficiency within a group of users govern the language those users will use to search. Users with a great deal of knowledge and familiarity with a subject will use precise, industry-specific jargon. On the other hand, users with less technical knowledge will use more general terms. It is important to keep this in mind when designing a taxonomy.

User Goals
It is important to uncover how users will interact with the information environment. Andrei Broder identified three prevailing goal-based query types when searching for information:

  • Navigational Queries – Users searching to reach a specific area of an information environment. One example of a navigational query is a user searching to get to a specific portal or area of a website.
  • Informational Queries – Users searching to acquire specific information in a web page or document. An example of an informational query would be a user searching for where Oolong tea originated.
  • Transactional Queries – Users searching to perform a task. This could include submitting time and leave information.
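
To show how these categories might be operationalized, here is a hedged sketch of a keyword heuristic that sorts queries into Broder's three types. The cue lists are invented assumptions; a real classifier would be tuned against the environment's own query logs.

```python
# A hedged keyword heuristic for sorting queries into Broder's three
# goal-based types. The cue lists are invented assumptions; a real
# classifier would be tuned against the environment's own query logs.

TRANSACTIONAL_CUES = {"submit", "download", "register", "apply", "pay", "buy"}
NAVIGATIONAL_CUES = {"login", "homepage", "portal", "site", "page"}

def classify_query(query):
    words = set(query.lower().split())
    if words & TRANSACTIONAL_CUES:
        return "transactional"  # the user wants to perform a task
    if words & NAVIGATIONAL_CUES:
        return "navigational"   # the user wants to reach a place
    return "informational"      # default: the user wants to learn something

print(classify_query("submit time and leave"))           # -> transactional
print(classify_query("HR portal login"))                 # -> navigational
print(classify_query("where did oolong tea originate"))  # -> informational
```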

Of course, there will be elements of each of these query types among users, and we should design for all. However, there will most likely be one or two prevalent goal-based query types. Being aware of these goals can lead to a more efficient taxonomy design.

Query Formulation
Examining the components of user queries through query analysis can help identify how users search. It is important to note any patterns that appear when analyzing user queries. Common patterns to take note of include:

  • Acronyms
  • Technical Jargon
  • Query Length
  • Noun-based Queries
  • Verb-based Queries

The common patterns found when analyzing the queries should be reflected in the taxonomy.  This will ensure that actual queries are echoed in the taxonomy, improving usability and findability among users.  
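
As an illustration, here is a minimal sketch of that kind of query analysis: it computes average query length and picks out all-caps tokens as rough acronym candidates. The sample log is invented.

```python
# A minimal sketch of query-log analysis for the patterns above: average
# query length, plus all-caps tokens as rough acronym candidates for the
# taxonomy's synonym lists. The sample log is invented.

query_log = [
    "PTO request form",
    "submit expense report",
    "FAQ onboarding",
    "health and wellness benefits",
]

tokens = [word for query in query_log for word in query.split()]
acronyms = sorted({word for word in tokens if word.isupper() and len(word) > 1})
avg_length = sum(len(query.split()) for query in query_log) / len(query_log)

print(f"Average query length: {avg_length:.1f} words")  # -> 3.0 words
print(f"Acronym candidates: {acronyms}")                # -> ['FAQ', 'PTO']
```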

By applying usability and information science concepts to the taxonomy design process, you can maximize the findability of your content. Designing a user-centric taxonomy is only the first step. Stay tuned for future blogs to learn more about the remaining steps in designing and validating an effective taxonomy:

  • Make Sure Your Facets Are Consistent (Step 2)
  • Validate Your Taxonomy (Step 3)
  • Measure the Findability of Your Content (Step 4)

Can’t wait? Contact Enterprise Knowledge for help with enhancing the usability and findability of your information.

The Disruptive Future of Knowledge Management

August 22, 2016

In the following post we will look at the future of knowledge management (KM). Specifically, we will explore together the key tenets of what the field holds and how technology will change the role of the KM practitioner.

Historical aspects of KM (some history nuggets that should be considered before reading on).  

[Timeline: the three generations of Knowledge Management]

KM's evolution is made up of three generations; there hasn't been consensus on the third one, and it's not something you will find in KM textbooks. However, it is a picture of the reality surrounding KM at the moment.

It's hard to pinpoint an exact date for the beginning of KM. I personally like to refer to 1987, since in this year a very special book was published in England by Karl Sveiby and Tom Lloyd called "Managing Knowhow". Although the term KM wasn't used there, it provided companies with a structured framework and business case for understanding why organizations should start paying attention to their intellectual assets.

First-generation KM was primarily IT-driven, and during this period we saw the rise of tools such as IBM's Lotus Notes and the first intranets (a focus on information, not knowledge).

In 1995, Nonaka and Takeuchi published a book called "The Knowledge-Creating Company". The Japanese authors warned KM practitioners that in order to drive KM success they needed to focus on people rather than IT. This advice would only be taken into consideration a decade later.

Nonaka and Takeuchi introduced the SECI model, which became a cornerstone of KM. Their approach meant that KM models should look closely at the way knowledge is generated within people in order to design a process that makes knowledge generation and sharing much easier (especially turning tacit knowledge into explicit knowledge).

Second-generation KM was primarily people-focused and looked to create processes based on Nonaka's SECI model: how knowledge is generated, made explicit and socialised in organizations.

Another ten years on, we come to third-generation KM, and this is where something really interesting occurs. Drawing on the lessons learned from decades of work, third-gen KM is founded on the idea of "going back to basics". What does this mean?

It means that KM needs to focus primarily on critical knowledge before investing in any tech solution or looking at specific actions. The reason I refer to third-gen KM as C-Gen KM is that there are three powerful "Cs" present: connectivity, collaboration and co-creation. In another post we will look at the underlying aspects of third-gen KM, but for the moment let's concentrate on some of the principal IT components surrounding the future of KM.

KM technology of the future (and right now!)

Third-gen KM doesn't discard IT. On the contrary, it requires tech more than ever before. But what sort of technology are we speaking of? Four forms of technology, already present today, will definitely shape the future of KM and, combined, will make a big difference in companies:

  • Cognitive technology
  • Robotics
  • Artificial Intelligence
  • 3D printing

What new forms of knowledge management technology are changing the way KM is done?

This is the future of KM. Let's dig deeper now.

Have you heard of IBM's Watson? Watson is a system created by IBM that integrates natural language processing and machine learning in order to reveal insights from various data sources. In short, it is able to learn and provide solutions. If you are fond of Jeopardy, a very popular American quiz show, then you will probably remember the episode when Watson competed with human participants and won! In order to win, Watson combined two separate areas of artificial intelligence research: natural language understanding was merged with statistical analysis of vast, unstructured piles of text to find the likely answers to cryptic Jeopardy clues.

How did supercomputer Watson beat Jeopardy champion Ken Jennings? (Photo source: blog.ted.com)

So Watson in some way is able to replicate the human thought process in order to give meaning to the information it analyses. Powerful stuff for KM.

In fact, Watson is being used in medicine to provide expert advice to doctors who would otherwise have to undertake many hours or weeks of learning in order to correctly process information. For example, there is a specific Watson solution for oncology in which doctors get the assistance they need to make more informed treatment decisions. Watson for Oncology analyses a patient's medical information against a vast array of data and expertise to provide evidence-based treatment options.

How is Watson helping the medical sector develop critical patient knowledge?

These new forms of cognitive systems that understand, reason and learn are helping people expand their knowledge, improve their productivity and deepen their expertise. In short, Watson is like an artificial brain. But a brain won't function unless it has a body, and this is where advanced robotics comes in.

If we look at some of the advances in robotics, we find companies such as Boston Dynamics that are capable of producing robots with amazing human movement skills. For example, one of their robots, "Atlas", has a humanoid form and possesses articulated, sensate hands that enable it to use tools designed for human use. Atlas includes 28 hydraulically-actuated degrees of freedom, two hands, arms, legs, feet and a torso.

What would happen if these robots were plugged into a Watson-like system? This is where cognitive technology and robotics give way to artificial intelligence.

If you got to this point, I am sure you might be thinking that this level of technology seems more sci-fi than reality. Just let me point out that this technology is already available and is being used by a number of firms. You can even head down to the Watson portal, download the APIs and start using Watson at home!

Have you used 3D printing yet? I have, and I must admit it's wonderful. I had second thoughts about whether or not to include it as part of the tech that is changing KM, but I find it to be a powerful tool for tacit knowledge transfer. For example, two people working in separate locations can literally co-create prototypes as they share experiences and information. This means that you can touch and feel the outcome of the shared knowledge!

3D printing is a powerful tool for tacit knowledge transfer

Not only can we facilitate tacit knowledge transfer this way; virtual reality is also helping in this regard, and with the recent advances in the field we might experience learning in a whole new manner. I would like to invite you to check out the HoloLens website so that you can see it for yourself. Microsoft combined virtual reality with hologram technology so that users can actually interact with the objects they see. In this sense, imagine what a knowledge transfer session would look like using this tech! I'm very eager to try it out!

Microsoft HoloLens (source: https://www.microsoft.com/microsoft-hololens/en-us)

So KM is finding new forms of technology beyond the traditional IT of intranets, databases and social networks. The future in this regard is very exciting for KM, and there are many things we can expect in the short term. KM practitioners will have to start learning about this technology, and a radical shift in their future role is that they might be summoned to feed these systems.

However, this doesn't mean that we should forget the focus of KM. "Going back to basics" entails understanding first what knowledge a company should focus on, as opposed to managing all of your company's knowledge. The latter is unwise and very dangerous, as you might be allocating resources and time to developing knowledge that is not related to the company's strategic plans or primary results.

So exciting times await KM. It would be interesting to discuss the use of this technology in companies (which is already happening as we speak). I am particularly interested in following the advances made by Watson in the medical field, as it is rapidly impacting outcomes and providing doctors with a knowledgeable resource for taking action quickly.

Foolish Knowledge: The Dunning-Kruger Effect

August 4, 2016

"Ignorance more frequently begets confidence than does knowledge." – Charles Darwin

When presented with a question or challenge, some humans are diffident about their knowledge and timid about taking action. Others bullishly push forward with confidence in what they think they know. The underlying issue in both cases is the same: many people suffer from false illusions of inferiority or superiority and are unable to evaluate themselves accurately.

Cornell University researchers David Dunning and Justin Kruger have studied this phenomenon, now called the "Dunning–Kruger effect." The Dunning-Kruger effect results from the metacognitive bias of unskilled individuals who mistakenly assess their ability to be much higher than is accurate. Put differently, unskilled individuals do not know what they don't know, and are unable to recognize their own ineptitude or effectively evaluate their own ability.

Most organizations recognize this issue and rely on experienced individuals for knowledge and action. However, in some instances, experts may not serve an organization very well at all. While the Dunning-Kruger effect applies to the inexperienced, this metacognitive problem also extends to experienced individuals. Dunning and Kruger found that some experienced individuals "underestimate their relative competence, and may even erroneously assume that what is easy for them is also easy for others." In other words, even seasoned individuals can make assumptive errors due to their inability to effectively evaluate the abilities of others.

In the project planning process, the cognitive biases of both experts and novices become particularly evident. Jeff Sutherland, author of Scrum: The Art of Doing Twice the Work in Half the Time, points out that first estimates of work can range from 400 percent of the time actually taken down to 25 percent of it. In other words, since 400 percent is sixteen times 25 percent, human time estimates can differ from one another by a factor of 16.

Even worse, the research shows that experts are no better than novices at estimating time requirements. This inability to gauge the time required for a project is consistent with the Dunning-Kruger effect and the inability of experts and novices alike to understand and assess their own abilities, and the abilities of others, to complete a given task as part of the project.

As a solution to the issue of cognitive bias in time estimates, Sutherland has found greater success by using both experts and novices in an anonymous time-estimation voting process.  Sutherland recommends that rather than asking the novices and experts who are voting to give precise time estimates for the various tasks in a project, they instead use a more approximating, “relative sizing” approach.  In the relative sizing of a task, Sutherland suggests that the individuals estimating time assign a number to each task from the Fibonacci sequence of numbers:  1, 2, 3, 5, 8, 13, 21…
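
A small sketch of how that relative-sizing vote might work in code: each anonymous estimate snaps to the nearest Fibonacci number, and a wide spread signals that the group should discuss and re-vote. The sample votes and the spread rule are illustrative assumptions, not Sutherland's exact prescription.

```python
# A hedged sketch of an anonymous relative-sizing vote: each estimate
# snaps to the nearest Fibonacci number, and a wide spread means the
# group should discuss and re-vote. The spread rule and sample votes
# are illustrative assumptions, not Sutherland's exact prescription.

FIB = [1, 2, 3, 5, 8, 13, 21]

def nearest_fib(estimate):
    return min(FIB, key=lambda f: abs(f - estimate))

def size_task(votes):
    sized = sorted(nearest_fib(v) for v in votes)
    # Votes more than two Fibonacci steps apart suggest someone knows
    # something the others don't: talk it through, then vote again.
    if FIB.index(sized[-1]) - FIB.index(sized[0]) > 2:
        return None, sized
    return max(sized, key=sized.count), sized  # the modal size wins

print(size_task([3, 5, 5, 5]))  # -> (5, [3, 5, 5, 5])
print(size_task([2, 3, 13]))    # -> (None, [2, 3, 13])  re-discuss and re-vote
```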

The side-by-side use of both experts and novices in estimating time has proved to be an effective measure to eliminate some of the cognitive bias in the time and resource planning process.  Sutherland’s recommended technique relies on the efficacy of crowds and distributed decision-making as an effective method for overcoming the Dunning-Kruger effect.

The Dunning-Kruger effect is caused by expert and novice cognitive biases regarding knowledge and skill.  This bias can be overcome by reliance on crowds including both novices and experts because a mixed crowd holds more potentially diverse knowledge and abilities to contribute to a given task. In his 2005 book, The Wisdom of Crowds, James Surowiecki points out that “experts simply lack much of the knowledge held by novices because it is not in the ‘world they live in.’” By adding both novices and experts into a system or project, the overall group is made more diverse than it would otherwise be – and better able to overcome the knowledge and abilities biases pointed out by the Dunning-Kruger effect.

Collaborative Knowledge Mapping

July 6, 2016

Over the years I have felt extremely frustrated with so-called knowledge repositories, such as SharePoint, and the many other solutions for collaboration that exist around an intranet. Many years ago I joined an engineering consultancy firm in London called Fulcrum (which a few years later merged into Mott MacDonald). That was back in 2008, and we were around 150 employees, with small offices in Edinburgh, Madrid and Hong Kong. Those were the days when sustainable building design was going strong. The six directors were (and are) an extremely cool and forward-thinking lot, and they put together a great team of sustainability consultants and building engineers. I was one of the sustainability guys.

As you can imagine, sustainable building design touches on many aspects of the building: insulation, air-tightness, energy efficiency, daylight, building controls or thermal comfort, to name a few. Knowledge was very important and we had a Knowledge Base (SharePoint). As we were constantly researching new technologies and design principles, we were continuously coming across very interesting documents and articles. We were devouring them and uploading them to the Knowledge Base. We had categories and tags and all the rest, and we were not too bad at applying metadata to the files. But nonetheless, it was a phenomenal mess.

Soon it was obvious that we were uploading stuff much more frequently than downloading files. The main reason for this was that any ‘search’ would yield a large number of results and there was no way we could obtain anything which actually matched what we needed in the moment without opening and reading a lot of documents. Now, many years later, I have a better understanding of the problems we were suffering then, but the truth of the matter was that we all had our own repositories of knowledge on our computers, and any time we had a need or an itch, we would turn to our reliable contacts (for instance, Tom, just across from my desk) who would send us an email with the document in an attachment.

We had built a platform that was meant to be a knowledge-sharing platform, but we did not know the difference between a library and a collaboration environment. As a result, we ended up with neither, because we could not tell information from knowledge. To illustrate this, I will reproduce here the definition of Knowledge Management by Kimiz Dalkir and Jay Liebowitz: "Knowledge management develops systems and processes to acquire and share intellectual assets. It increases the generation of useful, actionable, and meaningful information, and seeks to increase both individual and team learning. In addition, it can maximise the value of an organisation's intellectual base across diverse functions and disparate locations." Our knowledge base had tons of information with little use, relatively low meaning, and it was certainly not actionable.

KM is the Supermarket and your Project is the Kitchen

I often use this analogy. A knowledge-sharing platform is the supermarket you go to find the ingredients to take home to your kitchen. Once there, you can mess around with the alchemy of your project. I still work in the construction industry, sadly not anymore at Fulcrum, but at Werner Sobek, which is another very good firm. We are building engineers and designers doing pretty much all the things that architects do not do: structural engineering, façade engineering, heating and cooling, etc. As you can imagine, our kitchen can get pretty messy and we have all sorts of things going on at once.

I’ll give you a small example. We were recently approached by an architectural firm in Philadelphia to support them in a cool and confidential competition in Hamburg. It’s something like a museum and it will be small-ish, 2,500 m2 of net floor area. We have three weeks to cook up our magic and there are no fees involved, so we don’t want to spend too many hours cooking.

After a few days we received the architectural drawings, showing the exhibition areas, back-of-house offices, circulation, toilets, and so on, but there wasn't a single technical room for us to put our equipment in. This is quite common, by the way. One of the dishes on our menu takes priority and has to come out of the kitchen really fast, as all starters should: To Tell The Architects How Many and How Big Our Technical Areas Should Be. Speed is key, because everybody is working away and the sooner we get our foot in the door, the easier our life will be for the next two years.

Now that you know the context, let’s go back to Knowledge Management.

So now that we know the breakdown of areas in the building, we rush to the supermarket and check out the different aisles and shelves. Navigating the supermarket is very easy and we quickly find an aisle called 'Spatial Allowances' (that's the lingo). We walk along the aisle taking a look at the different products on display. It is very clear in our minds what the final dish shall be, so we easily identify the ingredients we can use:

  • Template booklet for spatial allowances
  • An Excel spreadsheet with benchmarks for other museums
  • A tool to estimate the loads (power demand, heating, ventilation, water, etc.)

Furthermore, while looking around, we find other related ingredients that we did not know existed and which will give our dish extra flavour, such as case studies of technical areas in museums we did in the past and a couple of diagrams we can adapt to fit our project. In fact, the architects won't notice this, but we also took a couple of ready-made meals from the freezer. But hey, economies of scale, right?

Three Principles of Good Practice

In order to provide such an experience (navigating the supermarket), we had to establish a few requirements. Or rather, define a brief which is neither too abstract nor too narrow, as Tim Brown puts it in his book 'Change by Design'. The way I see it, a knowledge-sharing platform should conform to the following three principles.

  • Knowledge should be very easy to create, share and rearrange 

The members of the organisation should be able to share their explicit knowledge in the easiest way possible, as any burden to the process of creating and sharing knowledge will dramatically reduce the level of engagement and the amount of contribution. Similarly, any knowledge domain is organic and will evolve with time, so the different domains and the different knowledge assets will need to be re-arranged (forgotten even). This process should also be extremely easy. In my experience, SharePoint and Wikis don’t fulfil this principle, especially when it comes to re-arranging.

  • Knowledge should be organized as an ontology, not as a taxonomy

In case I am not using these big terms in the proper way: by taxonomy I mean a tree diagram, and by ontology I mean something like a network. A well-known taxonomy is the animal kingdom (or parts of it, rather). Under such an organisation, any given species will only be in one place, and there is only one path leading to that species. So next time someone in your organisation needs to do some work about rabbits, he or she will have to access the folder of chordata (I just learned this word), then the folder of vertebrates, and so on until reaching the rabbit and accessing the knowledge your organisation holds on rabbits. But in reality, the way our brains reach different domains of knowledge is by navigating a network of domains, so different people will access their domain 'rabbits' by a myriad of different paths. Notably: carrots. (I sketch this difference in a few lines of code after these three principles.)

  • Whatever the KM method, it should be built from the ground up 

Another barrier to a successful KM system is when the system comes from above. This now seems obvious to me, but it was not when we started implementing the KM platform back in the day. Back then we had a series of workshops among a bunch of senior guys in which we devised the KM system, including the major domains, all on our own. We then passed it on to the wider company expecting them to start populating and using it. It was not well received, and it obviously failed.

This third principle is quite straightforward: whatever the KM system, it should be built from the ground up. Furthermore, I recommend building it around communities of practice and starting small. The way I do it is as follows: first, choose a specific company objective that is closely connected to knowledge (low-hanging fruit); second, define a small community of practice around it and give them a clear goal; then start working on that specific domain for that specific target. By so doing, you will create a small but functional KM environment which is useful for everybody from day one. People within this community will feel ownership, will look after their domains and will feel comfortable using the platform.
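
Here is the promised sketch of the taxonomy-versus-ontology difference, using the rabbit example from the second principle. In the tree there is exactly one path to 'rabbits'; in the network, many paths lead there. All of the edges are invented for illustration.

```python
# A sketch of taxonomy versus ontology, using the rabbit example.
# In the tree there is exactly one path to "rabbits"; in the
# network, many. All edges are invented for illustration.

# Taxonomy: every node has exactly one parent, so one path exists.
parent = {"rabbits": "vertebrates", "vertebrates": "chordata", "chordata": "animals"}

def tree_path(node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return list(reversed(path))

print(tree_path("rabbits"))  # -> ['animals', 'chordata', 'vertebrates', 'rabbits']

# Ontology: nodes link freely, so "rabbits" is reachable many ways.
links = {
    "animals": ["chordata", "pets"],
    "chordata": ["vertebrates"],
    "vertebrates": ["rabbits"],
    "pets": ["rabbits"],
    "carrots": ["rabbits"],  # the author's joke, encoded
    "food": ["carrots"],
}

def all_paths(start, goal, path=()):
    path = path + (start,)
    if start == goal:
        yield path
    for nxt in links.get(start, []):
        if nxt not in path:
            yield from all_paths(nxt, goal, path)

for p in all_paths("animals", "rabbits"):
    print(" -> ".join(p))
# animals -> chordata -> vertebrates -> rabbits
# animals -> pets -> rabbits
```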

Knowledge Mapping on a Mind Mapping Platform  

Now, discussion of technology is unavoidable, and so far I have found only one way to do this: communities of practice and collaborative knowledge mapping. In particular, we use mind mapping software. I don't think there is much point in mentioning the particular software we use, since many commercial products out there provide the necessary functionality.


Mind mapping is a very simple and very powerful technique to organise your thoughts (and in our case, our collective thoughts). This is the Wikipedia description: “A mind map is a diagram used to visually organise information. A mind map is often created around a single concept, drawn as an image in the centre of a blank page, to which associated representations of ideas such as images, words and parts of words are added. Major ideas are connected directly to the central concept, and other ideas branch out from those.”

For the last six years we have been using collaborative mind mapping to manage our knowledge. It’s been the most successful platform I have ever used. It is simple, intuitive, easy to use and fully complies with the three principles of good practice. It provides an ontological navigation experience, so that different people reach the same domains following different paths. I can’t stress enough how important this is. Every now and then, when I have shared a new knowledge asset, I go for a walk and ask some random colleagues if it would be ok to carry out a test for me. I ask them to go to the knowledge map and see if they can find (or rather, access) something in particular. Invariably, they all find it in a matter of seconds. I observe the paths they follow and it is very interesting to see how different they can be.

The aim of this article is just to provide an insight into what I believe to be an effective knowledge sharing and collaboration platform, and what the principles should be to govern such an initiative. It all comes down to people and to influencing the organisation’s culture. I believe it should be down to the users to curate the experience of navigating the company’s knowledge. I do not want to overextend and lose your interest, and I hope you have found this story useful so far. I would be delighted to hear from you:

  • What do you think?
  • What is it like in your industry? What do you use?
  • Is knowledge mapping a sensible solution only for engineering disciplines?