Knowledge Management Whitepaper
The Top 2 Reasons Knowledge Management Fails
Authored by Dr. David Griffiths
Juran - The Source for Benchmarking
In Partnership with K3-Cubed
Our expertise has been developed over more than six decades of real-world experience; our methods are
universal and have been applied in nearly every industry and culture around the world. We are trusted
advisers, enabling global leaders to achieve measurable, breakthrough results. We provide best-in-class
services designed to improve our clients’ organizational performance and business results. Improvement is
our business, and we are fully committed to our clients and believe that our clients’ success is our success.
K3-Cubed was founded by the University of Edinburgh in 2009 and is Juran’s Knowledge Capability
Benchmarking partner. K3-Cubed was built upon award-winning research, conducted by Dr. David Griffiths,
the company’s founder, that produced the Knowledge Core (K-Core) Benchmarking model. The K-Core is
designed to help improve organisational Knowledge Management and Knowledge Capability. David is deeply
committed to providing Juran’s clients with credible and valuable information through which they can make
high impact decisions that will improve their knowledge capability.
The Top 2 Reasons Knowledge Management Fails
Organisations we work with realise that knowledge, what their people know and how they
use it, is their lifeblood. They know that their ability to sense or anticipate change, to
innovate, to detect risk, remain efficient and effective is all driven by knowledge. So why do
so many Knowledge Management (KM) programs fail?
We found that the vast majority of organisations have no problem understanding the
potential value of coordinating the knowledge effort, but most find it difficult to manage or
optimise, and even more difficult to report on in terms of value. CEOs understand in
business terms how important knowledge is, but Knowledge Managers struggle to explain
value creation, and CFOs are often not convinced when discussions turn to
Return On Investment (ROI). This creates dissatisfied stakeholders and results in waning
support for this critical aspect of modern business.
This was reflected in research conducted by one of our KM experts, which found that 72% of
leaders from 354 global organisations were dissatisfied with the strategic performance of
KM, which increased to 76% when looking at operational performance.
This situation has to change in order to develop sustainable, resilient organisations that are
prepared for whatever the future might throw at them. More than this, dissatisfaction is
eroding trust, constraining the knowledge effort and inhibiting organisations interested in cost
optimisation, quality assurance, improving efficiency, ongoing innovation and speed of
response in a turbulent global economy.
To help, we have done two things. First, we have developed a value model that will enable
you to tell the story of how your knowledge efforts contribute to protecting and creating
value. Second, we have created a unique benchmarking dashboard for your knowledge
capability, which provides a broad analysis of key aspects of your operations.
Problem and Solution Number 1: 90 Days to Reporting on Value
We know that reporting the value of the knowledge effort is often the number one challenge
for the majority of the organisations we work with. This is becoming even more significant
with the emergence of new accounting frameworks that ask organisations to report on their
human, social and intellectual capital alongside the more traditional financial capital.
(The research cited above appears in "The State of Knowledge Management", KM World,
20(10), 16-17 and 28.)
Ultimately, organisations are fast reaching a point where they need to clearly report on the
outputs from their knowledge assets; failing to do so could negatively impact the market value
of your organisation. To help, we have developed the FAIRR value model - Feelings, Access,
Impact, Results and Return, the five levels described below - which can be used in any
knowledge project and should be considered before you launch any new knowledge effort.
We have used this approach to help organisations that have struggled to demonstrate KM
value to comprehensively report ROI in less than 90 days.
Level 1 - Feelings: Here we want to know how people feel about the systems, processes, tools
and frameworks deployed to enhance the knowledge effort (think of lessons learnt
frameworks and community of practice portals, through to job design frameworks and
appraisal processes that influence knowledge-based activities). This requires a process of
employee engagement (e.g. surveys and focus groups). Any negative feelings need to be
analysed and synthesised, and actions need to be seen to be taken; negative feelings will
amplify throughout the FAIRR framework and will impact ROI.
Level 2 - Access: The access to resources (e.g. people, reports, lessons learnt,
communities) demonstrates perceived value - for example, a person believed that it was
worth their time to make contact with a person or accessed a knowledge repository to find an
answer to a question. A lack of access could occur for various reasons - indicating barriers
that surface in Level 1. It is also possible that the resources available are simply not seen as
valuable enough (perceived value not meeting the needs of the real-time situation,
leading to reputational damage, where the value of lessons learnt systems is questioned or
unclear) and are therefore no longer accessed.
Level 3 - Impact: We need to demonstrate that people who access knowledge are doing
something with it. For this reason we seek to explore impact, which we see as being a
contribution to a demonstrable change in behaviour (people or process-based). If people
cannot demonstrate a change to behaviour, as a result of accessing knowledge resources,
then it could be that the knowledge being accessed is not valuable enough. In other words,
the perceived value of the knowledge does not align with the real world problems being
faced. It is unreasonable to expect that you will demonstrate how every single knowledge
asset (documents/schematics, process maps, protocol and “goto” people within the
organisation) creates impact in your business. Therefore you have to take a significant
sample of the population.
Level 4 - Results: Knowledge is being accessed and you can demonstrate that it
contributes to a change in behaviour. Now, using the sample collected in Level 3, it is
possible to explore quantifiable outputs from the change in behaviour; this could take the
form of a reduction in error rates, duplication of effort and improvements to efficiency and
effectiveness that reduced the cost of doing business. This data will allow you to explore
median results from your knowledge effort through to contributions to human, social and
intellectual capital that impact the market value of the business.
Barriers: negative feelings toward your knowledge services (e.g. "Sharepoint just
doesn't let me run a community of practice the way I need to" or "our Lessons
Learnt don't tell me anything I don't already know - I learn more over a drink with
the project manager").

Barriers: negative feelings at Level 1; a lack of access at Level 2.
Level 5 - Return: The final part of this ongoing process is to use the learning outcomes from
the data gathered in Level 4 to illustrate the Return On Investment for the organisation's
knowledge effort.
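As an illustration of how the five FAIRR levels might be instrumented for one knowledge service per reporting period, here is a minimal sketch. All names, fields and figures below are hypothetical assumptions for illustration; they are not part of the FAIRR model itself.

```python
from dataclasses import dataclass

@dataclass
class FairrSnapshot:
    """Illustrative per-period metrics for one knowledge service."""
    sentiment_score: float  # Level 1 (Feelings): mean survey sentiment, -1..1
    access_rate: float      # Level 2 (Access): accesses per user per month
    impact_rate: float      # Level 3 (Impact): share of sampled users reporting behaviour change
    result_value: float     # Level 4 (Results): quantified gains for the sample, in currency
    investment: float       # total spend on the service for the period

def roi(s: FairrSnapshot) -> float:
    """Level 5 (Return): simple ROI derived from Level 4 results."""
    return (s.result_value - s.investment) / s.investment

# Hypothetical period data
snap = FairrSnapshot(sentiment_score=0.4, access_rate=3.2, impact_rate=0.55,
                     result_value=180_000.0, investment=120_000.0)
print(f"ROI: {roi(snap):.0%}")  # ROI: 50%
```

In practice the Level 1-3 fields would come from the surveys, access logs and behaviour-change sampling described above; only Level 4 results and total investment feed the ROI figure.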
Problem and Solution Number 2: Work Across the KM Dashboard
We have found a significant amount of evidence to suggest that organisations are not taking
into account the whole picture when addressing their knowledge effort. Referring back to our
research, only 24% of respondents said that the organisation’s knowledge needs are
reflected in company policy, 35% in employee appraisal processes, 19% in recruitment and
selection processes, and 20% in the pay and reward structure. The problem is that most
organisations do not know where to start when they are looking to optimise their Knowledge
Management effort.
We demonstrated this through our award-winning, peer-reviewed analysis of the KM
environment, which showed that only 3% of KM articles (n=1,000) consider all the elements
required to optimise the knowledge effort. Even more surprising, on average the KM
literature covered only 62% of the variables that influence your KM effort. We then looked at
over 100 KM consultancy models and frameworks and found that none (zero) accounted for
all the variables required to optimise your knowledge effort. Knowing this, is it any wonder
that KM projects fail?
Your knowledge effort is far too important to be left to fall by the organisational roadside.
For this reason we have developed an evidence-based benchmarking model that
uniquely analyses all known variables required to optimise your knowledge effort, enabling
organisations to protect and create value by design.
We have developed a benchmarking dashboard which examines the interrelatedness of the
variables. You see, knowledge cannot be benchmarked in the same way you would any
other asset. Knowledge is limited by human capability (e.g. the ability to curate, select and
deploy appropriate knowledge). It is not owned by your business (it is at best shared
ownership and your most valuable knowledge could walk out of your business today
forever). The optimisation of any knowledge effort is limited by the scarcity of data,
information, knowledge, skills, experience, time, finance and technology when you sense a
problem or change and need to act. Therefore to truly analyse knowledge capability you
have to look at the relationships between key factors in your environment.
To do this, one must understand an organisation's performance across 12 key variables,
which drive the 4 key capabilities of its knowledge effort:
• Acquisition & Storage
Barriers: negative feelings at Level 1; a lack of access at Level 2; a lack of impact at
Level 3, which could surface as a problem at Level 1, where people speak in
negative terms about the value of the resources they have access to. In addition, people
may lack the awareness or understanding to discuss impact, or may not see the value in
reporting it, so that the value of the knowledge effort is lost at Level 4.
1. Socialisation: Think about how your socialisation spaces (physical and virtual) enable or
restrict knowledge flows. Consider how these spaces stimulate serendipitous discovery that
enables innovation - innovation happens when people meet. How easy is it for people to find
answers to problems?
2. Motivation: What compels people in your organisation to act for the greater good of others
in the organisation? Have you considered whether you are utilising the 'best fit' leadership
model, in terms of your purpose, vision, strategy and goals? Do you have a distributed
leadership model? Knowledge thrives in social environments - have you engaged people to
explore how the actions of your leaders and managers motivate or demotivate others?
Remember, words can lie, but actions hold the truth.
3. Capability: What capabilities activate the embedding, sharing, use and development of
knowledge in your business? How scarce are these capabilities? Digging deeper, how does
time, finance and technology influence your knowledge capability?
4. Adaptability: Do people in your organisation question the governing norms that influence
the way you do business or do they get bogged down in the day-to-day of operational
delivery? What about failure, how is it tolerated in your organisation, is it safe to fail?
Learning comes from questioning what exists, how do you ensure this is happening? Do you
have a distributed decision-making model?
5. Systems & Processes: How are lessons learnt captured? How effective are your peer
review processes? What do you do to detect and reduce errors? Do you work to identify
duplication of effort? How do your Quality Assurance processes inform ongoing professional
development?
6. Feedback: What processes exist to sense whether your knowledge effort is creating
value? How do you anticipate and sense the emerging needs of your stakeholders? What
processes do you rely on in order to detect failure early enough to limit its effects?
7. Context: Do your people work with a singular consciousness, in terms of the context of
the knowledge effort? How do your HR processes stimulate the knowledge effort (job design
through to recruitment and selection, through to induction, appraisal and exit interviews)?
How do Human Resource, Information Technology and Knowledge Management strategies
integrate - do you have a concerted effort? Do you take a resource-based approach or a
knowledge-based approach to your knowledge effort?
8. Organisational Design: How does the complexity of your organisational structure impact
your knowledge flows? What are the characteristics of your organisational typology and how
do you work to overcome barriers to the knowledge effort? What do the networks look like in
your organisation, do they exist in silos?
9. Knowledge Load: Your knowledge effort needs to consider the characteristics of the
knowledge you are looking to organise and the input required to organise it into a schema
that enables it to be accessed, shared, deployed and developed. What action have you
taken to ensure that your knowledge services are optimised in these areas? How effective
are your taxonomies and how do you know?
10. Current Knowledge: What does your organisation consider important, in terms of
knowledge? Where is it located? How accessible is it and how does it become embedded?
How scarce is the ‘important’ knowledge in your business and what are you doing to protect
it? Knowledge Managers will talk about getting the right knowledge to the person who needs
it, at the time they need it most.
11. Culture: Do you talk about values, and what do they mean to the people in your
organisation? What stories do people tell around the water-cooler? What control
mechanisms do you have in place, and what do these systems/processes say about your
culture?
12. Communication: Do the people in your organisation understand the value of your
knowledge efforts (to the individual, their team, the wider organisation and its external
stakeholders)? How effective are your artefacts (e.g. lessons learnt) at communicating what
is considered to be important knowledge?
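One way to operationalise a dashboard over these 12 variables is to score each one from the evidence gathered (interviews, surveys, system audits) and surface the weakest areas first. The 1-5 scale and the scores below are invented for illustration only; the K-Core model defines its own scoring and weighting.

```python
# Illustrative dashboard sketch (not the K-Core model): score each of the
# 12 variables from 1 (low maturity) to 5 (high maturity), then compute an
# overall maturity figure and flag the weakest areas for attention.
scores = {
    "Socialisation": 3, "Motivation": 2, "Capability": 4, "Adaptability": 2,
    "Systems & Processes": 3, "Feedback": 1, "Context": 3,
    "Organisational Design": 2, "Knowledge Load": 3, "Current Knowledge": 4,
    "Culture": 2, "Communication": 3,
}

overall = sum(scores.values()) / len(scores)          # mean maturity
weakest = sorted(scores, key=scores.get)[:3]          # three lowest-scoring variables
print(round(overall, 2), weakest)  # 2.67 ['Feedback', 'Motivation', 'Adaptability']
```

Because the whitepaper stresses that the variables are interrelated, a real dashboard would go further and examine relationships between variables (e.g. whether low Motivation depresses Socialisation), rather than treating each score in isolation.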
An Example Where We Have Helped
The following is an example of how these models were deployed to assist a large European
energy company. The challenge was twofold:
1. Identify ways that the company could do a better job of anticipating customer
need and innovating more quickly than it had in the past - to do this, the company
first wanted to gain a better understanding of its existing situation.
2. Determine how its Lessons Learned System was performing.
We identified two key drivers behind these challenges, which were not immediately
communicated or obvious; these are set out in the benchmarking results later in this paper.
As part of the enquiry into cost reduction, we applied the FAIRR value model to the
company's Lessons Learned Portal (LLP). The LLP was designed to sense process/system
errors, improve quality and reduce expenditure, while also protecting against duplication of
effort. The LLP had been in place for two years and had collected over 1,300 lessons
learned from the company's engineers.
Enquiry 1: We conducted a system test and demonstrated that lessons captured by
engineers were being uploaded but could not be located by other users. For example, we
located an interesting lesson that described a process change and asked 10 engineers to
find the same lesson. In each case, the engineer's search, using key words typical of the
problem discussed in the lesson, failed to surface the document we had previously located.
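The findability test in Enquiry 1 amounts to a simple recall check: for each engineer's keyword search, ask whether the known target lesson appears in the returned results. The function name, lesson IDs and keyword lists below are hypothetical stand-ins, not the LLP's actual interface.

```python
# Sketch of the Enquiry 1 findability test (all identifiers are hypothetical).

def search_portal(keywords):
    """Stub standing in for the portal's search. In the real test this
    queried the Lessons Learned Portal with the engineer's keywords."""
    return ["unrelated-lesson-042", "unrelated-lesson-107"]

target = "lesson-process-change-0873"  # hypothetical ID of the lesson we found manually
trials = [
    ["process change", "valve"],
    ["process change", "turbine"],
]

# Count how many searches surfaced the known target lesson.
found = sum(target in search_portal(kw) for kw in trials)
print(f"{found}/{len(trials)} searches surfaced the target lesson")
```

A result of zero recoveries across all trials, as in the case study, indicates the portal's indexing or taxonomy is failing, regardless of how many lessons have been uploaded.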
Enquiry 2: We spoke directly with the LLP management team about access rates, re-use
rates and impact/results from the lessons captured. The managers were not able to provide
any evidence of a lesson being accessed or re-used, or of the LLP having created an
impact. This equated to zero value being linked to an LLP that had cost in excess of $1.3
million over a two-year period. This included zero ROI on:
• LLP staffing costs
• Investment and maintenance costs of the software platform
• Costs associated with completing 1,300 lessons learned templates, each requiring,
on average, 5 hours of input from 5 staff - a total of 32,500 man-hours.
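The man-hour figure quoted above follows from simple arithmetic, sketched here. The lesson count, hours and staff per template come from the text; converting man-hours to a monetary cost would additionally require an assumed loaded hourly rate, which the case study does not give.

```python
# Figures from the case study above; the calculation itself is trivial arithmetic.
lessons = 1300         # lessons learned captured over two years
hours_per_lesson = 5   # average hours of input per template
staff_per_lesson = 5   # staff contributing to each template

man_hours = lessons * hours_per_lesson * staff_per_lesson
print(man_hours)  # 32500
```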
Enquiry 3: We surveyed the users to ascertain their feelings toward the LLP. The feedback
was extremely negative. Engineers made statements such as, “85% of what is in [the
lessons learned] just doesn’t make sense and I have 30 years of experience” or “nobody
uses it, you can’t find anything useful and it is just a tick box, something we have to do and
we just work to get it done." The LLP management team did peer-review content, but only
for accuracy, not for the quality or value of the content. There was no consideration of
whether the most valuable knowledge was being captured, or of what the most valuable
knowledge actually was.
The study resulted in recommendations for a new lessons learned template (based on KM
best practice) and the adoption of a new value-based approach to the LLP, which
continuously monitors the value created.
Using KM benchmarking we were also able to demonstrate that the company had a low level
of Knowledge Capability maturity. The results showed that current practices restricted the
generation of new ideas, the ability of the company to sense and anticipate problems, their
ability to maximise revenue opportunities and their ability to innovate. The main findings
from our benchmarking process are summarised at the end of this paper. The two key
drivers behind the company's challenges were confirmed as:
1. A focus on reducing the cost of doing business in the short to medium term.
2. A desire to improve the resilience of the business by surfacing new ideas and
creating new revenue streams through new business models.
Each of these areas was interconnected and had to be addressed as part of a wider effort.
The benchmarking results focused the company on creating a more agile and resilient
approach to business development.
How can you enhance your knowledge capability and prevent failure in your knowledge
management programs? Simple: by benchmarking your knowledge capability with Juran.
Benchmarking with us will allow you to discover your strengths and weaknesses, understand
them and close the gaps in your performance. Our method provides opportunities to
benchmark against other organisations, as part of a benchmarking consortium, or internally
with other parts of your business and against ‘Best in Breed Standards’ (optimal conditions
for success). This will not only provide you with a rich learning experience, but it will guide
you toward actions that will enable you to surpass your competitors.
Visit our website www.juranbenchmarking.com to view all our benchmarking programs and
download more white papers on our thoughts on benchmarking activities.
• A vision and strategy for developing and monitoring knowledge capability
did not exist - consequently there was little or no consideration for goal-
setting in this area.
• There was no ownership of knowledge capability at an executive level, and
no consideration of the modelling of appropriate behaviours at a leadership
level.
• At an operational level, there was a low awareness of knowledge need and
any activity tended to exist in highly constrained silos.
• Technology platforms were varied, did not ‘speak’ to each other and the
infrastructure existed in layers and silos.
• Communities were expected to be self-organising, but were limited by Key
Performance Indicators that constrained knowledge sharing, idea creation
and innovation.
• Methods used to stimulate knowledge flows were of a low maturity and in
most cases were not considered at all.
• Reporting/feedback methods did not exist, and therefore managers/leaders
were not able to sense or anticipate customer problems.
• There was no centralised knowledge effort, which resulted in duplication of
effort and competition instead of collaboration.
• The culture of the organisation was such that failure was said to be “safe”
and a way to learn, but the actions of managers/leaders did not support this,
which resulted in failure being covered up.