09/10/2017 | Allan Tanner @weesandy
Launch version (March 2016 – October 2017)
Level 1 navigation changes
Revised version (Launching early Nov 2017)
IA is a fundamental element of findability. Don’t neglect it.
Sites evolve quickly. Don’t assume it’s working fine.
You might have to review your whole structure.
Test, iterate, test again – try not to cut corners or make assumptions
Evidence is vital to convince content stakeholders: remember peanut butter in the jam aisle
Make sure employees are involved every step of the way. Step 1 is finding out whether it’s working for them or not.
Thank you!
Things to take back with you
Editor's notes
6 months after we launched our new intranet, an employee feedback survey revealed chronic findability issues with the site. Problems with site IA, content search and people search were called out.
The intranet team began a programme to address the findability issues. That programme looked at search and profile problems, but the bulk of the work concentrated on the site information architecture. It quickly became apparent that the whole site structure would need to be reviewed – around 1,000 pages excluding news.
The initial exercise was an open card sort. Conducted with 18 people, this quickly provided a new top-level navigation (L1) to test. The new L1 tested well. It was much more topic-based and moved us away from generic labelling. The original L1 was intended to be task-focused but in hindsight didn’t achieve that. It also featured the much-maligned ‘Resources’ label, under which sits a horrendous megamenu.
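The core analysis step in an open card sort is counting how often participants group the same cards together. As a hedged illustration only – the card names and groupings below are invented, and the real exercise with 18 participants presumably used a dedicated tool rather than a script – the counting logic can be sketched like this:

```python
# Minimal sketch of open card sort analysis: count how often each pair of
# cards is placed in the same (participant-named) group. All data here is
# hypothetical, for illustration only.
from itertools import combinations
from collections import Counter

# Each dict maps card -> the group name that participant invented for it.
sorts = [
    {"Pay": "HR", "Leave": "HR", "Expenses": "Finance"},
    {"Pay": "Money", "Leave": "Time off", "Expenses": "Money"},
    {"Pay": "HR", "Leave": "HR", "Expenses": "HR"},
]

pairs = Counter()
for sort in sorts:
    groups = {}
    for card, group in sort.items():
        groups.setdefault(group, []).append(card)
    # Every pair of cards within one group co-occurs once for this participant.
    for cards in groups.values():
        for a, b in combinations(sorted(cards), 2):
            pairs[(a, b)] += 1

for (a, b), n in pairs.most_common():
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```

Pairs that most participants group together are strong candidates to sit under the same navigation label.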
The new L1 provides far more meaningful, topic-based labels, which give clear routes into content. They are quite HR-focused, which reflects the fact that HR material makes up a large volume of intranet content.
We began testing the content substructures. We got bogged down quite quickly in some areas. Initial tests scored well, then scored badly after some changes, then were scoring well again. Or scores would remain low, then a final tweak would improve them hugely.
Here’s an illustration of how some of those tests progressed. On the left, we have a piece of content that needs to be moved after its current location tested badly. When we tested, we went through multiple iterations; you can see the scores change each time. The top slide shows a different task with a mediocre score, which dips, but then scores well. The bottom one shows a task with poor scores which finally start to improve. The score is still not ideal but is substantially better than it was. The reality is that some content is very difficult to locate effectively – the success rates are difficult to improve. It might be that search or cross-linking will help.
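The scores being compared across iterations are essentially task success rates: the share of participants who reached the correct location. As a toy sketch – Treejack’s own scoring is richer than this, and the participant data below is invented to mirror the dip-then-improve pattern described above – the basic calculation is:

```python
# Illustrative only: a simple direct-success rate per test round.
# True = participant reached the correct location for the task.
def success_rate(results):
    """Fraction of participants who succeeded; 0.0 for an empty round."""
    if not results:
        return 0.0
    return sum(results) / len(results)

# Hypothetical results for one task across three tree-test iterations:
# mediocre, then a dip after a structural change, then a clear improvement.
rounds = [
    [True, False, True, False, True, False],
    [False, False, True, False, False, False],
    [True, True, True, False, True, True],
]
for i, r in enumerate(rounds, 1):
    print(f"Round {i}: {success_rate(r):.0%}")
```

Tracking this one number per task per round is what makes it obvious when a relabelling helped and when it made things worse.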
We were using Optimal Workshop’s Treejack tool, and one of its outputs is the tree structure diagram on the right, which illustrates the path a user takes to the content.
These outputs are useful to take into discussions as evidence of the need for change, because through all this testing you have to bring content owners with you. All they see is their favoured labels disappearing, or content they expect to sit together becoming widely scattered across a variety of sections that don’t have their name on them. We had to convince them our approach was correct.
To help our argument for change we used an analogy of ‘peanut butter in the jam aisle’. So the example is, you are in the supermarket and want to find peanut butter. You don’t see it on the overhead signs, but you do see jam, and you make a reasonable assumption you’ll find peanut butter next to the jam.
This helps show that if you label your structure correctly, users will make the right associations which will guide them to their desired content. It also means you can use navigation labelling that might differ from page titles – which is helpful as navigation labels need to be short. But - you need multiple tests to get those routes and labels set up correctly.
Some colleagues will remain resistant to this argument - so something like the Treejack evidence is key.
One additional benefit to the restructure is that we can take a fresh megamenu approach. I mentioned how bloated Resources had become – here it is in all its glory, including some items below the fold.
The new approach allows users to see down to level 3 and beyond at a glance and is far more wieldy and usable, and can be expanded as content grows. So it’s an additional benefit for us from this work.
This comparison shows the scale of change from our original megamenu to our new set up – a substantial difference.
So, finally, we have IA we can trust, that can be extended, and is employee tested. Plus we can say that we acted positively on adverse feedback to improve the site.
Final considerations – ask questions of your IA to ensure it is fit for purpose. Begin by asking your employees if it’s working for them. If you are going to embark on a review, be aware it might have to take in the whole site. Don’t underestimate the time it will take. We had a team of two mostly full time on it, plus assistance from a consultant, and it took a full 10 months. But that effort was necessary to avoid a repeat of the findability issues.
When testing, don’t cut corners – it will compromise the integrity of the exercise.
You will need evidence to convince content stakeholders of the need for change, and that the changes you propose are the right ones.