Presentation at the Agile Experience Design Meet-up in NYC, "User Research for Agile Teams." This deck addresses using Discovery Phase research, done pre-project or at the beginning of a project, to facilitate design and roadmapping.
User Research in Agile projects
1. User Research
to support design and prioritization in an
Agile development environment
Agile Experience Design Meet-up, NYC
“User Research for Agile Teams” March 28, 2012
Lynn Leitte
Sr. User Experience Architect / Information Architect
4. Discovery research
Getting out of the features rut
Make sure that this is aligned with this
5. Discovery research
• What are customers doing with our tool?
• What do they like?
• What do they ignore?
• In what capacity is the service useful to them?
6. Discovery research
• What are Research Managers doing in the tool for every call?
• What are their pain points?
• What do they ignore?
• What can’t they live without?
And, at the request of the overall manager: were the tool’s limitations preventing them from doing their due diligence, or otherwise fostering shortcut behavior?
7. What did the research tell us?
Insights and design cues
Pain points & usability issues
Proposed product feature viability
Ideas for enhancements & features
Insight into metrics interpretation
Photo courtesy of Adler Planetarium
8. After the Research
Collaboration with Product Management
Shared vision
Clarity on the roadmap of product evolution
Decisions on accountable metrics
Agreement on what to test
Photo courtesy of Adler Planetarium
9. How did this help us in Agile?
Scope and prioritize design activities
Help clarify and articulate product goals & metrics
Identify technical debt impact on plan
10. What does it mean for the Agile team?
A 2-week sprint is this little piece of the road
It actually goes somewhere…somewhere worthwhile
11. What does it mean for the Agile team?
The work has a purpose.
The one(s) queuing up the work aren’t lost
Editor's Notes
I worked in Agile scrum at Gerson Lehrman Group, trained in Agile along with my assigned development team. We were dedicated to a “product.” I scrummed with them daily, but did not create “deliverable code”; my participation was for communication, collaboration, and knowledge sharing. My design activities were tracked, eventually in the same Rally tool, but split out into a different track. Design work, decision making, and product owner & stakeholder reviews ALL happened ahead of my work with the developers. When I sat in to write and groom Agile user stories, the design decisions were already buttoned up with the stakeholders for the piece the developers were about to build.
When I was preparing for this presentation I came across this image. It made me think of the situation in both Agile and Waterfall. The sun = the dev team: all this potential, energy, capability, ready to go. Stuck in the middle are the Project Managers, Business Analysts, and Functional Analysts. Way out on the far right you’ve got business stakeholders, and crunched in the middle, sometimes Product Owners. Over here on the lower right, in many places, you’ve got designers and UX practitioners who are pulled into a project for a while and then dropped out again. We often don’t stay in full communication with developers and stakeholders. In between…all that space. The great thing about working in a truly Agile environment (not a water-scrum) is that it reduces the space between everything.
Agile done well brings us together. It improves communication and transparency. But where are the customers? Still at the end of the “telephone” game chain. In some companies they are still on the other side (over to the right of the sun). What drives decision making? Agile says to test iteratively…but test for what, and why? “Can customers successfully use a feature?” is the typical test case. Yes, but do they *want* to? Did we build the right thing? Testing in the Agile dev cycle doesn’t always answer these questions.
When I came onto the team at GLG, the Product Owner was brimming with ideas, features, and significant product enhancements. He was a smart guy with amazing ideas that had some cool UI design challenges. However, what we did not know was 1. why our site adoption was low and 2. why we had two significant drop points in our conversion funnel. In short, what we did not know was why things were broken. The plan from the Product Owner and his boss was “build some stuff, put it in front of customers, and see if they like it,” AKA classic build and test. He had a list of features and work for the developers. We were going to stay stuck in this “features rut,” where rolling out features was our “success,” but we didn’t have a clear idea of (and therefore alignment with) the overarching business plan and goals, the metrics that benchmarked our product as a success to the business.
It took some work with my Product Owner to convince him that doing discovery research, rather than customer feedback or usability testing after build, was worthwhile. We compromised with a split of the study: some discovery interviews, and some testing of features after a build of features in the dev queue. I interviewed about 14 clients, mostly on the phone with a WebEx session. If they let me come to their office I did so; this didn’t happen often, as hedge fund investors want to keep their work private. I wanted to learn some pretty straightforward Discovery Research things:
* What customers were doing in the site – their actual actions/clicks/downloads, etc.
* What they found useful – their perception of when and what about our site was useful to them.
* What they liked and disliked.
This was not a usability test. If you’ve done discovery research for waterfall projects, this should sound very familiar. This research was a success! We got a lot of valuable information from the clients, and with the majority of them I got to watch them use the site.
Because the client research was a success and the findings were considered valuable, my reward (no sarcasm) was that some months after that research wrapped up, I was asked by the Director of IT Ops to do the same kind of research for the customer support application. Win!
In this later piece of research I shadowed 16 Research Managers (our customer support staff) for 60-90 minutes each. Their tool was the other side of the coin from the customer tool – it was the fulfillment half. It was important to understand what went on in the lifecycle of a transaction. The tool wasn’t making their jobs any easier. The photo is of me standing by a printout of a typical record, which was 12-14 “screens” long on a 1024x768 monitor. The useful data was spread all over these pages.
There was also an awkward situation during this research. The manager of all the customer support staff posed a question he wanted answered: were the staff doing their due diligence? User Research should never be for spying on employees. That’s bad for the Practice, bad for the Profession, bad for relationship building, and just plain bad. Rather than refuse the question outright, I looked for a way to accommodate at least some of this interest. I turned the question so that it was about reporting what in the *tool* hindered or helped the employees do their due diligence. Though most of them did do their due diligence work, rather than talk about employee A doing as trained and employee B skipping tasks, I talked about places where redesign of the tool would make the due diligence easier, make the data more clear, and foster behavior that made it a seamless part of the task. I tied proposed recommendations to something the manager cared about.
We learned a number of things from these two pieces of research.
Insight into our conversion drop-off:
* At the end of the funnel they didn’t convert…they called customer service.
* In the middle of the funnel they were comparison shopping.
* Lack of transparency and trust in our app and process were problems for the customer (thus the conversion by phone).
* A number of proposed feature enhancements were too complex.
* It shed light on what kinds of changes would be important to usability test.
Usability issues in the customer support tool were numerous:
* We had data they needed that wasn’t pulling into the consultant record or client record.
* Screen flows didn’t match their actual activity.
These kinds of findings are not specific to Agile; they come up in waterfall and lean projects, too. The findings were for the same stakeholders as in waterfall: designers, product owners, business stakeholders. The difference is how we choose to act on the findings to fit the Agile process.
We used it to prioritize immediate fixes and dev work for upcoming sprints. More important, we (the product owner, dev manager, ontologist, tech architect, search algorithm specialist, and me) used the findings when we built a vision for the product. Or in our case, a consensus; we didn’t quite get to Vision. We got together and put everything on the table: go-ideas, UX fixes, long enhancements, technical debt, system/platform upgrades, design activities, testing, prototyping. We prioritized. We haggled. We debated interdependencies. We juggled. We talked metrics. In the end we managed a decent roadmap for the product: which Epics, which iterations, what was solid, and what needed exploring. We had a decent idea of how the high-level pieces fit together. It also helped to articulate and support the goals and roadmap that our product owner could share up to senior management.
The research was never hidden or withheld from my dev group. They knew generally what my activities were, and they were invited to take an interest. When I presented findings to the business and product owners, they were encouraged by the dev manager to attend. I also crafted a “tech-oriented” presentation for them that focused specifically on hands-on, development-centric items, many of which mapped to User Stories and Epics in their backlog. I did this because the dev team didn’t care about some of the very business- and design-oriented findings. For example, they didn’t care at all about the raging debate between design and product over badges and flags in the UI and what the research revealed about the customers’ perception of them. To my devs that was squarely in the “make up your mind” category. They *did* care about user perceptions of performance, incidents of data corruption, the accuracy of the search algorithm, metrics tracking, and the plans for new work. We had a better backlog because of our prioritization and roadmap work. For certain pieces of work we had a much better grasp of what users were trying to accomplish.
What did it mean for the Agile dev team? The research itself, I don’t know...they were not so much interested in the findings as in the actionable decisions that came from them. In the end, they were second-hand consumers of the research. The most important thing is that because we succeeded and roadmapped, the developers had a sense that:
* We knew what we wanted.
* We knew where we were going.
* It went somewhere worthwhile.
A two-week sprint or a four-week-out grooming session just can’t paint the whole picture.
Other teams that didn’t have a roadmap and a plan did a lot more thrashing and spinning than we did (which isn’t to say that we had none). They were building whatever shiny new thing caught their product owner’s eye that sprint and had trouble rolling out completed workflows. Being scattered and unfocused is not good for any project or any team. In Agile, the designers and product owners are closer to the work process and can and should feel more of the problems that lack of clarity can bring. The last thing you want is for your dev team to perceive you as this guy: lost, scratching his head, not sure where to go.