That’s a good one. That was actually an OKR a business owner had defined at Ford. When I worked with him, I suggested that his best OKR would be to minimize the time from sale to EV home-charging installation.
It depends on the objective. One team I led was trying to test a business model. One of the objectives was to maximize learnings. The KRs were around number of research studies and experiments we ran during the quarter.
Yeah, I can see that. I work on a lot of emergent tech, and when we are in R&D we have objectives around arriving at a POC or research that will enable key decisions. Thanks for sharing.
It's the attitudinal objectives that are difficult to measure, but it IS possible.
When we measure how a person is feeling about something (e.g., Do I trust this information? Am I confident I am making the correct decision? Do I feel focused and productive?), we need both qual and quant data.
Qual data is generally self-report and observation. What do they say about their experience and expectations? What do I observe in their behavior, actions and reactions?
Quant data comes from analytics and instrumented metrics, such as time on task (or screen), errors, data displayed, interactions, bouncing, and forward and backward path tracing; a sketch of computing a couple of these follows the questions below.
Anything can be operationally defined - that's a superpower of social sciences.
What should I ask about, and how do I classify, categorize, and quantify what I hear?
What should I watch for, and how do I describe and quantify my observations?
What can I measure (time, count, conditions, intensity)?
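To make that concrete, here is a minimal sketch of deriving two of the quant metrics above (time on task, error count) from a hypothetical instrumented event log; the event and field names are invented, not from any particular analytics tool:

```python
from datetime import datetime

# Hypothetical event log; "ts", "event", and "screen" are illustrative
# field names, not from any real analytics product.
events = [
    {"ts": "2024-05-01T10:00:00", "event": "task_start", "screen": "setup"},
    {"ts": "2024-05-01T10:00:40", "event": "error",      "screen": "setup"},
    {"ts": "2024-05-01T10:02:10", "event": "task_end",   "screen": "setup"},
]

def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts)

start = next(e for e in events if e["event"] == "task_start")
end = next(e for e in events if e["event"] == "task_end")

time_on_task = (parse(end["ts"]) - parse(start["ts"])).total_seconds()
error_count = sum(1 for e in events if e["event"] == "error")

print(f"time on task: {time_on_task:.0f}s, errors: {error_count}")
# -> time on task: 130s, errors: 1
```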
I have typically defined behavioral benchmarks in the experience. At Ford, they had built an internal tool that collects sentiment analysis over time across the customer journey. There are operational tools like this. Wanting to hear how people have implemented tools & benchmarks IRL.
The key is to collect data from multiple sources. A weakness of self-report measures is that we assume people can (or even want to) describe their internal states, needs, and experiences accurately. Humans are only OK at that, and there is much variability in the data.
We need to observe and interview, too. People do not always remember their behavior accurately, and they may not think that something important for our research and insights is something they should tell us. Maybe it seems irrelevant or insignificant to them. People filter what they share.
Analytics and quant metrics help us better understand what is actually happening in a product, but not why or for what reason. People may not answer a survey completely or accurately, so we talk to them and watch, and we inevitably see and learn things that analytics or surveys could never reveal.
I’m familiar with business objectives. This was how we were trained to define OKRs. That’s the whole kitchen sink of the product framework. My question is, “What measures define a good experience? What behaviors drove those objectives? How well did we remove friction?”
They fit into some experience indicators and goals like task completion, session time, etc. I wanted to hear what people’s examples were, anything I may have missed.
I used to keep a list of various metrics. I may still have it somewhere. But I found those lists were not useful. To me, metrics are answers to questions.
So what's important (to me) is defining good questions that need to be answered. From there, we can figure out what metrics etc. are relevant.
In one team, we wanted to measure the gaps or opportunities in employee experience. For example: what brings uncertainty or lag into employees' daily decisions or actions, the frequency of those apprehensions, the time to resolve them, and the impact on the product.
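A rough sketch of how those measures could be aggregated, assuming a hypothetical friction log; all themes and numbers are invented:

```python
from collections import defaultdict
from statistics import median

# Hypothetical friction log from employee-experience research:
# (theme, hours_to_resolve) pairs, made up for illustration.
reports = [
    ("unclear data ownership", 6.0),
    ("waiting on approval",    24.0),
    ("unclear data ownership", 3.5),
    ("tooling error",          1.0),
    ("waiting on approval",    48.0),
]

by_theme = defaultdict(list)
for theme, hours in reports:
    by_theme[theme].append(hours)

# Frequency and median resolution time per theme, most frequent first.
for theme, times in sorted(by_theme.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(times)} reports, median resolution {median(times)}h")
```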
This depends so much on the nature of the product but I typically look at things like task completion with minimal interruption, decreased reliance on help, decreased ping-ponging in navigation (trial and error nav behavior). Other factors would be teasing out natural growth separate from marketing.
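For the ping-ponging part, a toy sketch that counts immediate back-and-forth hops in a session's page path; the path data is invented:

```python
# Count "ping-pong" navigation (A -> B -> A), a rough proxy for
# trial-and-error behavior. The path here is made up for illustration.
path = ["home", "settings", "home", "billing", "help", "billing", "help"]

def ping_pongs(pages: list[str]) -> int:
    """Count immediate back-and-forth hops in a page sequence."""
    return sum(
        1 for i in range(len(pages) - 2)
        if pages[i] == pages[i + 2] and pages[i] != pages[i + 1]
    )

print(ping_pongs(path))  # -> 3
```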
In the past I’ve tracked session time on value pages and improved arrival at a destination or completion of a task. I’ve also benchmarked correlation to engagement for strategic surface areas.
What I'm missing after reading the current answers is the actual needs, motivations and experience of the users for the specific context. So, I would do research to find what is important for users first, not just rely on SUS or similar, and then set targets based on that.
E.g., “processing X data” is slow, and the KR describes the measurable value that would mean we have eliminated that weakness.
I also prefer “pairs” of metrics or conditions that must be met when achieving the quant part of the KR, like “no quality loss”. Prevents bullshit tactics.
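A minimal sketch of such a paired KR check, using the “under 10s, no quality loss” example from this thread; the metric names and thresholds are assumptions:

```python
# Paired KR: the quant target only counts if the guardrail also holds.
def kr_met(p95_latency_s: float, quality_score: float,
           baseline_quality: float) -> bool:
    fast_enough = p95_latency_s < 10.0
    no_quality_loss = quality_score >= baseline_quality  # the guardrail pair
    return fast_enough and no_quality_loss

print(kr_met(p95_latency_s=8.2, quality_score=0.97, baseline_quality=0.97))
# -> True
print(kr_met(p95_latency_s=8.2, quality_score=0.91, baseline_quality=0.97))
# -> False: speed was "achieved" by sacrificing quality
```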
It's got to be linked to the outcome the UX is trying to achieve.
General usability is great qualitative data (observing actual issues), but at scale it's hard to isolate, given different contexts and levels of experience with the tool.
So I much prefer to measure how many people did the thing I was UXing towards.
I would argue that that is closer to the business's goals than the user's goals, so I wouldn't call it UX; I would maybe call it CustomerX. Throughout all my years as a user researcher, I have rarely heard a person prioritise the good of the company behind the product.
Yeah, it's context dependent. I usually ask myself, for the context at hand: is it accessible, understandable, usable, useful? Then I go through and check which metrics will reflect those in that context. For example, number of clicks to first contact, time to value, or task success vs. task satisfaction.
SUS for usability
C-SAT for satisfaction
NPS for current mood and a rough read on loyalty, on its 10-point scale.
But you can create a rubric for any qualitative data, converting it into a scaled number that can be compared and trended over time.
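For reference, a minimal sketch of the standard scoring formulas for two of those instruments (SUS and NPS):

```python
def sus_score(responses: list[int]) -> float:
    """SUS: ten 1-5 items; odd items score as (r - 1), even items as (5 - r),
    summed and scaled by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

def nps(scores: list[int]) -> float:
    """NPS: % promoters (9-10) minus % detractors (0-6) on the 0-10 scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100.0 * (promoters - detractors) / len(scores)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
print(nps([10, 9, 8, 6, 10, 3]))  # -> 16.67 (3 promoters, 2 detractors)
```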
It’s less about it being actionable and more about not having seen UX-led actions reliably move the score. Moving NPS, for instance, would require a company-wide objective that customer success, sales, product, and UX all drive KRs around.
Not exactly: if you have a great NPS analytics team with attitude-analysis software, you can look at the long-text responses, correlate scores with comments, and provide good analysis.
Does anything engineering does impact the business all on its own? Product management? It takes a village.
I find most stats aren’t actionable by themselves. There needs to be correlation with at least 1 other stat in order to convert a stat to insight, let alone action.
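A toy illustration of that pairing, correlating two invented stats with Python's standard library (3.10+):

```python
from statistics import correlation  # Python 3.10+

# Invented per-user stats: a satisfaction score vs. help-page visits.
satisfaction = [9, 8, 4, 7, 3, 10]
help_visits  = [0, 1, 5, 2, 6, 0]

r = correlation(satisfaction, help_visits)
print(f"r = {r:.2f}")  # strongly negative here: heavy help use tracks low scores
```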
I wrote about Product and Business Objectives here.
https://swkhan.medium.com/driving-clarity-and-alignment-via-business-and-product-objectives-6d2c9cca2046
O’s = core advantages/differentiators/strategy of our product. They rarely change. Something like “speed of delivery”.
KRs are defined as very concrete weaknesses in the O, like “process X data under 10s w/ no loss of quality”.
A few: task completion, time to complete, first-use completion rate, and first-use speed.
Care needed with all of them!
Did user do x ≠ did we make x easy to do.
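A small sketch of separating “did x” from “did x easily”, using hypothetical first-use sessions with help usage as a rough ease proxy:

```python
# Hypothetical first-use sessions; whether help was opened stands in
# (imperfectly) for how easy the task was.
first_sessions = [
    {"completed": True,  "opened_help": False},
    {"completed": True,  "opened_help": True},
    {"completed": False, "opened_help": True},
    {"completed": True,  "opened_help": False},
]

n = len(first_sessions)
completed = sum(s["completed"] for s in first_sessions)
unassisted = sum(s["completed"] and not s["opened_help"] for s in first_sessions)

print(f"first-use completion: {completed/n:.0%}, unassisted: {unassisted/n:.0%}")
# -> first-use completion: 75%, unassisted: 50%
```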
Why would we do that? Time/money?
User behaviour is the important outcome, but user experience is an important leading metric.
What I’m saying is that teams use these kinds of goals, which require cross-functional support, because they reflect customer and business outcomes.
I understand, I'm just saying they're missing out on UX outcomes. 🤷♂️😁
There is no value in a great experience that doesn’t drive customer behaviour and consequently business outcomes.
Sure, ideally we also measure the UX, but it’s often missed because of constraints. Agreed, behaviour change isn’t ipso facto a UX success.
I ask, because I haven’t.
It seems to me more of something you track as a health metric.
Objective: lose weight
KR: lose 25lbs