Are you making real progress? | Tim Herbig

Bonus Episode – In “Are you making real progress?”, Justin Woods speaks with Tim Herbig about moving beyond “alibi progress” and focusing on real product impact. They explore how strategy, OKRs, and discovery must work together, why teams mistake activity for progress, and how to reach “informed conviction” in decision-making. Tim also shares practical ways to prioritise problems, align discovery, and avoid over-reliance on frameworks.

As a Product Coach, Tim helps teams measure the progress of their decisions by enabling them to make clear strategic choices, translate strategy into pragmatic, leading product goals, and reduce the uncertainty of problem and solution priorities. Tim has helped companies like StepStone, Chrono24, Deutsche Telekom, and Specsavers turn product theory into pragmatic and practical application. Before going independent, he spent 10 years in Product roles at companies like Gruner+Jahr and XING, as well as at multiple B2C and B2B startups. Tim now helps product teams develop and implement better practices that help them progress in their context instead of chasing artificially created "best practices." He achieves that through hands-on training, coaching offerings, in-depth courses, and intriguing content.

Here is an audio-only version if that’s your preferred medium - you can also access it through your favourite podcasting platform (Apple, Spotify, Amazon).

Are you making real progress? | Tim Herbig
Tim Herbig and Justin Woods

In the next episode, we are closing Season 2 with a discussion between Phil Hornby and Justin Woods about what we have learned this season. So watch out for episode 26!

  - Can they stand behind the decision they take, whether the decision is to build the thing, to build a new solution to a new problem space, or to discard everything altogether? They can be convinced about many things, and what that will take will differ for many teams; just comparing B2C, B2B, and internal products, that will require a different set of data or evidence to get to that conviction.

    - Hello, everybody, and welcome to the next series of "Talking Roadmaps." In season two, we're talking product operations, but we're actually breaking out and speaking to authors of books in the product management domain. We've had Tim on the show before, and I'm really excited that Tim's back today to talk about his book, "Real Progress." Tim is a product management coach, author, and speaker. But Tim, give us an update on what you've been up to recently.

    - I would say probably the biggest thing since the last time I've been on the show, it was the book. I think it's fair to say that, so that's the big thing. So, to me, it felt like codifying a lot of the ideas that I have been dabbling with, working with as an in-house product manager and also as a coach with clients over the last couple of, yeah, 15 years, fair to say. So, yeah, that was the main thing I was up to. And now, of course, the interesting part is also to hear how things are landing with people, what they take away from the book, and to also, of course, think about like which ideas need further development or where to take things.

    - If you're enjoying the channel, subscribe, hit the bell icon, and give us a like. Yeah, for sure. And we were chatting offline about some of the things that I really liked about the book, and for me, it was that it was not just that it was a narrative, but actually a reference that you can dip into. I think I really enjoyed the diagrams and the practical examples that you created, and the book is peppered with them, which is fantastic. It just brings that visual element, and it's not just a wall of text. And what I really liked towards the end of each of your chapters or sections was the quality checks, the downloads, and the resources. So for me, that really resonated and helped me make the book more of a practical resource. What's some of the feedback that you've had from folks?

    - The best feedback, the kind that always warms my heart: actually, just yesterday, I received an email from a reader who sent me a photo of her physical copy with all the markings and notes in the margins. And another time, I spoke at a meetup, and someone brought their fully equipped book with Post-its and notes and everything. So to me, that matches the vision I had in my head when I was writing the book. I get the feedback that it's pretty dense, which is something I was going for, but the idea was that you read something, you want to put the book away to think on it or maybe put it into practise, and then you come back to it. So I never imagined it to be this linear read you would go through from cover to cover, and that seems to be the case. The interesting thing is also that I imagined the book will be different for everyone, right? Maybe your current focus is more on the side of OKRs or strategy or discovery, and then you pick your section, and for some people, that unlocks new things. But the consistent thing is that people get this vibe of, okay, here's how these other practises can really help me. Like, here's where I can connect the dots, which is what I was going for, and that seems to be the main thing people take away, which, of course, is very satisfying to hear as an author.

    - Yeah, for sure. And so, for some folks that may not be familiar with your book, it's called "Real Progress," and there's kind of three really big topics in there for which I know you most for as well. But maybe for our audience who aren't familiar, tell us a little bit about the premise of the book.

    - Sure, so the main premise, the big setup that I try to do in the beginning, is that, as product managers, we're oftentimes so busy worrying about whether we do the thing correctly, like whether we write the OKR correctly, if we do the continuous discovery habit like it's meant to be, and all that kind of good stuff. And I've been guilty of that in my career as well, to realise, okay, sometimes I'm more focused on what I call the alibi progress of ticking the box and doing the thing, but I lose a bit of sight of what this practise is actually trying to do for me. And so the idea that I'm introducing early on in the book is that, to think more of the value a practise brings to your work, you have to basically treat it a bit like a product. Like, can you tell me which problem this is solving for whom and how you would know it did this? And we're so used to doing that for a product, but for ways of working, we easily lose sight of that. So I introduce that idea of, okay, if I take these three practises that I chose for the book, here's how I define the value for each of them, what the core of them is, and also how they play together. Just to be aware that, for some of these domains, of course, it's easier to go deeper into the practise, but sometimes you might just have to pick up your head and realise, oh, the reason why I'm stuck with my OKRs is because my strategy isn't specific enough, and no amount of ChatGPT prompting or OKR templates will help me fix that in the OKRs if I don't fix the strategy. So that was the idea. And then, after that, we move into each of the domains, like strategy, OKRs, discovery, and the framing I chose, which was inspired a lot by James Clear in the book "Atomic Habits," where he says that advice is so context-dependent, but questions are pretty adaptable, and I was like, who am I to give general advice across these three domains? So what I chose to do is I dissected the three domains into individual attributes, as I call them, and have them basically work as something like diagnostic tools in a way. You know, like, oh, I'm struggling with my OKRs. Okay, one of these four attributes could be the reason why you're struggling with it, because it's out of balance, and then you can dive into, okay, how can I improve this attribute, and how can I reflect on that? And that's pretty much the theme of the book. So we have the attributes, we have those diagnostic questions, and then we have specific methods and approaches in between to actually do the work and improve those things.

    - Really nice. Yeah, and the bits that stood out for me were, and it's obviously infinitely more complex than this, but you framed it as the product strategy helps you to say no, the OKRs help you to measure the progress, and then product discovery helps you reduce uncertainty, and you've pulled all of that together in the progress wheel, which recognises, quite rightly, as you said, that if we take any of those in isolation, then we might optimise it, but we miss the context within which it sits. And so you provided that progress wheel to allow each practise to support and amplify the others, which, yeah, totally makes sense to me. I wanna go in a little bit deeper, then. So let's think about alignment. You talk a lot about alignment as the foundation of autonomy in discovery, so trying to make sure that you've got that alignment there. What do you think good alignment looks like in practise? What have you seen?

    - I think when it comes particularly to discovery, as you brought it up, a big part of alignment comes from the fact that, when you talk to many customers, and I'm sure you have a similar experience, there's no shortage of potential customer problems you could solve all day long, right? You could skim whatever source of insights you have and then make your choice, and you could solve problems all day long. And I think the core question comes up because, in many organisations, there's this question of what's going on with discovery. It's like this black box. It's this secret black box sequence of activities, like what's going on there. And the thing there is that if you don't have any kind of guardrails or orientation in there, like what's the lens, basically, through which you would look at all your customer problems, it can become very opportunistic very quickly, you know? And that's, for example, where something like strategy could come into play: where's the biggest risk in our strategy? What are the biggest uncertainties or assumptions? Or you would look at your OKRs, whether it's your company goals or your team goals: what are we trying to achieve here, and what kind of levers do we have? What kind of customer segments or customer problems are actual levers we have to achieve these goals? I think that should give you this bit of orientation, and within this orientation or these guardrails, you can then move very, very freely, right? You want to pick the methods, you want to explore, you want to test things. So I think that's perfectly fine.

    - Yeah, that makes sense. And I love that concept because otherwise, it's a free-for-all, right? We could solve problems all day long and not know, you know, what the constraints are within that.

    - Yeah, and I would even say, I mean, there's always a time and space for being, of course, much more generative, and I think what people have to understand is it depends on the level of uncertainty that you have. Like, if your uncertainty is which registration page converts better, that's a pretty narrow uncertainty, right? You wouldn't go off trying to figure out what problems people have with the checkout flow on the product page, right? You're much more specific in that, compared to your uncertainty being, oh, I wonder if we can move into this new market or serve this new audience. Obviously, there's a much wider playing field, and you could pick very different techniques and be much more generative and explore different angles before you would then narrow things down as you go along.

    - Yeah, great, great response. I totally agree. So, if the guardrails stop us from going too lateral, how do teams know when they have enough to move forwards? So, how much is enough?

    - Yeah, I think that's probably one of the most commonly asked questions in the context of discovery, and luckily, I can refer to someone else on that. On the question of when you're done with discovery, I love this quote from Ravi Mehta, who used to be a CPO at Tinder, worked at Facebook, and is now a consultant, and he says, basically, with discovery, we're aiming to get to a point of informed conviction. And I love that because the problem is many teams want a more definitive answer. They want a score, they want to know the number of activities they have to complete to be done with it or the time it takes. And when you present a more fluffy concept like informed conviction, it's like, what does that mean? Like, how can you standardise that? And I'm like, you can't, and that's the point. Like, you don't want to standardise it, quite frankly. You want to make sure that teams have the tools and the capabilities to get to that state, whatever that looks like for them. The point is, can they stand behind the decision they take, whether the decision is to build the thing, to build a new solution to a new problem space, or to discard everything altogether? They can be convinced about many things, and what that will take will differ for many teams; just comparing B2C, B2B, and internal products, that will require a different set of data or evidence to get to that conviction.

    - Absolutely, yeah, fantastic. In fact, you just mentioned the problem space, and your work emphasises navigating the problem space before the solution space. What patterns do you see in terms of teams that think they're problem-focused, but actually they're solutioning? Have you seen that before in your time working with your clients?

    - Yeah, 100%. I think it's probably one of the biggest reasons why I get to work with companies, that they want to sort of make that shift, because they realise that the majority of solution work that they did didn't move the needle. And, of course, that has different reasons. Like, A, you could even say maybe they picked the wrong needle to move. Like, maybe they even used the wrong metrics to determine the success of a feature, which is also very, very common, which is more of a goal conversation. But over the years, I think my focus has shifted a bit. What I recommend teams not do these days is, even when, let's say, a CEO drops a feature idea on your desk, or a stakeholder is very opinionated about a thing to do, don't try to lecture them about, no, we have to go to the problem space first and take the happy path and the right process, because I think that's much more counterproductive, actually. That won't get you very, very far. Instead, you want to acknowledge where you're starting from, like the solution space you're starting from, and where you're moving into, and say, okay, even the solution space focus can be a starting point for then elevating, so to speak, your discussion to a problem space level. So I think it's meeting people where they are, as Teresa Torres once said, and then trying to show them, okay, what can we even do uncertainty-wise or discovery-wise in the solution space? And maybe, by that, you point out gaps or problems in the problem space understanding. However, one of my favourite ways to recommend to teams to figure out if they're still in the problem space or have already moved into the solution space is: can you still name what you think is the outcome of the problem as a How Might We statement? And this sounds so stupidly simple, because we know How Might We statements from ideation sessions. But just imagine you're invited to an ideation session. There are two groups, and one group says the challenge reads, how might we build the sharing button? Like, pff, I don't know, we put it left, right, blue, green, whatever. And the other group says, how might we enable the sharing of data faster? It's like, okay, that's a much broader question. And I think just trying to ask that of yourself in your own mind is, I think, very, very powerful and can at least give you a sense of, oh, are we already talking about solution details, or are we actually trying to understand the problem space?

    - Yeah, very much so. I think meeting people where they are is vitally important. When you're going in as a coach or a consultant, I've had this before with clients where I've gone in at a level of maturity beyond what they were ready for. And so you've almost gotta just go in and meet them where they are and then bring them along that journey. You mentioned also, and this is a quote from your book, “Outcomes over outputs is a worthy aspiration but can lead to artificially defined outcome metrics for teams that can’t measure and thereby can’t act on them.” And I think again, that is about meeting people where they are. Some of the clients I've worked with are very large, up to 14 100 companies, and in there, it's very common to have product teams that are solution teams and then product teams that are platform teams. And when you're trying to talk about the methodologies with each of them, the solution teams have much more latitude to talk about discovery and to be more strategically aligned, whereas the platform teams, by their very nature, are often forced to be output-based because they need to deliver the building blocks that have already been defined. And so, again, you just have to meet people where they are.

    - Yeah, agreed. I love that you brought up this outcome quote, because I think what's so interesting in our industry is that we love this ambition of outcomes over outputs, and there are so many benefits to that, without a doubt. But I think, as you pointed out, there are two caveats here. A, as you mentioned, does that even match the type of work the team is doing? Like, do I try to artificially turn a clear output into an outcome just so that it's an outcome? And then, I think, the usefulness of such a goal becomes very doubtful, because if the outcome doesn't move as you go along, what's the point of looking at this goal, right? You could even say an output goal at least allows you to track progress. Of course, there's the fair argument with teams that if I have output goals, I also have a backlog or a task list where there's a bit of duplication, and I agree, right? I think it's a conscious choice to say, okay, what's the added value of having output goals for a certain period or in a certain scenario. But the other thing that I think becomes even more dangerous, because it's less conscious, is when teams set outcome goals because they have this ambition, but they realise that they either don't really understand which outcome, from a customer perspective, is worth driving, or they can't even measure it, right? Because just having the outcome is not enough if you want to measure it. And so, I always like to, actually, I had that in a workshop this week, where I talked to a company: imagine this, you write the perfect outcome goal, and then the quarter starts, and after two weeks, you do a check-in. You look at the goal, the goal is at 0% progress, and this repeats six times throughout the quarter because you can't measure the damn metric. So let me ask you this, what's the point of doing that? And if the answer is there is no point, then, well, that's alibi progress, and you maybe should think about why you're doing it in the first place.

    - Massively, massively. And again, alibi progress is something that you cover a lot in your book because it's not real progress, which, of course, is the title of your book. But another quote that I take from there is where you say, "Choose value over correctness in every practise." And I think that summarises what we've just described here, which is, look, it might be aspirational and correct to think about outcomes, but some teams are output-focused. "Choose value over correctness in every practise" just summarises that beautifully, Tim. Many teams ideate widely but struggle with prioritising ideas, and I've seen this time and time again. I've lived it, I see it with the clients I work with. How do you help them to avoid overvaluing novelty and instead focus on ideas that genuinely change user behaviour? And I think some of that picks up on what we've talked about already.

    - I think it is. To me, it feels like, when I think about novelty, as you mentioned, and I love that you bring this up, novelty only makes sense if novelty really is the end, as in, hey, you want to differentiate in the market by always being cutting edge or whatever. There's a narrative around that you could get behind; you can doubt its effectiveness, but it's a narrative to get behind. And the thing with strategy is, no one knows anyway what's the right one from the get-go, so I think that's fair to say. The other angle for me is, when teams talk about their problems, or the problems they want to solve for customers, one thing I try to encourage, also with the leadership in the companies, is that they play a part in that shift by not just demanding from the teams that they talk about why exactly this is a problem worth solving, but also by asking, by default, when someone brings up a problem, how do you know this is a problem worth solving? And not to criticise or to put them down, but just, no, literally, help me understand why you think that is. Like, tell me about the customer interactions, the quantitative data, the indicators, the competitor moves, whatever signal you have, but tell me about it. And then you can always debate, is this the right signal, do we need more or less, blah, blah, blah. That's fine, but you have to first have this conversation. So I think it's this chicken-and-egg situation of, you have to demand that kind of clarity when teams present, let's say, roadmap ideas or other prioritisation decisions, and the teams have to be equipped, of course, and ready and willing to step into that accountability, to say, yeah, here's why I think this is something we should be doing. And that, of course, ties in with all the technical stuff of actually being able to do research, to separate the signal from the noise, to understand which data to pull in when. But at the highest level, I think it's about making the default question why is this something we should be doing, as simple as that, based on some kind of evidence.

    - Yeah, massively. And also, I think it goes back to what you shared at the beginning, that sometimes the problem isn't the frameworks, but it can be our rigid adherence to them. I know that was a pain point for you, and part of the whole journey of learning in your book, but I've been there too. You know, sometimes you just say, okay, well, how do we prioritise ideas? Let's use the Kano framework, or let's use, you know, MoSCoW, or let's use ICE or RICE or whatever. And actually, a lot of them are missing the point that it should be a multi-layered prioritisation, and really what we should be doing is prioritising against strategy first. Strategy is the first part, which, of course, you say helps you to say no; you should filter your ideas, or your discovery work, through your strategy to set those guardrails. And I think that's one of the mistakes that I see a lot of the time, that people just take a framework off the shelf, apply it, but don't really think about the wider context. And that's why your process with this is so important, because one of the things that informs the guardrails on our ideas or our discovery or the work we do is actually understanding the context of the strategy, the discovery that tells us if it's even worth going after or helps us understand the value of it, and the OKRs that help us measure the progress or act as guardrails on it. I think you have to look at the whole progress wheel in totality.

    - Yeah, you have to. And I would say at least acknowledge that when you find yourself in a situation of struggle with any of these domains to admit like maybe I don't have to write OKRs harder or have to try to do another strategy template. Maybe the answer's already there, which is something I wanted to bring across, or I think it's like, it's almost like it's a bit more of an uplifting, encouraging spirit of like, sometimes the answer is already there. You don't need to do more digging, more work, more research. Like, sometimes the answer is there if you look at it and acknowledge that, oh, okay, my strategy actually says that. I might disagree with it, but that's the reality I'm in. And so, I can take that information as a helpful lens or decision-making tool for the struggle I'm facing right now.

    - Yeah, big time. And that's one of the reasons why I really enjoy Martin Eriksson's "Decision Stack," because the key takeaway, we're all familiar with the strategy execution models or the stacks and things like that. The big bit that I took away from that, and I know you've mentioned it and maybe even adapted it in your book, was the principles at the bottom level that can come back up again. Because what I've found working with a lotta product teams is that often the strategy isn't clear. They get a lotta demand coming in, it doesn't really get ratified, they get told to implement it, and they don't really have a way to be able to say no. What I liked about the principles is it said, look, in the absence of strategy, here are some of the principles that we're gonna have. It's mobile-first, for instance, we're gonna do that first, and that just becomes a principle, even though it wasn't strategic. And I really like that because it gives the teams something to be able to push back on or to provide that guidance on.

    - Yeah, exactly. And I think it's about looking around for these inputs, right? Sometimes you might have the opportunity that these inputs are as clearly articulated as you just named, and as Martin suggests, right? But sometimes these inputs might be less clear. And that's what I love about this concept of going back to the idea of using more questions, because if you really abstract what's in your strategy, one of the questions, of course, is, okay, what's the primary customer segment we focus on? If you separate yourself from all the, let's say, dissatisfaction you might have with the company strategy and the template and the process, whatever, and you just focus on the question, like what's our primary audience, and you look around the company, or you ask around for an answer to that question, you probably will find one. And it doesn't matter which imperfect strategy template it's a part of, or whether it's in a PowerPoint presentation or on a SharePoint site; you can use that answer for your work. It doesn't have to be this polished artefact.

    - Let's talk a little bit about kind of prototyping. So I think, Tim, you're an advocate for prototyping experiences and not just UI mock-ups, right? It's about the entire experience that the user will go on. What's an example where a non-UI prototype revealed something that a polished prototype would've hidden? So, does that question make sense?

    - Makes a lot of sense. One thing that springs to my mind is when I worked as a product manager, we ran a lengthy discovery, and one of the goals was essentially to understand how we could create value worth paying for, for a very inactive or passive customer segment, which was pretty large. So what happened is that we identified the type of content that, per our problem space understanding, should be relevant for this audience. But the problem was, if you're testing for desirability or willingness to pay, you can't really use qualitative methods, because you need a larger-scale understanding to test that. And so, of course, we did prototyping, and we tested the usability of it, but we didn't have solid evidence on whether this would actually be something people would pay for. The solution would've been adding this content to a newsfeed of some sort in an app. And what we did instead, to test this further, was we basically did a hard export of the kind of events that would happen for people, and we dumped them into an existing email template, we hand-selected the users, and we sent one-off emails, like, hey, so-and-so has a company anniversary or something like that, and then we would invite them to click on it, to also reveal, hey, you might have to pay for that, and to compare the open and click rates. So we basically abstracted the content into the format of an email so that we could test the assumption quantitatively and could move on from the sheer prototype-level, qualitative-based testing. So that, for me, is a good example, and I bring it up with companies because, in many companies I get to meet, the predominant thinking is still, when I'm prototyping and testing, it has to be a version of the final solution idea I have in mind. And that's why things like assumptions mapping from David Bland are so powerful, I think, because writing the assumptions helps you arrive at experiments that really test the uncertainty, compared to testing the UI, as you mentioned.

    - I think, with that discussion, we were maybe already going into MVPs as well. Reduced-scope MVPs are a major theme in your work. What distinguishes a genuinely focused MVP from, say, just a thin version of everything?

    - One thing I would add before that is that I think this understanding of what an MVP is has shifted over the years, right? Our industry loves to turn it into what it might no longer be. And for me, this idea is often that an MVP is sold as a cheap way to test things. When you look at what it takes to actually build an MVP, it's oftentimes not really cheap to do. It's also not fast. And also, what to test is actually completely put on the sidelines, and no one really remembers it. So keeping that in mind, there's another great quote from Casey Winters, who used to be CPO at Eventbrite, and he clearly differentiates: look, an experiment is something that you would do to test if something is there. And when you think about how most companies approach MVPs, it tells you that is not a way to test if something is there. Then you have to admit, look, an MVP might be the first iteration of a fully functioning and potentially scalable product you would put out to already move the needle on potential customer problems and business goals. So I think one way to do that is to really figure out, okay, if you remove yourself from the idea that an MVP is an experiment of the full idea where you try to build in shitty versions of all the functionality, and instead think of it as the first iteration of something more scalable, then I think it's much easier to focus yourself as well. And so, what's the first piece of deliverables we're going to bring into the product, into the MVP, and then scale from there once we see some kind of traction happening?

    - Great answer, yeah, totally. You mentioned impact mapping, and I think that actually appears frequently in your workshops that you do as well. How do you use it to help teams connect the day-to-day discovery activities to bigger strategic initiatives?

    - Yeah, so the thing for me with impact mapping is, I mean, I always try not to be too romantic about frameworks in general. I think I got to know impact mapping in 2014. It's been stable since then. I think it's been around since, the first version was 2004 or something like that, so it's very established. And I want to admit that there are similarities to other tree-like frameworks, like an OST or something like that. And again, if we separate the details of the framework from the value it provides, it's easy to see that the core value is to connect, as you mentioned, the solution space work to the overarching business priorities. Like, that's what it's supposed to do. And again, whether you use an impact map, an OST, whether you modify it, I would always argue, whatever you use, it should be in service of that. And what I think the biggest power here is, to stick with impact mapping, what it allows you to do is to come back almost full circle to this topic we had before. We don't want to solve customer problems for the sake of solving customer problems. You want to be sort of specific and to admit that, okay, it's kind of a lost cause to try to measure solution features that you build based on business goals and business goal metrics, 'cause they're too far apart. You can't isolate the effect it really has, so it's not really a good way to measure things. And, of course, the corridor's also quite wide. If you tell me improve revenue, you could argue for any feature idea on the horizon to build. And so, what we want to do is make sure that we can use customer problems as like a proxy. Like, can we drive business goals by solving customer problems? And I think that order is important because then, when you make a narrative or a case, not a business case, just a narrative or the argument for a feature, you should be saying we believe this feature is worth building because we aim to solve this kind of customer problem, and because we have high conviction this will contribute to this business goal, loosely speaking. And there's much more around that, tactically speaking, the actual research, the actual testing, whatever. But I think that's the core essence to admit, and I think it's also helpful to non-product departments in the company. You would say, look, we're not just solving customer problems because it's so fun. We're doing it because we genuinely believe it's the highest-leverage activity we have to drive business goals. And I think that just increases the level of understanding around why that matters and why you would invest so much time in that.

    - Yeah, massively. I mean, we've all seen, or maybe have seen, the meme online about the pizza. If you built everything that the customer wanted, you'd have a pizza with chocolate and a roast chicken on it, right? But it's so important that, actually, we don't start with ideas and problems. We need, you know, that strategic line of sight as to what it is, and you just mentioned that just now. The discovery work, the idea work, actually needs to be in service of a business goal. Itamar Gilad talks about, you know, his GIST framework, which is goals, ideas, step-projects, and tasks, which is that you can have those ideas, but they need to be within the context of a business goal. If we build everything that the customer wants, that might not be in alignment with our strategy and where the company direction needs to go. And I see that as a really big missing piece with a lot of the clients that I work with: they just prioritise, you know, customer requests and ideas without being strategically aligned. In fact, talking about those dots, you frequently talk about connecting the dots between strategy, OKRs, and discovery, and you spend a lotta time coaching and working with your clients and teams as well. What signals tell you that a team's strategy and discovery work are out of sync with each other?

    - One reason, for me, would be if the selection of customer problems is too opportunistic or too reactive. Like, oh, we have this new problem coming in, we're gonna take care of it. It doesn't mean you shouldn't take care of immediate customer problems if they are super relevant, super critical, super urgent, or business-wise aligned. That's fine by me. But if that's the primary decision-making criterion for prioritisation, I would be a bit worried, and I would also try to understand what's the strategic priority. So that's one thing. The other thing is, I regularly find myself trying to simplify language even further and to just say, look, if you remove all the fluff, all the nice things around it, strategy's essentially about what you will do differently. Like, what will you do differently, right? I mean, that's an age-old thing. Porter talked about it. Strategy is not about being better, it's about being different. And so, if a team can't tell me what they will do differently next quarter, next month, that would be a red flag for me. And the common argument against that is, of course, that companies might say, but we want to make revenue, we will always want to make revenue. I'm like, that's fine, and you can. My question to you is, if you have a revenue growth goal, for example, whether it's higher than last year or not, what will you do differently this year to hit the same growth goal or to exceed it? Like, what will you do differently? And I think this is true at a company and at a team level. If you can't answer that question, that would be, for me, an indicator that there might be some sort of lacking strategic clarity, which has all these downstream effects on your other activities.

    - You talk about leading indicators being a big part of your OKR guidance. And we know there's leading and lagging indicators. But how do you help teams to identify indicators that are genuinely predictive rather than just easy to measure? And I know I've fallen into that trap so many times as well.

    - First of all, for me, it's about understanding the spectrum, right? So you mentioned that correctly: if you draw a line from lagging to leading indicators and look at the spectrum, it could be easy to say, oh, we're just gonna pick the most leading one because it's the most leading, right? So I get that. I think, again, you have to understand the spectrum. And then I tell teams the answer is not as simple, just like there's no score that tells you you're done with discovery. Similarly, with the leading-lagging indicator range, you still have to pick, okay, which metric is leading enough that it matches my goal cycle and the way I work, to give me feedback on whether what I'm doing moves me in the right direction, and is still meaningful enough that it's not just random, like, oh, I spent X amount of hours on doing the thing. So that's how I would differentiate. And I think the good thing is, these days, more and more so, from a quantitative side, you have a lot of tools at your disposal, right? Tools where you can say, this is the lagging indicator, tell me the correlating leading indicators, the leading behaviours of customers who achieve this goal, achieve this conversion, achieve this behaviour, which gives you one indicator, right? So it tells you what the behaviours are. But, as a discovery 101, you still might want to understand why people behaved a certain way. Like, what were the driving forces? Did they click 10 times on the button because the freaking button is broken, and they tried to do it 10 times? That doesn't mean it's a leading indicator to get people to click on the button 10 times. So that's still the what and the why. But generally speaking, I think it's about understanding this range of what the sequence, so to speak, looks like, or the reverse customer journey mapping, as you could call it. And then think about, okay, which metric can I actually measure from there, which metric is still meaningful enough, and which metric still moves fast enough for me to get feedback on. So that's how I typically try to guide teams through that decision-making process.

    - I'd love to talk a little bit about road maps and your thoughts about road maps. That's where the channel got the name. That's kind of my area of expertise. But before I do, is there anything that we've talked about earlier that you'd like to reemphasize, or is there a question around sort of strategy, OKRs, and discovery that I haven't asked you that you'd like to share?

    - I feel like we've definitely covered the essentials. If there's one thing I would love to reiterate, it is this: look at the practise that you're doing and ask yourself, what problem is this solving, and for whom is it solving a problem? Just to get that kind of awareness, so to speak, of whether it makes sense to engage with, or whether it's just a layer of process that has been added to my daily work and no one really knows why we're doing it. I think that would be the main thing I would reiterate towards product teams.

    - Nice, I think the world would probably be a better place in general if we all asked that question of ourselves outside of product.

    - I think if we would all ask more questions, the world would be a better place, so yeah, but including this one.

    - Tim, I'm a big proponent of road mapping discovery. For me, road mapping is the output of a strategic process; it is not just a Gantt chart of outputs and features. How does road mapping intersect with modern discovery practises for you? Do you see road mapping being useful for road mapping our discovery efforts? Do they align with maybe high-level strategic periods? What does that look like for you?

    - It's such a good question. I can share my current take: I recently worked with a company with a lot of different product teams, which is trying to adopt more mature product practises and establish a more rigorous, not rigorous, but disciplined road map review approach as well, and so the question always comes up, what are the teams expected to put on the road map? And they are expected to operate in some kind of dual-track agile mode, so they're always in some kind of discovery-delivery mode. The place I landed on for this company, at least, and I think you can draw some conclusions for others, but it's difficult to completely standardise, is that the road map, to me, is like a snapshot of a team's priorities, in a way. So that's what it is. And if I follow that first principle, and a team also has discovery work in their responsibility, then, well, yeah, naturally, it should somehow be visible on the road map. So that's where I'm landing. And then the interesting question becomes, as you mentioned, what's the format of the road map? Like, do I still have this time-based thingy going on, or do I have a more Now-Next-Later-ish approach? I think that depends a bit on how you want to articulate it. What I liked recently with that company was that they ran into this issue of discovery taking very long, or it's like, oh, next quarter, we're gonna do discovery. So it's sold as this quarter-long activity, and, you know, there's some kind of law that says work expands to fill the time it's been given. I don't know who said it, so I don't want to misquote, but you get it. And so then the question comes up, okay, but discovery doesn't always take a quarter, right? And so you have to balance this thing between artificially forcing a team to move faster and admitting that there's this nonlinearity and lack of predictability, maybe. And so what happened there, which I really enjoyed, is to work with, let's say, a starting time box. I mean, if you do completely greenfield generative discovery, things are different, but for normal ongoing continuous discovery around a team's priorities, you say: aim for reducing as much uncertainty as possible in two weeks, and then we will recalibrate. And I think that is a good way to put things. Because, okay, you're setting some kind of time-bound constraint, but, again, if you then ask the team, if you can only do one thing in two weeks to reduce as much uncertainty as possible, what would it be, and why aren't you doing it if it's actually the biggest domino to push, so to speak? So that's what I'm kind of liking, to not just say, oh, this is going on, but here's the time box we're setting for ourselves, here's the intention that we're having. And I think then you can utilise some of these very neat work-in-progress materials from discovery, whether that's an impact map, your research questions, your assumptions map. You don't need fancy artefacts. You can just say, here's what we're trying to understand in the next two weeks. And then, of course, it's not about the two weeks, it's about how you move on from there, whether that means more time or you actually get to some kind of answer.
That's how I see it right now, but I will admit, it's a very complex space because discovery is nonlinear and hard to foresee, and road maps ask for some kind of prediction about what you will be busy working on. So it's something interesting to figure out.

    - Yeah, I think a lot of the time, road maps are also not honest, though, because we don't put a level of confidence on the road maps to say what we think. You know, we put timing sometimes explicitly on the road map. I think a Now-Next-Later road map for discovery along with the delivery side of things is ideal as a strategic artefact that shows our direction, not our promises. And so maybe we just update the road map with a level of confidence or a rough level of timeline that we think that it's going to take. And I'd also wanna go back onto something you said, which is maybe initially time boxing it to two weeks, and a quote that you had in the discovery section of your book, which is, "We should be leveraging AI to amplify discovery, not replacing it." So if we only want to work within a two-week time box maybe just initially, leverage AI in order to make the most of that two-week period.

    - Yeah, exactly, it's this, again, I'm quoting Ravi Mehta again, who once said, "AI lets you get to the hard part faster," and I really love that because it admits that there's still this hard thinking, this hard decision-making in place, but, of course, we can and should leverage tools that allow us to speed up the efficiency of getting through these hard parts. And yeah, so I think that's, in a way, like boundaries and constraints also invite creativity for how could you make it happen in two weeks, and that again, with AI, like just asking the question what's possible now that hasn't been possible before, it's much more fun to answer.

    - Tim, I know how important demonstrating value is to you, and you've talked about alibi progress. Maybe we should talk about alibi road maps here as well. What signals tell you that a road map is enabling value delivery rather than just feature delivery? When you work with teams, what do you see that makes you think that it's enabling value delivery?

    - The biggest thing is, and I'm gonna talk about the anti-pattern first because it's easier to name: the anti-pattern, for me, is that a team only thinks about what's the next thing they should be doing right around the period at which they have to present a road map. It's like, only on March 31st do you think about what's the next discovery we should be doing, because we have to put it on the road map for April 1st. And that, to me, is the difference between treating discovery, sorry, road maps as this artefact I'm producing for someone else, compared to using it as a snapshot of my priorities. And I think, again, it's like a process question. If you only fill in the road map so that someone else pats you on the back or gives you feedback, then it might not be entirely wrong, but the question of the value-add is harder to answer.

    - Brilliant, it made me very happy to hear that from you, and there were a lot of fundamental things that you shared there that I agreed with. Let's look a little bit forwards, and as we start to close out, I've just got this kind of bigger question to ask you, which is, if you imagine product teams five years from now, so the tools, the practises, the expectations, especially in your areas of expertise of strategy, OKRs, and discovery, what do you think will look fundamentally different?

    - I think the hardest part to predict is how discovery will look, because I think it is the most influenced by the recent changes in AI, because it enables so much different activity. I mean, we're just scratching the surface in so many things. Like, potentially letting the AI do interviews for you, more and more agentic analysis of feedback coming in. So that is the wildest for me to imagine how it actually will look. I think there's this risk of being completely on autopilot and just waking up to whatever priorities Claude tells you to do today. I'm not so sure how much ownership and accountability that still enables for product teams, quite frankly, for the actions they take, so that, I'm not sure about. I feel, fundamentally speaking, that for the questions you wanna answer through strategy and OKRs, it will be the same; the path to get you there might be different, right? You might get more proactive invitations like, here are leading indicators, or here are new correlations between conversions and certain leading indicators, or it, of course, allows you to monitor competitor moves more, do market research faster. Maybe you can even simulate certain strategic scenarios much faster by running through how cohorts behave differently. So I feel, to get to the same answers or to the same output, so to speak, of strategic clarity, what to do, what not to do, that should be vastly improved. My thinking or hope would be that there's still this prevailing sense of, what do I still need as a product team to actually take full ownership of those decisions? Like, how much of my own thinking is still required, so as to not outsource the brain too much, so to speak. I think that's an interesting challenge I would name today.

    - Tim, you've shared so much with the community, I wanna give you an opportunity and some space to just share your offerings, the book, your newsletter. You've got a wealth of resources that I'm sure a lot of people listening will really resonate with. How can people get in contact with you, and how can people find out more?

    - Yeah, thank you. The easiest way to get in touch is probably through LinkedIn, just search for my name, Tim Herbig. Another one, as you already mentioned as another free resource, is my weekly newsletter, which you can sign up to through our website, herbig.co. And from there, I will tell you about the other stuff. But, of course, the book is a really good place to start. The book will lead you to other places. But the best way to stay up to date is through the newsletter, where you will also know more about public workshops I'm running or conference talks I'm giving, and also that gives you an idea to learn more about the in-house services that I offer to clients to help them make real progress across those different domains.

    - Is there anything else you wanted to share?

    - The one thing that starts to become more top of mind for me, something I'm looking to explore this year as well, is the notion of internal product teams or platform teams or service teams, they go by different names, because I feel like they are oftentimes so invisible to customers and to us as a product community as well. And I think the work they create, the work they do behind the curtain, behind the scenes, that so much of the customer-facing stuff stands on, is mind-boggling to me sometimes. And sometimes learning about how little these teams are able to follow certain fundamental product practises is almost difficult to imagine. So if there's one thing I would say, it's to really acknowledge that even internal products are products, and product teams should embrace the kinds of practises we talk about, in whatever context they need to fit their work and the requirements they have. But yeah, that's something that currently makes me very curious, and it's a space I will explore more this year.

    - Yeah, it makes me very happy that you've said that, because within my product management career, I've very much been in an enabling product function. So when I was a product manager at Dell, I looked after the basket page for 16 countries in EMEA. That is part of the customer journey through to purchasing, but, really, it was just an enabling function. And then, during my time at Vodafone, I owned the customer-facing support products, so the help sites, the forum, web chat, and things like that, but also the internal agent-facing systems. So again, it was almost just a point in the customer journey, plus all those internal-facing systems. And again, I very much felt like we, in many cases, were treated like a feature or an output team, and sometimes that was necessary, right? We had to trust that the high-level strategy, the bigger picture, the discovery was done by the more offering- or solution-based teams, that they had done the due diligence to tell us that they knew exactly what we needed to build, and we didn't need to do any more verification on top of that. And so I very much felt like that. And sometimes, it's like you said, meet the people where they are. Think about the value, think about what it is that you're providing. And if we just need to be an output team because all of that other work has been done, that's okay. Well, look, Tim, it's been fantastic to have you back on the show. Thank you for being a returning guest. Really thrilled this time to talk more specifically about your book. A massive congratulations from us. And to the audience, if you haven't read Tim's book yet, please do go and grab a copy of "Real Progress." Details are on his website, and we'll provide the details below. Otherwise, Tim, that just leaves me to say a massive thank you for being a friend of the channel. It's been great to have you.

    - Thank you so much for having me. Really enjoyed the conversation.

Justin Woods

Co-host of Talking Roadmaps. Empowering tech teams to plan strategy, track execution, and clearly communicate using roadmapping tools.

https://www.linkedin.com/in/justinjkwoods/