When it comes to adding new features, slow and steady is usually best

In the article From Four Wheels to Two, RJ Marsan talks about the Lyft engineering team’s principles for quickly and safely adding major features to a mobile codebase. It’s full of interesting lessons from the process of adding scooter rentals to the app. Here’s a good point about not trying to do everything in the first release of a new feature:

Every new feature is a chance to start with a clean slate, and it’s often tempting to immediately build for scale. We all want our products to launch to massive fanfare and usage, but more often than not, the path to success for new features is slow and steady. With steady growth in mind, we designed our first architecture to support exactly what’s needed for our first product iteration, and nothing more. For Lyft Scooters, we cut out many features one might expect from a classic ride such as sharing ETA or setting a destination.

How to help teams focus, align, and start delivering at their potential

My favorite article of the week is one on change management, which, yes, I know how that sounds. Like you (probably), I hated the phrase “change management”. But in her article How to begin the invisible work of change management, Cate Huston defines it as “taking teams that are struggling and helping them focus, align, and start delivering at their potential”, so now I finally understand what it is and I don’t hate the phrase any more.

Anyway, the entire article is excellent. If you’re interested in managing teams at all, you have to read this one. Here she is on diagnosing problems within teams:

Sometimes people look at teams and diagnose problems as an absence of process rather than an absence of values, or cohesion, or delivery.

And on the problem with trying to implement a process without understanding cultural context:

When teams struggle to do retrospectives, often the problem isn’t the lack of retrospective, but the lack of psychological safety that a productive retrospective requires. That might be due to lack of trust or cohesion on the team, or other problems that have not been addressed. You need to address those things first, otherwise the retrospectives will not be productive, and the process will be for naught.

Using Opportunity Solution Trees to focus on the product priorities that matter

Prioritize Opportunities, Not Solutions is a really good post by Teresa Torres about using the Opportunity Solution Tree methodology to make sure we place lots of product bets early in a product’s lifecycle and only implement the ideas that matter. I also like her distinction between one-way door decisions (“once you’ve walked through the door, you can’t turn around and come back through”) and two-way door decisions:

Your opportunity assessment and prioritization decisions are two-way door decisions. Once you choose a target opportunity, you’ll test whether or not you made the right decisions by prototyping and experimenting with solutions that address those opportunities. If you learn through experimentation that you didn’t choose the best opportunity, you can always walk back up the tree and choose another opportunity.

Launching a product is the start of the learning process, not the end

Jeff Gothelf addresses a very common problem in his post on the perils of fixed-time, fixed-scope projects. This point on what it means when you “launch” a product is so important:

Deadlines imply that if we don’t get the product right on the day we launch, we’re doomed. This is an antiquated point of view. Launching publicly simply begins the process of learning how right (or wrong) our assumptions were. It is the start of a continuous conversation with your target audience and the fastest way to learn how to optimise your system. The sooner you can get something to market, the sooner you can make the system better. By the time you get to your deadline, your product should have been in market for multiple cycles.

From product managers to product coaches

I have Thoughts about the term “Product Thinking”. But as a general progression, I think Sebastian Saboune is right that as an organization grows, product management has to evolve from a thing we do, to a thing we help the entire company do:

Our understanding of our craft has come a long way since 2015. I believe that it is now a good time to evolve product management into product thinking. A philosophy, mindset, a common knowledge. But primarily, something that can be acquired by anyone in the organisation.

The people in charge and at the forefront of this change will be product coaches. They will be the custodians of product thinking within an organisation, and tasked with getting people, teams, and organisations to become more product led through product thinking.

In summary; product management will become product thinking, product managers will become product coaches, and this will lead to organisations being more product led.

What it means to “think like a PM”

Marty Cagan’s article on the characteristics of a good One on One meeting is great advice for managers, but it also includes some excellent points about what makes a good product manager in general. Here, for example, is a section on what it means to think like a PM:

What does it mean to think like a PM? It means focusing on outcome. Considering all of the risks – value, usability, feasibility and business viability. Thinking holistically about all dimensions of the business and the product. Anticipating ethical considerations or impacts. Creative problem solving. Persistence in the face of obstacles. Leveraging engineering and the art of the possible. Leveraging design and the power of user experience. Leveraging data to learn and to make a compelling argument.

The problem with Impact/Effort prioritization

Itamar Gilad shares a healthy critique of the Impact/Effort prioritization matrix that is so ingrained in every product manager’s brain:

As straightforward as this all seems, there are major problems with Impact/Effort prioritization that cause us to pick the wrong winners. Most importantly Impact/Effort analysis requires us to make somewhat reliable predictions on future events — the effort we will require to complete a task and the value that will be delivered to users and/or to the company once completed. As it turns out both are jobs we’re exceptionally bad at.

He doesn’t say we should stop using it, though, just that we need to move the borders of the matrix around a little. It’s also worth noting that there is no “one size fits all” prioritization method. I’ve written about different prioritization techniques, and I’m a proponent of choosing and adapting the ones that work best within the culture of an organization.
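
To make that concrete, here’s a minimal sketch of my own (it’s not from Gilad’s article, and the numbers are invented) showing how easily estimation errors flip an Impact/Effort ranking:

```python
# A minimal sketch, not from Gilad's article: all numbers are made up,
# purely to show how fragile an Impact/Effort ranking is to estimation error.

def score(impact, effort):
    """Classic Impact/Effort ratio: higher looks like a better bet."""
    return impact / effort

# Confident up-front estimates for three hypothetical ideas.
estimates = {
    "Idea A": {"impact": 8, "effort": 4},   # 2.0  -> ranked first
    "Idea B": {"impact": 6, "effort": 4},   # 1.5
    "Idea C": {"impact": 5, "effort": 5},   # 1.0  -> ranked last
}

# What the numbers might actually turn out to be after shipping: both the
# impact and effort guesses were off, which is the prediction problem
# Gilad describes.
actuals = {
    "Idea A": {"impact": 4, "effort": 8},   # 0.5  -> now ranked last
    "Idea B": {"impact": 6, "effort": 5},   # 1.2
    "Idea C": {"impact": 7, "effort": 4},   # 1.75 -> now ranked first
}

def ranking(ideas):
    return sorted(ideas, key=lambda name: score(**ideas[name]), reverse=True)

print(ranking(estimates))  # ['Idea A', 'Idea B', 'Idea C']
print(ranking(actuals))    # ['Idea C', 'Idea B', 'Idea A']
```

The point isn’t the specific numbers; it’s that a ranking built entirely on predictions inherits all of the error in those predictions.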

Questions to answer before adopting a new technology

Kellan Elliott-McCrea has an excellent list of questions a team should answer before they decide to adopt a new technology in their software development process. For example:

If this new tech is a replacement for something we currently do, are we committed to moving everything to this new technology in the future? Or are we proliferating multiple solutions to the same problem? (aka “Will this solution kill and eat the solution that it replaces?”)

As he mentions in the post, these questions are not subtle… but I think they’re absolutely essential. The list reminds me a little of Marty Cagan’s Product Opportunity Assessment questions.
