
Product roadmaps are still all right

Scott Sehlhorst wrote a really good defense of product roadmaps in Features do not a Product Roadmap Make:

If your roadmap says “Will include update quantity control in shopping cart” you’re doing it wrong. Your roadmap should say “Improved shopping experience on mobile” or “Better shopping experience for spearfisher persona.” […]

When a roadmap is being used to communicate “what” the product will be, it should be in the language of describing which problems will be addressed, for whom, or in what context. This is the most important type of theme which would be part of a thematic roadmap. Other themes could be “improve our positioning relative to competitor X” or “fill in a missing component in our portfolio strategy.”

And this, in the context of agile, is a great point as well:

A backlog – a prioritized list of features – is not a roadmap. It is a reflection of a set of design choices which happen to fulfill in product what the roadmap sets out as a manifestation of strategy.

A roadmap tells you both “why” and “what;” a backlog tells you only “what.”

This reminds me of an article I wrote in 2011 called Product roadmaps are safe. Good times.

Autonomous teams: challenges and recommendations

Marty Cagan has some really insightful thoughts (as usual) on autonomous teams in Autonomy vs. Mission:

In healthy teams and organizations, the way we normally reconcile these views [where the team might have one perspective and the leadership might very well have another] is that the leadership has control of two major inputs to the decision process. The first is the overall product vision, and the second are the specific business objectives assigned to each team.

Problems arise if the leadership does not provide clarity on these two critical pieces of context. If they don’t, there’s a vacuum and that leads to real ambiguity over what a team can decide and what they can’t.

The section on how to ensure consistency in design across different teams is also really good:

In the name of empowerment and also speed, my personal preference is to invest in the necessary automation (with pattern libraries and style guides) so that the team can get the design (interaction and visual) mostly right pretty easily, and acknowledge that on occasion, you will incur some “design debt” where we realize that the design needs to be corrected, and that’s fixed as soon as the problem is spotted. I like this approach because the manager of design is still responsible for developing a strong set of designers, but doesn’t have to be in the review cycle for everything (which tends to slow things way down, as well as undermine autonomy).

The difference between fidelity and resolution

John Willshire wrote a good post on the difference between fidelity and resolution in design. From Want to improve your design process? Question your fidelity:

For us, fidelity is all about the people axis; how close is this to the real world? That’s the future point, when the product is out in front of lots of people, being used often, at scale. If you want to increase fidelity, then you show whatever you have to more people.

Which leaves the vertical axis, things, to be all about resolution. Resolution is a much more technical description of what we have in front of us, used across many different fields to describe the detailed specifications of what the thing involves. It’s been much more useful when you’re using that language around the thing you’re working on.

There are some good illustrations in the post to make the point clear. I think this is a pretty important distinction, since it shows how user feedback can be helpful during each phase of a project.

Technology can’t contribute to a better world while those who make it are so unrepresentative of society

Judy Wajcman’s Who’s to blame for the digital time deficit? starts off like many similar articles as she ponders the role smartphones play in making us feel time-starved. But then she takes an unexpected and well-reasoned turn:

If technology is going to contribute to a better world, people must think about the world in which they want to live. Put simply, it means thinking about social problems first and then thinking of technological solutions, rather than inventing technologies and trying to find problems they might solve.

We can’t do this while the people who design our technology and decide what is made are so unrepresentative of society. The most powerful companies in the world today—such as Microsoft, Apple and Google—are basically engineering companies and, whether in the US or Japan, they employ few women, minorities or people over 40. […] Such skewed organisational demographics inevitably influence the kind of technology produced.

And later on:

If we want technology to bring us a better future, we must contest the imperative of speed and democratise engineering. We must bring more imagination to the field of technological innovation. Most of all, we must ask bigger questions about what kind of society we want. Technology will follow, as it usually does.

Streaming music and venture capital

Ben Thompson wrote the best analysis of Tidal I’ve seen so far. From Tidal and the Future of Music:

I would again draw an analogy to venture capital: startups can spread via Twitter or new discovery services like Product Hunt; minimum viable products are cheaper to build than ever thanks to Amazon Web Services, Microsoft Azure, etc.; and distribution channels like App Stores have natural promotional channels. And yet the importance – and amount – of venture capital has never been greater.

The truth is that because so many folks can now get started it is that much harder – and more expensive – to cut through the noise. Consumer companies need massive growth for many years, and enterprise companies need expensive sales forces, and the only folks enabling both are venture capitalists.

It’s a great overview of all the challenges Tidal will have to overcome to beat incumbents like Spotify and Pandora.

Don’t stop believing (in user research)

I’m having a hard time with Alex Schleifer’s (Airbnb’s head of design) proclamations in Why Airbnb’s New Head of Design Believes ‘Design-Led’ Companies Don’t Work. There are a lot of sweeping generalizations in the article, but I’ll focus on one specific part — user-centered design. First, this:

The solution Schleifer and CEO Brian Chesky devised actually deemphasizes the designers. The point, Schleifer says, isn’t to create a “design-led culture,” because that tends to tell anyone who isn’t a designer that their insights take a backseat. It puts the entire organization in the position of having to react to one privileged point of view. Instead, Schleifer wants more people to appreciate what typically lies only within the realm of designers—the user viewpoint.

So far, so good. It makes total sense to bake user empathy into the process and not elevate it to some elite role. Let’s continue:

Thus, every project team at Airbnb now has a project manager whose explicit role is to represent the user, not a particular functional group like engineering or design. “Conflict is a huge and important part of innovation,” says Schleifer. “This structure creates points where different points of view meet, and are either aligned or not.”

Ok, this is good too. This is usually the role the product owner or user researcher or UX designer should be fulfilling (if they’re not, something’s wrong). But if we need to call it something else to avoid stepping on toes, that’s fine. Let’s keep going:

Airbnb’s approach does seem fairly novel, simply because it deals with a problem that bedevils any product company to one degree or another: Designers tend to design for themselves, whether they intend to or not.

I agree with this, and have written about the phenomenon before in Designer Myopia: How To Stop Designing For Ourselves. But then the author goes on to say this:

User research, meanwhile, often has limits. It’ll tell you what’s wrong, but it only rarely leads directly to great products. A true user perspective is something more nuanced, specific, intuitive, and independent.

This is where he loses me. User research is not just usability testing that tells you what’s wrong. It’s also ethnography and contextual inquiry and participatory design and in-depth interviews and a slew of other things that result in exactly what they are trying to accomplish: a user perspective that is “nuanced, specific, intuitive, and independent.” As I point out in that designer myopia article:

We do ethnography to learn, not to confirm our beliefs. By using this method to understand the culture and real needs of our users, we’re able to design better user-centered solutions than would be possible if we relied only on existing UI patterns and some usability testing.

Leaving the office and spending time observing users in their own environments is the best way to understand how a product is really being used in the wild. It’s the most efficient way to get out of your own head.

I would love to know how Airbnb proposes that their project managers get these perspectives without qualitative user research, i.e. direct contact with customers.

Articles like this make for great “you’re doing it wrong” headlines, but they are often so light on detail that you’re just left feeling bad about yourself without knowing why (or how to fix it). So I just want to say, don’t let them get to you. Keep doing what you’re doing. Conduct observational user research in context, triangulate your results, and speak up for user needs. There’s no evidence to suggest that those methods have stopped working.

Usability testing and agile, together

I really like the approach described in Jen McGinn and Ana Ramírez Chang’s RITE+Krug: A Combination of Usability Test Methods for Agile Design. It’s a dense paper, but worth your time. Here’s a key part (my emphasis added):

Prior to using the RITE+Krug combination, the user research process and results had been divorced from the Agile processes, which resulted in the findings coming too late to be acted on. Because of this issue, we integrated the user research with the rest of the process, as illustrated in Table 1.

The team consists of developers, product managers, user experience designers, visual designers, quality assurance engineers, and a user researcher. The user experience designers and product managers work closely with the development team during the feature sprints—answering questions, giving feedback on progress, and fine tuning the feature as it is implemented. The bug fix sprints give the developers time to focus on product stability.

Meanwhile the product managers, user experience designers, visual designers, and the user researcher work on preparing the small set of features that will be implemented in the next iteration (see Table 1). This work includes feature selection, design, user testing, and redesigns. The whole team (including developers) gives feedback on the feature specification and design before it is ready to be implemented. Like others, our design team stays an iteration ahead of the development team. Like Patton recommends, we iterate the UI before it ever reaches development, thereby turning what is traditionally a validation process into a design process.

The paper’s Table 1 lays out how this works in practice, with the research and design track running one iteration ahead of the development sprints.

One of the biggest issues with usability testing and Agile is the complaint that testing slows down the process. This seems like a really good way to alleviate those concerns.

Big data and big statistical mistakes

Tim Harford has an excellent critique of the statistical issues with the “big data” trend in Big data: are we making a big mistake? First, there’s this:

But the “big data” that interests many companies is what we might call “found data”, the digital exhaust of web searches, credit card payments and mobiles pinging the nearest phone mast.

I still love the term “digital exhaust”. I first saw Frank Chimero use it in the context of social media when he said (in a post that’s now gone from the internet):

The less engaged I become with social media, the more it begins to feel like huffing the exhaust of other people’s digital lives.

But back to big data. The big problem (see what I did there?) is that statistical problems don’t just go away when you have more data. In fact, they get worse. For example:

Because found data sets are so messy, it can be hard to figure out what biases lurk inside them – and because they are so large, some analysts seem to have decided the sampling problem isn’t worth worrying about. It is.

The article goes into detail on this, and I think it’s important for us to recognize the limitations of big data before jumping on the bandwagon.
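Harford’s point about sampling bias is easy to demonstrate with a quick simulation. The scenario below is entirely hypothetical (the numbers are made up for illustration): 30% of a population holds some opinion, but people who hold it are three times as likely to show up in a “found data” source. No matter how large the found-data sample gets, its estimate converges to the wrong number, while a far smaller random sample lands near the truth:

```python
import random

random.seed(0)

# Hypothetical population: 30% hold an opinion (1), 70% don't (0).
N = 100_000
population = [1] * 30_000 + [0] * 70_000
true_rate = sum(population) / N  # 0.30

def found_sample(pop, k):
    # "Found data": people holding the opinion are 3x as likely
    # to appear in the dataset, so inclusion is biased.
    weights = [3 if x == 1 else 1 for x in pop]
    return random.choices(pop, weights=weights, k=k)

def simple_random_sample(pop, k):
    # A proper random sample: every person equally likely.
    return random.sample(pop, k)

# A huge biased sample vs. a small unbiased one.
big_biased = sum(found_sample(population, 50_000)) / 50_000
small_random = sum(simple_random_sample(population, 1_000)) / 1_000

print(f"true rate:            {true_rate:.3f}")
print(f"biased sample (50k):  {big_biased:.3f}")    # converges near 0.56, not 0.30
print(f"random sample (1k):   {small_random:.3f}")  # lands close to 0.30
```

The biased estimate settles at roughly 3×0.3 / (3×0.3 + 0.7) ≈ 0.56 regardless of sample size; more data just makes you more precisely wrong. That’s the “sampling problem isn’t worth worrying about” fallacy in miniature.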
