
Human-centered design is not enough

Very interesting article by Anab Jain arguing for More than Human-Centered Design. We need to move beyond ourselves and consider the things around us:

Interdependence is a powerful concept for me: different participants—human and non-human—are emotionally, economically, ecologically or morally interdependent on each other. And this reliance is acknowledged. I think this perspective is something that would be very meaningful for many of us to consider—whether we’re interaction, service, or UX designers, entrepreneurs, researchers or people who put things out in the world for others “to use”.

Have a look at the article for further thoughts and some practical examples.

The power of “why now?” as a prioritization technique

I imagine that if you made one of those pull-string dolls of a Product Manager, it would just say “Why?” over and over¹. We love figuring out the real reason behind an idea or a customer problem — as we should. But I think we often miss an important follow-up question: “Why now?”

We have so many methodologies for prioritizing problems and features, but I’ve found that this one question can cut through all the complex reasoning and (rightfully) stop unneeded projects in their tracks. Most things we could work on in a product are important. But thinking through why it’s important to work on something right away is really helpful for separating the truly worthy projects from the ones that can wait.

The problem is that “Why now?” is not always an easy question to answer. It’s too vague, too broad. But if you flip it around and ask it a different way, things start to become clear very quickly. So here’s a question I recommend you ask yourself and the team the next time you debate a project:

What is the danger of not doing this project right now?

If we don’t solve this problem or add this feature right now, what do we lose? Are sign-ups going to drop? Are we going to lose customers? Are we going to miss a major shift in the market? If so, then, yes, now is a good time to work on it. But if the room suddenly falls silent and everyone comes up short on the downside of skipping over the idea — or if the downside is something like “this one customer will stop sending us angry emails” — that’s a pretty good indication that this thing can wait for later.

I say “wait for later”, and not “forget about forever and ever”, because it’s quite possible that a few months from now you’ll answer that question differently, and suddenly now becomes the perfect time to work on it. The point is that we should never do something now if later is a better option.
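If it helps to see the heuristic written down, here’s a minimal sketch of the triage in code. Everything in it (the Project shape, the severity scale, the example backlog) is made up for illustration; it’s a thinking aid, not a real prioritization tool:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    # Free-text answer to “What is the danger of not doing this right now?”
    danger_of_waiting: str
    # Rough severity of that danger: 0 = nobody can name one, 3 = existential
    severity: int

def triage(projects: list[Project]) -> tuple[list[Project], list[Project]]:
    """Split a backlog into “do now” and “wait for later”.

    If no one can articulate a real downside to waiting, the project
    waits, no matter how important it feels in the abstract.
    """
    do_now = [p for p in projects if p.severity >= 2]
    wait = [p for p in projects if p.severity < 2]
    return do_now, wait

backlog = [
    Project("Fix sign-up drop-off", "Sign-ups are falling week over week", 3),
    Project("Redesign settings page", "One customer sends angry emails", 1),
]
now, later = triage(backlog)
print([p.name for p in now])    # ['Fix sign-up drop-off']
print([p.name for p in later])  # ['Redesign settings page']
```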


¹ Like Woody but for Product Managers? I want one!

Embracing the deadline: How engineers benefit from delivery dates

This is a good summary of the “healthy pressure” we strive for on our team as well:

While working without the pressure of explicit deadlines can feel liberating, it also increases the chance of distraction. Deadlines help us stay focused, aligned and driven – and can be used to keep project scope in check.

Finding the right balance with product onboarding

There are some great product tips in Scott Belsky’s How to Shape Remarkable Products in the Messy Middle of Building Startups, but this part about onboarding particularly stood out for me:

You can’t expect new customers to endure explanation. You can’t even expect customers to patiently watch as you show them how to use your product. Your best chance at engaging them is to do it for them — at least at first. Only after your customers feel successful will they engage deeply enough to tap the full potential of your offering.

One of the hardest things to figure out with onboarding is the right balance of selecting defaults (“doing it for them”) and having users learn by doing things themselves.

For example, within Postmark’s onboarding a continuing debate is whether we should auto-create a user’s first “server” for them, or help them understand the concept better by making them create it themselves. Finding the appropriate amount of friction to introduce is an ongoing and important challenge for any product’s onboarding.
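To make that trade-off concrete, here’s a hypothetical sketch of the two paths. None of this is Postmark’s actual code; every name in it is invented:

```python
# Hypothetical onboarding flow: “do it for them” vs. “make them do it”.
# Names and behavior are invented; this is not Postmark’s implementation.

AUTO_CREATE_FIRST_SERVER = True  # the default being debated

def create_server(account: str, name: str) -> dict:
    """Stand-in for whatever call actually provisions a server."""
    print(f"Created server {name!r} for {account}")
    return {"account": account, "name": name}

def onboard(account: str) -> dict:
    if AUTO_CREATE_FIRST_SERVER:
        # Zero friction: the user lands in a working setup immediately,
        # but may never learn what a “server” is or why it exists.
        return create_server(account, "My First Server")
    # Learn by doing: the user names and creates the server themselves,
    # at the cost of one more step before they see any value.
    name = input("Name your first server: ")
    return create_server(account, name)

onboard("new-customer@example.com")
```

The whole debate lives in that one flag: flip it and you trade comprehension for time-to-value.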

The business case for support-driven growth

Support is a revenue driver, and a personal touch at scale is a great way to grow a business.

— Nick Francis, The Business Case for Support-Driven Growth

What Spotify wants: that you should forget that you’re listening

Liz Pelly’s Streambait Pop is a fascinating look at the “Spotify sound” and other changes in pop music brought about by streaming:

The Spotify sound has a few different variations, but essentially it’s a formula. “It has this soft, emo-y, cutesy thing to it,” Matt says. “These days it’s often really minimal and based around just a few simple elements in verses. Often a snap in the verses. And then the choruses sometimes employ vocal samples. It’s usually kind of emo in lyrical nature.” Then there’s also a more electronic, DJ-oriented variation, which is “based around a drop … It’s usually a chilled-out verse with a kind of coo-y vocal. And then it builds up and there’s a drop built around a melody that’s played with a vocal sample.”

The really interesting part to me is how it’s a sound that’s essentially designed to make you forget about it, so that you just keep streaming endlessly:

The chill-hits Spotify sound is a product of playlist logic requiring that one song flows seamlessly into the next, a formula that guarantees a greater number of passive streams. It’s music without much risk—it won’t make you change your mind. At times, these whispery, smaller sounds even recall aspects of ASMR, with its performed intimacy and soothing voices. When everyone wants your attention, it makes sense to find reprieve in stuff that requires very little of it, or that might massage your brain a bit.

After I read this article I went through my Spotify playlists and counted how many of them had the word “chill” in their names. Let’s just say I’m too embarrassed to tell you…

But moving on, I think this “inoffensiveness” in music is one of the reasons I’ve started to listen to so many more genres over the past few years. I now like music that feels like it just doesn’t quite sit right. Any artist or band that combines a little discomfort with a lot of skill has my attention. Just one recent example that comes to mind is Double Negative by Low. I still don’t really know what it is. But I know it’s something really special.

Product teams exist to serve customers

Empowered Product Teams is another gem of a post from Marty Cagan. This part stood out to me:

In most companies, technology teams exist “to serve the business.” That is very often the literal phrase you will hear. But even if they aren’t explicit about it, the different parts of the business end up driving what is actually built by the technology teams.

However, in contrast, in strong product organizations, teams exist for a very different purpose. They exist “to serve the customers, in ways that meet the needs of the business.”

The distinction is subtle, but important. If you only serve “the business”, you’re going to make decisions without asking whether something is user-hostile (see, for example, scroll-jacking, or Twitter’s tendency to “forget” that you prefer a timeline that shows the latest tweets). Bringing customer needs into any conversation about business needs is the way to build something that’s profitable and sustainable.

The social values of artificial intelligence

A lot of words are being written about AI and machine learning these days, so it’s sometimes hard to know what to pay attention to. M.C. Elish and danah boyd’s Don’t Believe Every AI You See is one of those essays that I would consider essential reading on the topic. On the ethics of artificial intelligence:

When we consider the ethical dimensions of AI deployments, in nearly every instance the imagined capacity of a technology does not match up with current reality. As a result, public conversations about ethics and AI often focus on hypothetical extremes, like whether or not an AI system might kill someone, rather than current ethical dilemmas that need to be faced here and now. The real questions of AI ethics sit in the mundane rather than the spectacular. They emerge at the intersections between a technology and the social context of everyday life, including how small decisions in the design and implementation of AI can create ripple effects with unintended consequences.

And on the supposed “neutrality” of machines:

[There is] a prevailing rhetoric around AI and machine learning, which presents artificial intelligence as the apex of efficiency, insight, and disinterested analysis. And yet, AI is not, and will not be, perfect. To think of it as such obscures the fact that AI technologies are the products of particular decisions made by people within complex organizations. AI technologies are never neutral and always encode specific social values.

As Kevin Kelly also pointed out years ago in his book What Technology Wants, technology is never neutral. It possesses the collective values of its creators. And that’s where things so often go wrong. A great resource on this topic is Sara Wachter-Boettcher’s book Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech.
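One way to see what “encoding social values” means in practice: even a single threshold in a deployed model is a value judgment. Here’s a toy sketch (the loan-screening scenario and all the numbers are invented for illustration; this models no real system):

```python
# Toy illustration: a single “neutral-looking” constant is a value judgment.
# The scenario and numbers are invented; this models no real system.

def approve(risk_score: float, threshold: float) -> bool:
    """Approve an applicant when predicted risk is below the threshold."""
    return risk_score < threshold

applicant_risk_scores = [0.10, 0.28, 0.35, 0.60]

# A strict threshold protects the lender but rejects more borderline
# applicants; a lenient one does the opposite. Neither choice is neutral:
# someone decided whose error matters more.
for threshold in (0.25, 0.40):
    approved = [r for r in applicant_risk_scores if approve(r, threshold)]
    print(f"threshold={threshold}: approved {len(approved)} of "
          f"{len(applicant_risk_scores)} applicants")
```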
