When life becomes too “easy”

In The Tyranny of Convenience, Tim Wu argues that life has become… well, too easy:

But we err in presuming convenience is always good, for it has a complex relationship with other ideals that we hold dear. Though understood and promoted as an instrument of liberation, convenience has a dark side. With its promise of smooth, effortless efficiency, it threatens to erase the sort of struggles and challenges that help give meaning to life. Created to free us, it can become a constraint on what we are willing to do, and thus in a subtle way it can enslave us.

It would be perverse to embrace inconvenience as a general rule. But when we let convenience decide everything, we surrender too much.

And then there’s this kicker, which I keep coming back to in my mind:

An unwelcome consequence of living in a world where everything is “easy” is that the only skill that matters is the ability to multitask. At the extreme, we don’t actually do anything; we only arrange what will be done, which is a flimsy basis for a life.

Unrelated, I’m getting pretty close to perfecting my To Do system through a combination of OmniFocus and Field Notes. Nope, definitely not related at all.

The three kinds of distance in remote collaboration, and where to focus

Erica Dhawan and Tomas Chamorro-Premuzic have some good suggestions in their article How to Collaborate Effectively If Your Team Is Remote. I found this part particularly interesting:

First, consider that there are three kinds of distance in remote collaboration: physical (place and time), operational (team size, bandwidth and skill levels) and affinity (values, trust, and interdependency). The best way for managers to drive team performance is by focusing on reducing affinity distance. Try switching most remote communication to regular video calls, which are a much better vehicle for establishing rapport and creating empathy than either e-mails or voice calls. And design virtual team-building rituals that give people the opportunity to interact regularly and experience their collaboration skills in action.

Focusing on “affinity distance” rings true for me. You can survive a long time with physical and operational distance if your team members trust each other and share certain values.

At Wildbit we use Zoom for video calls because it’s the only video conferencing software we’ve been able to find that lets us see the whole team’s faces on the screen at the same time. It’s much better than Google Hangouts or any of the other apps that show only the person who’s speaking. There are lots of ways to reduce “affinity distance”, but having everyone (whether they’re remote or in the office) take video calls from their desks — and looking each other in the eyes on those calls — has had a surprisingly large positive impact.

How YouTube leads viewers down a rabbit hole of extremism

Two related articles about YouTube caught my eye over the past few days. The first, Zeynep Tufekci’s YouTube, the Great Radicalizer, explains how YouTube’s algorithms almost always lead people to conspiracy theory videos:

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century. […]

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

This is bad enough, but then there’s James Cook’s article YouTube suggested conspiracy videos to children using its Kids app, in which he explains how not even the YouTube Kids app is immune to this:

YouTube’s app specifically for children is meant to filter out adult content and provide a “world of learning and fun,” but Business Insider found that YouTube Kids featured many conspiracy theory videos which make claims that the world is flat, that the moon landing was faked, and that the planet is ruled by reptile-human hybrids.

I try not to be too quick to call technology evil, but this is definitely not an “all technology is neutral” situation. Product managers and developers have the power to stop this kind of escalation from happening.

How science fiction helps us understand the economy

Annalee Newitz wrote a really interesting essay on how economic anxieties are creeping into fantasy and science fiction stories. From The Rise of Dismal Science Fiction:

We’re used to science fiction providing us with commentary on technology, and vocabulary to discuss its more worrisome consequences. But underlying our fears of robots stealing our jobs or corporations turning us into consumer droids are more basic anxieties about money—and science fiction is increasingly reflecting that. For audiences grappling with the fear of poverty, or simply bewildered by postmodern economics, stories like Game of Thrones, Black Panther, and Malka Older’s critically acclaimed novel Infomocracy function like Aesop’s Fables for the 21st century.

My current favorite sci-fi series, The Expanse, is another great example of this. Yes, it’s a story about space and scary things, but it’s mostly a story about inequality and economic oppression.

Innovation consequences: it’s complicated

In Airbnb and the Unintended Consequences of ‘Disruption’ Derek Thompson uses Airbnb as an example to show why it’s not easy to label tech innovation as simply good or bad. It’s complicated…

Airbnb lowered prices for tourists, supplemented the income of renters, and simply made travel to major cities more fun. But upon inspection, it shares some things in common with more-controversial companies—albeit with less grave implications. Facebook and Twitter design for attention, but incidentally encourage mendacious outrage and trolling. eBay and Amazon design for open marketplaces, but incidentally encourage the frenzied resale of bulk-ordered toys around Christmas. Airbnb was supposed to challenge hotels by letting tourists pay renters. But its platform is unwittingly producing a subsidy of tourists, paid for by nonparticipating urban dwellers, who bear the cost of higher rental prices.

The unreadable city

I really enjoyed Christopher Hawthorne’s essay called Los Angeles, Houston and the rise of the unreadable city:

This is going to be a column, instead, about something slightly different: about the legibility (and illegibility) of cities more generally. About how we react — as reporters and critics and simply as people — when we’re confronted with a city that doesn’t make sense to us right away.

I have never liked Los Angeles. I just couldn’t get past what I saw as a lot of dirt and too much traffic. But this essay made me realize that, as with most cities, you can’t really love a city until you’ve lived there for a while.

If I had to put my finger on what unites Houston and Los Angeles, it is a certain elusiveness as urban object. Both cities are opaque and hard to read. What is Houston? Where does it begin and end? Does it have a center? Does it need one? It’s tough to say, even when you’re there — even when you’re looking directly at it.

I highly recommend reading this piece through the lens of a city you strongly dislike. Who knows, it might change your mind…

How Facebook realized that it’s more than a platform

Nicholas Thompson and Fred Vogelstein have a gripping feature in Wired called Inside Facebook’s Two Years of Hell. It’s long, but very much worth reading. It takes us through a journey that starts with Facebook’s years of denial:

It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”

And it ends at the point they’re at now: starting to realize that they can’t hide behind the “we’re just a platform” excuse anymore.

Two product roads diverged in a wood

Photo by Michał Grosicki on Unsplash

Over the past year I’ve become increasingly aware of a fundamental divide in the prevailing wisdom on how good products are built. I’m not talking about waterfall vs. post-waterfall methods. I’m also not talking about the differences between specific methods like Agile and Lean. I’m talking about different philosophies on the best way to build products in an already post-waterfall world.

These differences have been apparent for a while, but they came into stark focus for me over the past week, as I finished reading two books in quick succession: Getting Real by Basecamp and Inspired (2nd ed.) by Marty Cagan. Both books are interesting and worth reading by themselves, but even more so when you read them one right after the other.

Basecamp and Marty agree on the biggest challenges in building good products, but diverge quite often on how they believe teams should deal with those challenges. Let me provide a couple of examples.

The same, but different

Both books are adamant that value (defined as whether someone will use/buy your product) needs to be validated as early and as cheaply as possible, and that the old ways of doing things are expensive and wasteful. Basecamp says you do this by “racing to running software” and making sure that it’s cheap to make changes:

It’s ok to do less, skip details, and take shortcuts in your process if it’ll lead to running software faster. Once you’re there, you’ll be rewarded with a significantly more accurate perspective on how to proceed. Stories, wireframes, even HTML mockups, are just approximations. Running software is real. […]

With real, running software everyone gets closer to true understanding and agreement. You avoid heated arguments over sketches and paragraphs that wind up turning out not to matter anyway. You realize that parts you thought were trivial are actually quite crucial.

And earlier:

Change is your best friend. The more expensive it is to make a change, the less likely you’ll make it. And if your competitors can change faster than you, you’re at a huge disadvantage. If change gets too expensive, you’re dead.

On the other hand, Marty believes in building prototypes really fast, and testing those with real customers before you commit to code:

One of the most common traps in product is to believe that we can anticipate our customer’s actual response to our products. We might be basing that on actual customer research or on our own experiences, but in any case, we know today that we must validate our actual ideas on real users and customers. We need to do this before we spend the time and expense to build an actual product, not after.

And later, on prototypes:

Product discovery [coming up with a validated product backlog] involves running a series of quick experiments, and to do these experiments quickly and inexpensively, we use prototypes rather than products.

On the topic of “functional specs”, both books agree that writing long specs filled with “requirements” is a terrible way to build software. I don’t think any of us disagrees with that. But again, they diverge on the best alternative. From Basecamp:

So what should you do in place of a spec? Go with a briefer alternative that moves you toward something real.

Write a one page story about what the app needs to do. Use plain language and make it quick. If it takes more than a page to explain it, it’s too complex. The process shouldn’t take more than a day.

Then begin building the interface — the interface will be the alternative to the functional spec. Draw some quick and simple paper sketches. Then start coding it into HTML. Unlike paragraphs of text that are open to alternate interpretations, interface designs are common ground that everyone can agree on.

Marty favors a technique called the Opportunity Assessment for the vast majority of projects:

The idea is to answer four key questions about the discovery work you are about to undertake:

  1. What business objective is this work intended to address? (Objective)
  2. How will we know if we’ve succeeded? (Key results)
  3. What problem will this solve for our customers? (Customer problem)
  4. What type of customer are we focused on? (Target market)

[…] You need to ensure that every member of your product team knows and understands the answers to these four questions before you jump into your product discovery work.

Many ways to skin a cat

To summarize this another way: most people in this post-waterfall world agree that the biggest reason why software projects fail is that various risks are assessed too late, which ends up being too costly for the business to survive. Most even agree on the core principles to follow to fix this: tackle risk as early and as cheaply as possible. Where we are seeing the divide is in how this should be done.

One perspective is what we can call the prototyping movement (I’m deliberately staying away from naming specific methodologies). The goal is to align around business objectives and build functional prototypes to meet those objectives as quickly as possible, and test those with real users before engineering gets involved.

The other perspective, the real software movement, says that even that takes too long, and that nothing can replace the feedback you get from working software in production — as long as you’re able to make changes very quickly.

So which is it?

In our search for easy answers and silver bullets, the obvious next question here is, “ok, so who’s right?” But I think good product managers eschew such easy answers. Good product managers are always learning about different perspectives, but they have to learn through an added dimension—the lens of their own product and culture.

So for me, the real takeaway from these books hasn’t been the prescribed solutions — although those are certainly helpful as idea starters. The real takeaway has been all the roadblocks to good product development that I noted down as I was reading. The different kinds of risk we need to validate. The constraints we tend to miss when we brainstorm and plan. The heavy processes that do nothing more than slow teams down and make them unhappy. How to address those challenges in a particular culture is where the true art of modern product management lies, and what makes the job itself so difficult to pin down, define, and get good at.

I’ve learned a great deal about product, culture, and teams this year. But no lesson has been more valuable than this: the challenges to building good products are universal, but the solutions are not, and the biggest value I can add to the team is to work with them to figure out the best way for us, in our context, to address these challenges in a way that ensures the team is happy and productive, and our customers love our products.

I guess if I have to make a New Year’s resolution, getting better at this would be it.

The role of instinct in product development

As a product manager I know and understand the importance of making customers part of the product development process through research and interviews. Those of us who come from a design background have this philosophy especially deeply ingrained. We know that “I am not the user” and we have the t-shirts to prove it! So it is with some surprise that I recently realized that sometimes — when the circumstances are conducive to it — it’s ok to trust our instincts and create products and features without talking to customers directly about them first.

See, the thing is, talking to customers isn’t something we do, it’s something we are. And if it’s something we are — if we really are immersed in our customers’ needs and behaviors and emotions — we should feel comfortable trusting our own instincts a little bit more.

With this kind of immersion comes an ability to channel our customers in a way that drastically reduces the additional benefit we might get from interviewing them about a specific issue or feature. When we not only have the knowledge of the domains we work in, but also a good understanding of how our customers navigate those domains, we end up with a powerful foundation to base our decisions on.

Does this mean we don’t need research? Of course not! But it means that maybe we don’t need to go out and interview users every time we make a product change or introduce a new feature. It means maybe we do usability testing on major changes to the site, but not when we fix something that we’ve lived and breathed with our customers for months or years.

Those are weird sentences to write. I am a big proponent of User-Centered Design, and obviously research is a central component of that. But what I’m advocating for isn’t less research. I’m saying that it’s possible to reduce the amount of structured research you do, if you have a culture of customer immersion in everything you do.

Customer immersion isn’t an easy culture to create, but it is very much worth it. As a start, everyone in the organization should be encouraged and empowered to talk to customers — whether that is through phone calls, support cases, conferences, or any other way you might be able to reach them. And since not everyone will be able to spend an equal amount of time with customers, it also means you have to listen to those who do spend a lot of time with them — and trust that they are acting as good conduits for customers’ needs.

Making the right choices about when to do structured research and when to trust your (informed) instincts will save you time and money — and make customers happy too. That’s not a bad combination of benefits.

Mutemath on creative collaboration and the importance of (sometimes) working alone

I’m a really big Mutemath fan. If you haven’t listened to their latest album, please do yourself a favor and get on that! In the Rolling Stone interview Mutemath’s Paul Meany on Near-Breakup, New LP ‘Play Dead’ they talk about their creative process on the album:

Mutemath assembled the track list in an unconventional way. Instead of arguing endlessly over which songs to pull from their massive pile of 30 demos, the musicians each hand-picked three and assembled the basic framework themselves before bringing the others back into the process.

”We just trusted each of us to go into our corners and materialize a vision for that particular song and bring it back to the band to finish the puzzle together,” Meany says. “And it was exciting to watch everyone in the band firing on all cylinders. The mantra was just ‘indulge,’ and we trusted each other to do that. And we wouldn’t have been able to do that a few albums ago. If you just get into ‘indulge’ mode, that’s usually the recipe for garbage. Every person in the band should always feel that – someone’s gotta create some parameters at some point. But I think we’ve worked together long enough now and have developed the trust within that creative space to just say ‘go.’ This was the culmination of all that.”

I tend to think that’s a great way to collaborate on design as well. Go away and do your thing with no constraints, come up for air and get feedback and make changes, rinse and repeat.
