The ethics of “empowering” users

Katherine Benjamin wrote a fantastic essay on designing for user empowerment, and what that really means. Writing specifically in the context of digital health, she asks: when are we empowering users, and when are we just being lazy?

The World Bank talks about empowerment in terms of two things. Firstly, they talk about enhancing an individual’s capacity to make choices. They then talk about leveraging those choices into desired actions or outcomes. […]

When we think about things like wearable devices that enable people to actualise the “quantified-self”, we are usually realising just the ability of someone to self-monitor. In other words, we can make it possible for people to take better care of themselves by developing new technologies that support self-care. However, these innovations will only help those who are genuinely interested in taking greater control of their health. This type of self-determination with regard to health is a necessary pre-condition for successful adoption of digital health solutions.

Unfortunately, all too often, in the digital health industry, we get lazy and speak as though technology itself can create that individual level of empowerment. This fails to consider the inherent power dynamics between providers and users of health services, and the role this dynamic plays in facilitating agency among the users of health services.

When we design to empower users, we can’t just think about giving people the information they need to act. We also need to help them develop the desire to act on that information.

The power of a secret in the age of over-sharing

When everything about your life is out in the open, there is power in keeping some of it secret. The ironic side-effect of social media is that it makes it easier to hide. When people think that you share everything, they don’t expect you to keep anything secret.

I recently went on a brief trip to South Africa to visit family, and I stayed (mostly) off social media. It felt weird—I felt this strange guilt, like I was “hiding” something because so many of my friends didn’t even know I was in the country. I know it was the right thing to do considering the circumstances of my visit, but still. Our minds can be deceptively cruel to us.

Anyway, I started thinking about it because Jim Farber explores this from a celebrity standpoint in his really interesting article The New Celebrity Power Move: Keeping Secrets:

Meanwhile, the stars get to both circumvent the media and to float an image of utter transparency through their promiscuous use of social media. In fact, that may only obscure them further. “Digital media creates this notion that we can know everything,” [Kathleen Feeley, co-editor of a scholarly study of celebrity gossip] said. “But it’s still a performance. It just creates a false intimacy.”

The audience’s belief in social media as the most direct route to a star exacerbates “the expectation that everyone will tell everything,” said Daniel Herwitz, a professor at the University of Michigan who wrote “The Star as Icon.” “Against all that, it becomes totally extraordinary when somebody doesn’t tell. On one hand, the public is in awe of the fact that the star, for the moment, resisted the system. But they’re also disappointed, as if somebody let them down. ‘Why didn’t I know this? The media dropped the ball!’”

“Why didn’t I know this?”, also known as Why wasn’t I consulted?

We need a renewed focus on Information Architecture

Abby Covert wrote a brilliant and passionate plea for a return to the basic principles of Information Architecture in our design work. From The Pain With No Name:

In too many cases, educational programs in design and technology have stopped teaching or even talking about IA. Professionals in the web industry have stopped teaching their clients about its importance. Reasons for this include “navigation is dead,” “the web is bottom up, not top down,” and “search overthrew structure”—but these all frame IA as a pattern or fad that went out with tree controls being used as navigation.

These misconceptions need to be addressed if we are going to deal with the reality of the impending “tsunami of information” approaching our shores. The need for clarity will never go out of style, and neither will the importance of language and structure. We will always need to have semantic and structural arguments to get good work done.

The most dangerous thing about self-driving cars

Cliff Kuang makes some interesting points in his essay The Secret UX Issues That Will Make (Or Break) Self-Driving Cars:

Recall that first principle that [Brian Lathrop at Volkswagen] laid out for designing autonomous cars—that the driver has to know whether the car is driving itself. That harks to probably the oldest dictate in interface design; mode confusion causes 90% of airplane crashes, and that insight helped invent the field of human-computer interaction. Think about all the times you’ve heard news reports about a pilot being confused about whether the flaps of the wings were down, or whether the auto-pilot was properly set. If you’ve ever failed to realize that your car was in park when you hit the accelerator, or you’ve ever tried typing into the wrong window on your computer screen, you’ve been a victim of mode confusion.

So the scariest thing about self-driving cars is not whether the car can drive safely, but whether it can effectively communicate when it is driving and when it is not. It’s the age-old “visibility of system status” heuristic in action.
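To make the heuristic concrete, here’s a minimal sketch in TypeScript (all names hypothetical, not any carmaker’s actual system) of an interface that treats the current driving mode as state that must always be rendered, and every transition as something that must be announced:

```typescript
// Minimal sketch (hypothetical names): the driving mode is explicit state
// that the UI must always render, so the driver never has to guess who
// is in control of the car.
type DrivingMode = "manual" | "autonomous" | "handover";

interface ModeDisplay {
  render(mode: DrivingMode): void;
}

class DashboardDisplay implements ModeDisplay {
  render(mode: DrivingMode): void {
    // Persistent, unambiguous status: "visibility of system status".
    const labels: Record<DrivingMode, string> = {
      manual: "YOU are driving",
      autonomous: "CAR is driving",
      handover: "Take the wheel NOW",
    };
    console.log(`[dashboard] ${labels[mode]}`);
  }
}

class VehicleHmi {
  private mode: DrivingMode = "manual";

  constructor(private display: ModeDisplay) {
    this.display.render(this.mode); // never start in an unannounced mode
  }

  setMode(next: DrivingMode): void {
    if (next === this.mode) return;
    this.mode = next;
    // Every transition is announced immediately; a silent mode change
    // is exactly the kind of thing that causes mode confusion.
    this.display.render(next);
  }
}

const hmi = new VehicleHmi(new DashboardDisplay());
hmi.setMode("autonomous"); // [dashboard] CAR is driving
hmi.setMode("handover");   // [dashboard] Take the wheel NOW
```

The point of the sketch is that a mode change and its display are coupled in one place, so a silent transition, the root cause of mode confusion, simply can’t happen.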

Beyoncé, Coldplay, and the myth of the “average” user

I am not qualified to talk about politics, so don’t worry, I won’t. That said, Spencer Kornhaber’s essay on Beyoncé’s Radical Halftime Statement is so incredibly good (and very applicable to product design) that it’s worth discussing here. The part that I found particularly interesting is how differently Beyoncé and Coldplay view their “target markets”. Beyoncé is very focused:

But in pop and in politics, “everyone” is a loaded term. Stars as ubiquitous as Beyoncé have haters, the “albino alligators” who “Formation” informs us she twirls upon. And in a more general historical sense, “everyone” can be a dangerous illusion that elevates one point of view as universal while minimizing others. Beyoncé gets all of this, it seems. As a pop star, she surely wants to have as broad a reach as possible. But as an artist, she has a specific message, born of a specific experience, meaningful to specific people. Rather than pretend otherwise, she’s going to make art about the tension implied by this dynamic. She’s going to show up to the Super Bowl with a phalanx of women dressed as Black Panthers.

Whereas Beyoncé is very specific about who her music is for (and not for), Coldplay tries to please everyone:

The poor guys of Coldplay, meanwhile, actually think they can work solely at the level of the universal. “Wherever you are, we’re in this together,” Chris Martin cried out, early on, last night. I don’t want to diss that intention, nor the take-home message at the end: “Believe in Love.” But from their first hit, “Yellow,” to their recent Holi-appropriating music video with Beyoncé, to their pan-cultural rainbow rally at Levi’s Stadium last night, their theme has only been about love to the extent that it’s been about how everyone loves colors. It’s music about being awed by the blandest kind of harmony: ROYGBIV, yeah yeah yeah!

Coldplay’s approach reminds me of a classic Sharp Suits poster.

The problem with a target market of literally everyone is that you end up with a heavily compromised experience that appeals only to the very few people who identify with the “average” experience. Bringing this back to product design, this is why I’m still such a big fan of design personas. As opposed to a mythical “average” user, personas are concrete people we can imagine using our product to achieve their goals. This is helpful because by focusing on a few different individuals closer to the edges of an experience, instead of the average, we end up catering for a larger portion of the user base:

[Image: Persona edges]
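To see why designing for the “average” fails, here’s a toy sketch with made-up numbers: pick a single text size for a group of users with widely varying preferences, and count who is actually satisfied.

```typescript
// Toy illustration with made-up numbers: choose one text size for a
// user base with widely varying needs, and count who is satisfied.
const preferredSizes = [12, 12, 13, 16, 20, 22, 24]; // px, one per user
const tolerance = 2; // a user is happy within ±2px of their preference

const satisfied = (size: number): number =>
  preferredSizes.filter((p) => Math.abs(p - size) <= tolerance).length;

const average =
  preferredSizes.reduce((sum, p) => sum + p, 0) / preferredSizes.length;

// Designing for the mythical average (17px) satisfies almost nobody:
console.log(satisfied(Math.round(average))); // 1 of 7 users

// Designing for the edges (a small and a large setting) covers far more:
console.log(satisfied(12) + satisfied(22)); // 6 of 7 users
```

The numbers are invented, but the shape of the problem is real: the mean of very different needs is often a point that matches none of them.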

This is what Beyoncé does so well. She makes music at the edges, so it’s exciting and anything but bland. It’s a lesson that Coldplay clearly hasn’t learned yet.

There’s nothing wrong with reading ebooks

Paul La Farge challenges the idea that ebooks are inferior to physical books in The Deep Space of Digital Reading:

There’s no question that digital technology presents challenges to the reading brain, but, seen from a historical perspective, these look like differences of degree, rather than of kind. To the extent that digital reading represents something new, its potential cuts both ways. Done badly (which is to say, done cynically), the Internet reduces us to mindless clickers, racing numbly to the bottom of a bottomless feed; but done well, it has the potential to expand and augment the very contemplative space that we have prized in ourselves ever since we learned to read without moving our lips.

Last year I went through a phase of reading physical books again, but I gave it up pretty quickly. There are two things about the Kindle platform that I missed too much:

  • The ability to highlight sections, share them to Goodreads, and access those highlights any time at the hugely under-appreciated kindle.amazon.com (I tried the TextGrabber app for a while to turn passages from physical books into digital text, but it’s just not worth the effort).
  • The X-Ray function that lets you look up details about the book and its characters.

Anyway, one of the major academic complaints about ebooks is that reader comprehension is lower. But, hey, turns out…

It’s true that studies have found that readers given text on a screen do worse on recall and comprehension tests than readers given the same text on paper. But a 2011 study by the cognitive scientists Rakefet Ackerman and Morris Goldsmith suggests that this may be a function less of the intrinsic nature of digital devices than of the expectations that readers bring to them. Ackerman and Goldsmith note that readers perceive paper as being better suited for “effortful learning,” whereas the screen is perceived as being suited for “fast and shallow reading of short texts such as news, e-mails, and forum notes.” […]

If those same students expected on-screen reading to be as slow (and as effortful) as paper reading, would their comprehension of digital text improve? A 2015 study by the German educator Johannes Naumann suggests as much. Naumann gave a group of high-school students the job of tracking down certain pieces of information on websites; he found that the students who regularly did research online—in other words, the ones who expected Web pages to yield up useful facts—were better at this task (and at ignoring irrelevant information) than students who used the Internet mostly to send email, chat, and blog.

My guess is that a generation from now this simply won’t be a debate any more.

When the internet makes us relive bad memories

Facebook’s “On This Day” feature has always felt really strange to me. It’s an algorithm that’s aware of its weirdness, hence the almost apologetic “We care about you and the memories you share here” message that surrounds it. As if it knows it’s bound to get it wrong and show you something you don’t want to be reminded of.

Leigh Alexander provides an interesting perspective on that feature and our social media “memories” in What Facebook’s On This Day shows about the fragility of our online lives:

Part of the palpable dissonance comes from the fact that many of our posts were never intended to become “memories” in the first place. An important question gets raised here: what’s the purpose of all this “content” we serve to platforms, if it’s useless in constructing a remotely valuable history of ourselves? Are we creating anything that’s built to last, that’s worth reflecting on, or have social media platforms led us to prize only the thoughts of the moment? […]

We generally think of social media as a tool to make grand announcements and to document important times, but just as often – if not more – it’s just a tin can phone, an avenue by which to toss banal witterings into an uncaring universe. Rather, it’s a form of thinking out loud, of asserting a moment for ourselves on to the noisy face of the world.

Despite multiple attempts I still don’t understand how Snapchat works, but from what I gather from the Young People this is a big reason for its appeal. There isn’t an expectation that something you post on Snapchat has to be profound enough to become a permanent memory. As my friend Simon¹ put it: Snapchat is there to “Share (not remember) moments.” (Side note: if you haven’t done so yet, please read Ben Rosen’s My Little Sister Taught Me How To “Snapchat Like The Teens”. It is absolutely bonkers.)

So Alexander’s point is an interesting one: how do we take control of our online memories? It’s not possible to know for sure, in a moment, if we’re experiencing something we’d like to remember forever. Maybe the best solution is to keep it the way it’s always been: rely on our brains to remind us of things. We can always then dig up those old photos ourselves—without the help of an algorithm—if we really want to relive the moment.
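If a platform did want to hand that control back to users, the logic might look something like this toy sketch (hypothetical, not Facebook’s actual implementation): a memory is only resurfaced if it passes filters the user defines, like excluding certain people or date ranges.

```typescript
// Toy sketch (hypothetical, not Facebook's actual implementation):
// resurface a "memory" only if it passes filters the user controls,
// instead of letting the algorithm decide what is worth reliving.
interface Memory {
  date: string;     // ISO date of the original post, e.g. "2014-02-10"
  people: string[]; // people tagged in the post
  text: string;
}

interface MemoryPreferences {
  excludedPeople: Set<string>; // e.g. an ex-partner
  excludedRanges: Array<{ from: string; to: string }>; // e.g. a bereavement
}

const shouldResurface = (m: Memory, prefs: MemoryPreferences): boolean => {
  if (m.people.some((p) => prefs.excludedPeople.has(p))) return false;
  const inExcludedRange = prefs.excludedRanges.some(
    (r) => m.date >= r.from && m.date <= r.to // ISO dates compare lexically
  );
  return !inExcludedRange;
};

const prefs: MemoryPreferences = {
  excludedPeople: new Set(["Alex"]),
  excludedRanges: [{ from: "2013-06-01", to: "2013-08-31" }],
};

console.log(
  shouldResurface(
    { date: "2014-02-10", people: ["Simon"], text: "Great trip!" },
    prefs
  )
); // true: safe to show
```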


  1. Some of my best friends are Young People. 

Netflix and the problem with established interface mental models

There were some interesting Netflix articles over the past week or so. First, Nathan McAlone writes that Netflix wants to ditch 5-star ratings:

The problem, [CPO Neil] Hunt tells Business Insider, is that people subconsciously try to be critics. When they rate a movie or show from one to five stars, they fall into trying to objectively assess the “quality,” instead of basing the stars on how much “enjoyment” they got out of it.

Here’s an example. Let’s say you had fun watching a crappy movie, but still gave it a two-star rating because you know it’s not a “good” film. That presents Netflix with a problem. The system thinks you hated the movie.

I think embarrassment plays a part in this as well. Even when ratings are private, we’re worried that word might get out. For example, I’d be happy if all my Netflix recommendations consisted of “Movies similar to Battleship”, but I certainly don’t want any of you to know how much I liked that terrible movie.

Related to this, McAlone also wrote about Netflix’s most important metric:

That means that the most important economic metric for Netflix is how much a TV show or movie contributes to Netflix’s ability to sign up and retain customers.

The problem is that the current star-rating system doesn’t give them that metric, because stars are associated with “quality”, not “enjoyment”. So I might rate Out of Africa 5 stars because I objectively know it’s a good movie, but if Netflix starts recommending “Boring movies with Meryl Streep” to me, I’m out of there.

Netflix has a very difficult product design challenge ahead of them. They have to change an established user mental model (“stars=quality”) to something different (“tell us what you enjoy watching”) that will help them provide a better and more compelling service.
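To make the mismatch concrete, here’s a toy sketch in TypeScript (made-up model and numbers, not Netflix’s actual system) of how an explicit star rating and an implicit, behaviour-based enjoyment signal can point in opposite directions:

```typescript
// Hypothetical sketch of the mismatch: an explicit star rating measures
// perceived "quality", while implicit behaviour measures "enjoyment".
// (Made-up model and numbers, not Netflix's actual system.)
interface ViewingRecord {
  title: string;
  starRating: number;     // 1-5, what the user *says*
  completionRate: number; // 0-1, how much of it they actually watched
  rewatches: number;      // how often they came back to it
}

// Enjoyment inferred from behaviour rather than self-report.
const enjoymentScore = (v: ViewingRecord): number =>
  v.completionRate + Math.min(v.rewatches, 3) * 0.5;

const history: ViewingRecord[] = [
  { title: "Battleship",    starRating: 2, completionRate: 1.0, rewatches: 2 },
  { title: "Out of Africa", starRating: 5, completionRate: 0.4, rewatches: 0 },
];

for (const v of history) {
  console.log(
    `${v.title}: says ${v.starRating} stars, behaves like ${enjoymentScore(v).toFixed(1)}`
  );
}
// Battleship:    says 2 stars, behaves like 2.0 -> recommend more like it
// Out of Africa: says 5 stars, behaves like 0.4 -> maybe don't
```

Under a model like this, Battleship’s behaviour says “more of this, please” even though the stars say otherwise, which is exactly the signal that matters if enjoyment is what drives retention.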
