The Apple Watch won’t save you time

Matthew Panzarino wrote something that historians will reference in thinkpieces on Medium 40 years from now. From The Apple Watch Is Time, Saved:

And that is the target market of the Apple Watch. Not “rich people” (though there’s a model specially for them), not “tech geeks” and not “Apple fanatics.” It’s people who want more time, and that is a very large target.

This, for some reason, is the thing that Apple has had a hard time articulating. This is the primary use case of the Watch. It’s not just that it’s a “notification center”; it’s that it allows you to act without any additional distraction.

The idea that some new technology will give us more time to do “other stuff” is as old as technological innovation itself. By now we should have learned that no, actually, this time isn’t different. But we’ll never learn. We approach every new technology with starry eyes and hopes and dreams of a life less time-consuming. When I read something like this, I always think about this classic scene from Arrested Development:

It might work for us

In just one of several historical examples of the time-saving delusion, John Maynard Keynes published an essay in 1930 called Economic Possibilities for our Grandchildren [PDF], in which he predicted that technological innovation would save people so much time that they wouldn’t know what to do with themselves:

Thus for the first time since his creation man will be faced with his real, his permanent problem—how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.

That, alas, has not happened. We are busier than ever these days. Instead of giving us more time, our technologies have instead given us more ways to be connected, to stay in touch with work, to never have to leave the office. I don’t see how one can argue that the Apple Watch will reverse this trend.

What the Apple Watch will do instead, I believe, is to accelerate a different trend, described by Douglas Rushkoff in Present Shock:

Our society has reoriented itself to the present moment. Everything is live, real time, and always-on. It’s not a mere speeding up, however much our lifestyles and technologies have accelerated the rate at which we attempt to do things. It’s more of a diminishment of anything that isn’t happening right now—and the onslaught of everything that supposedly is.

I’m not saying the Apple Watch won’t be wildly successful, or that I don’t want one — I definitely want one. I just don’t think we should fool ourselves into thinking it will somehow give us more time because we might look at our phones less. If history teaches us anything, it’s that we’ll find a way for the watch to fill up our “saved” time in other ways — and then some. And in doing so we’ll continue on the path Kevin Kelly lays out in his excellent book What Technology Wants:

Our lives today are strung with a profound and constant tension between the virtues of more technology and the personal necessity of less: Should I get my kid this gadget? Do I have time to master this labor-saving device? And more deeply: What is this technology taking over my life, anyway? What is this global force that elicits both our love and repulsion? How should we approach it? Can we resist it, or is each and every new technology inevitable? Does the relentless avalanche of new things deserve my support or my skepticism—and will my choice even matter?

That said, this post is only about the first version of the Apple Watch. The next watch is a different story. The next watch might be the one that finally saves us time. Just wait. You’ll see.

A URL to call home

Robinson Meyer reflects on Medium and What Blogging Has Become:

And I too, a lowly twentysomething, pine for days of less centralization. As I wrote a few days ago, in a New Medium-style short post, “I still find the idea of a diverse blogosphere — arrayed across tens of thousands of URLs, with sites organized by author and shaped by distinctive interests — really, distinctively, unavoidably cool.”

But is there a place in the web ecosystem for this kind of writing anymore? And is the cost of using Medium, which will centralize writing and create a kind of publisher/publishee power inequality, worth the ease? What will happen when widespread abuse comes to Medium, the way it’s come to Twitter? And social media companies have proven tremendously malleable, product-wise, to the desires of other companies — will Medium be the same? What does a piece of advertising look like on Medium anyway, when the line between journalism and PR on it is already so thin?

I’ve been around long enough for Blogger to rise (and fall), for MySpace to be the best (and then the worst) place to write your thoughts, and for Posterous and Windows Live Spaces to disappear (along with all my posts there). So I will stubbornly hold on to writing on this here, my very own URL.


A technical guide to mobile usability testing

I wrote a guest post on mobile usability testing for my friends at Unboxed Consulting. It’s something I’ve mentioned briefly here on Elezea before, but in this post I go quite deep on the ins and outs of setting up a mobile usability lab. From A technical guide to mobile usability testing:

Setting aside the details of recruiting, script writing, and interviewing, from a technical perspective doing usability testing on desktop web applications is pretty simple, thanks to software like Morae and Silverback. There is, however, no straightforward, single solution for doing usability testing on mobile devices. I recently went through the process of setting up our own mobile usability testing process at Jive, so I thought I’d share some of what we learned about the components of a good setup.

User testing and long-term product planning

Steve Barnett makes some great points on long-term planning in Plans, Details, Dates, and The Future1. I especially like the point about how user research fits into planning:

Before development starts on a new bit of work, you should be building prototypes and doing user testing with them. This always results in some changes to the plan, and often results in rather large changes. You can’t plan what these changes will be: you don’t know until you’ve done your user testing.


  1. And bless his heart for using an Oxford comma. 

The problem with surveys

Erika Hall speaks so much truth in her post On Surveys:

If you are treating a survey like a quantitative input, you can only ask questions that the respondents can be relied on to count. You must be honest about the type of data you are able to collect, or don’t bother.

My first role at eBay, years ago, was as a quantitative user researcher1. We ran surveys to measure satisfaction with different areas of the product over time. If that period taught me anything, it’s that surveys are extremely useful when combined with analytics as well as qualitative user research (triangulation), and pretty useless when looked at in isolation. On their own, they just don’t provide enough context.


  1. One of my early experiences at eBay was getting to work one morning and discovering that Peter Merholz wrote a scathing blog post about a survey I was running. This was my second month on the job, so I was pretty sure I was going to get fired. The worst part of it was that he didn’t have the full context, so his criticism wasn’t even valid. We were doing a controlled experiment where each group saw only one of the images in the survey, and the “likelihood to purchase” question was just a decoy as an introduction. We weren’t trying to get absolute numbers of likelihood to purchase (that would be ridiculous) — we were comparing responses to different pages to figure out what iconography would be best for ratings (stars, bars, or check marks). Subsequent questions were more specific about the ratings aspect. It went all the way up to our VP of Product and my manager had to write an explanation. I was mortified. I still sometimes wake up in the middle of the night in a cold sweat, screaming “survey!!!!!!!!!” 

A framework for empathy in design

The Paradox of Empathy is such a great post by Scott Jenson that I pretty much want to quote the whole thing. But I’ll stick with just this one gem, and encourage you to read it in full. It is a fantastic exploration of empathy in design, and includes a framework for making empathy part of our everyday work in a very practical way:

Designers will be the first to admit that not every empathic observation leads to a miraculous insight. However, it’s called “Design Thinking” for a reason: it’s how we process and explore, taking a complex problem and breaking it down before we build it back up. Product managers seem to expect a designer to walk up to a product, say something brilliant, and drop the mic. Experienced designers deeply understand a simple fact: design isn’t a deliverable, it’s a process. A process paved with dozens of small empathic observations that lead you, slowly, iteratively to a better product.

The problem for us designers is that our fellow teammates don’t always think this way and unfortunately, we as a community don’t reflect on this difference. It’s ironic that designers are passionate about how a product interacts with people but not how they themselves interact with their team.

Twitter text shots, and what design wants

I’ve been spending a lot of time thinking about how product design decisions aren’t neutral. The way we design a product has a direct effect on how people use it. This is obvious, but I think we often forget the real implication: Design wants something from its users. And we are the architects of those wants. We have a direct impact on user behavior, and we need to recognize the weight of that responsibility.

Let’s look at some recent product changes on Twitter as an example.

In October 2013, Twitter introduced more visual tweets, with photo previews within the timeline. It’s almost hard to imagine now, but before they introduced this you had to click on a link before you could see a photo.

This change had an immediate effect on how people used the product. I’m sure some of it was intentional — people began to tweet a lot more photos. Some of it was probably not intentional but still made sense: social media marketers caught on to the fact that if they attach a photo to an article tweet, they’ll get more attention since the tweet will take up more screen real estate. A little annoying, but ok, so far so good.

But then there was what must have been a fairly unexpected behavior change. People started to use screenshots of text to bypass Twitter’s 140 character limit. At first, only a few people did it. But then publishers and marketers started to notice, and it took off:

Some have even come up with guidelines for the best way to stand out in these “textshots”:

Following an exchange with MG Siegler a while back, I settled on a specific textshot style: sans-serif text with a sepia background pulled from Pocket. The idea of using the app’s sepia theme for these came from MG, who noticed that yellow screenshots had more contrast in Twitter’s native apps.

I’ll admit, the temptation to do this is strong. Earlier this week I used a textshot and it became my most retweeted tweet ever:

Nice, right? Win!

But wait… Let’s step back for a moment and have a look at the metrics on that tweet of mine:

Tweet activity

Almost 25,000 impressions, with a 0.9% click-through rate. That’s worse than a crappy banner ad, and it’s the sum total of the traffic I sent to Scott’s excellent post. As Derek Thompson points out in The Unbearable Lightness of Tweeting:

Is the social web just a matrix of empty shares, of hollow generosity? As Chartbeat CEO Tony Haile once said, there is “effectively no correlation between social shares and people actually reading.” People read without sharing, but just as often, perhaps, they share without reading. […]

There used to be a vague sense that Twitter drives traffic, and traffic drives renown (or fame, or pride, or whatever word defines the psychic benefit of public recognition). Instead, the truth is that Twitter can drive one sort of renown (there are some people who are Twitter-famous), and traffic affords a different psychic currency. But they are nearly independent variables.

All of this culminated in Medium’s Text Shots announcement yesterday:

Text shots

So there you have it. There is now a very real chance that most of our Twitter timelines will become nothing but screenshots of Medium articles that no one reads. That doesn’t help Medium, it doesn’t help authors, and it frankly doesn’t help us to experience and learn, which is kind of the point of reading. This trend does help Twitter, though. Quoting from The Unbearable Lightness of Tweeting again:

In the last month, I’ve created nearly 2 million impressions for Twitter. Whether that is good for my Twitter persona and my pride is a qualitative question whose answer resides outside the bounds of an analytics dashboard. But it is quantitatively not a good deal for The Atlantic. Something I already suspected has now been made crystal clear: 99 percent of my work on Twitter belongs to Twitter.

Twitter is a business, and impressions are how they make money, so this isn’t inherently evil or wrong. But Twitter is, if nothing else, not what we think it is. Not to get too curmudgeonly about “early Twitter”, but there was something amazing about the 140 character limit. Something about the constraint that brought out people’s creativity. And because it was all text, timelines were easy to scan. Now, all of that is different.

Putting all my personal feelings about this trend (and its implications for traffic and reading) aside, it’s time I get to the point. This fundamental change in the way Twitter is used can be traced back to a single, fairly simple design decision in 2013: expanding photos natively in the timeline. Without that change, none of this would have happened.

As designers we can’t possibly know all the ways our decisions will affect behavior in a product. But we have to, at the very least, recognize that design has an opinion, and that it wants people to behave a certain way. I like the way Jared Spool phrases this:

Over the last year, we’ve started explaining design as “the rendering of intent.” The designer imagines an outcome and puts forth activities to make that outcome real.

We have a responsibility to do our best to ensure design wants things that are good for users as well as the business. We have to think ahead as much as possible, because what design wants is up to us. And once it wants the wrong things, it might be too late to change.

Don’t just design features, design systems

Rune Madsen wrote a really interesting post on how our design methods need to change when we work in software (as opposed to print). He explains in the post On meta-design and algorithmic design systems:

So what is meta-design? In a traditional design practice, the designer works directly on a design product. Be it a logo, website, or a set of posters, the designer is the instrument to produce the final artifact. A meta-designer works to distill this instrumentation into a design system, often written in software, that can create the final artifact. Instead of drawing it manually, she is programming the system to draw it. These systems can then be used within different contexts to generate a range of design products without much effort.

I’ll add my vote for the need to spend much more effort on design systems (like Atomic Design) upfront, to standardize (and eventually speed up) later development.
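To make the meta-design idea a little more concrete, here’s a minimal, hypothetical sketch in Python — this is my own illustration of the principle, not anything from Rune’s post. The point is that instead of drawing each poster by hand, you encode the rules once and let the program generate the whole series; every layout rule, color, and filename below is invented for the example.

```python
# A tiny "meta-design" system: the rules are written once,
# and the program renders a family of artifacts from them.
# All parameters here are hypothetical, purely for illustration.

def poster_svg(title, accent, width=400, height=600):
    """Render one poster as an SVG string from a shared set of rules."""
    return f"""<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">
  <rect width="{width}" height="{height}" fill="#fdfdfd"/>
  <rect y="{height - 80}" width="{width}" height="80" fill="{accent}"/>
  <text x="20" y="60" font-family="Helvetica" font-size="32">{title}</text>
</svg>"""

# The same system produces a whole series without redrawing anything by hand.
series = [("Issue 01", "#e63946"), ("Issue 02", "#457b9d"), ("Issue 03", "#2a9d8f")]
for title, accent in series:
    filename = title.lower().replace(" ", "-") + ".svg"
    with open(filename, "w") as f:
        f.write(poster_svg(title, accent))
```

Change one rule in the system and every artifact in the series changes with it — which is exactly the shift Rune describes: the designer’s work moves from the artifact to the system that produces it.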
