
The dangerous gap between those who make software, and those who use it

The gap between the technical skills required to use the software we make, and what the majority of people are actually capable of, is widening at an alarming rate. Not only that, but we often appear to not even like the people we design and develop for. We champion empathy as a core tenet of user experience design, yet we are mighty quick to point out how bad our moms are at “computers”, and how we hate going home for the holidays because we end up spending all our time on family tech support.

I worry that technology is advancing so quickly that we’re no longer able to ground our thinking in reality. Some might see this as a good thing, but I don’t. Unless we make a conscious effort to get back into the minds of our users — and not chuckle at what we find when we peek in there — we’re going to run away with the web and leave most of the world standing around in bewilderment, wondering what just happened.

Don’t believe me that this is a thing? Ok, here are some examples. First, CNET’s Greg Sandoval describes the downfall of Netflix in his must-read article Netflix’s lost year: The inside story of the price-hike train wreck. It’s long, so it’s easy to skip over this sentence that perfectly sums up why things went so wrong for Netflix:

But even visionaries can misread their customers when they are blinded by their past success.

CEO Reed Hastings thought he had his customers figured out, but he didn’t. At all.

The second example is Digg, which, of course, made headlines last week because they were sold to Betaworks for $500,000 after once being valued at more than $160 million. From Kevin Rose: Digg Failed Because ‘Social Media Grew Up’:

Among the missteps: Digg botched its re-launch in the summer of 2010, and, more importantly, he said the company was slow to respond to the criticism. ‘We were desperately trying to figure out how to get traffic back,’ he said. ‘A bunch of the community had already revolted by the time we fixed it.’

Once again, they thought they knew their customers, and once again, they didn’t.

For the last example we’ll go even further down the technical totem pole, lest we forget what goes on in the bottom half of the Internet. In 2010 ReadWriteWeb wrote an article about Facebook Connect and AOL Instant Messenger called Facebook Wants To Be Your One True Login. But their SEO was so good that if you went to Google and typed in “Facebook login”, that article would be the first result. It wasn’t long before they started receiving comments like this:

Is this Facebook?

That’s right — people thought that they were on Facebook, and that the “new design” had inexplicably taken away the ability to log in. Things got so bad that they had to put up this message in the middle of the article, which is still there today:

No, it isn't

The editors of ReadWriteWeb made one more mistake, though. They assumed that people know what a browser is. Watch this:

You may be asking yourself, “How do these people survive on the Internet? How do they get anything done online?” Well, believe it — they’re here, walking among us in plain sight. One can argue that things have gotten even worse since that article and video came out. Matthew Berk recently analyzed 1.3 billion URLs and found that 22% of web pages contain Facebook URLs. For most people, Google used to be the browser. Now Facebook is becoming the browser — it is people’s viewport to the Web.

And what are we doing about this? Well, mostly nothing, because we can’t be bothered to notice. We’re too busy arguing about Twitter’s API, and whether it’s worth reviewing a tablet (the Microsoft Surface) that doesn’t exist yet. Speaking of the Surface, let’s not forget Greg Cox’s point about the people who are likely to buy it, which serves as another example of the point I’m trying to make:

Ha ha, we scoff. Who wants to do Microsoft Office on a tablet? Office is boring. And tablets have a completely different use case to laptops. Who would want one to run full Windows?

Answer: Lots of people. People with different priorities, working different jobs, living in different countries. People we don’t quite understand.

The rabbit hole just doesn’t end, no matter how deep you go. We haven’t even talked about YouTube comments or Clients From Hell. But I’ll stop here, and just say this: the landscape we create software for is scary. It’s terribly comfortable over here on Twitter, but how can we design software and applications for people we don’t hang out with?

It seems like such a simple problem to solve, but I’m not seeing much evidence that talking to customers is a widespread practice among startups, or even among many established companies. We love talking about User Experience Design in the abstract — especially if it means we can argue about whether it exists or not.

But you know who the real heroes of UX are (you know, if it actually exists)? The ethnographers. The user researchers (who had to change their name from “usability engineers” because definitions blah blah blah). The real heroes are the people who spend their days understanding user needs, and fighting with all their might to get people to make things that solve real needs — not things that floor us with their beauty and radiance and lack of utility. Douwe Osinga’s description of Google Wave comes to mind:

Wave started with some fairly easy to understand ideas about online collaboration and communication. But in order to make it more general and universal, more ideas were added until the entire thing could only be explained in a 90 minute mind blowing demo that left people speechless but a little later wondering what the hell this was for.

Let’s stop that from happening to the things we make. We don’t have to leave this job to The Researchers. We can all talk to the people who use (or might use) our software. We can go to a coffee shop and get feedback on wireframes for the price of a few cappuccinos. We can sit and watch some of our family members use the web, and make notes. We can try to spend a fixed percentage of every week talking to users. If we don’t, we’ll continue to widen what Jakob Nielsen calls the Usability Divide:

Far worse than the [digital] economic divide is the fact that technology remains so complicated that many people couldn’t use a computer even if they got one for free. Many others can use computers, but don’t achieve the modern world’s full benefits because most of the available services are too difficult for them to understand.

Whereas the [digital] economic divide is closing rapidly, I see little progress on the usability divide. Usability is improving for higher-end users. For this group, websites get easier every year, generating vast profits for site owners. That’s all great news for high-end users, but the less-skilled 40% of users have seen little in the way of usability improvement. We know how to help these users — we’re simply not doing it.

We know how to help these users — we’re simply not doing it. We don’t have a choice; we have to talk to them. It’s easy to start: take your laptop with you on one of your coffee breaks, and ask some people if you can show them what you’re working on. They’ll love giving you feedback, and you’ll walk away with a better understanding of the usability divide — and some very real ideas about how to narrow it.

The fetishization of the offline, and a new definition of real

The impact of the Internet on society and relationships is a common theme on this site. I recently stumbled on a few articles on this topic that I think are worth highlighting. Yes, this idea has been covered a lot, and Sherry Turkle’s recent New York Times article brought the discussion to the forefront yet again. But don’t roll your eyes — there are some interesting arguments in these articles. As usual, I’m going to quote some key sections from each, but I highly recommend that you queue all of these up in Instapaper and read them in order. It’s great weekend reading!

It all started with Nathan Jurgenson’s The IRL Fetish — an excellent reflection on the stark (and fairly recent) distinction we make between being online and offline:

We are far from forgetting about the offline; rather we have become obsessed with being offline more than ever before. We have never appreciated a solitary stroll, a camping trip, a face-to-face chat with friends, or even our boredom better than we do now. Nothing has contributed more to our collective appreciation for being logged off and technologically disconnected than the very technologies of connection. The ease of digital distraction has made us appreciate solitude with a new intensity. In short, we’ve never cherished being alone, valued introspection, and treasured information disconnection more than we do now. Never has being disconnected — even if for just a moment — felt so profound.

He goes on to describe the obsession with the analog and the vintage — like the resurgence of vinyl — as the “fetishization of the offline”. An interesting, provocative phrase. The core of his argument follows:

In great part, the reason is that we have been taught to mistakenly view online as meaning not offline. The notion of the offline as real and authentic is a recent invention, corresponding with the rise of the online. If we can fix this false separation and view the digital and physical as enmeshed, we will understand that what we do while connected is inseparable from what we do when disconnected. That is, disconnection from the smartphone and social media isn’t really disconnection at all: The logic of social media follows us long after we log out. There was and is no offline; it is a lusted-after fetish object that some claim special ability to attain, and it has always been a phantom.

Nathan’s essay kicked off a slew of thoughtful responses that commend him for the article, but also disagree on some key points. First, the always brilliant Nicholas Carr responds in The line between offline and online:

I’m going to resist the temptation to quote some Wordsworth or Thoreau, but I will say while our present age may be tops in some things, it’s far from tops in the area of solitary strolls. The real tragedy — if in fact you see it as a tragedy, and most people do not — is that the solitary stroll, the camping trip, the gabfest with pals are themselves becoming saturated with digital ephemera. Even if we agree to turn off our gadgets for a spell, they remain ghostly presences — all those missed messages hang like apparitions in the air, taunting us — and that serves to separate us from the experience we seek. What we appreciate in such circumstances, what we might even obsess over, is an absence, not a presence.

I find that comment interesting because while Nathan claims that being online is inseparable from the experience of being offline, he doesn’t say anything about the negative effects of that entanglement. Nicholas points out that even though online experiences can enhance our offline relationships, those relationships can also be damaged by our inability to let go of the online.

Next up, Michael Sacasas has similar objections in his piece In Search of the Real, but he also adds this thought on the distinction between being offline and online:

I would not say as Jurgenson does at one point, “Facebook is real life.” The point, of course, is that every aspect of life is real. There is no non-being in being. Perhaps it is better to speak of the real not as the opposite of the virtual, but as that which is beyond our manipulation, what cannot be otherwise. In this sense, the pervasive self-consciousness that emerges alongside the socially keyed online is the real. It is like an incontrovertible law that cannot be broken. It is a law haunted by the loss its appearance announces, and it has no power to remedy that loss. It is a law without a gospel.

Aha — now we’re getting somewhere. The distinction between online and offline is legitimate, but calling one experience real and the other not doesn’t work. Instead, the only place the word “real” belongs in this discussion is in our realization (our self-awareness) that there is a distinction between online and offline, and it behooves us to figure out what that distinction means.

Adam Graber takes the discussion in a slightly different direction in Offline:

The same is true for every technology. It makes new things possible, but it also alters what we consider normal. Every technology is a new normal. The point though is not to try and “fix” it by logging off or downgrading or abandoning technology altogether. The point is to be aware of it. To understand not only what technology makes possible, but also what it normalizes, and even what it makes impossible.

There’s the “awareness” concept again. He continues:

Impossible like living offline IRL and seeing a beautiful sky without being tempted to Instagram it or having a brilliant idea and not writing a blog about it. Because online, the only things that exist are the things you put there. Otherwise, offline, all the ephemeral grandeur and intricacy of our daily lives does not exist unless we somehow capture it with our technology. The only other way to revel in the fleeting moments of our lives is to experience it with someone else — a meeting of sorts. But technology makes it so we don’t have to.

The theme is clear by now. Online and offline experiences are both real, and they have positive and negative effects on each other. As we discussed earlier, online experiences can enhance offline relationships because we bring our online interactions into those relationships. But those same relationships can also be worn down by the online’s constant and relentless hold on our consciousness.

Finally, Nicholas Carr weighs in again and pulls it all together with I was offline before offline was offline:

But the fact that we now consciously experience two different states of being called “online” and “offline,” which didn’t even exist a few years ago, shows how deeply technology can influence not only what we do but how we perceive ourselves and the world. Certainly we didn’t consciously choose to look at our lives in this way and then formulate the technology to fulfill our desire. The defense contractors who started building the internet didn’t say to each other, “For the good of mankind, let’s create a new dichotomy in perception.” And when we, as individuals, log on for the first time (or the ten-thousandth time), we don’t say to ourselves, “I’m going to use this new technology so I’ll be able to think about my life in terms of being online and being offline.” But that’s what happens.

It’s not that technology “wants” us to think in this way — technology doesn’t want a damn thing — it’s that technology has side effects that are unintended, unimagined, unplanned-for, unchosen, often invisible, and frequently profound. Technology gave us nature, as its shadow, and in a similar way it has given us “the offline.”

Some might say that these types of discussions are a waste of time. That people react with hand-waving alarmism every time a new technology emerges — the telephone and the printing press were going to make us stupid long before Google might be doing it. And it’s true that for every good discussion about this, there’s an equally bad one (looking at you, Newsweek). But I think that we have to keep talking and arguing about this, because it is in the extremes of these arguments that we find the middle ground that approximates the true impact of technology on our lives.

I recently went searching for my first tweet, and it’s about as inane as I expected.

I can honestly say that after more than 4 years, I still don’t know how this thing works. I know that being connected has altered my life in profound ways — some good (I get to write here!), some bad (I definitely struggle to put the phone down). But I think I’m ok with not knowing as long as enough people are coming together to try to understand how this online/offline thing affects us — and to challenge each other’s ideas in a thoughtful way.

I agree with Nicholas — technology doesn’t care what we do with it. But we cannot stumble blindly ahead without striving for the self-awareness that this still-real new reality requires. Because once we understand it, we’ll truly be able to regain control over the technology that is shaping us.

Typography, invisible design, and windows to words

In 1955 Beatrice Warde wrote an essay on typography, book publishing, and advertising called The Crystal Goblet, or Printing Should Be Invisible. It is one of the best descriptions of the concept of invisible design I’ve ever read. I pretty much want to quote the whole thing, but I’ll stick with this gorgeous paragraph, and let you click through for the rest:

The book typographer has the job of erecting a window between the reader inside the room and that landscape which is the author’s words. He may put up a stained-glass window of marvelous beauty, but a failure as a window; that is, he may use some rich superb type like text gothic that is something to be looked at, not through. Or he may work in what I call transparent or invisible typography. I have a book at home, of which I have no visual recollection whatever as far as its typography goes; when I think of it, all I see is the Three Musketeers and their comrades swaggering up and down the streets of Paris. The third type of window is one in which the glass is broken into relatively small leaded panes; and this corresponds to what is called ‘fine printing’ today, in that you are at least conscious that there is a window there, and that someone has enjoyed building it. That is not objectionable, because of a very important fact which has to do with the psychology of the subconscious mind. That is that the mental eye focuses through type and not upon it. The type which, through any arbitrary warping of design or excess of ‘colour’, gets in the way of the mental picture to be conveyed, is a bad type. Our subconsciousness is always afraid of blunders (which illogical setting, tight spacing and too-wide unleaded lines can trick us into), of boredom, and of officiousness. The running headline that keeps shouting at us, the line that looks like one long word, the capitals jammed together without hair-spaces — these mean subconscious squinting and loss of mental focus.

(link via Retinart)

BREAKING: Buying Facebook likes is a giant waste of money

From Facebook ‘likes’ and adverts’ value doubted:

A BBC investigation suggests companies are wasting large sums of money on adverts to gain “likes” from Facebook members who have no real interest in their products.

One can only assume that this study was done by the BBC’s Department of Obvious. But that’s just the opening paragraph — it gets better:

“Likes” are highly valued by many leading brands’ marketing departments. […] Some companies have attracted millions of “likes”.

I don’t know why, but it just suddenly strikes me as really weird that we’re able to read sentences like that on the BBC rather than on The Onion, where they belong. Whatever happened to just making good products, and telling people about them? Look, I like Colgate, I really do. But I have no need to “engage” with my toothpaste brand of choice. This whole like-hunting business is utterly bizarre.

But the best part of the article comes towards the end, where Facebook is asked to comment on the assertion that they have lots of fake profiles, created to spread spam:

“We’ve not seen evidence of a significant problem,” said a spokesman.

Here’s a question. If a fake Facebook account falls in a forest and no one is there to detect it, does that mean it’s not a “significant problem”, or could it mean that we have to get better at surveying the forest?

(link via @CathrynR)

Facebook is entangled in about a fifth of the web

I don’t want to quote too much from Matthew Berk’s fascinating URL analysis because it’s worth reading the whole thing. So I’ll just tease the following line from Study of ~1.3 Billion URLs: ~22% of Web Pages Reference Facebook:

It’s taken roughly a decade for Facebook to not only accrue roughly a billion users, but to entangle itself in about a fifth of the Web.

Ok, maybe one more bit, where he talks about the implications of all this traffic flowing through proprietary web properties:

Increasingly, people and organizations will seek to write themselves not to Web sites, but to the big “platforms” (APIs) like Facebook and Twitter. And more and more, Web sites are being rewoven into those social networks, whether by simple inclusions of “like” or “+1” buttons, or through more complex reflections of social connection. […]

My key takeaway here is that although Facebook may know about a sizable portion of the Web, the Web barely knows anything about what’s inside of Facebook.
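
To make the mechanics of a study like this concrete, here is a minimal sketch of how one might test a crawled page for Facebook references. To be clear, this is my own illustration: the regex and the toy pages are hypothetical, and this is not a description of Berk’s actual methodology.

```python
import re

# Match href/src attributes pointing at facebook.com or facebook.net
# ("like" buttons typically load from connect.facebook.net).
FACEBOOK_PATTERN = re.compile(
    r"""(?:href|src)\s*=\s*["'][^"']*facebook\.(?:com|net)""",
    re.IGNORECASE,
)

def references_facebook(html: str) -> bool:
    """True if the page links to, or embeds anything from, Facebook."""
    return bool(FACEBOOK_PATTERN.search(html))

def facebook_share(pages: list[str]) -> float:
    """Fraction of crawled pages that reference Facebook at least once."""
    if not pages:
        return 0.0
    return sum(references_facebook(p) for p in pages) / len(pages)

# Three toy "pages": the first two reference Facebook, the third doesn't.
crawl = [
    '<a href="https://www.facebook.com/colgate">Like us!</a>',
    '<script src="//connect.facebook.net/en_US/all.js"></script>',
    '<p>No social widgets here.</p>',
]
print(facebook_share(crawl))  # 0.666..., i.e. two thirds of this tiny "web"
```

At the scale of 1.3 billion URLs the hard problems are crawling and deduplication, of course, but the per-page test really is about this simple.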

Check out the full post.

The key to becoming a better designer: learn to [something]

Alex Maughan adds some fresh perspective to the reasonably stale “Should designers learn to code?” debate. In The click of a well-made box he writes:

I don’t just believe that having development knowledge helps me and others get stuff done. I believe it makes me a better designer. It does this in the same way that being empathetic to both user and business need does; in the same way that aesthetic theory in visual design does; in the same way that content awareness does; in the same way that knowledge of cultural semiotics and iconography does; in the same way that all sorts of knowledge systems do. Many things can positively influence a designer, and many things ultimately do, just as many areas of knowledge can enhance the value and efficacy of one’s work in any discipline.

Spot on. The code aspect is just something we’ve recently been focusing on, but let’s not forget all the other things that can make us better designers. The key to becoming a better designer is not necessarily to learn to code (although it could be — as Alex argues for very effectively). The key to becoming a better designer is to keep learning something related to the craft, always. I love how Alex Charchar (too many Alexes!) describes the current shift to more knowledge-based design in The Principles of Style:

The importance that designers place in their skills is increasing at a staggering rate. We have always taken our craft seriously, but now we are treating it as the architects do. We are working hard to shake the shadow of the artiness and whimsy of our work and are showing that being creative is serious stuff. Some of us have nerded out over theory forever, but the dusty tomes are no longer propping up the wonky table of our profession. Things are getting increasingly balanced and level.

It happened so quickly and what was once hard to find knowledge is now base knowledge. A dependency upon style has been replaced with fundamentals. Theory has become methadone and sobriety looks damn pretty.

So let’s relax a bit about learning to code, and rather stress out about whether we’re learning something. And as Alex (Maughan this time) points out at the end of his essay, make sure it’s something you enjoy:

I also simply enjoy the hell out of it, no matter what value others end up placing on it.

The good and the bad of grid-based web design

I really enjoyed Josh Clark’s post on New York’s grid-based urban design, and how that relates to web design. In Grids, Design Guidelines, Broken Rules, and the Streets of New York City he writes:

That’s what visual designers get from the grid, too: efficiency, ease, and cheap builds. No question, a well-deployed grid also bestows order and visual harmony on a layout, and those are worthy goals (perhaps the best goals!) of good design. But when you look around at how we use grids on the web, one has the strong impression that we lean on them more for efficiency than aesthetic delight.
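
The “efficiency, ease, and cheap builds” are easy to see once you remember that a grid system is, underneath, just arithmetic. Here is a minimal sketch of the sums a typical 12-column grid does for you; the function names and pixel values are my own hypothetical illustration, not anything from Josh’s post.

```python
def grid_columns(container: float, columns: int, gutter: float) -> list[tuple[float, float]]:
    """Return (x_offset, width) for each column.

    The grid satisfies: container = columns * width + (columns - 1) * gutter.
    """
    width = (container - (columns - 1) * gutter) / columns
    return [(i * (width + gutter), width) for i in range(columns)]

def span(cols: list[tuple[float, float]], start: int, n: int) -> tuple[float, float]:
    """(x_offset, width) of an element spanning n columns from `start` (0-based)."""
    x, _ = cols[start]
    last_x, last_w = cols[start + n - 1]
    return (x, last_x + last_w - x)

# A 960px page with 12 columns and 20px gutters:
cols = grid_columns(960, 12, 20)
print(span(cols, 0, 4))   # a sidebar spanning 4 columns
print(span(cols, 4, 8))   # a content area spanning the remaining 8
```

Once those offsets exist, every layout decision becomes a lookup instead of a judgment call, which is exactly the efficiency (and, as Josh argues, possibly the trap) of the grid.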

His post reminded me of Nishant Kothary’s Rap it in a Grid, which I’ve linked to before:

The reality is, a grid makes the act of solving design problems seem predictable, but says nothing for supplying the appropriate design solution. The grid is akin to the beat. But it’s hardly ever the flow, which is the true design solution.

I highly recommend both articles.

The hidden cost of code

Joel Spolsky wrote a great post on some of the hidden costs of software development. From Software Inventory:

The “cost” of code inventory is huge. It might add up to six or twelve months of work that is stuck in the assembly line and not yet in customers’ hands. This could be the difference between having a cutting-edge product (iPhone) or constantly playing catchup (Windows Phone). It’s nearly impossible to get people to buy Windows Phones, even if the iPhone is only six months better. A lot of markets have network effects, and being first has winner-take-all implications. So getting rid of inventory in the development process can make or break a product.

He goes on to propose an alternative to Backlog Grooming — one of the central tenets of Agile development:

The trouble is that 90% of the things in the feature backlog will never get implemented, ever. So every minute you spent writing down, designing, thinking about, or discussing features that are never going to get implemented is just time wasted. When I hear about product teams that regularly have “backlog grooming” sessions, in which they carefully waste a tiny amount of time and mental energy every day or every week thinking about every single feature which will never be implemented, I want to poke my eyes out.

His proposed solution is quite radical from an Agile perspective, and I’m not sure how it would work on large redesign/replatform projects, but it’s certainly worth considering.
