
Why Google might just be right about responsive design in Africa

In a Memeburn article called Why Google might just be wrong about responsive design in Africa, Phillip Kruger argues that responsive design shouldn’t be used in Africa. He lays out his argument in two parts. First:

Responsive design only works on smartphones. So by default you are already ignoring 80+% of users in Africa. You are also reaching the 20% of users that possibly have internet access at home or work.

This is an argument I see a lot, but it’s valid only in the context of target audience and use cases. If the target market for your service primarily uses low-end phones, then by all means don’t bother with responsive design. But if you’re building a site to order take-out food for delivery in major cities, the situation changes dramatically. Now you’re most likely looking at a target market that sits squarely in the 20% of people who have smartphone access (and who don’t want to get off their couches and walk to their PCs to place an order).

This is why personas, scenarios, and use cases are so important. If you’re building a service for ALL THE PEOPLE (which isn’t advisable), then averages are appropriate. And those averages will rightly guide you to focusing on the 80% of people who do not have smartphones. But in most cases, the analytics that matter are not the averages of all users, but the specifics of the market you’re going after. Don’t dismiss responsive design in Africa because of averages. Dismiss it if it doesn’t make sense for your target market.

Phillip continues:

Responsive design is not lightweight. When using responsive design, the size of the download to the browser is still very big (in fact it’s very similar to what the webpage would be). All the HTML is still being downloaded (even parts that are hidden on a smartphone if you use media queries to set display:none in your CSS). Sure, you can have rules to download separate images for separate display sizes and that should help a bit.

Tim Lind wrote a comment on the article that addresses the problem with this logic:

Responsive design does not mean you can’t do server side optimisations. In fact it can help you to do these, and encourages a more efficient design process. You can design the content for the feature phones, and responsive media queries allows you to upgrade the design with a single stylesheet file for the smartphone or desktop (which server side optimisation could exclude).

One of the many good things about responsive design is that it forces designers and developers to spend a lot of time on optimisation to ensure light, fast pages. This is just good practice for web development in general — not just for mobile. Page bloat is a huge problem, with the average web page now weighing in at more than 1 MB.

Page speed optimisation is just good web citizenship, and it should be a requirement regardless of whether or not a responsive design approach is taken. The other point to remember is that mobile networks already do a lot of compression on served images (see How should we handle responsive images?).
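To make that counterpoint concrete, here’s a minimal sketch of the kind of conditional loading a responsive site can do in the browser, using the standard matchMedia API. It isn’t taken from either article, and the fragment URL and element ID are hypothetical; the point is simply that content small screens will never show doesn’t have to be downloaded and then hidden with display:none.

```typescript
// Hypothetical sketch: fetch desktop-only markup conditionally instead of
// shipping it to every device and hiding it with display:none.
const desktopQuery = window.matchMedia("(min-width: 768px)");

async function loadDesktopExtras(): Promise<void> {
  // On small screens, skip the request entirely: nothing extra is downloaded.
  if (!desktopQuery.matches) {
    return;
  }

  // The fragment URL and element ID below are made up for illustration.
  const response = await fetch("/fragments/desktop-sidebar.html");
  const html = await response.text();

  const container = document.getElementById("sidebar");
  if (container) {
    container.innerHTML = html;
  }
}

loadDesktopExtras().catch(console.error);
```

The same idea applies to images: serve small sources by default and only fetch larger ones when the viewport actually calls for them.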

What worries me about this debate is that there appears to be no room for nuance. Responsive design is either the answer to all of Africa’s problems, or we shouldn’t do it at all. But as with most things, the appropriate approach is to say “it depends.” A mobile strategy shouldn’t be a choice between a native app and a separate mobile site. A mobile strategy should form part of a larger web strategy, and it needs to include a discussion about the appropriateness of responsive design. It might not be the right thing for your project, but it should be on the table.

I keep reminding myself of Ben Callahan’s statement in The Responsive Dip: “Just because you can’t, doesn’t mean you shouldn’t.” Just because this is a difficult problem that we haven’t quite figured out, doesn’t mean we should throw it away and go back to how we’ve always done things. What we need to do now is push through and find elegant ways to apply responsive design in Africa. Where it makes sense, of course.

Update 2012/11/26: Phillip responded to all the feedback on his Memeburn post. See Google might be wrong – part 2. It’s good to get additional clarification on the Google talk that formed the backdrop for his original post. This isn’t the last discussion we’ll have about RWD in Africa, and that’s a good thing. We need to figure this out…

What design is really about

There’s quite a fight going on in the comments of Elliot Jay Stocks’s A conversation with Erik Spiekermann. If you’re able to wade through the mudslinging you’ll find some good points in there, like this paragraph by Erik himself:

Design is first and foremost an intellectual activity which has nothing to do with what medium you work in. It is about looking at a problem, understanding it, translating it into visuals, actions, and messages. That is solving the problem, whatever medium the solution may end up in.

The worst work is done by designers who have decided on a medium before they even know the problem that has to be solved. Just like a print designer (and I do not make that distinction myself) should not immediately think brochure or poster, an interaction designer should also be able to think about other media besides websites or apps. Otherwise you end up behaving like the infamous hammer: every problem looks like a nail.

This relates to a point I made earlier that we shouldn’t let technology or devices (what Erik calls “the medium”) guide product decisions. The problem and the use cases should guide those decisions.

How to be less boring

Scott Simpson tells us something I think we all desperately need to hear in his article in Issue 4 of The Magazine:

You are boring. So, so boring.

Don’t take it too hard. We’re all boring. At best, we’re recovering bores. Each day offers a hundred ways for us to bore the crap out of the folks with whom we live, work, and drink. And on the Internet, you’re able to bore thousands of people at once. […]

The Big Bore lurks inside us all. It’s dying to be set loose to lecture on Quentin Tarantino or what makes good ice cream. Fight it! Fight the urge to speak without listening, to tell a bad story, to stay inside your comfortable nest of back-patting pals. As you move away from boring, you will never be bored.

This relates really well to a recent post by Able Parris called Focus Means Ignoring:

We need to spend less time looking to others for interesting things, and start spending more time doing the things that make us interesting. […]

Similarly, and I am saying this more for myself, it’s easy to give time and attention to the things you enjoy or are easy, but true character comes when you give focus to the things that are difficult but must be done. This means you have to ignore everything else, and know that you will be better because of it.

Just imagine the virtuous cycle this could set off… As people post fewer boring things like Foursquare checkins and retweets of how awesome they are, and we all make the conscious decision to read fewer boring things and instead spend that time listening, learning, and doing new things, we could slowly and collectively pull the current state of the social web out of that cesspool of boringness. Well, that’s a pipe dream, of course. And to be fair — there’s nothing wrong with clicking on a good animated gif every once in a while.

Anyway, back to Scott’s article. One of his recommendations for fighting the descent into becoming boring is what he calls “Expanding your circles”:

When you expand your social and intellectual range, you become more interesting. You’re able to make connections that others don’t see. You’re like a hunter, bringing a fresh supply of ideas and stories back to share with your friends.

This is very much related to Mark Granovetter’s 1973 theory of weak ties [1]. The theory states that because a person with strong ties in a network more or less knows what the other people in the network know, the effective spread of information relies on the weak ties between people in separate networks.

In other words, to get more interesting information out of Twitter or any other social network, you need to follow people who give you access to additional knowledge clusters. If you see too many tweets about the same thing in your timeline, or if your RSS reader shows 5 consecutive links to the same tech article, you may have too many strong ties [2].

Go and find those weak ties at the edges of your interests, and strengthen them. Otherwise we’ll just continue to talk about the same stuff over and over and over again. And that’s boring.


  1. “The Strength of Weak Ties”, Mark Granovetter, 1973. PDF link

  2. I wrote about this extensively in How to get more out of Twitter

Why mobile and desktop operating systems shouldn’t be combined

Dmitry Fadeyev makes the best case so far for why it’s not a good idea to combine mobile and desktop operating systems into a unified experience, like Windows 8 has done. From Blurring of the Lines:

The road to a good OS is not a blurring of the lines between PCs and tablets, but rather an amplification of the differences through a strong focus on the uses that each category serves. The desktop OS should make use of large screen real estate and the precise targeting of the mouse cursor. The mobile OS should be optimized for the small screen and for the rough tap of the finger. The desktop OS should focus on power users and multi-tasking, the mobile OS should focus on content consumption. The environments they run on are different, the use cases are different, and the solutions should be different.

That’s exactly right. This “unified experience” sounds like a decision made from the viewpoint of devices and technology, not use cases. For example, if you make decisions based on devices and technology, you may decide to create an iPhone app before you know what kind of phones people will use your service on. If you make decisions based on real use cases, you may actually find that very few people would use your service on a mobile device, so a better solution would be to (gasp!) optimise for desktop use [1].

The irony is that even though Microsoft made a huge deal about their “no compromise” design philosophy, the Windows 8 experience will have to make compromises if the same software needs to work on both mobile and desktop devices.


  1. Wait, don’t slaughter me. I love Mobile First. I’m just saying that some services or applications just don’t lend themselves to mobile use. I’d argue that tax return software falls into that category. 

Quote: Jordan Moore on designing for the right people

If you are building a website (responsive or otherwise) and your project personas become industry heroes rather than those you painstakingly identified at the beginning of a project then it’s time to worry.

— Jordan Moore, Be careful who you build for.

Fewer ads for a better world

In The Banner Blindness Cure: How Fewer Ads Can Equal More Revenue, Dave Zinman points out something all readers already know, and that publishers will hopefully take note of:

It’s no wonder, looking at these stats, that banner blindness is such a glaring issue. Talk about losing sight of the forest for the trees: We’re so busy looking after our bottom line, we’re not paying attention to the user experience. We hit our visitors over the heads with ads like sledgehammers, then wonder why our ads aren’t “performing.” It’s absurd. Clearly, we’re doing it wrong. […]

Wouldn’t a publisher be far better off serving fewer ads, and taking top dollar for one or two premium placements? With highly relevant ads that aren’t forced to compete against several other ads on the page, odds of interaction and possible conversion are tremendously improved. And when ads perform better, publishers, advertisers and consumers win.

Somewhat related, here’s what’s happening over on the far end of the creepiness scale:

The odds are that access to you — or at least the online you — is being bought and sold in less than the blink of an eye. On the Web, powerful algorithms are sizing you up, based on myriad data points: what you Google, the sites you visit, the ads you click. Then, in real time, the chance to show you an ad is auctioned to the highest bidder.

Not that you’d know it. These days in the hyperkinetic world of digital advertising, all of this happens automatically, and imperceptibly, to most consumers.

If I may use a John Gruberism: Gross.

Two legacies to strive for

The Great Discontent just published another great interview, this time with Cameron Moll. The final two paragraphs, where he speaks about the kind of legacy he’d like to leave, really spoke to me. First, on a personal level:

I think the legacy I hope to leave for my family is that they, of all people, knew me in the most intimate way and regardless of how the public saw me, I hope they will be appreciative and thankful for who I was in their presence.

Or to quote CJ Chilvers:

As noble as you may believe your pursuit of excellence is, it means nothing if you go home at night to people who do not recognize you or want you around.

I’ve been thinking about family a lot lately, since the birth of our 2nd daughter 6 weeks ago. The first child is mostly a physical adjustment — the long, hard process of getting used to very little sleep, very little time, and no room for selfishness. The second child is more of an emotional adjustment. Suddenly you’re a family of four. Suddenly you’ve become your parents. Suddenly the people close to you can be scattered in many different places, and your heart somehow needs to stay in your body and not freak out because of all the evils in the world that can possibly hurt them. From physical exhaustion to emotional exhaustion — that’s the move from one to two kids.

But for me it is also a move to a better understanding of what it means to be a family, to be bound together through thick and thin, to care more for these people than I ever thought would be possible. And with that comes the realisation that I don’t want to be that guy. That Dad at the park who’s always on his iPhone. The one who’s never home in time for bath time. So I obsess over these things — it pretty much takes an act of God for me not to be home to give my 3-year-old a bath. And when I fail, I fall hard, and sometimes stumble rather slowly back on my feet.

So anyway, I’ve been thinking about family a lot lately. And as much as I love my work and my side projects, I cannot allow that to become more important than my family is. So I identify with Cameron and CJ’s words. I feel like I often fail at building towards that legacy, but I’m going to steal a buzz phrase from startup parlance and say that I think I at least “fail forward”. I hope.

And then, on a professional level, Cameron says this:

I don’t have it all figured out; I’ve made so many mistakes, but I hope that through some of the work I’ve produced or the efforts I’ve championed, people feel inspired to try harder and be better.

These things seem like pretty good legacy goals to strive for. Sign me up.

The importance of user experience design specialists

Abby Covert wrote a very important article on the just-relaunched Boxes and Arrows. I know we’re all pretty tired of the “What is UX?” debate, but A Perfection of Means and a Confusion of Aims addresses the damage that the umbrella term “user experience design” is doing to specialist UX functions. Some key passages (but you really should read the whole thing):

I am afraid that there is a shortage of specialist jobs, and it isn’t because those specialities aren’t needed. I believe it is because the value of those specialities, and the impact of not considering each carefully, is in too many cases not clearly called out to our clients and partners. […]

In my experience when “UX” is the term sold-in, the resulting project plans are less likely to reflect the points at which various specialities will be relied upon to progress the team. Often prescribing a stacked to the gills list of tasks reduced to the nebulous “Design the User Experience” on the Gantt chart. The makers of these types of plans leave it to “UX Designers” to divide the time they have amongst the various specialities of a “UX” and arrange their time against it. […]

The worst case scenarios result in teams jumping right to wireframes, prototypes and documentation. I see far too many UX designers that have become wireframe machines.

This is, by and large, an agency problem (but you see it in some internal design teams too). As agencies start to see the value of selling “this UX thing” more and more, many put out unicorn job specs that aim to find some generic skill combination that can be sold to clients as user experience work. And I’ve talked to several people who fell into the trap of these jobs, only to find that their new realities consist of making wireframes and getting dirty looks because their colleagues feel they’re just slowing down the design process.

I have nothing against wireframes — in fact, I remain a huge proponent despite some recent calls to move away from them. It’s not the focus of this post, but I still find huge value in wireframes (particularly HTML prototypes) to (1) work through the complexities of finding elegant solutions to difficult interaction problems, and (2) get early user feedback before moving into high-fidelity design and development. But the point of all this is that wireframes are not the end goal of user experience design; they’re part of the process. A process that involves research, information architecture, customer journey maps, content strategy, and all the other specialities Abby points to in her post.

And that’s where I’m 100% in agreement with Abby: we have generalised User Experience Design into a black-box process, which makes it very hard to convince clients (and some agencies!) that the specialist functions that go into designing holistic experiences for people are absolutely essential. We need agencies to:

  • Understand the different specialities that make up the User Experience Design process (I’ve taken a crack at a model here),
  • Hire specifically for some sensible combination of those skills, and
  • Educate clients on the value that each of those specialities can add to make their products successful.

Yes, UX unicorns do exist, I won’t dispute it. But they’re few and far between. The rest of us need to know that there is value in becoming really good at our chosen specialities, and we need to be confident that we can sell those skills to our clients.

So, I’m not arguing that we should throw away the title of UX Designer, as some have suggested. I’m saying that we have a responsibility to know that the field is made up of many specialities, and if we ignore those we do ourselves, our industry, and our clients a huge disservice.
