Work hard; be good to your mother

When I lived in Australia there was an ad for Pizza Hut that ran about 5 times a day for over a month. It featured Dougie the delivery guy — always on time, always courteous, always immaculately dressed. As he hands over the pizza and gets his money, he asks, “So… how’s about a tip?”

The customer thinks for a bit, starts closing the door, and then says: “Work hard; be good to your mother.”

No, you’re right, it’s not a very funny ad. Nevertheless the words have stuck in my head for over a decade now. Because I realise that in life, as in business, these might be the only two non-negotiable rules we all need to adhere to in order to be successful at what we do. Work hard. Be good to your mother.

Work hard

I recently made the mistake of using the hashtag #leadership in a tweet. I immediately got 5 auto-follows, and they all fit the same profile:

  • Their bios all contained some version of the term “leadership coach”.
  • They all had more than 20,000 followers, and they followed almost exactly the same number of people themselves. (This is, of course, because they auto-follow everyone who mentions the word “leadership” and automatically unfollow anyone who doesn’t follow back within 3-4 days - a loop like the one sketched below.)
  • They all tweet excessively, usually through APIs that generate random “inspirational” quotes every few minutes.
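The mechanics behind that pattern are trivial to automate. Here’s a minimal sketch of the loop these accounts appear to run - I’m only guessing at the wiring, so `client` is a hypothetical wrapper object, not a real Twitter library:

```python
import time

def run_leadership_bot(client, keyword="leadership", grace_days=4):
    """Follow anyone who mentions the keyword; unfollow anyone who
    doesn't follow back in time. `client` is a hypothetical wrapper
    with search/follow/follows_me/unfollow methods, not a real API."""
    pending = {}  # user_id -> timestamp of when we followed them

    while True:
        # Step 1: follow everyone who just used the magic word.
        for tweet in client.search(keyword):
            if tweet.user_id not in pending:
                client.follow(tweet.user_id)
                pending[tweet.user_id] = time.time()

        # Step 2: cull anyone who didn't reciprocate in time.
        for user_id, followed_at in list(pending.items()):
            if client.follows_me(user_id):
                del pending[user_id]  # they followed back; keep them
            elif time.time() - followed_at > grace_days * 86400:
                client.unfollow(user_id)
                del pending[user_id]

        time.sleep(3600)  # rinse and repeat, every hour
```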

They’ve basically automated their social media presence, and fine, maybe that works for them. But it doesn’t inspire me. Mitch Joel says the following in a brilliant post called Wanting Something:

In the end, the majority of the answer is not about the talent or the ability to pull a thought together, it’s about the commitment. The blank screen does not care… it’s agnostic. If you write, good for you. If you don’t, good for you. That being said, if you keep at it… If you use these platforms to think deeply about what you’re about and why you think your industry is the way it is, then slowly over time you’ll find your groove and your talent will shine.

Sadly, most people want it fast and easy. That’s good news for those who are truly committed to it, because they’re the ones who actually get what they want.

Or, as Dave Duarte says in The Ultimate Social Media Strategy is Not Having One:

Ultimately, social media is not just a set of technologies to be mastered, it is a cultural reality to be engaged with. It promises to expose the corrupt and reveal the extraordinary, and if nothing else it is guaranteed to keep us on our toes. It is chaotic, unpredictable, and uncontrollable. So the best social media strategy, then, is not a strategy at all, it is to be purposeful, ethical, and transparent and let our communications and behaviours flow from that.

Those are the people I admire, and the ones I want to follow on Twitter and in life. The ones who show up every day, work hard to get better at what they do, and don’t look for shortcuts.

Be good to your mother

Well, not just your mother, but everyone around you. Be nice. There really is no excuse to be rude to people on Twitter or elsewhere on the web. But of course, you only have to spend 2 minutes reading comments on YouTube to give up the dream of a civil Internet forever.

In a great post on commenters online, Dmitri Fadeyev quotes the following Thomas More passage from Utopia:

There’s a rule in the Council that no resolution can be debated on the day that it’s first proposed. All discussion is postponed until the next well-attended meeting. Otherwise someone’s liable to say the first thing that comes into his head, and then start thinking up arguments to justify what he has said, instead of trying to decide what’s best for the community. That type of person is quite prepared to sacrifice the public to his own prestige, just because, absurd as it may sound, he’s ashamed to admit that his first idea might have been wrong - when his first idea should have been to think before he spoke.

If only we could follow this rule before we reply/comment, the web would be such a nice neighborhood. Sure, it would probably be less interesting as well. And maybe I’m getting old, but I’d actually prefer nice at this point.

By the way, this doesn’t mean we shouldn’t criticize where it’s appropriate. It just means we should be respectful when we do it. As Mike Monteiro says in Giving Better Design Feedback:

Good feedback is not synonymous with positive feedback. If something isn’t working for you, tell the design team as early as possible. Will they be hurt? Not if they are professionals. A good designer will argue for their solution, and then will know when to let go.

By all means, be respectful, but don’t hold back in order to spare an individual’s feelings. Taking criticism is part of the job description. The sooner they know, the sooner they can explore other paths.

So make this your motto for a week or two, and seek out those who do the same. Who knows, maybe a nice Internet is out there after all.

UI Conventions and Inverted Scrolling in Mac OS X Lion

My favorite sentence from John Siracusa’s epic review of Mac OS X Lion is this one:

Apple appears tired of dragging people kicking and screaming into the future; with Lion, it has simply decided to leave without us.

And nowhere in Lion is this more apparent than in what appears to be everyone’s least favorite feature: inverted scrolling on the trackpad. As I’m sure you know, this means that scrolling now mirrors how it works on iOS devices: you essentially drag the content up and down the screen, as opposed to moving the viewport of the application like we’re used to.

[Image: the “natural scrolling” setting in Mac OS X Lion]

I love this change - it took me about 5 minutes to get used to it. But I appear to be in the minority with this opinion. It sounds like the first thing most people do once Lion is installed is head over to System Preferences and change it back to the old way of scrolling. So I’d like to step back a little and use this change to talk about UI conventions and when it’s ok to change them. To do that, let’s first look at what we know about Apple’s direction for their operating systems.

Data Is The Future

We got our first glimpse into Apple’s future at WWDC, where John Gruber summed up the keynote as follows:

Google’s frame is the browser window. Apple’s frame is the screen. That’s what we’ll remember about today’s keynote ten years from now.

Robert X. Cringely touched on the implications of this in an article about Facebook where he says this:

The trend is clear from “the computer is the computer” through “the network is the computer” to what’s next, which I believe is “the data is the computer.”

The point is this. Up to now, the metaphor we’ve had for computers is that data = files, and we view that data through windows (with a small “w”). We then move these windows around to get things done. With the introduction of iOS, Apple realized that this metaphor is not only unnecessary, it’s also not the most effective way to do things.

Instead, Apple wants us to remove the current abstraction from our data (the file system and the “window”) and focus on the data itself. Our data no longer has to be served to us through a middleman - we can go straight to the source. In this context, inverting the scrolling behavior makes total sense. Why would you move a window around to see data that sits somewhere behind it, when you can manipulate that data directly? If the data is the computer, scrolling down should move your words down the page, not up.
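Mechanically, the whole change comes down to a flipped sign in the scroll handler. A toy sketch, assuming `viewport_top` is a scrollTop-style offset and a positive `finger_delta` means the fingers moved up the trackpad:

```python
def scrolled_viewport_top(viewport_top, finger_delta, natural=True):
    """Map a trackpad gesture to a new viewport position.

    Classic scrolling moves the viewport in the direction of the
    fingers; natural scrolling drags the content instead, so the
    viewport moves the opposite way.
    """
    if natural:
        return viewport_top + finger_delta  # content follows the fingers
    return viewport_top - finger_delta      # viewport follows the fingers
```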

Inverted scrolling is only one piece of the puzzle. Full-screen mode, disappearing scroll bars, auto-save - these are all new features in Lion that build on this fundamental shift away from file-based computing to data-based computing[1].

But there is a problem with this shift, as we’ve seen from the outcry. People are used to doing things a certain way, and you can’t just go ahead and change that without asking permission. So how do you deal with a change like this?

Floppy Disks And UI Conventions

Another example of this kind of conundrum is the trusted old “save” icon - the floppy disk. My 2-year-old daughter will probably never see a floppy disk in her entire life, yet she will learn that the floppy disk icon = save action. Some have tried to change this - recently David Friedman proposed a baseball home plate as a replacement icon.

But getting every software developer (and user) in the world to adopt a new standard like this seems nearly impossible. So, we’re stuck with the floppy disk for now[2], even though it is an outdated metaphor, similar to how scrolling currently works.

So this is where we need to go back to the theory. In essence, reversing scrolling behavior lines up with one of the fundamental heuristics of UI design: there has to be a match between the system and the real world:

The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

There is a tension here. Users are familiar with the current concept of scrolling. Yet, I’ve tried to argue above that the new way is actually more natural and logical. Apple is essentially caught in the middle of this UI heuristic, and they had to make a choice. So the question becomes, when is it ok to change what’s familiar to something that’s different but more natural and logical?

You’ve Got To Leave It Behind

The answer is that you make such a change when you believe it’s part of a much bigger trend in computing, and you’re willing to take the negative backlash because you know you’re doing it for the greater good. Ok, stop rolling your eyes. Yes, I’ve been accused of drinking the Apple Kool-Aid just a little bit too much lately. But hear me out, and re-read that Siracusa quote at the beginning of this post.

Apple is undeniably moving iOS and Mac OS X closer to each other. And in their future, direct manipulation of the data (primarily through touch) is at the center of a larger computing shift first introduced by the iPad. So they are making this tough call now, saying, “this is where we’re going, don’t get left behind.”

In short, I implore you to take John Gruber’s advice on this:

My number one Lion tip: No matter how wrong it feels, stick with the new trackpad scrolling direction. Give it a week.

Six months from now I think we’ll look back at Lion and iOS 5 as the operating systems that ushered us into the era of the data as the computer. And we’ll be better for it.


  1. Apps like Notational Velocity have been going this route for a while, where the file system is completely hidden. You don’t interact with it at all, unless you really want to.↩
  2. At least until all developers follow Apple and Google Docs (to a certain extent) and replace save icons with auto-save options.↩

No More Banner Ads: Alternatives to Ad-Supported Media Sites

This morning I read an article about something that’s been on my mind for a while: Banner ads on media sites/blogs. In The Truth About Display Advertising, Mitch Joel writes:

Go to the website for your local newspaper. How many display ads, banners, buttons, text links, etc… do you see that are ads? Mine has over 15. That’s not in consecutive order… that’s all at once. It’s hard enough to get consumers to sit through four TV ads in a row, so what did you expect to have happen when you blast them with 15 ads on one page, all at once? Foregoing the aesthetics and the basic Marketing lesson that an ad will experience diminishing returns based on how cluttered the environment that it’s placed in is, does anyone really believe that this is the best way to advertise to consumers in the digital spaces?

No. I don’t think this is the best way to advertise to consumers. In fact, I don’t even think advertising is the best way to monetize media sites either. But are there viable alternatives? I think there are at least two business models that could work.

Distraction-Free Reading

One of my favorite services on the web is Readability. Users sign up for a low monthly fee (minimum $5), which lets them read articles in a beautiful distraction-free environment with all the ads stripped out. But here’s the best part: publishers also get something out of it:

70% of all Readability membership fees go directly to writers and publishers. Every time a subscriber uses Readability on your site, a portion of that subscriber’s fees are allocated to you. Whether in a web browser, iPhone, or just about any mobile or tablet device, Readability puts reading - and your content - at the center of the experience.
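The arithmetic behind that 70% is worth making concrete. Here’s a rough sketch of how one subscriber’s fee might be divided - the 70% figure is Readability’s, but the simple pro-rata split by reads is my assumption, not their documented formula:

```python
def publisher_payouts(monthly_fee, reads_by_site):
    """Split the publisher share of one subscriber's fee pro-rata by
    where that subscriber actually read. Assumes a simple per-read
    allocation; Readability's real formula may differ."""
    publisher_pool = monthly_fee * 0.70
    total_reads = sum(reads_by_site.values())
    return {site: publisher_pool * reads / total_reads
            for site, reads in reads_by_site.items()}

# A $5 subscriber who read 8 of their 20 articles on your site would
# send you 8/20 of $3.50, i.e. $1.40 that month.
print(publisher_payouts(5.00, {"yoursite.com": 8, "elsewhere.com": 12}))
```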

Here’s a 1-minute video that summarizes the experience:

You’ll also see that the Readability buttons are the only content sharing buttons I have on my blog apart from the Tweet button. There are many reasons for only choosing those two, but with Readability it’s simple - I think they have a fair business model where both reader and publisher win.

How would this work as a replacement for ads? Sites could integrate the “Read Later” functionality in some innovative ways. Sites that publish a lot of content could provide an ad-free home page with content snippets and “Read now/later” buttons to get to the full article. Users without a Readability (or an equivalent) account could view ad-supported full articles if they prefer. My hope is that content would win and readers would start to prefer paying small amounts of money for ad-free reading environments.
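The serving logic for that fallback model would be almost embarrassingly simple. A hypothetical sketch (none of these names come from a real API):

```python
def render_article(article, user):
    """Serve the clean version to paying readers, and the traditional
    ad-supported version to everyone else."""
    if user.get("has_reading_subscription"):
        return {"body": article["text"], "ads": []}
    return {"body": article["text"], "ads": ["banner-1", "banner-2"]}

subscriber = {"has_reading_subscription": True}
print(render_article({"text": "Full article..."}, subscriber))  # no ads
```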

This is by no means a well-explored alternative for ad-supported sites, but it could be the beginning of something great that rewards both readers and publishers.

Business Class Subscriptions

Oliver Reichenstein recently posted another very interesting alternative to traditional paywalls on sites like the New York Times. He refers to it as Freemium for News, and the idea is that instead of paying for additional content, as with traditional paywalls, you pay to get a better experience (just like paying for Business Class still gets you to the same destination, but in a much more comfortable way). Think of it as a Readability season ticket for a specific site.

Now, think about how this might work for ad-supported sites. I would certainly pay $0.99/month to access a Business Class version of TechCrunch. Would you?

But Can Any Of This Work?

Realistically, could either of these ideas provide a viable alternative to the traditional ad model for media sites and blogs? Probably not yet. But I don’t think we’re seeing enough discussion about alternatives, particularly those that focus on user experience as opposed to “monetizing traffic”. I also don’t think these ideas would ever replace ads completely (just being realistic), but at the very least they could provide an additional revenue stream that’s actually based on what users want, not on what advertisers want to push down our throats.

Let me end with something I probably should have begun with. I am no expert in the area of publishing, so it’s easy for me to back-seat-drive media sites out of their biggest source of revenue - after all, it’s not my car. I am in the lucky position where I don’t need to monetize this blog, so I don’t really have to make tough decisions about these things.

But I do hope that if I ever need to make money here, there would be a viable alternative to putting ads all over the page. I just don’t think an ad-supported User Experience Design blog is a good idea. So from the back seat I just ask those who make a living in the publishing industry: Can you please figure out how to do this so I don’t have to?

Google+ is going to be huge! No, it's not!

I like Google+. I like it because it’s clean and well-designed. I like it because it feels fresh - like moving into a new neighborhood after the one you came from got taken over by fake farms and endless profile picture changes. But most of all I like it because it’s quiet.

Since it’s in a limited beta, it’s still mostly populated by early adopters. So I can interact with brands like Mashable and Smashing Magazine and feel like I’m part of the conversation - something you can’t really do on Twitter and Facebook with mass-brands like that.

This thing is going to be huge

But alas, this will probably not last. Sooner or later the floodgates will open, and before you know it the once-pristine Google+ neighborhood will get overrun and fall prey to the meaningless graffiti that also transformed Facebook from social network to chaotic metaverse. Rocky Agrawal sums it up perfectly in When Google Circles Collide:

[Google+] doesn’t do anything to solve the biggest problem with social networks today: increasing the signal to noise ratio.

So the masses will descend, and we’ll be back to hunting for pockets of information among the endless streams of data. I’m getting tired just thinking about it.

Well, maybe it won’t be such a big deal

I could be wrong. The smart money might actually be that Google+ never even gets enough adoption to become the loud mess that Facebook is today. The reason lies in an article that made the rounds a few weeks ago, A Brief History Of The Corporation:

Take an average housewife, the target of much time mining early in the 20th century. It was clear where her attention was directed. Laundry, cooking, walking to the well for water, cleaning, were all obvious attention sinks. Washing machines, kitchen appliances, plumbing and vacuum cleaners helped free up a lot of that attention, which was then immediately directed (as corporate-captive attention) to magazines and television.

But as you find and capture most of the wild attention, new pockets of attention become harder to find. Worse, you now have to cannibalize your own previous uses of captive attention. Time for TV must be stolen from magazines and newspapers. Time for specialized entertainment must be stolen from time devoted to generalized entertainment.

What does this mean? Google+ time has to be stolen from Facebook time. And good luck with that, Google. It’s all because we have this stupid thing called limited time:

Each new “well” of attention runs out sooner. Every human mind has been mined to capacity using attention-oil drilling technologies. To get to Clay Shirky’s hypothetical notion of cognitive surplus, we need Alternative Attention sources.

So that’s the real problem for Google. Theirs can’t be an acquisition strategy, because most people who are on a social network are already on Facebook. So it will have to be a migration strategy. As Dare Obasanjo put it:

For Google+ to be successful it means people will need to find enough utility in the site that it takes away from their usage of Facebook and Twitter, and perhaps even replaces one of these sites in their daily routine. So far it isn’t clear why any regular person would do this.

Google+ wants Circles to be the thing that convinces users to switch. They’re betting that enough users will want to share different things with different groups of people that they’ll be willing to give up their existing networks and start a new one. I just don’t think that’s a strong enough argument. Coming back to Agrawal’s point: the real problem is how to get better signal out of the noise of social networks. That’s a need that no one has filled yet.

There’s a parallel to the tablet market here. Trying to compete with the iPad is absolutely futile - you will lose. Instead, HP has a very smart strategy with their TouchPad:

HP acknowledged Apple’s dominance in the tablet market, but said Apple wasn’t its target with the TouchPad.

“We think there’s a better opportunity for us to go after the enterprise space and those consumers that use PCs,” said Kerris. “This market is in its infancy and there is plenty of room for both of us to grow.”

They looked for a gap in the market, and they’re working actively to fill it. So it’s certainly not impossible that enough people migrate to Google+ for Metcalfe’s Law to kick in and we start to see some real network utility. But it’s going to be a tough sell unless they find that real gap in the market.

So which one is it?

Which way do I want it to go? I’m on the fence. For now I’m enjoying the peace and quiet in the new neighborhood. But that can also get boring pretty quickly. So I want to have my cake and eat it too. I want Google+ to scale and at the same time figure out how to solve the signal-to-noise problem in social media. Is that too much to ask?

Hierarchy and Aesthetics: Separating Science from Art in Visual Design

In this post I argue that we need to do a better job of communicating the differences between the science and the art of Visual Design, to help change the common perception among stakeholders and clients that user experience is purely subjective.

One of the most difficult aspects of visual design is finding the right science:art ratio to accomplish user goals. I’ve always subscribed to what Tim van Damme calls the mathematics of design. You start with the science:

If art is about talking and expressing yourself, interface design is about listening and disappearing into the background. You listen to the content and its context, and take it from there, one step at a time. Don’t worry about the looks, just start with the variables. 1 + 1 + 1 + … Baby steps, over and over again until what you have on your screen feels right.

And then you mix in art where appropriate:

But sometimes, even 1 + 1 is too much to handle, and you need to clear your head. This is where art comes into play, in the broadest meaning of the word: Paintings, illustrations, architecture, human beings, even nature is art. They won’t help you decide whether you should draw a 1 or 1.5 pixel highlight, but allow you to take a step back and just decide on what’s more suitable or pick one and move on.

Of course, this is not a serial process. Great designers are able to design within that delicate balance between science and art, and find the right ratio as they go. And even though it’s not easy, I do feel that most designers inherently get this - that visual design is science and art combined in different proportions based on the needs of the user and the application.

What’s even harder is explaining this to stakeholders and clients in a convincing way. Over the past week I’ve seen so many comments about how “UX is subjective” and “standards always change” that it got me thinking about a possible solution to this problem. I haven’t figured it out, but I’d like to write down some initial thoughts for discussion.

The problem with Visual Design

I think as a UX community we’ve done a good job of splitting out the different elements of UX Design. Stakeholders and clients are slowly starting to understand the difference between Information Architecture, Content Strategy, Interaction Design, etc. And most people also now understand that those functions are not just gut feel or whatever is the trend of the day. We’ve done a decent job of showing the evidence behind the decisions we make - thanks in large part to the results of user experience research methods like ethnography and usability studies.

But Visual Design is the odd one out in this equation. It walks such a fine line between science and art that most stakeholders and clients see only the art part. So they look at a design, make a gut call, and think it’s all just whatever style the designer fancied on that particular day. Sure, some of this is our own fault - many designs don’t have enough science in them at all. As Zeldman pointed out:

When Style is a fetish, sites confuse visitors, hurting users and the companies that paid for the sites. When designers don’t start by asking who will use the site, and what they will use it for, we get meaningless eye candy that gives beauty a bad name - at least, in some circles. Not enough designers are working in that vast middle ground between eye candy and hardcore usability where most of the web must be built.

We have to find a better way forward.

Breaking down the elements of Visual Design

So how do we fix this? One way is to provide a much clearer distinction between the different aspects of visual design. I’m not saying we should split the job title into two functions; I’m saying we should be more explicit about the goals and outcomes of visual design. And it needs to be simple, so it can’t be too detailed. I’m not 100% sure how to do that yet, but here is one suggestion:

  • Hierarchy Design could refer to decisions made during the design process that set the appropriate visual hierarchy based on the scientific principles of visual perception (such as contrast, grouping, balance, symmetry, etc.; the sketch after this list makes one of these measurable). See Designing for the Mind as an example.
  • Aesthetic Design could refer to decisions made during the design process to help the design fit the brand promise and elicit an appropriate emotional response (such as choice of color, typography, button styling, etc.). See In Defense of Eye Candy for more.
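To make “scientific” concrete: contrast, at least, is fully measurable. Here’s a short sketch of the WCAG 2.0 contrast-ratio formula - the kind of number a Hierarchy Design decision can point to instead of a style preference:

```python
def contrast_ratio(rgb1, rgb2):
    """WCAG 2.0 contrast ratio between two sRGB colours (0-255 channels)."""
    def luminance(rgb):
        def channel(c):
            c /= 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    l1, l2 = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white scores 21:1; light grey on white only ~1.7:1 - a
# hierarchy decision you can defend with a number.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0
print(round(contrast_ratio((200, 200, 200), (255, 255, 255)), 1))  # ~1.7
```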

Now, as I already mentioned, there is a lot of overlap between these activities, and you can’t have one kind of visual design without the other. But there has to be a way for us to talk to our stakeholders and clients about the visual layer of design that is not based on style preference but on “hardcore usability” as Zeldman puts it.

As we continue to grow and define the different elements of user experience I believe that Visual Design has the most baggage to overcome simply because of the history of web design and its initial focus on what’s pretty vs. what works. What works is not subjective, and we need to communicate that effectively to our stakeholders and clients. It’s not their fault for not “getting it”, it’s our fault for not explaining it properly. Let’s change that.

The problem with Flash and Ster Kinekor's new web site

South African movie site Ster Kinekor just relaunched their web site with much fanfare. Most of the discussion I’ve seen on Twitter about the new site concerns their decision to remain completely reliant on Flash. I agree with all the technology arguments against Flash, but I want to take a slightly different approach here and talk about Flash as an enabler of bad user experience.

You see, Flash is like the guy who keeps giving your alcoholic uncle a drink while the rest of the family is trying so hard to help him get sober. Every time he gets close to quitting he gets “one more drink” from somewhere and falls back into bad habits. And this is what Flash is to user experience.

Every time you get close to following standard UI conventions or keeping a flow simple, Flash comes in to whisper sweet animated nothings in your ear… “Just one more flyout,” it says. “Just one more hover state - come on, everybody’s doing it.” Designing a boring old button? “No man,” says Flash, “we can make this thing move and light up, wouldn’t that be cool?”

And before you know it, you end up with a site like Ster Kinekor’s new one.

In my view, most of the user experience issues with the old Ster Kinekor site have not been addressed in the redesign. For example:

  • There is no visual hierarchy on the site. Everything is important, so nothing is important. I just don’t know where I’m supposed to click.
  • Animations are intrusive and add to the confusion.
  • Standard UI conventions are ignored. Buttons don’t look like buttons, links don’t look like links (links are grey on the site…).
  • Forms are non-standard and not easy to fill out. For example, the checkout flow uses skeuomorphic design to make the credit card look like a real card, but it’s just confusing. And you can’t copy and paste your card number from a different document.

There are more issues, but that’s not really what this post is about. This post is a call to cut off Flash as a primary development technology for web sites - and not just because it’s slow, bad for SEO, doesn’t work on iOS, and all the other technical arguments against it.

We need to cut off Flash mostly because it makes it way too easy to design bad user experiences. The web is undeniably moving beyond Web 2.0 (whatever that was) and into an era where simple designs that put content first provide the best user experience. And Flash simply doesn’t fit that mold.

Apple as "the third who benefits", or why developers shouldn't be upset

Perhaps the most succinct summary of Monday’s Apple WWDC keynote is this tweet by Dustin Curtis:

[Screenshot of Dustin Curtis’s tweet]

I understand the sentiment, and a lot of the post-keynote blog posts echoed this general statement. The most measured response, in my view, came from Marco Arment, the creator of Instapaper:

If Reading List gets widely adopted and millions of people start saving pages for later reading, a portion of those people will be interested in upgrading to a dedicated, deluxe app and service to serve their needs better. And they’ll quickly find Instapaper in the App Store.

I’m certainly not going to stop using Instapaper. I’m deeply invested in the service and can’t see myself moving to Safari any time soon. But that’s beside the point. Here’s the point.

I find it strange that people are freaking out about how Apple is going after successful apps and integrating them deeply into Lion and iOS. Here’s Rich Mulholland (well, censored a little bit):

[Screenshot of Rich Mulholland’s tweet]

For my part, I agree much more with Justin Williams when he says:

Some people grow frustrated by Apple continually making inroads in existing developer’s territory, but it comes with being a part of the platform. The key is to ensure your product lineup is diverse enough that you can survive taking the blow Apple may offer at the next keynote.

The Theory

And this is where we have to start talking about sociological theory (no, don’t go away, this is going to be great!). One of the key concepts in Social Network Theory is Ronald Burt’s theory of ‘structural holes’. This theory aims to explain how competition works, and argues that networks provide two types of benefits: information benefits and control benefits.

  • Information benefits refer to who knows about relevant information and how fast they find out about it. People with strong networks will generally know more about relevant subjects, and they will find out about them faster.
  • Control benefits refer to the advantages of being an important player in a well-connected network. In a large network, central players have more bargaining power than other players, which also means that they can, to a large extent, control many of the information flows within the network.

Burt’s theory of structural holes aims to enhance these benefits to their full potential. A structural hole is “a separation between non-redundant contacts” (Burt, 1992). The holes between non-redundant contacts provide entrepreneurial opportunities that can enhance both the control benefits and the information benefits of networks.

To understand the role of structural holes in this regard, it is necessary to understand the concept of tertius gaudens. Taken from the work of Georg Simmel, the tertius gaudens is defined as “the third who benefits” (Simmel, 1923). It describes the person who benefits from the disunion of two others. For example, when two people want to buy the same product, the seller can play their bids against one another to get a higher price for the particular product.
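That bidding example is easy to make concrete. A toy simulation of the broker’s position (the names and numbers are mine for illustration, not from Burt or Simmel):

```python
def broker_price(limit_a, limit_b, start=1.0, step=1.0):
    """The seller shuttles between two bidders, soliciting a higher
    bid from each in turn, until one of them drops out."""
    price, limits, turn = start, (limit_a, limit_b), 0
    while limits[turn % 2] >= price + step:  # can this bidder top the price?
        price += step
        turn += 1
    return price

# Buyers willing to pay up to 10 and 14: the winner ends up paying 11,
# one step past the rival's limit. The "third who benefits" captures the
# entire gap between the opening price and the weaker bidder's ceiling.
print(broker_price(10, 14))
```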

Structural holes are the setting in which the tertius gaudens operates. An entrepreneur stepping into a structural hole at the right time will have the power and the control to negotiate the relationship between the two actors divided by the hole, most often by playing their demands against one another.

Apple’s Strategy

This is exactly what Apple is doing, and has been doing since the first iPhone came out (maybe even before). They saw the structural hole between 3rd-party developers and consumers, and walked right into it. Through the App Store, they built an enormous network (information benefits) where they broker the relationship between developers and users (control benefits). By providing developers with a massive audience, they became “the third who benefits.”

I also don’t think they’ve been particularly secretive about this strategy, so it shouldn’t come as a surprise to developers that if they have a one-platform strategy, and that platform is iOS, they might get disintermediated at some point.

Which brings us back to Marco Arment and Instapaper, and why I don’t think he’s in trouble. Instapaper is an ecosystem that’s intimately part of my workflow. It’s integrated with Firefox, iPhone, iPad, Twitter, Google Reader, Flipboard, Zite, … the list goes on. I’m not going to switch away, because I don’t see Instapaper as an iOS app. I see it as a solution to my reading needs.

So should developers still make iOS apps? Of course. But it’s important to realize that the product shouldn’t be the app. The product should be the problem you solve for users, on multiple platforms and in a simple, integrated way. Those are the apps that will survive (and even thrive) despite any changes that occur on Apple or another platform.

Product roadmaps are safe

Over on the 37signals blog they just reposted an old article entitled Product roadmaps are dangerous. Jason Fried says the following:

Instead of the roadmap, just look out a few weeks at a time. Work on the next most important thing. What’s the point of a long list when you can’t work on everything at once anyway? Finish what’s important now and then figure out what’s important next. One step at a time.

It’s hard to disagree with a person (and a company) you have great admiration for, as I do for Jason and 37signals. But I do think it’s important to set the record straight on product roadmaps - particularly when it comes to large organizations. The post highlights two main concerns with product roadmaps:

  • Product roadmaps assume you know what’s going to happen 6 - 18 months from now
  • Product roadmaps set expectations, so you can’t change them (and if you do change them it becomes a worthless exercise)

So let’s look at each of these points in turn.

Product roadmaps assume you know the future

Jason writes:

When you let a product roadmap guide you, you let the past drive the future. You’re saying “6 months ago I knew what 18 months from now would look like.” You’re saying “I’m not going to pay attention to now, I’m going to pay attention to then.” You’re saying “I should be working at the Psychic Friends Network.”

This is not what a product roadmap is, or what it’s supposed to do. The purpose of a product roadmap is to set forth a long-term vision for the business, and break that up into smaller, meaningful pieces of work, based on what you know now. It’s a fallacy to believe that this is an unchangeable list of dates about where the business is headed. A product roadmap that doesn’t react to day-to-day changes in the market and within the company is a pretty dumb document.

At my organization we are very clear that the product roadmap is a flexible guideline that can (and must) change frequently as needed. But it gives the teams (and the management team) something to work towards. It’s a common vision, a sense of direction that’s more than just fluffy language - it’s concrete evidence that we’re headed somewhere good, and we know how to get there.

We can change direction as many times as we want. This doesn’t make it a useless exercise; it means that instead of starting fresh on a new “roadmap” every few weeks, you build on your past successes, don’t make the same mistakes twice, and keep making measurable progress, since you can see where you came from.

Product roadmaps set the wrong expectations

Elsewhere in the post:

The other problem with roadmaps is the expectations game. People expect you to deliver what you say you will in 4, 5, 6 months. And what if you have a better idea? What if there’s a shift in the market that you need to address? What if what you thought wasn’t what actually happened? Any change in the roadmap nullifies the roadmap. Then the map isn’t a map at all.

If you have this problem it doesn’t mean that product roadmaps are wrong, it means that you’re doing it wrong. As long as everyone in the organization buys into the fluid nature of the roadmap, you won’t have this problem. In our organization we do this mainly through the mechanism of what we call the Product Council (I was partial to Intergalactic Product Force, but for some reason that didn’t fly so well). Here’s how it works.

The Product Council is made up of the heads (VPs) of every department in the organization: Engineering, Marketing, Support, Category, etc. This body has a weekly meeting where we discuss the current product roadmap and priorities. We ask ourselves, Are we still working on the most important things? If something more important comes up, we prioritize it higher in the roadmap, and something else shifts down. If we’re happy with the direction, we do nothing. If a new opportunity arises we ask ourselves, Is this more important than what we’re working on right now? Or is this something we should work on next? If so, what moves down in the priority list?

From here, I communicate with my Product Team about any changes, and we discuss this to make sure no one missed anything. But then - and this is important - the Product Manager has complete autonomy and ownership over the implementation of the roadmap. The Product Council sets the priorities (with input from all parts of the organization), but the Product Managers work with their development teams (and others) to set the timeline, the implementation details, the design, everything.
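If it helps, here’s a minimal sketch of that zero-sum rule - the roadmap as a fixed-capacity priority list, where promoting one item necessarily pushes something else down or off the plan (the item names are made up for illustration):

```python
class Roadmap:
    def __init__(self, items, capacity=5):
        self.capacity = capacity
        self.items = list(items)[:capacity]  # index 0 = most important

    def propose(self, item, priority):
        """The Product Council's question: is this more important than
        what we're working on now? If so, everything below shifts down,
        and whatever falls past capacity drops off the plan."""
        self.items.insert(priority, item)
        dropped = self.items[self.capacity:]
        self.items = self.items[:self.capacity]
        return dropped

plan = Roadmap(["checkout flow", "search", "mobile site", "reviews", "API"])
print(plan.propose("security fix", 0))  # -> ['API'] moves off the roadmap
```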

There’s more to it, but in the interest of brevity I’ll stop there. This process has three main advantages:

  • It gives the management team complete transparency into what the Product team is working on, and it allows anyone to make the case for a change in priorities. Why this kind of transparency is so essential is a subject for a different blog post, but in short, it takes away the vast majority of the politics you see in many organizations, and it frees up the teams to do what they do best - execute.
  • It prevents scope creep. Nothing can go on the roadmap without something else moving out or down. As anyone who ever worked at a large organization knows, this is an absolutely critical part of a successful development cycle.
  • It gives the Product Manager and their teams what they need to be successful: direction and autonomy. As Jocelyn Glei says: “Give your team members what they need to thrive, and then get out of the way.”

Why product roadmaps are safe (and essential)

At a practical level I went through the exercise of figuring out how we could execute in my organization without a roadmap. And I just can’t see it. Changes to current pages/flows affect changes we’ll make down the line - I have to think about that.

If you’re serious about frequent incremental change as opposed to large redesign projects (as we are), you can’t live without a roadmap because you’ll have no idea how far you’ve gone, what you still need to do, and what’s more important than something else. And perhaps most dangerous of all, everyone in the organization will come to you and want all their projects done right now, and you’ll have no systemic method for dealing with that in a way that’s best for the business.

Andy Wagner summed up my feelings on this issue quite succinctly in a comment on the 37signals post:

[Product roadmaps are] an opportunity to dream about what the future might look like so that as you make your day-to-day responses to the customer, you can do so consistent with building the future state. It emphatically should not be anything to be a slave to; it should be dynamic and notional, not static and specific.

Jason says, “The further you get from now, the less you know. And the less you know, the worse your decisions will be.” We agree on that. My argument is that without a roadmap you only see now. And if you only see now without seeing yesterday and tomorrow, you don’t see a whole lot. And “the less you know, the worse your decisions will be”.

My notes from Oliver Rippel's NetProphet talk on "The current state & future of e-commerce in Africa"

These are my notes from Oliver Rippel’s talk at NetProphet 2011. Oliver is the CEO of MIH, a group company overseeing African and Middle East online properties like Mocality and kalahari.net.

The state of e-commerce in Africa

  • As soon as e-commerce becomes more than 1% of retail sales, that’s when it becomes mainstream
  • US not the most successful e-commerce market - Korea is, with 9% of retail sales online. US is at 4%
  • E-commerce in Africa is still nascent:
    • Egypt - 22% Internet penetration, less than 0.01% online retail penetration
    • Nigeria - 29% Internet penetration, less than 0.01% online retail penetration
    • South Africa
      • 6 million Internet users, 12% penetration
      • 0.4% online retail penetration
      • 16.7% credit card penetration
      • 14 e-commerce sites in Top 100 SA sites

Positive e-commerce macro-indicators in Africa

  • Big average projected real GDP growth
  • There is a growing middle class of 320m Africans
  • High mobile penetration (World average: 60%; South Africa: 92%)
  • The promise of accessible and affordable broadband Internet is there

Lessons for building a winning e-commerce business in Africa

MIH’s focus is on the full e-commerce value chain. The brands cover the whole purchase cycle: awareness, interest, decision, action, post-sale, resale.

  • Embrace mobile
  • Leverage offline
    • Go where the users are - online marketing on its own simply won’t work
    • Go to shopping malls and put up posters - whatever works
  • Cash is king
    • 50 million bank accounts in Africa; 95% of transactions are cash-based
    • The only mobile payment system that is scaling is M-Pesa in Kenya: P2P payments
    • They are converting a cash economy into a digital economy, so that can now also be used for e-commerce
  • Build trust
    • An open marketplace model (eBay-style) is inadequate in a low-trust, early-stage environment
    • Instead, MIH uses controlled marketplaces that reduce barriers for buyers by building a trusted brand

How long can BlackBerry hang on to its smartphone market in South Africa?

BlackBerry maker Research In Motion just cut their earnings guidance for Q1 2011, blaming slower sales. Even though the future of RIM looks bleak from a US perspective, you wouldn’t think so looking at the South African market. BlackBerries are simply everywhere. I’ve always wondered why BlackBerry has such a large portion of the SA smartphone market, and I can think of four reasons:

  1. Most BlackBerry contracts come with unlimited free data, which (to my knowledge) no other smartphone handset does at a reasonable cost.
  2. When it comes to business users, it’s still the only phone trusted by corporate IT departments.
  3. A capable smartphone at a reasonable price (although an influx of cheaper Android and Nokia phones might make this a moot point). (Thanks Steyn for pointing this one out in the comments)
  4. The popularity and cost-effectiveness of BBM (although WhatsApp largely takes this away as a selling point). (Thanks Stafford for pointing this one out)

Now, here’s where it gets interesting. The latest earnings guidance cut clearly spells big trouble for RIM, and in a great blog post on Forbes, Eric Jackson lists 10 questions he would ask CEO Jim Balsillie based on that news, including the following:

Your bullish analysts used to say “yes, the US business is dying but International is going to keep growing.” You seemed to be saying last night that demand is drying up in Latin America too.  Does that mean the US was a sign of what is to come for your future International growth?

Now combine that with a recent IDC report that predicts Africa will become the first truly post-PC continent:

IDC estimates that in South Africa, 800,000 PCs were shipped in 2010 and the number is expected to decline by about four percent annually to reach 650,000 by 2015. Meanwhile, 1.3 million handsets were shipped in 2010 and that rate is expected to increase at a compound annual growth rate (CAGR) of nine percent to reach 2 million annually by 2015.
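Those projections are just compound annual growth, and IDC’s endpoints check out. A quick sanity check:

```python
def project(base, annual_growth, years):
    """Compound annual growth rate: base * (1 + g) ** years."""
    return base * (1 + annual_growth) ** years

print(round(project(800_000, -0.04, 5)))    # ~652,000 PCs shipped in 2015
print(round(project(1_300_000, 0.09, 5)))   # ~2,000,000 handsets in 2015
```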

You have to ask yourself: how long can BlackBerry keep its apparent dominance in the smartphone market in South Africa? As mobile demand increases it appears that they will simply be unable to produce hardware that can keep up with consumers’ ever increasing smartphone requirements.

The “How Angry Birds would look on a BlackBerry” joke is funny, but there is certainly some truth behind it. As the line between work and life continues to blur, you don’t want a business machine that happens to make calls. You want a personalised handset that can also be used for work. This is something RIM simply hasn’t figured out how to do, so they continue to double down on the “corporate security” angle. As Slate recently pointed out in a review of the PlayBook:

The incoherence, I think, is a sign of something deeper: Research in Motion doesn’t know what kind of company it wants to be. It made its fortune selling gadgets to chief information officers - IT guys who wanted to give their employees access to office e-mail on the go, but only in a way that accorded with corporate security policies. When they talk about RIM’s strengths, the company’s leaders like to point to their “CIO friendliness.”

The trouble is, being friendly with CIOs doesn’t matter as much as it used to. Nowadays people don’t ask the tech guy which mobile gadgets pass muster. Instead, tech guys look to employees to decide which gadgets to support. RIM’s strategy - to infiltrate companies as a first step to becoming a mass-market hit - has been eclipsed by the Apple approach, which is to infiltrate schools and homes, and then hope that regular people nag their IT guys to let them use iPads at work, too.

Meanwhile, Nokia appears to have given up on the US, but they’re coming for Africa in full force:

Nokia is already working with developers in several African countries and Peng feels that Nokia’s next big growth opportunity is to go beyond bringing affordable voice and SMS to delivering affordable web and applications.

“Rural populations live their lives largely outside of the reach of high quality services; through solutions like Nokia Data Gathering, we are already supporting field workers to collect, send and receive information quickly and securely via a mobile phone, helping circumvent infrastructural challenges and speed up data collection needs in sectors such as health, agriculture, environmental conservation, population census and emergency services,” added Peng, in a press release sent after her speech.

It might not happen in the next few months, but I think there is a dangerous trend on the horizon for RIM. Between mobile handset growth in SA, trouble in the US market, and huge competition on the way, there’s a perfect storm brewing in BlackBerry land.