
Design is about meeting business goals, not making you feel good

In this post I make the argument that companies should hire designers who have a strong foundation in psychology and design theory. I further make the case that when Product Managers and other UX practitioners give feedback on designs, they should move away from their personal visual preferences and instead focus on feedback that relates to the user and business goals they are trying to meet.

I recently had a discussion with an Interaction Designer friend of mine about the perils of Design by Committee.  His take on it was, in essence, as follows (paraphrased a little bit):

The problem is that everyone thinks they’re a designer. Not everyone can code, so they don’t go to developers telling them their markup needs to be more semantic or more valid. But everyone has a gut feel about design - they like certain colors or certain styles, and some people just really hate yellow. So since everyone has an emotional response to a design, and “it’s just like art,” they think they know enough about design to turn those personal preferences into feedback.

It is a real shame that we have come to this point where design just means “making things pretty” for some. Even among people who do understand it’s about more than that, there is sometimes this misconception that there is so much personal taste involved that “your guess is as good as mine.” And to add insult to injury, the way that feedback is given to the designer is often more destructive than helpful. Look, I’m not against design feedback at all — it’s an essential part of the process. But if you’re going to give feedback, do it right. More on that a little later.

The truth is that design is science. Yes, of course it’s a creative discipline, and every designer has a style they like, and strong ideas on aesthetics.  But it is important to understand that aesthetics is the last mile of design. Before that (if you do it right) comes hours and hours of analysis and thinking, resulting in solutions that are built on very real psychological principles, easily proven using the user experience validation stack.

The major problem with focusing too much on aesthetics (and our personal views of it) in design is this: it makes the site harder to use.  Jeffrey Zeldman’s excellent 1999 piece Style versus design places some of this blame on designers themselves, but it sums the problem up perfectly:

When Style is a fetish, sites confuse visitors, hurting users and the companies that paid for the sites. When designers don’t start by asking who will use the site, and what they will use it for, we get meaningless eye candy that gives beauty a bad name — at least, in some circles. Not enough designers are working in that vast middle ground between eye candy and hardcore usability where most of the web must be built.

So let’s be clear. Design is not about what feels good to you. Design is about meeting business goals. It’s about solving user needs so that your business can make more money. In Mike Monteiro’s recent post on Giving Better Design Feedback he builds on this idea and gets to the crux of it:

First rule of design feedback: what you’re looking at is not art. It’s not even close. It’s a business tool in the making and should be looked at objectively like any other business tool you work with. The right question is not, “Do I like it?” but “Does this meet our goals?” If it’s blue, don’t ask yourself whether you like blue. Ask yourself if blue is going to help you sell sprockets. Better yet: ask your design team. You just wrote your first feedback question.

This madness has to stop. I’m not arguing that we should create ugly web sites. In fact, as long as the focus is first and foremost on meeting user needs and business goals, beautiful sites can make the experience simply wonderful. But I am arguing that those of us in the Product Management and User Experience field need to fight for three things:

  • A design culture that is rooted in theory, analysis, and critical thinking.
  • A company culture that hires designers with strong roots in theory and analysis.
  • A managerial culture that trusts its designers to do their jobs.

So how do we as Product Managers and UX practitioners help effect this change?  We need to gain a much better understanding of what goes on under the hood of design. We need to do this so that we can give better design feedback, but also for the sake of the business in general, so that we are able to make a strong argument for what’s right when it comes to design culture. And lucky for us, there are plenty of great resources online to help us learn how to objectively look at design, and understand the science behind it. Here is my initial suggested reading list:

  • The psychologist’s view of UX design is an incredibly detailed look at user behavior on websites, with additional outgoing links to more great content. I go back to this post often.
  • Designing for the Mind explains how the brain interprets visual information, and how that translates to web design that works (and is also aesthetically pleasing).
  • Gestalt Principles Applied in Design goes further down the path of using established psychological principles to create better designs.

But just how important is it that we solve this problem? In a scary, almost prophetic way, Zeldman ends his post on style vs. design with the following words of caution:

Most of all, I worry about web users. Because, after ten-plus years of commercial web development, they still have a tough time finding what they’re looking for, and they still wonder why it’s so damned unpleasant to read text on the web — which is what most of them do when they’re online.

I fear that now, more than 10 years after he wrote those words, we’re still in the same boat. Let’s change that. The next time you need to give feedback on a design, don’t go with your gut. Go with science, and keep your user/business goals in mind. Ask your designers goal-oriented questions, and if they can defend the decisions they made, trust them. If they can’t, you just gave constructive feedback. As you should.

Inspiration for designers stuck in the 'sheer undiluted slog'

I recently read two book excerpts, both about art and the creative process, that I think are extremely relevant to web design, so I wanted to share them here. The first is from the book Art & Fear: Observations on the Perils (and Rewards) of Artmaking, and it tells the story of a ceramics teacher on his first day of instruction:

A ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of the work they produced. All those on the right would be graded solely on their works’ quality.

His procedure was simple: On the final day of class he would bring in his bathroom scales and weigh the work of the quantity group; 50 pounds of pots rated an A, 40 pounds a B, and so on. Those being graded on quality, however, needed to produce only one pot — albeit a perfect one — to get an A. At grading time, the works with the highest quality were all produced by the group being graded for quantity.

It seems that while the quantity group was busily churning out piles of work — and learning from their mistakes — the quality group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of clay.

The second is from Absolute Truths, where a character in Susan Howatch’s novel talks about the struggles she encounters as a sculptor:

But no matter how much the mess and distortion make you want to despair, you can’t abandon the work because you’re chained to the bloody thing, it’s absolutely woven into your soul and you know you can never rest until you’ve brought truth out of all the distortion and beauty out of all the mess - but it’s agony, agony, agony - while simultaneously being the most wonderful and rewarding experience in the world - and that’s the creative process so few people understand.

It involves an indestructible sort of fidelity, an insane sort of hope, an indescribable sort of… well, it’s love isn’t it? There’s no other word for it… and don’t throw Mozart at me… I know he claimed his creative process was no more than a form of automatic writing, but the truth was he sweated and slaved and died young giving birth to all that music. He poured himself out and suffered.

That’s the way it is. That’s creation. You can’t create without waste and mess and sheer undiluted slog. You can’t create without pain. It’s all part of the process, it’s in the nature of things.

So in the end every major disaster, every tiny error, every wrong turning, every fragment of discarded clay, all the blood, sweat, and tears - everything has meaning. I give it meaning. I reuse, reshape, recast all that goes wrong so that in the end nothing is wasted and nothing is without significance and nothing ceases to be precious to me.

These stories are some of the best descriptions of the creative process that I’ve ever read. The next time you’re mid-design and feel like you’re stuck in the “sheer undiluted slog” that is sometimes the reality of what we do, think of this.

You’re not just theorizing about perfection - you’re doing.

Think about how you can reuse, reshape, and recast all the failed efforts.

Give it meaning by letting it lead you to the next, better solution.

Plaxo registration and the benefits of good microcopy

Remember Plaxo? I do too, but up to now I remembered them like I remember MySpace: “That site that used to be popular for something-or-other.” But recently a bunch of people I used to work with moved to Plaxo, and they’re colleagues I respect, so I thought I’d check it out again. You know, one day. But to his credit, Preston’s incessant tweeting about how awesome Plaxo is finally got me off my procrastinating butt to sign up for the thing.

And I am impressed.

I only signed up this morning so this isn’t really a review of the service, but I did want to make a couple of points about the sign-up process and some very effective use of microcopy. UX designer Joshua Porter has written extensively about the value of good microcopy, and so have the folks at Polon. To quote from Joshua’s post:

Microcopy is small yet powerful copy. It’s fast, light, and deadly. It’s a short sentence, a phrase, a few words. A single word. It’s the small copy that has the biggest impact. Don’t judge it on its size… judge it on its effectiveness.

Below is the first screen in the Plaxo registration flow. I’ve circled the microcopy on the form:

The copy on Date of Birth and Gender is particularly interesting, for two reasons:

  1. They’re extraneous fields that you usually don’t see on sign-up forms that are optimized for maximum conversion. More fields usually mean higher drop-off, so this has the potential to be dangerous.
  2. In particular, these are fields that people feel a little uneasy about from a privacy perspective. People are especially skeptical about providing Date of Birth.

Plaxo does quite a few things right with this microcopy:

  1. Short messages to explain why the fields are required.
  2. Plainspoken language that is easy to understand - no legal jargon.
  3. The explanation is phrased as a user benefit. They want your date of birth not for some sinister reason, but so that your friends can wish you Happy Birthday. That’s good copy.

I’d love to see some data on the conversion rate of this form with and without the microcopy. Maybe Plaxo can do the user experience community a favor and run an A/B test for us? :)
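If anyone did run that A/B test, judging the result is straightforward. Here is a minimal sketch of the two-proportion z-test commonly used to compare conversion rates between two variants — the sign-up counts below are made up for illustration, not Plaxo data:

```python
import math

def conversion_z_test(conv_a, total_a, conv_b, total_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / total_a
    p_b = conv_b / total_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical numbers: 1,000 sign-up attempts per variant,
# A = form without microcopy, B = form with microcopy.
p_a, p_b, z = conversion_z_test(480, 1000, 530, 1000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")
```

If |z| exceeds roughly 1.96, the difference is significant at the 95% confidence level — in this made-up case, the microcopy variant would win.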

I also wanted to briefly mention the second screen in the sign-up flow:

We’ve all seen confirmation screens that tell us we need to confirm our email address. But I hadn’t before seen one that shows an image of what the confirmation email will look like. In an age where users are terrified of fraud, this is another small detail that probably has a pretty significant impact on users’ comfort with the Plaxo service. Well done, guys.

But hey, it’s not all good. I spent about 10 minutes trying to set things up and I got pretty overwhelmed with all the information being thrown at me, so I took a break to write this post instead. I hope I’m the only one with that reaction…

How to stay sane as a Product Manager

As an enthusiastic (but mediocre) runner, I’ve been thinking a lot about the similarities between marathons and product management.  With all this focus on sprints, we sometimes forget that as one sprint runs into the next one, and the next one, and the next one, at some point it doesn’t feel like a sprint at all any more.  And before you know it you’re tired, overworked, and excessively grumpy — basically teetering on the edge of sanity.  There has to be a better way, right?

So when it comes to long runs, I think you can plot the relationship between distance covered and your level of joy as follows:

In general, you start off feeling great.  But as the first mile progresses, you start getting weird thoughts.  “What have I done?” “This is the stupidest decision I’ve ever made!” “I’m going to die!!!” And by the time you finish the first mile, you’re ready to collapse in a crying heap of shame.  But then something strange happens: you start enjoying yourself.  You hit your stride, you look around, you enjoy the scenery and the people… And before you know it you think you can carry on like this forever.

But of course, you can’t. At some point, things start going south again, and you feel like you’re never going to be able to finish. A downward spiral of self-doubt ends with a dose of endless self-pity in the dark depths of hell. And you’re ready to give up again. But then you see it: “1 mile to go.” And suddenly everything feels ok again. You pick up speed and race across the finish line like the champion that you know you are. And you say, “That was great! When can we do it again?”

This is a post about what I call First Mile Problems and Last Mile Problems in day-to-day Product Management.  If you replace “Distance” with “Project timeline”, and “Joy” with “Sanity level” in the graph above, I think you’ll have a good overview of what it means to be a PM. And the problems you experience at the beginning of a project are often distinctly different from the problems you experience later on. So I’m hoping that we can help each other here, and hopefully end up with a graph that’s a little more… stable.

First Mile Problems

1. Aligning the team

One of the toughest challenges in Product Management is getting everyone on the project team working in the same direction towards the same vision. Sure, you all like each other and work well together, but how do you take all those differing opinions, stop the brother/sister-like bickering, and get to work? This isn’t something I’ve completely figured out, but I do think some of the answer lies in what Business Week calls I-shaped people.

We’ve all heard about T-shaped people — those who have a deep understanding of one specific topic, and a basic knowledge of a broad range of different topics. But I-shaped people are different:

These have their feet firmly planted in the mud of the practical world, and yet stretch far enough to stick their head in the clouds when they need to. Furthermore, they simultaneously span all of the space in between.

[They are able to] rise above the specifics of a particular problem to think about them in a more abstract, and in some ways, more general way.

I-shaped people are able to build consensus around the team’s direction because they not only see where the product is going, but are also deep enough in the weeds to have the credibility to make the day-to-day decisions. I look at it this way:

When you set the vision for the product, keep your head in the clouds:

  • Communicate the vision clearly to the team — don’t expect them to just implement whatever you tell them to do. Bring everyone along for the ride — it will make the vision better, and keep everyone invested in what you’re doing.
  • Show how you plan to get there, so that the team knows it’s more than a pipe dream. Use broad strokes to communicate ordered priorities that will get the team to the vision. Which brings me to my next point…
  • Use pictures. It doesn’t matter what kind. Sketches, wireframes, hi-fi mocks, whatever it takes to get the message across and allow everyone to have the same vision in their heads. Not showing at least some framework is a recipe for running into trouble in the last mile.
  • Above all, remain flexible. The vision is a plan, and nothing more. Things will change. New ideas will come up. The interface will change. That’s more than ok — it’s a good thing. But you have to start walking in the right direction, otherwise you’ll never get anywhere.

When you get going on the project, keep your feet on the ground:

  • Be in the details. Understand every pixel on the page. Know about every single Jira ticket. Show that you care about the journey, not just the destination. Be there for your team to remove every and any roadblock that might come up and prevent them from making progress.
  • Communicate clear expectations, then trust your team. If you did your job right, you hired a team of amazing people. So don’t second-guess them. Ask questions, but allow designers to defend their decisions, and trust them as long as the thought process makes sense.
  • Most importantly, don’t hover. This is a hard one for me, but don’t stand over the designer’s shoulder. I have a rule — if a designer has Photoshop/Fireworks open, I stay away. They’ll come to me when they’re done. As the excellent article World Wide Mush points out: “If you want to foster creativity and excellence, you have to introduce some boundaries. Teams need some privacy from one another to develop unique approaches to any kind of competition. Making everything open all the time creates what I call a global mush.” Designers need time to design on their own before they share their thoughts with the rest of the team.
  • And finally, learn what you don’t know. This one might be a little controversial, but I tend to agree with Six Revisions in their post Should Web Designers Know HTML and CSS: “Web designers should know HTML/CSS — even if it’s just limited to the fundamentals — for the sake of being able to create web designs and web interfaces that work on the medium.” If you don’t know how to code, learn. You don’t have to be great at it, you just need to understand it. If you came from a technical background and don’t know much about UX design principles, LEARN it. It will garner respect, make you more effective, and above all, result in better products.

I want to end this section with a final word on trust, which is a topic that’s very close to my heart. The site “99%” recently posted a fantastic article called What Motivates Us To Do Great Work. It’s a must-read for all managers of creative teams. The crux is this:

For creative thinkers, [there are] three key motivators: autonomy (self-directed work), mastery (getting better at stuff), and purpose (serving a greater vision). All three are intrinsic motivators.

As creative thinkers, we want to make progress, and we want to move big ideas forward. So, it’s no surprise that the best motivator is being empowered to take action.

When it comes to recommendations for creative leaders, [authors of a recent study] don’t mince words: “Scrupulously avoid impeding progress by changing goals autocratically, being indecisive, or holding up resources.” In short, give your team members what they need to thrive, and then get out of the way.

Like I said: trust, don’t hover…

2. Stakeholder buy-in

Once you have team alignment on the project and the vision, it’s time to turn your attention outward and focus on stakeholder buy-in. This part of the process often makes me feel like I’m on the wrong end of a very angry mob. It’s a stage when Product Managers need to use all the communication skills they can muster. Because when it comes to design, everyone has an opinion.

So this is where we need to talk about the phenomenon of Design by Committee.

If you haven’t seen the Oatmeal comic How A Web Design Goes Straight To Hell, you need to go read that first, and then come back here — it’s a perfect summary of the problem. The ultimate post on this problem has also already been written, so I’m not going to spend too much time on it — just go read every word in Why Design by Committee Must Die in Smashing Magazine. I do want to highlight a couple of areas in that article, and add some of my own comments.

One of the main problems we have in web design today is that everyone thinks they’re a designer. Coding is different — not everyone can code. But with design, just as with art, everyone has an opinion. You like it or you don’t. And because you have this immediate visceral reaction to a design, it’s tempting to confuse that with knowing what makes a design good. But that’s simply not true.

As posts like Designing for the Mind and Gestalt Principles Applied in Design have shown, what makes a design “good” has very little to do with taste, and everything to do with the proven psychology of visual perception. “Pretty” is a small part of design compared to applying the principles of solid user experience design to an interface. So please, let’s leave design to the people who are trained in this stuff. Have I mentioned the importance of trust?

The Smashing Magazine article ends with this advice, which I agree with:

The sensible answer is to listen, absorb, discuss, be able to defend any design decision with clarity and reason, know when to pick your battles and know when to let go.

On a practical level, this is how I apply that piece of advice:

  • Respond to every piece of feedback. This is tiring, but essential. Regardless of how helpful it is, if someone took the time to give you feedback on a design, you need to respond to it.
  • Note what feedback is being incorporated. Be open to good feedback, don’t let pride get in the way of a design improvement, and let the person know what feedback is being incorporated.
  • Explain why feedback is not being taken. If a particular piece of feedback is not being taken, don’t just ignore it. Let the person know that you thought about it, and explain the reason(s) for not incorporating that feedback. It’s hard to get upset at you if you explain clearly why you’re taking the direction you’ve chosen. And if you’re not sure how to defend the decision…
  • Use the user experience validation stack to defend decisions. Read the post Winning a User Experience Debate for more detail, but in summary, try to first defend a decision based on user evidence — actual user testing on the product. If that’s not available, go to Google and find user research that backs up the decision. In the absence of that, go back to design theory to explain your direction. I mentioned some sample posts above that should be helpful, and also check out The Psychologist’s View of UX Design.

When it comes down to it, everything once again comes back to trust, and a clear definition of roles and responsibilities. Senior management needs to trust their hiring judgment, trust their Product Managers, and give them the authority to make decisions for the projects they are responsible for. And, of course, hold them accountable if those decisions end up being wrong. It’s all we want: give us the tools we need to get the best possible product out the door, and hold us accountable if we’re not successful.

Last Mile Problems

Ok, so let’s assume you have a team that’s moving along, working away on a project, and all the stakeholders are bought in, so you’re on a roll. But it is almost inevitable that things will get rocky again towards the end of the project. Here are some Last Mile Problems and thoughts on how to deal with them.

1. People issues

I probably don’t even have to describe this one. We all know what happens. People get tired and cranky. They start distrusting each other. A small mistake creeps in and the whole team jumps in to point it out. Whatever the catalyst, the outcome is all too familiar — a team that fights more than they work. So what do you do?

I’m not a fan of self-help books at all. Let me start by saying that. But a previous manager recommended Crucial Conversations: Tools for Talking When Stakes Are High to me, and it’s been one of the most helpful books I’ve ever read. You have to read it cover to cover, but in summary, it describes how, when people are put in situations where they feel threatened for one reason or another, they react in one of two ways:

  • Silence. When things get rough, some people simply withdraw, either by going quiet, glossing over the issues, or masking what’s really going on by changing the subject and talking about other things.
  • Violence. Others resort to attacks (directly, or indirectly through passive-aggressive behavior), putting labels on people (“he’s an idiot who writes bad code…”), or trying to control the situation without considering the opinions of others.

Both of these responses are destructive and not conducive to good teamwork. So the rest of the book focuses on how to deal with those behaviors. How do you create a safe environment where everyone feels they can voice their opinion without feeling disrespected or undervalued? I’ve found the following process really helpful in situations of team conflict:

  • Commit to seek a mutual purpose. Recognize that there is a problem, and get the team to commit to fixing it and not just sweep things under the rug.
  • Recognize the purpose behind the strategy. If different people on the team want different things, ask them to explain why they want this done a certain way. Even better if this can be phrased from a user’s perspective. For example, saying “I want this button to be red because I believe it will convert better” shows that the purpose behind the strategy is to increase conversion.
  • Invent a mutual purpose. Once everyone has voiced the purpose behind their strategy, discuss a purpose everyone can agree on, before looking at specific tactics/strategies to get there. For example, if one team is concerned about conversion and the other about user confusion, can you bring those together and invent a purpose that encompasses both ideas? Can the purpose be to reduce drop-off while staying within the site’s visual style guide?
  • Brainstorm new strategies. Once you know what the purpose is, the team can brainstorm solutions that will fulfill that purpose. Continuing our example, move the discussion away from button color, and find ways to move elements around to reduce user confusion and drop-off, while committing to testing different button colors and adjusting the style guide based on whatever the data says.

I used an imperfect example to explain those points, but the principles work — give it a try!

2. Delivery issues

Stress levels rise as delivery dates approach, especially if it looks like those dates aren’t going to be met. The first thing we need to clear up is that humans are horrible at estimation (here’s a very interesting post trying to solve that problem). I think the solution to this kind of stress, as controversial (yet recently very popular) as it is, is to do away with detailed estimates.  Tom Fishburne sums it up really well in his post Waterfall Planning:

It’s important to know the business inside and out and have a clear trajectory of where we’re headed.  But there is a point where planning becomes overplanning.  All that time spent planning is time spent not doing.

Gantt charts are particularly problematic in this regard, as summed up by another great post called Project Managers, Not Calendar Monkeys:

I’ve been working on a way to balance the benefits of Gantt charts (ability to input dependencies and adjust an entire schedule with the push of a button) with the best way to communicate project schedules to clients. Most people can’t read a Gantt chart, and no client should have to. Gantt charts are more useful for planning next steps than providing an at-a-glance project status anyway.

So what’s the solution? I don’t think we really know yet. I am all for planning and knowing where you’re going, as I pointed out earlier in this post. But I think holding a team to specific deadlines isn’t the right way to increase velocity, and results in more stress than it’s worth. So what’s the alternative? I think you build (and sell to executives) a team driven by different types of goals:

  • Build a team driven by priorities, not timelines. Know the most important things to work on, work on those first, and move on to the next thing. Have goals (that can include timelines), but realize that things can change. Trust your team to work towards those goals, and then knock off the priorities one by one. Of course there will be deadlines driven by external pressures, but keep those at a high level so that the team has a clear goal to focus on. This will ensure better quality (because you’re not cutting corners and rushing to get things done) and higher velocity (because everyone is focused on doing the right things in the right order).
  • Build a team driven by action, not meetings. For progress meetings, I prefer to represent my team so that they can go on with their work uninterrupted. If someone wants to attend the meeting and speak for themselves, that’s fantastic, but I don’t think it should be a requirement that everyone on the team be present for strategy and update meetings. Note that this is different from working design meetings, where actual solutions are being produced. A meeting where you walk out with a deliverable (a sketch, a wireframe, a product requirement) is a good meeting. All others should be handled by the Product Manager/Owner so that the team can go about their business uninterrupted.
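For teams that do drop detailed per-task estimates, one lightweight way to still answer “when will it be done?” is to forecast from the team’s actual historical throughput. This is my own sketch of a Monte Carlo forecast — the weekly numbers are made up for illustration:

```python
import random

def forecast_weeks(remaining_items, weekly_throughput, trials=10_000, seed=42):
    """Monte Carlo forecast: how many weeks to finish the backlog,
    sampling from the team's historical weekly throughput."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_items:
            # Each simulated week, the team completes as many items
            # as it did in some randomly chosen historical week.
            done += rng.choice(weekly_throughput)
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    # Report the median and the 85th-percentile outcome.
    return outcomes[trials // 2], outcomes[int(trials * 0.85)]

# Hypothetical history: items completed in each of the last 8 weeks.
median, p85 = forecast_weeks(30, [3, 5, 2, 4, 6, 3, 4, 5])
print(f"50% chance of finishing within {median} weeks, 85% within {p85}")
```

The output is a probability range (“85% chance we’re done within N weeks”) rather than a single date, which sets much healthier expectations with stakeholders than a Gantt chart ever could.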

Speaking of which, allow me to bring in one more voice of reason on meetings, Mike Monteiro. In his excellent post The Chokehold of Calendars he writes:

Meetings may be toxic, but calendars are the superfund sites that allow that toxicity to thrive. All calendars suck. And they all suck in the same way. Calendars are a record of interruptions. And quite often they’re a battlefield over who owns whose time.

Make sure the focus is on working, not telling other people what you’re working on.

So there that is

As I read through this post again, I realize there might be some resistance to some of these ideas — not because anything I said here is particularly new, but because I know there are several counter-arguments that can be made to these points. That’s exactly how it should be.

I’d love to get some input on this, particularly ideas and tips on how you’ve been able to deal with some of these problems. This is all a work in progress, as I assume it will be until the end of time.

This post is based on a recent talk I gave at the South African Scrum User Group, entitled Roadmap to serenity - How to stay sane as a Product Owner.

How to measure the effectiveness of web content

Content strategy is starting to get its much-deserved time in the spotlight as part of the user experience design family.  As basic examples of confusing/bizarre content like this one and this one show, getting serious about content is way overdue.  But I’m a little worried that we haven’t seen much talk on how to measure the effectiveness of web content.  It is unfortunate that in some companies it is still a struggle to sell the benefits of UX design, but it is the reality, so we have to deal with it.

Selling content strategy to clients and stakeholders is, of course, not the only reason why measuring its effectiveness is important. It is also essential as part of the whole design process:

  • How do we select the best content if we have a variety of different alternatives, each with its own group of supporters who want to get it on the site right away?
  • Since the voice of a web site can be such an abstract, arbitrary decision sometimes, how can we apply methodologically robust research methods to help make these decisions?
  • How do we know that the content we wrote made a difference on the site?

So that is what this post is about — a proposal for how to measure the effectiveness of web content.

What makes content effective?

First, I would define “effectiveness” in this context as the optimization of the following three concepts:

  • Do users understand what you are trying to tell them and what action they should take to be successful in their task?
  • Are you invoking the desired emotions with your content?
  • Does the proposed content result in higher conversion rates than other alternatives?

It’s so important to combine the user perception data (the first two concepts) with business metrics (the third concept).  From my experience, the only way for user experience designers to effect change is if we can show the positive impact these changes have on engagement/revenue metrics.

Measuring content effectiveness

My proposal is to map each of the three concepts to a research methodology that is specifically designed to get the needed information:

  • Comprehension is measured with usability testing.
  • Emotional response is measured with desirability testing.
  • Conversion is measured with A/B testing.

Each of the three methodologies can be used to measure the effectiveness of different versions of the same content before it goes live, as well as measure what difference it makes once it is live.  This is also a really nice way to progressively reduce the number of alternatives down to the best solution.

I’ve written about usability testing and A/B testing before so I’m not going to go into more detail on that, but I do want to spend a little time on Desirability testing since it’s a method I really like, and I think it’s not used enough to measure design/content effectiveness.

Desirability testing

In desirability testing, a survey is sent to a large number of users where they are asked to rate one of the proposed design/content alternatives using a semantic differential scale.  The survey is done as a between-subjects experiment, meaning each user sees only one of the proposed designs, so that they are not influenced by the other design alternatives.  The analysis then clearly shows differences in the emotional desirability of the proposed alternatives.

So, for example, you could show one group a design and ask them how they feel about it:

And then show a different group another image and ask them the same question:

When you then compare the averages of the different groups, you’re able to make an accurate relative comparison between the two designs.
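To make that comparison concrete, here is a minimal Python sketch of the between-subjects analysis. The ratings and the “dull–exciting” scale below are invented for illustration; in practice you’d use a proper statistics package to also get p-values and confidence intervals.

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (between-subjects)."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances
    se = math.sqrt(va / na + vb / nb)      # standard error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical 7-point "dull (1) .. exciting (7)" ratings, one group per design
design_a = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6]
design_b = [4, 3, 5, 4, 4, 3, 5, 4, 3, 4]

print(f"Design A mean: {mean(design_a):.2f}")  # 5.90
print(f"Design B mean: {mean(design_b):.2f}")  # 3.90
print(f"Welch's t: {welch_t(design_a, design_b):.2f}")
```

Because each participant rated only one design, the group means can be compared directly; the t-statistic just tells you whether the gap between them is bigger than the noise in the ratings.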

Putting it all together

In summary then, to apply all of this to measuring the effectiveness of content:

  • Usability testing. Start with several different versions of the content (~10), along with the current version (if it exists).  Ask users in a lab setting what they understand the content to mean, and any other thoughts they have on the way it sounds.  This should help narrow down the alternatives to 4-6 possibilities.
  • Desirability testing.  Use the Desirability method in a large sample online survey as a between-subjects experimental design.  In the survey, ask users to rate the content on different brand and design attributes.  This way you can determine what emotional response the content elicits from users.  You’d also be able to ask users which version of the content they’d prefer, and why.  This method has the added benefit of large numbers to give you confidence in the statistical significance of the results.
  • A/B testing.  Once you’ve narrowed the alternatives down to two or three, live A/B testing can help you determine which of the alternatives performs best from a revenue or engagement perspective, by looking at differences that can be attributed purely to content changes.  This obviously works best when the content is directly related to a revenue-generating task, like the call to action on a checkout page, for example.  But it’s not just about revenue — there are great ways to measure metrics of engagement with the page, which is just as powerful.
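For the A/B step, the usual significance check on conversion rates is a two-proportion z-test. The numbers below are hypothetical, and a real analysis should also account for test duration and repeated peeking at results, but this is the core calculation:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: variant A converts 520/10000, variant B 450/10000
z = two_proportion_z(520, 10000, 450, 10000)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 95% level
```

The point is that with the sample sizes a live site gives you, even small content-driven differences in conversion can be separated from noise.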

Now, I can see a few issues that make this a pretty difficult task, and it’s the reason why the above three methods should not be used in isolation.  In combination, they help tell the whole story.

  • It is difficult to know what users really read on a page.  In the first two methods you pretty much have to show people what to read — that doesn’t happen when they visit your site organically with no one looking over their shoulder.  This is why A/B testing is so important as it gives you a sense of how behavior will change based on content.
  • It is difficult to isolate the effect of content changes from the other influencing factors on a page.  This is the really difficult part.  How do you know that conversion/engagement improved because of the content and not because of some other factor on the page, like visual design changes?  That is why it is important to keep the rest of the page exactly the same, and also why usability and desirability testing are important to bring out the perceptual data from users.
  • [Update] This method doesn’t scale well. When you are doing a major redesign or re-write, you can’t do this for every single change (as Eric Ries points out in the comments of this post).  The method is mostly suited for microcopy and incremental improvements once the base content has been written.

And the biggest problem is of course that this is an idealistic approach.  Finding the resources/time/money to do this for every content change is obviously not feasible.  But for high-value landing pages, in-line help, etc. this approach could be well worth the investment.

This is also by no means the only way to measure content effectiveness, but I think it’s a good approach that balances methodological rigor with the dangers of not overdoing it.  I’d be curious if anyone has any thoughts or ideas on how to improve on this proposal.

PS. Last week I discussed this topic at the first Cape Town Content Strategy meetup.  I uploaded the slides here, and you should join the Cape Town group here.

A few thoughts on effective icon design

Earlier this week I was struggling, as usual, to navigate my way around our Web Analytics provider, Omniture (SiteCatalyst in particular).  Specifically, I once again had to hover over every single icon in one of the views to figure out what they mean.  I was looking for a specific menu item called “Add Metrics,” which ended up not being an icon at all, but rather a text link.

Anyway, in frustration I tweeted this screen shot, and implied that if your icons require users to hover over them to figure out what they mean, you’re doing it wrong:

To their credit and my surprise, quite a few people at Omniture follow mentions of the company on Twitter, so pretty soon I got this comment on the post from one of the Product Managers at Omniture:

Hi Rian - this is great feedback you have given us. We are taking steps to address your specific point in an upcoming release of SiteCatalyst. The icons will be clearer, and tool tips will not be necessary.

But not everyone was happy.  I also received this reply from @RRS_ATL:

We moved the conversation to email, and Rudi is a really nice guy — we had a very healthy debate about the issue.  I wanted to spend a little time here to summarize my thoughts on this particular issue and icon design in general.

First, let me say that I am not against tool tips and alt text when they provide additional context for text where space is limited, or more information about an element on the page. My specific issue with the Omniture icons is that they are not easily recognizable and mapped to what they actually do.  That breaks an essential UI rule which states there needs to be a match between the system and the real world:

The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order. (from Nielsen’s 10 Usability Heuristics)

For example, as you can see in the photo above, the Omniture icon for “link to this report”  has a chart and an arrow in it - that’s simply not intuitive enough.

What I am proposing in the Omniture case is one of two solutions:

  1. Redesign the icons to be more intuitive (which, from the PM’s comment on my post, is what they’re doing), or
  2. Turn them into text links (because what’s the use of icons if you have to use text to explain them?)

Icon design is hard, and I think it defeats the purpose if your icons are not easily understandable.  In a great post on UX Magazine called Realism in UI design they discuss this issue in depth, and explain why it is so hard to get right:

People are confused by symbols if they have too many or too few details. They will recognize UI elements which are somewhere in the middle.

The trick is to figure out which details help users identify the UI element, and which details distract from its intended meaning. Some details help users figure out what they’re looking at and how they can interact with it; other details distract from the idea you’re trying to convey. They turn your interface element from a concept into a specific thing. Thus, if an interface element is too distinct from its real-life counterpart, it becomes too hard to recognize. On the other hand, if it is too realistic, people are unable to figure out that you’re trying to communicate an idea, and what idea that might be.

This only scratches the surface of icon design, so for more interesting reading on the topic, see:

To summarize, I stand by my view that if you have to hover over every single icon to see what it means, you either have to design better icons, or just use text.  But I’m open to counter-arguments for sure… What do you think?

Incorporating the right business and technology needs into product requirements (Product Management series, Part 3)

This is the third post in a series I recently started on software development and the role of the Product Manager.  If you haven’t already done so, it might be a good idea to read Part 1 (Overview) and Part 2 (How to ensure that product requirements are informed by user needs) before you read on.  This post continues the discussion on Product Requirements and the different sources that should feed into requirements.

In Part 2 of this series I discussed the role of user needs in product requirements, and in this article I’d like to talk about the role of business needs and technology needs, and making sure that the right balance is struck when incorporating these (often loud, often conflicting) voices in the organization into what gets built.  So, let’s dive in…

Business needs

When I was at eBay, we often heard the mantra from our executive team, “If you fix the user experience, you fix the business.”  Lovely words, but when it comes time to decide what to build, “Fix the business” usually comes first.  This is, of course, not a bad thing, but unfortunately the best user experience often means taking revenue-generating features out of the product.  Would we have banner ads if UX really was king?  Don’t think so…

Still, you have to make money.  That is, after all, the point of the business.  The trick is to understand the difference between good revenue streams and bad revenue streams, and opt for the good ones as much as possible.  A good case study on this is eBay’s interesting approach to photos in product listings on the site.  eBay started charging users to add photos to their  listings pretty much from the very beginning.  This was back in 1995, and in those days storage wasn’t dirt cheap, so it was a natural thing to do.

As the years went by, and more and more photo sharing services popped up that allow users to upload and store pictures for free, this approach became increasingly frustrating for users.  The other side of the story is that it’s actually in eBay’s best interest for users to upload photos of their items — items with photos convert way better than those without photos.

Still, it took many months to convince the executive team to make it free for users to upload photos of their items.  This is an example of a bad revenue stream — it brings in money, but to the detriment of users and the overall success of the business.  When it comes to adding revenue streams to your product, the important question should always be: are you doing this so people will buy it, or are you doing this so people will want to use it and be willing to pay for it?

In a recent interview on Microsoft and tablets, Steve Ballmer said the following:

And so we are working with [our] partners, not just to deliver something, but to deliver products that people really want to go buy.

And in that lies the core of what’s wrong with Microsoft — the difference between making products users want to buy vs. making products they want to use.  When you make products people want to use, charging for the value they bring (i.e., finding good revenue streams) becomes so much easier.  Approaching it from the more negative side, I guess you could also say it like this:

Technology needs

One of the dangers of product roadmaps and the PM’s role is that back-end maintenance and optimization can start to take a back seat.  This is a huge mistake, best explained through the metaphor of technical debt. In Steve McConnell’s great post on this topic, he defines technical debt as follows:

The first kind of technical debt is the kind that is incurred unintentionally. For example, a design approach just turns out to be error-prone or a junior programmer just writes bad code. This technical debt is the non-strategic result of doing a poor job.

The second kind of technical debt is the kind that is incurred intentionally. This commonly occurs when an organization makes a conscious decision to optimize for the present rather than for the future. “If we don’t get this release done on time, there won’t be a next release” is a common refrain — and often a compelling one.

He goes on to explain why this can become a problem:

If the debt grows large enough, eventually the company will spend more on servicing its debt than it invests in increasing the value of its other assets. A common example is a legacy code base in which so much work goes into keeping a production system running (i.e., “servicing the debt”) that there is little time left over to add new capabilities to the system. With financial debt, analysts talk about the “debt ratio,” which is equal to total debt divided by total assets. Higher debt ratios are seen as more risky, which seems true for technical debt, too.

Technical debt isn’t always wrong — quick hacks to get a product out the door are often the right choice.  But as with most debt, it’s important to start paying it off in small chunks as soon as it’s incurred, before you get into too much trouble.  If you’re interested in this topic, also read Andrew Chen’s great post called Product Design debt vs. Technical Debt.

Striking the right balance

Now that we’ve discussed user needs, business needs, and technology needs, the obvious question is: how do you decide what to build now vs. later vs. not at all?

For that, the answer is unfortunately, in my experience, the traditional cop-out: it depends. It depends mainly on the following factors (in no particular order):

  • The level of user engagement and involvement.  If users are screaming for a particular feature, or if there are rumblings around “why haven’t you done anything for us recently?”, it could be a good time to up the level of customer needs you meet.
  • The stage of the product in its lifecycle. If the product is just at the beginning, customer needs will most likely come first.  As the product matures, technology and business needs become more important and should start taking precedence.
  • The financial state of the business.  If there are ways to add good revenue streams, those opportunities should always be taken.

Depending on where the business is on each of these three factors, the different inputs might be weighted differently. If the product is going through a growth spurt with lots of buzz, more attention could be placed on user needs. If the product is mature and making good money, technical needs might get more weight.
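That weighting idea can be made concrete with a toy scoring sketch. To be clear, this is not a recipe — real prioritization involves far more judgment than a formula, and every number below is invented — but it shows how shifting the weights changes what rises to the top:

```python
def priority(scores, weights):
    """Weighted sum of per-input scores (0-10); weights sum to 1."""
    return sum(scores[k] * weights[k] for k in scores)

# Hypothetical weights for a mature, profitable product, where technology
# needs are starting to take precedence:
weights = {"user": 0.3, "business": 0.3, "technology": 0.4}

feature = {"user": 8, "business": 5, "technology": 2}   # user-facing polish
refactor = {"user": 2, "business": 3, "technology": 9}  # paying down tech debt

print(f"feature:  {priority(feature, weights):.1f}")   # 4.7
print(f"refactor: {priority(refactor, weights):.1f}")  # 5.1
```

With the same scores but early-stage weights (say 0.6 on user needs), the feature wins instead — which is the point: the weights encode where the business currently is, not an objective truth.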

Exactly how this is balanced in each version/release of the product has no clear answer, and it’s where the art of product management comes in. But one thing is for sure — none of these needs can be ignored for any extended period of time. Take too long to pay down technical debt, and your platform will become bloated and unable to scale. Focus on making money too much, and users will fall out of love with your product.

Successful products have clear product management leaders who are able to take all the different requirements inputs, place them into context with other external and internal pressures, limits, and opportunities, and design a product vision and a (flexible) product roadmap that ultimately increases product/market fit (which I mentioned briefly in Part 2 of this series).

But what do product requirements look like, and what is the Product Manager’s role in that process?  That will be the topic of the next post…

How to ensure that product requirements are informed by user needs (Product Management series, Part 2)

I recently started a series on software development and the role of the Product Manager.  If you haven’t already done so, it might be a good idea to read Part 1 (Overview) before continuing.  In this post I’d like to write about the first step in the development process, namely Product Requirements, and the various sources of input that go into deciding what to build and how to improve your product.

As I started writing I realized the topic is just too big for one post, so I’m going to split it up into a few different posts:

  • Part 2 (this post) will be about user needs as an input to product requirements.
  • Part 3 will be about business needs and technology needs as inputs to product requirements.
  • Part 4 will be about  the PM’s role in the Product Requirements phase.

Even though the focus here is not on what kind of product/service your company should develop and sell, I do want to briefly mention product/market fit, because it is probably the most important thing for a business to figure out.  No one talks about this better than Marc Andreessen, so I wanted to quote from one of his (now deleted) blog posts:

The quality of a startup’s product can be defined as how impressive the product is to one customer or user who actually uses it: How easy is the product to use? How feature rich is it? How fast is it? How extensible is it? How polished is it? How many (or rather, how few) bugs does it have?

The size of a startup’s market is the number, and growth rate, of those customers or users for that product.

The only thing that matters is getting to product/market fit.  Product/market fit means being in a good market with a product that can satisfy that market.

With product/market fit figured out (no easy task), and a workable product to start with, it’s time to get serious about building and improving the product — and that’s the stage where this post starts.  At the heart of a good product roadmap stands a Product Manager that is able to strike a balance between user needs, business needs, and technology needs.  So let’s look at each of those in detail, starting with user needs.

User needs

Identifying user needs is at the core of a user-centered design process, and it involves gathering feedback from the users of your product through a variety of methods to uncover unmet needs and opportunities for improvements.  This is called user experience research (UER), and there are plenty of resources available on the topic, so I’m just going to provide an overview here.

First, it’s important to mention the fundamental difference between market research and user experience research:

  • Market research seeks to understand the needs of the market in general, and is generally more focused on areas like market fit, brand perception, advertising and pricing research, and ways to uncover perceptions of the company and its products.
  • In contrast, user experience research focuses on users’ interaction with the product.  It relies heavily on observational techniques, since users are notoriously bad at describing their experiences or predicting their behavior.

There are many ways to classify different UER methodologies, but my preference is for a classification that lines up the different methods with the outcomes required by different phases of the product development process.  In that approach, there are three classes:

1. Strategic research

Strategic research is done with the goal of coming up with new product ideas and business opportunities, or preliminary design explorations during the product discovery phase to  help with the brainstorming of design solutions.  Here are some of the methods that fall into this class:

  • Ethnography.  A technique long used in anthropology that has only recently found its way into the toolkit for research on interactive products.  It involves going to users’ homes or offices, and observing how they use your products in their natural environment.  It allows the researcher to see users in action where they feel most comfortable.  Ethnography is all about observing and listening.  It is generally not task-based like usability studies, but follows a loose interview script with the goal of uncovering needs and insights that users are unable to articulate.  I have an extreme positive bias towards ethnographic research, especially in contrast to focus groups (of which I’m not a fan at all, but that’s a subject for a different post).  I have seen first-hand how ethnography sparks innovation when it shows how users make up their own workarounds for the limitations of software, which in turn reveals opportunities for product improvements.  When it comes to exploratory methods to help with product strategy and roadmaps, there simply is nothing better than a good ethnography study.
  • Participatory design.  Another favorite, this technique brings users together to solve design problems in a way that would make sense to them.  The purpose is not to take users’ designs and implement them, but to find out which elements and activities are most important to them.  My preference is to do this in dyads, where two users work together on a design.  This forces both participants to be active, and you learn as much from their conversation with each other as from the designs they put together.  The usual process is to provide users with a blank page or basic framework, cut-outs of various elements that could go on a page, and watch and listen as they make trade-offs and design decisions on what should go on the page based on how they would use the product.  This technique especially helps guide interaction design because it gives a glimpse into users’ process as they go through the site.
  • Concept testing.  This is a good way to gather feedback on an approach before wireframes or mockups are created.  Storyboards/comics are great artifacts to use for this kind of testing, since it takes design out of the process, and gathers feedback from users on the process you intend to design.  Although mostly done in-person with 6-8 users, there are also great tools for large-scale concept testing, like Invoke.  Below is an example of a storyboard one of our researchers at eBay used during early concept testing for one of our products:

Further reading on strategic research:

2. Design research

Design research includes most of the methods that are associated with traditional user experience research.  Its role is to improve and refine existing designs in various levels of fidelity.  Some of the methods include:

  • Usability testing.  This is, of course, the most well-known UER method, and what most people default to for any kind of design feedback.  Task-based usability testing in a lab is a fantastic tool, but it has become a little bit too much of a “when all you have is a hammer…” method.  Usability testing should be used to uncover usability problems with a proposed interaction design.  It should not be used for feedback on visuals, finding out which design users prefer, or uncovering new product ideas.  There are other techniques that are much better suited to that task.  Usability testing involves 1-on-1 sessions with users where the researcher observes them as they perform assigned tasks.  This kind of direct observation is a great way to understand what users would actually do on the site (as opposed to what they say they would do), as well as to uncover the reasons why they do what they do.
  • RITE testing.  Rapid Iterative Testing & Evaluation (RITE) is a very specific form of usability testing, but I wanted to call it out because it is my preferred way of testing.  It involves a day or two of focused usability testing, followed by a design cycle where the feedback is immediately injected into the design before the next round of testing.  Doing several of these cycles quickly means your outcome isn’t a bloated Powerpoint deck with a bunch of recommendations; your outcome is a better design that incorporated user feedback in real time.  As the debate continues on how UX can be more involved in Agile development, this technique should become increasingly important since it fits in perfectly with the Agile mindset.
  • Desirability studies.  Invented by researchers at Microsoft (yes, really - see this Word article where they outline the approach), this has become another favorite technique for me if I want feedback on the visual aspects of a site (not so much interaction), and specifically which visual approach users like more when there is more than one alternative.  It involves a survey to a large number of users where they are asked to rate one of the proposed design alternatives using a semantic differential scale.  The survey is done as a between-subjects experiment, meaning each user sees only one of the proposed designs, so that they are not influenced by the other design alternatives.  The analysis then clearly shows differences in the visual desirability of the proposed design alternatives.

Further reading on design research:

3. Assessment research

Often neglected, the role of assessment research is to measure the impact design decisions have made, and to evaluate success and continued areas that need improvement.  This requires larger sample sizes to ensure the ability to compare before/after metrics with statistical significance, so these methods are mainly quantitative in nature.  Methods include:

  • Product surveys.  Everyone hates surveys, but they remain an effective way to assess how design changes are affecting user perceptions of the site. Different from most market research surveys you receive (and delete) in your inbox, these surveys deal specifically with user perceptions of the interaction and design.  They’re not always effective as standalone research studies, since there is so much bias in surveys with their <5% response rates, but if you can run surveys over time, and control the sample so that the bias remains the same, they can be a very good tool to ascertain the success of your design changes.
  • Online user experience assessments.  Another favorite; tools like Keynote allow you to gather real-time click-through data as well as subjective user feedback.  They use a proxy or a browser download to give users tasks on a site, and ask them questions about the experience while their activity is being tracked.  This often produces a mountain of data, which can be quite overwhelming and not always effectively used if there is not enough time/resources available to analyze the data.
  • Analytics.  Web analytics need no introduction, but there are several tools specifically aimed at user experience, with my favorite being Razorfish’s Advanced Optimization tool for web forms.  By placing a small piece of JS on form pages, it gives you a mountain of data on how users interact with forms, including what error messages they receive, how much time they spend in each field, the last field they were on before closing the form, scrolling data, and the list goes on.  It’s a great way to improve form conversion.  I honestly don’t know why they don’t market this thing more.

So that’s an overview of user research methods — there are many more, but I wanted to focus on the ones I’ve found especially useful in my own work.  The real power of user research starts to happen when you combine methods and triangulate results to come up with a product strategy that takes a variety of quantitative and qualitative insights into consideration.  If you’re interested in learning more about this, Using Multiple Data Sources and Insights to Aid Design is a good post on the topic.

There are so many resources available on user research — your best bet to stay on top of the latest happenings in the field is to subscribe to UX Booth and UX Matters.

In Part 3, I’ll talk about the other two inputs to Product Requirements: Business Needs and Technology Needs.  I’ll then discuss how all of this fits into creating Product Requirements, and what the Product Manager’s role is in all of it.  Stay tuned!

Tech4Africa panel: How we redesigned Payfine.co.za

This week I was in Johannesburg for the debut of Tech4Africa, a conference about web technology in the African context. It was a fantastic experience, an opportunity to learn from and meet some great people, and I will most certainly be back next year (but hey, Gareth, let’s move it to Cape Town next year!). Yes, there were the usual small conference hiccups, but nothing that can overshadow the importance and significance of this event.

The mere fact that we were able to listen to speakers like Clay Shirky, Andy Budd, and John Resig, as well as some top South African thinkers & doers, and discuss with them the uniquely challenging opportunities that exist here in Africa, made this conference a winner. The content was mostly great, but the conference was more than that — it was about being inspired and energized about being in this industry, at this time, in Africa. You should follow Tech4Africa and its head organizer, Gareth Knight, for updates on the conference. And no matter where you live, you should attend next year. This event is here to stay.

I also had the great opportunity to speak on a web design panel with Allan Kent, Basheera Khan, and Mike Lewis. We took a User-Centered Design (UCD) approach to redesigning Payfine.co.za, a web site that allows South Africans to pay the many traffic fines they get every… well… every month or so.

We had never met each other before the conference, and we were all in different locations. So, since we had to do this remotely and in our limited spare time, we broke the process up into three distinct user experience elements and each took responsibility for one of the tasks: content strategy (me), interaction design (Bash), and visual design (Mike). We collaborated a lot along the way, but we decided to each lead the creation of one piece of the puzzle, and then put it all together in a coherent story (this was Allan’s job!).

The end result? Well, you should judge for yourself. Here is what Payfine currently looks like:

And this is the proposed redesign:

The one thought I want to pull out above the rest about this process, is that UCD is not rocket science. It’s not easy, but it’s not rocket science. There is a process, and there are rules (they can be broken, but they help focus the design process).

But. It does require a mind shift (I hate that word — can anyone suggest something different?) in the African web space — a realization that the interfaces we currently have on our banking sites, our e-commerce sites — even our entertainment sites — are simply not good enough. And it requires a commitment by those companies to invest in the user experience of their sites, because it will have a positive effect on the business.

Below are the slides we went through during our session, which I amended to make it a little bit easier to read without the voice-over we provided. We’re all happy to answer any follow-up comments or questions on this, so please let us know if you have any. And I really want to thank the rest of the team — this has been a great experience — let’s do it again!

How we redesigned Payfine.co.za - Tech4Africa

The potential and dangers of 'squirrel projects'

In one of his characteristically brilliant essays, Paul Graham recently wrote:

I think most people have one top idea in their mind at any given time. That’s the idea their thoughts will drift toward when they’re allowed to drift freely. And this idea will thus tend to get all the benefit of that type of thinking, while others are starved of it. Which means it’s a disaster to let the wrong idea become the top one in your mind.

The importance of focus in a startup, or any other business for that matter, is such a basic principle that no one disagrees with it, yet it remains a difficult thing to get right.  One of the reasons is that you don’t want to stifle innovation, and some of the best ideas can come from a completely random project you went off to do in your spare time.

Whatever your feelings are about side projects that take you off your main focus, it is important to recognize them for what they are: distractions.  This doesn’t necessarily mean they’re bad, but let’s call them what they are: these projects distract you from your “top idea.”

For the products I’m responsible for at Yola, we have a name for such distractions.  We call them “squirrel projects.”  If you’ve seen the movie Up, you’ll probably immediately know what I’m talking about.  If not, here’s a refresher:

I don’t think “squirrel projects” need more definition than that video…  So, when one of your team members goes off on a sometimes-random-but-always-guaranteed-to-be-cool tangent, it’s important to do two things:

  • Call it out as a squirrel project
  • Determine whether or not it’s a squirrel worth hunting

Figuring out if it’s a squirrel worth hunting depends mainly on:

  • The timing of the project
  • The potential value of the idea

I’d say that 2 days before release day is a pretty bad time to go squirrel hunting.  But what if you’re at the beginning of a sprint and something great comes along?  Adjust.  Reprioritize. Throw some things on the backlog, and make room.  Because sometimes, it’s worth it.

It’s also important to note that “value” doesn’t necessarily mean immediate ROI.  There are different ways to get value out of a squirrel project.  Sometimes it’s the potential for revenue down the road.  Sometimes it’s the time spent now on automation tasks that will save you a bunch of time later.  And sometimes, it’s just plain cool (two words and a hint for something you should try on Yola: Konami Code).
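For the curious, a Konami Code easter egg is a small sequence-detection exercise. This is just a generic sketch, not the actual Yola implementation (which isn’t shown here); the `konamiListener` helper name is my own invention:

```javascript
// The classic Konami Code sequence: up, up, down, down,
// left, right, left, right, B, A.
const KONAMI = [
  "ArrowUp", "ArrowUp", "ArrowDown", "ArrowDown",
  "ArrowLeft", "ArrowRight", "ArrowLeft", "ArrowRight",
  "b", "a",
];

// Returns a stateful function: feed it key names one at a time, and it
// fires the callback once the full sequence has been entered in order.
function konamiListener(onUnlock) {
  let progress = 0; // how many keys of the sequence have matched so far
  return function (key) {
    if (key === KONAMI[progress]) {
      progress += 1;
      if (progress === KONAMI.length) {
        progress = 0; // reset so the code can be entered again
        onUnlock();
      }
    } else {
      // Simple reset on a wrong key; let it count as the start of a
      // fresh attempt if it matches the first key of the sequence.
      progress = key === KONAMI[0] ? 1 : 0;
    }
  };
}
```

In a browser you would wire it up to keyboard events, e.g. `const onKey = konamiListener(showEasterEgg); document.addEventListener("keydown", (e) => onKey(e.key));` — at which point typing the sequence anywhere on the page triggers the surprise.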

Squirrel projects aren’t bad.  But they can be devastating to your focus and momentum if they happen at the wrong time, and/or they have no potential for value.  So go hunt the good ones, and let the bad ones go.