The connection between user experience and brand loyalty

I recently attended a brand presentation where the video below was shown. It’s pretty funny, and also a perfect example of how interactive products and consumer-generated content should fundamentally change our traditional views of customer loyalty. Loyalty in our current environment is fostered through repeated great (user) experiences, not just through advertising and coupons.

But even though I like the general point the video is trying to make, I think it stops a little short of the real issue. It is saying that we should listen to our customers better. But that’s not enough — we need to understand customers in ways they don’t even understand themselves, and then build experiences that meet unmet (and sometimes unconscious) needs through repeated, positive experiences that deepen the customer-company relationship.

Uncovering these needs happens not just through “Voice of the Customer” research programs, but also through more contextual research efforts like ethnography and contextual inquiries (combined with validating quantitative research). I believe this is where traditional Market Research programs like NPS (Net Promoter Score) only tell a part of the full brand loyalty story (albeit an important part, for sure).  There is evidence that the tide is turning on this topic as the field of HCI (Human-Computer Interaction) becomes more mainstream and user experience research techniques become more accessible.
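Since NPS comes up here, it may help to see how little the metric itself contains. NPS reduces a 0–10 "would you recommend us?" survey to a single number: the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch (the ratings are made up for illustration):

```python
def net_promoter_score(ratings):
    """NPS = % promoters (ratings 9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Seven hypothetical survey responses: 3 promoters, 2 passives (7-8), 2 detractors.
print(net_promoter_score([10, 9, 9, 8, 7, 6, 3]))  # ~14.3
```

One number like this can track loyalty over time, but it says nothing about *why* customers feel the way they do, which is exactly the gap contextual research fills.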

There is a powerful synergy in discovering how to deepen true customer loyalty through collaborative efforts between Market Research and User Experience Research, and we need to bring these two disciplines closer together (this view is also very much in line with the thinking described in the excellent Adaptive Path essay The Long Wow).

Visual design clutter index for web pages

I’ve been working on a project where we’re trying to come up with a way to establish a visual design “clutter index.”  The goal is to see whether there is some threshold beyond which web page clutter impacts business metrics like conversion and click-through rates.  The challenges are numerous, of course, and mainly center on the following three areas:

  • The definition and measurement of clutter.  There are a variety of ways to measure clutter on pages, ranging from the completely objective (e.g., % of white space on a page) to the completely subjective (e.g., how users rate the page on a clean vs. cluttered scale).
  • The definition of conversion.  Since some pages on an e-commerce web site are revenue-generating, and others aren’t, an important question is how you define conversion.  For revenue-generating pages (e.g., pages with a “checkout now” button) this is easy — “Did the page result in a sale?”  For other pages, like product information pages, this measure won’t work, so some other measure of engagement with the page becomes necessary.
  • Controlling for other influencing factors.  In conjunction with the first two points comes the problem of causality vs. correlation.  Assuming you have your definitions of clutter and conversion nailed down, how can you be sure any changes you see in conversion are caused by clutter (a causal relationship), and not by some other factor you aren’t accounting for (correlation without causation)?
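To make the first bullet concrete, here is a hedged sketch of the most objective end of the spectrum: the "% of white space" metric, computed over a page screenshot represented as a grid of grayscale pixel values. The threshold value and the toy page are my own illustrative assumptions, not part of the project.

```python
def whitespace_ratio(pixels, threshold=250):
    """Fraction of pixels at or above `threshold` (treated as near-white background)."""
    total = sum(len(row) for row in pixels)
    white = sum(1 for row in pixels for p in row if p >= threshold)
    return white / total

# A toy 2x4 "screenshot": six near-white pixels, two dark (content) pixels.
page = [[255, 255, 252, 10],
        [255, 0, 255, 251]]
print(whitespace_ratio(page))  # 0.75
```

Even this simple metric involves judgment calls (what counts as "white"? does a pale background image count?), which is part of why a purely objective clutter index is harder than it first appears.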


Using Twitter to value online information

I have recently noticed an interesting trend among the people I follow on Twitter. It appears that my network is dividing itself neatly into two camps: those who care deeply about the content they publish, and those who use it more casually. Let me explain…

Saying “good night” to everyone you know

Twitter users who casually update their status without thinking about it too much say things like “Yep,” “Good night tweeple,” and “Banging my head against the desk”: cryptic updates that can be quite difficult to figure out. I’m not saying this is necessarily a bad thing. It’s just clear that some people view Twitter as a broadcast medium mainly meant for people they know in the real world, and that’s fine (I tend to think that’s what Facebook is for, but let’s not split hairs about this).

I’m also not suggesting that all tweets should be serious — the odd random or exasperated update can be interesting, enlightening, and often very funny, and it also shows that there’s a real person at the other end. I do follow a lot of these casual users, but I know all of them personally so their updates are meaningful to me. And of course there is always the option to stop following someone, so you only have yourself to blame for the content you receive on Twitter.

But then there are those who care a lot about what they say…

The dangers of “test and learn”

A recent discussion on a user experience forum I participate in turned to the topic of A/B testing.  I really enjoyed the conversation, so I wanted to reiterate some of the points I made and expand on them a little as well.  It’s not my goal to define A/B testing here, but to share my opinion on its use.  I believe that even though A/B testing can be extremely valuable in helping to identify the best iteration of a site or a particular page, it should never be used in isolation.

Since A/B testing is relatively cheap to do and the results are so compelling, companies are in danger of adopting a “test and learn” culture where pages are just A/B tested with no additional user input.  That would be the wrong way to go.  A/B testing shouldn’t be used on its own to make decisions; it should always be used in conjunction with other research methods — both qualitative (such as usability testing and ethnography) and quantitative (such as desirability studies).
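Part of what makes A/B results feel "compelling" is that the statistics are straightforward. As a hedged illustration (the traffic and conversion figures below are made up), the standard read-out for a conversion A/B test is a two-proportion z-test:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates conv_a/n_a vs. conv_b/n_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (built from math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up example: variant B converts 150/2400 vs. A's 120/2400.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(round(z, 2), round(p, 3))  # roughly z ≈ 1.88, p ≈ 0.06
```

Note what the test does and doesn’t tell you: B may beat A with borderline significance, but nothing in this arithmetic explains *why* users behaved differently — that is precisely the question the qualitative methods above are there to answer.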


