Netflix doesn't know me: How I lost faith in recommendation engines

When Netflix first came out with their movie recommendations, I thought it was a great idea. I started rating movies I’d seen — good and bad — confident that the brain behind it all would do its magic and recommend some hidden movie gems that would, you know, change my life. Well, I’m still waiting for those movies. And to be honest, I’ve become a little frustrated with the whole thing.

Describing the latest example I encountered will reveal how much I liked a movie that I probably have no business liking, but I’m willing to sacrifice a little bit of my reputation in the name of science, or whatever this is…

The first problem I encountered is a pure UI issue, and has to do with how Netflix shows the star movie ratings on their pages. As an example, this is what I see for the movie August Rush in my queue:

You would assume that the customer average rating is just over the 3-mark, right? Well, looking at it closer, it turns out that Netflix shows you a rating they call “Our best guess” (3.4 in this case), instead of showing you the customer average (4.1 in this case):

Here’s the problem. I loved this movie. I’m giving it 4 stars. But since Netflix doesn’t know that I have a soft spot for modern musicals (despite how highly I rated the movie “Once”), the “Netflix brain” didn’t think I would like this movie as much as the average customer.

This is a problem you see often on sites where the UI does not give proper user feedback about what it’s showing you.  It took me a few weeks to realize they’re showing me “Our best guess” in search results, and not the true customer average. Now I have to mouse over to see the true average every time. Why? Because I don’t trust the brain any more. (By the way, this is just one example, but as I’ve looked into it more, I realized it’s a systemic problem for me — Netflix’s best guess is rarely in line with my tastes).

Incidentally, on Amazon.com the average user rating for this movie is 4.5 out of 5 stars. Pretty good. So this is the problem then: there is such a wide range of tastes out there that it’s hard to know whom to trust. This is the problem Netflix is trying to solve — look at “users like you” and then show you that average instead of the overall average. You’re therefore initially more inclined to believe the “best guess” rating provided by Netflix than the average consensus provided by all users. It’s a good idea, but the implementation doesn’t seem to be there yet.  (The discussion about the validity of 5-star ratings in general is a separate and very interesting one.)

I say all this to make a simple point — it appears that the collective wisdom of all users does a better job of predicting if I will like a movie than the recommendation engine provided by Netflix. The question is whether it would ever be possible for recommendation engines to get to know you well enough based on your preferences. Maybe if it takes into account not only your movie interests, but also music, books, online activity, etc.? Yes it sounds creepy, but how else would Netflix know how much I like strange modern musicals?
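
To make the distinction concrete, here is a minimal sketch (in Python, with made-up users and ratings) of the difference between the plain customer average and a “users like you” best guess, in the spirit of user-based collaborative filtering. This is an illustration of the general idea only, not Netflix’s actual algorithm.

```python
import math

# Toy data: user -> {movie: stars}. All names and ratings are made up.
ratings = {
    "me":  {"Once": 5, "Juno": 4},
    "ann": {"Once": 5, "Juno": 4, "August Rush": 4},
    "bob": {"Once": 2, "Juno": 3, "August Rush": 3},
}

def customer_average(movie):
    """The plain average rating every customer sees."""
    scores = [r[movie] for r in ratings.values() if movie in r]
    return sum(scores) / len(scores)

def similarity(u, v):
    """Cosine similarity over the movies both users rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    dot = sum(ratings[u][m] * ratings[v][m] for m in common)
    norm = math.sqrt(sum(ratings[u][m] ** 2 for m in common)) \
         * math.sqrt(sum(ratings[v][m] ** 2 for m in common))
    return dot / norm

def best_guess(user, movie):
    """A 'users like you' estimate: similarity-weighted average
    of the ratings given by other users who saw the movie."""
    pairs = [(similarity(user, v), r[movie])
             for v, r in ratings.items()
             if v != user and movie in r]
    if not pairs:
        return None  # nobody else has rated it
    weight = sum(s for s, _ in pairs)
    if weight == 0:
        return sum(r for _, r in pairs) / len(pairs)
    return sum(s * r for s, r in pairs) / weight
```

With data like this, the best guess leans toward the ratings of the most similar user, while the plain average weights everyone equally, which is exactly where the two numbers can diverge.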

7 Essential Productivity Tools for Product Managers

As Product Managers, our job is to gather information from a variety of different sources, make sense of it all, and then turn it into cohesive product visions and execution plans that end up growing the business exponentially (yes, we’re superheroes).  And we wouldn’t do it if we didn’t already love bringing order to chaos.  But sometimes we need a little help.  Below is a collection of software (mostly Mac-based) that I have found essential in my day-to-day PM work, and that helps me always have a handle on what is going on in my projects.

I have broken these down into three categories:

  • Tools for project management. These are the programs that are always open on my Mac.  It starts with a high-level overview of all projects, and progressively gets into more detail and specifics.  I can’t imagine staying on top of all my parallel tasks without these.
  • Tools for wireframing. No designer wants a PM to tell them what a design should look like — and for good reason: it’s not our job.  But sometimes you want to put some of your design thoughts on paper, without being too prescriptive on the execution.  These tools help you do that.
  • Tools for collaboration. These are the tools that increase productivity by freeing documents from your hard drive and putting them in the cloud so you can work on them in collaboration with other stakeholders.

Tools for project management

1. OmniPlan ($150 from The Omni Group)

OmniPlan is the Mac version of Microsoft Project, except that it’s a lot faster to use, so you don’t end up abandoning it in the middle of every project out of sheer frustration.  It lets you easily add projects and tasks, track progress, and add specific notes about each action if you need a little more detail.

2. OmniFocus ($80 from The Omni Group)

Describing OmniFocus as a fancy To-Do list (which it is) would be doing it a huge disservice.  It was designed from the ground up to make it easy to input thoughts very quickly (into the “Inbox” area), and then you can separate those thoughts into Goals vs. Tasks.  The tasks are then easily separated into Projects (which work like folders) and Contexts (which work more like tagging).  It is easy to switch views between Projects, Contexts, Flagged items, Urgent items, etc.  I basically start and end every day with OmniFocus.

One of the other huge advantages of using OmniFocus is the iPhone app.  It’s expensive ($20), but well worth it.  One complaint I do have is that it’s not as easy to sync as it should be, which is disappointing.  The only free way to sync OmniFocus Mac with the iPhone App is through sharing on a Wi-Fi network.  There is no central database that syncs automatically between devices.  But this is my only gripe with it.  As long as you have your iPhone on and connected to the same wireless network as the Mac, it works like a charm and cross-syncs beautifully.

3. Evernote (free for a basic account)

I was resistant to using Evernote at first, because I really didn’t know what I would use it for.  Now I’m not sure how I ever got anything done without it.  This is the ultimate cloud application.  It syncs seamlessly between the web site, other computers where you have it installed, and the iPhone app.

Yes, I know, it’s just software for taking notes.  But I use it in so many ways.   Meeting notes, web clippings (get the Firefox plugin!), photos of whiteboard drawings… the list goes on and on.  And the fact that it immediately syncs with your account means that your notes are accessible on all your devices, which really helps when you eventually sit down at your desk and have to make sense of all the stuff you put in there during the day.  Also, the price is right!

Tools for wireframing

4. OmniGraffle Pro ($200 from The Omni Group)

This is the Mac’s answer to Visio (except, you know, better again).  Whenever I start working on a new project, OmniGraffle is my tool of choice to diagram the existing flow and any proposed changes.  I also use this to provide a more visual representation of any data that we have on any of the flows/pages - analytics, CS, user research, etc.  This really helps to get all stakeholders on the same page so you can solve for the right problems.

5. Balsamiq Mockups ($79 for desktop version)

As I mentioned at the start of this post, PMs need to be careful about producing mockups.  But that’s what makes Balsamiq such a perfect piece of software.  It is an easy-to-use, low-fidelity mockup and wireframing tool that allows you to get ideas on paper without any visual design elements.  This helps you and the designers get on the same page without stepping on each other’s toes.

Tools for collaboration

6. Google Docs (free)

I have been Microsoft Office free for a while now, and I haven’t missed any part of it.  Google Docs allows you to be truly collaborative on your documents.  You can start a document and other stakeholders can add to it, comment on it, change it, and it’s all saved in real time.  One of the best features is that multiple users can edit at the same time.  This means that, for example, PMs, designers, and engineers can work on the same document, and come out of a meeting with a finalized spec.

One drawback is that Google Docs has very limited version control, so a word of caution: use file names wisely to provide your own form of versioning.

7. Dropbox (free for 2GB of storage)

I haven’t met anyone who has used Dropbox and didn’t fall in love with it.  Dropbox is how you would design file storage if the personal computer hadn’t started out with hard drives.  It allows you to store your files in the cloud and access them from any computer — and from your iPhone with the free app.

The iPhone app needs some improvement, particularly to allow you to add folders as favorites for offline viewing, but that is a small complaint.  Dropbox basically means that you can work from anywhere.

Now go and be productive

So that’s my list.  I’d love to hear what other PMs are using to stay on top of their projects, and what your experiences have been with using the software in this post.  Let’s go be (organized) superheroes now…

Email is dead. Long live email.

There has been growing discontent with email over the past year or so, but it appears that many people’s hatred for this particular form of communication has now finally started to boil over.  Several articles and blog posts over the past few weeks lamented the death and/or evilness of email in no uncertain terms.  In this post I go into a few highlights from said email hatemail, followed by some thoughts on why we shouldn’t be so fast to close down our email accounts.

The problem with email is…

First, a disclosure.  The excerpts below are just that: excerpts.  While I attempt to keep the context and the original intentions of the authors intact, I encourage you to read all these articles in their entirety.  They’re not only thoughtful and well-written, but they also lay a solid foundation for what I think is a very worthy and much-needed debate.

In the article Why Email No Longer Rules…, the Wall Street Journal announces that email is king no more:

But email was better suited to the way we used to use the Internet — logging off and on, checking our messages in bursts. Now, we are always connected, whether we are sitting at a desk or on a mobile phone. The always-on connection, in turn, has created a host of new ways to communicate that are much faster than email, and more fun.

Caught up in Google Wave frenzy, Techcrunch laments the following in Google Wave And The Dawn Of Passive-Aggressive Communication:

Google Wave is not just a service, it is perhaps the most complete example yet of a desire to shift the way we communicate once again.  For many of us, email is simply not cutting it the way that it used to. It’s a sedentary beast in a fast-moving web. It uses old principles for management, and this is leading to overload.

Sticking with Techcrunch, in Relevance Over Time, Nik Cubrilovic argues that email sacrifices relevance in order to present items in a chronological order:

Chronological order needs to be abandoned in favor of relevance. Without relevance, our ability to manage large sets of information is inefficient. The technology for relevance exist today, for eg. spam filters are able to tell us what we definitely don’t want to read. Real world information retrieval and organization is based on relevance, either what somebody else believes is relevant to us, or what we decide is relevant. Newspaper stories are not laid out in the order that events took place and libraries do not catalog their books in the order they were published.

Jeff Atwood, in a post entitled Email: The Variable Reinforcement Machine, explains why he thinks email kills productivity:

Oh, sure, we delude ourselves into thinking we’re being extra-productive by obsessively checking and responding to our email, but in reality we’re attending too frequently to our own desire for gratification and sabotaging our own productivity in the process.

Why email is essential in business communication

After reading each of these articles, the same question kept coming to mind: How do these authors use email? They certainly don’t use it the same way I do.  Because I simply cannot imagine replacing email with Twitter and Facebook - and even Google Wave.  As far as I can tell, here are the major complaints about email:

  1. Email is not real-time enough. I don’t understand this complaint at all.  How is Twitter more real-time than either sitting at your desk with your email client open, or checking your BlackBerry for new messages?  Yes, Google Wave lets you see people type in real-time, but do we really need that?
  2. Email is not dynamic enough. I don’t want email to be dynamic.  Email is a way to communicate static thoughts.  Tools like Google Docs, Dropbox, and Versionshelf are there for collaboration.  But email is a linear record of events and discussions, which is essential if we want to preserve any kind of sanity in business communication.
  3. Email is chronological, not relevant. This complaint perplexes me the most.  If email isn’t relevant, you may want to write different emails, or just spend a little time setting up a few filters to get rid of Hilton HHonors statements and other useless newsletters.  Chronology brings order.  Even though the most important things might not be at the top of your inbox, the timestamp is an important element in helping us separate the urgent from the important.
  4. Email reduces productivity. More than being on Twitter all day reduces productivity?  I’d like to see how productive people are who do business in 140 characters.
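
To be fair to the relevance crowd, their idea is simple enough to sketch. Here is a toy Python scorer (with hypothetical senders and weights) that blends a sender’s importance, proxied by how often you reply to them, with how recently the message arrived; note that even here, the timestamp still does a lot of the work.

```python
from datetime import datetime, timedelta

# Hypothetical reply rates per sender; unknown senders get a default.
REPLY_RATE = {"boss@example.com": 0.9, "newsletter@example.com": 0.05}

def relevance(sender, received, now, half_life_hours=24.0):
    """Sender importance, decayed exponentially by message age."""
    age_h = (now - received).total_seconds() / 3600
    recency = 0.5 ** (age_h / half_life_hours)
    return REPLY_RATE.get(sender, 0.2) * recency

now = datetime(2009, 11, 1, 9, 0)
inbox = [
    ("newsletter@example.com", now - timedelta(hours=1)),
    ("boss@example.com", now - timedelta(hours=20)),
]
# The older message from an important sender outranks the fresh newsletter.
ranked = sorted(inbox, key=lambda m: relevance(*m, now), reverse=True)
```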

In short, I’m just not ready to give up email.  It serves as a very effective To Do list for me.  It allows for accurate and extensive documentation when needed, as well as quick decision-making with a variety of stakeholders.  Long live email.

Toward a universal model for software product development

Introduction

I’ve seen a lot of product development processes over the years, and as with most things, it’s important to take what works for your organization and leave what doesn’t.  These processes go by different names, mainly various combinations of the words Product Development Lifecycle (PLC or PDLC).  There is also a common thread through most product development models, and this article assumes that the reader has a basic understanding of the five general steps of product development.

Great work has been done on fine-tuning the details of the PDLC, but it has always bothered me that there is not one universal model for software product development that fits two main criteria:

  • Specific enough so that it gives real and practical guidance for product managers and organizations on how to design and develop good product.
  • General enough so that it can be applied to all different types of software development methodologies (Agile, Waterfall, Spiral, etc.)

And that is what I set out to do in this article.

A universal model for software product development - first draft

So, here is what I have come up with so far.  I hope that this initial attempt at a universal model for software product development can generate some discussion on what works and what doesn’t, what’s missing, how it can be refined, etc.  Below the illustration I go into more detail on each aspect of the model.

There are a few assumptions that are important to note about this model:

  • Regardless of the development methodology, representatives from Product, Marketing, Business, Design, and Engineering should be involved to some extent in each step, all the way to the writing of the technical specifications, after which it largely becomes an Engineering and Product effort.
  • Having said that, it is important for one person to own this process (i.e. be accountable for its success) from start to finish, and that person should be the Product Manager.  A good product manager is not a dictator.  He/she is a facilitator between all the different stakeholders in a product, and the really good ones are able to get through this model on time and on budget, every time and with as much consensus between groups as possible.
  • This model is designed to work for any organizational structure, project size, and timeline.  If the project is large, more time can be spent on each step.  If the project has a tight timeline, it doesn’t mean that you will skip the “Iterate” part of “Design + Iterate.”  It just means that you will spend less time on it (more on that later).
  • I highly recommend reading Adam Nash’s post entitled “Guide to Product Planning: Three Feature Buckets.”  He recommends that every feature release should have a combination of what he calls Customer requests, Metrics movers, and Customer delight.  I am in total agreement with him, and this is an important underlying assumption for this model to work effectively.

Detailed discussion of the different aspects of the model

Let me start by saying it is possible to write a separate blog post about each of the bullet points below.  My goal here is to be general and make one or two points about each aspect.  If there is interest I could expand further on any of the concepts in the model.  So with that said, let’s look at some definitions and implications of the model:

  • The starting point - identifying needs.  At the beginning of any project (new or iterative), it is important to gather and synthesize input from three different sources:
    • User needs.  Everyone needs to have a good understanding of the market, the target segments, and their behaviors and attitudes.  There’s not enough room to go into detail here, but it is important to look at four sources of user input: market research (segmentation, etc.), user experience research (usability studies, ethnographic explorations), site analytics (behavioral analysis), and customer support (call transcripts, interviews of CS reps, etc.).  Having this common understanding allows the organization to build products that matter to users.
    • Business needs.  User experience practitioners too often neglect the fact that, well, your site has to make money!  Revenue goals are not a good excuse for bad design, but attainable revenue goals are essential to push the organization to design good product.
    • Technology needs.  Engineering needs to be at the table from the start.  They know the limitations of the product, they know what needs to be fixed, they know where the architecture can improve.  Their up-front input is essential in good product development.
  • Problem statement (Requirements).  I am indebted to Pragmatic Marketing, particularly a post entitled “On Reqs and Specs: The Roles and Behaviors for Effective Product Definition,” for the definitions I use for the three different documentation outcomes in this model: Requirements, Functional specifications, and Technical specifications.  The first outcome from the discussion and synthesis of needs is a common understanding of the problem statement you are trying to address, which takes the form of Requirements.   A requirement is simply a short statement of the problem, and Pragmatic Marketing recommends the following format:  “Our preference is the Requirements That Work format: [Persona] has [problem] with [frequency]. It forces product managers to explore the problem, not the solution, and helps the design team understand the context of the problem.”
  • Diverging and converging design thinking. Once the problem has been defined and agreed upon, the team starts thinking about solutions.  There are three important aspects of this phase, which is often called Product Discovery:
    • Start with blue sky ideation (divergent thinking).  At this point, don’t limit solutions by what is technically or otherwise feasible.  Spend time dreaming about the product - this is where innovation happens!
    • Relentlessly prioritize (convergent thinking).  This is the part of the process where nonsensical ideas are thrown out, and the team consolidates around a few possible solutions to the problem that can be further explored.  Remember: there is no commitment to implementation or specific designs yet at this phase.
    • Apply the technology filter only after the ideation phase. I wanted to explicitly call out the role of Engineering during the solution discovery phase.  It is a mistake to bring Engineering in too late in the process.  There is a very important technology filter that needs to be applied during prioritization.  What is technically feasible?  If something is currently not feasible, what would it cost to build the right architecture?  Those early inputs can save a lot of headache down the road.
  • Functional specifications. The second documentation outcome from this model is the Functional specifications, which Joel Spolsky defines as follows: “A functional specification describes how a product will work entirely from the user’s perspective. It doesn’t care how the thing is implemented. It talks about features. It specifies screens, menus, dialogs, and so on.” Note that we’re not talking about technical implementation yet; that comes later.
  • Design and Iterate.  Everyone designs a product, but it is sad to see that when time/budget gets tight, iterating on it before it goes live is often the first part of the process to fly out the window.  It cannot be overstated how important it is to prototype and test your designs before they go live.  And not having time is really no excuse.  If you have no time, make a paper prototype and test it with three of your friends over coffee in the evening.  You’d be surprised how much value this can add.  Boxes and Arrows has a great article on prototyping and how to integrate it with your design process that’s well worth the read.
  • Technical specifications. Only once you have a set of Functional specifications and a Prototype of some form (even if it is very low fidelity) is it time to work on implementation.  Quoting Joel Spolsky again, “A technical specification describes the internal implementation of the program. It talks about data structures, relational database models, choice of programming languages and tools, algorithms, etc.”
  • Build and launch. If you’ve done the upfront work, this could actually be the easy part.  You have a set of requirements, a prototype, and a detailed set of specifications to work with, so now it’s time for the Engineers to work their magic and build the product.
  • Assess.  The assessment phase is extremely important and often overlooked.  It is important to define your measures of success upfront, and then follow up to see if you’ve met those goals.  How do users respond to the product?  Are we meeting revenue/engagement goals?  Are there bugs that need to be fixed?  What can we learn from how users interact with the product to give us ideas for new products?  I’m also an advocate for using the same four sources of input we discussed earlier (market research, user experience research, site analytics, and customer support), as opposed to relying on only one methodology, like a 3-week A/B test.  More on the dangers of that in one of my earlier posts.
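
As a side note, the Requirements That Work format mentioned above is rigid enough that you could capture it in a tiny template. Here is a hypothetical Python sketch (the persona and problem are invented purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One requirement in the '[Persona] has [problem] with [frequency]' format."""
    persona: str
    problem: str
    frequency: str

    def __str__(self):
        return f"{self.persona} has {self.problem} with {self.frequency}."

# Invented example, purely for illustration.
req = Requirement(
    persona="A frequent business traveler",
    problem="trouble finding a rebooking option",
    frequency="every canceled flight",
)
print(req)  # A frequent business traveler has trouble finding a rebooking option with every canceled flight.
```

The point of the structure is the same as Pragmatic Marketing’s: it forces you to state the problem, not the solution.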

Implications and final thoughts

I think this is a decent first attempt at a universal model for product development that can be applied to any timeline or organization type, but it is far from perfect.  I hope that the product management community can come together and refine this so that we can end up with a credible, relevant, and useful model for building great product.

Eventually, I’d like us to build out this model so that it is more easily adjustable based on certain constraints that might exist within a project, such as budget, time, development methodology, and team size.  But let’s see if this one passes the smell test first…

What MSN Mobile can teach us about good design

As designers and user experience practitioners, most of us can look at web interfaces and immediately tell the good ones from the bad ones.  The good ones are usually an indication of execution built on a collaborative and equal effort between different groups and stakeholders.  The bad ones usually point to one of two things:

  • The site was chopped up and different teams owned different parts of the same pages without a clear plan for holistic design; or
  • Somewhere along the line relationships turned sour, decisions got escalated, and one of the groups/stakeholders won a contentious argument about the design of the product.

I came across one such example of poor execution today while browsing the MSN Mobile web site on my iPhone.  Here is what I saw:

Notice that there appear to be four editable form fields on this screen:

  1. The Safari address bar
  2. The Safari Google search bar
  3. The small MSN search box
  4. The big Bing search box

There should be little confusion with the first two fields — they are part of the Safari browser and clearly not part of the mobile web content.  However, the other two search bars present several problems.  Notice how both have the little magnifying glass search icon, indicating that you should be able to use both to search.  The questions users will ask themselves are, which search box should I use?  Do I want to search MSN or Bing?  What’s the difference between these two search boxes?

I’m sure most of you have also figured out what’s actually going on here — the Bing search box is not a search box at all, it is a clickable advertisement (pre-filled with “Miley Cyrus” for some reason, but let’s leave that out of the discussion for now).  It’s also clear from what we know about affordance that the user is encouraged to use the Bing search box due to its prominence, size, and iconography that’s consistent with search behavior across the web.  I think we can all agree that this is just bad usability, plain and simple, but I also started thinking about how something like this could happen.  I can think of two scenarios:

  • Informational content and advertising are owned by different groups. It is very likely that the advertising team were given the top banner placement on this page, with not much oversight about what they put in there.  They likely have their own designers who design for the advertising team, with no need or desire to think about the context of the entire page.
  • Revenue goals trump good usability.  Another possibility is that the designers are fully aware of this issue, but that Bing, possibly a different business unit than MSN Mobile, has strong revenue goals that they have to meet for their advertising campaigns.

Or it could, of course, be a combination of these two scenarios.  I think we can learn some very important design lessons from this seemingly innocuous usability flaw:

  • Never design in isolation.  This is such a simple principle, but it is still so remarkably easy to guess a company’s organizational structure just by looking at their web site.  Siloed design is one of the easiest design problems to fix, but it does take some courage: strong product management, a sincere desire to collaborate across business units, and an executive mandate to make it happen. All the MSN team had to do was get the designers/PMs together and design a Bing ad that fits with the page structure and doesn’t cannibalize search queries that should go through the MSN search box.
  • Aggressive revenue goals are not an excuse for bad design. As a user experience product manager, I am a realist and completely in favor of feature prioritization and pushing for meeting aggressive revenue goals.  But revenue-generating features should never be implemented at the expense of the usability of your web site. Too often we see examples of poor implementation of an interface because the team couldn’t find a way to reconcile the business goals with proper user-centered design.  I believe the MSN Mobile example is such an occasion.

But wait, there’s more!  Unfortunately, the MSN Mobile example then goes from bad to worse.  Clicking anywhere in the Bing ad brings up this page:

There’s just not much you can say about that.  At this point you’ve lost customers who could have done their searches through MSN Mobile.  Game over, everybody loses…

How to increase the value you get out of social media

A common complaint about social networks is that they insulate us by only showing us information we’re already likely to agree with. This solidifies our existing confirmation biases and makes us less likely to see the value of other viewpoints. It’s a legitimate concern, but we only have ourselves to blame. The problem is that if we don’t follow enough people from different types of networks, we’re always going to see the same type of information over and over.  And in this fundamental point also lies the best way to get the biggest benefit from social media.  So stick with me as we discuss some sociology theory, which I promise will lead to some practical implications in the end.

First, a little background on Structural Hole Theory.

Structural Holes Defined

Ronald Burt’s theory of “structural holes” is an important extension of social network theory, which argues that networks provide two types of benefits: information benefits and control benefits.

  • Information benefits refer to who knows about relevant information and how fast they find out about it. Actors with strong networks will generally know more about relevant subjects, and they will also know about it faster. According to Burt (1992), “players with a network optimally structured to provide these benefits enjoy higher rates of return to their investments, because such players know about, and have a hand in, more rewarding opportunities”.
  • Control benefits refer to the advantages of being an important player in a well-connected network. In a large network, central players have more bargaining power than other players, which also means that they can, to a large extent, control many of the information flows within the network.

People with a lot of followers on social media have a high degree of Control benefits — they are often extremely influential in their fields, and in unique positions to have control over certain conversations on the web. But being an influencer doesn’t guarantee that you will have strong Information benefits, because you tend to get the same news over and over again if you don’t do a bit of work on expanding your network in a very deliberate way.

Burt’s theory of structural holes aims to enhance both these benefits to their full potential. A structural hole is “a separation between non-redundant contacts” (Burt, 1992). The holes between non-redundant contacts provide opportunities that can enhance both the control benefits and the information benefits of networks. The figure below shows a graphical representation of this definition.

The concept of non-redundant contacts is extremely important, and refers to contacts who give you access to networks you aren’t already part of. Now let’s look at how an influencer like Robert Scoble can increase the Information benefits he gets from Twitter.

Optimizing the benefits of networks

There are several ways to optimize structural holes in a network to ensure maximum information benefits:

  • The size of the network. The size of a network determines the amount of information that is shared within the network. A person has a much better chance to receive timely, relevant information in a big network than in a small one. The size of the network is, however, not dependent merely on the number of actors in the network, but on the number of non-redundant actors. In other words, it’s not just about how many people you follow on Twitter, it’s also who you follow.  Pretty straightforward, but let’s continue.
  • Efficient networks. Efficiency in a network is concerned with maximizing the number of non-redundant contacts in a network in order to maximize the number of structural holes per actor in the network. It is possible to eliminate redundant contacts by linking only with a primary actor in each redundant cluster. This saves time and effort that would normally have been spent on maintaining redundant contacts.  What this basically means is that if you follow people who all follow each other, your network isn’t very efficient and you need to get rid of some people.
  • Effective networks. Effectiveness in a network is concerned with “distinguishing primary from secondary contacts in order to focus resources on preserving primary contacts” (Burt, 1992:21). Building an effective network means building relationships with actors that lead to the maximum number of other secondary actors, while still being non-redundant.  This means that if 10 people give you access to the same network of information, only follow the most important one — their voice will be clearer and not drowned out by the others.
  • Weak ties. In his 1973 paper entitled “The strength of weak ties”, Mark Granovetter (Granovetter, 1973) developed his theory of weak ties. The theory states that because a person with strong ties in a network more or less knows what the other people in the network know (e.g. in close friendships or a board of directors), the effective spread of information relies on the weak ties between people in separate networks. “Weak ties are essential to the flow of information that integrates otherwise disconnected social clusters into a broader society” (Burt, 1992). This basically means that to get more out of Twitter, you need to figure out where your network is weak, and then follow those people who give you access to additional clusters. Building and maintaining weak ties over large structural holes enhances information benefits and creates even more efficient and effective networks.
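Burt’s efficiency idea above can be sketched numerically with a commonly used simplification of his “effective size” measure: effective size ≈ n − 2t/n, where n is the number of contacts in your ego network and t is the number of ties among those contacts. The more your contacts already know each other, the lower your effective size. This is a minimal sketch using a made-up ego network; the names and data are purely illustrative.

```python
def effective_size(alters, ties_among_alters):
    """Approximate effective size of an ego network (Burt, simplified).

    alters: set of contacts the ego is connected to.
    ties_among_alters: set of frozensets, each an (undirected) tie
        between two of those contacts.
    """
    n = len(alters)
    if n == 0:
        return 0.0
    t = len(ties_among_alters)
    # Each tie among alters makes the network slightly more redundant.
    return n - (2 * t) / n

# Hypothetical ego network: you follow four people, and two of them
# (ann and bob) also follow each other, so part of the network is redundant.
alters = {"ann", "bob", "cara", "dev"}
ties = {frozenset(("ann", "bob"))}

print(effective_size(alters, ties))  # 4 - (2*1)/4 = 3.5
```

If none of your contacts knew each other, the effective size would equal the raw network size (4.0 here); the redundant tie pulls it down to 3.5.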

So here’s the bottom line: to achieve networks rich in information benefits it is necessary to build large networks with non-redundant contacts and many weak ties over structural holes. Some of these information benefits are:

  • More contacts are included in the network, which implies that you have access to a larger volume of information.
  • Non-redundant contacts ensure that this vast amount of information is diverse and independent.
  • Linking with the primary actor in a cluster implies a connection with the central player in that cluster. This ensures that you will be one of the first people to be informed when new information becomes available.

How to get the most out of social media

If we apply these theories to Twitter and other social media networks, we quickly realize it is not the sheer number of “friends” in your network that counts, it is the diversity of the people in your network that is most important. If you only have links to people in your immediate group of friends or colleagues, it will be difficult to get new information, since everyone will pretty much know the same things. This is not to say that you have to start following all those random spammers on Twitter, but it does mean that people with whom you have “weak ties” will often provide you with new information and therefore more benefits than your “strong ties”.

So here’s how to make sure you get the most out of social media:

  • Identify the information networks you want to have access to (for me, it’s information about user experience design and product management).
  • Go through your following list and see where the overlap is — if the same people keep getting reshared, follow the person who gets reshared the most.  This will reduce your Twitter stream but still get you the information you need (and faster than before).
  • Once you’ve reduced your following list, make your network as large as possible with the “weak ties” who will give you access to all the information you need.
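The pruning step above can be sketched in a few lines: given a log of (resharer, original author) pairs pulled from your stream, count which authors get reshared most, so you can follow them directly and unfollow the middlemen. The data and handles below are made up purely for illustration.

```python
from collections import Counter

# Hypothetical reshare log: (person in my stream, author they reshared).
reshares = [
    ("alice", "scoble"),
    ("bob", "scoble"),
    ("carol", "scoble"),
    ("alice", "mashable"),
    ("dave", "mashable"),
]

# Count how often each original author gets reshared into my stream.
counts = Counter(author for _, author in reshares)

# The top authors are candidates to follow directly.
for author, n in counts.most_common(2):
    print(author, n)
# scoble 3
# mashable 2
```

In practice you would build the reshare log from your own stream; the point is simply that the most-reshared authors are the primary contacts Burt’s theory says to keep.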

These theories show that we can reduce the number of people we follow while actually getting more Information benefits from social media. We will get new information faster, we will get it only once or twice, and the information we get will be more diverse.

References

Burt, Ronald S. (1992). Structural Holes: The Social Structure of Competition. Cambridge: Harvard University Press.

Granovetter, M. S. (1973). “The Strength of Weak Ties.” American Journal of Sociology 78: 1360-1380.

Why Facebook should forget about Twitter

With the three recent big stories in Facebookland (the FriendFeed acquisition, real-time search, and now the test launch of Facebook Lite) it doesn’t take a rocket scientist to figure out that Facebook is going hard after Twitter. (Update 1/16/2010: Facebook just rolled out “via” as their version of Twitter’s “retweet”. That, combined with recent changes to their privacy policy to make the platform more open, gives two more clear examples of Facebook’s “Become Twitter” strategy)

What is more difficult to understand is why they are doing it.  Maybe it’s a personal vendetta because of the failed acquisition talks?  I just don’t see the business reason for this.  Here’s why I think Facebook should forget about Twitter and focus on making its own platform great:

Different target markets

It is well known that Twitter skews heavily towards younger tech-savvy users, with the rest of the population finding it hard to see the point.  Facebook, on the other hand, is increasingly being used by an older demographic.  The fastest growing demographic on Facebook is women over 55.

Why is all this important?  Because regardless of what Facebook wants to be, the demographic that is settling in on the site for the long haul is different from the Twitter user base — and they have totally different needs and behaviors. At this point, Facebook is too established as a brand to be able to force their product onto the target market they want.  And why would they even want to?  They have access to a much larger user base than Twitter.  Which brings me to my next point…

Always compete on your strengths

The mistake that Facebook is making is that it is trying to be Twitter for a user base that does not want Twitter.  Not convinced?  Go to http://www.brandtags.net and look at the brand clouds of word associations that people make with Facebook and Twitter.  For Facebook, you get words like Communication, People, Stalking.  For Twitter, you get words like Pointless, Stupid, Useless.

Now, of course Twitter is none of those things, but it shows the enormous gap in brand perceptions.  Why would you want to move a powerful people connection platform closer to something with a niche market that a majority of people find useless? A bunch of other Twitter statistics have come out lately that prove the Twitter niche factor: 5% of users account for 75% of the activity, 60% of US Twitter users abandon the site after a month, and 24% of all tweets are from bots (ok, that last one is irrelevant to this discussion, but still interesting).  And there’s also this interesting conversation on Mashable that clearly shows the differences between Twitter and Facebook usage.

The bottom line is that Twitter is for information sharing, Facebook is for life sharing.  That is what people are using it for — sharing photos, videos, those annoying pokes and quizzes, keeping in touch with friends all over the globe, lurking on profiles of people you used to know way back when.  That is the strength of Facebook, and that is what they should focus their platform on.

So what should Facebook do?

So here is my advice to Facebook: go where your users are.  Understand how they use the site, what their needs and behaviors are.  Go visit them, talk to them, watch them navigate around, understand why they are there in the first place.  And then enhance your platform to fulfill those needs.  Build new ways to feel closer to the people in your life.  Make it easier to share and discuss media.  Build families-only mini-communities.  Who knows what you can come up with if you just understand your users and build a web site for their needs?

Seriously — let Twitter be Twitter, forget about them and don’t force your users into that kind of experience.  Don’t try to be “status updates for everyone.”  Be a platform that lives up to the value proposition on your home page: “Facebook helps you connect and share with the people in your life.”

The connection between user experience and brand loyalty

I recently attended a brand presentation where the video below was shown. It’s pretty funny, and also a perfect example of how interactive products and consumer-generated content should fundamentally change our traditional views of customer loyalty. Loyalty in our current environment is fostered through repeated great (user) experiences, not just through advertising and coupons.

But even though I like the general point the video is trying to make, I think it stops a little short of the real issue. It is saying that we should listen to our customers better. But that’s not enough — we need to understand customers in ways they don’t even understand themselves, and then build experiences that meet unmet (and sometimes unconscious) needs through repeated, positive experiences that deepen the customer-company relationship.

Uncovering these needs happens not just through “Voice of the Customer” research programs, but also through more contextual research efforts like ethnography and contextual inquiries (combined with validating quantitative research). I believe this is where traditional Market Research programs like NPS (Net Promoter Score) only tell a part of the full brand loyalty story (albeit an important part, for sure).  There is evidence that the tide is turning on this topic as the field of HCI (Human-Computer Interaction) becomes more mainstream and user experience research techniques become more accessible.

There is a powerful synergy in discovering how to deepen true customer loyalty through collaborative efforts between Market Research and User Experience Research, and we need to bring these two disciplines closer together (this view is also very much in line with the thinking described in the excellent Adaptive Path essay The Long Wow).

Visual design clutter index for web pages

I’ve been working on a project where we’re trying to come up with a way to establish a visual design “clutter index.”  The goal is to see if there is some threshold beyond which web page clutter impacts business metrics like conversion and click-through rates.  The challenges are widespread of course, and mainly focused on the following 3 areas:

  • The definition and measurement of clutter.  There are a variety of ways to measure clutter on pages, ranging from the completely objective (e.g., % of white space on a page) to completely subjective (e.g., how do users rate the page on a clean vs. cluttered scale).
  • The definition of conversion.  Since some pages on an e-commerce web site are revenue-generating, and others aren’t, an important question is how you define conversion.  For revenue-generating pages (e.g., pages with a “checkout now” button) this is easy — “Did the page result in a sale?”  For other pages, like product information pages, this measure won’t work, so some other measure of engagement with the page becomes necessary.
  • Controlling for other influencing factors.  In conjunction with the first two points comes the problem of causality vs. correlation.  Assuming you have your definitions of clutter and conversion nailed down, how can you be sure any changes you see in conversion are caused by clutter (a causal relationship), and not by some other factor you are not accounting for (correlation without a causal relationship)?

The way to go about it is to take as many measurements of clutter as you can, feed them into a statistical model with a variety of conversion metrics, and see what comes out.  You also have to find a way to account for other influencing factors so that you can control for that in your model.  Easy, right?  Ok, so there are a lot of open issues, but they’re definitely not insurmountable.  I also believe it’s a worthy pursuit, the hypothesis being that there are clear business reasons for keeping designs and interfaces simple.

And apparently we’re not the only ones thinking about this…  Ruth Rosenholtz and her colleagues at MIT recently wrote a paper (Measuring Visual Clutter) where they seem to have developed what they call a “clutter detector” for a variety of interfaces, mostly offline (desk clutter, map clutter, etc.).  They describe some of their challenges in doing this as follows:

The fact that one person’s clutter is the next person’s organized workspace makes it hard to come up with a universal measure of clutter. Rosenholtz and colleagues modeled what makes items in a display harder or easier to pick out. They used this model, which incorporates data on color, contrast and orientation, to come up with a software tool to measure visual clutter.

On the issue of subjective measures of clutter:

Although there was a fair bit of disagreement among the people being tested about what constituted clutter, when the researchers compared results from their clutter measure to those of their human subjects, they found a good correlation.

I’m still digesting the paper, but it’s a fascinating read so definitely check it out.  Thoughts on how to approach this for e-commerce web pages are also more than welcome!

Using Twitter to value online information

I have recently noticed an interesting trend among the people I follow on Twitter. It appears that my network is dividing itself neatly into 2 camps: those who care deeply about the content they publish, and those who use it more casually. Let me explain…

Saying “good night” to everyone you know

Twitter users who casually update their status without thinking about it too much continuously say things like “Yep,” “Good night tweeple,” and “Banging my head against the desk.” Cryptic information that can be quite difficult to figure out. I’m not saying that this is necessarily a bad thing. It’s just clear that some people view Twitter as a broadcast medium mainly meant for people they know in the real world, and that’s fine (I tend to think that’s what Facebook is for, but let’s not split hairs about this).

I’m also not suggesting that all tweets should be serious — the odd random or exasperated update can be interesting, enlightening, and often very funny, and it also shows that there’s a real person at the other end. I do follow a lot of these casual users, but I know all of them personally so their updates are meaningful to me. And of course there is always the option to stop following someone, so you only have yourself to blame for the content you receive on Twitter.

But then there are those who care a lot about what they say…

Sharing content via Twitter

People who care see Twitter not just as an outlet for random thoughts, but also as a valuable tool to learn and share and expand their knowledge about issues they care about. I follow a bunch of people who clearly care about the content they put on Twitter, and it adds enormous value to my work life and personal life (people like @jontyfisher, @adamnash, @SmithInAfrica, and @TheONECampain, just to name a small and diverse subset of folks).

Sharing interesting information on Twitter makes you a good citizen of the web for a very important reason. It allows the best content to rise to the top. What makes content sharing on Twitter powerful is that humans are involved, not just technology. The difference between going through your RSS feeds and learning about something through your Twitter network is that on Twitter, someone read the content and decided that it is good enough to share. And if you follow people with similar interests, chances are you will find it interesting too. As Justin Basini (@justinbasini) put it in a recent post: “Twitter users aggregate, edit, filter and share better than any technology.”

But what if the content isn’t interesting to anyone else? Well, then it will just die in the constant stream of tweets that go by every day. If the content is good, it will be retweeted, and spread rapidly not just through your own network but the networks of others.

In sociology the phenomenon of information spreading through multiple networks is known as The strength of weak ties. In a 1973 paper, Mark Granovetter developed his theory of weak ties. The theory states that because a person with strong ties in a network more or less knows what the other people in the network know (e.g. in close friendships or within your closely-guarded Facebook network), the effective spread of information relies on the weak ties between people in separate networks.

And this is of course one of the main strengths of Twitter — that not all connections have to be mutual (when you follow someone they don’t have to follow you back, like on Facebook). In other words, retweeting allows information to jump from one tightly-knit network to the next, allowing for the rapid spread of valuable information throughout the entire network, not just your own.

A new way to value information on the web

There are still a lot of people who feel that Twitter is a waste of time and adds no value. That might be true for them, but I think we are seeing a very interesting phenomenon here, and that is a new way to value information on the web and separate what’s worthy of reading from what’s not.

RSS feeds allow us to see content we might be interested in (but not every article will be good). Digg and similar services allow us to see what other people find interesting. But only Twitter puts those features together and lets us see content that people with interests similar to ours find valuable. And there is real power in that.

Oh, and you can follow me on Twitter if you’d like.