
How to increase the value you get out of social media

A common complaint about social networks is that they insulate us by showing us only information we’re already likely to agree with. This reinforces our existing confirmation biases and makes us less likely to see the value of other viewpoints. It’s a legitimate concern, but we only have ourselves to blame: if we don’t follow enough people from different types of networks, we’re always going to see the same type of information over and over. This fundamental point also holds the key to getting the biggest benefit from social media. So stick with me as we discuss some sociology theory, which I promise will lead to some practical implications in the end.

First, a little background on Structural Hole Theory.

Structural Holes Defined

Ronald Burt’s theory of “structural holes” is an important extension of social network theory, which argues that networks provide two types of benefits: information benefits and control benefits.

  • Information benefits refer to who knows about relevant information and how fast they find out about it. Actors with strong networks will generally know more about relevant subjects, and they will also find out about new information faster. According to Burt (1992), “players with a network optimally structured to provide these benefits enjoy higher rates of return to their investments, because such players know about, and have a hand in, more rewarding opportunities”.
  • Control benefits refer to the advantages of being an important player in a well-connected network. In a large network, central players have more bargaining power than other players, which also means that they can, to a large extent, control many of the information flows within the network.

People with a lot of followers on social media have a high degree of Control benefits — they are often extremely influential in their fields, and in unique positions to have control over certain conversations on the web. But being an influencer doesn’t guarantee that you will have strong Information benefits, because you tend to get the same news over and over again if you don’t do a bit of work on expanding your network in a very deliberate way.

Burt’s theory of structural holes aims to enhance both these benefits to their full potential. A structural hole is “a separation between non-redundant contacts” (Burt, 1992). The holes between non-redundant contacts provide opportunities that can enhance both the control benefits and the information benefits of networks. The figure below shows a graphical representation of this definition.

The concept of non-redundant contacts is extremely important, and refers to contacts who give you access to networks you aren’t already part of. Now let’s look at how Mr. Scoble can increase the Information benefits he gets from Twitter.

Optimizing the benefits of networks

There are several ways to optimize structural holes in a network to ensure maximum information benefits:

  • The size of the network. The size of a network determines the amount of information that is shared within the network. A person has a much better chance to receive timely, relevant information in a big network than in a small one. The size of the network is, however, not dependent merely on the number of actors in the network, but on the number of non-redundant actors. In other words, it’s not just about how many people you follow on Twitter, it’s also who you follow. Pretty straightforward, but let’s continue.
  • Efficient networks. Efficiency in a network is concerned with maximizing the number of non-redundant contacts in a network in order to maximize the number of structural holes per actor in the network. It is possible to eliminate redundant contacts by linking only with a primary actor in each redundant cluster. This saves time and effort that would normally have been spent on maintaining redundant contacts.  What this basically means is that if you follow people who all follow each other, your network isn’t very efficient and you need to get rid of some people.
  • Effective networks. Effectiveness in a network is concerned with “distinguishing primary from secondary contacts in order to focus resources on preserving primary contacts” (Burt, 1992:21). Building an effective network means building relationships with actors that lead to the maximum number of other secondary actors, while still being non-redundant.  This means that if 10 people give you access to the same network of information, only follow the most important one — their voice will be clearer and not drowned out by the others.
  • Weak ties. In his 1973 paper entitled “The strength of weak ties”, Mark Granovetter (Granovetter, 1973) developed his theory of weak ties. The theory states that because a person with strong ties in a network more or less knows what the other people in the network know (e.g. in close friendships or a board of directors), the effective spread of information relies on the weak ties between people in separate networks. “Weak ties are essential to the flow of information that integrates otherwise disconnected social clusters into a broader society” (Burt, 1992). This basically means that to get more out of Twitter, you need to figure out where your network is weak, and then follow those people who give you access to additional clusters. Building and maintaining weak ties over large structural holes enhances information benefits and creates even more efficient and effective networks.
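To make the idea of redundancy concrete, here is a small sketch in Python of one simplified formulation of Burt’s “effective size” of a network: the number of contacts, minus the average number of ties each contact has to your other contacts. The follow graph below is entirely made up for illustration.

```python
# A sketch of a simplified "effective size" measure: contacts who are
# densely connected to each other are redundant and add little new reach.

def effective_size(ego_contacts, follows):
    """Effective network size = number of contacts minus the average
    number of ties each contact has to the ego's other contacts."""
    n = len(ego_contacts)
    if n == 0:
        return 0.0
    redundant_ties = 0
    for contact in ego_contacts:
        # ties from this contact to the ego's other contacts
        redundant_ties += len(follows.get(contact, set()) & (ego_contacts - {contact}))
    return n - redundant_ties / n

# Hypothetical example: you follow four people; three of them all
# follow each other (a redundant cluster), one is independent.
contacts = {"bob", "carol", "dave", "erin"}
follows = {
    "bob": {"carol", "dave"},
    "carol": {"bob", "dave"},
    "dave": {"bob", "carol"},
    "erin": set(),            # erin bridges a structural hole
}
print(effective_size(contacts, follows))  # 4 - 6/4 = 2.5
```

So although you follow four people, your network is effectively only as big as two and a half non-redundant contacts — which is exactly why efficiency matters.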

So here’s the bottom line: to achieve networks rich in information benefits it is necessary to build large networks with non-redundant contacts and many weak ties over structural holes. Some of these information benefits are:

  • More contacts are included in the network, which implies that you have access to a larger volume of information.
  • Non-redundant contacts ensure that this vast amount of information is diverse and independent.
  • Linking with the primary actor in a cluster implies a connection with the central player in that cluster. This ensures that you will be one of the first people to be informed when new information becomes available.

How to get the most out of social media

If we apply these theories to Twitter and other social media networks, we quickly realize it is not the sheer number of “friends” in your network that counts, it is the diversity of the people in your network that is most important. If you only have links to people in your immediate group of friends or colleagues, it will be difficult to get new information, since everyone will pretty much know the same things. This is not to say that you have to start following all those random spammers on Twitter, but it does mean that people with whom you have “weak ties” will often provide you with new information and therefore more benefits than your “strong ties”.

So here’s how to make sure you get the most out of social media:

  • Identify the information networks you want to have access to (for me, it’s information about user experience design and product management).
  • Go through your following list and see where the overlap is — if the same people keep getting reshared by many of the accounts you follow, follow the person who gets reshared the most. This will reduce your Twitter stream but still get you the information you need (and faster than before).
  • Once you’ve reduced your following list, make your network as large as possible with the “weak ties” who will give you access to all the information you need.
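The overlap-spotting step is easy to sketch in code. Everything below — the accounts and the reshare log — is hypothetical; the idea is just to count which original authors keep getting reshared by the people you already follow.

```python
from collections import Counter

# Hypothetical reshare log: (account_you_follow, original_author) pairs
# observed in your stream. If many people you follow keep resharing the
# same author, following that author directly removes the redundancy.
reshares = [
    ("ann", "scobleizer"), ("ben", "scobleizer"),
    ("cat", "scobleizer"), ("ann", "uxmag"),
    ("ben", "uxmag"), ("cat", "davewiner"),
]

counts = Counter(author for _, author in reshares)
most_reshared, n = counts.most_common(1)[0]
print(most_reshared, n)  # scobleizer 3
```

In this made-up example, three separate accounts all reshare the same author — a clear candidate to follow directly.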

These theories show that we can reduce the number of people we follow while actually getting more Information benefits from social media. We will get new information faster, we will get it only once or twice, and the information we get will be more diverse.

References

Burt, Ronald S. (1992). Structural Holes: The Social Structure of Competition. Cambridge: Harvard University Press.

Granovetter, M. S. (1973). “The Strength of Weak Ties.” American Journal of Sociology 78: 1360–1380.

Why Facebook should forget about Twitter

With the three recent big stories in Facebookland (the FriendFeed acquisition, real-time search, and now the test launch of Facebook Lite) it doesn’t take a rocket scientist to figure out that Facebook is going hard after Twitter. (Update 1/16/2010: Facebook just rolled out “via” as their version of Twitter’s “retweet”. That, combined with recent changes to their privacy policy to make the platform more open, is another clear example of Facebook’s “Become Twitter” strategy.)

What is more difficult to understand is why they are doing it.  Maybe it’s a personal vendetta because of the failed acquisition talks?  I just don’t see the business reason for this.  Here’s why I think Facebook should forget about Twitter and focus on making its own platform great:

Different target markets

It is well known that Twitter skews heavily towards younger tech-savvy users, with the rest of the population finding it hard to see the point.  Facebook, on the other hand, is increasingly being used by an older demographic.  The fastest growing demographic on Facebook is women over 55.

Why is all this important?  Because regardless of what Facebook wants to be, the demographic that is settling in on the site for the long haul is different from the Twitter user base — and they have totally different needs and behaviors. At this point, Facebook is too established as a brand to be able to force their product onto the target market they want.  And why would they even want to?  They have access to a much larger user base than Twitter.  Which brings me to my next point…

Always compete on your strengths

The mistake that Facebook is making is that it is trying to be Twitter for a user base that does not want Twitter.  Not convinced?  Go to http://www.brandtags.net and look at the brand clouds of word associations that people make with Facebook and Twitter.  For Facebook, you get words like Communication, People, Stalking.  For Twitter, you get words like Pointless, Stupid, Useless.

Now, of course Twitter is none of those things, but it shows the enormous gap in brand perceptions.  Why would you want to move a powerful people connection platform closer to something with a niche market that a majority of people find useless? There are a bunch of other Twitter statistics coming out lately that prove the Twitter niche factor: 5% of users account for 75% of the activity, 60% of US Twitter users abandon the site after a month, and 24% of all tweets are from bots (ok, that last one is irrelevant to this discussion, but still interesting).  And there’s also this interesting conversation on Mashable that clearly shows the differences between Twitter and Facebook usage.

The bottom line is that Twitter is for information sharing, Facebook is for life sharing.  That is what people are using it for — sharing photos, videos, those annoying pokes and quizzes, keeping in touch with friends all over the globe, lurking on profiles of people you used to know way back when.  That is the strength of Facebook, and that is what they should focus their platform on.

So what should Facebook do?

So here is my advice to Facebook: go where your users are.  Understand how they use the site, what their needs and behaviors are.  Go visit them, talk to them, watch them navigate around, understand why they are there in the first place.  And then enhance your platform to fulfill those needs.  Build new ways to feel closer to the people in your life.  Make it easier to share and discuss media.  Build families-only mini-communities.  Who knows what you can come up with if you just understand your users and build a web site for their needs?

Seriously — let Twitter be Twitter, forget about them and don’t force your users into that kind of experience.  Don’t try to be “status updates for everyone.”  Be a platform that lives up to the value proposition on your home page: “Facebook helps you connect and share with the people in your life.”

The connection between user experience and brand loyalty

I recently attended a brand presentation where the video below was shown. It’s pretty funny, and also a perfect example of how interactive products and consumer-generated content should fundamentally change our traditional views of customer loyalty. Loyalty in our current environment is fostered through repeated great (user) experiences, not just through advertising and coupons.

But even though I like the general point the video is trying to make, I think it stops a little short of the real issue. It is saying that we should listen to our customers better. But that’s not enough — we need to understand customers in ways they don’t even understand themselves, and then build experiences that meet unmet (and sometimes unconscious) needs through repeated, positive experiences that deepen the customer-company relationship.

Uncovering these needs happens not just through “Voice of the Customer” research programs, but also through more contextual research efforts like ethnography and contextual inquiries (combined with validating quantitative research). I believe this is where traditional Market Research programs like NPS (Net Promoter Score) only tell a part of the full brand loyalty story (albeit an important part, for sure).  There is evidence that the tide is turning on this topic as the field of HCI (Human-Computer Interaction) becomes more mainstream and user experience research techniques become more accessible.

There is a powerful synergy in discovering how to deepen true customer loyalty through collaborative efforts between Market Research and User Experience Research, and we need to bring these two disciplines closer together (this view is also very much in line with the thinking described in the excellent Adaptive Path essay The Long Wow).

Visual design clutter index for web pages

I’ve been working on a project where we’re trying to come up with a way to establish a visual design “clutter index.”  The goal is to see if there is some threshold beyond which web page clutter impacts business metrics like conversion and click-through rates.  The challenges are widespread of course, and mainly focused on the following 3 areas:

  • The definition and measurement of clutter.  There are a variety of ways to measure clutter on pages, ranging from the completely objective (e.g., % of white space on a page) to completely subjective (e.g., how do users rate the page on a clean vs. cluttered scale).
  • The definition of conversion.  Since some pages on an e-commerce web site are revenue-generating, and others aren’t, an important question is how you define conversion.  For revenue-generating pages (e.g., pages with a “checkout now” button) this is easy — “Did the page result in a sale?”  For other pages, like product information pages, this measure won’t work, so some other measure of engagement with the page becomes necessary.
  • Controlling for other influencing factors. In conjunction with the first two points comes the problem of causality vs. correlation. Assuming you have your definitions of clutter and conversion nailed down, how can you be sure any changes you see in conversion are caused by clutter (a causal relationship), and not by some other factor you are not accounting for (correlation without a causal relationship)?

The way to go about it is to take as many measurements of clutter as you can, feed them into a statistical model with a variety of conversion metrics, and see what comes out.  You also have to find a way to account for other influencing factors so that you can control for that in your model.  Easy, right?  Ok, so there are a lot of open issues, but they’re definitely not insurmountable.  I also believe it’s a worthy pursuit, the hypothesis being that there are clear business reasons for keeping designs and interfaces simple.
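To make the modeling step concrete, here is a minimal sketch with made-up numbers: an ordinary least-squares fit relating a hypothetical clutter score to conversion rate. A real model would of course include many clutter measures and control variables, not a single predictor.

```python
# Minimal sketch of the modeling step: fit a least-squares line
# y = a + b*x relating a (hypothetical) clutter score to conversion.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# Hypothetical page data: clutter score (0-100) vs. conversion rate (%)
clutter = [10, 25, 40, 55, 70, 85]
conversion = [5.1, 4.8, 4.2, 3.5, 2.9, 2.2]
a, b = fit_line(clutter, conversion)
print(round(b, 3))  # negative slope: more clutter, lower conversion
```

If the slope stays significantly negative after controlling for other factors, that would be the business case for keeping designs simple.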

And apparently we’re not the only ones thinking about this…  Ruth Rosenholtz and her colleagues at MIT recently wrote a paper (Measuring Visual Clutter) where they seem to have developed what they call a “clutter detector” for a variety of interfaces, mostly offline (desk clutter, map clutter, etc.).  They describe some of their challenges in doing this as follows:

The fact that one person’s clutter is the next person’s organized workspace makes it hard to come up with a universal measure of clutter. Rosenholtz and colleagues modeled what makes items in a display harder or easier to pick out. They used this model, which incorporates data on color, contrast and orientation, to come up with a software tool to measure visual clutter.

On the issue of subjective measures of clutter:

Although there was a fair bit of disagreement among the people being tested about what constituted clutter, when the researchers compared results from their clutter measure to those of their human subjects, they found a good correlation.

I’m still digesting the paper, but it’s a fascinating read so definitely check it out.  Thoughts on how to approach this for e-commerce web pages are also more than welcome!

Using Twitter to value online information

I have recently noticed an interesting trend among the people I follow on Twitter. It appears that my network is dividing itself neatly into 2 camps: those who care deeply about the content they publish, and those who use it more casually. Let me explain…

Saying “good night” to everyone you know

Twitter users who casually update their status without thinking about it too much continuously say things like “Yep,” “Good night tweeple,” and “Banging my head against the desk” — cryptic updates that can be quite difficult to figure out. I’m not saying that this is necessarily a bad thing. It’s just clear that some people view Twitter as a broadcast medium mainly meant for people they know in the real world, and that’s fine (I tend to think that’s what Facebook is for, but let’s not split hairs about this).

I’m also not suggesting that all tweets should be serious — the odd random or exasperated update can be interesting, enlightening, and often very funny, and it also shows that there’s a real person at the other end. I do follow a lot of these casual users, but I know all of them personally so their updates are meaningful to me. And of course there is always the option to stop following someone, so you only have yourself to blame for the content you receive on Twitter.

But then there are those who care a lot about what they say…

Sharing content via Twitter

People who care see Twitter not just as an outlet for random thoughts, but also a valuable tool to learn and share and expand their knowledge about issues they care about. I follow a bunch of people who clearly care about the content they put on Twitter, and it adds enormous value to my work life and personal life (people like @jontyfisher, @adamnash, @SmithInAfrica, and @TheONECampain, just to name a small and diverse subset of folks).

Sharing interesting information on Twitter makes you a good citizen of the web for a very important reason. It allows the best content to rise to the top. What makes content sharing on Twitter powerful is that humans are involved, not just technology. The difference between going through your RSS feeds and learning about something through your Twitter network is that on Twitter, someone read the content and decided that it is good enough to share. And if you follow people with similar interests, chances are you will find it interesting too. As Justin Basini (@justinbasini) put it in a recent post: “Twitter users aggregate, edit, filter and share better than any technology.”

But what if the content isn’t interesting to anyone else? Well, then it will just die in the constant stream of tweets that go by every day. If the content is good, it will be retweeted, and spread rapidly not just through your own network but the networks of others.

In sociology the phenomenon of information spreading through multiple networks is known as The strength of weak ties. In a 1973 paper, Mark Granovetter developed his theory of weak ties. The theory states that because a person with strong ties in a network more or less knows what the other people in the network know (e.g. in close friendships or within your closely-guarded Facebook network), the effective spread of information relies on the weak ties between people in separate networks.

And this is of course one of the main strengths of Twitter — that not all connections have to be mutual (when you follow someone they don’t have to follow you back, like on Facebook). In other words, retweeting allows information to jump from one tightly-knit network to the next, allowing for the rapid spread of valuable information throughout the entire network, not just your own.

A new way to value information on the web

There are still a lot of people who feel that Twitter is a waste of time and adds no value. That might be true for them, but I think we are seeing a very interesting phenomenon here, and that is a new way to value information on the web and separate what’s worthy of reading from what’s not.

RSS feeds allow us to see content we might be interested in (but not every article will be good). Digg and similar services allow us to see what other people find interesting. But only Twitter puts those features together and lets us see content that people with interests similar to ours find valuable. And there is real power in that.

Oh, and you can follow me on Twitter if you’d like.

The dangers of “test and learn”

A recent discussion on a user experience forum I participate in turned to the topic of A/B testing. I really enjoyed the conversation, so I wanted to reiterate some of the points I made and expand on them a little bit as well. It’s not my goal to define A/B testing here but to share my opinion on its use. I believe that even though A/B testing can be extremely valuable for identifying the best iteration of a site or a particular page, it should never be used in isolation.

Since A/B testing is relatively cheap to do and the results are so compelling, companies are in danger of adopting a “test and learn” culture where pages are just A/B tested with no additional user input.  That would be the wrong way to go.  A/B testing shouldn’t be used on its own to make decisions, it should always be used in conjunction with other research methods — both qualitative (such as usability testing, ethnography) and quantitative (such as desirability studies).

A/B testing is an important method in the research toolkit because it can give you information that usability testing on its own cannot.  The main goal of A/B testing is to see how business metrics move up and down depending on the version of the page — click through rates, checkout rates, purchasing rates, etc.  You can’t see that with usability testing alone.  But as Kohavi et al. point out in their paper Practical Guide to Controlled Experiments on the Web, A/B testing has some major limitations:

  • Quantitative Metrics, but No Explanations. It is possible to know which variant is better, and by how much, but not why. In user studies, for example, behavior is often augmented with users’ comments, which is why usability labs can be used to complement controlled experiments.
  • Short term vs. Long Term Effects. Controlled experiments measure effects during the experimentation period, typically a few weeks. It is wise to also look at delayed conversion metrics, where there is a lag between the time a user is exposed to something and the time they take action. These are sometimes called latent conversions.
  • Primacy and Newness Effects. These are opposite effects that need to be recognized. If you change the navigation on a web site, experienced users may be less efficient until they get used to the new navigation, thus giving an inherent advantage to the Control. Conversely, when a new design or feature is introduced, some users will investigate it, click everywhere, and thus introduce a “newness” bias.
  • Features Must be Implemented. A live controlled experiment needs to expose some users to a Treatment different from the current site (Control). The feature may be a prototype that is tested on only a small portion of users, or may not cover all edge cases. Nonetheless, the feature must be implemented and be of sufficient quality to expose users to it.
  • Consistency. Users may notice they are getting a different variant than their friends and family. It is also possible that the same user will see multiple variants when using different computers (with different cookies).
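On the first limitation — knowing which variant is better, and by how much — the usual tool is a significance test on the two conversion rates. Here is a minimal sketch of a two-proportion z-test with hypothetical numbers (not from any real experiment):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: A converts 5.0%, B converts 5.5%,
# with 10,000 users in each arm.
z = two_proportion_z(500, 10_000, 550, 10_000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```

Note that in this made-up example the half-point lift does not clear the 1.96 bar — a reminder that “B looks better” and “B is significantly better” are different claims, and that the test says nothing about *why* B performed the way it did.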

As with most things, it is important to use A/B testing responsibly.   Since every research/testing method comes with its own limitations, a combination of methods is the only way to get the full picture and make the right decisions.