
Why you shouldn’t use introductory tours in apps

Luke Wroblewski explains why those overlay introductory “tour” screens you often see in apps are such a bad idea in Designing for real world mobile use:

These issues stem from the fact that introductory tours show up before you ever get a chance to use an application. Most people are eager to jump right in and as a result, they skip reading the manual. The ones that do read haven’t seen the interface yet so they don’t have any sense of where and how the tips they’re learning will apply.

I completely agree, and have written about this before in Best practices for user onboarding on mobile touchscreen applications, where I give three guidelines for app onboarding:

  • Make sure that users have familiarized themselves enough with your app to have the correct mental model before you start teaching them how to use it.
  • Show users only the information they need to take the immediate next step(s) for using the application.
  • Make sure users have a clear link between the information you give them and how to access/use that information in their everyday use of the app.

Smart cities and wealth creation

Rick Robinson wrote a really interesting article on the huge differences in life expectancy between the wealthiest and poorest areas of a city, and how the move to Smart Cities is trying to combat that. From Death, life, and place in great digital cities:

At the heart of the Smarter Cities movement is the belief that the use of engineering and IT technologies, including social media and information marketplaces, can create more efficient and resilient city systems. Might that idea offer a way to address the challenges of supporting wealth creation in cities at a sustainable rate of resource usage; and of providing city services to enable wellbeing, social mobility and economic growth at a reduced level of cost?

Rick goes on to explain some counter-intuitive dangers of this approach, and concludes:

We are opening Pandora’s box. These tremendously powerful technologies could indeed create more efficient, resilient city systems. But unless they are applied with real care, they could exacerbate our challenges. If they act simply to speed up transactions and the consumption of resources in city systems, then they will add to the damage that has already been done to urban environments, and that is one of the causes of the social inequality and differences in life expectancy that cities are seeking to address.

It’s a long, dense article, but it provides a much-needed realistic view of the power of technology to transform cities and the people who live there. The article also taught me this really good principle of urbanism:

Consider urban life before urban space; consider urban space before buildings.

That immediately jumped out at me as a good principle in software development as well: Consider user needs before applications; consider applications before individual pages.

On the topic of Smart Cities, also see Smart cities and smart citizens, a very interesting write-up about this year’s FutureEverything summit. It makes a similar point about the importance of life over buildings:

Perhaps part of the problem in current dialogues around smart cities is the failure to understand what a city actually is. The smart city vision has tended to focus on buildings and infrastructure or traffic management and how technology can increase efficiency. Catherine Mulligan of Imperial College London says the reverential tones with which some smart-city speculators talk about technology is worrying: “They say these systems and computers can now make better decisions than human beings. But if you take the human beings out, it’s just a bunch of buildings talking to each other… and that’s not a city. The city is what it is because of the people.”

[Sponsor] Radium: a new way to listen to internet radio

Radium is a new way to listen to internet radio. It sits in your menu bar and stays out of your way. And it just works.

With its clean user interface and album cover display, you’re always just a click away from beautiful sounds. Add your favorite tracks to the wish list and check them out later on the iTunes Store. Take the sounds with you using Radium’s built-in AirPlay streaming support. It’s all there.

With the proliferation of services like Spotify and Pandora, why choose Radium? Because with Radium, you don’t have to build up playlists, constantly answer questions about your music preferences, or navigate a cumbersome user interface. Radium is all about the sounds. And these sounds come from over 6000 free stations, maintained and curated by real people like you.

Available for $10 on the Mac App Store. Check it out.


Sponsorship by The Syndicate.

How new features can hurt your product

Within a week, two articles came out about resisting the urge to add new features quickly after a product is launched. First, Julie Zhuo makes the case for slow, small launches with clear “sunset” criteria in The tax of new, because of the inherent cost of maintaining and improving new features:

The tax that comes with introducing any new feature into your product is high. I cannot stress this enough. Sure, maybe the new feature isn’t hard to build, maybe it only takes a couple days and a handful of people, maybe it can be shipped and delivered by next week. And maybe the additional cognitive load for a user isn’t high — it’s just an extra icon here, after all, or an extra slot in a menu there. But once your new feature is out there, it’s out there. A real thing used by real people.

Jared Spool then wrote Experience Rot, focusing more on the UX and technical debt issues introduced by new features:

The moment a new feature is added — one that wasn’t considered in the initial design — the rot starts to take hold. It’s at this moment that a rethinking of the design has to happen and the seeds of complexity are laid.

If that new feature is close in style and function to the original set of features, the experience rot may not be visible. Yet, because it needs to be retrofit into the original design, it starts down the inevitable road.

As more features are added, it becomes harder to make the overall design coherent and sensical. Soon features are crammed into corners that don’t make sense.

It’s interesting to see the same conclusion drawn from different perspectives, so it’s worth reading both articles.

More on algorithmic decision-making

Yesterday I posted The problem with letting algorithms make most of our decisions, discussing how removing all knowledge obstacles can make us less adept at dealing with challenges. As is often the case, within a few hours of posting that I came across two more articles that address the same issues. First, from Kyle Baxter’s very interesting essay On the Philosophy of Google Glass:

Page’s idea — that we would be fundamentally better off if we had immediate access to all of humanity’s information — ignores [how we develop knowledge]. It provides facts, but elides conclusions and connections. What’s worse, it starves us of opportunities to use our skill for critical thinking, and since it is a skill and is therefore something that must be developed and practiced, it starves us of the chance to develop it.

I find that troubling. Glass is not a technology that is designed to amplify our own innate abilities as humans or to make us better as humans, but rather one that acts as a crutch to lean on in place of exercising the very thing that makes us human. I don’t find that exciting. I find that disturbing.

And then, from Smart cities and smart citizens, an editorial in Sustain Magazine (which I’ll reference more over the coming days):

Furthermore, [Dan Hill, CEO of Fabrica] argues that current smart-systems thinking could lead us down a dangerous path towards passive citizens. As citizens — and city leaders — devolve their decision-making and responsibility to technology, their awareness of their environment diminishes in line with their ability to do something about it.

“If you automate too much stuff, people stop thinking about the issues. Yes, it might be more efficient to make the lights go off automatically, but it stops us thinking about it, we’re not engaged — and when we’re disengaged that’s not a good idea. We want people to think about something like carbon. Besides, we can turn the lights off on the way out — it’s entirely possible, we’re quite a smart species potentially!”

I find it fascinating how the Internet sometimes feels like one organism, always thinking and debating the same issues from many different angles. From Google Glass to architecture to self-driving cars, it seems that we’re currently collectively worried about the impact of smart technologies on our lives.

The problem with letting algorithms make most of our decisions

Image source: Knight Rider’s KITT – My finished replica!

Nicholas Carr asks some serious questions about things like self-driving cars and our increased reliance on algorithms for decision-making in Moral code:

As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them?

Clive Thompson picks up the thread in a very interesting Wired article called Relying on Algorithms and Bots Can Be Really, Really Dangerous:

The truth is, our tools increasingly guide and shape our behavior or even make decisions on our behalf. A small but growing chorus of writers and scholars think we’re going too far. By taking human decision-making out of the equation, we’re slowly stripping away deliberation—moments where we reflect on the morality of our actions.

But even stepping away from the morality issues, there are some other undesirable side-effects to algorithmic decision-making:

Or as Evan Selinger, a philosopher at Rochester Institute of Technology, puts it, tools that make hard things easy can make us less likely to tolerate things that are hard. Outsourcing our self-control to “digital willpower” has consequences: Use Siri constantly to get instant information and you can erode your ability to be patient in the face of incomplete answers, a crucial civic virtue.

The argument is that smart technology has the potential to strip us of our grit. And that’s a big problem, particularly if you subscribe to what author Paul Tough calls “the character hypothesis”: the notion that noncognitive skills, like persistence, self-control, curiosity, conscientiousness, grit and self-confidence, are more crucial than sheer brainpower to achieving success.

The hypothesis is that character is created by encountering and overcoming difficult situations. Therefore one of the big dangers of algorithms making our decisions for us is that by removing challenges from our lives, they reduce our ability to develop grit and build character. It’s like the Axiom from WALL-E, but for our brains.

Update: I came across a couple more articles about these issues. See More on algorithmic decision-making.

Improve prioritized feature lists by adding more dimensions

Ken Norton wrote a really nice post about the problem with prioritized feature lists in product development, using his team’s early work on Google Docs as an example. Specifically, here is the problem he highlights in Babe Ruth and Feature Lists (Why Prioritized Feature Lists Can Be Poisonous):

Our wish list approach also created false equivalence. There was a huge chasm between what #1 meant to us and what it meant to our users. For us, it was first amongst equals. To them it was a painful tumor overdue for removal.

Orders of magnitude separated #1 from the rest of the list. That urgency didn’t come through until we got a bunch of them in the room and let them vent.

It’s worth reading the whole post before continuing, because the context is important. Ken highlights a really important point about prioritized feature lists — they are too one-dimensional to give you enough confidence about the product development decisions you’re making. That’s why it’s so important to plot priorities on multiple dimensions to aid in decision-making.

The method that springs to mind immediately in the case of the Google Docs example is something I’ve written about before, the Kano model. Ken explains that even though they saw text formatting within Google Docs as an important feature to work on, users saw it as a major frustration, and a fundamental bug that needed to be fixed.

Plotting their list of features using the Kano model could have highlighted this disconnect earlier in the process. To recap, the Kano model, developed in the 1980s by Professor Noriaki Kano for the Japanese automotive industry, is a helpful method to prioritize product features by plotting them on the following two-dimensional scale:

  • How well a particular user need is being fulfilled by a feature
  • What level of satisfaction the feature will give users

The model is generally used to classify features into three groups:

  • Excitement generators. Delightful, unexpected features that make a product both useful and usable.
  • Performance payoffs. Features that continue to increase satisfaction as improvements are made.
  • Basic expectations. Features that users expect as a given — if these aren’t available in a product, you’re in trouble.

Here is a visual representation of the Kano model:

The Kano Model
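To make the classification concrete, here is a minimal sketch in Python of how features might be sorted into the three Kano groups. The feature names, the satisfaction scores, and the thresholds are all hypothetical illustrations, not taken from Ken’s actual list or from any formal Kano survey methodology:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    satisfaction_if_missing: int    # user reaction when the need is unmet (-5..5)
    satisfaction_if_fulfilled: int  # user reaction when the need is well met (-5..5)

def kano_category(f: Feature) -> str:
    """Classify a feature into one of the three Kano groups (illustrative thresholds)."""
    if f.satisfaction_if_missing <= -3:
        # Absence actively frustrates users: it's a given, not a differentiator.
        return "Basic expectation"
    if f.satisfaction_if_missing >= 0 and f.satisfaction_if_fulfilled >= 3:
        # Nobody misses it, but its presence delights.
        return "Excitement generator"
    # Otherwise, satisfaction rises roughly in step with how well the need is met.
    return "Performance payoff"

features = [
    Feature("Text formatting", -5, 1),
    Feature("Sharing", -1, 3),
    Feature("Offline editing", 0, 4),
]
for f in features:
    print(f"{f.name}: {kano_category(f)}")
```

In this sketch, Text formatting lands in Basic expectations because its absence scores strongly negative, which mirrors the disconnect Ken describes: to his users it was a tumor, not a nice-to-have.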

Now, let’s take Ken’s original list of prioritised features and plot them on the Kano model. Obviously I don’t have the background that the team would have had, so this is just a guess to illustrate the point:

Kano model for Google Docs

With these added dimensions (and let’s assume there are 10 or more features on this graph), Product Managers can begin to make plans for what they need to work on by always keeping a balance between Basic expectations, Excitement generators, and Performance payoff features.

Now, I’m sure we could quibble about where exactly each feature should be plotted, but it’s immediately clear from this example that Formatting is a Basic expectation that needs to be very well met for Google Docs to get even close to parity with other word processors. That’s where they needed to start, and only once the basic expectations were met should they have moved on to adding additional features to Formatting. At the same time, working on features like improved Sharing and Commenting would also have shown users that the product continues to get better over time (while generating some much-needed excitement).

So, with that said, I’d like to add a bit of a “yes, and” to Ken’s post. Yes, prioritized feature lists can be poisonous, and one way to make sure that doesn’t happen is to add additional dimensions to the list to improve the accuracy of the decision-making. The Kano model is one approach, and I also discuss a couple more in The ultimate product question: How do you know what’s important?

Demo Mode vs. Reality Mode in product development

Rebekah Cox wrote a great post discussing the difference between product Demo Modes (in-store displays, on-stage demos that work without a glitch, picture-perfect product intros) and what she calls Reality Mode:

Reality mode takes time, iteration, data and user research. It takes honestly using what you’ve created and putting it through its paces. It takes asking yourself “is it useful?” and honestly answering. The result may not align with conventional wisdom, you may have to sacrifice that clever hook everyone comments about (but is ultimately useless or worse) and the demo may be boring or nonexistent. But by using your product in the real world and thinking about its true utility and value, you may end up with an enduring product where people are delighted through consistently delivered value instead of just a cool demo.

It reminds me of Marco Arment’s assessment that Facebook Home is “designed for optimal input and failed to consider real-world usage.” This is a very real problem in product development. It takes courage to look your darling project in the eye, find it wanting, and admit to it.
