
Client collaboration in UX consulting

Baruch Sachs makes some good points in Part 1 of a series called Practicing Great UX Consulting:

You are not there to educate people. You are there to advise your client and guide the creation of an amazing user experience. You are the expert; that’s why they brought you in. Collaboration and openness are key here. People need to feel invested, not put upon.

I agree that collaboration is key; the problem comes when collaboration gets confused with consensus. Consensus cultures often produce watered-down, unexciting products, where endless rounds of give-and-take have worn the original idea down to a shadow of what it once was. Consensus cultures also wear down the teams working on the product, because no one really gets what they want; they just get some of it.

Collaboration is different. In collaboration cultures people understand that even though everyone gets a voice, not everyone gets to decide. People are able to voice their opinions, argue passionately for how they believe things should be done, and try to negotiate compromises. But it certainly doesn’t mean that everyone has to agree with every decision. And that’s why the client/agency trust relationship is so crucial.

One great collaboration technique that works well with clients is called Design Studio. Jared Spool has a great write-up of how to conduct a Design Studio workshop, and it’s also worth reading Paul Boag’s thoughts in Never wireframe alone.

The dying art of the small magazine ad

In Small Ads, Dushko Petrovich turns our attention away from Facebook and Google to take a philosophical journey through those small, weird, completely non-targeted block ads that appear in magazines:

At first you think these little rectangles are amusing because they offer monogrammed sweaters and self-publishing opportunities—things that are undoubtedly funny, in a sad, Skymall sort of way. But sometimes the funny sadness goes deeper than that, like the sadness of “unique diamond fish jewelry” for $15,000. And then sometimes you are plunged so deep into these ads, you wish there was a German word, or school of social thought, that could sufficiently describe the experience.

There might not be a German word for the experience, but Dushko’s troubled thoughts on the matter are entertainment enough.

Why you shouldn’t use introductory tours in apps

Luke Wroblewski explains why those overlay introductory “tour” screens you often see in apps are such a bad idea in Designing for real world mobile use:

These issues stem from the fact that introductory tours show up before you ever get a chance to use an application. Most people are eager to jump right in and as a result, they skip reading the manual. The ones that do read haven’t seen the interface yet so they don’t have any sense of where and how the tips they’re learning will apply.

I completely agree, and have written about this before in Best practices for user onboarding on mobile touchscreen applications, where I give three guidelines for app onboarding (a rough code sketch follows the list):

  • Make sure that users have familiarized themselves enough with your app to have the correct mental model before you start teaching them how to use it.
  • Show users only the information they need to take the immediate next step(s) for using the application.
  • Make sure users have a clear link between the information you give them and how to access/use that information in their everyday use of the app.
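To make these guidelines concrete, here is a minimal sketch of “just-in-time” onboarding in TypeScript. It assumes a web or hybrid app with localStorage available; the storage key and function names are my own illustrative inventions, not from any particular library:

```typescript
// A sketch of "just-in-time" onboarding: show a single contextual tip
// the first time the user reaches the screen it applies to, then never
// again. SEEN_TIPS_KEY and the function names are illustrative.

const SEEN_TIPS_KEY = "onboarding.seenTips";

// Read the list of tip IDs the user has already seen.
function seenTips(): string[] {
  return JSON.parse(localStorage.getItem(SEEN_TIPS_KEY) ?? "[]");
}

// Persist a tip ID so the tip is never shown twice.
function markTipSeen(tipId: string): void {
  const seen = seenTips();
  if (!seen.includes(tipId)) {
    seen.push(tipId);
    localStorage.setItem(SEEN_TIPS_KEY, JSON.stringify(seen));
  }
}

// Call this from the screen the tip belongs to, so the user already has
// the surrounding context (guideline 1) and the tip maps directly onto
// the next action they can take (guidelines 2 and 3).
export function maybeShowTip(tipId: string, render: () => void): void {
  if (!seenTips().includes(tipId)) {
    render();
    markTipSeen(tipId);
  }
}
```

A call site might look like maybeShowTip("swipe-to-archive", () => showTooltip("Swipe left to archive")), where showTooltip stands in for whatever tooltip component the app uses. The important part is the trigger: the tip renders inside the screen it explains, after the user has enough context to make sense of it, rather than in a launch-time overlay tour.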

Smart cities and wealth creation

Rick Robinson wrote a really interesting article on the huge differences in life expectancy between the wealthiest and poorest areas of a city, and how the move to Smart Cities is trying to combat them. From Death, life, and place in great digital cities:

At the heart of the Smarter Cities movement is the belief that the use of engineering and IT technologies, including social media and information marketplaces, can create more efficient and resilient city systems. Might that idea offer a way to address the challenges of supporting wealth creation in cities at a sustainable rate of resource usage; and of providing city services to enable wellbeing, social mobility and economic growth at a reduced level of cost?

Rick goes on to explain some counter-intuitive dangers of this approach, and concludes:

We are opening Pandora’s box. These tremendously powerful technologies could indeed create more efficient, resilient city systems. But unless they are applied with real care, they could exacerbate our challenges. If they act simply to speed up transactions and the consumption of resources in city systems, then they will add to the damage that has already been done to urban environments, and that is one of the causes of the social inequality and differences in life expectancy that cities are seeking to address.

It’s a long, dense article, but it provides a much-needed realistic view of the power of technology to transform cities and the people who live there. The article also taught me this really good principle of urbanism:

Consider urban life before urban space; consider urban space before buildings.

That immediately jumped out at me as a good principle in software development as well: Consider user needs before applications; consider applications before individual pages.

On the topic of Smart Cities, also see Smart cities and smart citizens, a very interesting write-up about this year’s FutureEverything summit. It makes a similar point about the importance of life over buildings:

Perhaps part of the problem in current dialogues around smart cities is the failure to understand what a city actually is. The smart city vision has tended to focus on buildings and infrastructure or traffic management and how technology can increase efficiency. Catherine Mulligan of Imperial College London says the reverential tones with which some smart-city speculators talk about technology is worrying: “They say these systems and computers can now make better decisions than human beings. But if you take the human beings out, it’s just a bunch of buildings talking to each other… and that’s not a city. The city is what it is because of the people.”

[Sponsor] Radium: a new way to listen to internet radio

Radium is a new way to listen to internet radio. It sits in your menu bar and stays out of your way. And it just works.

With its clean user interface and album cover display, you’re always just a click away from beautiful sounds. Add your favorite tracks to the wish list and check them out later on the iTunes Store. Take the sounds with you using Radium’s built-in AirPlay streaming support. It’s all there.

With the proliferation of services like Spotify and Pandora, why choose Radium? Because with Radium, you don’t have to build up playlists, constantly answer questions about your music preferences, or navigate a cumbersome user interface. Radium is all about the sounds. And these sounds come from over 6000 free stations, maintained and curated by real people like you.

Available for $10 on the Mac App Store. Check it out.


Sponsorship by The Syndicate.

How new features can hurt your product

Within a week, two articles came out about resisting the urge to add new features quickly after a product is launched. First, Julie Zhuo makes the case for slow, small launches with clear “sunset” criteria in The tax of new, because of the inherent cost of maintaining and improving new features:

The tax that comes with introducing any new feature into your product is high. I cannot stress this enough. Sure, maybe the new feature isn’t hard to build, maybe it only takes a couple days and a handful of people, maybe it can be shipped and delivered by next week. And maybe the additional cognitive load for a user isn’t high — it’s just an extra icon here, after all, or an extra slot in a menu there. But once your new feature is out there, it’s out there. A real thing used by real people.

Jared Spool then wrote Experience Rot, focusing more on the UX and technical debt issues introduced by new features:

The moment a new feature is added — one that wasn’t considered in the initial design — the rot starts to take hold. It’s at this moment that a rethinking of the design has to happen and the seeds of complexity are laid.

If that new feature is close in style and function to the original set of features, the experience rot may not be visible. Yet, because it needs to be retrofit into the original design, it starts down the inevitable road.

As more features are added, it becomes harder to make the overall design coherent and sensical. Soon features are crammed into corners that don’t make sense.

It’s interesting to hear the same conclusion drawn from different perspectives, so it’s worth reading both articles.
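As a side note, Zhuo’s “clear sunset criteria” can be made concrete in code. The sketch below assumes a simple percentage-based feature flag; the types, names, and thresholds (FeatureFlag, minAdoptionRate, and so on) are my own illustrative assumptions, not from either article:

```typescript
// A sketch of a slow, small launch with explicit sunset criteria:
// the feature ships behind a flag to a small slice of users, and the
// conditions for removing it are decided before launch.

interface FeatureFlag {
  name: string;
  rolloutPercent: number;   // start small, e.g. 5 (= 5% of users)
  launchedAt: Date;
  reviewAfterDays: number;  // when to evaluate the sunset criteria
  minAdoptionRate: number;  // e.g. 0.2: 20% of exposed users must use it
}

// Stable bucketing: the same user always lands in the same bucket,
// so a small rollout stays consistent between sessions.
function hashToPercent(s: string): number {
  let h = 0;
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100;
}

function isEnabled(flag: FeatureFlag, userId: string): boolean {
  return hashToPercent(`${flag.name}:${userId}`) < flag.rolloutPercent;
}

// Past the review window and under the adoption bar: sunset the feature
// rather than paying its maintenance tax indefinitely.
function shouldSunset(flag: FeatureFlag, adoptionRate: number, now: Date): boolean {
  const ageDays = (now.getTime() - flag.launchedAt.getTime()) / 86_400_000;
  return ageDays >= flag.reviewAfterDays && adoptionRate < flag.minAdoptionRate;
}
```

The specific numbers would differ per product; the point is that the removal decision is written down before launch, so the tax Zhuo describes has a bounded lifetime instead of accruing indefinitely.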

More on algorithmic decision-making

Yesterday I posted The problem with letting algorithms make most of our decisions, discussing how removing all knowledge obstacles can make us less adept at dealing with challenges. As is often the case, within a few hours of posting it I came across two more articles that address the same issues. First, from Kyle Baxter’s very interesting essay On the Philosophy of Google Glass:

Page’s idea — that we would be fundamentally better off if we had immediate access to all of humanity’s information — ignores [how we develop knowledge]. It provides facts, but elides conclusions and connections. What’s worse, it starves us of opportunities to use our skill for critical thinking, and since it is a skill and is therefore something that must be developed and practiced, it starves us of the chance to develop it.

I find that troubling. Glass is not a technology that is designed to amplify our own innate abilities as humans or to make us better as humans, but rather one that acts as a crutch to lean on in place of exercising the very thing that makes us human. I don’t find that exciting. I find that disturbing.

And then, from Smart cities and smart citizens, an editorial in Sustain Magazine (which I’ll reference more over the coming days):

Furthermore, [Dan Hill, CEO of Fabrica] argues that current smart-systems thinking could lead us down a dangerous path towards passive citizens. As citizens — and city leaders — devolve their decision-making and responsibility to technology, their awareness of their environment diminishes in line with their ability to do something about it.

“If you automate too much stuff, people stop thinking about the issues. Yes, it might be more efficient to make the lights go off automatically, but it stops us thinking about it, we’re not engaged — and when we’re disengaged that’s not a good idea. We want people to think about something like carbon. Besides, we can turn the lights off on the way out — it’s entirely possible, we’re quite a smart species potentially!”

I find it fascinating how the Internet sometimes feels like one organism, always thinking about and debating the same issues from many different angles. From Google Glass to architecture to self-driving cars, it seems we are currently collectively worried about the impact of smart technologies on our lives.

The problem with letting algorithms make most of our decisions

[Image: Knight Rider’s KITT. Source: Knight Rider’s KITT – My finished replica!]

Nicholas Carr asks some serious questions about things like self-driving cars and our increased reliance on algorithms for decision-making in Moral code:

As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them?

Clive Thompson picks up the thread in a very interesting Wired article called Relying on Algorithms and Bots Can Be Really, Really Dangerous:

The truth is, our tools increasingly guide and shape our behavior or even make decisions on our behalf. A small but growing chorus of writers and scholars think we’re going too far. By taking human decision-making out of the equation, we’re slowly stripping away deliberation—moments where we reflect on the morality of our actions.

But even setting aside the morality issues, there are other undesirable side effects of algorithmic decision-making:

Or as Evan Selinger, a philosopher at Rochester Institute of Technology, puts it, tools that make hard things easy can make us less likely to tolerate things that are hard. Outsourcing our self-control to “digital willpower” has consequences: Use Siri constantly to get instant information and you can erode your ability to be patient in the face of incomplete answers, a crucial civic virtue.

The argument is that smart technology has the potential to strip us of our grit. And that’s a big problem, particularly if you subscribe to what author Paul Tough calls “the character hypothesis”: the notion that noncognitive skills, like persistence, self-control, curiosity, conscientiousness, grit and self-confidence, are more crucial than sheer brainpower to achieving success.

The hypothesis is that character is built by encountering and overcoming difficult situations. One of the big dangers of letting algorithms make our decisions for us, then, is that by removing challenges from our lives they reduce our ability to develop grit and build character. It’s like WALL-E’s Axiom for our brains.

Update: I came across a couple more articles about these issues. See More on algorithmic decision-making.
