
Throwing the cards out with the bath water

Michael Andrews brings up some interesting concerns about cards in Are UI Cards good for content? However, I’m left thinking that most of the “cons” in the article are actually Information Architecture problems, not problems with the card metaphor:

UI cards can contribute to content usability problems that may not be immediately evident.  Users often like UI cards when they encounter them, and don’t notice their limitations.  They see tidy cards often with colorful thumbnail images.  The cards seem optimized to make good first impressions.  But often, the cards end up squashing the content that must go in them, or omitting content details that don’t fit the layout vision.

If content is squashed and truncated in cards, that's not the card element's fault; it's the content's fault. It happens when content isn't written with its context of use in mind.

Here’s another example:

Another issue with UI cards is their lack of hierarchy. When all cards are the same size, all cards look equally important, whether they have detailed information, time sensitive information, sparse information, or optional information.

Again, visual hierarchy is a larger design problem, and not the fault of cards.

I point this out because it reminded me of one of the big dangers in design that we have to watch out for. We often see a UI issue and immediately switch out the pattern instead of trying to understand what the real problem is. It’s great if we can look at something we designed and say, “Hmm, that doesn’t work.” But we have to go further and also understand why it doesn’t work before we take the easy way out and replace the UI element.

If content is squashed on cards, is it because we used cards, or because we didn’t write the content concisely enough to be easily consumable in small spaces? If all cards look the same, should we stop using cards, or design different card types to address visual hierarchy better? I would argue that in both cases, the latter should at least be considered. Of course, if cards really are wrong for the interface, then burn them with fire. But be sure.
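As a rough illustration of that second option (just my own sketch, not something from Andrews’ article; names like CardVariant and renderCard are made up for the example), here’s how different card types could make hierarchy an explicit part of the content model instead of an accident of the layout:

```typescript
// A hypothetical sketch of card variants that encode importance, instead of
// squeezing every kind of content into one uniform card.
type CardVariant = "featured" | "standard" | "compact";

interface CardContent {
  title: string;
  summary?: string;       // optional: compact cards can drop it entirely
  timeSensitive?: boolean;
}

// Derive the variant from the content itself, so hierarchy comes from the
// content model rather than from whatever happens to fit the layout.
function chooseVariant(content: CardContent): CardVariant {
  if (content.timeSensitive) return "featured";
  if (!content.summary) return "compact";
  return "standard";
}

// Each variant gets an explicit summary budget. Content written for its
// context of use shouldn't need truncation at all, but the budget makes the
// constraint visible to whoever writes the copy.
const summaryBudget: Record<CardVariant, number> = {
  featured: 200,
  standard: 120,
  compact: 0,
};

function renderCard(content: CardContent): string {
  const variant = chooseVariant(content);
  const summary = (content.summary ?? "").slice(0, summaryBudget[variant]);
  return [
    `<article class="card card--${variant}">`,
    `  <h3>${content.title}</h3>`,
    summary ? `  <p>${summary}</p>` : "",
    `</article>`,
  ].filter(Boolean).join("\n");
}

// The time-sensitive card renders as "featured"; the sparse one as "compact".
console.log(renderCard({
  title: "Maintenance window tonight",
  summary: "Expect downtime from 22:00 UTC.",
  timeSensitive: true,
}));
console.log(renderCard({ title: "Archived report" }));
```

That’s obviously not the only way to do it, but it shows the shape of the alternative: the card pattern stays, and the hierarchy problem gets solved in the content model and the variants, not by throwing the pattern out.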


The importance of non-makers

Debbie Chachra wrote a great essay on our current obsession with the word “Maker”, and how it devalues the work of educators and caregivers. From Why I Am Not a Maker:

When new products are made, we hear about exciting technological innovation, which are widely seen as worth paying (more) for. In contrast, policy and public discourse around caregiving—besides education, healthcare comes immediately to mind—are rarely about paying more to do better, and are instead mostly about figuring out ways to lower the cost.

The logistics of usability testing on mobile devices

I’m currently working on the design of a native app for Jive Software, and we really wanted to do some usability testing on a prototype before development starts. There are, of course, a multitude of reasons to do usability testing on prototypes, but we had some very specific issues we wanted to address:

  • We are a distributed team. Product Management is in Palo Alto, Product Design is in Portland, and Development will happen in our Tel Aviv office. So we knew from the beginning that a functional prototype would be essential to communicate within the team. Flat wireframes / PSDs just weren’t going to cut it.
  • We’re working on something in a fairly new market, so we need to do quite a bit of validation to make sure we get the utility of the app right.
  • And of course, our designs are never as good as we think they are, so we wanted to make sure we correct the majority of the usability issues before development starts.

I read a lot about prototyping native apps and mobile usability testing, and since everyone’s process is different, I wanted to give an overview of the one we settled on. We’re really happy with it so far. It was very important to me that all our offices would be able to observe the usability tests, so that guided a lot of the decision-making. Here’s how we’re doing it…

Prototyping

After reviewing and trying out every prototyping tool known to man, I settled on Proto.io for this project. There are so many great options out there that it’s hard to go wrong with your choice of tool. Proto.io was the best fit for me because of a few key features:

  • I needed full interactivity — since we’d be using the prototype for usability testing, it needed to feel as real as possible.
  • I needed lots of flexibility in the animations/transitions supported, since we’re in new territory and need to try out lots of different things. Proto.io supported every interaction I threw at it.
  • I needed to use a mix of built-in components and my own assets, and Proto.io handles that well.

Of course, I can’t show what the prototype looks like at the moment, but as soon as the app launches I’ll update the post and embed it here.

Mobile usability testing rig

I got lost in mobile usability testing guides for days — there are so many good ways to do it. At first we considered remote testing, but it just wasn’t a good option for us because we were going to do part IDI (in-depth interview) and part usability testing, so we needed a way to be in the room with users and dig deep into certain areas.

I’ve seen some crazy setups — my favorite and weirdest is probably MailChimp’s “hug your laptop” idea. It’s a brilliant hack, but I was worried our less tech-savvy users would have trouble with it, so I needed another solution.

I ended up going with Bowmast’s Mr. Tappy kit, and I attached a Logitech HD Pro Webcam C920 to it. We played around with it in the office first to make sure it was going to work.

With that out of the way, it was on to the next challenge — how to stream it everywhere.

Streaming to observation rooms

This part was much easier than I thought, simply because we already use Vidyo for videoconferencing in all our offices. Every time I ran a usability test I would start a Vidyo meeting, and people from our other offices could dial in from their observation rooms. They could see the room on one screen, and the participant’s phone on another. It worked like magic.

What we ended up with is a setup where I can do usability testing in person on mobile devices, record the sessions, and have people observe these sessions from anywhere in the world. It was an incredibly productive few days, and I’m now working my way through fixing all the usability issues we picked up. Can’t wait to show this to you when it goes live!


Where the product buck stops

There is so much wisdom in Paul Adams’ Lessons learned from scaling a product team, but my favorite part is the clear articulation of accountability. All companies should have something like this:

  • If the analysis of the problem to be solved is incorrect, it’s on the PM. Ensure appropriate research is done.

  • If the design doesn’t address the problem, it’s on the Designer. Ensure you understand the research and problem.

  • If the design solves the problem, but doesn’t fit with Intercom, deliver best practices, or is otherwise weak, it’s on the Designer. Ensure you understand our beliefs, patterns and principles.

  • If engineering doesn’t deliver what was designed, or delivers it late, it’s on the Eng Lead. Ensure you understand the problem being solved and design, plan appropriately and accurately before writing code.

  • If it goes out with too many bugs and broken use cases it’s on the PM. Ensure the team test realistic usage and edge cases.

  • If the team is spending too much time on fixing bugs and not adding new value per our roadmap, it’s on the Eng Lead. Ensure each project improves overall code quality.

  • If we don’t know how it performed, it’s on the PM. Ensure success criteria are defined and instrumented.

  • If it doesn’t solve the problem, it’s on the PM. Ensure there is a plan to improve product changes that don’t fully solve the problem.

In other words, the buck mostly stops with Product Management…

Real markets vs. Expectation markets

Putting the linkbait title aside, Steve Denning’s The Dumbest Idea In The World: Maximizing Shareholder Value [1] is a really interesting article about the difference between “real markets” and “expectations markets”:

In today’s paradoxical world of maximizing shareholder value, which Jack Welch himself has called “the dumbest idea in the world”, CEOs and their top managers have massive incentives to focus most of their attentions on the expectations market, rather than the real job of running the company producing real products and services.

And that comes at the expense of customers. This sentence also stood out for me:

Unfortunately, as often happens with bad ideas that make some people a lot of money, the idea caught on and has even become the conventional wisdom.


  1. The link is to the print version because the Forbes site is so unreadable.

Big data and human intervention

In Netflix’s Secret Special Algorithm Is a Human, Tim Wu writes about the importance of human intervention in data-driven decision making:

Of course, there is a big difference between using data in combination with intuition and relying entirely on an algorithm—the decision-making equivalent of Siri finding gas stations near you. I don’t think anyone—Netflix, Mitt Romney—makes big decisions that way. As Chris Kelly, the C.E.O. of Fandor, an indie-film Internet channel told me, “It just isn’t true that you can rely on data completely.” Even Google, the champion of algorithms, employs substantial human adjustments to make its search engines perform just right. (It cares so much about this that Google claims First Amendment protection for its tweaks.) I do not doubt that companies rely more on data every day, but the best human curators still maintain their supremacy.

It’s a good reminder that following data blindly is a pretty bad idea. Joshua Porter’s Metrics Driven Design is still the best presentation I’ve seen on this topic and how it relates to design.

Algorithms aren’t gods

In The Cathedral of Computation, Ian Bogost makes the argument that algorithms have replaced religion for many people:

Here’s an exercise: The next time you hear someone talking about algorithms, replace the term with “God” and ask yourself if the meaning changes. Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.

It’s a long article but very much worth reading, especially for the conclusion:

Algorithms aren’t gods. We need not believe that they rule the world in order to admit that they influence it, sometimes profoundly. Let’s bring algorithms down to earth again. Let’s keep the computer around without fetishizing it, without bowing down to it or shrugging away its inevitable power over us, without melting everything down into it as a new name for fate. I don’t want an algorithmic culture, especially if that phrase just euphemizes a corporate, computational theocracy.

But a culture with computers in it? That might be all right.

Software repair

Richard Seroter’s 10 Architecture Tips From “The Timeless Way of Building” is highly relevant to software development as well:

“Each building when it is first built, is an attempt to make a self-maintaining whole configuration … But our predictions are invariably wrong … It is therefore necessary to keep changing the buildings, according to the real events which actually happen there.” (p. 479-480) The last portion of the book drives home the fact that no building (software application) is ever perfect. We shouldn’t look down on “repair” but instead see it as a way to continually mature what we’ve built and apply what we’ve learned along the way.

Just as buildings need “repair”, software takes iteration to get right.
