Archive for the ‘Data Design’ Category

Four articles recently published on the Harvard Business Review Blog Network thoughtfully address the good, the bad, and the ugly of data visualizations.

In The Power of Visualization’s “Aha!” Moments (March 19, 2013), Scott Berinato interviews Amanda Cox, graphics editor at the New York Times.  Cox states, “I wish there were more examples where dataviz actually mattered.”

Cox asserts that “[t]he ability to ask good questions is really what we start with.”  Cox points to the critical issue for any data visualization, whether a bar or line chart or a poster with numbers: what question does it answer?

Data viz is both young and not young. It’s still rapidly changing, so I’m hoping it gets more awesome rapidly. But we’re already at a place where we can make people understand what they didn’t understand. Now we want to make people understand what no one has understood before. The best visualizations cause you to see something you weren’t expecting, and allow you to act on it.

Bill Franks provides some examples of good visuals in The Value of a Good Visual: Immediacy (March 19, 2013).

Our brains are meant to see in pictures. Grids and columns of data, while ubiquitous, make it very difficult to see trends or patterns. Additionally, a lot of the new data sources available today, such as genetic data or social network data, don’t lend themselves to traditional spreadsheets and graphs. These data types require a different way of displaying them to allow us to see the underlying patterns and stories in the data.

I agree with Franks that certain data sets are better represented with data visualization tools.  The challenge is finding the appropriate analysis tool (visualization) for the data.  Sometimes a bar or line chart or a table is sufficient to answer the “good question” that started the process.
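To make that concrete with a toy example (all numbers invented), the “good question” might be as simple as “are monthly signups trending up?”, which a one-line summary statistic can answer without any elaborate graphic:

```python
# Toy data: six months of signup counts (invented for illustration).
signups = [120, 132, 128, 141, 150, 158]

# The average month-over-month change answers "are signups trending up?"
deltas = [later - earlier for earlier, later in zip(signups, signups[1:])]
avg_change = sum(deltas) / len(deltas)
print(avg_change)  # positive means an upward trend; here, 7.6 per month
```

If the question were instead about seasonality or outliers, a chart would earn its keep; the point is to match the tool to the question.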

As an example of bad visuals, Gardiner Morse ardently reprises It’s Time to Retire ‘Crap Circles’ (March 19, 2013).

Every time I encounter a crap circle my heart sinks. I first wrote (http://hbr.org/2005/11/crap-circles/ar/1) about these contemptible “information” graphics in HBR in 2005, and since then they’ve only seemed to multiply. You know what these are — you may have even used them — though you may not have had a name for them. I aim to change that. These pernicious circles-and-arrows diagrams infest PowerPoint and other business presentations, purporting to clarify an idea while actually obscuring it.

The next time you find yourself preparing a circle for a presentation, ask yourself if the process you’re describing really works the way you say it does. And the next time a presenter trots out a circle to make a point, find the bogus links and put him on the spot. We could all benefit from a little more linear thinking.

We’ve Reached Peak Infographic, and We’re No Smarter for It concludes Dylan C. Lathrop (March 19, 2013). Lathrop begins, “If I were to chart the evolution of my attitude toward infographics over time, it would start with a soaring arc, dip and rise, then drop into a steady flat line.”

Lathrop asserts, “It’s time we acknowledge the shortcomings of infographics as much as we celebrate their upsides.”

Our data is growing more complicated just as readers are getting less patient. Even the best illustration can’t bridge the comprehension gap.

Today, many people just don’t want to [read]. There’s never been more data at our fingertips, but most of us have trouble making sense of that glut of information unless it’s shaped into cohesive nuggets. Enter the modern infographic, which has moved away from the elegant simplicity of the Isotype icons in favor of communicating entire data sets in one smartly designed package.

Lathrop concludes,

Infographics can evolve by transcending cold data-breakdown, and combining data visualization with more human narratives. Some publications have begun to present well-designed information in tandem with deeply reported pieces online, and the future it represents is thrilling. I’m not ready for an infographic about the death of infographics, but I’m sure someone somewhere has already assigned that piece, and is just waiting for us all to click.


Read Full Post »

We are bombarded by big data, data visualization, and infographics. In attempting to join in, a common mistake when starting to use data in program evaluation or other organizational measures is to collect data that is easy to gather and measure. However, this often misses the “so what”; i.e., what does the data tell us about how we’re doing?

Beth Kanter in Big Data Without Defining Success First is a Big Mistake on September 28, 2012 clearly states,

But I think jumping into a process: “Gather, Analyze, and Act” without defining success (or failure) on the front end might lead to wasted time.

Kanter provides an example from DoSomething.org. For many nonprofits the most difficult step, yet the keystone, is describing what success looks like. However, this article does not specifically address this key step.

In To Succeed with Big Data, Start Small (October 3, 2012) on the HBR Blog Network, Bill Franks writes,

While it isn’t hard to argue the value of analyzing big data, it is intimidating to figure out what to do first. There are many unknowns when working with data that your organization has never used before — the streams of unstructured information from the web, for example. Which elements of the data hold value? What are the most important metrics the data can generate? What quality issues exist? As a result of these unknowns, the costs and time required to achieve success can be hard to estimate.

Franks’ advice is to start with simple “analytics that don’t take much time or data to run.”

Pursuing big data with small, targeted steps can actually be the fastest, least expensive, and most effective way to go. It enables an organization to prove there’s value in major investment before making it and to understand better how to make a big data program pay off for the long term.

His advice is useful if the organization has completed the critical step of defining success. That is, once the organization can describe what success looks like, the next step is to determine how success can be measured. I recommend then applying Franks’ advice to begin with small steps to develop metrics.
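As a hedged sketch of what “start small” might look like in practice (the data and the conversion metric below are invented for illustration, not taken from Franks’ article): estimate a candidate metric from a small random sample before committing to a full-scale pipeline.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Stand-in for a huge, never-before-analyzed log of web sessions:
# True means the session converted. In reality this would be streamed,
# not held in memory.
big_log = [random.random() < 0.04 for _ in range(1_000_000)]

# A small, cheap sample is often enough to decide whether a candidate
# metric is promising before investing in major infrastructure.
sample = random.sample(big_log, 1_000)
estimate = sum(sample) / len(sample)
print(f"estimated conversion rate: {estimate:.3f}")
```

The design choice mirrors Franks’ argument: a quick estimate on a thousand records can reveal whether the metric holds value at all, at a fraction of the cost of processing the full data set.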

Robert Plant in Big Data Doesn’t Work If You Ignore the Small Things that Matter on the HBR Blog Network on October 5, 2012, states,

Big data is today’s panacea, the great new hope for unlocking the mysteries of marketing. To avoid being left behind, companies are rushing to cash in on the information they glean from customers, and vendors are stepping up to help.

Companies would do better at satisfying and retaining customers if they spent less time worrying about big data and more time making good use of “small data” — already-available information from simple technology solutions — to become more flexible, informative, and helpful.

And in the frenzy to capitalize on big data, don’t forget what it’s like to be a data point — an individual customer dealing with your company. If you’re not making your data points happy, they’ll gladly move into someone else’s database, just as you did after the repair service failed to show up.

Read Full Post »

Stephen Few on Dashboards

Control vs. the Illusion of Control: Which Works for You?  Following his recent announcement of the winners of his 2012 dashboard competition, Stephen Few has written another article about dashboards.  Few pointedly and clearly expresses his thoughts on certain aspects of dashboard design.

[I]t’s no more absurd than assuming that information displays for monitoring the performance of your business should look like gauges on a car. The part of the “dashboard” metaphor that works is the similarity in function between car dashboard gauges, which we use to monitor information about the car and our driving, and monitoring dashboards, which we use to monitor information about the organization’s performance. It is meaningless and downright absurd to stretch the metaphor any further.

If you’re interested in performance measurement and dashboards, I strongly suggest that you read his article directly.

Read Full Post »

2012 Perceptual Edge Dashboard Design Competition: We Have a Winner!

In his blog, Perceptual Edge, Stephen Few showcases the winner and runner-up of his dashboard design competition.  Few received 91 entries and judged them on fourteen weighted criteria: comprehensive information, highlighting important information, use of graphics whenever appropriate, choice and design of graphics, sufficient information to decide if action is necessary, hierarchy of importance by salience, support for comparisons, legibility, clear organization, hierarchy of importance by position, visibility without scrolling or paging, clear meaning, use of space, and scalable design.

Few describes the attributes of the winning dashboard so that you can see them for yourself in the winning dashboard image included in the blog post.  Few also includes the winner’s description of how he completed the competition exercise.  Further, Few includes the runner-up dashboard and a brief description of its winning attributes.

For examples of poor design in dashboards, you can view Few’s Examples page.

Read Full Post »

Credible Fonts

Maybe I’m an old fuddy-duddy by now, but I prefer to read serif text on a printed page.  I regularly encourage people to use a clean, crisp serif font rather than sans serif when sending correspondence because I have always found sans serif to be unprofessional, and I have tended to pay much less attention to content presented in sans serif.

Finally, my viewpoint on fonts may have been corroborated.

Suzanne LaBarre addresses the question Are Some Fonts More Believable Than Others? on the FastCompany website.  She summarizes the article, Hear, All Ye People; Hearken, O Earth (Part One) by Errol Morris, a frequent blogger at The New York Times.

Are some fonts more believable than others? A curious experiment by documentary filmmaker Errol Morris suggests as much. After polling approximately 45,000 unsuspecting readers on nytimes.com, Morris discovered that subjects were more likely to believe a statement when it was written in Baskerville than when it was written in Computer Modern, Georgia, Helvetica, Trebuchet, or Comic Sans. (LaBarre)

Morris begins his article by citing anecdotal evidence from a blog post, “The Secret Life of Fonts,”  by Phil Renaud.

Renaud had written 52 essays in total. Eleven were set in Times New Roman, 18 in Trebuchet MS, and the remaining 23 in Georgia. The Times New Roman papers earned an average grade of A-, but the Trebuchet papers could only muster a B-.

And the Georgia essays? A solid A.

Morris describes his experiment and the analysis of the results conducted by Benjamin Berman and Professor David Dunning at Cornell.  The results will knock your socks off.  Dunning comments,

Baskerville seems to be the king of fonts. What I did is I pushed and pulled at the data and threw nasty criteria at it. But it is clear in the data that Baskerville is different from the other fonts in terms of the response it is soliciting. Now, it may seem small but it is impressive.

The word that comes to my mind is gravitas. There are some fonts that are informal — Comic Sans, obviously — and other fonts that are a little bit more tuxedo. It seems to me that Georgia is slightly tuxedo. Computer Modern is a little bit more tuxedo and Baskerville has just a tad more starchiness. I would have expected that if you are going to have a winner in Baskerville, you are also going to have a winner in Computer Modern. But we did not. And there can be a number of explanations for that.

Baskerville is different from the rest. I’d call it a 1.5% advantage, in that that’s how much higher agreement is with it relative to the average of the other fonts. That advantage may seem small, but if that was a bump up in sales figures, many online companies would kill for it. The fact that font matters at all is a wonderment.
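For readers curious how one might judge whether a 1.5% advantage is more than noise, here is a hedged sketch of a two-proportion z-test — a standard check, not necessarily the analysis Berman and Dunning actually ran. The counts are invented, since the article reports only the roughly 45,000 respondents and the relative advantage:

```python
import math

def two_proportion_z(agree_a, n_a, agree_b, n_b):
    """z statistic for H0: both groups agree at the same rate (pooled SE)."""
    p_a = agree_a / n_a
    p_b = agree_b / n_b
    pooled = (agree_a + agree_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts: 7,500 readers saw Baskerville (about 61.5% agreed),
# 37,500 saw the other five fonts (60.0% agreed) -- a 1.5-point gap.
z = two_proportion_z(4613, 7500, 22500, 37500)
print(round(z, 2))  # well above 1.96, the usual 5% significance cutoff
```

With tens of thousands of respondents, even a gap as small as 1.5 percentage points clears conventional significance thresholds, which is consistent with Dunning’s point that a small effect at this scale is still impressive.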

Read Full Post »

There really is nothing to add to Josh Smith’s article, 10 Steps To Designing An Amazing Infographic, on the Co.Design website.  He begins with a clear and simple premise: “Information can be useful–and even beautiful–but only when it’s presented well.”

I’ll list the steps, but the article is a must-read for anyone interested in infographics.  There’s more to it than being able to recite the steps in order.

  1. Gathering Data
  2. Reading Everything
  3. Finding the Narrative
  4. Identifying Problems
  5. Creating a Hierarchy
  6. Building a Wireframe
  7. Choosing a Format
  8. Determining a Visual Approach
  9. Refinement and Testing
  10. Releasing it Into the World

Read Full Post »

Does a blank board give you a mental drawing block?

Beth Kanter’s blog post on June 1st, How to Get Insight from Data Visualization: SHUT UP and SLOW DOWN!, provides several ideas about thinking visually.  She offers a good reminder not to store data in silos, but rather to look at relationships in the data and to look for patterns.

On May 29th, Beth posted, Data Visualization Techniques for Those Who Can’t Draw.  Beth states,

I think there is a subtle distinction between “visual thinking” and drawing skills.    Visualization of data requires visual thinking to be meaningful and you don’t need to be Picasso to be a good visual thinker and share your ideas in a drawing.

The major problem or block to using visualization is that many people think drawing is just for artists and if you can’t draw well, then you shouldn’t bother to do it.  That’s an assumption you need to challenge.

Beth recommends David Sibbet’s Visual Meetings book and Dan Roam’s Blah Blah Blah: What To Do When Words Don’t Work.  I have Sibbet’s book and find it a useful resource.

Read Full Post »

