Archive for November, 2011

4 MORE Points of Wisdom from Steve Krug’s ‘Rocket Surgery Made Easy’

November 30, 2011

About a month ago, Newman wrote a post titled 4 Points of Wisdom from Steve Krug’s ‘Rocket Surgery Made Easy’. He was reading the book and wanted to share his insights. At the time, I was immersed in the James Gleick book ‘The Information’. And if you’re a regular reader, you know that led us on a week-long journey exploring how entropy is related to web design and user testing. Oh yes, it got serious.

Now it’s my turn to go through ‘Rocket Surgery Made Easy’ and I have 4 more points of wisdom that I learned from reading through the book.

#1. Prove vs. Improve

This was a bit of a revelation to me. When I think of user testing, I think of trying to make a website better. It never occurs to me that I’m trying to prove something. It just seems obvious that I would be trying to improve things. But when it comes to user testing, it’s possible to do both.

To put it in scientific terms, it’s quantitative testing vs. qualitative testing.

Quantitative testing involves creating a methodology that adheres to a strict testing protocol to ensure unbiased results. If that sounds like a science experiment, complete with a hypothesis and all of that, you’d be right. And because there’s a hypothesis and you’re looking for valid data, it sets out to prove something.

Qualitative testing is much less formal. It’s not focused on proving anything. Instead, its focus is on making things better.

Certainly, there’s room for both types of testing, but when it comes to actually doing the testing, for most businesses it will be more time- and cost-effective to concentrate on improving things. And doing that is as easy as asking for an opinion.

#2. Why Down-and-Dirty Qualitative Testing Gets Results

Deep down, we all know that nothing is free. So what gives here? Why does the testing type that requires less rigor, time, effort, and money seem to be the one that actually works? Simply put: it’s because ‘good enough’ is good enough. And by the time you’ve exhausted the insights from qualitative testing, you’ll be in a better position to do quantitative testing.

Krug lays out three reasons in particular why qualitative testing works:

1. All sites have problems

Settle down.

I’ve never come across a website that couldn’t use a little work. Apparently, neither has Steve Krug. One of the main reasons that water cooler user testing works is that there’s always room for improvement.

2. Most of the serious problems tend to be easy to find

In the rush of creating a website, it’s easy to get too close to the whole operation. A common issue comes from troubleshooting problems. These problems can be technical or organizational or even baked into the business plan. Eventually a solution is found and implemented. Sure, it went all Matrix on the problems and dodged every bullet, but that doesn’t mean the solution was the right one for the user.

The first hint that you're too close.

When you find a disinterested party and get feedback, the major issues will crop up again and again. You can’t see the forest for the trees. They can.

The forest from 'Return to Zork' still haunts my dreams.

3. Getting stakeholders involved in user testing gives them a reality check on who their users are

Another common mistake made during the planning phase of web design is designing for an ‘average user’. The problem is, that imagined average user tends to bear little resemblance to the actual average user. The obvious remedy is to go find some real ‘average users’. Technologically, we can do just that.

Use a tool like Inspectlet (which Newman has been using and blogging about) or one of the other ‘Heatmaps / Mouse Tracking Tools’ we have in the sidebar to record your users’ sessions. Share those videos with all the stakeholders and then stand back. Everybody will get a new insight about the ‘average user’ and will immediately want to talk about it. It’s pretty remarkable to watch, actually.

Who you think your users are.

Who your users actually are.

This has the wonderful effect of getting everybody to focus on the right thing: improving the user experience.

#3. Test other people’s websites

This is just brilliant. User testing can (and should!) happen even before the first napkin sketch is drawn. How? Test the websites of your competitors or of somebody else in the same field. Test sites that have features you’re thinking of implementing. A cup of coffee and a conversation could save you weeks of work.

Remember, this is not rocket surgery. It’s basically asking people their thoughts about a website. Nothing says that you can only ask people about your website. As Krug says, “someone has gone to the trouble of building a full-scale working prototype of a design approach to the same problems you’re trying to solve, and then they’ve left it lying around for you to use.”

#4. Test for ease of understanding

That was easy!

This bit seemed to echo the Smashing Magazine article ‘Easier is Better Than Better‘ we discussed on Monday’s podcast. The gist of the article is that:

“People choose not on the basis of what’s most important, but on what’s easiest to evaluate.”

Or more simply stated, “Don’t Make Me Think!“.

Websites do two things: provide information for a user to consume and provide a way to filter out all of the other information.

You may know these as the ‘filter’ and ‘confirming’ pages that we talk about again and again. Until you get to an information page (a YouTube video, a contact page, a product description page, MLS search results), every web page you pass through is a ‘filter’. Its main role is to get you to the information you’re there to view. The most obvious of these filter pages is the front page.

Most front pages exist to shuffle visitors off to other pages. They are not destination pages in and of themselves. The essence of filtering is clarity. And clarity can be measured by watching how people interact with a page. The easier it is to navigate the page, the lower the cognitive load and the higher the success rate.

People want ‘easy’ more than they want ‘better’. If your site isn’t easy, it won’t have a chance to show that it’s ‘better’ because finding another website is just as easy.

This testing can be done even at the napkin-sketch phase. Just ask somebody who isn’t involved with the project to tell you what they see in the sketch. Listen to what they have to say. Chances are they’ll say “oh, this looks like a site for ____ and what’s this ‘experience’ button?” or something to that effect, and you’ll immediately home in on what doesn’t make sense to your users.

Final Thoughts

I can see why this book is the go-to resource for easy and effective user tests. It maintains a laser-like focus on how to improve your website with user testing. It’s filled with good nuggets. Certainly enough to do at least a third installment of this series. I really can’t recommend it highly enough. It’s the perfect complement to the various user testing tools that we sample here on the site.

Better User Experience Podcast #16 – Bounce Rates, Inspectlet, and Decision Fatigue

November 28, 2011

Are you ready for some PODCAST?!  While the Giants and the Saints battle on the gridiron, Ben and I delve into the finer points of website Bounce Rates and the magic of Inspectlet user interaction videos – Yes, I just made that up, “User interaction videos”.  If you watch them, you’ll feel inspired as well.

Remember you can subscribe to our podcast on iTunes – new episodes each week!

Posts we reference in the podcast:

http://www.inspectlet.com

BUX – Using Bounce Rates in Google Analytics to Improve Your Web Site’s Critical Path

SmashingMagazine.com – Easier is Better than Better

NYT – Do You Suffer From Decision Fatigue?

Usability Observation with Inspectlet

November 25, 2011

Inspectlet Homepage - This week's usability tool review

First Impression with Inspectlet

We place Inspectlet in the categories of ‘Heatmaps / Mouse Tracking Tools’ and ‘Screen Capture Tools’. In the past, I had used Userfly, another tool in this category, and had been underwhelmed. The code snippet slowed down the page and I eventually took it off. I don’t want stuff to load from another site on my page.

So, I was a little leery of Inspectlet, but Ben was favorable in his survey of UX tools and we decided on this review. So, keeping an open mind, I started to use it.

Here it is: Inspectlet lets you observe people using your website – actual users on the actual site. Like Coca-Cola, it’s the real thing. Paste a code snippet on your site, and <magically> a recording shows up on the Inspectlet dashboard with a video along with key metrics – time on site, browser information (screen size and type), and number of pages viewed.
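
If you’ve never installed one of these trackers, they all follow the same basic pattern: a tiny JavaScript loader pasted into your pages that pulls the recording script down from the vendor’s servers. Here’s a generic sketch of that pattern – the URL and site ID below are made up for illustration, so use the actual snippet from your Inspectlet dashboard:

    <!-- Hypothetical recorder snippet: 'recorder.example.com' and YOUR_SITE_ID are placeholders -->
    <script type="text/javascript">
      (function () {
        var s = document.createElement('script');
        s.async = true; // load without blocking the rest of the page
        s.src = 'http://recorder.example.com/capture.js?site=YOUR_SITE_ID';
        // Insert the loader before the first existing script tag on the page.
        var first = document.getElementsByTagName('script')[0];
        first.parentNode.insertBefore(s, first);
      })();
    </script>

That async flag, by the way, is what keeps a well-behaved snippet from slowing the page down the way my old Userfly install did.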

The numbers are good. It’s good to know that a visitor using IE found the site through a Google search and spent 2 minutes on this page.

The video is GREAT. It’s invaluable to know where they scrolled and clicked on the site. It’s the thing you can’t get unless you discover a cloak of invisibility and a teleportation device. (Hey, I could build a kick-ass business in England if I could hook up with the Chosen One.)

Inspectlet - It's Like MAGIC!... psst, it IS magic. (I'm two for two on chosen one references!)

What is the experience of using Inspectlet?

Inspectlet is surprisingly simple to use, considering its magical abilities. It’s a straightforward process and nothing seemed counter-intuitive to me. I was expecting to sign up, get the code, paste it into my site, and watch some videos. Real-Time Analytics and Heatmaps are two other features. These were not so valuable to me because of the low traffic on the testing sites. Analytics would be more valuable if the sites got more traffic. And the heatmaps would be more valuable if the sites had more interaction / clicks. I suppose over time the patterns would become clear, but in the few days I tested, not enough data was collected to make Real-Time Analytics and Heatmaps valuable.

The videos are ‘the show’ – the real value for me. You get a table of all the captures for the site, which includes IP address, starting page, capture length, browser (type), screen size, referrer, and date. Click, and you get a video display page which is very well thought out and easy to use. Each page viewed by the user gets a new ‘chapter’. You can pause the video or speed it up (up to 5x speed). Unfortunately, you can’t scrub back and forth. There is a nice feature where the video automatically skips parts with no interaction AND the sections with the most interaction (clicks, mouse movement, and scrolling) are highlighted in red. All of this can be viewed in their demo – that’s what sold me on it.

The Process of setting up the test

  • Sign up / login / get to the dashboard
  • Add a new site
      • give it a name
      • RealTime Twitter Query (Twitter should be capitalized in the form – new tool!)
      • skip some stuff that’s not documented (Exclude IP, screen capture method, and screen capture frequency)
  • Get and Install the code on each page you want to test
  • Watch the videos – scribble notes frantically, and look quizzically
  • Analyze the results – Come up with ideas for testing questions and changes to be made

Doing this process is fairly straightforward, but I do wish for more documentation. Getting comfortable with stuff you don’t fully understand is key to using a lot of stuff on the webs – you can’t be an expert in everything. Still, the tool just works.

I did have a problem – no data was coming in – and it was quickly resolved by tech support via email – ON A SUNDAY. Once again, support is good. The problem was that I had the ‘Staggered Captures’ set up incorrectly. (Some more documentation would be good here.)

How to get the most value from ‘Usability Observation’

Here’s my thing:  Using Inspectlet will benefit your usability plan.  I think a tool like this should be in every UX toolbox and here’s why.

Like I’ve said before, user testing is about observing users with the intent of improvement – to make changes. Inspectlet gets directly to the observation. You are like a fly on the wall (great name, Userfly!). You don’t disturb the user and they are having an authentic experience with your site. This bypasses many of my issues with user testing. This is really Usability Observation.

We aren’t taking them out of the flow – they don’t know they are being watched.  It’s like security cameras in a retail store.  But we aren’t watching them for shoplifting.  We’re watching to see where they go, how they got there, where they click.

My big issue with testing – the thing I can’t get my head around – is what to test. What questions do you ask? Observation is the answer. Observation and testing go hand in hand. Observation leads to exact and specific test questions. Those test questions lead to more observations.

Here is the pitfall: you can’t observe and test at the same time. We’ve talked about this many times, but now I’m having an ‘Inward Singing’ moment. Avoid this pitfall by observing first. We listen first, correct? So, listen to your users. If you listen, they will tell you what to ask next. Observe first. Test second. Repeat. Hmmmm. Or: Observe. Change. Test. Repeat.

Like a three step Waltz. 1,2,3 - Observe, Change, Test. ... And, let the user lead, plz.

My thoughts get unclear here, but bear with me a sec. Maybe these are the three fundamentals, and they each relate to and rely on the others. Observation is a form of measurement. Inspectlet is both qualitative and quantitative measurement. The point is that it’s dangerous to mix the elements or try to perform them together.

We tried to remove the observation from the testing in our testing script – by starting with the participant simply using the site. Ben even suggested leaving the room while they complete the tasks. Like: “Here, do these tasks and I’ll come back and we can talk about it.” That’s good. But Inspectlet is better. They don’t know they are being watched. They are thinking about their goals and needs – not about being a test subject or providing insight to you, the builder.

Natural users are better than un-natural test subjects.

So, how do you avoid messing up when writing a test question? Start with an observation or a measurement. Then specifically ask / test about that measurement. Bounce rate is a common metric we want to lower. If it’s high, people are leaving your site within 10 seconds and are not going deeper into the site. You see it in Google Analytics and you SEE it in the Inspectlet video screen captures. That’s an observation. Inspectlet would show you this – and more: you can see if the user did anything during those ten seconds.

Now, make a change to lower the bounce rate – put key content above the fold, make a clear call to action, make text bigger and bolder. And, the final step, make a question and test with a Usabilla-type tool: “What are you most interested in on this page?”; “Where would you click if you wanted to do [insert Critical Path step one]?”; “Which text would you likely read first?”. These questions test whether your change made a difference. Or you could measure again over time to see if the change made a difference. Or you could create a Usabilla test to ask about first impressions of the site – or run it through fivesecondtest.com or Feng-GUI.

Okay, enough rambling. The point that started that digression is good, I feel. Here are my findings:

  • Observation is different than testing.
  • Both are important and relate to each other.
  • Inspectlet is an observation tool – it will provoke questions to test and changes to make.
  • The three general types of activities in usability or design are Observe -> Change -> Test, and they can be followed in that order.

A few final thoughts:

Using Inspectlet, I found myself wishing for an intercom button. “Excuse me, website visitor. Why are you scrolling up and down like a madman?” I realize now that I want to switch from observer to tester. And, of course, I wanted to make changes. The big insight I had: do something for smaller screen sizes. Could I have seen that in Google Analytics? Yes. Did I know we have 10% ‘small’ screen use? Yes. Did seeing those numbers have a big impact? No, not until I SAW it with my own eyes.

We talked about finding users to test in previous posts.  Inspectlet [because it is an observation tool] doesn’t have this problem.  The users are right there on the site now – right now.

I bet site owners get addicted to watching the videos, just like some are addicted to watching visitor counts.  There is data there – actionable data that will bring in more money.  Because of that, I think Inspectlet is a great value at 8 bucks a month.

By way of explanation for that digression into the process of design, testing, and website revision: I’ve just finished James Zull’s book on the brain’s learning cycle, and I think that’s where those ideas came from. He basically says the brain learns in four stages: gathering, reflecting, creating, and testing. Inspectlet is a gathering tool – a sensory tool. Usabilla is a testing tool – an active probing tool. Reflecting might be analytics – where you integrate the data and decipher patterns. Creating is where you make changes to your design and plan.

Thanks for reading and see you next time!

Using Bounce Rates in Google Analytics to Improve Your Web Site’s Critical Path

November 23, 2011

Google Analytics is a fantastic program for assessing the performance of a website. It’s THE primary fixture for most small and medium websites when it comes to measuring traffic. By now, most people are familiar with tracking visits, visitors, pageviews, referring sites, search engine traffic, and so forth. A good number of users are also tracking sales, leads, and revenue.

These are wonderful things to be doing. They’re essential to any business that’s serious about growing their web presence.

But if you could only measure one thing, what would it be?

As much as you’d think it’d be something like traffic or revenue, there’s really a more actionable metric. Metrics are useless if they aren’t useful. I know that sounds like I was just channeling John Madden, but think about it. Knowing the number of visitors, even in relation to previous months’ traffic, isn’t very actionable. It just provokes a second question.

Why did x number of people come to my website last month?

And off you go to look at referral traffic data. But since we’re limiting ourselves to one metric, we can’t do that. So knowing how much traffic the site is getting, while awesome, isn’t actionable.

It’s the same story if you want to know about sales, leads, or revenue. They all beg a second question: why that particular number of sales, leads, or revenue? And off you go to look at your conversion funnel…

What we really need is a metric that tells us something about how the website is performing that is also tied in some way to increasing the bottom line.

Let’s cut to the chase: that metric exists, it’s the bee’s knees, and it’s called the Bounce Rate.

The bounce rate is a really simple concept to understand.

A bounce rate measures how many people came to a specific page on your website from someplace else on the Internet and then left without going deeper into your website. Or to put it more simply, it measures one-and-dones.
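
If you want to see the arithmetic spelled out, here’s a minimal sketch with made-up numbers:

    // Bounce rate for a single landing page (numbers invented for illustration):
    var entrances = 1000;  // sessions that began on this page
    var bounces = 400;     // of those, sessions that left without a second pageview
    var bounceRate = bounces / entrances;  // 0.40 – a 40% bounce rate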

Why does this matter and how is it actionable?

It matters because people are likely to leave your website if they can’t find what they want. If you know what percentage of people are leaving a page, and that number is high, it tells you that something on that page is confusing to users. A low bounce rate indicates that people can see a path to what they want and have clicked deeper into your site to find it.

Where the bounce rate really shines though is in how it can help you clean up your site’s critical path.

On your typical e-commerce site a critical path looks like this:

  • Front page
  • Category page
  • Sub-category page
  • Product Description page
  • Cart
  • Checkout – Shipping
  • Checkout – Billing
  • Checkout – Review
  • Checkout – Order Receipt

Just about all traffic will land on one of the first four pages – down to the product description page. Look at those pages in Google Analytics. What are their bounce rates? The first three pages are just filters, so the bounce rate should be as close to 0% as possible. The product description page will have a higher bounce rate because it’s a destination page rather than a filter page. When people land on a destination page, they are making a decision about the information on that page; they are no longer trying to find what they’re looking for. As such, it’s possible that the visitor won’t need to go further into your website. Because of this, it’s acceptable to have a higher bounce rate on your product description page. But you should really work on getting your filter pages as close to 0.00% as possible.
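
To make that concrete, here’s a minimal sketch (with made-up numbers) of the triage described next: line up your filter pages and sort by bounce rate.

    // Hypothetical bounce rates for the filter pages on the critical path:
    var filterPages = [
      { page: '/',             bounceRate: 0.12 },
      { page: '/category',     bounceRate: 0.38 },
      { page: '/sub-category', bounceRate: 0.09 }
    ];
    // Sort descending by bounce rate; the first entry is the first page to fix.
    filterPages.sort(function (a, b) { return b.bounceRate - a.bounceRate; });
    // filterPages[0].page is '/category' at 38% – the first candidate for cleanup.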

Start with the page with the highest bounce rate and see if there are any obvious reasons why people may be leaving the page without going deeper into the site. If you need it, consider a tool like Inspectlet so that you can see actual user behavior on your site. Work on refining your pages to lower your bounce rate.

It’ll have the effect of strengthening your critical path and will lead to higher goal conversions.

You can think of the bounce rate as the canary in the coal mine. It’ll let you know where the problems are. It’s up to other tools and user testing to suss out the specific problems on those pages but the bounce rate does a great job of focusing site updates on places where they will have an immediate positive effect.

Better User Experience Podcast #15 – Remote User Testing With Usabilla

November 21, 2011

Today it’s “Show and Tell” time on the podcast. I’ve brought “Usabilla” to the front of the class for you to enjoy – in living color. Podcast #15 is ALSO Screencast #1. You’re welcome, internet community – the future of communications technology is here (may not be available in all markets).

[Note from Ben: The upload choked. I’m uploading it again and it’ll be here soon. It’s the screencast to go with the podcast, but you can enjoy the podcast without watching the screencast… That said, I’ll have it up as soon as possible.]

Ben and I discuss the process of creating a test and how it might fit into a site design / revision process. We also brought in our old friend Feng-GUI to help ‘triangulate’ the results. We agreed that the standard questions were lacking and probably not going to reveal actionable data. And we finish the show with the love triangle between chocolate, peanut butter, and jelly. ENJOY!

Subscribe to our podcast on iTunes – Insightful episodes each week!

Posts we reference in the podcast:

BUX Post: Simple Usability Tests with Usabilla

Usabilla

Feng-GUI

Usabilla Blog post – Five things you can test under five minutes

Destiny’s Child – Bootylicious

Simple Usability Tests with Usabilla

November 18, 2011

Tool of the day, Usabilla!

First Impression with Usabilla

We have Usabilla listed in our ‘Conduct a User Test’ category along with Loop11 and Usertesting.com.  In earlier posts, Ben and I have reviewed Usabilla and concluded it would be a valuable resource for simple usability tests for websites.  Basically, if you are a business owner with a website, then Usabilla is a good place to start with user testing.  It’s simple. It has a usable free service plan and support resources to help you get started today.

Website testing, in its simplest form, is observing people using your site with the intent to make improvements. Usabilla offers a way to test users completing tasks on the website. You give a task. The user completes the task with a click and/or a note. You can see where they clicked, what they wrote, and the time it took.

What is the experience of using Usabilla?

Before I started with my own test, I studied the Usabilla features. I first heard of Usabilla on a review-type blog post where the founder and CEO of the company, Paul Veugen, commented and addressed issues uncovered in the review. That shows engagement and interest. This isn’t a dead tool or a side project for someone. It makes me trust these folks. Points for Usabilla, right off the bat.

My trust in the company  is further supported by their blog.  The posts are relevant to the small business owner interested in user testing a website.  Five-things-you-can-test-under-five-minutes  and Guerilla-usability-testing-tools-improve-conversion-rate-satisfaction  are two insightful and practical posts I read.

Additionally, I liked that I could test sites [using the screen capture tool] and/or upload images for my tests. This allows me to follow the advice of Steve Krug and test the napkin sketch of my designs [Maxim: start earlier than you think makes sense]. Collecting users for the test was made easier by a JavaScript widget to embed on the site, which gets actual site visitors to perform the test. And, lastly, I want to give a shout-out to Usabilla support. I had a question about the widget placement on the page [it was being covered by the content]. I followed the ‘Chat Now’ link on the dashboard to quickly get in touch with Paul v. A., who looked at my site and showed me how to fix my problem. Try to find that kind of support in another free tool!

The Process of setting up the test

  • Sign up
  • Create a test
      • Give it a name
      • Custom logo – not sure where this was used
      • Add a page – the screenshot generated from a URL didn’t work on one page, so I had to use my own screenshot
      • Choose a task – create a new task
  • Preview the test
  • Activate the test
  • Invite participants – via URL or widget
  • Pause test and edit
  • Analyze the results

The experience of using the tools was smooth and went as expected. No frustration or undue cognitive load. The real hard part is deciding what to test and how to test it. Thankfully the blog was very helpful, even if the standard questions were not. Meaning, I think those standard questions won’t reveal actionable insights. More on this later.

With great power – like a chainsaw arm – comes great responsibility.

Having the right tool means nothing if you don't know how to use it.

How to get the most value from remote user testing with Usabilla

Just because you have access to a powerful tool doesn’t mean you can produce powerful results. The tool is only as good as the user. If this discourages you, don’t let it. Everyone has to begin somewhere [and we’ve chosen Usabilla as that tool]. Here are some of my thoughts based on my first experiences with remote user testing like this.

First and foremost, what do you test? More specifically, what questions do you ask the user about your design in order to get actionable / profitable results? I’ve talked about this before in my post, “It’s the Questions, Stoopid”. Basically, everyone has impressions, and they are generally unique and varied. That makes them both hard to test and [mostly] un-actionable.

The tool, powerful as it is, allows you to test whatever you like. You can use the standard questions. IMHO, this is likely to return superficial, un-actionable results. Or, dig a bit deeper into your design decisions and ask some targeted questions that will lead to concrete site changes, more conversions, and better [smoother? more elegant? less confusing?] critical paths for your site visitor through your site.

Developing this talent for question creation is the real challenge for UX experts. It requires a scientific eye for data vs. noise and the creativity to manipulate the tools to reveal actionable insights. It’s hard – try it yourself and you’ll see. Of course, you could be the ‘chosen one’. For the rest of us, I suggest these test questions as food for thought and a good starting point (borrowed from the Usabilla blog post mentioned earlier).

There can be only one!

Where would you click to start to use this product?

A simple test of how, and if, the user notices the call to action and the Critical Path of the user experience. If they get it ‘wrong’ [or, I should say, click where you don’t expect] or it takes a long time for them to click [a nice feature of Usabilla measures this time], then you can review / revise your design and retest. Usabilla makes it easy to do this.

Which elements make you trust this website, and why?

Trust is paramount on the internet. If I don’t trust a website, then I’m not likely to convert for them – buy stuff, give my info, sign up for anything. Could be just me… I believe good designers build trust-building elements into sites – certification logos, personal pictures [not stock], quality content, well-known brands or imagery, etc. The question will reveal: Are users picking this up? Do they click on the elements you expect? And, because of a nice contextual note tool, the user can leave a very specific note about the design. Yep, that “…and, why?” is powerful.

hmmmm, I don't trust 'em. Been burned too many times in the past. Sorry, Ginger.

A few final thoughts:

Where you get / recruit your test participants matters, I feel. If you are simply hoping to increase the efficiency of the ‘around the watercooler’ test – sure, go ahead and use your Facebook and social media to get participants. They are returning users. And, I guess, if your site is based on returning users, then this would be fine and good. But if you need first-time users, then use the widget or ask via some means other than existing contact lists.

I saw a banner ad for Usabilla where the tagline was ‘We give designers quantitative ammunition to go with their qualitative insight’. I thought this was perfectly phrased. We do test insight. We do test the assumptions that lead to our design decisions. That’s why we test. We ask ourselves: Will making this button red attract more attention? Will adding a recognizable logo increase the sense of trust in the users? Until ‘the scientists’ develop a brain-scanning and opinion-deciphering device, we’ll have to be creative with the user-testing tools available to uncover the effect of our designs on the user.

I’m glad to add Usabilla to our tools  and recommend it to anyone looking to quickly and easily start testing today.

How to Create Better Monthly Reports For Your Clients

November 16, 2011

The monthly report is an odd little creature. It’s created with the best of intentions but is too often underutilized by the people it was created to inform. There’s also the problem of the document itself. It’s confusing, or it focuses on the wrong things. It means well, but too often it’s a relic of the past.

Any way you slice it, chances are you have room to create a better monthly report.

Who Cares?

Surely, you’re thinking, this must be one of the most boring topics in all of business. In my personal experience of creating reports for clients for over a decade, rarely have these reports done their job: being meaningful to those they were created to inform.

Ninety percent of the time, the only time I ever heard from a client specifically about their monthly report was if they didn’t get one. Basically, they were only aware of it because of its absence. In their mind, the monthly report was proof that something was being done. When I would send out monthly AdWords reports to my clients, only a few would want to talk about it. Most clients just filed it and forgot it.

You know you were thinking it.

The clients that filed their reports weren’t intentionally ignoring their website. I’m sure some of them treated the reports like the monthly financial statement from their broker: they trusted me to do what’s right for them with their AdWords account and took it on faith that what was in the report affirmed that belief.

The real problem was that the report didn’t have any meaning to them. There was a bare minimum of analysis, charts from Google Analytics, and AdWords data. It was fun to see the green and red arrows showing how data changed from the month previous but none of it allowed the client to make a decision. And in a world where time is limited, to a business owner, if there’s no decision to be made, there’s no reason to read it.

Rule #1: Do Talk About Fight Club

Rule #2: Do Talk About Fight Club.

Fundamentally, the monthly report is about communication. The best way to make sure the report is useful and is used to maximum effect is to hold a brief meeting to talk about the various considerations the report reveals.

This guy can skip the meeting.

Focus on Business Goals

The biggest problem with monthly reports is that they are overwhelmingly created as works of fiction. Everything in the report might technically be true but there’s a desire on the part of the creator to send a clear EVERYTHING’S A-OK OVER HERE BOSS message. It’s just one of those things. Once somebody gets a budget, they’ll do a lot to keep it. And bluffing in a monthly report is a good way to do it. It’s security through obscurity.

There's an XKCD comic for that.

I’ve seen reports sent to clients that were hundreds of pages of screenshots. The only reason I can think of for that is that somebody thought it was a good idea to make the report seem huge. As if a report that can double as a paperweight is somehow more valuable than one that focuses on its usefulness.

A useful monthly report is one that focuses on the web plan’s goals.

A monthly report is an extension of the web plan. If there’s no plan, then you’re right to wonder why a monthly report is even necessary. So if you don’t have a plan, stop now, rewind to last week’s blog posts on the initial client meeting, and start there.

If you have a web plan then you should know the:

  • Business goals
  • Website goals
  • Budgets
  • Time Tables
  • Responsibilities

In short, the monthly report needs to echo all of those facets of the plan, provide an update on what’s happened in the past month, and then provide a way to discuss how to move forward. If any decisions need to be made or if there are items that need to be discussed, they need to be noted.

Be Comprehensive

I’ve been using the odd phrase “web plan” to this point in this post. In my opinion, a web plan is really a plan that addresses all aspects of your web presence. A web presence is the sum-total of a person or business on the web.

It’s your website, Facebook page, Twitter feed, YouTube channel, SoundCloud account, search engine visibility, advertising, and feed subscribers combined.

If all of this is taken into account when creating the plan, as I think it should be, then you have a Web Presence Plan. Everything else is a subset: a website plan or a social marketing plan or an SEO plan, what have you.

The point is, you have to design the report around the plan, and the plan should be as comprehensive as possible. Applied fully, this report will contain a lot of data. As the months pass and historical data accumulates, the amount of raw data will only grow. Normally, this is how monthly reports die a slow death. But because of how we intend to use this document, in this case it’s a good thing.

It’s more than communication, it’s education

The AdWords clients I used to talk to about their monthly report liked to sound informed. We’d have conversations filled with discussion about click-thru rates, cost-per-click, and page placement. Rarely, though, were they interested in cost-per-conversion, which is the One Metric to Rule Them All in the AdWords universe.

It’s not that there isn’t value to be had by looking at the click-thru rates, cost-per-click and page placement, it’s just that they are wholly explanatory data for the only metric that really matters: how much it costs to get a sale.

The problem was that there was a knowledge gap. On some level, clients know that web dev and web marketing firms are not going to ultimately take responsibility for what happens on their website. We’ve covered this before. As such, they feel invited to take a peek at the underlying data and to work on the analysis themselves.

While the desire to be involved is commendable, it’s at this point that the gap in knowledge and training on these topics can become apparent. Every web developer I know has a story about a client misusing technical jargon. They’ll say things like, “I need to increase my XMLs!”

Which you have to admit is a little ROFL.

It’s the job of the monthly report to point out what’s important. It needs to highlight the cost-per-conversion and use the other data to support why it is what it is.

The only way that’s going to happen is if the report makes it clear which data is primary and which is supplementary.

Analysis: Inputs and Outputs

Websites are about two things: getting people to the site and what they do once they’re on it. Every facet of a company’s web presence can be grouped into one of these two categories. All social networking, all SEO, all advertising is about driving traffic. The website’s graphic design, functional design, and content are all responsible for what people do once they’re on the website.

It’s through this lens that data should be analyzed. Looking at these two sides of the web-coin will keep the report relevant and will lead to smarter conversations.

Traffic

The initial plan probably lays out specific target metrics for the social networks, SEO, and advertising. Certainly, measure all of that and work to meet or exceed those targets. But more importantly, and more generally, how is traffic to the website? Has it been trending up? Do you know why? Do you see opportunities in SEO, Facebook, Twitter, YouTube, etc., to increase traffic?

Website Function

How are sales/leads? How have they been trending? What’s the average bill of sale? What are the best sellers? Why are they the best sellers? What’s being done to strengthen the critical path? How does the conversion funnel look? Has any user testing been done? Has that testing revealed anything about page-specific elements that need addressing?

Build from the ground up, present from the top down


The key to the whole thing is to provide the data but to put it in an appendix at the end of the document. The monthly report is about business, not technology. The technological concerns arise because they support the business goals. So kick the tech stats to the back of the report and put the analysis front and center.

In most monthly reports the data is front-and-center and the analysis is gravy. The data shouldn’t be front-and-center; it should be the supporting documentation. The meat of the report should be a discussion of the various decisions, considerations, and opportunities that rise out of the data.

It’s also necessary to recognize that business happens in a larger context: what time of year it is, changes to how things are done online, etc. It’s a good practice to get in the habit of summarizing the current environment before moving into the analysis. You want to set up the discussion so that everybody sees as much of the field as possible. Providing environmental context allows the client – who is probably not online every day – to orient themselves before being asked to make some decisions.

Put it all together and a typical report would loosely be structured like this:

  • Current environment (1 page or less)
  • Analysis: Decisions/Considerations/Opportunities (2 pages or less)
  • Supporting Data
  • Full Data Appendix

Create Accountability


I’m a big fan of accountability. That goes for the developer as well as the client. If a client says they’ll be responsible for creating some content, they should be responsible for the outcome of not creating that content. After all, it’s hard to promote a blog that rarely has new content.

The best way to force accountability is to get signatures next to all decisions. Then if things aren’t done according to the plan, there’s a physical record of who dropped the ball.

The thing is, there are a few ways things can go right and about an infinite number of ways they can go wrong. Getting signatures is a way of enforcing the rules set forth in the original web plan. Demanding a physical signature might make you sound like a grumpy old man, but it’s really just trying to prevent problems down the road.

I recommend adding one page to the monthly report after the monthly meeting: it’s a page that details what’s going to be done in the next month. Next to each line item is a signature of everybody responsible for making that line item happen.

Once you have that document, make copies and send them to everybody involved. You keep the original. At the end of the year when you’re doing your annual report, these documents will be the star of the show. And because there are literal signatures on what was supposed to be done, nobody can feign ignorance.

The goal, of course, is not to get people in trouble or to create ill-will but to keep everybody accountable for their responsibilities.

We’ve all experienced the problem of people helping in places where they aren’t supposed to be. This provides a way to discuss that issue too. If your name isn’t signed next to the line item, you don’t need to be involved. Simple as that.

The monthly report is the way the web plan gets accomplished. It’s a tool. And accountability is an important part of that. Without accountability by all parties, entropy starts to increase and the project suffers. Better to stop all of that before it starts. Get the signature.

It’s A Living Document

Monthly reports are at their most effective when they’re treated like a living document: one that reflects conditions on the ground, both in the past month and historically, and provides a way for leaders to make decisions to accomplish the business goals.

Over time, the goals are going to change. The things done to various parts of the company’s web presence will change. When it does, let the report change too. Don’t fit the data into the report, fit the report to the data.

The bad monthly reports we’ve all seen in the past failed to change as the business needs changed. They’re paper zombies: undead and here to eat your brains.

Rather, stay focused on your client’s needs. Create a document that addresses those needs, update the web plan, and talk about it every month.

A report that does all of that creates the conditions for success and growth and validates you as the monkey that knows how to keep its eye on the banana.