
How to Create Better Monthly Reports For Your Clients

November 16, 2011

The monthly report is an odd little creature. It's created with the best of intentions but is too often underutilized by the people it was created to inform. There's also the problem of the document itself. It's confusing, or it focuses on the wrong things. It means well, but too often it's a relic of the past.

Any way you slice it, chances are you have room to create a better monthly report.

Who Cares?

Surely, you're thinking, this must be one of the most boring topics in all of business. But in my experience of creating reports for clients for over a decade, these reports have rarely done their job: being meaningful to the people they were created to inform.

Ninety percent of the time, the only time I ever heard from a client specifically about their monthly report was when they didn't get one. Basically, they were only aware of it because of its absence.
In their mind, the monthly report was proof that something was being done. When I would send out monthly AdWords reports to my clients, only a few would want to talk about them. Most clients just filed them and forgot them.


The clients who filed their reports weren't intentionally ignoring their websites. I'm sure some of them treated the report like the monthly financial statement from their broker: they trusted me to do what was right for their AdWords account and took it on faith that what was in the report affirmed that belief.

The real problem was that the report didn't have any meaning to them. There was a bare minimum of analysis, charts from Google Analytics, and AdWords data. It was fun to see the green and red arrows showing how the data changed from the previous month, but none of it allowed the client to make a decision. And in a world where time is limited, if there's no decision to be made, a business owner has no reason to read it.

Rule #1: Do Talk About Fight Club

Rule #2: Do Talk About Fight Club.

Fundamentally, the monthly report is about communication. The best way to make sure the report is useful and is used to maximum effect is to hold a brief meeting to talk about the various considerations the report reveals.


Focus on Business Goals

The biggest problem with monthly reports is that they are overwhelmingly created as works of fiction. Everything in the report might technically be true but there’s a desire on the part of the creator to send a clear EVERYTHING’S A-OK OVER HERE BOSS message. It’s just one of those things. Once somebody gets a budget, they’ll do a lot to keep it. And bluffing in a monthly report is a good way to do it. It’s security through obscurity.


I've seen reports sent to clients that were hundreds of pages of screenshots. The only reason I can think that was done was because somebody thought it was a good idea to make the report seem huge. As if a report that can double as a paperweight is somehow more valuable than one that focuses on being useful.

A useful monthly report is one that focuses on the web plan’s goals.

A monthly report is an extension of the web plan. If there's no plan, then you're right to wonder why a monthly report is even necessary. So if you don't have a plan, stop now, go back to last week's blog posts on the initial client meeting, and start there.

If you have a web plan then you should know the:

  • Business goals
  • Website goals
  • Budgets
  • Timetables
  • Responsibilities
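
A plan with those facets is concrete enough to model in code. Here's a minimal sketch of one way to represent it so each month's report can open by restating the plan; all of the names and sample values are hypothetical, not a standard format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one way to model the web plan facets a monthly
# report should echo. Every field name and value here is illustrative.

@dataclass
class WebPlan:
    business_goals: list[str] = field(default_factory=list)
    website_goals: list[str] = field(default_factory=list)
    monthly_budget: float = 0.0
    timetable: dict[str, str] = field(default_factory=dict)         # milestone -> due date
    responsibilities: dict[str, str] = field(default_factory=dict)  # task -> owner

plan = WebPlan(
    business_goals=["Sell more widgets"],
    website_goals=["Grow qualified traffic", "Improve conversion rate"],
    monthly_budget=1500.00,
    timetable={"Launch fall campaign": "2011-12-01"},
    responsibilities={"Write two blog posts": "Client"},
)

# The report opens by restating the plan, so nobody forgets what
# the month's numbers are supposed to serve.
for goal in plan.business_goals:
    print("Business goal:", goal)
```

The point of the structure isn't the code itself; it's that every section of the report can trace back to one of these five facets.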

In short, the monthly report needs to echo all of those facets of the plan, provide an update on what's happened in the past month, and then provide a way to discuss how to move forward. If any decisions need to be made, or if there are items that need to be discussed, they need to be noted.

Be Comprehensive

I’ve been using the odd phrase “web plan” to this point in this post. In my opinion, a web plan is really a plan that addresses all aspects of your web presence. A web presence is the sum-total of a person or business on the web.

It’s your website, Facebook page, Twitter feed, YouTube channel, SoundCloud account, search engine visibility, advertising, and feed subscribers combined.

If all of this is taken into account when creating the plan, as I think it should be, then you have a Web Presence Plan. Everything else is a subset: a website plan, a social marketing plan, an SEO plan, what have you.

The point is, you have to design the report around the plan, and the plan should be as comprehensive as possible. Applied fully, this report will contain a lot of data, and as the months pass and historical data accumulates, the amount of raw data will only grow. Normally, this is how monthly reports die a slow death. But because of how we intend to use this document, in this case it's a good thing.

It’s more than communication, it’s education

My AdWords clients, the ones I used to talk to about their monthly reports, liked to sound informed. We'd have conversations filled with discussion about click-thru rates, cost-per-click, and page placement. Rarely, though, were they interested in cost-per-conversion, which is the One Metric to Rule Them All in the AdWords universe.

It's not that there isn't value in looking at click-thru rates, cost-per-click, and page placement; it's just that they are wholly explanatory data for the only metric that really matters: how much it costs to get a sale.

The problem was that there was a knowledge gap. On some level, clients know that web dev and web marketing firms are not going to ultimately take responsibility for what happens on their website. We’ve covered this before. As such, they feel invited to take a peek at the underlying data and to work on the analysis themselves.

While the desire to be involved is commendable, it’s at this point that the gap in knowledge and training on these topics can become apparent. Every web developer I know has a story about a client misusing technical jargon. They’ll say things like, “I need to increase my XMLs!”

Which you have to admit is a little ROFL.

It’s the job of the monthly report to point out what’s important. It needs to highlight the cost-per-conversion and use the other data to support why it is what it is.

The only way that’s going to happen is if the report makes it clear which data is primary and which is supplementary.

Analysis: Inputs and Outputs

Websites are about two things: getting people to them and what people do once they're there. Every facet of a company's web presence can be grouped into one of these two categories. All social networking, all SEO, all advertising is about driving traffic. The website's graphic design, functional design, and content are all responsible for what people do once they're on the website.

It’s through this lens that data should be analyzed. Looking at these two sides of the web-coin will keep the report relevant and will lead to smarter conversations.

Traffic

The initial plan probably lays out specific target metrics for the social networks, SEO, and advertising. Certainly, measure all of that and work to meet or exceed those targets. But more importantly, and more generally, how is traffic to the website? Has it been trending up? Do you know why? Do you see opportunities in SEO, Facebook, Twitter, YouTube, etc., to increase traffic?

Website Function

How are sales/leads? How have they been trending? What's the average bill of sale? What are the best sellers? Why are they the best sellers? What's being done to strengthen the critical path? How does the conversion funnel look? Has any user testing been done? Has that testing revealed anything about page-specific elements that need addressing?

Build from the ground up, present from the top down


The key to the whole thing is to provide the data but to put it in an appendix at the end of the document. The monthly report is about business, not technology. The technological concerns arise because they support the business goals. So kick the tech stats to the back of the report and put the analysis front and center.

In most monthly reports the data is front-and-center and the analysis is gravy. The data shouldn't be front-and-center; it should be the supporting documentation. The meat of the report should be a discussion of the various decisions, considerations, and opportunities that rise out of the data.

It’s also necessary to recognize that business happens in a larger context: what time of year it is, changes to how things are done online, etc. It’s a good practice to get in the habit of summarizing the current environment before moving into the analysis. You want to set up the discussion so that everybody sees as much of the field as possible. Providing environmental context allows the client – who is probably not online every day – to orient themselves before being asked to make some decisions.

Put it all together and a typical report would loosely be structured like this:

  • Current environment (1 page or less)
  • Analysis: Decisions/Considerations/Opportunities (2 pages or less)
  • Supporting Data
  • Full Data Appendix
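
The outline above can be sketched as a tiny function that fixes the presentation order: analysis up front, raw data in the back. The helper name and the sample content are invented for illustration.

```python
# Illustrative sketch of the report's order, following the outline above:
# analysis first, raw data last. assemble_report is a hypothetical helper.

def assemble_report(environment, analysis, supporting_data, appendix):
    """Return the report sections in presentation order (top-down),
    even though they were built from the data up (bottom-up)."""
    return [
        ("Current Environment (1 page or less)", environment),
        ("Analysis: Decisions / Considerations / Opportunities (2 pages or less)", analysis),
        ("Supporting Data", supporting_data),
        ("Full Data Appendix", appendix),
    ]

report = assemble_report(
    environment="Holiday season begins; ad costs trending up industry-wide.",
    analysis="Cost-per-conversion rose. Decision needed: raise budget or pause a campaign.",
    supporting_data="CTR, CPC, and placement tables for each campaign.",
    appendix="Raw Analytics and AdWords exports.",
)

for title, _ in report:
    print(title)
```

Notice the data never leads; it only backs up whatever decision the analysis puts in front of the client.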

Create Accountability


I’m a big fan of accountability. That goes for the developer as well as the client. If a client says they’ll be responsible for creating some content, they should be responsible for the outcome of not creating that content. After all, it’s hard to promote a blog that rarely has new content.

The best way to force accountability is to get signatures next to all decisions. Then if things aren’t done according to the plan, there’s a physical record of who dropped the ball.

The thing is, there are a few ways things can go right and a near-infinite number of ways they can go wrong. Getting signatures is a way of enforcing the rules set forth in the original web plan. Demanding a physical signature might make you sound like a grumpy old man, but it's really just trying to prevent problems down the road.

I recommend adding one page to the monthly report after the monthly meeting: it’s a page that details what’s going to be done in the next month. Next to each line item is a signature of everybody responsible for making that line item happen.

Once you have that document, make copies and send them to everybody involved. You keep the original. At the end of the year when you’re doing your annual report, these documents will be the star of the show. And because there are literal signatures on what was supposed to be done, nobody can feign ignorance.

The goal, of course, is not to get people in trouble or to create ill-will but to keep everybody accountable for their responsibilities.

We’ve all experienced the problem of people helping in places where they aren’t supposed to be. This provides a way to discuss that issue too. If your name isn’t signed next to the line item, you don’t need to be involved. Simple as that.

The monthly report is the way the web plan gets accomplished. It’s a tool. And accountability is an important part of that. Without accountability by all parties, entropy starts to increase and the project suffers. Better to stop all of that before it starts. Get the signature.

It’s A Living Document

Monthly reports are at their most effective when they're treated like a living document. They're meant to reflect conditions on the ground, both in the past month and historically, and to provide a way for leaders to make decisions that accomplish the business goals.

Over time, the goals are going to change. The things done to various parts of the company's web presence will change. When they do, let the report change too. Don't fit the data to the report; fit the report to the data.

The bad monthly reports we’ve all seen in the past failed to change as the business needs changed. They’re paper zombies; undead and here to eat your brains.

Rather, stay focused on your client’s needs. Create a document that addresses those needs and updates the web plan and talk about it, every month.

A report that does all of that creates the conditions for success and growth and validates you as the monkey that knows how to keep its eye on the banana.

Here’s How User Experience Testing Can Be Better

September 30, 2011

Last week we drafted a usability test and tested one user in order to get a real experience with the theories and abstractions we were researching and discussing. Our results were surprising.

What We Wanted

We wanted to figure out a repeatable process for conducting a user test that would improve upon the simple watercooler test – the type of test that my friend at textWoo.com conducted. Additionally, we wanted to "user test the user test" by quickly getting feedback on an early draft of our test scheme, in the hopes of creating a more effective testing tool. And, we wanted to dive in.

What We Did

I downloaded and edited a script from the usability.gov site, which came from the SUS framework, and a usability test from 3w.org. The way I customized it was to look at each page type in the JavaJack's site and create a task that would engage the user with that page. I didn't make the tasks 'hard', but I didn't want them to be too specific either. I didn't want to test the user's ability to read, for instance. Our friend, Anna, came by just in time to be the tester.

What We Got

The results of the very unscientific test were intriguing and beneficial to the overall design of the site. As Anna talked thru using the site with Ben and me there watching, we noticed some things that could be improved and changed. Time well spent.

After the test, our thoughts turned to the testing process itself.  Ben’s post explains his thoughts on the critical path and the purpose, goals and objectives of the sites and how they relate to user testing. Here are other thoughts about the user-test and the testing process in general.

Design Dunce? Maybe. Personal Experience Expert? Yep.

Users aren't designers; don't ask them to critique a site design

Most scripts I’ve seen ask the user this question in the beginning of the test:

Please give me your initial impressions about the layout of this page and what you think of the colors, graphics, photos, etc.

When you ask this question, it seems that users become unsettled and defensive. They have to form an opinion and defend it. They stop using the site. They critique the site instead. They start to look at the site as a designer would. The user is tainted after that simple little question. Ask the user about their own experience… and why not ask them AFTER they have the experience?

To fix this we came up with a few ideas. Let them complete the tasks and then ask them about their experience. Record the session (the screen, webcam, and audio) without you in the room. Or just don't ask the question; there are better ways to get initial reactions from users – the user-testing sites in the sidebar. Or, conduct separate tests for reaction and impression.

Fly on the wall


Hiring UX facilitators: flies apply within. Humans need not apply.

You want to be a 'fly on the wall' as much as possible. Humans suck at this. Evaluator bias is rampant, and I'm doubtful you can eliminate it. Simply put, it's when the user being tested feels that they should answer a certain way or can sense the agenda of the test questions. Who doesn't feel that in every survey! Human societal norms get in the way here (perhaps not in New York City, granted). People tend to be polite to strangers and to people in authority. I figure negative answers to questions are rare. Users will try to figure out what you want and try to give you that. The act of testing will influence their use. I feel the designer is the least desirable person to be the facilitator of the test. If the user feels that the facilitator has an agenda or prefers one outcome over another, then the test is compromised.

You can't hire actual flies to do your testing (they don't make lab coats and clipboards small enough). Here are a few ideas we had to correct the problem of evaluator bias and human factors. You could use a remote testing service – like the 'Mouse Tracking Tools' in the sidebar. Run face-to-face tests in a place familiar to the user – office, coffee shop, mall, home – so the user is more comfortable giving honest opinions. The facilitator should not be perceived as someone affiliated with the site – regardless of whether they are – nor as an authority of any kind. Perhaps you can test several other sites to hide the site you are actually testing (this might be time/resource intensive).

Something is better than nothing


However, doing any type of test is better than nothing. The simple act of watching somebody go through the site is very valuable. Testing gives you insight into the flow of the site, whether the site is mechanically (or functionally) sound and working, and whether the user finds and stays on the critical path or primary site goal.


Only natural

You sellin’ what I’m buying? Great, let’s do this.

Each user has their own goal when coming to a site. Each site owner has a goal when building a site. If the two goals match, great! Now get out of the way and let the site churn out money. The user clicks thru the site and their goal is met. This click trail thru the site is called the critical path. And it's what you should test in the 'Tasks' portion of the standard usability test.

There can be many paths, but only one critical path on a site. For example, I go to Apple.com to watch movie trailers. Would Apple user-test my experience in getting to and watching movies? Perhaps, but I bet they measure how easy it is for me to 'jump over' and buy some music or a new computer. The purpose of the site – the critical path – is to sell (I'll grant branding and customer service as additional paths), and all other features and functions of the site support that goal.

How well the site moves visitors along the path is the effectiveness of the site. And we test for it by asking testers to assume they have the same goal as the site. Likewise, we test for satisfaction. Did they complete the task but were pissed because of something else? Did they expect one thing and get an unpleasant surprise? Also, we test for efficiency. Did they complete the task, but it took 15 clicks and 20 minutes to complete?

You can test all of this with a very simple user test. That's the low-hanging fruit of user testing: site effectiveness, satisfaction, and efficiency.
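
Those three measures can be tallied from even a handful of informal sessions. Here's a hedged sketch of that arithmetic; the session fields and sample numbers are invented for illustration, and none of this is a standard usability formula.

```python
# Scoring the three low-hanging-fruit measures from a few test sessions.
# The session records and their fields are hypothetical sample data.

sessions = [
    {"completed": True,  "clicks": 6,  "minutes": 3,  "satisfied": True},
    {"completed": True,  "clicks": 15, "minutes": 20, "satisfied": False},
    {"completed": False, "clicks": 9,  "minutes": 8,  "satisfied": False},
]

# Effectiveness: share of testers who finished the task at all.
effectiveness = sum(s["completed"] for s in sessions) / len(sessions)

# Satisfaction: share of testers who were happy doing it.
satisfaction = sum(s["satisfied"] for s in sessions) / len(sessions)

# Efficiency: among completed tasks only, how much work did success take?
completed = [s for s in sessions if s["completed"]]
avg_clicks = sum(s["clicks"] for s in completed) / len(completed)

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Satisfaction:  {satisfaction:.0%}")
print(f"Avg clicks to complete: {avg_clicks:.1f}")
```

A tester who completes the task in 15 clicks while annoyed shows up as effective but neither efficient nor satisfied, which is exactly the distinction the three questions above are after.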

In conclusion, don't ask about impressions before your evaluation of the critical path. Do test, even if the conditions are not scientific. Let users use. Visitors visit. Don't force them to have an opinion and then needle them about it. That habit is a carryover from the designer's perspective. Users aren't lab rats. Your color palette isn't of supreme importance. Listen closely and you can hear a user. They are probably saying, "Just give me the dang banana already."

Questions:

Is the user qualified to speak to design?
How do you get around the evaluator bias?
Can one site have multiple critical paths?

Plunging Head First into the Web UX Test Design Process

September 23, 2011

Thoughts on my user testing design process

Based on my background with instructional design and guerrilla web building, I've tried to take a new look at my design process from the perspective of usability and user-centered design.

I'm sure it fits in with the larger context and conversation going on in the very robust user-centered design community. My purpose isn't to make it fit, but rather just to notice how I would go about conducting user-centered design. I know it will change – it's already changed twice this week. It seems I change this design process each time I read an article on Smashing Magazine or Usability Counts.

Like most good things, my user-centered design process has a beginning, middle, and end… I'm thinking of changing that to use the word terminus, which is both a beginning and an ending, but I digress.

Beginning – "Stand in the place where you are" – R.E.M.

I like to start with what I (or the client) know. Business goals should not be confusing, abstract, or complicated: "Sell more widgets", "Make money", "Provide health care to my workers", etc. They should be clearly stated in a business plan or held firmly in the mind of the business owner. From these business goals we can derive site goals and page goals.

Site goals can be anything:

  • to inform
  • to sell
  • to build trust
  • to reduce paperwork
  • to recruit new employees
  • to build a community
  • to entertain

Each page should have a primary goal – the thing you want the user to do. Don't give your pages an identity crisis. Don't make your users search for the purpose, or 'call to action', or banana. Pages, like PowerPoint slides, are free. Use as many as you need. The idea is to have a clear goal for each page. You can test this clarity directly in your user tests.

The clearer the goal, the more straightforward the user test.

Align Business, Site, Page and User goals for Whomp Ass UX


Aligning the business goals, site goals and page goals is key. If the business goal is “to sell” and the site goals are “to entertain”, then there’s going to be a problem. The biggest problem I see in websites is unclear or confusing site goals.

The last part of defining what you know is collecting information for baseline performance. You may need to survey company employees or site users to collect this data. Much of this information is "lying around" or extant within the organization. Examples are business plans (for business goals), a previous website's design documents or web traffic data (for redesigns), and advertisements or marketing materials.

Now that you've collected and defined what you know, it's time to make a few estimates and educated guesses. This could be called a hypothesis. You're going to make guesses about the users, and how and where they use your site. Why is this important? One, this is what you will confirm or refute with your test results. Two, a Saturday-morning website is very different from a Saturday-night website. Other questions are: are people using the site at work? At their desk? On the couch? On a mobile? Users determine how your site will be used, and you have to make design decisions based on these assumptions.

You want to dive deeper into who these users are. What are their goals? Why are they going to your site? How did they get there? What are their demographic attributes?

Closing out the beginning of this design process is choosing the testing environment and tools. Basically – whether you use technology or not – it comes down to interviews, surveys, and small-group testing (watching and observing the user interact with your site). I suppose for websites you can also add automated reports and services like the W3C site checkers.

The middle part – a.k.a. the fun part

Once you have all of your "knowns" about the business and your educated guesses about the users and their context, you are ready to draft and refine your testing instruments. What sort of things are you still unsure of? Which assumptions are not agreed upon? There will be some in any group. That's what, and why, you test.

You generally test a website or any system for these three things:

  • Effectiveness – Can users do what they (and you) expect? Mechanically, does it work?
  • Efficiency – Is it easy to use? Does it work naturally, or does it take concentration?
  • Satisfaction – Sometimes called the 'smile test'. Do people smile when they use it? Are they happy to use it? Is it delightful?

I won’t go into too much detail here, but the general structure of a user test is:

  • Pre-test Questions
  • Participant Tasks
  • Post-test Interview
  • Post-test Survey
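
That four-part structure can be sketched as a simple script skeleton. Every question below is an invented placeholder, not a prescribed wording; note that the impression question is deferred until after the tasks, per the earlier section.

```python
# Hypothetical skeleton of a user-test script following the four-part
# structure above. All question wording is illustrative only.

test_script = {
    "pre_test_questions": [
        "How often do you buy coffee online?",
        "Which devices do you browse on most?",
    ],
    "participant_tasks": [
        "Find the store's opening hours.",
        "Order a pound of the house blend for pickup.",
    ],
    "post_test_interview": [
        "Walk me through what you just did.",
        "What, if anything, surprised you?",
    ],
    "post_test_survey": [
        "Rate how easy the site was to use (1-5).",
        "Would you use this site again? Why or why not?",
    ],
}

for section, items in test_script.items():
    print(section, "-", len(items), "items")
```

Keeping the script in one place like this also makes it easy to hand the same standard wording to every facilitator.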

Before you conduct the test there are a few steps to follow. You have to:

  • Choose a facilitator – preferably one without much experience with the design or the site itself
  • Recruit testers – preferably from the target market demographic
  • Decide on the links and location of the test – preferably close to the environment where the user will interact with the site: at home, at work, or in a mobile situation
  • Prepare a script for the facilitator – this avoids biased delivery and provides a standard control, especially if you have multiple facilitators

Vader, bad ass, believes in testing

Fin – the end

The last part is to collect, analyze, and report the results. If you prepared and conducted a well-made test (lucky?), clear patterns will emerge from the results. You may see that people are unclear about your menus and can't find certain information. Or you may find that there is a loophole in the structure such that people get in but can't get out.

It's important to remember to test the user experience – not your design. Users are experts in their own experience. They couldn't care less about your color scheme or fonts. They won't tell you about that experience unless you ask them. If you ask about design, they will tell you about design.

The goal of all of this is to make new assumptions about your users – assumptions now based on your test results. You have a more complete picture of your users and your site. Hopefully there are some clear improvements you can make. You can make those changes and test again to see if "the needle moved" – whether the changes were as effective as anticipated. Then you start the process over and test again. Each time you will get a more complete picture of your users and insights into how to make their experience with your website better.

Thanks to Flickr: Sebastian Bergmann