
This article is republished from the September 2016 issue of Strategies, AESP’s exclusive magazine for members. To receive Strategies, please consider joining AESP.
 
Real-Time Evaluation in Real Life: User Research as a Tool in our Toolbox
By Lisa Obear and Danny Molvik

Introduction
As Linda Dethman, Paul Schwarz, and Courtney Henderson wrote in their Strategies article last August, approaching DSM program evaluation from a "real time" perspective has considerable value for program administrators, implementers, and evaluators. The DSM industry's take on evaluation has evolved over time to be less centered on evaluators as auditors and more focused on evaluators as research and advisory partners. Approached from this perspective, evaluation requires evaluators to consider what tools they have in their toolbox to get utilities the information they need, quickly and effectively, to help them make mid-course decisions and adjustments. From a process evaluation perspective, one key tool evaluators can use to provide real-time value is user research.

What is User Research - and Why Should You Use It?
Even though program administrators see the value in real-time research and seek to incorporate it as often as possible, some evaluation tasks inherently must wait until the customer has fully experienced the program – you can't exactly measure or verify energy savings from equipment that hasn't been installed yet! User research, however, is one tool that only makes sense to conduct in real time. It can also be used where other evaluation tasks cannot – when there are small sample sizes, very tight timelines, or a need for truly immediate feedback before making a large resource investment in a program tool or process.

Because of these benefits, more utilities are turning to user research as a means of keeping a finger on the pulse of their customers and trade allies. Consumers Energy, one of the utilities that has embraced this approach, conducts ongoing customer and trade ally user research for its business energy efficiency programs. "I find more frequent user research and message testing valuable because it helps me as a program manager better understand what our business customers are looking for in energy efficiency programs, and more quickly adapt programs and advertising to better serve them," says Bob Mihos, Senior Program Manager with the Consumers Energy Small Business Energy Efficiency Programs. Via its evaluator, Consumers Energy has tested marketing messages, online reports, and program applications, and has completed ride-alongs with trade allies and program auditors; this information has helped Consumers Energy streamline and refine its tools and processes in – you guessed it – real time.

So, what is meant by "user research," and how can it be incorporated into evaluations? Calling user research a single tool is actually a bit misleading – it's really multiple tools wrapped up in one. Think of it more like a drill, with a number of different drill bits. User research encompasses a number of different tasks used to test and analyze the end-user experience; this could include message testing of new marketing efforts, usability testing of an online application or energy report, or ride-alongs with program trade allies or technicians.

From a process perspective, user research can help address many potential process issues. For nearly every process evaluation, two key questions are: How do customers experience the program, including program tools and the application? And how are trade allies presenting the program and informing customers about it? Two research tools evaluators can add to their tool belt to examine these topics are web usability testing and trade ally ride-alongs.
 
Web Usability Testing
For many programs, a common trouble spot is the program application, especially as programs transition to online applications or web tools. Without watching an end user actually work through the application or tool, it can be difficult to tell from an interview alone where any true issues lie – are there actually problems with the application, or is this a case of user error? This is where web usability testing comes in handy.
 
It's actually fairly difficult to assess how useful a program tool like a website or application is through typical post-use evaluation – interviews are often conducted six months or more after customers have used the tool, by which point they've likely forgotten much of the nuance of the actual website. Web usability testing, in contrast, asks users of a particular tool – an energy report, say, or a program application – to think aloud as they attempt to complete tasks live on a website. While other research methods may reveal what customers think (e.g., surveys, focus groups) or what outcomes are achieved (e.g., click-through rates from web analytics), usability testing is a diagnostic method that identifies potential issues and improvements in a user interface, such as a website or online application. Without web usability testing, it is difficult to know how customers use and interpret information on a website. Users may interpret a website or online tool in a manner completely different from what its designers intended, or they may simply have difficulty navigating the site. Traditional process or impact evaluations rarely reveal these issues, because individuals who try to navigate a website but fail or give up are not easily identifiable.

Because usability testing does not require many participants (often fewer than 10 is sufficient), it can be a cost-effective way to identify issues before they have the opportunity to undermine programs or initiatives. Usability tests can also be used early in the development process before too many resources have been spent trying to refine a website or application. In fact, like much of user research, usability testing works best when conducted as a series of smaller tests throughout the product development cycle, rather than performing one large test at the end of development.
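The rule of thumb that fewer than 10 participants is often sufficient traces back to the widely cited Nielsen–Landauer problem-discovery model. The sketch below is purely illustrative: it assumes each participant independently uncovers a given usability problem with probability p, and the default p = 0.31 is the average discovery rate Nielsen and Landauer reported – real rates vary by study and interface.

```python
# Illustrative sketch of the Nielsen-Landauer problem-discovery model.
# Assumes each participant independently uncovers a given usability
# problem with probability p; p = 0.31 is the average rate Nielsen and
# Landauer reported, and actual rates vary by study and interface.
def share_of_problems_found(n: int, p: float = 0.31) -> float:
    """Expected proportion of usability problems found by n participants."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    for n in (1, 5, 10, 15):
        print(f"{n:2d} participants -> {share_of_problems_found(n):.1%} of problems")
```

Under these assumptions, five participants surface roughly 84% of problems and ten surface nearly all of them – one reason several small tests spread across the development cycle tend to beat a single large test at the end.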
 
Trade Ally Ride-Alongs
For many programs, especially C&I, trade allies drive a huge amount of participation. For some programs, trade allies and contractors are the main outreach arm and account for the majority of the interaction customers have with an energy efficiency program. Unlike in-depth interviews, which rely on trade ally self-report, ride-alongs allow evaluators to observe the recruitment, sales, or installation process as it actually happens. Joining a trade ally or program technician/auditor on a customer site visit can be an eye-opening experience, providing great insight into the nuance of the contractor-customer relationship and the information being presented to customers.

This ethnographic approach to qualitative research can provide insight into unanticipated issues that arise during program implementation. Because ethnographic studies are conducted in person, they also allow for real-time testing and adjustment. Like other qualitative methods, ride-alongs do not require many participants (again, fewer than 10 is often sufficient) and can be a cost-effective way to better understand program implementation challenges.

Because trade allies play such an important role in program implementation, understanding issues immediately allows utilities and program administrators to address them before they become true problems. And while one anticipated challenge might be trade ally resistance to this type of research, or a perception of it as an "audit" of their work, this is generally a non-issue when the work is presented as a real-time evaluation tool. Many trade allies are happy to share their experiences and knowledge in a ride-along setting, and this type of research can be engaging from the trade ally perspective as well.

Summary
As evaluators, it always pays to examine our toolboxes from time to time, remove the old rusty tools, and add some new ones to keep us productive and useful. As we all move forward, looking for ways to get utilities and program administrators better and faster data to inform decisions, user research can be one tool that helps get us there.
 
Lisa Obear and Danny Molvik are Senior Consultants at EMI Consulting, a firm specializing in strategy, policy, program evaluation, planning, and customer and market research. This article is contributed by the AESP Market Research & Evaluation Topic Committee.
 
 
