Design

USABILITY TESTING & RESEARCH

Whether as part of a custom application design and build or as a standalone service, Spire’s research team tests and measures the efficacy of user experiences and interfaces, delivering data and tactics to improve productivity and brand perception and to reduce error rates and cognitive load.

While assumptions can help companies move quickly and get something out the door, testing helps them move efficiently and ensures every dollar does the right job. Validating or debunking assumptions with research brings points of confusion to the surface, allowing problems to be solved before the product goes live.

User research at Spire starts with building a “mental model”: an understanding of how a user perceives and engages with a virtual space. Asking the right questions, listening, and empathizing allow us to step back from the software and focus solely on the user, gathering valuable feedback early and continuously over the life of the product.

How well can a user perform a task? How easily? What is the “cognitive load,” that is, how much does the user have to think? As unapologetic minimalists, we always seek to make an application simpler and more elegant. Time-on-task testing helps us determine how a user moves through the space and how well the UI fits the user’s existing mental model.
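
As an illustration, time-on-task results reduce to a few simple metrics. The sketch below is a minimal, hypothetical example assuming session records with a duration and a completion flag; the task name and values are invented for illustration, not data from an actual study.

```python
from statistics import mean, median

# Hypothetical time-on-task records. In practice these would come from
# session logs or a usability tool's export.
sessions = [
    {"task": "create_invoice", "seconds": 48.2, "completed": True},
    {"task": "create_invoice", "seconds": 95.0, "completed": False},
    {"task": "create_invoice", "seconds": 41.7, "completed": True},
    {"task": "create_invoice", "seconds": 63.9, "completed": True},
]

# Only completed attempts count toward time on task; failures count
# against the success rate.
completed = [s["seconds"] for s in sessions if s["completed"]]
success_rate = len(completed) / len(sessions)

print(f"success rate: {success_rate:.0%}")
print(f"mean time on task: {mean(completed):.1f}s")
print(f"median time on task: {median(completed):.1f}s")
```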

The three primary methods we use to analyze how a user navigates an application are contextual inquiry, usability testing, and mouse movement tracking. These tools provide visibility into where users are going, how they are getting there, and where they are getting stuck. Each has distinct benefits and is suited to different situations, as discussed below.

We employ contextual inquiry when seeking to observe and understand a user’s interactions with an existing application, particularly one that is not instrumented with eye or mouse tracking. In these instances, one of our researchers sits near the user’s workstation, watches how the user interacts with the application, and discusses what the user did and why.

We often employ eye tracking and emotional measurement software during on-site and remote usability sessions. For instance, we look for expressions of consternation as an indication of cognitive load. Tools we commonly use include Silverback for streamlined Mac-based usability testing and UserTesting for more comprehensive tests.

Usability sessions are moderated by one of our researchers. When evaluating the usability of a new design, the session begins with task testing, e.g., “If you wanted to perform X or Y, what would you expect to do?” At this stage the user is shown an interface but is not yet interacting with it, which allows us to gauge the clarity of labels and understand ideal workflows. As the test progresses, we quantify usability using a Likert scale and compare the results against the existing design, looking for measurable improvement.
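
To make the comparison concrete, the sketch below summarizes Likert ratings for an existing design and a redesign. The ratings, scale anchors, and sample sizes are hypothetical, shown only to illustrate the kind of before/after comparison described above.

```python
from statistics import mean, stdev

# Hypothetical post-task Likert ratings (1 = very difficult, 5 = very easy)
# collected for the existing design and the proposed redesign.
existing = [3, 2, 4, 3, 3, 2, 4, 3]
redesign = [4, 5, 4, 3, 5, 4, 4, 5]

for label, ratings in [("existing", existing), ("redesign", redesign)]:
    print(f"{label}: mean {mean(ratings):.2f}, "
          f"sd {stdev(ratings):.2f}, n={len(ratings)}")

# A higher mean on the redesign suggests improvement; with larger samples,
# a significance test (e.g., Mann-Whitney U) would confirm the difference.
```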

To generate data at scale about how users navigate a page or an application, we are big fans of Hotjar and Mouseflow. These programs track mouse movements, clicks, scrolls, form completions, and more, and act as low-cost, passive (i.e., no user opt-in required) alternatives to eye tracking and formal usability sessions. Each application provides a replay of every session and aggregates sessions to surface larger patterns in the data, presented through heat maps, funnels, form analytics, and survey responses. We draw on this data to identify pain points and optimize application design.
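
As one example of mining this kind of data, repeated clicks on the same element within a single session (“rage clicks”) are a common pain-point signal. The sketch below assumes a hypothetical CSV export of click events; the file name and column names are illustrative, not an actual Hotjar or Mouseflow schema.

```python
import csv
from collections import Counter

# Hypothetical click-event export with columns:
# session_id,element_selector,timestamp
# (illustrative schema, not a real tool's export format)
RAGE_THRESHOLD = 3  # clicks on the same element within one session

clicks = Counter()
with open("click_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        clicks[(row["session_id"], row["element_selector"])] += 1

# Elements that attract repeated clicks in a single session are
# candidates for confusion and worth reviewing in session replay.
suspects = Counter()
for (session_id, selector), n in clicks.items():
    if n >= RAGE_THRESHOLD:
        suspects[selector] += 1

for selector, n_sessions in suspects.most_common(5):
    print(f"{selector}: repeated clicks in {n_sessions} sessions")
```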