Keynote evaluates customer experience by collecting detailed qualitative and quantitative data from large samples of individuals (typically 200 to 800) as they attempt a series of real-life tasks on the Web. Keynote samples users according to target customer profiles from the Keynote Research Panel (KRP) of more than 160,000 Web users, or directly from a client's private panel of actual customers. To capture verbatim comments from users accessing the site, Keynote studies intercept live users and instantly invite them to participate in Web site evaluations. This makes it possible to link real customer comments to quantitative behavioral data (e.g., clickstream statistics or page views). Capturing the experience of real Web users is the only way to obtain insight into users' subjective thoughts and feelings about a site.

Intent-Based Context and Task-Based Testing

Without knowing what customers are trying to achieve, it is impossible to know whether or not they have been successful. In Keynote evaluations, the user's intentions are known: using the Keynote Connector, the user pursues a predefined set of tasks (such as registering or using a shopping cart) in a method known as task-based testing. With task-based testing, users' goals and intentions are clearly understood, so success rates can be determined and compared across the spectrum of users. This method, common in traditional usability lab settings, establishes a uniform set of goals (called objectives) that all users pursue. Because user intent is a known variable, Keynote can operationally define and measure success rates for particular tasks, and the results can then be linked to qualitative comments and user satisfaction ratings. It is also possible to design open-ended objectives that allow for user-driven exploration of the site.
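The success-rate measurement described above can be sketched in a few lines of code. The records, field names, and scoring logic below are hypothetical illustrations, not Keynote's actual schema or tooling; they simply show how known user intent lets completion rates be computed per task and linked back to satisfaction ratings and verbatim comments.

```python
# Hypothetical sketch of task-based success-rate measurement.
# All field names and sample data are invented for illustration.
from collections import defaultdict

# Each record: one panelist's attempt at one predefined task (objective),
# with a completion flag, a satisfaction rating (1-5), and a verbatim comment.
sessions = [
    {"task": "register", "completed": True,  "satisfaction": 4, "comment": "Easy enough."},
    {"task": "register", "completed": False, "satisfaction": 2, "comment": "Form kept rejecting my zip code."},
    {"task": "checkout", "completed": True,  "satisfaction": 5, "comment": "Fast checkout."},
    {"task": "checkout", "completed": False, "satisfaction": 1, "comment": "Cart emptied itself."},
    {"task": "checkout", "completed": True,  "satisfaction": 4, "comment": "Fine."},
]

def task_metrics(records):
    """Group attempts by task, compute the success rate and mean
    satisfaction, and collect verbatim comments from failed attempts
    for qualitative review."""
    by_task = defaultdict(list)
    for r in records:
        by_task[r["task"]].append(r)
    report = {}
    for task, attempts in by_task.items():
        successes = [a for a in attempts if a["completed"]]
        report[task] = {
            "success_rate": len(successes) / len(attempts),
            "mean_satisfaction": sum(a["satisfaction"] for a in attempts) / len(attempts),
            "failure_comments": [a["comment"] for a in attempts if not a["completed"]],
        }
    return report

metrics = task_metrics(sessions)
print(metrics["register"]["success_rate"])      # 0.5
print(metrics["checkout"]["failure_comments"])  # ['Cart emptied itself.']
```

Because every attempt carries both a completion flag and the user's own words, a failed task can be traced directly to the comment explaining why it failed, which is the link between quantitative and qualitative data the text describes.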
Site Evaluations Conducted in Natural Settings

Because Keynote's technology enables remote site evaluations, panelists can participate from any location, at any time of day. Participants work from the computers they use every day in their homes or offices, with their own Internet connections and browsers, without having to conform to the constraints of a more artificial testing environment. Testing in a natural setting more accurately represents users' normal Web use conditions, and it produces more accurate study results by minimizing the interviewer or moderator bias that can arise in lab or focus group settings. In addition, the flexibility to participate from a variety of locations, such as home, work, or school, at points around the globe, broadens the spectrum of potential study participants and improves the accuracy of test results. Remote settings also offer anonymity, encouraging panelists to express thoughts and feelings with candor.