How can we enhance Agent Workspace to fundamentally improve agent efficiency?

Agent Workspace Research

Q1-Q2 2019


Project lead, working alongside a senior researcher. Responsibilities included setting project direction, defining research objectives and methods, participating in research activities, extracting insights, and delivering the executive report.

Agent Workspace is an enterprise web app that allows brands to connect with their customers, at scale, through a variety of social and private messaging channels. Its powerful routing, prioritisation and automation capabilities help drive efficiency for users in large customer service teams.

The company-wide view was that Agent Workspace needed improving. Any improvements should positively impact Average Handling Time, a key efficiency metric that brands use to measure the success of their customer service teams.

But what needed to be done? The design team embarked on a 3-month research project to find out…

6 Locations Globally

San Francisco, New York, Cancun, Glasgow, Dundee and Birmingham


11 Different Brands

Across 5 Verticals

38 Usability Studies

24 Users, 14 Managers

16 Internal Interviews

Across 5 teams

45 Hours of Audio

Listened to three times over to extract insights


The project comprised three stages:

  • Stakeholder
  • Data
  • User Research

The Stakeholder and Data stages would inform the User Research stage, helping us identify gaps in our current understanding of users’ needs, expectations and experiences.

The team created research plans for each stage, defining goals, hypotheses and research methodologies.

Project Goals

  • Understand stakeholder attitudes towards Conversocial products, users and the company’s business strategy
  • Understand user behaviour
  • Identify user pain points and areas of improvement for Agent Workspace
  • Prioritise improvements that will impact agent efficiency, measured by Average Handling Time
  • Benchmark current performance
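
Average Handling Time, the efficiency metric referenced throughout the project, is typically computed as total handling time divided by the number of issues handled. A minimal sketch in Python (the function name and figures are illustrative, not project data):

```python
def average_handling_time(handle_times_seconds):
    """Mean time an agent spends resolving an issue, in seconds.

    handle_times_seconds: one entry per handled issue.
    """
    if not handle_times_seconds:
        raise ValueError("no handled issues to average")
    return sum(handle_times_seconds) / len(handle_times_seconds)

# Three resolved issues taking 120s, 300s and 180s
print(average_handling_time([120, 300, 180]))  # 200.0
```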


Hypothesis

Agents who are more satisfied with Agent Workspace are more efficient at responding to customers. Data to support this would validate investment in improving agent experience as well as agent efficiency, which has historically been the priority.

Stakeholder Stage

The team interviewed 16 internal stakeholders across various functions within the company, including the exec team, sales, marketing, product and engineering. Stakeholders often have a deep understanding of the business and industry and can help steer the research direction.

Participants were asked for their opinions on:

  • Business Goals & Strategy
  • Product Differentiation
  • Product Improvements
  • Product Strengths

Suggested areas of improvement for Agent Workspace by Stakeholders

Stakeholders thought that by reducing confusion, integrating with more systems, automating the product and creating better user flows, agent efficiency in Agent Workspace would improve

Suggested Areas of differentiation for Agent Workspace by Stakeholders

Stakeholders thought that automating features, enhancing the UI, giving agents access to the information they need and customising experiences could differentiate Agent Workspace

Product Managers were able to help us identify key workflows and important features within them, guiding further research:

  • Resolve Issue
  • Approve Response
  • Set Reminder

Data Stage

What insights could be gathered from data analysis? The aim was to measure the effectiveness, efficiency and satisfaction of key flows and features, and to use that data as a benchmark for overall performance.


With limited engineering resource, the team was not able to retrieve the data required. We pivoted, swapping the user research and data phases: identify user pain points first, then check the data to confirm insights.

Usability Benchmarking

Agents (our primary users) were sent a UEQ (User Experience Questionnaire) survey to help benchmark perceived ease of use and user experience for Agent Workspace. Key insights to note were below-average results for efficiency and stimulation. Using the Single Ease Question (SEQ), ease of use was scored at 5.7/7.

The survey was to be sent to users periodically to track usability improvements over time.
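
The UEQ aggregates item responses (rescaled to the -3…+3 range) into a mean score per scale. A simplified sketch of that aggregation, assuming responses are already rescaled; the item names and scale mapping below are illustrative, not the real 26-item questionnaire:

```python
from statistics import mean

# Hypothetical item-to-scale mapping; the real UEQ defines six scales
# (attractiveness, perspicuity, efficiency, dependability, stimulation, novelty).
SCALES = {
    "efficiency": ["fast", "efficient", "practical", "organised"],
    "stimulation": ["valuable", "exciting", "interesting", "motivating"],
}

def ueq_scale_means(responses, scales=SCALES):
    """responses: list of dicts mapping item name -> score on the -3..+3 scale.

    Returns the mean score per scale across all participants.
    """
    return {
        scale: mean(r[item] for r in responses for item in items)
        for scale, items in scales.items()
    }

participant = {"fast": 1, "efficient": 0, "practical": 2, "organised": 1,
               "valuable": 2, "exciting": -1, "interesting": 1, "motivating": 0}
print(ueq_scale_means([participant]))
```

Per-scale means like these are then compared against the UEQ benchmark dataset to produce the ratings shown below.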

Usability scores across 124 participants for Agent Workspace, benchmarked against 400 digital products


Scoring ‘Good’ for learnability; ‘Below Average’ for attractiveness, efficiency, dependability and stimulation; and ‘Bad’ for novelty

User Flow Data

Conversocial’s Data Science team helped by sharing important user flow data. The data showed that there is no single canonical user flow for issue resolution; depending on the type of issue and the brand, some features go unused.

Example of common action sequences in Agent Workspace

User Flows
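
The kind of analysis behind these charts can be approximated by counting how often identical action sequences occur across conversations. A hedged sketch, assuming a simple event log keyed by conversation ID (the log structure and action names are illustrative):

```python
from collections import Counter

def top_action_sequences(event_log, n=3):
    """event_log: mapping of conversation ID -> ordered list of agent actions.

    Counts identical full sequences and returns the n most common,
    surfacing the dominant user flows.
    """
    counts = Counter(tuple(actions) for actions in event_log.values())
    return counts.most_common(n)

log = {
    "c1": ["open", "reply", "resolve"],
    "c2": ["open", "reply", "resolve"],
    "c3": ["open", "tag", "reply", "resolve"],
}
print(top_action_sequences(log, 2))
# [(('open', 'reply', 'resolve'), 2), (('open', 'tag', 'reply', 'resolve'), 1)]
```

Actions that never appear in any frequent sequence point to the unused features mentioned above.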

User Research Stage

Over the course of 2 months the team travelled to 6 locations globally to observe and interview users from 11 different customers. Each had very different customer service needs and practices, as seen in the contrast between their workspaces.


User Studies

Participants took part in sessions lasting approximately 45 minutes. Each session consisted of observing the participant in their normal working environment as they went about their day, with the team taking notes and asking questions where appropriate.

This was followed by a short interview in which the team could dig a little deeper into what they had just observed, as well as ask questions about working habits and perceptions of Agent Workspace.


Customer Service Agents Observed


Customer Service Managers Interviewed


User Pain Points Observed


Products or Features Requested

Extracting Insights

Insights were extracted from notes and recordings. Methods like Affinity Mapping helped the team find themes and patterns, which were used to categorise and prioritise feedback. The categories below best represent the perceived problems within Agent Workspace, including their severity.

Volume of agent pain points for Agent Workspace, grouped by category

Pain Point by Category

Key Insights

Below are some key takeaways with recommended solutions, based on insights from the entire research project: stakeholder opinion, data analysis and user feedback.

Brands require tailored workflows for issue resolution

Different brands are resolving issues in different ways, creating their own user flows in the process. This leads to some features going unused per brand and valuable screen real estate being wasted. On the flip side, agents are taking unnecessary actions because redundant UI is still available to them.

Recommended - Configurable & Customisable UI, Guidance for Users, Side Panel Overhaul

Agents are frequently leaving Agent Workspace to find the information they require to do their job

33% of participants used other tools to store and/or gather the information they need to help the customer. Agent Workspace provides features that help agents with this but, due to poor design, they are neglected.

Recommended - Clipboard Overhaul, CRM Integrations Overhaul, New Suggested Responses feature

Scrolling through conversations to check message content is slowing agents down dramatically

100% of participants were observed scrolling through message content to find details in every conversation. This is a huge efficiency loss.

Recommended - Conversation Summary UI

Features designed to solve problems are not doing their job

One example is Playmode, a feature designed to automate the distribution of conversations and stop agents from cherry-picking conversations. It does not stop cherry-picking, resulting in customers being left waiting for responses. Clipboard and Integrations are further examples.

Recommended - Playmode Iteration

There are many small bugs and manual interactions within Agent Workspace that collectively impact agent efficiency

Conversations splitting after 40 messages, spam content, and disappearing conversations are examples of bugs that slow agents. Manual actions like adding tags, refreshing conversations, setting up Agent Workspace and chasing approvals could all be automated to speed up agents.

Recommended - Review data to understand improvement benefit for each


The preconception was that the research would inspire a completely new design paradigm for Agent Workspace, but that’s not what happened...

We identified many small to medium sized improvements that would collectively impact Average Handling Time and solve user problems. An iterative enhancement strategy was adopted. This gave the product and engineering teams the ability to focus and deliver quickly, as well as ensuring we could adapt to change and stay agile.

These features will define the product and design roadmap for at least 12 months.

Agent Satisfaction Hypothesis

The data collected supported the hypothesis that the more satisfied an agent is, the more efficient they are at handling customer issues. Investment in improving product usability and agent experience will positively impact agent efficiency and should be considered in any product enhancement.

Average Handling Time for Issues with Responses vs Agent Satisfaction
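
The relationship plotted above can be quantified as a correlation between per-agent satisfaction scores and Average Handling Time. A minimal sketch using Pearson's r (the sample figures are illustrative, not project data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

satisfaction = [3, 4, 5, 6, 7]           # illustrative survey scores
aht_minutes = [9.0, 8.0, 6.5, 5.5, 4.0]  # illustrative handling times
r = pearson(satisfaction, aht_minutes)
print(round(r, 3))  # strongly negative for this sample
```

A strongly negative r means higher satisfaction coincides with lower handling time, which is the direction the hypothesis predicts; correlation alone does not establish that satisfaction causes the efficiency gain.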