Support Tools: PILOTS

Testing a Ticket Management System ‘Live’

Ticket management systems help support teams track, prioritize, and resolve time-sensitive issues at a global scale. At Google, I had the opportunity to help develop and test a new ticket management system for my team.

Please note: To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information. All information in this case study is my own and does not necessarily reflect the views of Google or its employees.

BACKGROUND

My team at Google was responsible for troubleshooting issues related to Google Ads. As part of a tools consolidation effort, leadership decided we would transition to a new ticket management system within one year's time. We conducted an extensive gap analysis and spent 6 months developing features in the new system, bringing it to an MVP state. Before onboarding the full team, we needed to test the MVP live to be sure it was ready. Thus, we planned to run a 2-week pilot to deploy the system with a subset of the team.

My Role and Team

This project was a coordinated effort among members of both the business and product teams.

  • My Role: User Advocate, Unofficial UX Researcher

  • Product: PMs, Engineers, UX Designers and a UX Researcher

  • Business Operations: Program Managers (PgM), Business Leadership

PILOT GOALS

  1. Assess Readiness: Evaluate whether users can fully rely on this system as-is to perform their work.

  2. Continue Improving the Platform: Identify bugs and usability issues.

  3. Plan for Change Management: Assess user sentiments about switching to the new system.


PHASE 1: Planning

I was assigned the task of capturing and reporting user feedback during the 2-week pilot. Many members of the cross-functional team also planned to visit the offices to see the product in action, so I was responsible for coordinating shadowing sessions between visiting Engineers, UXers and users as well. While I owned these asks on the business side, the UX Researcher (UXR) and I realized we would benefit from partnering and coordinating our efforts. Together, we came up with a plan for managing user feedback collection using a mixed-methods approach.


During the week leading up to the 2-week pilot, we led a series of orientation sessions and trainings. I delivered the User Feedback trainings, showing users how to submit their thoughts and what to expect from the shadowing and interview sessions. But of course, even with weeks of preparation, there were surprises.

Improved Shadowing:

We had Engineers, PMs and UXers fly to the 3 pilot sites. After many months developing the MVP system, the team was excited to see the tool in action. Perhaps mistakenly, I assumed everyone traveling to the sites would be well-versed in our team's workflows. However, I soon discovered many Engineers were not directly acquainted with our team, and some treated this as an “immersion trip”: a chance to get to know their users. While developing empathy by getting to know users firsthand is critical for building excellent products, I quickly got feedback that the shadow sessions had become distracting, making it difficult for users to focus on their actual work. To address this, I created a mandatory training for Engineers that taught shadowing best practices and gave an overview of our team's workflows and roles.

Introduced a Tool to Triangulate Data:

The UXR and I knew we'd be working with a large volume of incoming data. Initially, we thought it would be fine to rely on tools tailored to each type of incoming data. However, once the pilot kicked off and the data started streaming in, we realized that having data fragmented across so many systems made it difficult to generate cohesive insights and determine priority. We had to navigate between custom dashboards, Google Sheets, Google Docs and a bug (and feature request) tracking system. To address this, I researched and implemented a tool that let us pull data from the bug-tracking system into spreadsheet format and push all spreadsheet updates back into the bug-tracking system. This was incredibly helpful because it gave us a centralized place to code all issues by “theme” and quickly convert insights into actionable bugs.
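The internal tooling can't be shared, but the round-trip pattern looks roughly like the sketch below. Everything in it is an assumption for illustration: the tracker endpoint, its JSON fields, and the CSV standing in for our spreadsheet are hypothetical stand-ins, not the real systems.

    import csv
    import requests  # third-party HTTP client

    # Hypothetical stand-ins -- the real tracker and spreadsheet were internal.
    TRACKER_API = "https://tracker.example.com/api/issues"
    SHEET_CSV = "pilot_issues.csv"

    def pull_issues_to_sheet():
        """Flatten open pilot issues into a CSV with an empty 'theme' column."""
        issues = requests.get(TRACKER_API, params={"label": "pilot"}).json()
        with open(SHEET_CSV, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["id", "title", "status", "theme"])
            writer.writeheader()
            for issue in issues:
                writer.writerow({"id": issue["id"], "title": issue["title"],
                                 "status": issue["status"], "theme": ""})

    def push_themes_to_tracker():
        """Push the manually coded 'theme' column back to the tracker as labels."""
        with open(SHEET_CSV, newline="") as f:
            for row in csv.DictReader(f):
                if row["theme"]:
                    requests.patch(TRACKER_API + "/" + row["id"],
                                   json={"labels": [row["theme"]]})

The key design choice is that the spreadsheet, where the qualitative coding happens, never becomes a fork: every theme flows back into the tracker, so Engineering and UX keep working from a single source of truth.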


PHASE 2: PILOTING


Reconciling Mixed-Method Data:

Data collected from these different instruments sometimes yielded conflicting insights. For example, we captured “Switch Rate” in two ways. First, we had users self-report the times they had to switch back to the old system, so we could understand their motivations for switching. However, because self-report data has known limitations, we also captured “objective” switching data through logs. In the end, these numbers were very different, which meant we had to explain why the Switch Rate data did not match perfectly. Furthermore, we found that one technical issue drove very high Switch Rates overall. Yet during our interviews, users told us they really liked the new platform: they found it intuitive and faster than our existing tool. This was a fascinating example of how metrics won't always tell you the full story. In this case, the Switch Rate metric alone would have been a poor signal for the team's sentiment and willingness to onboard; talking to users was the best way to gauge that.
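To make the reconciliation concrete, here is a minimal sketch of how the logged half of the metric could be computed. The event schema and the definition of a “switch” (any logged move from the new system back to the old one) are my assumed operationalization, not the internal one.

    def logged_switch_rate(events):
        """events: chronological (user, system) pairs, system in {"new", "old"}.
        Returns the fraction of system-to-system transitions that were
        switches back from the new system to the old one."""
        sessions_by_user = {}
        for user, system in events:
            sessions_by_user.setdefault(user, []).append(system)
        switches = sum(
            sum(1 for a, b in zip(seq, seq[1:]) if a == "new" and b == "old")
            for seq in sessions_by_user.values())
        transitions = sum(max(len(seq) - 1, 0) for seq in sessions_by_user.values())
        return switches / transitions if transitions else 0.0

    # Illustrative data only: one user switches back once, the other never does.
    events = [("a", "new"), ("a", "old"), ("a", "new"), ("b", "new")]
    print(logged_switch_rate(events))  # 0.5

Self-reported switches, counted from diary entries over the same window, gave the second estimate; the gap between the two numbers was exactly what we had to explain.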

Reporting to a Diverse Audience:

Our various stakeholders all had slightly different goals and areas of expertise for this pilot. Business stakeholders were most interested in the predicted impact to the business, Engineering wanted to solve any bugs that came up, and UX was eager to understand which parts of the overall design could be improved. The UXR and I needed to answer high-level questions while also surfacing the wealth of critical, detailed findings on how to improve the system. In the end, we created a taxonomy of “Key Themes” that summarized outcomes in an intelligible way while also referencing every bug, usability issue and piece of feedback we had received during the pilot related to that theme. We set up ongoing meetings with stakeholder sub-groups to address specific questions and develop action plans related to their goals.
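Conceptually, each Key Theme was one summary that every stakeholder group could drill into along its own axis. A minimal sketch of that structure, with field names and example values that are illustrative assumptions rather than the internal schema:

    from dataclasses import dataclass, field

    @dataclass
    class KeyTheme:
        """One node of the reporting taxonomy: a plain-language summary,
        backed by references each stakeholder sub-group can drill into."""
        name: str
        summary: str                                           # for leadership
        bug_ids: list = field(default_factory=list)            # for Engineering
        usability_issues: list = field(default_factory=list)   # for UX
        business_impact: str = ""                              # for Business Ops

    # Hypothetical example, for illustration only.
    theme = KeyTheme(
        name="Search latency",
        summary="Agents wait noticeably longer for ticket search results.",
        bug_ids=["issue-1234"],
        usability_issues=["no progress indicator while a search runs"],
        business_impact="slower first response on time-sensitive tickets")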

Establishing Processes:

Through this pilot, we refined a system for folding a complex research agenda into a pilot framework. To ensure that everything we learned along the way was put to good use, I helped the UXR create a formal Pilot Research Protocol, which we repurposed for two subsequent pilots.

PHASE 3: FOLLOW-UP

 

IMPACT

Through this work, we established a plan for improving the product and onboarding my team. We identified 10 urgent issues that were fixed before onboarding, and 15+ longer-term feature requests that have been added to the product roadmap. Furthermore, I helped the UXR develop a Standardized Pilot Protocol, which was used as the guide for two subsequent pilots.

 

REFLECTIONS

Running a pilot of this scale and complexity was an incredibly rewarding challenge. I was pushed to combine my skills with different research methods into a comprehensive program. Even more importantly, this experience taught me how to incorporate user research into existing business and engineering agendas, and how to follow through on and advocate for my findings. A few specific realizations that I plan to carry with me in future work:

  • Benefits of ‘Live’ Testing: Testing the system live, in situ, uncovered insights that no previous research had. For example, usability testing couldn't reveal which old features users would miss until those features were actually gone. In this way, the pilot helped us detect issues that never surfaced during earlier testing phases.

  • Advantages of Mixed Methods: This was also my first time working on a complex mixed-methods research project. It was fascinating to see how the combination of methods allowed us to illustrate themes at different levels of granularity. For example, we could see trends in logged switch data that were underreported by users in interviews.

TESTIMONIALS

“Sofia played an essential role in the pilots in 2017. She demonstrated complete ownership and independence over data collection, analysis, and reporting. Throughout our collaboration, I felt incredibly confident that Sofia would go above and beyond in delivering high quality work. She also challenged parts of the process, which made me think through our approach more critically. As a result, our deliverable was more sound, consistent, and impactful.” - Liz (UXR)

“It's been an absolute pleasure working on the pilots with Sofia. She is thoughtful, forward-thinking, and one of the most competent peers I've had the good fortune of working with. She willingly and reliably follows through, always with a positive attitude. Can't thank you enough, Sofia!” - Support Agent

“Sofia has been such a great asset to the Operations and Product teams. Over the past quarters she helped to review and/or facilitate ~20 shadow sessions and roundtable interviews with Eng, PM, UX and users. Her work on this initiative was crucial to making product improvements and understanding the difference between workflows for onboarding.” - Ops PgM
