CASE STUDY · ENTERPRISE DESIGN THINKING · IBM
IBM's Power Systems servers run the infrastructure behind banks, hospitals, and governments, where a single breach costs an average of $4 million. When we were asked to make patching those servers easier, a Distinguished Engineer was already convinced the solution was one-click automation. Six weeks of research across global enterprise customers revealed an insight so compelling that the same skeptic who pushed for automation stood up at the final playback and said the process had changed how he thinks about building software. This is the story of how Project Monocle became a global case study in how design drives business outcomes at scale.
ROLE
DURATION
16 Months (Hallmark)
6 Weeks (Incubator)
IMPACT
$2B Revenue &
Global Recognition

THE CONTEXT
We got a brief that assumed the solution was automation
IBM's Power Systems servers run the infrastructure behind banks, hospitals, and governments. When a security vulnerability hits, a system administrator has to figure out whether they're affected, whether a patch exists, whether it applies to their specific configuration, whether it'll break something else, whether the system needs to come down, and then coordinate the maintenance window with every team that touches that environment. After all of that, they compile a compliance report proving every step happened correctly. A breach on these systems costs an average of $4 million.
The incubator brief called for building a one-click update. "Make patching a server as easy as updating an app on your phone."
Carl Burnett, a Distinguished Engineer and one of the senior technical architects in the Power Systems business, was among the advocates for that direction. He was skeptical of design thinking; he wanted the engineering solution.
Six of us were put on it, working out of IBM's Austin, Texas design studio. I was the only team member with formal education in server architecture and networking, so the team relied on me to structure the problem space, lead user interviews, and translate technical complexity into research questions the rest of the team could work from.
THE DISCOVERY
Research uncovered a deeper insight
We spoke with six sponsor users globally. I built research artifacts from subject matter expert sessions. The research team wrote the interview protocols, but they pulled from me as the technical SME for what questions to ask. I ran some of the studies myself so I could dig into follow-ups when something interesting came up.
User Journey. The patching workflow mapped across three phases. Multiple actors hand off across the process. Decision points cascade: Am I affected? Is there a patch available? Does this apply to me? Is it concurrent? Each branch introduces delay and coordination overhead.

Jack
Lead System Administrator
Jack deals with the day-to-day operations like figuring out what needs to get patched in his environment.
Before he can patch his systems, he needs to find out: Are my systems exposed? Is this even the right patch? Will this patch affect others?

Jill
IT Manager
Jill is responsible for making sure admins are able to successfully patch their systems.
She needs to make sure her system admins and stakeholders (e.g. app teams) are aware and aligned on the changes coming down the pipeline.
User Personas. Jack, Lead System Administrator, and Jill, IT Manager.
Nobody asked for a single-click update.
What they told us was that the hardest part of patching wasn't the patch. It was everything before and after it. The information they needed to make a decision was scattered across disconnected systems, buried in dense XML files, and dependent on institutional knowledge that lived in individual people's heads. They couldn't see their own environment clearly enough to act.
Scheduling was its own problem. Every upgrade required coordination across multiple teams with competing maintenance windows. And security audits compounded everything, because administrators and their managers had to manually compile information about every endpoint, every vulnerability, and every patch into a report that satisfied auditors. One manager described that reporting process as consuming 25% of their time during the first half of every year.
The brief assumed the solution was automation. The actual problem was visibility.
“The most difficult thing for me is to get information. To see all the pieces and how they belong together so I can properly plan.”
– IBM Power Systems Customer, Project Monocle Sponsor User
“Trying to schedule upgrades is like pulling teeth.”
– IBM Power Systems Customer, Project Monocle Sponsor User
“I spend 25% of my time during the first half of the year prepping for our audit in July. That may not sound like much but multiply that by 50 hours a week, at a manager pay rate, with all the weeks required, and that is a lot of time.”
– IBM Power Systems Customer, Project Monocle Sponsor User

Experience Map. Full experience map across the patching lifecycle. Rows map Doing, Thinking, Feeling, and Key Opportunities for each phase. The densest problems clustered around information access and coordination.
THE PIVOT
From a systems problem to a human problem
We reframed the product around three principles that came directly from the research. Every screen had to answer "what do I need to do right now?" The system administrator patching servers and the IT manager proving compliance had different needs, so the product had to serve both without forcing either into the other's workflow. And administrators managed hundreds of endpoints across multiple component types, so the product couldn't require them to update one machine at a time.
More than 100 concepts in 5 weeks, tested daily
We ran empathy mapping differently than usual. Instead of treating it as a post-interview synthesis tool, our team used it before interviews to pull domain knowledge from subject matter experts. That gave all of us a foundation to ask better questions when we sat down with actual users, so our first round of interviews was already technically grounded.
Empathy Map Synthesis. System Administrator empathy map: Thinks, Feels, Says, Does quadrants filled from SME sessions before user interviews.
Over five weeks, we generated and refined over 100 concepts together. We moved from paper prototypes to wireframes to high-fidelity prototypes, running RITE testing to validate every interaction as we went. I drew wireframes alongside the other designers and contributed to the UX design throughout. The pace was daily: our researcher would take a call with a user, bring back feedback, and we'd iterate in real time, carrying the new sketches into the next conversation to keep moving it forward. Nobody was guessing about whether something worked because we were testing it constantly.
I built the high-fidelity prototypes. Fully coded, 100% functional, with backend databases simulating real data calls. The color palette came from the server environments: reds, yellows, and greens from status lights, grays from the rack hardware, so the interface felt native to the people using it.
Prototype Evolution. The progression from paper prototypes to wireframes to high-fidelity coded prototypes over the five-week incubator. Fun fact! The color palettes used for the Project Monocle prototype all came from System Admin server environments. (Red, yellow, and green cables, gray from the server racks.)
The incubator became Power Systems' highest priority product
Phil Gilbert, IBM's General Manager of Design, recognized the team for the speed and rigor of the iteration. He later called it "the best story of design at IBM in the last three years."
The incubator project graduated to Hallmark status, IBM's designation for its highest-priority products. The Power Systems business unit built a full product team around it. I was the only member of the original incubator team who transitioned onto the Hallmark project.
Incubator Prototype. Four of the major screens from the functional prototype. Inventory Dashboard, Components, Scheduling, and Reporting. The prototype took two days to build. We spent an additional two days refining and testing before the final playback.
Transforming the culture within Power Systems
The Hallmark expanded research to over 30 customers, including 10 sponsor users who gave us ongoing access to their environments and workflows. I ran field studies with system administrators. Over the course of sixteen months, we ran more than 50 workshops and logged 146 hours of user research. Weekly alignment with sponsor users kept the product grounded as the scope grew.
As-Is (Hallmark Project). A refined second iteration incorporating insights from multiple rounds of interviews. Direct quotes were printed onto cards and used as working material, with new research filling the gaps in the original scenario.
On Engineering
Some of the firmware-level fixes weren't accessible through any automated system. The process for applying certain patches was maintained in a manual document by a single person. An administrator would click a button and wait two weeks for a fix to come back.
I worked to build an API and a curl-based system that could talk to the BMC (Baseboard Management Controller) layer directly. BMC access through a web interface wasn't something people were doing at the time. We built it with proxies and dedicated trusted servers handling the requests, because nothing except a verified server could communicate at that level. That work continued through the OpenBMC project.
I worked with Carl Burnett to build a prototype that replaced a two-week manual turnaround for fix information with a 5-10 minute API call integrated into the end product.
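The case study doesn't include the actual API, but the proxied-request pattern described above can be sketched roughly as follows. Everything here is an assumption for illustration: the proxy hostname, the `fix-info` endpoint path, and the query parameters are hypothetical, standing in for the real interface that only verified trusted servers could call.

```python
# Hypothetical sketch of the proxied BMC fix-information call.
# The endpoint path, hostname, and parameter names are illustrative only;
# in the real system, only a verified trusted server held the credentials
# needed to communicate with the BMC layer directly.
from urllib.parse import urlencode
from urllib.request import Request

def build_fix_info_request(proxy_host: str, serial: str, level: str) -> Request:
    """Build the request that the trusted proxy forwards to the BMC layer."""
    query = urlencode({"serial": serial, "currentLevel": level})
    url = f"https://{proxy_host}/bmc/fix-info?{query}"
    return Request(url, headers={"Accept": "application/json"}, method="GET")

req = build_fix_info_request("trusted-proxy.example.com", "8286-42A-1234567", "860.42")
print(req.full_url)
```

The design point is the indirection: administrators never talk to the BMC themselves; the web app hands the request to a dedicated proxy, which is the only party trusted at that level.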
Inventory View (Hallmark). The full environment in a single table. Every endpoint filterable by type, description, current level, and recommended level. Administrators could see at a glance what needed attention across hundreds of machines without opening a single XML file.
Endpoint Details (Hallmark). Drilling into a single machine surfaces everything needed to make a patching decision in one place. Host info, the physical network topology of every component sharing that serial number, current and recommended levels, reboot requirements, and a live feed of every APAR included in the update.
Patch Plan (Hallmark). Scheduling a maintenance window meant selecting a date, choosing a fix level, and seeing exactly which systems were in scope. Teams could name, describe, share, and archive plans, giving everyone from the administrator to the auditor a clear record of what was patched, when, and why.
A global case study in how user-centered design drives business outcomes
As Power Systems' first-ever web-based app, Project Monocle proved something new in a historically engineering-driven part of the business: user-centered design yields real outcomes.
Following Monocle's launch, Power Systems contributed to a 32% revenue increase across IBM Systems, driving $3.3 billion in total segment revenue in 2017. Firmware turnaround dropped from 2 weeks to 5-10 minutes.
After Monocle's beta launch, requests for Design Thinking workshops started pouring in from teams across Power Systems who wanted to learn how we did it. The volume grew fast enough that I co-founded Power Systems' first internal facilitation practice to meet the demand.
Lindsay Zierk © 2014 – 2026
All Rights Reserved.