IBM's Power Systems servers run the infrastructure behind banks, hospitals, and governments. When a security vulnerability hits, a system administrator has to determine whether they're affected, whether a patch exists, whether it applies to their specific configuration, whether it'll break something else, whether the system needs to come down — and then coordinate the maintenance window with every team that touches that environment.
After all of that, they compile a compliance report proving every step happened correctly. A breach on these systems costs an average of $4 million.
The incubator brief called for building a one-click update. Carl Burnett, a Distinguished Engineer and senior technical architect, was one of the advocates for that direction. He was skeptical of design thinking; he wanted an engineering solution.
Six of us were put on it, working out of IBM's Austin design studio. I was the only team member with formal education in server architecture and networking, so the team relied on me to structure the problem space, lead user interviews, and translate technical complexity into research questions the rest of the team could work from.

Early incubator workshop. One of the early workshops with a Power Systems SME, working to understand the problem space.
We spoke with six sponsor users globally. I built research artifacts from subject matter expert sessions. The research team wrote the interview protocols, but they pulled from me as the technical SME for what questions to ask. I ran some of the studies myself so I could dig into follow-ups when something interesting came up.
Not a single user asked for a one-click update. The hardest part of patching wasn't the patch — it was everything before and after it.
The information they needed was scattered across disconnected systems, buried in dense XML files, and dependent on institutional knowledge in people's heads. They couldn't see their own environment clearly enough to act.
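The actual fix catalogs aren't reproduced here, but to illustrate the kind of drudgery involved, here is a small sketch of pulling applicability out of a patch-metadata file programmatically. The schema, patch ID, and model numbers below are invented for illustration; real Power Systems fix metadata differs.

```python
import xml.etree.ElementTree as ET

# Hypothetical patch-metadata schema, invented for illustration only.
PATCH_XML = """
<patch id="MH01234" severity="HIGH">
  <applies-to>
    <system model="9080-M9S" firmware-below="VH950_045"/>
    <system model="9040-MR9" firmware-below="VM950_045"/>
  </applies-to>
  <requires-restart>true</requires-restart>
</patch>
"""

def affected_models(xml_text: str) -> list[str]:
    """Return the server models a patch applies to."""
    root = ET.fromstring(xml_text)
    return [s.get("model") for s in root.iter("system")]

print(affected_models(PATCH_XML))  # ['9080-M9S', '9040-MR9']
```

Administrators were doing the equivalent of this by hand, across hundreds of files and endpoints, with the "does it apply to me" logic living in their heads.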
Scheduling required coordination across multiple teams with competing maintenance windows. Security audits compounded everything — administrators had to manually compile information about every endpoint, vulnerability, and patch into a report for auditors.
“The most difficult thing for me is to get information. To see all the pieces and how they belong together so I can properly plan.”
— IBM Power Systems Customer, Project Monocle Sponsor User
One manager described the reporting process as consuming 25% of their time during the first half of every year.
The brief assumed the solution was automation. The actual problem was visibility.
We reframed the product around three principles drawn directly from the research. Every screen had to answer "what do I need to do right now?" The administrator and the manager had different needs, so the product served both without forcing either into the other's workflow. And administrators managed hundreds of endpoints, so the product couldn't require updating one machine at a time.
Incubator Hills
We ran empathy mapping differently than usual — before interviews, to pull domain knowledge from subject matter experts. That gave all of us a foundation to ask better questions, so our first round of interviews was already technically grounded.
Over five weeks, we generated and refined over 100 concepts. We moved from paper prototypes to wireframes to high-fidelity prototypes, running RITE testing to validate every interaction. The pace was daily — someone would be on a call with a user while the rest of us iterated on what came back from the last one.

Terminal Concept. We even explored and tested a terminal concept to meet users where they already lived.
Every single person, regardless of title, contributed to the sketches, wireframes, and designs. I built the high-fidelity prototypes. Fully coded, 100% functional, with backend databases simulating real data calls. The color palettes came from the server environments — reds, yellows, and greens from status lights, grays from rack hardware — so the interface felt native to the people using it.
“You guys have done such a great job of listening; we talk and you listen and next week new things are there and every time it's gotten better... When will the beta be ready?”
— IBM Power Systems Customer, Project Monocle Sponsor User
The sales office in New York started getting asked about Project Monocle. Nobody there had heard of it. Demand from sponsor users was reaching the business before we had even finished the incubator.
Power Systems established a Hallmark project — IBM's designation for its highest-priority products, alongside Watson and Bluemix — and built a full product team around it. I was chosen to lead the transition from incubator to Hallmark.
The new team had zero infrastructure background. I built a sequenced learning path from foundational networking through IBM's server architecture, onboarded everyone together, played back the incubator research, and planned the three-day workshop that launched the Hallmark.
The team was conducting user research and contributing to design decisions within the first two weeks.
The Hallmark expanded research to over 30 customers, including 10 sponsor users who gave us ongoing access to their environments and workflows. I ran a week-long field study with a system administrator. Over sixteen months, we ran more than 50 workshops and logged 146 hours of user research.
Carl and I worked closely throughout the Hallmark. He was still getting comfortable with design thinking. So was I — I had been a skeptic on a previous team. That failure is the reason I walked into Monocle determined to commit fully.
I treated him like a mentor. Over months of working through the architecture together, Carl stopped treating the research as something the design team did and started treating it as shared context for technical decisions.
When we discovered that some firmware-level patches left administrators unable to determine whether they were affected — forcing them to call IBM for a manually compiled spreadsheet with a two-week turnaround — it was Carl who leaned into the problem. I built the API layer for the web interfaces to connect to the servers. He built the curl-based system and set up the dedicated trusted servers. HMC and BMC access through a web interface wasn't something people were doing at that time.
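The prototype itself isn't public, and the exact interfaces we built aren't shown here. Purely to illustrate the idea of reaching a BMC's firmware data over HTTPS, the sketch below uses the DMTF Redfish paths that OpenBMC later standardized on — an assumption about the modern interface, not a record of the original system. The host and credentials are placeholders.

```python
import base64
import json
import ssl
import urllib.request

# Standard Redfish path for firmware inventory (DMTF schema).
REDFISH_FW = "/redfish/v1/UpdateService/FirmwareInventory"

def firmware_inventory_url(bmc_host: str) -> str:
    """Compose the Redfish URI for a BMC's firmware inventory."""
    return f"https://{bmc_host}{REDFISH_FW}"

def query_firmware_levels(bmc_host: str, user: str, password: str) -> dict:
    """Fetch firmware inventory from a live BMC (needs network access).
    Lab BMCs typically serve self-signed certs, hence the relaxed TLS
    context -- the production approach used dedicated trusted servers."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request(firmware_inventory_url(bmc_host))
    # HTTP Basic auth header, the same thing `curl -u user:pass` sends.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req, context=ctx, timeout=10) as resp:
        return json.load(resp)

print(firmware_inventory_url("bmc.example.com"))
# https://bmc.example.com/redfish/v1/UpdateService/FirmwareInventory
```

The point of the real work was exactly this shape of call: give a web interface a trusted, authenticated path to firmware-level data so an administrator never has to wait two weeks for a spreadsheet.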
“Engaging our stakeholders at the beginning instead of the end allowed us to get at the true pain points. You could feel the emotion from these users.”
— Carl Burnett, Distinguished Engineer
The Incubator's linear map had been showing us one loop of a cycle that repeats roughly once a month — nine stages from Compliance through Push to Production and back again. Admins lived inside this loop indefinitely. The product couldn't be a tool they picked up and put down. It had to be a tool they lived in.
01
Personas
Three personas became two
The security team's needs turned out to be either handled by the admin or upstream of where the tool provided value.
02
Hills
From tasks to confidence
The incubator hills were task-driven and technology-specific. The Hallmark missions expanded the scope and were refined to ensure positive outcomes and inspire confidence in our users.
03
Coordination
A footnote became a mission
The incubator had identified "Notify manager" as a footnote; the Hallmark's ongoing research showed that coordinating maintenance windows was a full, named stage of the loop.
04
Scale
Platform extensibility from day one
Users told us where the product needed to grow. We designed the visual system, API architecture, and IA to be platform-extensible.
As Power Systems' first SaaS application, Monocle gave IBM a persistent connection to its customers' environments for the first time — unlocking a brand new Infrastructure-as-a-Service revenue model.
The OpenBMC proof of concept I built convinced IBM to invest in what became a Linux Foundation project — an industry-standard open source firmware stack adopted by IBM, Google, Facebook, Microsoft, and Intel. It runs across POWER9 and Power10 platforms, including the Department of Energy's Summit and Sierra supercomputers.
As Power Systems' first-ever web-based application, Monocle created the SaaS layer that connected IBM's on-premises infrastructure to the cloud, delivering the hybrid cloud model that redefined the company's strategy.
“We are aggressively reinventing our systems portfolio for cloud, data and AI. The centerpiece is our Power franchise.”
— Ginni Rometty, CEO, IBM
O'Reilly Design · InVision (The Loop) · IBM Enterprise Design Thinking · OpenBMC (Linux Foundation)
Featured at the O'Reilly Design Conference, in InVision's documentary on IBM's design transformation, and as a published Enterprise Design Thinking case study. The OpenBMC proof of concept that came from Monocle's technical discovery became a Linux Foundation project adopted by IBM, Google, Facebook, Microsoft, and Intel.
O'Reilly Design Conference. IBM Distinguished Designer Doug Powell presents Project Monocle as a case study in design-driven outcomes at the O'Reilly Design Conference in San Francisco.
“This is the best story of design at IBM in the last three years… By doing field research, they found out that this was a human problem, not a system problem.”
— Phil Gilbert, General Manager of Design, IBM
IBM Design Studio, Austin TX. Phil Gilbert, General Manager of IBM Design, calls Project Monocle "the best story of design at IBM" during a studio-wide presentation to the Austin design team.
01
Taught the Domain
Got the design team fluent in server architecture
My designers were being asked to design for server firmware, HMC architecture, and VIOS layers, none of which they'd ever encountered before. I broke down briefings, ran dry-run playbacks privately until they were confident, and had them present technical findings to leadership on their own. Within weeks, they were navigating the domain independently.
02
Ramped Up a Team from Zero
Got the Hallmark team contributing to design decisions in less than two weeks
I built a sequenced learning path from foundational networking through IBM's server architecture, onboarded everyone on the Hallmark team together, and planned the workshop that launched the Hallmark. The team was contributing to design decisions within two weeks.
03
Participated in Research
Deeply embedded in every session
As an engineer embedded directly in the research process, I protected the integrity of our findings. I conducted field research with an on-site system administrator, spending a week in observational inquiry. I fielded technical complexity, filtered noise, and ensured the real human problems could surface.
04
Cross-Functional Facilitation
Planning and facilitating alignment across disciplines
As a technical developer, I had the credibility to bring design thinking into an engineering-driven culture in a way a designer couldn't. I planned and facilitated 50+ workshops across both phases in an organization that had been engineering-driven for decades. After the beta, dozens of teams in Power Systems started requesting their own design thinking workshops.
05
Built What We Designed
Zero fidelity loss from prototype to production
Every prototype during the incubator was fully coded and functional. I sketched, wireframed, and helped map user flows throughout. I "pair programmed" with my designer to work through interactions. The screens I built in Angular shipped to production with the design intact, without a single handoff between design and dev.
06
Founded a Second Hallmark
Solving technical constraints that blocked user needs
When research uncovered that users needed deeper visibility, I used my 20% innovation time to partner with Chris Austen to expose the BMC layer through a web interface for the first time. The proof of concept was presented at the Power Summit in San Jose. It became Power Systems' second Hallmark project and grew into a Linux Foundation project.