When have metrics been useful in your design practice? When have metrics been a challenge? In my experience, metrics have played the role of the canary in the coal mine as much as they have helped teams stay on pace and make meaningful decisions about the work they are designing and building.
But why are modern-day teams obsessed with numbers? And why should designers and design program managers care? First and foremost, digital design work is often an abstraction from real-life problems, which requires abstract reasoning to make sense of. Basic math, measurement, and metrics enter the scene to help us understand value, change, and even risk over time. Metrics are signals or performance indicators that show us how we’re doing along the way.
At some point, every designer has had to deal with DAU, MDAU, CTR, churn… It feels like some sort of alphabet soup that might kill the creative soul. On the contrary, these measures are useful signals that tell us what humans are doing as they interact with the products and services that you offer them (within the company and in the world!). The challenge that design program managers face, however, is that our work can be intangible and difficult to measure – or so it seems.
In this post, I want to talk about how we can borrow from our collaborators and bring similar rigor to our programs through measurement – plus some tips for getting started with basic indicators of program health. Like our partners in product, engineering, and design, program managers can measure the impact and value of their work through measures of ✨change✨. The whole point of measuring anything at all is to give you the ability to monitor your work against your plan and make decisions accordingly. Let’s talk about it!
Types of programs and corresponding measures
In my last post, I talked about organizing programs into four categories: product delivery, process management, people programs, and partnerships. Each of these program types involves different problems to solve, along with different strategies and methodologies for solving them. Therefore, the kinds of things you measure to understand the performance of your program will follow suit.
Product delivery
With product delivery, you are going to be accountable for shared metrics with your technical counterparts, like the DAU, MDAU, CTR, and churn-type metrics noted above. Those are a few of the basic signals that tell you what your customers are doing at scale. In addition to how the product performs, you may want to consider signals on the effectiveness of the program itself. In that case, consider measures that tell you how the product development process is performing: velocity (how fast you can build and ship useful things), quality (heuristics that help you meet and maintain your quality bar), predictability (often a mix of qualitative inputs that signal how repeatable your process is), and maturity and program health (progress, plans, and risk mitigation measures, which can sometimes tie in with your goals over time). If you specialize in areas like Design Systems, there is a full suite of metrics that can help you track your progress, like # of new components, adoption rates, impact to development velocity, etc.
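To make a couple of those product signals concrete, here is a minimal sketch of how they are commonly computed. The numbers and function names are made up for illustration; your analytics tooling will define these precisely for your own product.

```python
# Hypothetical, hand-rolled versions of two common product signals.

def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Share of customers who left during the period."""
    return customers_lost / customers_at_start

def stickiness(dau: int, mau: int) -> float:
    """DAU/MAU ratio: roughly, what fraction of monthly users show up daily."""
    return dau / mau

# Made-up numbers for illustration.
print(f"Churn: {churn_rate(2000, 120):.1%}")          # Churn: 6.0%
print(f"Stickiness: {stickiness(4500, 18000):.1%}")   # Stickiness: 25.0%
```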
Useful measures of success:
Velocity and progress: Monitoring the progress and status of the program to ensure that it stays on schedule, and identifying areas of delay so that appropriate corrective action can be taken.
Performance: Keeping track of the performance of the team, assessing the strengths and weaknesses of team members, and keeping team members motivated.
Process management
When it comes to process management, you’re going to want to look at both the business value and the behavioral change effected by the process at hand. Standard business signals around cost savings, time savings, and throughput will tell you how efficient a process is. Behavioral signals like adoption rates and sentiment will tell you how effective a process is. You could apply this to processes like project intake, sprints, budget management, change requests, etc. For example, if we look at project triage, we might want to measure some form of risk mitigation by tracking how well potential risks to the program are identified and assessed, and the appropriate steps taken to mitigate them. Risks mitigated is a pretty strong signal of efficacy!
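As a rough illustration of that efficiency-versus-effectiveness split, here is a minimal sketch with made-up numbers for a hypothetical project-intake process; none of these figures or variable names come from a standard.

```python
# Efficiency: how much time the new intake process saves per quarter.
old_cycle_hours = 12      # hours per intake under the old process
new_cycle_hours = 7       # hours per intake under the new process
intakes_per_quarter = 40
hours_saved = (old_cycle_hours - new_cycle_hours) * intakes_per_quarter

# Effectiveness: how widely the new process is actually adopted.
teams_using_process = 9
total_teams = 12
adoption_rate = teams_using_process / total_teams

print(f"Hours saved per quarter: {hours_saved}")  # 200
print(f"Adoption rate: {adoption_rate:.0%}")      # 75%
```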
A few more indicators:
Budget management: Tracking the program budget and comparing it to expenses, profits, and projected returns.
Resource management: Track the utilization of resources, such as people, equipment, facilities, and materials, to ensure that they are being used efficiently and effectively. Remember, resources required = scope / (time × capacity). Highly measurable – see the quick sketch below!
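Here is that formula worked through as a tiny sketch. The units are assumptions chosen for illustration: scope in story points, time in weeks, and capacity in points per person per week.

```python
def resources_required(scope: float, time: float, capacity: float) -> float:
    """resources required = scope / (time × capacity)"""
    return scope / (time * capacity)

# Illustrative numbers: 120 points of scope, 4 weeks,
# each person delivering 5 points per week.
people_needed = resources_required(scope=120, time=4, capacity=5)
print(people_needed)  # 6.0 people
```

Rearranging the same equation also tells you how much scope a fixed team can absorb in a fixed window, which is often the more realistic question to ask.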

People programs
People programs are big contributors to organizational health and performance. Tracking how the team performs, assessing the strengths and weaknesses of team members, and keeping people motivated will tell you how your organization is doing. And anything related to people and their experience working on your team will be shaped by attraction, engagement, development, retention, and attrition:
Attraction includes your hiring pipeline, candidate experience, and employer brand. You’ll use basic growth metrics to understand your funnel and how each stage of the funnel performs (e.g., how many people apply to job listings vs. how many high-quality candidates actually interview).
Engagement is about the ability to engage employees in the work that matters to get the results you want, multiplied by employee sentiment (which you probably already measure in an annual pulse survey).
Development benchmarks signal change in capacity or growth over time (e.g., the rate at which people get promoted as they get better and better at their job).
Retention is the ability to keep good employees happy, on the team, and producing good results. You can measure tenure and employee sentiment against business performance to get a clear picture.
Attrition is the rate at which people leave the company, both regrettable (they were good and left) and non-regrettable (they left, and it’s probably a good thing). The underlying math is simple: starting team size, minus people who left, plus people who joined. You want to look at the net size of the team and the rate or pace of regrettable attrition – a quick sketch follows below. If there is a spike in regrettable attrition, that might indicate a cultural issue on the team that needs to be addressed.
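Here is that attrition math as a minimal sketch. The headcounts are made up, and the split between regrettable and non-regrettable departures is something you would record yourself.

```python
# Made-up headcounts for one year.
starting_size = 40
joined = 6
left_regrettable = 4      # good people we wanted to keep
left_non_regrettable = 2  # departures that were probably for the best

net_size = starting_size - (left_regrettable + left_non_regrettable) + joined
regrettable_attrition_rate = left_regrettable / starting_size

print(f"Net team size: {net_size}")                                # 40
print(f"Regrettable attrition: {regrettable_attrition_rate:.0%}")  # 10%
```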
Partnerships
This program area is all about stakeholder management. This may include monitoring the satisfaction of stakeholders, such as customers, investors, or employees, and taking action to address any concerns or feedback. Depending on how you structure your portfolio, you might look to signals like executive effectiveness, strength or maturity of matrixed relationships, partner sentiment, or even anecdotal signals around perception and reputation. What are people saying and doing in response to partnerships? Counting your wins means a lot in this program area, as it can be ad hoc and less routine than other program areas. Remember, not every program requires fancy charts and graphs to track and understand the state and impact of the work. Logging basic events and evaluating whether they were good or bad for the objectives at hand can give you the insights you need to manage effectively.
Change over time
When measuring change, you measure the performance of something over time. You’re looking for the before-and-after effect of the actions you take and the decisions you make.
Measuring the past is essential for understanding the progress and outcomes of your programs. It helps you assess the effectiveness of your strategies and interventions and provides insights for future decision-making. Here are different approaches to measuring the past:
1. Quantitative data: Quantitative data involves using numbers and statistical analysis to measure change. This can include metrics such as survey data, reports, and statistical calculations. For example, you can analyze user engagement metrics, conversion rates, or revenue figures to track the impact of your program over time. Quantitative data gives you a numerical understanding of the changes that have occurred. If you don’t have fancy tools like Snowflake or a SQL warehouse, you can always log numbers manually and analyze them over time – see the sketch after this list. (I logged event attendance by hand for years so that I could create a nice little histogram to describe my community program growth over time!)
2. Qualitative data: Qualitative data involves using descriptive data to measure change. This can include methods such as observations, interviews, case studies, or user feedback. Qualitative data provides a deeper understanding of the experiences, motivations, and perceptions of your program's stakeholders. It can help you uncover valuable insights and identify areas for improvement that may not be captured by quantitative measures alone. Think about the partnerships work mentioned above.
3. Key Performance Indicators (KPIs): KPIs are predefined metrics that are aligned with your program's goals and objectives. These metrics serve as indicators of progress and performance. By setting specific KPIs, you can track the achievement of targeted outcomes and measure the extent to which your program is meeting its intended goals. Examples of KPIs include customer satisfaction scores, on-time delivery rates, or cost savings achieved.
4. Before and after analysis: Before and after analysis involves comparing a situation or outcomes before and after implementing a specific intervention or change. This approach helps you evaluate the impact of your program by assessing the differences in key metrics or indicators. For example, you can compare customer satisfaction scores before and after implementing a new customer service training program to measure its effectiveness.
5. Surveys and feedback: Collecting feedback from stakeholders through surveys or other feedback mechanisms is an effective way to measure changes over time. By periodically surveying your program's participants, customers, or employees, you can gather their perceptions, opinions, and satisfaction levels. This qualitative feedback can provide valuable insights into the effectiveness of your program and identify areas that require improvement.
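As a small example of the hand-logged approach from point 1 (and the before-and-after comparison from point 4), here is a minimal sketch. The attendance numbers are invented for illustration.

```python
# Hand-logged event attendance by year (made-up numbers).
attendance = {2019: 24, 2020: 41, 2021: 58, 2022: 85, 2023: 112}

# A plain-text histogram: one bar per year.
for year, count in attendance.items():
    print(f"{year}  {'█' * (count // 10):<12} {count}")

# A simple before-and-after comparison across the whole period.
first, last = attendance[2019], attendance[2023]
print(f"Growth since 2019: {(last - first) / first:.0%}")  # 367%
```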
Forecasting the future involves making informed predictions and projections about the potential changes and outcomes of your program. It helps you anticipate challenges, set realistic goals, and make strategic decisions. Here are different approaches to forecasting the future:
1. Analyze past trends and patterns: Analyzing historical data and identifying trends or patterns can provide insights into future changes. By looking at past performance and identifying recurring patterns or trends, you can make informed assumptions about future developments. For example, if you notice a consistent increase in user engagement over the past few quarters, you may forecast continued growth in the future.
2. Identify key drivers: Understanding the main factors that drive change is crucial for forecasting the future. Factors such as changes in the economy, technology advancements, or shifts in consumer behavior can significantly impact the shape of a product delivery program. By monitoring and analyzing these key drivers, you can make predictions about how they may influence your program in the future. For instance, if you anticipate a rise in mobile device usage, you can forecast the need to optimize your program for mobile platforms. And while a design program manager may not be in a position to run market analysis, getting read in on these trends can help you anticipate changes to your product development lifecycle.
3. Conduct research: Conducting research allows you to gather information about the current state of affairs and potential changes that may occur in the future. This can involve market research, competitor analysis, or industry trends. Research helps you stay informed about external factors and developments that may impact your program's success. By staying abreast of relevant research findings, you can make more accurate forecasts.
4. Use forecasting models: Forecasting models utilize historical data, key drivers, and research findings to make predictions about the future. There are various methods and models available, such as extrapolation, time series forecasting, regression analysis, or simulation modeling. These models take into account historical patterns and key variables to estimate future outcomes. While forecasting models provide estimates and projections, it’s important to regularly evaluate and refine them based on new data and changing circumstances. This methodology can be applied to nearly any program type – a simple extrapolation sketch follows below.
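To show the simplest end of that spectrum – plain linear extrapolation of a historical trend – here is a minimal sketch using NumPy. The quarterly engagement scores are made up, and the straight-line fit is an assumption you should sanity-check against your own data before trusting the projection.

```python
import numpy as np

# Made-up quarterly engagement scores for the past six quarters.
quarters = np.arange(6)  # 0..5
engagement = np.array([62, 64, 67, 71, 74, 78])

# Fit a straight line (degree-1 polynomial) to the history...
slope, intercept = np.polyfit(quarters, engagement, deg=1)

# ...and extrapolate two quarters ahead.
for q in (6, 7):
    forecast = slope * q + intercept
    print(f"Quarter {q} forecast: {forecast:.1f}")
```

For anything beyond a rough directional read you would reach for proper time series tooling, but even this level of rigor beats guessing.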
By measuring the past and forecasting the future, you can gain valuable insights into the effectiveness of your program and make informed decisions to drive its success. These approaches help you track progress, identify trends, anticipate challenges, and set realistic goals for your program's continued growth and improvement.
Add it all together
Measuring program management is a powerful tool that allows design program managers to bring rigor and accountability to the work. While the intangible nature of design programs may pose challenges, metrics provide valuable signals and performance indicators that help us understand progress, make data-driven decisions, and drive meaningful change.
But what you measure is only half of the equation. In a future post, I will discuss what to do with the data once you have it. Storytelling, presentation, and reporting to drive strategy, operations, and decisions are just as important as tracking and measuring the programs you manage.
For now, let's embrace the power of metrics and measurement. Let's borrow from our collaborators, establish clear objectives and key results, define meaningful key performance indicators, and monitor indicators of program health. By doing so, we can continuously improve our programs, deliver impactful outcomes, and make a real difference in the pursuit of good work. Here’s to measuring what matters!