Having clear goals is critical to getting the most value out of a development team. Every day, teams make decisions about how to approach a problem and what, precisely, is in scope for a release. Preventing the team from implementing features that don’t contribute to the overall goal can add up to significant savings over time. In extreme cases, the lack of a clear goal can contribute to working on a project that provides little to no user value.
Sometimes, organizations lack a clear goal because they jump straight to WHAT they are building, skipping the WHY. Other times the problem and the WHY are understood by leaders in the organization but are then turned into solutions which are handed off to development teams without the important accompanying context.
This is often discussed as “outcomes over outputs.” The goal should never be to build something. Building something is a means to an end. The goal is to change user behavior and provide value.
Here are two examples highlighting the importance of setting clear goals, loosely based on my past experience:
Companies like Rent the Runway have popularized the idea of renting clothing. Let’s envision a startup, called Style Shipped, trying to get into this space. They differentiate themselves by charging per item, not per month. A required one-time signup fee will cover the cost of onboarding new users, but after that, users will only pay for what they use. They plan to develop a pilot, built around an iPhone app, that will primarily target Chicago, where they are located.
The positive: Style Shipped recognizes that to launch a pilot as quickly as possible, much of the functionality that could eventually be automated will be handled manually for the initial release. For example, the process of shipping and receiving rented items will happen manually. Use cases that deviate from the “happy path” scenario, such as a late return, will also be resolved manually.
The negative: the company doesn’t have a clear understanding of what they hope to get out of the pilot. Effectively, the goal is to “build an app.”
Here is the backlog that was identified to do that:
Not too bad! That list generally makes sense as a feature-light pilot to be used as a starting point for future iterations. However, having a clear goal would save some time and money. Let’s look at how.
If this were the goal, how would the backlog look different?
The likely negative effect of not clarifying this goal would be extra time and money spent on implementing features that don’t validate the hypothesis.
A potentially worse scenario would involve extra time and money spent implementing features without getting any answers to the most important questions. If the main question is whether users will pay the signup fee, and most users of the pilot get a promotion code to sign up, that question could go unanswered.
Defining what success looks like is crucial to making strategic decisions. An example for this scenario could be: x number of users in y weeks who have paid the signup fee and rented an item. Or z% of people who download the app pay the signup fee and rent an item.
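To make that second metric concrete, here is a minimal sketch of how such a funnel conversion rate might be computed. The event counts below are hypothetical numbers chosen for illustration, not figures from this scenario.

```python
# Illustrative funnel: what share of downloaders paid the signup fee
# and then rented at least one item. All counts are hypothetical.
downloads = 1000
paid_signup = 180
paid_and_rented = 120

signup_rate = paid_signup / downloads       # share who paid the signup fee
success_rate = paid_and_rented / downloads  # the "z%" success metric

print(f"Signup conversion: {signup_rate:.0%}")
print(f"Signup + rental conversion: {success_rate:.0%}")
```

With these sample numbers, 12% of downloaders would count toward the success metric; the team would compare that against the z% threshold chosen up front.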
When determining that this was a viable business model, it is likely that certain assumptions were made. Clothes have a life expectancy due to style and durability. A $200 item that rents for $20 may have to be rented 20 times to pay for the item itself plus the labor costs involved with shipping and cleaning. If style preferences dictate that items are phased out after 2 years, then each item will have to be rented roughly one week out of every 5 to break even, and even more frequently to create a profit margin. If this were the goal, how would the backlog look different?
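The break-even arithmetic above can be sketched directly. The $10-per-rental labor cost is my assumption, chosen so the stated 20-rental figure works out; the other numbers come from the example.

```python
# Break-even rental frequency for a single clothing item.
# labor_per_rental is an assumed value chosen so that the stated
# "rented 20 times" figure holds; other numbers are from the text.
item_cost = 200          # purchase price of the item
rental_fee = 20          # revenue per rental
labor_per_rental = 10    # assumed shipping + cleaning cost per rental
lifetime_weeks = 104     # ~2 years before the item is phased out

net_per_rental = rental_fee - labor_per_rental      # $10 net margin
rentals_to_break_even = item_cost / net_per_rental  # 20 rentals

# With one-week rental periods, how often must the item be out?
weeks_between_rentals = lifetime_weeks / rentals_to_break_even

print(f"Break-even rentals: {rentals_to_break_even:.0f}")
print(f"Roughly one rental every {weeks_between_rentals:.1f} weeks")
```

That works out to one rental every 5.2 weeks, matching the “one week out of every 5” estimate; any profit margin requires renting more often than that.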
Again, the likely negative effect of not clarifying this goal would be extra time and money spent on implementing features that don’t validate the hypothesis. In this case, it is likely to also result in an incomplete answer to the question.
A potentially worse scenario would involve extra time and money spent implementing features that don’t provide value to users or answer any of the most important questions.
An example of defining success metrics in this scenario could be: on average, clothing items are rented x% of the time.
This example is a company that sells software to mortgage brokers. Let’s call them Mortgage Forge. This company’s main differentiator is their centralization of rate data, which comes from relationships established over the years with various lenders. They offer data that is more accurate than their competitors, who rely on publicly available information. This allows Mortgage Forge to guarantee rates to a very narrow percentage, while their competitors’ rates are not guaranteed.
However, this data is less valuable if everyone has it. Their customers need to distinguish themselves from each other and believe that custom presentation of this data is their competitive advantage. Consequently, Mortgage Forge relies on offering fully customizable PowerPoint outputs.
The problem: these customized presentations are created manually. Each customer may have 5 to 10 different presentations, each of which has dozens of pieces of data that can be combined in multiple ways. Today, they have a developer manually manipulating the XML presentation definitions, which can take 2+ days per customer. When Mortgage Forge started out, this was a small percentage of that developer’s time. As they grew from a few dozen customers to a few hundred, this task has grown into more than a full-time job. Due to this backlog, the time to implement a new customer has stretched from one week to three weeks.
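A rough capacity check shows how this becomes more than a full-time job. The 2 days per new customer comes from the example; the onboarding rate and revision load are hypothetical assumptions for illustration.

```python
# Rough capacity check on the manual presentation work.
# days_per_new_customer is from the text; the onboarding rate and
# revision workload are hypothetical assumptions for illustration.
days_per_new_customer = 2.0
new_customers_per_week = 2     # assumed onboarding rate at "a few hundred" customers
revision_days_per_week = 1.5   # assumed tweaks for existing customers

weekly_load = (days_per_new_customer * new_customers_per_week
               + revision_days_per_week)

print(f"Weekly load: {weekly_load} developer-days (vs. 5 available)")
```

Under these assumptions the work alone exceeds a five-day week, before accounting for anything else on the developer’s plate, and new customers queue up behind it.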
The positive: Mortgage Forge has fully vetted this problem and its potential solutions. They had real discussions about increasing staff, reducing customization offerings, and various degrees of automation to support this process. In the end, they opted to create a fully automated drag-and-drop solution for internal users to create templates and for end users to customize most of what is customized manually today. They identified a clear problem statement: it takes too long to onboard new customers. They had a goal: reduce that time. They set a success metric: reduce average implementation time to one calendar week.
The negative: none of this context made it to the development team tackling the project! All that was communicated to them was the WHAT: create a fully automated drag-and-drop solution for internal users to create templates and for end users to customize most of what is customized manually today.
All team discussions were in the context of a goal to build this solution, not in the context of a goal to reduce implementation time. They came up with a phased build strategy to reduce risk: first, build the tool for internal users to create templates, then create the end user tool for customizing presentations, then hook it all up to the real data sources. This allowed them to first address the biggest risks around UI technology, especially around accurately previewing the presentations in a browser.
As expected, the team ran into many questions along the way. Exactly which data fields need to be supported? Is drag-and-drop necessary? Do fonts, colors, and branding need to be supported? Instead of answering these questions with “Is that feature critical to reducing implementation time?”, they answered them with “Is that useful to the user?” Being focused on the end user is not a bad thing, but focusing on a user without a specific goal in mind leads to scope creep and guessing at the value of features, rather than making data-driven decisions.
After a few months of working on the project, the implementation backlog had grown from 3 weeks to 5 weeks. The team had finished the internal user template creation and was now starting on the end user piece, but none of it was hooked up to real data sources yet – that was Phase 3. Leadership had to step in, pause the project, and put more developers on the manual presentation customizations in order to get the backlog under control. The team was directed to pivot immediately to integrating whatever automated customization was done with real data, to expedite the manual process while the rest was being automated.
How might this have been different if the team had been given a different directive? What if instead of “create a fully automated drag-and-drop solution,” they had been told to “iteratively reduce the implementation time for custom presentations until the average time is one week?” On one hand, the team should have had the discipline to reduce release size and iterate more regardless of what direction they were given, but this was symptomatic of other organizational problems. On the other hand, that discipline is FAR easier to maintain if every day-to-day decision can be put in the context of a clear goal.
For example, picture the following exchange:

Team: “Do we need to support customizing customer fonts, colors, and logos?”

Product Owner: “Is this critical to reducing implementation time?”

Team member 1: “Well, it is something we offer today in our report customizations.”

Team member 2: “Yeah, but this doesn’t require XML manipulation; it can be done in PowerPoint after the report is created.”

Product Owner: “So having our implementation staff make the change in PowerPoint rather than our tool doesn’t require any technical know-how or much more time?”

Team: “Right.”

Product Owner: “Then this is not critical for our first release.”
This is a hopefully realistic example of how a clear goal for a release can impact day-to-day decisions.
Most of the time, not having a clear goal causes a development team to be less efficient. Sometimes, lacking a goal can contribute to a release that doesn’t provide any user value or validate any hypotheses. Having a clear goal is not the only component needed to solve this problem: it is also important to have a culture of solving problems iteratively and making decisions based on data instead of guesswork. Even so, a clear goal can improve the decisions a team makes on a daily basis and maximize the value provided.