There’s a pattern I sometimes see at software companies, particularly those targeting enterprises or on the long march of migrating their installed base from on-premise to SaaS. The go-to-market materials present a glowing picture of well-planned products, but underneath there’s a jumble of mismatched pieces and arcane product history and incomplete testing and randomness. I’ve started calling it product sprawl.
What Does Product Sprawl Look Like?
Give yourself one point for each item that sounds familiar:
▢ One “main” version of the flagship SaaS product …but lots of our customers are on earlier versions. Perhaps 12 different on-premise releases at 20 customers who are not ready to migrate. And 8 instances where we’re separately hosting and managing individual VMs with old versions, which we affectionately call “in the cloud.” And a few partners with source code from various branches.
▢ Dozens of integrations and connectors to adjacent products in our space …where a couple are used by most of our customers. The rest have one or two (or no!) active users, but each requires verification/testing every time we check in code. Almost every week, one or another integration partner makes a change to their APIs or interfaces — which we mostly ignore until something breaks or a customer asks for help, since the combinatorics are frightening.
▢ Assorted third-party products that we relabel and resell as our own …but we don’t have deep technical expertise or Level 2/3 support or product management for these items. When issues come up, we pull that vendor into our customer calls.
▢ Customers or partners using our product in entirely new markets …which we don’t know much about. Requirements dribble in as we learn about banking regulations in new countries, EU privacy, FDA audit trails for pharmaceutical development, livestock tracking, and a hundred other things we didn’t anticipate when our sales team boldly landed a deal outside our target segment. (We’re not staffed to address these, but now are obligated.)
▢ Field integration teams checking in fixes or improvements to our development codeline when Engineering can’t get to something soon enough …but without consistent code reviews, test cases, communication with the development team, architectural oversight, or roadmap impact. As field teams are shifted to the next project or customer, no one owns long-term maintenance.
▢ Customers or integration partners who use undocumented behaviors (sometimes even bugs!) that are buried in our code, which they depend on us not “fixing” …which we may not even be aware of. The developers who wrote it left years ago, so we sometimes wade into the code itself to determine how our software behaves, then have to decide whether the documentation or the software is wrong. We worry about refactoring and fixing bugs.
▢ Feature flags that temporarily turn off new capabilities for customers that aren’t ready yet …which are managed by hand and not well documented. These proliferate, and some customers decide they don’t ever want to enable some specific capability, so flags persist for years. This means that customers on the exact same release may have very different experiences, and our 18 feature flags can be configured 262,144 different ways.
▢ Beautiful marketing materials which describe our product set as an “integrated suite” …but which have only a passing resemblance to our collection of unrelated toolkits, acquired products, prototypes, workarounds, and futures. Each piece has its own tech stack, unique user permissions, and inconsistent data items/formats. While we pitch a unified experience, the reality is less delightful.
And so on. Historical accidents stacked on complexity layered over old versions alongside extensions that seemed like a good idea at the time. (Did you score 4 points or more?)
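The feature-flag arithmetic in the checklist above is easy to verify. Here is a minimal sketch — the 18-flag count comes from the text; the function name and everything else is illustrative:

```python
# Each independent boolean flag doubles the number of possible
# configurations a customer could be running on the same release.
def flag_combinations(num_flags: int) -> int:
    return 2 ** num_flags

print(flag_combinations(18))  # 262144 -- the 262,144 configurations above
```

The same arithmetic shows why pruning stale flags pays off so quickly: retiring just 6 of the 18 flags would cut 262,144 possible configurations down to 4,096.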
Why This Is Bad
From an enterprise sales viewpoint, this sounds great. We can do more things for more customers in more markets. Deliver more value and more revenue. Our goal is to say ‘yes’ to all reasonable requests. We’re customer-centric, even customer-driven! Smiles all around.
And in small doses, that’s true. One exception or supported back-rev or occasionally-used connector or field-developed tool isn’t a big problem by itself. But this is about accumulation. The area under the curve. Death by a thousand cuts. That one extra dessert each night for a decade. When we add it all up, product sprawl becomes a major culprit keeping us from building and supporting the next strategic things we need.
It’s as if we ran a limousine or taxi service with a fleet of identically painted cars… but each is from a different manufacturer and year, with a distinct engine (gasoline or diesel or hybrid or all-electric), assorted metric and imperial tools, a mix of stick shift and automatic and autopilot, 16 kinds of tires, and a few left-hand-drive models. The older models inevitably demand a lot more maintenance. We might need several different mechanics, many stacks of repair manuals, and a garage full of specialty parts to keep us rolling.
- Engineering is probably putting most of their time into maintaining old stuff. When we look carefully, we are shocked to discover that development teams are patching 8 obsolete codelines, working dozens of Level 3 escalations, reverse-engineering some old-but-very-clever data synchronization, and fixing random stuff that we forgot about. Big new projects or products make little progress.
- There’s no longer any single person who knows how our whole application works. We find ourselves looking through the code to determine what’s happening, then struggling to decide what was originally intended.
- Likewise, no one person knows what all of our customers have deployed. Individual field teams know their individual installations, but there’s no consolidated tracking, and we lose history as account teams churn. It’s quite hard to answer questions like “who are all of the customers using Version X of Feature Y to do Thing Z?”
- Testability is a challenge. With so many options and combinations and versions and variations — plus incomplete knowledge — we can’t really test whether a fix will break something. We keep being surprised when patches fail or refactoring upsets someone.
- Updates are a nightmare. Customers have figured out that they are our ultimate testbed: we ship a new version, hope for the best, and see if any paying users complain. We’ve created “upgrade hesitancy.”
- Major replatforming projects keep running aground. Since no one truly knows how the current system operates and much of the problem is hidden, beautiful architectural designs crash into ugly reality. We ban the project names “NextGen” and “Suite 2.0” and “Project Big Bang.”
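The testability problem above is fundamentally multiplicative. A back-of-the-envelope sketch, with hypothetical counts (borrowing the 12 releases from the checklist) — the exact numbers don’t matter, only the multiplication:

```python
# Hypothetical counts for illustration; only the multiplication matters.
supported_releases = 12   # on-premise versions still in the field
connectors = 30           # integrations to verify on each change
flag_samples = 4          # even a tiny sample of feature-flag states

test_cells = supported_releases * connectors * flag_samples
print(test_cells)  # 1440 configurations to cover for a single change
```

At 1,440 configurations per change, no realistic QA team keeps up, which is exactly why "we can't really test whether a fix will break something."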
So some hard facts: In software, there’s no one-and-done: all software eventually breaks. We have an obligation to our customers to maintain all of the products we sold. Rigorous testing is a fundamental part of development, not an option. Customers expect updates that work. One-offs and specials are habit-forming. Even if a customer funds some piece of work, that necessarily diverts Engineering from planned and committed improvements, and must be supported for years. Every variation and quick fix and single-customer item and back-rev installation and unarchitected extension and manual-only test makes the overall job harder.
How Did We Get Here?
Software companies have a natural tension between short-term and long-term. Enterprise software companies have a natural tension between “what one big customer wants” and “what the larger market segment wants.” So product sprawl is the result of years of seemingly small decisions that each take us in the same direction: one more thing that’s not quite on strategy, doesn’t exactly fit, falls short of general usefulness… but helps one customer solve one semi-unique problem.
It’s a bit like climate change: once the glaciers have melted — or we realize we have an untestable sprawl of products and features and architectures and add-ons — the fixes are much more expensive than averting the problem earlier.
So the most likely root cause is an executive-level decision model where Sales almost always wins the short-term argument, and Product/Engineering don’t have a strong voice. Where we reflexively approve requests from our largest customers without much introspection or investigation. Where we don’t see (or don’t want to see) the cumulative impact of delaying our strategic improvements a month at a time. Where we tell ourselves that X will have broad market adoption, but don’t inspect whether this really happens.
[For clarity, there are some single-customer demands that we should absolutely accept: when our largest account will cancel over a small enhancement, or early adopters are asking for a capability that is emerging for our whole segment. But when we argue this only at the account level, we ignore the metastasis.]
It’s not that Product/Engineering should always get their way, but that they should be in the room where it happens. And that the executive team should argue about aggregate strategic impact as well as individual named accounts.
This is often tied to a professional services organizational model: while we think of ourselves as a software product company, we may actually look much more like a consulting or custom development shop. A large portion of employees are on customer-specific implementation projects; we earn 20%+ of revenue from human-delivered services; we track consulting margins and utilization at the executive level; new customer acquisition is bottlenecked on service delivery; project names tend to include a specific customer’s name (“Citibank LDAP connector”). If we organize like a consulting company and make decisions like a consulting company, we become a consulting company.
An acquisition binge can have similar symptoms. We might acquire and roll up 6 or 8 or 10 small competitors in our space, and hope (against all previous experience) that merging them into a coherent product suite will be quick and easy. In reality, we have a heap of disconnected bits, unique back ends, surprising cloud architectures, unrelated workflows, misaligned user permissions, and divergent front-end choices… but we’ve already promised our Board that we’ll immediately cross-sell these products like crazy to existing customers.
Last, a possible contributor may be lack of repeatable product experience on the technical side. If we pull together a team of industry experts (SMEs) with little time-in-grade at software product companies, they bring along internal IT assumptions/processes that are entirely inappropriate for building commercial products. IT at airlines (or banks or insurance companies or government agencies) manages software development as a cost center — focused on delivery dates and exactly meeting specs written by non-technical internal stakeholders, with constant pressure to reduce costs — but without meaningful metrics tied to company outcomes. An IT-style development/product team that hasn’t served a market with many customers brings a feature-factory approach that invites sprawl.
What Can We Do?
I see this as much more about how an executive team operates, and less about algorithms for deciding which story goes into the next sprint. Said another way, process changes don’t solve people problems or organizational issues, especially if our executive team undercuts our stated processes. Here are a few suggestions:
- Make the sprawl pattern very obvious. Turns out that what’s obvious to the development/product side of the house is often invisible to the go-to-market side of the house. Pull together a (long) list of what we’ve built and claim to support. Show (not tell) how that’s consuming much of the organization. Show (not tell) how many tickets and fixes we’re doing for no-longer-supported codelines or products. Show (not tell) why we’re unable to fully test thousands of product combinations. Show (not tell) recent instances where commitments were made without formal review by product/engineering. At all times, have a list handy with dozens of specific roadmap items that we postponed for late-arriving single-customer promises (including the expected revenue from those cancelled items). Assume that you’ll need to go through this several times, as the concept can be quite foreign to sales organizations.
- Recognize this as an executive-level behavior pattern. Saying ‘yes’ without weighing the product-side costs is how many companies operate, especially single-account-driven enterprise companies, so prioritization algorithms or 12-column spreadsheets don’t address the root issue: we have to change how we review and approve major deals. Bring a truckload of empathy to the discussion, because “specials” may be how our sales team has been hitting our revenue targets for a long time. Assume we have misaligned goals or reward systems rather than that Sales & Marketing need lecturing. Emphasize the problem before offering solutions.
- Add product representation to the Deal Desk, which usually has early visibility into emerging non-standard proposals. Can we get Product involved early enough to identify alternatives or reasonable workarounds, long before this becomes a crisis?
- Share the selection problem with Sales by making it visible and slightly painful. If your CRO/VP Sales gets to choose (only) two roadmap interrupts per quarter of limited size (e.g. less than 1 engineer-week each), we now have someone helping drive decisions who is rewarded on the company’s total revenue — instead of a single deal. The CRO has lots of incentive to greenlight the two items that will bring in the most money. Plan some very public scorekeeping, as CROs often “forget” that they gave away their two draft picks earlier in the quarter.
- Start an aggressive end-of-life program. Create a master list of old product versions and which customers have them. Then announce unambiguous end-of-support dates for the oldest bits. Give customers reasonable time (a full year in most enterprise settings) but without any extensions or exceptions or dispensations or escape clauses. Unsupported really means unsupported. (This requires an iron-clad commitment from the C-Suite that they will support Product Management when big customers call with complaints. Otherwise, this is a waste of energy.) Pick a dozen products or capabilities that we suspect are rarely used, and chase down which customers have those in production. Map out reasonable migration or upgrade plans for them, probably including technical assistance. Start to build the organization’s end-of-life muscle by dropping a few easy bits first. Rinse and repeat.
- Turn the phrase “everyone wants that” into a company laugh line: whenever we hear someone working a major account say “HugeCorp absolutely needs this,” jump in with “…and you’re about to tell us that everyone in our market wants that.” Said humorously enough times, it becomes the equivalent of “let me put that into the backlog” and loses its hypnotic power.
BTW, my super-secret data suggests that fulfilling individual big-customer requests as sequenced by those big customers rarely converges on a reusable product. I estimate that half of the “everyone will want this because GalacticLLC is a market leader” items are deployed only once. Or never.
- Consider changes in how we comp our sales teams — since that’s the best way to change actual behaviors. Can we require three customers to commit to paying for an enhancement before we build it? Do we reduce commissions on deals with “specials” that skipped engineering/product review? Maybe delay commissions on deals until all of the promised software items arrive? Suddenly, everyone is paying attention.
This isn’t about who’s right and who’s wrong. It’s about extracting the most value (and revenue and growth and customer joy and investor multiples) from a development team that will never be big enough to build everything we want. And making better collective decisions that include the long-term health of the company. So we (product and engineering folks) must approach this respectfully — making the patterns obvious (again and again) at the executive level, tying serious business problems to uncontrolled product sprawl.
Especially in the enterprise space, addressing single-customer requests in revenue order doesn’t lead to coherent or maintainable products. Instead, we get product sprawl: hundreds of vaguely related bits and features that become increasingly harder to explain and put to use. We need an underlying plan, architecture, and evolutionary path that brings these together into winning products.