
After years of hyper-growth and several acquisitions, Sunrun reached an inflection point where its engineering team was struggling to ship. Leadership sensed the problem was technical debt — but “we have technical debt” isn’t a management decision. Without a quantified view of how time was actually being spent across a large engineering organization, there was no basis for conviction, no way to weigh tradeoffs, and no clear case for what needed to change.
Parable built a semantic contextual knowledge graph for Sunrun's engineering organization, ingesting data from M365, Salesforce, Jira, Git, and other enterprise tools into a dedicated virtual private cloud. The graph classified every unit of work, connected it to its business purpose, and produced a dollar-quantified breakdown of how engineering capacity was allocated between technical debt and the product roadmap.
Parable’s approach began with the fundamental problem: leadership had a hypothesis but no data. The goal of the engagement was to generate a quantified, evidence-based view of how the engineering organization was actually spending its time — not how it reported spending its time.
Data was ingested from Sunrun's core systems, including M365, Salesforce, Jira, and Git, into a dedicated virtual private cloud. Parable's proprietary context graph processed this data not as raw records but as semantic units: classifying each meeting, task, and commit by its actual business purpose, and connecting units of work to each other and to the strategic initiatives they served.
The output was a dollar-quantified breakdown of engineering capacity allocation: specifically, how much was going to technical debt maintenance versus product roadmap execution. For the first time, Sunrun had a precise, defensible basis for management decisions about its engineering structure.
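The shape of that classification-and-aggregation step can be sketched in miniature. Everything below is a hypothetical illustration: the `WorkUnit` structure, the keyword heuristic (a crude stand-in for Parable's contextual semantic classification), and the loaded hourly rate are all assumptions, not Parable's actual method or data.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical stand-in for semantic classification; the real system
# classifies by context, not keywords.
DEBT_KEYWORDS = {"refactor", "migration", "legacy", "hotfix", "cleanup"}

@dataclass
class WorkUnit:
    source: str        # e.g. "jira", "git", "m365"
    description: str
    hours: float

def classify(unit: WorkUnit) -> str:
    """Label a unit of work by business purpose (toy heuristic)."""
    text = unit.description.lower()
    return "technical_debt" if any(k in text for k in DEBT_KEYWORDS) else "roadmap"

def dollar_breakdown(units, loaded_hourly_rate=120.0):
    """Aggregate classified hours into a dollar-quantified allocation."""
    totals = defaultdict(float)
    for u in units:
        totals[classify(u)] += u.hours * loaded_hourly_rate
    return dict(totals)

units = [
    WorkUnit("jira", "Refactor legacy billing service", 40),
    WorkUnit("git", "Implement solar-quote checkout flow", 60),
    WorkUnit("m365", "Planning: database migration cleanup", 8),
]
print(dollar_breakdown(units))  # {'technical_debt': 5760.0, 'roadmap': 7200.0}
```

The essential idea is the two-stage pipeline: every unit of work from every source gets a business-purpose label, and dollars follow the labels, turning "we think we have technical debt" into a number leadership can act on.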
With that quantification in hand, Parable measured the impact of each subsequent initiative — organizational redesign, AI transformation investments — against the original baseline, establishing a feedback loop between investment decisions and observed outcomes.
Enterprises and late-stage companies (500 to 10,000+ employees) where AI transformation is a stated priority but leaders can't articulate how work actually happens today. It is especially relevant for companies emerging from hyper-growth that sense they have operational or technical debt but lack the data to act on it with conviction.





