R&D Operating Model

A comprehensive framework for engineering teams building enterprise software.

Table of Contents

  1. Well-Formed Teams
  2. Quarterly Planning Cadence
  3. Pre-Planning
  4. Quarterly Planning (QP)
  5. Day-to-Day Execution
  6. JIRA Structure & Tooling
  7. Governance & Meeting Cadence (During Execution)
  8. Bridge Sprint
  9. Metrics & KPIs
    1. Delivery Metrics
    2. DORA Metrics (Engineering Performance)
    3. Quality Metrics
    4. Engineering Intelligence Metrics (Jellyfish)
    5. Team Health Dashboard
  10. Scaling the Model
    1. Scaling Philosophy
    2. Scaling Tiers Overview
    3. Small Scale (2–4 Teams)
    4. Medium Scale (5–10 Teams)
    5. Large Scale (11–20+ Teams)
    6. Scaling Decision Matrix
    7. Scaling Signals
    8. When This Operating Model Doesn't Fit
  11. AI-Accelerated Engineering
    1. AI-Assisted Development
    2. AI Across the SDLC
    3. AI Governance & Standards
    4. Measuring AI Impact
    5. AI Adoption Roadmap
    6. AI Tool Landscape
    7. AI Agents and the System of Record
  12. Onboarding Guide for New Teams
    1. Who This Guide Is For
    2. Prerequisites — Before Day One
    3. Week 1 — Team Foundations
    4. First Sprint — Establishing Rhythm
    5. First Quarter — Full Cycle
    6. Graduation Criteria
    7. Common Pitfalls
  13. Roles & Responsibilities
    1. Role Overview
    2. Core Team Roles
    3. Leadership Roles
    4. Partner Roles
    5. Roles by Scale
    6. Where to Find More
  14. Running Your First Quarterly Planning
    1. Who This Guide Is For
    2. First-Time QP vs. Steady-State QP
    3. Readiness Checklist — Eight Weeks Out
    4. Logistics and Communications Plan
    5. Solving First-Time Data Gaps
    6. First-Time QP Agenda
    7. Facilitator Runbook
    8. Post-QP Close-Out Checklist
    9. Primer for First-Time Attendees
  15. Customer Ideas Pipeline
    1. Who This Section Is For
    2. Idea Sources
    3. Idea Lifecycle in JPD
    4. Evaluation Criteria
    5. Review Cadence
    6. Idea to Initiative to Epic — Promotion Process
    7. Connecting to the Customer Voice Investment Category
    8. JPD Fields for Ideas
    9. Metrics and Health Signals
  16. Tools Summary

1. Well-Formed Teams

The foundation of the model is the well-formed team — the atomic unit of delivery.

The composition below reflects an AI-augmented team (2026+): fewer people, higher individual leverage, AI tooling treated as a default expectation across the SDLC rather than an optional add-on. Orgs migrating from pre-AI team sizing should not shrink abruptly — see the migration note below.

Core Team Composition (AI-era)

| Role | Count | AI-era rationale |
|---|---|---|
| Product Manager | 1 | AI-assisted research synthesis, spec generation, stakeholder comms. Output per PM rises ~1.3×; the role does not shrink. |
| Technical Team Lead | 1 | Expanded scope — architecture, AI governance, prompt library ownership, code review standards for AI-assisted code. Less day-to-day coding than the pre-AI TL role. |
| Software Engineers (senior-weighted) | 3–4 | AI-augmented by default. Expect 1.5–2× individual leverage on routine work (boilerplate, test authoring, refactoring). Senior profile recommended — a team of mostly juniors does not realize the leverage. |
| Quality Engineer (Automation preferred) | 1 | QE practice is partially embedded in each SWE's workflow (AI-generated tests, in-PR coverage checks). The dedicated QE focuses on test strategy, coverage gaps, and automation infrastructure — not hand-authoring test cases. |
| UX Designer | Shared (1 designer per 2 teams) | AI design tooling (Figma AI, v0.dev, prompt-to-spec) lifts throughput. A dedicated 1:1 designer is a luxury most teams don't need; shared across 2 teams is the new default. |
| DevOps / Platform Engineer | Shared (1 per 2 teams, or centralized platform team) | IaC with AI assistance, AI-authored runbooks, and AI ops tooling reduce per-team DevOps load. Platform-team patterns (per Team Topologies) scale better than 1:1 DevOps. |

AI-era baseline headcount: 8 FTE-equivalent per team (1 PM + 1 TL + 4 SWE + 1 QE + 0.5 UXD + 0.5 DevOps).

Migration from Pre-AI Team Sizing

Orgs coming from an older operating model that ran ~11 per team (5 SWE, 2 QE, 1 dedicated UXD, 1 dedicated DevOps) should not reduce headcount abruptly. Run the older composition while AI practices are being adopted; reduce over 2–3 quarters as these gates are met:

Do not downsize ahead of AI practice maturity. A team with 4 SWE and no functional AI workflow is worse off than 5 SWE with traditional process. The headcount reduction is a consequence of AI leverage, not a forcing function to create it.

Team Size — Core vs Default vs Adapted

Per §10.8 Core/Default/Optional tiering:

Partnership Roles (Shared Across Multiple Teams)

These roles support one or more engineering teams as shared partners:

Team Principles

Team Groupings

Teams are grouped by product area. For example, 4 teams may all contribute to an Inventory product within the larger software platform. These groupings share senior leadership and coordinate through shared planning and governance rituals.

See Section 10: Scaling the Model for how team structure and governance adapt from 2 teams to 20+.

See Section 10.8: When This Operating Model Doesn't Fit for contexts where team size and structure adapt (regulated industries, continuous deployment, research orgs, agencies).

See Section 11: AI-Accelerated Engineering for AI tooling adoption roadmap and governance.

See Section 12: Onboarding Guide for a step-by-step onboarding guide for new teams.


2. Quarterly Planning Cadence

The model operates on a quarterly cycle of 6 sprints, with planning built to stay one quarter ahead.

Overview

|--- Pre-Planning (3 sprints before) ---|--- Quarterly Planning (2 days) ---|--- Sprint 1-6 Execution ---|--- Bridge Sprint ---|

3. Pre-Planning

Starts: At least 3 sprints before Quarterly Planning Day 1.

Goal: Ensure the roadmap is shaped, data is gathered, and teams are ready to plan confidently.

Roadmap Inputs

The roadmap should be shaped during pre-planning, with investment spread across four categories:

| Investment Category | Description | Benchmark % |
|---|---|---|
| New Product | New features and capabilities | 30-40% |
| Architecture / Tech Debt | Platform improvements, scalability, modernization | 15-20% |
| KTLO (Keep The Lights On) | Maintenance, operational stability, minor fixes | 20-30% |
| Customer Voice | Customer-reported defects, enhancement requests | 10-20% |

Note: Benchmark percentages are starting points. The actual allocation should be tuned based on the organization's maturity, growth goals, and current operational health.

Data Points for Pre-Planning

Capacity Planning Process

Capacity planning determines how much work a team can realistically commit to in a quarter. It should be completed before Quarterly Planning so that teams plan against real numbers, not assumptions.

AI-era framing: The 6-step process below is unchanged in structure — AI does not change the math of capacity planning. What it changes is the velocity input (Step 3) and introduces a new dimension: the team's position on the AI adoption curve (Step 7). An AI-augmented team produces more per engineer at a steady state, but the transition to that steady state is not free. Plan against empirical velocity, not projected AI gains.

Step 1: Determine Available Days

For each team member, calculate available working days per sprint and per quarter.

Available Days = Total Working Days − PTO − Holidays − Training − Onboarding Ramp
| Input | Description |
|---|---|
| Total Working Days | Business days in the sprint (typically 10 for a 2-week sprint) |
| PTO | Planned vacation, sick leave buffer |
| Holidays | Company holidays falling within the sprint |
| Training | Scheduled training, conferences, certifications |
| Onboarding Ramp | New hires operate at ~50% capacity for their first 1-2 sprints |
| AI Tooling Ramp | When adopting new AI tools (new IDE, new prompt framework, new review tool), treat as a one-time 10-15% capacity hit for the first sprint of rollout — prompt library authoring and tool familiarity take time |

Step 2: Calculate Team Capacity in Hours

Not all available hours are productive sprint hours. Apply a focus factor to account for meetings, support rotations, context switching, and overhead.

Sprint Capacity (hours) = Available Days × Hours Per Day × Focus Factor

Recommended Focus Factor: 0.70 - 0.80 (i.e., 70-80% of time is productive sprint work)

Example — 1 engineer, 2-week sprint:

10 available days × 8 hours × 0.75 focus factor = 60 productive hours

AI-era note: AI tooling reduces some overhead (faster PR reviews, AI-drafted docs, auto-generated boilerplate) and adds some (prompt engineering, reviewing AI output for correctness/security/provenance). Net effect on Focus Factor is typically ±0.05 — not enough to change this step's math. AI's real impact lands in velocity (Step 3), not in productive hours per day.
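A minimal sketch of the Step 1–2 arithmetic, reproducing the example above (inputs are per person; the function names are illustrative, not part of any template):

```python
# Steps 1-2: available days -> productive sprint hours for one engineer.
def available_days(total_working_days: float, pto: float = 0, holidays: float = 0,
                   training: float = 0, onboarding_ramp: float = 0) -> float:
    """Step 1: Available Days = Total Working Days - PTO - Holidays - Training - Onboarding Ramp."""
    return total_working_days - pto - holidays - training - onboarding_ramp

def sprint_capacity_hours(days: float, hours_per_day: float = 8.0,
                          focus_factor: float = 0.75) -> float:
    """Step 2: Sprint Capacity (hours) = Available Days x Hours Per Day x Focus Factor."""
    return days * hours_per_day * focus_factor

print(sprint_capacity_hours(available_days(10)))  # 10 x 8 x 0.75 = 60.0 productive hours
```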

Step 3: Calculate Team Capacity in Story Points

Use the team's historical velocity (average of last 3-6 sprints) as the baseline.

Adjusted Velocity = Average Velocity × (Actual Capacity ÷ Full Capacity)

Example — Team averages 42 points/sprint at AI-era full strength (8 FTE-equivalent team — see §1 Well-Formed Teams), but has 1 member on PTO:

Adjusted Velocity = 42 × (7 members ÷ 8 members) ≈ 37 points
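The same adjustment as a sketch in code, reproducing the example:

```python
# Step 3: scale historical velocity by the fraction of the team actually available.
def adjusted_velocity(avg_velocity: float, actual_capacity: float, full_capacity: float) -> float:
    """Adjusted Velocity = Average Velocity x (Actual Capacity / Full Capacity)."""
    return avg_velocity * (actual_capacity / full_capacity)

print(round(adjusted_velocity(42, actual_capacity=7, full_capacity=8)))  # ~37 points
```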

AI-era velocity caveat — historical velocity is a moving target during AI adoption.

As tooling matures and prompt libraries stabilize, a team's velocity will drift upward — typically 20–40% over 2–3 quarters of serious AI practice. This breaks two assumptions baked into the traditional formula:

Step 4: Allocate Capacity by Investment Category

Apply the investment allocation percentages to the adjusted velocity.

Example — Team with 37 adjusted points/sprint:

| Category | % | Points/Sprint |
|---|---|---|
| New Product | 35% | 12.9 |
| Architecture | 20% | 7.4 |
| KTLO | 25% | 9.3 |
| Customer Voice | 20% | 7.4 |
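The allocation step as a sketch; the percentages below are the example team's split, not the benchmark ranges from Pre-Planning:

```python
# Step 4: split adjusted velocity across investment categories.
ALLOCATION = {"New Product": 0.35, "Architecture": 0.20, "KTLO": 0.25, "Customer Voice": 0.20}

def allocate(points_per_sprint: float) -> dict[str, float]:
    return {category: points_per_sprint * pct for category, pct in ALLOCATION.items()}

# For 37 points/sprint: 12.95 / 7.4 / 9.25 / 7.4 — the table above hand-rounds
# these so the column sums to exactly 37.
print(allocate(37))
```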

Step 5: Apply Sprint Capacity Recommendations

Not every sprint should be planned at full capacity. Teams must account for two realities:

  1. Sprint-specific adjustments — ramp-up at quarter start, wind-down at quarter end, and pre-planning diversion.
  2. Unplanned work reserve — every sprint has "noise" from customer-reported defects, production incidents, support escalations, and urgent security patches. This is not optional — it is a constant.

Unplanned Work Reserve

Reserve 15-20% of each sprint's capacity for unplanned work. The exact percentage depends on the team's historical incoming rate:

| Team Profile | Recommended Reserve | Signals |
|---|---|---|
| Low noise | 15% | Stable product, few customer defects, rare incidents |
| Moderate noise | 20% | Active product with regular defect inflow and occasional incidents |
| High noise | 25-30% | Legacy product, high defect backlog, frequent incidents or security vulnerabilities |

Tip: Use the team's Incoming Rate data (customer defects, vulnerabilities, incidents per sprint) from pre-planning to calibrate the right reserve. If actuals consistently exceed the reserve, increase it next quarter.

AI-era tip: During the first 2 quarters of AI adoption, temporarily add 5% to the reserve. AI-generated code can introduce new failure modes — hallucinated APIs, unexpected security patterns, dependency confusion, prompt-injection risks — that surface as unplanned work. Once AI governance per §11.3 is mature, code review standards for AI-assisted work are enforced, and escaped defect rates have stabilized, this temporary buffer can be removed. Track this separately in retros so the effect is visible.

Sprint Capacity Recommendations

Apply the following recommended capacity percentages per sprint. These include both the sprint-specific adjustment and the unplanned work reserve.

| Sprint | Planned Work | Unplanned Reserve | Total Utilized | Focus | Rationale |
|---|---|---|---|---|---|
| Sprint 1 | 60% | 20% | 80% | Feature Development | Quarter ramp-up — transitioning from QP, onboarding new work, early blockers. Higher noise as recent releases surface defects. |
| Sprint 2 | 70% | 20% | 90% | Feature Development | Teams in rhythm, resolving Sprint 1 dependencies. Noise normalizing. |
| Sprint 3 | 75% | 15% | 90% | Feature Development | Full velocity — fully ramped, peak execution. Noise stabilized. |
| Sprint 4 | 75% | 15% | 90% | Feature Development | Full velocity — mid-quarter, steady-state. |
| Sprint 5 | 70% | 20% | 90% | Feature Complete | Target sprint for feature completion and deployment to customers. All new features should be code complete, tested, and shipped by end of Sprint 5. Pre-planning for next quarter begins. |
| Sprint 6 | 60% | 20% | 80% | Enablement | No new feature work. Sprint 6 is dedicated to enablement, hardening, and quarter wrap-up (see below). |

Sprint 5: Feature Complete Milestone

Sprint 5 is the feature freeze deadline. By the end of Sprint 5:

Sprint 6: Enablement Sprint

Sprint 6 is not for building new features — it is for ensuring what was built is fully enabled and supported. Sprint 6 activities include:

| Activity | Description |
|---|---|
| User Learning / Documentation | Help docs, release notes, user guides, in-app guidance |
| Support Enablement | Train support teams on new features, update runbooks and troubleshooting guides |
| Product Marketing | Launch communications, feature announcements, customer-facing collateral |
| Customer Success Enablement | Prepare CSMs with feature walkthroughs, talking points, and adoption playbooks |
| Release Hardening | Fix post-deployment defects, monitor production stability, address edge cases |
| Quarterly Demo Prep | Record demo videos, prepare presentation materials |
| Pre-Planning | Continue pre-planning activities for the next quarter |

Key insight: The Planned Work column is what teams should commit to in Sprint Planning. The Unplanned Reserve is not empty time — it will fill with customer defects, incidents, and urgent requests. If unplanned work doesn't materialize, teams can pull additional items from the backlog.

Example — AI-era team with an adjusted velocity of 42 points/sprint (8 FTE baseline from §1):

| Sprint | Planned % | Planned Points | Unplanned Reserve | Focus |
|---|---|---|---|---|
| Sprint 1 | 60% | 25 | 8 pts (20%) | Feature Dev |
| Sprint 2 | 70% | 29 | 8 pts (20%) | Feature Dev |
| Sprint 3 | 75% | 32 | 6 pts (15%) | Feature Dev |
| Sprint 4 | 75% | 32 | 6 pts (15%) | Feature Dev |
| Sprint 5 | 70% | 29 | 8 pts (20%) | Feature Complete |
| Sprint 6 | 60% | 25 | 8 pts (20%) | Enablement |
| Quarter Total | | 172 planned | 44 reserved | |

Feature development capacity (Sprints 1-5): 147 points
Enablement capacity (Sprint 6): 25 points

Step 6: Quarter-Level Rollup

Apply the planned-work percentage to each sprint's adjusted capacity and sum across the quarter:

Quarterly Planned Capacity = (Sprint 1 × 60%) + (Sprint 2 × 70%) + (Sprint 3 × 75%) + (Sprint 4 × 75%) + (Sprint 5 × 70%) + (Sprint 6 × 60%)
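A sketch tying Steps 5 and 6 together; the percentages mirror the recommendation table, and a velocity of 42 reproduces the example above:

```python
# Steps 5-6: per-sprint planned/reserved points and the quarter-level rollup.
SPRINT_PLAN = [  # (planned %, unplanned reserve %) for Sprints 1-6
    (0.60, 0.20), (0.70, 0.20), (0.75, 0.15),
    (0.75, 0.15), (0.70, 0.20), (0.60, 0.20),
]

def quarter_plan(velocity: float) -> list[tuple[int, int]]:
    """Return (planned points, reserved points) per sprint, rounded to whole points."""
    return [(round(velocity * p), round(velocity * r)) for p, r in SPRINT_PLAN]

plan = quarter_plan(42)
print(plan)                         # [(25, 8), (29, 8), (32, 6), (32, 6), (29, 8), (25, 8)]
print(sum(p for p, _ in plan))      # 172 planned points (quarterly rollup, Step 6)
print(sum(p for p, _ in plan[:5]))  # 147 feature-development points (Sprints 1-5)
```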

Step 7: Account for AI Adoption State

AI tooling adoption follows a J-curve — the team gets faster eventually, but is often slower initially as they learn new tools, build prompt libraries, and establish review standards for AI-assisted work. Capacity planning must account for where the team sits on this curve. Ignoring it leads to predictable misses in early quarters.

| AI Adoption Phase | Capacity Adjustment | Signals that you're here |
|---|---|---|
| Phase 0 — Pre-adoption | None (baseline Focus Factor applies) | No AI tools deployed; standard tooling; no prompt library |
| Phase 1 — Rollout (quarters 1–2 of adoption) | Reduce planned capacity by 5–10%; add +5% to unplanned reserve | New tools being trialed; engineers learning prompts; prompt library forming but unstable; review standards being drafted; some AI-generated defects surfacing |
| Phase 2 — Stabilization (quarters 3–4) | Return to baseline capacity; remove the 5% unplanned buffer | Prompt library in active use; AI code review standards enforced; no formal velocity uplift yet but no drag either |
| Phase 3 — Leverage (quarter 5+) | Re-baseline velocity per Step 3 — expect 20–40% uplift over pre-AI baseline | Empirical velocity is measurably and consistently higher; teams ship faster at the same or reduced headcount; AI adoption metrics per §11.4 are stable |

Governing principle: The §1 Migration from Pre-AI Team Sizing discipline applies here too — don't plan against projected AI gains until they're empirically demonstrated. Under-commit in Phase 1, deliver consistently, and let Phase 3 velocity gains compound into higher commitments naturally.

A team that plans against Phase 3 velocity in Phase 1 is the same failure mode as a team that shrinks to 4 engineers before AI practice is mature. In both cases, the org pays the cost of the transition twice — once in the adoption dip, once in the missed commitments.
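A sketch of the phase adjustment as a guard in the planning flow; the values come from the phase table (Phase 1 shown at the midpoint of its 5–10% reduction), and the hard error on Phase 3 encodes the governing principle above:

```python
# Step 7: adjust committed capacity for the team's AI adoption phase.
PHASE_PLANNED_FACTOR = {0: 1.00, 1: 0.925, 2: 1.00}  # Phase 1: -7.5%, midpoint of 5-10%
PHASE_EXTRA_RESERVE = {0: 0.00, 1: 0.05, 2: 0.00}    # Phase 1: +5% unplanned reserve

def phase_adjusted(planned: float, reserve: float, phase: int) -> tuple[float, float]:
    if phase >= 3:
        # Phase 3 belongs in Step 3: re-baseline from measured velocity,
        # never plan against a projected uplift multiplier.
        raise ValueError("Phase 3: re-baseline velocity from measured sprints instead")
    return planned * PHASE_PLANNED_FACTOR[phase], reserve + planned * PHASE_EXTRA_RESERVE[phase]

print(phase_adjusted(147, 44, phase=1))  # (~136 planned, ~51 reserved)
```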

Capacity Planning Template

See separate file: templates/capacity-planning-template.html


Cognitive Load Assessment

Cognitive load measures the mental burden on a team from the complexity of their work, the systems they maintain, and the processes they navigate. High cognitive load leads to slower delivery, more defects, and burnout.

The assessment should be conducted each quarter during pre-planning by the Tech Lead or Engineering Manager with input from the team.

Three Types of Cognitive Load

| Type | Description | Examples |
|---|---|---|
| Intrinsic | Complexity inherent to the work itself | Complex domain logic, distributed systems, unfamiliar tech stack |
| Extraneous | Complexity from the environment and processes | Poor tooling, unclear requirements, too many meetings, manual deployments |
| Germane | Productive learning and skill-building | Learning new patterns, cross-training, architecture improvements |

Goal: Minimize extraneous load, manage intrinsic load, and protect space for germane load.

Cognitive Load Signals to Watch

| Signal | Indicator of High Load |
|---|---|
| Frequent context switching | Team owns too many unrelated services or domains |
| High defect rate | Team is stretched too thin to maintain quality |
| Declining velocity | Teams slow down as complexity accumulates |
| Onboarding takes too long | Systems are too complex for new members to ramp |
| Burnout / attrition | Sustained overload drives people out |
| Excessive dependencies | Team can't deliver without coordinating with many others |

Cognitive Load Worksheet

See separate file: templates/cognitive-load-worksheet.html

Tools


4. Quarterly Planning (QP)

A 2-day event where all R&D teams, senior stakeholders, and partners commit to what they will deliver in the upcoming quarter (6 sprints).

Attendees

Definition of Done


Day 1

| Time Block | Activity |
|---|---|
| Opening | Welcome and context setting from senior leadership |
| 4-Hour Planning Block | Teams break out with their senior leaders to: break down scoped features; identify risks and dependencies (logged in JIRA); check velocity and capacity (from Capacity Planning); build plans visible in JIRA Portfolio |
| Leadership Readout | Each team presents what they have committed to so far and their confidence level in the plan |
| Quarterly Risk Review | Senior leaders walk the QRR board and work to solve/unblock surfaced risks immediately so teams can continue planning |

Goal of Day 1: Get to a solid draft plan that can be finalized on Day 2.


Day 2

Day 2 follows the same structure as Day 1 with one critical gate:

Confidence Vote: Each team must reach 70% confidence or higher in their plan during the readout.

Artifacts


5. Day-to-Day Execution

Scrum Ceremonies

Each team runs standard Scrum ceremonies. Detailed agendas follow.


Sprint Planning

Cadence: Start of each sprint (Day 1)
Duration: 2-4 hours
Attendees: Full team (PM, Tech Lead, Engineers, QE, UX, DevOps)

Agenda:

  1. Review sprint goal — what outcome are we targeting?
  2. Pull stories/tasks from the groomed backlog into the sprint
  3. Validate capacity (PTO, holidays, carry-over work)
  4. Break down stories into sub-tasks with estimates
  5. Identify dependencies on other teams or partners
  6. Confirm sprint commitment and finalize sprint backlog

Daily Stand-Up

Cadence: Daily
Duration: 15 minutes (strict timebox)
Attendees: Full team

Agenda (per person):

  1. What did I complete yesterday?
  2. What am I working on today?
  3. Are there any blockers?

Note: Blockers should be taken offline immediately after stand-up. The Tech Lead or PM owns unblocking.


Backlog Grooming (Refinement)

Cadence: Mid-sprint (1-2 sessions per sprint)
Duration: 1-2 hours
Attendees: PM, Tech Lead, Engineers, QE, UX

Agenda:

  1. PM presents upcoming stories (1-2 sprints ahead)
  2. Team asks clarifying questions — UX reviews designs if applicable
  3. Acceptance criteria are defined or refined
  4. Team estimates stories (story points)
  5. Identify technical risks, spikes needed, or dependencies
  6. Stories are marked "Ready" when fully groomed

Sprint Demo / Review

Cadence: End of each sprint
Duration: 1 hour
Attendees: Full team + stakeholders, PM, partners

Agenda:

  1. PM recaps the sprint goal
  2. Team demos completed work (live or recorded)
  3. Stakeholders provide feedback
  4. PM confirms which stories met the Definition of Done
  5. Review items not completed — carry-over rationale
  6. Discuss any changes to upcoming priorities

Sprint Retrospective

Cadence: End of each sprint (after Demo)
Duration: 1 hour
Attendees: Full team only (safe space)

Agenda:

  1. What went well this sprint?
  2. What didn't go well?
  3. What should we change or try next sprint?
  4. Review action items from last retro — were they addressed?
  5. Agree on 1-3 actionable improvements for next sprint

Facilitation: Rotate the facilitator each sprint to build team ownership.


Epic Closing Ceremony

Cadence: When an Epic is ready to be marked Done (typically Sprint 5)
Duration: 30-60 minutes
Attendees: Full team (PM, Tech Lead, Engineers, QE, UX, DevOps) + User Learning. If the Epic is marked User Impacting, Marketing must also attend.

A live meeting where the team walks through every item on the Epic Definition of Done checklist together. This is the formal gate before an Epic moves to Done.

Agenda:

  1. PM confirms the Epic scope — what was committed vs. what was delivered
  2. Walk through the Epic DoD checklist line by line (see Epic Definition of Done)
  3. Confirm all stories under the Epic are Done
  4. Review the User Impact field — if User Impacting:
  5. Confirm deployment status (GA / Beta / Dark Launch)
  6. Flag any open items or follow-ups (logged as new work items)
  7. Team agrees: Epic is Done or Epic needs follow-up work

Why a ceremony? Marking an Epic done is a significant event — it means a feature is shipped and enabled. The closing ceremony ensures nothing falls through the cracks, especially GTM readiness for user-impacting changes. It also gives the team a moment to celebrate delivery.


Governance Meetings (Detailed Agendas)

See Section 7: Governance & Meeting Cadence for Leadership Sync, Epic Refinement, and Quarterly Demo agendas.

Working Agreements


6. JIRA Structure & Tooling

JIRA Hierarchy

Work flows from strategic intent down to executable tasks through a 4-level hierarchy across three JIRA products:

Jira Product Discovery (JPD)          Portfolio Project (JIRA)         Team Projects (JIRA)
┌──────────────────────────┐          ┌─────────────────────┐          ┌─────────────────┐
│  Strategy                │          │                     │          │                 │
│    └── Initiative        │───────>  │  Epic               │───────>  │  Story          │
│                          │          │                     │          │  Task           │
│  Customer Ideas          │          │                     │          │  Bug            │
│  Customer Suggestions    │          │                     │          │  Internal Bug   │
└──────────────────────────┘          └─────────────────────┘          │  Vulnerability  │
                                                                       │  Risk           │
                                                                       └─────────────────┘
| Level | Where It Lives | Who Owns It | Description |
|---|---|---|---|
| Strategy | Jira Product Discovery (JPD) | VP / Senior Leadership | Top-level strategic themes or pillars (e.g., "Expand into mid-market," "Platform modernization"). Typically 3-5 per year. Strategies do not change quarter to quarter. |
| Initiative | Jira Product Discovery (JPD) | Director / Senior PM | A large body of work that delivers against a Strategy. Spans one or more quarters. Contains multiple Epics. (e.g., "Self-service onboarding for mid-market") |
| Epic | Portfolio Project | PM / Tech Lead | A deliverable feature or capability that can be completed within a quarter. Epics link up to Initiatives in JPD and down to Stories in Team Projects. (e.g., "Guided setup wizard") |
| Story / Task / Bug | Team Project | Engineering Team | Sprint-level work items that are estimated, assigned, and delivered within a sprint. |

Jira Product Discovery (JPD)

JPD is the strategic planning and ideation layer for R&D.

See Appendix E: JPD Timeline & Swimlane View for the visual diagram.

Portfolio Project (R&D Roadmap)

Team Projects

Work Item Types (per Team Project)

| Type | Description |
|---|---|
| Epic | Lives in the Portfolio Project; Stories/Tasks parent to it |
| Story | Feature work |
| Task | Technical or operational work |
| Bug | Customer-reported defects |
| Internal Bug | Internally discovered defects |
| Vulnerability | Security vulnerabilities |

JIRA Fields Reference

Standard Fields (All Work Items)

| Field | Usage |
|---|---|
| Summary | Title of the work item |
| Description | Detailed description, acceptance criteria |
| Assignee | Individual responsible for the work |
| Reporter | Person who created the item |
| Priority | Critical, High, Medium, Low |
| Status | To Do, In Progress, In Review, QA, Done |
| Sprint | Current sprint assignment |
| Story Points | Estimation (Stories and Bugs) |
| FixVersion | Quarterly delivery tracking (e.g., QP1, QP2) |
| Epic Link | Links story/task/bug to parent Epic |
| Labels | Team-defined categorization |
| Components | Product area or module |

Additional Fields by Work Item Type

| Field | Applies To | Usage |
|---|---|---|
| Acceptance Criteria | Story | Conditions that must be met for Done |
| User Impact | Epic | Custom field (dropdown: Yes / No). Indicates whether the Epic is user-impacting. Used by User Learning and Marketing to prepare GTM (go-to-market) activities. |
| Due Date | Bug, Vulnerability | Auto-calculated by ScriptRunner based on SLA |
| Severity | Bug, Vulnerability | S1 (Critical), S2 (Major), S3 (Minor), S4 (Trivial) |
| Found In Version | Bug | Version where the defect was discovered |
| Environment | Bug | Production, Staging, QA, Dev |
| Security Classification | Vulnerability | CVSS score or internal classification |
| Customer | Bug | Customer who reported the defect (if applicable) |
| Root Cause | Bug, Internal Bug | Category of root cause (for trend analysis) |
| Risk Category | Risk | Risk Status classification (Resolved, Owned, Accepted, Mitigated) |
| Risk Impact | Risk | High, Medium, Low |
| Dependency Team | Risk | Team(s) involved in the dependency |

JIRA Description Templates

Standardize how work items are described in JIRA to ensure clarity, consistency, and completeness across all teams.

Epic Description Template

## Overview
[Brief description of what this Epic delivers and why it matters to the customer/business.]

## Success Metrics
- [ ] [Metric 1 — e.g., Reduce checkout abandonment by 15%]
- [ ] [Metric 2 — e.g., Feature adopted by 50% of users within 30 days of GA]
- [ ] [Metric 3 — e.g., Zero S1/S2 defects in first 2 sprints post-release]

## Scope
### In Scope
- [Feature/capability 1]
- [Feature/capability 2]
- [Feature/capability 3]

### Out of Scope
- [What is explicitly NOT included in this Epic]
- [Items deferred to a future Epic]

## User Stories
- [Link to Story 1]
- [Link to Story 2]

## Dependencies
- [Team/system dependency 1]
- [Team/system dependency 2]

## UX / Design
- [Link to Figma/design files]
- [Link to UX research or user flow]

## Technical Approach
[High-level architecture or technical approach. Link to ADR if applicable.]

## Risks
- [Risk 1 — and mitigation]
- [Risk 2 — and mitigation]

## User Impact
- **User Impacting**: [Yes / No]
- **GTM Notes**: [If user impacting — key messaging, target audience, launch timing]

## Release Plan
- **Target Release**: [QP#, Sprint #]
- **Release Type**: [GA / Beta / Dark Launch]
- **Feature Flag**: [Yes/No — flag name if applicable]

## Stakeholders
- **Product Owner**: [Name]
- **Tech Lead**: [Name]
- **UX**: [Name]
- **QE**: [Name]

Story Description Template

## User Story
As a [type of user], I want [goal] so that [reason/value].

## Acceptance Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]
- [ ] [Criterion 3]

## UX / Design
- [Link to mockups/Figma]
- [Interaction notes]

## Technical Notes
[Any implementation guidance, API contracts, or architectural decisions.]

## Dependencies
- [Dependency on other story/team/service]

## Test Scenarios
- [Happy path scenario]
- [Edge case scenario]
- [Error scenario]

Bug Description Template (Customer-Reported)

## Summary
[One-line description of the defect.]

## Customer Information
- **Customer Name**: [Customer name or account]
- **Customer Tier**: [Enterprise / Mid-Market / SMB]
- **Support Ticket**: [Link to support ticket]
- **Reported Date**: [Date]
- **Customer Impact**: [Number of users affected, business impact]

## Environment
- **Product Version**: [Version number]
- **Environment**: [Production / Staging / QA]
- **Browser / Device**: [If applicable]
- **OS**: [If applicable]

## Steps to Reproduce
1. [Step 1]
2. [Step 2]
3. [Step 3]

## Expected Behavior
[What should happen.]

## Actual Behavior
[What actually happens. Include error messages if any.]

## Screenshots / Logs
[Attach screenshots, screen recordings, or relevant log snippets.]

## Severity
[S1 - Critical / S2 - Major / S3 - Minor / S4 - Trivial]

## Workaround
[Is there a workaround? Describe it.]

## Root Cause (to be filled after investigation)
[Category: Code Defect / Configuration / Data Issue / Infrastructure / Third-Party]
[Description of root cause.]

Vulnerability Description Template

## Summary
[Brief description of the vulnerability.]

## Source
- **Detected By**: [Scanner name / Penetration test / Bug bounty / Internal audit]
- **CVE ID**: [If applicable]
- **CVSS Score**: [Score] — [Critical / High / Medium / Low]
- **CWE Category**: [e.g., CWE-79 XSS, CWE-89 SQL Injection]

## Affected Systems
- **Service/Component**: [Name]
- **Version**: [Affected versions]
- **Environment**: [Production / Staging / All]

## Description
[Detailed description of the vulnerability and how it can be exploited.]

## Impact
[What data/systems are at risk. Potential business impact.]

## Remediation Plan
[Proposed fix. Link to PR or technical approach.]

## SLA
- **SLA Target**: [72 hours / 30 days / 90 days / 180 days]
- **Due Date**: [Auto-calculated by ScriptRunner]

## Verification
- [ ] Fix deployed
- [ ] Re-scan confirms vulnerability resolved
- [ ] Security team notified

Quarterly Delivery Tracking
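FixVersion (QP1, QP2, ...) is the handle for quarterly delivery tracking (see JIRA Fields Reference). As one illustration, a quarter's committed Epics can be pulled via Jira's search API; the project key, site URL, and credentials below are placeholders, not part of this model:

```python
# Illustrative only: list the Epics committed to a quarter via FixVersion.
import requests

JQL = 'project = RD AND issuetype = Epic AND fixVersion = "QP1" ORDER BY status'

resp = requests.get(
    "https://example.atlassian.net/rest/api/2/search",  # placeholder site
    params={"jql": JQL, "fields": "summary,status"},
    auth=("user@example.com", "api-token"),             # placeholder credentials
)
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["status"]["name"], issue["fields"]["summary"])
```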

Quarterly Risk Review

The Quarterly Risk Review provides a consolidated view of risks across all teams.

Structure:

Risk Status Categories (used as a field or status on Risk items):

| Category | Definition | Action |
|---|---|---|
| Resolved | Risk has been addressed and eliminated | No further action needed |
| Owned | Risk is assigned to a specific person to drive resolution | Owner actively working to mitigate |
| Accepted | Risk is acknowledged but will not be actively mitigated | Team proceeds with awareness |
| Mitigated | Actions are in place to reduce the impact or likelihood | Monitor for effectiveness |

SLA Management (ScriptRunner)

ScriptRunner is used to automate Due Date calculation on Bug and Vulnerability items.

Recommended SLA Targets (SaaS Benchmarks)

Bug Triage SLA — Time to triage customer-reported defects from support:

| Severity | Triage SLA |
|---|---|
| S1 — Critical (system down, data loss) | 1 hour |
| S2 — Major (feature broken, no workaround) | 4 business hours |
| S3 — Minor (feature impaired, workaround exists) | 1 business day |
| S4 — Trivial (cosmetic, minor inconvenience) | 3 business days |

Bug Resolution SLA — Time to resolve/fix the defect:

| Severity | Resolution SLA |
|---|---|
| S1 — Critical | 4 hours (hotfix) |
| S2 — Major | 5 business days |
| S3 — Minor | 30 business days (next sprint) |
| S4 — Trivial | 90 business days (backlog) |

Security Vulnerability SLA — Time to remediate:

| CVSS Score | Severity | Remediation SLA |
|---|---|---|
| 9.0 - 10.0 | Critical | 72 hours |
| 7.0 - 8.9 | High | 30 days |
| 4.0 - 6.9 | Medium | 90 days |
| 0.1 - 3.9 | Low | 180 days |

Note: These benchmarks are based on industry standards for SaaS companies. Adjust based on contractual obligations, compliance requirements (SOC2, ISO 27001), and customer expectations.
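The production automation lives in ScriptRunner (Groovy inside Jira); the sketch below shows only the lookup logic, in Python to keep one language across this document's examples, and uses calendar days (business-day handling and the triage SLAs are omitted):

```python
# Due Date = created + SLA, per the tables above.
from datetime import datetime, timedelta

BUG_RESOLUTION_SLA = {
    "S1": timedelta(hours=4),   # hotfix
    "S2": timedelta(days=5),
    "S3": timedelta(days=30),
    "S4": timedelta(days=90),
}

def vulnerability_sla(cvss: float) -> timedelta:
    if cvss >= 9.0: return timedelta(hours=72)
    if cvss >= 7.0: return timedelta(days=30)
    if cvss >= 4.0: return timedelta(days=90)
    return timedelta(days=180)

created = datetime(2026, 1, 5, 9, 0)
print(created + BUG_RESOLUTION_SLA["S2"])  # 2026-01-10 09:00:00
print(created + vulnerability_sla(9.8))    # 2026-01-08 09:00:00
```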


7. Governance & Meeting Cadence (During Execution)

Leadership Sync Review — Weekly

Cadence: Weekly (same day/time each week)
Duration: 1 hour
Attendees: Senior leaders, day-to-day team leads (Tech Leads, PMs), Marketing, and delivery partners.

Agenda:

  1. Quarterly Steering Review (20 min)
  2. Blockers, Risks & Dependencies (15 min)
  3. Milestones & Delivery Updates (10 min)
  4. Pre-Planning for Next Quarter (10 min)
  5. Action Items & Wrap-Up (5 min)

Epic Refinement — Bi-Weekly (Sprint Boundaries)

Cadence: Every 2 weeks, aligned with sprint boundaries
Duration: 1.5-2 hours
Attendees: Senior leaders, PMs, Tech Leads, Architecture, and relevant partners.

Agenda:

  1. Review upcoming Epics from the Portfolio Project (30 min)
  2. Break down Epics into Features/Stories (30 min)
  3. Dependency mapping (15 min)
  4. Prioritization and sequencing (15 min)
  5. Risks and open questions (15 min)

Quarterly Demo — End of Sprint 6

Cadence: Once per quarter, near the end of Sprint 6
Duration: 1-2 hours (depending on number of teams)
Attendees: All R&D, senior leadership, Marketing, Sales, Customer Success, and other stakeholders.

Format: Pre-recorded video demos.

Agenda:

  1. Opening — Senior leadership welcome, quarter recap (5 min)
  2. Team Demos — Each team presents pre-recorded video of completed features (5-10 min per team)
  3. Q&A — Stakeholder questions after each demo (5 min per team)
  4. Closing — Summary of what shipped, shout-outs, look-ahead to next quarter (5 min)

Tip: Pre-record videos the week before. Keep each video under 5 minutes. Use a shared folder or playlist for async viewing by those who can't attend live.


Escalation Paths & Decision-Making Authority

Decisions and escalations are routed by concern type, not just seniority. Most issues should be resolved horizontally with the team's partner roles before escalating vertically.

Routing by Concern Type

| Concern Type | First Escalation | Decision Authority | Examples |
|---|---|---|---|
| People (performance, growth, conflict, hiring) | Team Lead | Engineering Manager / Director of Engineering | Performance concerns, career development, team conflict, hiring decisions |
| Architecture (technical direction, unpaved work, tech debt) | Partner Architect (product line) | Partner Architect → Director of Engineering | Architecture decisions, new technology adoption, approach for unpaved work, cross-service contracts |
| Program / Operations (delivery risk, process, cross-team coordination) | Partner Technical Program Manager | Partner TPM → Director of Engineering | Delivery timeline risk, cross-team dependency resolution, process improvements, release coordination |
| Strategic (roadmap direction, investment shifts, org-level trade-offs) | Director of Engineering | VP of Engineering | Roadmap reprioritization, investment reallocation across product lines, org structure changes |

Escalation Flow

Team Member
  │
  ├── People concern ──────────► Team Lead ──► Engineering Manager ──┐
  │                                                                  │
  ├── Architecture concern ────► Partner Architect ──────────────────┤
  │                                                                  │
  ├── Operations concern ──────► Partner TPM ────────────────────────┤
  │                                                                  ▼
  │                                                       Director of Engineering
  │                                                      (Product Line Authority)
  │                                                                  │
  └── Strategic concern ──────────────────────────────────────► VP of Engineering
                                                              (Strategic Decisions)

Decision Authority Summary

| Role | Scope | Authority |
|---|---|---|
| Team Lead | Within team | Day-to-day people decisions, sprint-level trade-offs, initial escalation point for team members |
| Engineering Manager | Within team(s) | People decisions (hiring, performance, growth), team structure, working agreements |
| Partner Architect | Across product line | Architecture direction, technical standards, guidance on unpaved or novel work, code-level design reviews |
| Partner TPM | Across product line | Delivery coordination, cross-team dependency resolution, process and operational improvements, risk management |
| Director of Engineering | Product line | Final authority on architecture, people, and operations within their product line. Convergence point for all three escalation lanes. |
| VP of Engineering | R&D organization | Strategic decisions, investment allocation across product lines, org-level trade-offs. Informed on cross-product-line escalations. |

When to Escalate

Escalate when:

Do not escalate when:

Principle: Resolve horizontally first. Escalate vertically only when horizontal resolution fails or when the decision scope exceeds the partner role's authority.

See Section 10: Scaling the Model for how governance meetings, escalation paths, and roles change at different organization sizes.


8. Bridge Sprint

After Sprint 6, a shorter Bridge Sprint serves as a transition period — bridging the gap between execution and the next quarter's planning.

Bridge Sprint Activities

| Activity | Description |
|---|---|
| Hackathon | Innovation time for teams to explore ideas |
| Spikes | Technical research and proof-of-concept work |
| KTLO | Address remaining maintenance and operational items |
| Final Pre-Planning | Last preparation before the next Quarterly Planning |
| Metrics Review | Review quarterly performance (see Metrics section) |

9. Metrics & KPIs

Reviewed during the Bridge Sprint and used to inform the next quarter's planning. Metrics are organized into four categories (9.1–9.4) and consolidated into a per-team scorecard in the Team Health Dashboard (9.5).


9.1 Delivery Metrics

| Metric | Level | Description | Target |
|---|---|---|---|
| Commitment Reliability Ratio | Team & Product | What was committed vs. what was delivered (via FixVersion) | >= 80% |

9.2 DORA Metrics (Engineering Performance)

The four DORA metrics measure software delivery performance and operational stability.

| Metric | Description | Elite | High | Medium | Low |
|---|---|---|---|---|---|
| Deployment Frequency | How often code is deployed to production | On-demand (multiple/day) | Weekly to monthly | Monthly to every 6 months | Less than once per 6 months |
| Lead Time for Changes | Time from commit to production | Less than 1 hour | 1 day to 1 week | 1 week to 1 month | 1 to 6 months |
| Change Failure Rate | % of deployments causing a failure in production | 0-15% | 16-30% | 31-45% | 46-60% |
| Mean Time to Restore (MTTR) | Time to recover from a production failure | Less than 1 hour | Less than 1 day | 1 day to 1 week | More than 1 week |

Goal: Teams should aim for High or Elite performance. Track these quarterly and look for trends. DORA metrics can be sourced from CI/CD pipelines, GitHub/GitLab, and incident management tools. See Section 11.2 for how AI can automate trend analysis and anomaly detection on these metrics.
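As a sketch of the sourcing step, two of the four metrics computed from a list of deployment events; the event shape here is an assumption, not any specific tool's schema:

```python
# Deployment Frequency and Change Failure Rate from deployment events.
from datetime import date

deployments = [  # (deploy date, caused a production failure?) -- assumed shape
    (date(2026, 1, 6), False), (date(2026, 1, 8), True), (date(2026, 1, 13), False),
]

weeks_in_window = 13  # one quarter
deploys_per_week = len(deployments) / weeks_in_window
change_failure_rate = sum(failed for _, failed in deployments) / len(deployments)

print(f"{deploys_per_week:.2f} deploys/week, change failure rate {change_failure_rate:.0%}")
```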


9.3 Quality Metrics

| Metric | Description | What to Watch |
|---|---|---|
| Created vs. Resolved | Trend of incoming vs. completed work items | Resolved should consistently exceed or match Created |
| SLA KPIs | Performance against Triage SLA and Completion SLA targets | % of items resolved within SLA by severity |
| Incidents | Production incident count and severity | Trend over time, MTTR, repeat incidents |
| Escaped Defects | Bugs found in production that were not caught in QA | Should trend downward quarter over quarter |
| Defect Removal Efficiency (DRE) | % of defects caught before they reach production: defects found pre-release ÷ total defects (pre-release + post-release). An industry-standard quality outcome metric (used in CMMI and IBM's software process). Replaces Test Automation Coverage, which was a vanity measure — coverage % didn't show whether tests were effective. | Green: ≥ 90%; Amber: 75–89%; Red: < 75%. Track over a rolling 6 sprints. |

9.4 Engineering Intelligence Metrics (Jellyfish)

These metrics provide deeper visibility into engineering health, investment, and efficiency.

| Metric | Description | Why It Matters |
|---|---|---|
| Engineering Investment Allocation | Actual time spent across New Product, Architecture, KTLO, and Customer Voice vs. plan | Validates that teams are spending time where leadership intends. Pair with per-category outcomes (below) — adherence alone is process compliance, not health. |
| Investment Outcome — New Product | New-feature adoption rate, revenue impact of features shipped | Measures whether New Product investment produced customer/business value |
| Investment Outcome — Architecture / Tech Debt | Cycle time trend, incident rate, flow efficiency | Measures whether Architecture investment actually reduced debt |
| Investment Outcome — KTLO | Incident frequency, MTTR trend, defect backlog size | Measures whether KTLO investment delivered reliability |
| Investment Outcome — Customer Voice | Customer-reported defect closure rate, CSAT/NPS delta | Measures whether Customer Voice investment moved customer outcomes |
| Cycle Time | Time from work started to work completed | Shorter cycle time = faster value delivery |
| Flow Efficiency | Ratio of active work time vs. wait/blocked time | Low efficiency signals process bottlenecks |
| Sprint Predictability | Consistency of velocity across sprints per team | Stable velocity = better planning confidence |
| Cross-team Dependency Impact | How often dependencies cause delays or missed commitments | Highlights org design issues and coupling |
| Focus Time | % of engineer time in uninterrupted deep work vs. meetings/context switching | Protect maker time for productivity |

9.5 Team Health Dashboard

The Team Health Dashboard consolidates metrics from Sections 9.1–9.4 into a single per-team scorecard with RAG (Green / Amber / Red) signals. No new metrics are introduced — the dashboard provides a structured assessment framework for existing data.

Purpose: Give teams a quarterly self-assessment tool and give leadership a cross-team comparison view.

When to use: During pre-planning (Section 3) and quarterly retrospectives. Teams complete the detail view; leadership reviews the rollup.

Category Overview

| Category | What It Measures | Key Sources |
|---|---|---|
| Planning Health | Readiness to enter and execute a quarter | Backlog state, QP confidence, capacity planning |
| Delivery Health | Ability to consistently deliver on commitments | Commitment Reliability Ratio (9.1), velocity, deployment frequency |
| Quality Health | Product quality and defect management | SLA compliance (Section 6), defect trends, DORA (9.2) |
| Engineering Health | Technical health and efficiency | DORA metrics (9.2), Jellyfish metrics (9.4) |
| Team Health | Sustainability and team well-being | Cognitive load (Section 3), focus time, stability |

Planning Health Signals

| Signal | Green | Amber | Red |
|---|---|---|---|
| Backlog Grooming Readiness | 2+ sprints of ready stories | 1–2 sprints ready | < 1 sprint ready |
| Pre-Planning Data Readiness | All inputs (velocity, cognitive load, incoming rate, dependency map) available 3+ sprints before QP | Some inputs available; gaps exist | Inputs missing or stale at QP |
| Dependency Coverage at QP | All cross-team dependencies identified, linked, and owned at QP close | Most identified; some owners missing | Significant dependencies surfaced post-QP |
| Cognitive Load Score | <= 35% (Low) | 36–55% (Moderate) | > 55% (High/Critical) |
| Incoming Rate Trend | Decreasing or stable | Slight increase | Significant increase |

Delivery Health Signals

| Signal | Green | Amber | Red |
|---|---|---|---|
| Commitment Reliability Ratio | >= 80% | 70–79% | < 70% |
| Velocity Stability | Variance <= 15% | 16–25% | > 25% |
| Sprint Commitment | Consistently meets | Occasional misses (1–2/qtr) | Frequent misses (3+/qtr) |
| Deployment Frequency | Elite/High (weekly+) | Medium (monthly) | Low (< monthly) |
| Cycle Time | Decreasing or stable | Slight increase | Significant increase |

Quality Health Signals

| Signal | Green | Amber | Red |
|---|---|---|---|
| SLA Compliance | >= 90% within SLA | 75–89% | < 75% |
| Defect Trend (Created vs Resolved) | Resolved >= Created | Created slightly > Resolved | Backlog growing |
| Escaped Defects | Decreasing trend | Stable | Increasing trend |
| Defect Removal Efficiency (DRE) | >= 90% caught pre-release | 75–89% | < 75% |
| Change Failure Rate | 0–30% (Elite/High) | 31–45% (Medium) | > 45% (Low) |

Engineering Health Signals

| Signal | Green | Amber | Red |
|---|---|---|---|
| Lead Time for Changes | < 1 week (Elite/High) | 1 week–1 month (Medium) | > 1 month (Low) |
| MTTR | < 1 day (Elite/High) | 1–7 days (Medium) | > 1 week (Low) |
| Flow Efficiency | Active > wait time | Active ≈ wait time | Wait > active time |
| Cross-team Dependency Impact | Minimal delays | Occasional delays | Frequent blockers |
| Investment Allocation vs Plan | Within ±5% AND category outcomes trending positively (see §9.4) | Within ±10% OR outcomes flat | > ±10% variance OR outcomes trending negatively despite adherence |

Team Health Signals

| Signal | Green | Amber | Red |
|---|---|---|---|
| Cognitive Load | <= 35% (Low) | 36–55% (Moderate) | > 55% (High/Critical) |
| Focus Time | >= 60% maker time | 50–59% | < 50% |
| Unplanned Work Accuracy | Reserve sufficient (< 5% overrun) | Tight (5–10% overrun) | Exceeded (> 10% overrun) |
| Team Stability | No attrition | 1 departure or new member | Multiple departures / high churn |
| Retro Action Efficacy | Most actions reviewed at next retro show observable behavior change | Some actions reviewed; mixed evidence of behavior change | Actions closed without revisit, or no evidence of behavior change |

Category RAG Logic

Each category is assessed independently:

Overall Health Score

Each category is scored: Green = 2, Amber = 1, Red = 0. Total is expressed as a percentage of the maximum (10 points).

| Overall Score | Status |
|---|---|
| 80–100% | Green — Healthy |
| 50–79% | Amber — Needs attention |
| < 50% | Red — At risk |
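The scoring rule in code form, as a sketch (category RAG inputs come from the signal tables above):

```python
# Overall Health Score: Green=2, Amber=1, Red=0 across the five categories,
# expressed as a percentage of the 10-point maximum.
POINTS = {"Green": 2, "Amber": 1, "Red": 0}

def overall_health(category_rag: dict[str, str]) -> tuple[float, str]:
    score = 100 * sum(POINTS[r] for r in category_rag.values()) / (2 * len(category_rag))
    status = "Green" if score >= 80 else "Amber" if score >= 50 else "Red"
    return score, status

print(overall_health({"Planning": "Green", "Delivery": "Amber", "Quality": "Green",
                      "Engineering": "Amber", "Team": "Green"}))  # (80.0, 'Green')
```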

Action Planning

When any category scores Amber or Red:

  1. Identify the specific signals driving the rating
  2. Root-cause — use retro or 5-whys to understand why
  3. Assign an owner and target date for each action item
  4. Track completion in the next assessment cycle

Template: Use the interactive Team Health Dashboard template for self-assessment (Team Detail View) and cross-team comparison (Leadership Rollup View). See Section 11.2 for AI-powered metric analysis that can automate trend detection and anomaly alerting across dashboard signals.


10. Scaling the Model

This section provides guidance on how the operating model adapts as an R&D organization grows. The base model (Sections 1–9) is designed for medium-scale organizations (5–10 teams). This section describes what to simplify for smaller orgs and what to add for larger ones.


10.1 Scaling Philosophy

Principle: Add governance only when the cost of coordination failures exceeds the cost of the governance itself.

Every process, meeting, and role described in this model has a coordination cost. At small scale, the cost of formal governance outweighs the benefit — teams can coordinate informally. At large scale, the cost of coordination failures (misalignment, duplicated work, blocked teams) far exceeds the cost of structured governance.

The scaling guidance follows three rules:

  1. Start lean — adopt only what solves a real coordination problem today
  2. Add structure when pain emerges — not preemptively
  3. Remove structure when it stops earning its keep — governance should be periodically audited

10.2 Scaling Tiers Overview

┌──────────────────────────────────────────────────────────────────────────────────┐
│                        SCALING THE OPERATING MODEL                               │
├──────────────────────────────────────────────────────────────────────────────────┤
│                                                                                  │
│   SMALL (2-4 teams)          MEDIUM (5-10 teams)         LARGE (11-20+ teams)    │
│   ┌────────────────┐         ┌────────────────┐          ┌────────────────┐      │
│   │ Lightweight    │         │ Base Model     │          │ Program Layer  │      │
│   │                │         │                │          │                │      │
│   │ • 1-day QP     │   ──>   │ • 2-day QP     │   ──>    │ • 2.5-3 day QP │      │
│   │ • Bi-weekly    │         │ • Weekly LS    │          │ • Pre-QP align │      │
│   │   Leadership   │         │ • Bi-weekly ER │          │ • Area syncs   │      │
│   │ • JPD optional │         │ • Full JIRA    │          │ • Multiple     │      │
│   │ • No RTE       │         │   hierarchy    │          │   Portfolios   │      │
│   │ • Ad-hoc coord │         │ • Sections 1-9 │          │ • RTE role     │      │
│   │ • Shared EM    │         │   as written   │          │ • Arch Council │      │
│   └────────────────┘         └────────────────┘          └────────────────┘      │
│                                                                                  │
│   Coordination:              Coordination:               Coordination:           │
│   Informal, direct           Structured, team-level      Layered, program-level  │
│                                                                                  │
└──────────────────────────────────────────────────────────────────────────────────┘

10.3 Small Scale (2–4 Teams)

At small scale, the full model is over-engineered. Simplify aggressively — the goal is delivery speed with minimal overhead.

What Changes

| Dimension | Base Model (Medium) | Small Scale Adaptation |
|---|---|---|
| Quarterly Planning | 2-day event | 1-day event — combined planning and readout |
| Leadership Sync | Weekly, 1 hour | Bi-weekly, 30 min — or combined with stand-up of stand-ups |
| Epic Refinement | Bi-weekly, 1.5–2 hours | As-needed — PMs and Tech Leads align directly |
| JIRA Hierarchy | JPD → Portfolio → Team Projects | Portfolio → Team Projects (JPD optional) |
| JPD | Strategies, Initiatives, Customer Ideas | Optional — use a simple roadmap in Miro or a shared doc |
| Portfolio Project | Dedicated JIRA project for Epics | Can be a shared board or a label-based view |
| Quarterly Risk Review | Formal JIRA board by Product Line | Shared risk list in a spreadsheet or JIRA filter |
| Roles | PM, Tech Lead, EM per team | PM may span 2 teams; EM may be shared; no dedicated RTE |
| Quarterly Demo | Formal event, pre-recorded videos | Informal show-and-tell, live demos |
| Cross-team Coordination | Structured via LS and ER | Ad-hoc — Tech Leads talk directly |
| Metrics | Full DORA + Jellyfish + Team Health | Commitment Reliability Ratio + basic DORA (deployment frequency, lead time) |

What to Keep

Even at small scale, these elements are essential:


10.4 Medium Scale (5–10 Teams)

This is the base model as documented in Sections 1–9. No modifications needed.

| Dimension | Details |
|---|---|
| Quarterly Planning | 2-day event (Section 4) |
| Leadership Sync | Weekly, 1 hour (Section 7) |
| Epic Refinement | Bi-weekly, 1.5–2 hours (Section 7) |
| JIRA Hierarchy | JPD → Portfolio → Team Projects (Section 6) |
| Governance | As defined in Section 7 |
| Metrics | Full suite — Sections 9.1–9.5 |
| Bridge Sprint | As defined in Section 8 |
| Team Health | Dashboard with RAG scoring (Section 9.5) |

Reference: If you are at medium scale, the rest of this document (Sections 1–9) is your operating playbook. Use it as-is and adapt through working agreements.


10.5 Large Scale (11–20+ Teams)

At large scale, coordination complexity grows non-linearly. The base model needs a program layer on top to prevent alignment drift, dependency gridlock, and planning chaos.

Organizational Structure

┌──────────────────────────────────────────────────────────────────────┐
│  VP of Engineering                                                   │
│                                                                      │
│  ┌──────────────────────┐   ┌──────────────────────┐                 │
│  │  Product Area A      │   │  Product Area B      │                 │
│  │  Director of Eng     │   │  Director of Eng     │                 │
│  │                      │   │                      │                 │
│  │  ┌─────┐ ┌─────┐     │   │  ┌─────┐ ┌─────┐     │                 │
│  │  │ T1  │ │ T2  │     │   │  │ T5  │ │ T6  │     │                 │
│  │  └─────┘ └─────┘     │   │  └─────┘ └─────┘     │                 │
│  │  ┌─────┐ ┌─────┐     │   │  ┌─────┐ ┌─────┐     │                 │
│  │  │ T3  │ │ T4  │     │   │  │ T7  │ │ T8  │     │                 │
│  │  └─────┘ └─────┘     │   │  └─────┘ └─────┘     │                 │
│  │                      │   │                      │                 │
│  │  Partner Architect   │   │  Partner Architect   │                 │
│  │  Partner TPM         │   │  Partner TPM         │                 │
│  │  Chapter Lead(s)     │   │  Chapter Lead(s)     │                 │
│  └──────────────────────┘   └──────────────────────┘                 │
│                                                                      │
│  ┌──────────────────────────────────────────────────────────┐        │
│  │  Cross-Cutting Roles                                     │        │
│  │  • Release Train Engineer (RTE) — QP facilitation        │        │
│  │  • Architecture Council — cross-area technical standards │        │
│  │  • Program Manager — cross-area dependency tracking      │        │
│  └──────────────────────────────────────────────────────────┘        │
└──────────────────────────────────────────────────────────────────────┘

What Changes

| Dimension | Base Model (Medium) | Large Scale Adaptation |
|---|---|---|
| Quarterly Planning | 2-day event | 2.5–3 day event with pre-QP alignment day |
| Pre-QP Alignment | Not needed | Half-day session: Directors align on cross-area priorities, dependencies, and capacity constraints before teams plan |
| Leadership Sync | Weekly, 1 hour | Split into Product-Area Syncs (weekly, 45 min per area) + Cross-Area Sync (weekly, 30 min, Directors + RTE) |
| Epic Refinement | Bi-weekly, 1.5–2 hours | Per product area, bi-weekly + cross-area dependency review monthly |
| JIRA Hierarchy | 1 Portfolio Project | Multiple Portfolio Projects (1 per product area) + a Program-level rollup view |
| Quarterly Demo | 1 session, all teams | Per-area demos + an executive summary session |
| Metrics | Team-level dashboards | Team + Area + Program rollups |

New Roles at Large Scale

| Role | Scope | Responsibilities |
|---|---|---|
| Release Train Engineer (RTE) | Cross-area | Facilitates Quarterly Planning logistics, tracks cross-team dependencies during execution, owns the program-level risk board, escalates systemic blockers |
| Program Manager | Cross-area | Manages cross-area dependency resolution, coordinates release planning across product areas, maintains program-level roadmap view |
| Chapter Lead | Per discipline, per area | Drives engineering standards within a discipline (e.g., Frontend Chapter Lead), coordinates hiring and skill development, runs community of practice sessions |
| Architecture Council | Organization-wide | Sets cross-area technical standards, reviews architecture decisions with org-wide impact, maintains the technology radar, governs shared platform decisions |

Large-Scale Meeting Cadence

| Meeting | Cadence | Duration | Attendees | Purpose |
|---|---|---|---|---|
| Product-Area Sync | Weekly | 45 min | Director, PMs, Tech Leads, TPM within area | Area-level steering, risks, delivery status |
| Cross-Area Sync | Weekly | 30 min | Directors, RTE, Program Manager | Cross-area dependencies, escalations, program health |
| Architecture Council | Monthly | 1.5 hours | Partner Architects, Chapter Leads, invited Tech Leads | Standards, ADRs with org-wide impact, technology radar updates |
| Cross-Area Dependency Review | Monthly | 1 hour | RTE, TPMs, Tech Leads with active cross-area dependencies | Identify, track, and resolve cross-area blockers |
| Program Retrospective | Quarterly | 2 hours | All Directors, RTE, Program Manager, Chapter Leads | Org-level process improvements, scaling pain points |

10.6 Scaling Decision Matrix

A single comparison across all tiers for quick reference.

| Dimension | Small (2–4 teams) | Medium (5–10 teams) | Large (11–20+ teams) |
|---|---|---|---|
| QP Duration | 1 day | 2 days | 2.5–3 days + pre-QP alignment |
| QP Confidence Vote | Informal team check-in | Per-team, ≥ 70% | Per-team + per-area rollup |
| Leadership Sync | Bi-weekly, 30 min | Weekly, 1 hour | Area Syncs (weekly) + Cross-Area Sync (weekly) |
| Epic Refinement | As-needed | Bi-weekly, 1.5–2 hrs | Per-area bi-weekly + cross-area monthly |
| JIRA Structure | Portfolio + Team Projects | JPD + Portfolio + Team Projects | JPD + Multiple Portfolios + Program rollup |
| JPD Usage | Optional | Strategies, Initiatives, Ideas | Required — multi-area roadmap coordination |
| Quarterly Risk Review | Shared list / filter | JIRA board by Product Line | Per-area boards + program-level board |
| Quarterly Demo | Informal show-and-tell | Formal, pre-recorded | Per-area + executive summary |
| Key Roles | PM (shared), Tech Lead, shared EM | PM, Tech Lead, EM, Architect, TPM | + RTE, Program Manager, Chapter Lead, Arch Council |
| Cross-team Coordination | Ad-hoc, direct | Via Leadership Sync + ER | Structured via RTE, Cross-Area Sync, Dependency Review |
| Metrics | Commitment Reliability Ratio + basic DORA | Full suite (9.1–9.5) | Team + Area + Program rollups |
| Retrospectives | Team retros | Team retros | Team retros + Program Retrospective (quarterly) |
| Bridge Sprint | Hackathon + prep | Full Bridge Sprint (Section 8) | Full Bridge Sprint + Program Retro |

10.7 Scaling Signals

Use these signals to determine when it's time to scale up — or when you've over-scaled and should simplify.

When to Scale Up: Small → Medium

| Signal | What You're Seeing |
|---|---|
| Cross-team dependencies are causing sprint misses | Teams block each other because no structured coordination exists |
| Leadership lacks visibility into delivery status | No consolidated view of what's on track and what's at risk |
| Risk management is reactive | Issues surface late because there's no Quarterly Risk Review process |
| Quarterly Planning feels rushed | 1 day isn't enough to resolve dependencies and build confident plans |
| Customer voice is getting lost | No structured way to capture and prioritize customer ideas |
| Teams are duplicating work | Without portfolio-level visibility, teams build overlapping solutions |

When to Scale Up: Medium → Large

Signal What You're Seeing
Leadership Sync is overflowing Too many teams to review in 1 hour; meetings run long or cut topics
Cross-team dependencies are a top blocker Multiple teams regularly blocked by other teams across product areas
QP takes longer than 2 days Dependency resolution and re-planning extend into Day 3+
Architecture decisions lack consistency Different areas make conflicting technical decisions
Onboarding new teams is slow No playbook for integrating new teams into the model — see Section 12 for the onboarding playbook
Metrics are hard to roll up No program-level view of delivery health across all teams
Directors spend most of their time coordinating Not enough time for strategic work because operational coordination consumes the week

When You've Over-Scaled

Signal What You're Seeing
Meetings have low attendance or engagement People attend governance meetings but don't contribute — the meeting isn't needed
Roles exist but add no value An RTE or Program Manager has been hired but there isn't enough cross-area work to justify the role
Teams feel bureaucratic overhead Process is slowing teams down rather than enabling them
Decisions take longer than before Adding layers has increased decision latency without improving quality
Metrics are collected but not acted on Dashboards exist but nobody changes behavior based on them
Leadership has more visibility than they can use Reports are generated that nobody reads

Action: If you see 3+ signals in the "over-scaled" table, conduct a governance audit. Remove or combine meetings, eliminate roles that aren't earning their keep, and simplify JIRA structure. Scaling down is just as important as scaling up.

10.8 When This Operating Model Doesn't Fit

This operating model is designed for engineering organizations with 10–200 engineers building software products on a Scrum or Scrum-adjacent cadence. That covers most of the addressable audience, but not all of it. This section is explicit about the edges — where the defaults don't apply and what changes.

This is a strength, not a weakness. A framework that claims to fit every context fits no context well. Naming the adaptation points up front is what separates a guided operating model from a brittle one.

How prescriptions are tiered

Every prescription in this operating model falls into one of three tiers. Knowing the tier tells you what you can safely adapt:

Tier Meaning Examples
Core Required for the model's internal coherence. Remove these and the rest stops working. 4-level hierarchy (Strategy → Initiative → Epic → Story), Quarterly Planning cadence, Confidence Vote, Risk Status (R/O/A/M), Investment Categories
Default Strong recommendation with explicit adaptation paths. Keep the default unless your context clearly warrants diverging. 60/70/75/75/70/60 capacity curve, 2-week sprints, Sprint 5 feature freeze, weekly Leadership Sync, 6-sprint quarter
Optional Offered as one good way, not the only way. Adopt if helpful; skip without penalty. ScriptRunner SLA automation, Bridge Sprint hackathon format, specific retro templates, particular meeting lengths

Most of this document is Default. A handful of items are Core (noted inline as you encounter them). Everything else is Optional by nature.

The Context Adaptation Guide

The following contexts require known adaptations. Use them as a starting point, not an exhaustive list.

Regulated industries (finance, healthcare, defense, aerospace, life sciences)
Continuous deployment / platform engineering orgs
Startups pre-product-market-fit (fewer than 10 engineers)
Research, ML, and data science organizations
Digital agencies and services organizations
Mixed-methodology organizations (e.g., 3 Scrum teams + 2 Kanban teams)

The meta-rule

If you find yourself adapting more than 50% of the defaults, stop and ask a harder question: are you still using this operating model, or fighting it?

A framework that bends infinitely also breaks silently. Name your adaptations on purpose, and keep the core load-bearing.


11. AI-Accelerated Engineering

AI tools are transforming how engineering teams plan, build, test, and ship software. This section provides a practical framework for adopting AI across the software development lifecycle — from individual productivity gains to team-level workflow integration and organizational intelligence.

Principle: AI augments engineers — it does not replace engineering judgment. Every AI output requires human review before it enters production code, documentation, or customer-facing systems.


11.1 AI-Assisted Development

Coding Assistants

AI coding assistants provide inline suggestions, code generation, and conversational programming support.

Tool Primary Use Integration
GitHub Copilot Inline code suggestions, tab-completion, chat IDE (VS Code, JetBrains, Neovim)
Claude Code Code generation, refactoring, debugging, architecture discussion CLI, IDE extensions
Cursor AI-native IDE with full-file editing, multi-file context Standalone IDE

AI Code Review & PR Summaries

AI-powered code review tools provide automated first-pass feedback on pull requests, catching common issues before human reviewers engage.

Capability What It Does Example Tools
Automated PR Review Analyzes diffs for bugs, style violations, security issues, and performance concerns CodeRabbit, GitHub Copilot PR Review
PR Summaries Generates human-readable summaries of what changed and why CodeRabbit, Copilot
Review Suggestions Proposes specific code improvements with diffs CodeRabbit, Copilot

Note: AI code review is a supplement to human review, not a replacement for it. Treat AI review comments as suggestions; human reviewers make the final call.

AI Test Generation

Capability What It Does When to Use
Unit Test Generation Generates test cases from function signatures and docstrings New code, increasing coverage on existing code
Edge Case Discovery Identifies boundary conditions and error scenarios Complex logic, data validation
Test Data Generation Creates realistic test fixtures and mock data Integration tests, API testing

AI Pair Programming Patterns

Three proven patterns for working effectively with AI coding assistants:

Pattern How It Works Best For
Scaffold & Refine Ask AI to generate the initial structure (boilerplate, scaffolding, plumbing), then refine the logic manually New features, boilerplate-heavy code, CRUD operations
Test-Driven with AI Write the test first (or describe the expected behavior), then ask AI to generate the implementation Well-defined requirements, algorithmic code, utilities
Rubber Duck Debugging Describe the problem to AI conversationally, share the code, and ask for analysis. The act of explaining often reveals the issue — AI adds a second perspective. Complex bugs, unfamiliar codebases, architectural questions

11.2 AI Across the SDLC

AI tools can accelerate every phase of the software development lifecycle beyond just writing code.

Planning

Activity AI Application Human Responsibility
Story Writing Generate draft user stories from feature descriptions or PRDs PM reviews, adjusts scope, validates business value
Acceptance Criteria Generate ACs from story descriptions PM and QE review for completeness and edge cases
Estimation Assist Analyze historical velocity data and suggest story point estimates Team discusses and decides — AI provides a starting point
Dependency Detection Scan epic/story descriptions for cross-team dependencies Tech Leads validate and log in JIRA

Quality Assurance

Activity AI Application Human Responsibility
Test Case Generation Generate test cases from ACs and code changes QE reviews for coverage gaps and relevance
Defect Prediction Identify high-risk code areas based on change frequency, complexity, and history QE prioritizes testing effort accordingly
Visual Regression Detect UI changes between builds using screenshot comparison QE reviews flagged diffs — false positives are common
Test Maintenance Identify and fix broken tests after refactoring QE validates fixes and confirms intent

Documentation

Activity AI Application Human Responsibility
Release Notes Draft release notes from merged PRs and JIRA items PM edits for customer-facing tone and accuracy
API Documentation Generate API docs from code, OpenAPI specs, or route definitions Developer reviews for accuracy and adds context
ADR Drafts Generate Architecture Decision Record drafts from discussion notes Architect reviews, refines reasoning, and publishes
Meeting Summaries Transcribe and summarize meeting recordings Facilitator reviews for accuracy before sharing

Metrics & Analysis

Activity AI Application Human Responsibility
Trend Analysis Identify patterns in DORA metrics, velocity, and defect rates over time Leadership interprets context and decides actions
Anomaly Detection Flag unusual metric movements (velocity drop, defect spike, SLA breach pattern) Team investigates root cause
Sprint Prediction Forecast sprint completion likelihood based on burndown patterns PM uses as input, not as commitment

See Section 9: Metrics & KPIs for the metrics AI can help analyze.
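
To make the anomaly-detection row concrete, here is a minimal sketch of one simple approach: a rolling z-score over sprint velocity. It is illustrative only; the window, threshold, and plain-Python implementation are assumptions, and commercial tools use richer models.

from statistics import mean, stdev

def flag_anomalies(velocities, window=6, z_threshold=2.0):
    """Return indices of sprints whose velocity deviates sharply from the trailing window."""
    flagged = []
    for i in range(window, len(velocities)):
        baseline = velocities[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(velocities[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# A sudden drop in the most recent sprint is flagged for investigation.
print(flag_anomalies([30, 32, 31, 29, 33, 30, 18]))  # -> [6]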


11.3 AI Governance & Standards

Tool Selection Framework

Before adopting an AI tool, evaluate it against these criteria. See Appendix G for the full evaluation template.

Criterion What to Evaluate Weight
Security & Privacy Where does data go? Is code sent to external servers? SOC2/ISO compliance? Critical
IP & Licensing Who owns AI-generated code? License implications? Indemnification? Critical
Integration Does it work with existing tools (IDE, CI/CD, JIRA)? High
Accuracy & Quality How often are suggestions correct? False positive rate? High
Cost Per-seat pricing, usage-based costs, ROI at team scale Medium
Adoption Friction How easy is onboarding? Does it disrupt existing workflows? Medium
Vendor Stability Company track record, funding, enterprise support? Medium

Prompt Engineering Guidelines

Effective use of AI tools depends on clear, specific prompts. Teams should follow these guidelines:

  1. Be specific — "Write a function that validates email addresses using RFC 5322 regex" beats "Write an email validator"
  2. Provide context — Share relevant code, types, and architectural constraints
  3. Specify constraints — Language version, framework conventions, error handling patterns
  4. Request tests — Ask for test cases alongside implementation
  5. Iterate — Treat the first response as a draft; refine with follow-up prompts
  6. Review critically — AI-generated code can be confident and wrong; always verify logic

Data Privacy & IP

Category Allowed Not Allowed
Open-source code Can be shared with AI tools
Internal application code Can be shared with enterprise-licensed tools (data not used for training) Do not share with free-tier or consumer AI products
Customer data Never share customer PII, credentials, or proprietary data with any AI tool
Security-sensitive code Do not share authentication logic, encryption keys, API secrets, or security infrastructure
Architecture & design docs Can be shared with enterprise-licensed tools Do not share with consumer AI products

Rule: If in doubt, do not share. Escalate to your security team or engineering manager.
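
One way to operationalize this rule is a lightweight pre-send check in whatever tooling routes code to AI services. The sketch below is illustrative only; the patterns are a tiny sample, and a real deployment would rely on a dedicated secret scanner plus enterprise tool policy rather than hand-rolled regexes.

import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),    # PEM private keys
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*\S+"),
]

def safe_to_share(snippet):
    """Refuse to send a snippet to an AI tool if it appears to contain secrets."""
    return not any(p.search(snippet) for p in SECRET_PATTERNS)

assert safe_to_share("def add(a, b): return a + b")
assert not safe_to_share("API_KEY = 'sk-123'")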

AI Output Review Requirements

All AI-generated output must be reviewed by a human before it is used in production.

Output Type Review Standard Reviewer
Production code Full code review (same as human-written code) Peer engineer via PR
Test code Review for correctness, coverage, and relevance QE or peer engineer
Documentation Review for accuracy, tone, and completeness Author (PM, developer, or architect)
Meeting summaries Review for accuracy before distribution Meeting facilitator
PR summaries Verify accuracy of the change description PR author

Licensing & Attribution

Treat AI-generated code like any other third-party contribution: confirm who owns the output under the tool's license terms, check whether suggestions can reproduce licensed code verbatim, and follow your legal team's guidance on attribution and indemnification. These are the same questions the IP & Licensing criterion in the Tool Selection Framework above asks during evaluation.


11.4 Measuring AI Impact

Productivity Metrics

Metric Description How to Measure
Suggestion Acceptance Rate % of AI suggestions accepted by engineers Tool analytics (Copilot dashboard, etc.)
PR Turnaround Time Time from PR open to merge (with AI review vs. without) Git analytics, compare before/after adoption
Test Coverage Delta Change in test coverage after AI test generation adoption CI coverage reports
Code Review Cycle Time Time from review request to approval Git analytics
Boilerplate Reduction Reduction in time spent on scaffolding and plumbing code Developer survey (quarterly)

ROI Framework

Calculate return on investment for AI tool adoption:

ROI = (Time Saved per Engineer per Week × Engineer Cost per Hour × Number of Engineers × 52 weeks)
      ÷ (Annual Tool Cost per Seat × Number of Seats + Implementation Cost)

Example ROI tiers (based on time saved per engineer per week):

Adoption Level Time Saved / Engineer / Week Annual Savings (50 engineers, $75/hr) Annual Tool Cost Net ROI
Low (passive use) 1–2 hours $195,000–$390,000 ~$100,000 2–4x
Medium (active use) 3–5 hours $585,000–$975,000 ~$100,000 6–10x
High (integrated workflows) 5–8 hours $975,000–$1,560,000 ~$150,000 6.5–10x

Note: Time saved does not automatically translate to more features shipped. Recaptured time may go to quality improvements, tech debt reduction, or learning — all of which are valuable.
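
As a worked example of the formula, the sketch below computes the ROI ratio for the Medium tier in the table above. The function signature and the collapsing of per-seat pricing into a single annual tool cost are simplifications for illustration.

def ai_tool_roi(hours_saved_per_week, cost_per_hour, engineers,
                annual_tool_cost, implementation_cost=0.0):
    """ROI ratio: annualized time savings divided by total tool cost."""
    annual_savings = hours_saved_per_week * cost_per_hour * engineers * 52
    return annual_savings / (annual_tool_cost + implementation_cost)

# Medium tier: 3-5 hours/week saved, 50 engineers at $75/hr, ~$100k tool cost.
print(ai_tool_roi(3, 75, 50, 100_000))  # 5.85
print(ai_tool_roi(5, 75, 50, 100_000))  # 9.75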

Adoption Tracking

Metric Description Target
Activation Rate % of engineers who have used an AI tool at least once > 90% within 30 days of rollout
Daily Active Users (DAU) % of engineers using AI tools daily > 60% after 90 days
Satisfaction Score Quarterly developer survey (1–5 scale) >= 3.5
Use Case Breadth Number of distinct SDLC phases where AI is actively used >= 3 phases within 6 months

11.5 AI Adoption Roadmap

A phased approach to adopting AI tools across the R&D organization.

┌─────────────────────────────────────────────────────────────────────────────────────┐
│                                 AI ADOPTION ROADMAP                                 │
├─────────────────────────────────────────────────────────────────────────────────────┤
│                                                                                     │
│  PHASE 1 (Q1-Q2)             PHASE 2 (Q3-Q4)             PHASE 3 (Q5-Q6)            │
│  Individual Productivity     Team Workflows              Org Intelligence           │
│  ┌───────────────────────┐   ┌───────────────────────┐   ┌───────────────────────┐  │
│  │ • Coding assistants   │   │ • AI code review      │   │ • AI metric analysis  │  │
│  │ • AI pair programming │   │ • AI test generation  │   │ • Sprint prediction   │  │
│  │ • Doc drafting        │   │ • CI/CD integration   │   │ • Defect prediction   │  │
│  │ • Meeting summaries   │   │ • Story/AC generation │   │ • Portfolio insights  │  │
│  └───────────────────────┘   └───────────────────────┘   └───────────────────────┘  │
│                                                                                     │
│  Focus: Get tools in         Focus: Embed AI into        Focus: AI-powered          │
│  engineers' hands            team processes              decision support           │
│                                                                                     │
│  Success: >90% activation    Success: >60% DAU           Success: AI insights       │
│  Satisfaction >= 3.5         3+ SDLC phases covered      used in QP & LS reviews    │
│                                                                                     │
└─────────────────────────────────────────────────────────────────────────────────────┘

Phase 1: Individual Productivity (Q1–Q2)

Activity Owner Success Criteria
Roll out coding assistants (Copilot, Claude Code) to all engineers Engineering Manager > 90% activation within 30 days
Conduct prompt engineering workshop (2 hours) Tech Lead / Champion All engineers attend, share prompt library
Establish data privacy guidelines (Section 11.3) Security + Engineering Policy published, acknowledged by all engineers
Enable AI meeting summaries for governance meetings Operations / TPM Summaries reviewed and distributed for LS and ER
Identify 2–3 AI champions per product area Director of Engineering Champions are actively coaching peers
Baseline productivity metrics (Section 11.4) Engineering Manager Pre-AI metrics captured for comparison

Phase 2: Team Workflows (Q3–Q4)

Activity Owner Success Criteria
Deploy AI code review (CodeRabbit) to all repos DevOps + Tech Leads AI review enabled on all active repos
Integrate AI test generation into CI pipeline QE + DevOps AI-generated tests run in CI for new PRs
Train PMs on AI-assisted story writing and AC generation PM Lead PMs using AI drafts for 50%+ of new stories
Automate release notes generation DevOps + PM Release notes drafted automatically from merged PRs
Measure team-level productivity impact Engineering Manager Compare PR turnaround, test coverage, review cycle time vs. Phase 1 baseline

Phase 3: Organizational Intelligence (Q5–Q6)

Activity Owner Success Criteria
Enable AI-powered metric analysis (trend detection, anomaly alerts) Engineering Manager + Jellyfish admin AI insights reviewed in Leadership Sync
Pilot sprint prediction for 2–3 teams Tech Leads Prediction accuracy within 15% of actual
Integrate defect prediction into QE workflow QE Lead High-risk areas identified before testing begins
Review and optimize AI tool portfolio Director of Engineering Underused tools retired, high-value tools expanded
Publish AI adoption retrospective AI Champions Lessons learned shared, roadmap updated for next year

11.6 AI Tool Landscape

A reference table of AI tools by category, mapped to the adoption roadmap.

Category Tool Purpose Phase License Model
Coding Assistant GitHub Copilot Inline code suggestions, chat, PR summaries Phase 1 Per-seat subscription
Coding Assistant Claude Code Code generation, refactoring, debugging, architecture Phase 1 Per-seat / usage-based
Coding Assistant Cursor AI-native IDE, full-file and multi-file editing Phase 1 Per-seat subscription
Code Review CodeRabbit Automated PR review, summaries, suggestions Phase 2 Per-seat subscription
Code Review GitHub Copilot PR Review AI-powered review comments on pull requests Phase 2 Included with Copilot Enterprise
Testing AI Test Generation (various) Unit test, integration test, and test data generation Phase 2 Varies
Documentation AI Doc Tools (various) Release notes, API docs, ADR drafts, meeting summaries Phase 1–2 Varies
Metrics & Analysis Jellyfish + AI layer Trend analysis, anomaly detection, sprint prediction Phase 3 Enterprise license
Quality AI Defect Prediction Identify high-risk code areas before testing Phase 3 Varies

Evaluation: Before adopting any tool, complete the AI Tool Evaluation Template. See Appendix G.


11.7 AI Agents and the System of Record

A live debate in engineering circles asks whether tools like JIRA are still needed in the AI era. Markdown task files plus AI agents (Claude Code, Cursor, Linear AI) can absorb a lot of what JIRA does — drafting stories, updating status, summarizing sprints, generating reports. The question is genuine and worth answering directly.

This operating model takes the position that AI changes the interface to the system of record, not the need for one.

Two Concepts, Often Conflated

Concept What It Is Examples
System of record The durable, queryable, shared state of work — what is committed, who owns it, what's in flight, what's done, what depends on what JIRA, Linear, GitHub Issues
Interface to the system of record How humans (and now AI agents) read, write, update, and summarize that state JIRA UI, Linear UI, CLI tools, Slack bots, AI agents

The "JIRA is dead" argument observes that interfaces to JIRA have historically been painful and that AI agents now offer a much better interface. Both observations are correct. The leap to "therefore the system of record is unnecessary" does not follow.

When Markdown + AI Is Sufficient

For solo engineers and teams of 1–5, a tasks.md or TODO.md file in the repo, combined with an AI coding assistant, is often enough. The team is small enough that everyone can read every file, dependencies are local, capacity is what one or two people can hold in their head, and there is no cross-team coordination problem to solve.

This operating model is not for those teams. See Section 10.8: When This Operating Model Doesn't Fit.

Why Structured State Is Required at Scale

Once an organization crosses ~10 engineers and 2+ teams, the system of record must support things that markdown files cannot:

Requirement Why It Needs Structured Shared State
Cross-team dependencies A markdown file in one team's repo is invisible to another team. Dependencies need a single queryable system.
Capacity planning Sprint-by-sprint capacity targets (per Section 3 Pre-Planning) require querying committed work across all teams in a single view.
Portfolio rollups A VP needs to see all in-flight Initiatives across all teams in one query. Markdown files spread across N repos do not roll up.
Quarterly investment mix Tracking the New Product / Architecture / KTLO / Customer Voice split requires structured tagging across all teams.
Audit trail and governance Quarterly Risk Review, Quarterly Demo, and stakeholder reporting require persistent history that survives file edits and branch deletions.
Executive reporting and CRR Commitment Reliability Ratio, DORA metrics, and Team Health Dashboard signals all aggregate over the system of record.
Onboarding new team members A new engineer needs a queryable view of "what is everyone working on" — not N markdown files spread across N repos.

How AI Fits In This Operating Model

AI's role is the interface to JIRA, not its replacement. Concretely:

AI Role What It Does What It Doesn't Replace
Writes JIRA items Drafts Stories from PRDs, generates ACs, suggests estimates (per Section 11.2: AI Across the SDLC) The structured Story itself, queryable by Tech Leads, EMs, and dashboards
Reads JIRA items Summarizes sprint status for standups, generates exec rollups, surfaces at-risk Epics The underlying state — humans need a queryable source of truth, not just the latest summary
Updates JIRA items Auto-transitions tickets on PR merge, adds context from commits The audit trail of who decided what and when
Recommends from JIRA + operating model Reads Team Health Dashboard signals, generates recommendations grounded in the relevant operating model section The judgment of the team — recommendations are inputs, not decisions

Practical Guidance

Principle: The system of record must be shared, structured, and queryable across teams. The interface to it should be AI-native. These are the same conclusion most engineering organizations will eventually reach — this operating model takes that position from day one.
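
As a concrete illustration of the "Writes JIRA items" row above, the sketch below persists an AI-drafted Story through Jira's REST API (the v2 issue-creation endpoint) using the third-party requests library. The site URL, credentials, project key, and story text are hypothetical placeholders; the point is that the agent produces the draft while the structured, queryable item lives in the system of record.

import requests

JIRA_SITE = "https://example.atlassian.net"   # hypothetical site
AUTH = ("agent@example.com", "api-token")      # hypothetical service account

def create_story(project_key, summary, description):
    """Persist an AI-drafted story into Jira and return its issue key."""
    resp = requests.post(
        f"{JIRA_SITE}/rest/api/2/issue",
        json={"fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Story"},
        }},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "TEAM-123"

# An agent drafts summary/description from a PRD, then calls:
# create_story("TEAM", "Validate email on signup", "As a user, ...")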


12. Onboarding Guide for New Teams

This section provides a phased, checklist-driven guide for teams newly adopting the operating model. Whether you are forming a new team, onboarding an existing team, or joining a team already using the model, follow these steps to get up and running.

12.1 Who This Guide Is For

Audience Use Case
Tech Leads & EMs Forming a new team under this operating model
PMs Joining or standing up a product team aligned to this framework
Existing teams Adopting the model after operating under a different methodology
Individual contributors Joining a team that already uses this model — understand expectations and rituals

12.2 Prerequisites — Before Day One

Complete these items before the team's first working day under the model.

What Who Owns It Reference
Team composition confirmed (roles filled or hiring plan in place) EM / Director Section 1: Well-Formed Teams
Product area assignment confirmed Director / VP Section 1: Team Groupings
JIRA Team Project created with standard work item types DevOps / EM Section 6: JIRA Structure & Tooling
Access to Portfolio Project and JPD granted DevOps / EM Section 6: JIRA Structure & Tooling
Scaling tier identified — which level of governance applies Director / EM Section 10: Scaling the Model

12.3 Week 1 — Team Foundations

Activity Owner Reference Template
Establish working agreements (core hours, communication norms, PR review SLA) Tech Lead + Team Section 1 Working Agreements Template
Define Definition of Done for Story, Bug, and Epic Tech Lead + QE Section 1 Definition of Done Template
Set up JIRA project — work item types, custom fields, workflows Tech Lead + DevOps Section 6
Identify partnership roles (Architect, TPM, SRE, User Learning) EM Section 1: Partnership Roles
Meet the governance cadence — understand Leadership Sync, Epic Refinement EM + Tech Lead Section 7: Governance
Review escalation paths and decision-making authority EM + Tech Lead Section 7: Escalation Paths

12.4 First Sprint — Establishing Rhythm

Activity Owner Reference
Run all 5 Scrum ceremonies with full agendas (Planning, Stand-Up, Grooming, Demo, Retro) Tech Lead Section 5: Day-to-Day Execution
Set initial sprint goal — keep scope conservative for a ramping team PM + Tech Lead Section 5
Plan at 50% capacity for first 1–2 sprints (onboarding ramp) Tech Lead Section 3: Capacity Planning
Set up Quarterly Risk Review (QRR) tracking in JIRA Tech Lead Section 6: Quarterly Risk Review
Begin DORA metrics baseline (deployment frequency, lead time, MTTR, change failure rate) Tech Lead + DevOps Section 9.2: DORA Metrics

12.5 First Quarter — Full Cycle

Activity Owner Reference Template
Complete capacity planning for next quarter Tech Lead + PM Section 3: Capacity Planning Capacity Planning Template
Conduct cognitive load assessment Tech Lead Section 3: Cognitive Load Cognitive Load Worksheet
Participate in Quarterly Planning (QP) Full Team Section 4: Quarterly Planning
Target Sprint 5 feature freeze / Sprint 6 enablement cadence PM + Tech Lead Section 2: Quarterly Cadence
Run first Epic Closing Ceremony (if shipping an Epic) PM + Tech Lead Section 5: Epic Closing Ceremony
Complete Team Health Dashboard self-assessment Full Team Section 9.5: Team Health Team Health Dashboard

12.6 Graduation Criteria

These signals indicate a team has fully adopted the operating model and is running at steady state.

Criterion How to Verify
Working agreements documented and in use Agreement file exists and is referenced in onboarding
Commitment Reliability Ratio tracked for at least 1 quarter Section 9.1 — data exists for 6 sprints
All Scrum ceremonies running with consistent attendance Sprint Planning, Stand-Up, Grooming, Demo, Retro all occurring
Capacity planning completed before QP Section 3 — plan submitted before Quarterly Planning
Team Health Dashboard baseline established Section 9.5 — at least 1 self-assessment completed
DORA metrics being tracked Section 9.2 — dashboard shows deployment frequency, lead time
Confidence vote ≥ 70% at QP Section 4 — team met the planning confidence threshold

12.7 Common Pitfalls

Pitfall Symptom Remedy
Over-engineering governance from day one Team drowns in process before establishing delivery rhythm Start with Section 10 "What to Keep" minimum viable governance
Skipping capacity planning Sprint 1 overcommitment, missed sprint goals Complete capacity planning before first sprint
Not establishing working agreements Inconsistent practices, team friction, unclear expectations Use the Working Agreements Template in Week 1
Ignoring the onboarding ramp Unrealistic Sprint 1 commitment, early burnout Plan at 50% capacity for first 1–2 sprints (Section 3)
Treating Sprint 6 as a feature sprint No enablement, poor quality handoff, support teams blindsided Reserve Sprint 6 for enablement (Section 2)
Not tracking Commitment Reliability Ratio early No accountability baseline, inability to measure improvement Start tracking from Sprint 1 (Section 9.1)

13. Roles & Responsibilities

This section consolidates the roles that govern how the operating model runs — who owns what, who decides what, and where each role fits in the organizational structure. Detailed process descriptions remain in their respective sections; this section serves as a single reference for "who does what."

Note: This section covers leadership and governance roles — the roles that steer, coordinate, and decide. Practitioner roles (Software Engineers, Quality Engineers, UX Designers, DevOps Engineers, SREs) are defined in Section 1: Well-Formed Teams.


13.1 Role Overview

The diagram below shows how roles relate to one another across the organization. Core team roles operate within a single team; leadership and partner roles span teams or product areas.

┌──────────────────────────────────────────────────────────────────────────┐
│                       ROLES IN THE OPERATING MODEL                       │
├──────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│   VP of Engineering                                                      │
│   └── Director of Engineering (per product area)                         │
│       ├── Engineering Manager (per team or shared across 2 teams)        │
│       │   └── Technical Team Lead (per team)                             │
│       │       └── [Well-Formed Team — Section 1]                         │
│       ├── Partner Architect (per product area)                           │
│       └── Partner TPM (per product area)                                 │
│                                                                          │
│   Product Manager (per team, dotted-line to product leadership)          │
│                                                                          │
│   ┌────────────────────────────────────────────────────────────────┐     │
│   │  Additional Roles at Large Scale (11–20+ teams)                │     │
│   │  • Release Train Engineer (RTE) — QP facilitation              │     │
│   │  • Program Manager — cross-area dependency tracking            │     │
│   │  • Chapter Lead — per-discipline standards                     │     │
│   │  • Architecture Council — org-wide technical governance        │     │
│   └────────────────────────────────────────────────────────────────┘     │
│                                                                          │
└──────────────────────────────────────────────────────────────────────────┘

13.2 Core Team Roles

These roles exist on every well-formed team regardless of organizational scale.

Product Manager (PM)

The Product Manager owns what the team builds and why, translating business strategy into a prioritized backlog and ensuring the team delivers customer value each quarter.

Dimension Detail
Scope Single team
At Scale One PM per team at all tiers; may span 2 teams at small scale
Glossary Appendix H — PM

Owns in this model:

Key touchpoints:

See Section 6: JIRA Hierarchy — PM owns Epics in the Portfolio Project alongside the Tech Lead.


Technical Team Lead (Tech Lead)

The Technical Team Lead owns how the team builds, driving technical decisions within the team, running day-to-day ceremonies, and serving as the first escalation point for team members. In the AI era, this role expands — less hands-on coding, more architecture, AI practice governance, and review of AI-assisted work.

Dimension Detail
Scope Single team
At Scale One Tech Lead per team at all tiers
Glossary Referred to as "Team Lead" in Section 7 Decision Authority

Owns in this model:

Key touchpoints:

See Section 7: Decision Authority Summary — the "Team Lead" row describes this role's authority scope.


Engineering Manager (EM)

The Engineering Manager owns the people dimension — hiring, performance, career growth, and team health — and ensures each team has the structure and support to deliver sustainably.

Dimension Detail
Scope One or more teams
At Scale Dedicated per team at medium/large scale; may be shared across 2 teams at small scale (Section 10.3)
Glossary Appendix H — EM

Owns in this model:

Key touchpoints:

See Section 7: Escalation Paths — EM is the decision authority for people concerns escalated by the Team Lead.


13.3 Leadership Roles

These roles provide direction and authority across teams and product areas.

Director of Engineering

The Director of Engineering holds product-line authority — the convergence point for people, architecture, and operations escalations within their area.

Dimension Detail
Scope Product area (multiple teams)
At Scale One per product area at medium/large scale; may double as EM at small scale
Glossary Defined in Section 7 Decision Authority

Owns in this model:

Key touchpoints:

See Section 7: Escalation Flow — all three escalation lanes (people, architecture, operations) converge at the Director.


VP of Engineering

The VP of Engineering holds organization-level authority — setting strategic direction, allocating investment across product lines, and making org-level trade-offs.

Dimension Detail
Scope R&D organization
At Scale One per R&D organization
Glossary Defined in Section 7 Decision Authority

Owns in this model:

Key touchpoints:

See Section 6: JIRA Hierarchy — VP / Senior Leadership owns the Strategy level in JPD.


13.4 Partner Roles

Partner roles are shared across one or more teams within a product area. They provide specialized expertise without belonging to a single team's headcount.

Partner Architect

The Partner Architect owns architecture direction across a product area — setting technical standards, reviewing designs, and guiding teams through novel or unpaved work.

Dimension Detail
Scope Product area (across teams)
At Scale One per product area at medium/large scale; architecture guidance may come from Tech Leads at small scale
Glossary Defined in Section 7 Decision Authority

Owns in this model:

Key touchpoints:

See Section 7: Escalation Paths — the Partner Architect is the first escalation point for architecture concerns.


Partner TPM (Technical Program Manager)

The Partner TPM owns delivery coordination across teams — resolving cross-team dependencies, managing program-level risk, and driving operational improvements.

Dimension Detail
Scope Product area (across teams)
At Scale One per product area at medium/large scale; coordination handled informally at small scale
Glossary Appendix H — TPM

Owns in this model:

Key touchpoints:

See Section 7: Escalation Paths — the Partner TPM is the first escalation point for operations concerns.


13.5 Roles by Scale

The table below summarizes which roles are present at each scaling tier. For full tier descriptions, see Section 10: Scaling the Model.

Role Small (2–4 teams) Medium (5–10 teams) Large (11–20+ teams)
Product Manager 1 per team (may span 2) 1 per team 1 per team
Tech Lead 1 per team 1 per team 1 per team
Engineering Manager Shared across 2 teams 1 per team 1 per team
Director of Engineering May double as EM 1 per product area 1 per product area
VP of Engineering 1 1 1
Partner Architect Tech Leads fill this need 1 per product area 1 per product area
Partner TPM Informal coordination 1 per product area 1 per product area
Release Train Engineer (RTE) Not present Not present 1 per org (Section 10.5)
Program Manager Not present Not present 1 per org (Section 10.5)
Chapter Lead Not present Not present 1 per discipline per area (Section 10.5)
Architecture Council Not present Not present Org-wide body (Section 10.5)

Reference: The Scaling Decision Matrix in Section 10.6 provides the complete decision framework for when to add each governance layer.


13.6 Where to Find More

This section cross-references where each topic is covered in detail throughout the model.

Topic Primary Reference Supporting References
Team composition and core roles Section 1: Well-Formed Teams Section 12.2: Prerequisites
JIRA ownership by role Section 6: JIRA Hierarchy
Escalation paths and decision authority Section 7: Governance
Large-scale roles (RTE, Chapter Lead, Program Manager, Architecture Council) Section 10.5: Large Scale Section 10.6: Scaling Decision Matrix
Role-based onboarding checklists Section 12: Onboarding Guide
AI rollout ownership by role Section 11.5: AI Adoption Roadmap
Role abbreviations and definitions Appendix H: Glossary

14. Running Your First Quarterly Planning

This section guides leaders and facilitators through organizing and running a Quarterly Planning event for the first time. It covers readiness, logistics, data gaps, a modified agenda, facilitation, and close-out — everything that differs when no one in the room has done this before.

14.1 Who This Guide Is For

Audience Use Case
Director / VP standing up QP for a new org You are adopting this operating model and need to plan your first QP event end-to-end
Director / VP transitioning an existing org Your teams have planned before under a different methodology and you are switching to this model
EM / Tech Lead designated as facilitator (no RTE) You have been asked to facilitate QP and need a concrete runbook
Leaders from a different methodology (SAFe, Kanban, etc.) You need to understand how this model's QP differs from what you have done before

Cross-references: Section 4: Quarterly Planning covers the steady-state QP process. Section 12: Onboarding Guide covers team-level onboarding into the model.


14.2 First-Time QP vs. Steady-State QP

This table explains why a first-time QP requires additional preparation and a modified approach.

Dimension Steady-State QP First-Time QP
Velocity data 3–6 sprints of historical data per team No history — must use proxies (Section 14.5)
Investment allocation Tuned from prior quarters Must be set from scratch using org-type benchmarks
Facilitator familiarity Facilitator has run QP before Facilitator is learning the process while leading it
Tool readiness JIRA Portfolio, JPD, and Quarterly Risk Review are configured Tools may need first-time setup and access provisioning
Attendee familiarity Teams know the flow, artifacts, and expectations Teams need orientation on the model, agenda, and artifacts
Confidence vote knowledge Teams understand the 70% gate and voting mechanics Confidence vote must be explained before it can be used
Communications needs Standard reminder cadence is sufficient Extended communications with model orientation materials
Dependency process knowledge Teams know how to log and negotiate cross-team dependencies Dependency identification and negotiation process must be taught

14.3 Readiness Checklist — Eight Weeks Out

Begin this checklist at least eight weeks before QP Day 1. Items are grouped by readiness area.

Roadmap Readiness

Item Owner Reference
Strategies and Initiatives entered in JPD PM / Director Section 3: Pre-Planning
Investment allocation set (even if provisional) VP / Director Section 3: Roadmap Inputs
Epics drafted at headline level (title + 1-sentence scope) PM / Tech Lead Section 6: JIRA Hierarchy

Organizational Readiness

Item Owner Reference
Facilitator designated Director / VP Section 14.7: Facilitator Runbook
Team leads and PMs identified for every team Director Section 1: Well-Formed Teams
Scaling tier confirmed Director / EM Section 10: Scaling the Model
QP duration decided (2 days standard; 3 days if >10 teams) Facilitator Section 4: Quarterly Planning
Full attendee list finalized and invitations sent Facilitator

Tool & Access Readiness

Item Owner Reference
JIRA Team Projects created for all teams DevOps / EM Section 6: JIRA Structure & Tooling
JIRA Portfolio Project configured DevOps / EM Section 6: JIRA Structure & Tooling
JPD access granted to PMs and leadership DevOps / EM Section 6: JIRA Structure & Tooling
Portfolio view configured with teams, sprints, and epics DevOps / EM
Quarterly Risk Review created (JIRA Risk type or Miro) Facilitator Section 4: Quarterly Planning
PowerPoint template prepared for team readouts Facilitator / PM

14.4 Logistics and Communications Plan

Logistics Checklist

Item Notes
Physical room booked (or virtual meeting link created) One main room + breakout spaces for each team
Virtual setup tested (camera, screen share, audio) Required for hybrid/remote; test during Bridge Sprint Friday
Breakout rooms available (physical or virtual) One per team for planning blocks
Printed capacity templates (or shared digital copies) Pre-filled with team member names and known PTO
Quarterly Risk Review visible to all participants Projected on screen or shared virtual whiteboard
Bridge Sprint Friday prep completed See Appendix B: Quarterly Calendar View

Communications Timeline

When Action Audience
4 weeks before QP Send announcement: what QP is, why we are doing it, dates, and link to this operating model All attendees
2 weeks before QP Send pre-planning reminder: Epics should be drafted, capacity data gathered, investment allocation shared PMs, Tech Leads, EMs
1 week before QP Send logistics + agenda: room details, schedule, what each team needs to prepare All attendees
3 days before QP Final reminder + tech check: confirm virtual setup, breakout room assignments, tool access All attendees
Day before QP Send what-to-bring list: laptop, JIRA access verified, draft epic list, capacity numbers, known risks All attendees

14.5 Solving First-Time Data Gaps

No Historical Velocity

Without sprint history, teams cannot use the standard capacity planning process (Section 3: Capacity Planning). Use one of the following proxy approaches and plan conservatively.

Approach How It Works When to Use
Analogy from similar teams Use velocity data from a comparable team (similar tech stack, domain complexity, team size) and apply a 20–30% discount for first-quarter friction You have access to data from another team or org
T-shirt sizing with planning factor Size Epics as S/M/L/XL at QP, convert to points using a team-agreed scale, then apply a 60% planning factor (plan only 60% of theoretical capacity) No comparable data exists; team is newly formed
Time-based decomposition Size Epics in engineer-weeks, estimate available capacity as team size × sprint count × weeks per sprint, discount that capacity by 30%, and fit Epics into the remainder Team prefers time-based estimation over points

Principle: It is better to under-commit and over-deliver in the first quarter. Velocity data from the first quarter becomes the baseline for the second QP.
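
The sketch below illustrates the second proxy (T-shirt sizing with a planning factor). The point values are a hypothetical team-agreed scale; only the 60% planning factor comes from the table above.

TSHIRT_POINTS = {"S": 3, "M": 8, "L": 20, "XL": 40}   # hypothetical scale
PLANNING_FACTOR = 0.60                                 # plan 60% of theory

def first_quarter_budget(theoretical_points_per_sprint, sprints=6):
    """Discounted point budget for a team with no velocity history."""
    return int(theoretical_points_per_sprint * sprints * PLANNING_FACTOR)

def commit(epic_sizes, budget):
    """Admit epics in priority order until the discounted budget is spent."""
    committed, spent = [], 0
    for size in epic_sizes:
        if spent + TSHIRT_POINTS[size] <= budget:
            committed.append(size)
            spent += TSHIRT_POINTS[size]
    return committed

budget = first_quarter_budget(30)                      # -> 108 points
print(budget, commit(["XL", "XL", "L", "L", "M", "S"], budget))
# -> 108 ['XL', 'XL', 'L', 'M']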

First-Time Investment Allocation

If the organization has no prior allocation history, use these starting points by org type and adjust each quarter.

Org Type New Product Architecture KTLO Customer Voice Rationale
Greenfield / startup 50–60% 20–25% 5–10% 5–10% Maximize feature velocity; minimal maintenance burden
Growth-stage SaaS 35–40% 15–20% 20–25% 15–20% Balance growth with emerging operational needs
Mature / enterprise 25–30% 15–20% 30–35% 15–20% Significant maintenance and customer support load
Migration / modernization 15–20% 40–50% 20–25% 10–15% Architecture dominates while platform is rebuilt

Reference: Section 3: Roadmap Inputs provides the steady-state benchmarks. The principle is to make the investment conversation explicit, not perfect — a provisional split is far better than no split.

Existing orgs: If you have sprint data from a prior methodology (SAFe, Kanban, etc.), use it. The proxy approaches above are fallbacks for teams with no data at all.


14.6 First-Time QP Agenda

The first-time agenda builds on the steady-state agenda in Section 4. The following tables show the additions and modifications for first-time QP only.

Day 1 — Additions

Time Block Addition Duration Purpose
Opening Extended model orientation +20 min Walk through the operating model overview, QP purpose, and expected artifacts
After Opening Tool orientation block 30 min Live walkthrough of JIRA Portfolio, JPD, and the Quarterly Risk Review — show where plans are entered
Mid Planning Block Facilitator check-in 30 min Facilitator visits each team to answer questions and unblock confusion
Readout Extended readout time 15 min/team (vs. 10) Extra time for Q&A since teams and leaders are learning the format

Day 2 — Additions

Time Block Addition Duration Purpose
Opening Recap of Day 1 themes 15 min Summarize common risks, dependency patterns, and open questions from Day 1
Before Confidence Vote Confidence vote explanation 10 min Explain the mechanics and the 70% gate before teams vote for the first time

Confidence Vote Mechanics

The confidence vote measures each team's belief that they can deliver their plan as committed.

  1. Private team discussion (5 min) — Each team discusses their plan's risks and completeness privately.
  2. Individual vote — Each team member selects a number from 1 (no confidence) to 5 (full confidence).
  3. Report out — The facilitator collects each team's average score and range (e.g., "Team Alpha: 3.8, range 3–5").
  4. 70% gate — An average of 3.5 or higher (70%) means the team's plan is accepted. Below 3.5, the team must identify what needs to change — rescope, remove epics, or resolve dependencies — and re-vote.

Reference: Section 4: Day 2 describes the steady-state confidence vote. The mechanics above add facilitation detail for first-time events.
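
For facilitators tallying votes, a minimal sketch of the gate arithmetic; the votes match the Team Alpha report-out example above.

def confidence_gate(votes, threshold=3.5):
    """Average, range, and pass/fail against the 70% gate (3.5 of 5)."""
    avg = sum(votes) / len(votes)
    return round(avg, 1), f"{min(votes)}-{max(votes)}", avg >= threshold

print(confidence_gate([4, 3, 4, 5, 3]))  # -> (3.8, '3-5', True)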

Team Readout Format

Each team presents using the following structure. First-time QP allows 15 minutes per team (vs. 10 in steady-state).

Segment Content Time
Team overview Team name, product area, team size 1 min
Investment split Planned allocation across New Product, Architecture, KTLO, Customer Voice 2 min
Epic plan Epics committed for the quarter with target sprints 5 min
Risks and dependencies Top risks (logged on Quarterly Risk Review) and cross-team dependencies 3 min
Confidence score Team's average confidence and range 1 min
Q&A Questions from leadership and other teams 3 min

14.7 Facilitator Runbook

Who Facilitates (Without an RTE)

At small and medium scale (Section 10: Scaling the Model), there is no Release Train Engineer. QP facilitation instead falls to a designated leader: typically the Director, the Partner TPM where that role exists, or an EM or Tech Lead appointed as facilitator (see Section 14.1 and the "Facilitator designated" item in Section 14.3).

Critical rule: The facilitator must NOT be simultaneously planning with a team. Facilitation requires full-time attention during QP.

Responsibilities by Phase

Phase Facilitator Responsibilities
Before QP (8–4 weeks) Drive the readiness checklist (Section 14.3); send communications (Section 14.4); confirm tool setup
Opening (Day 1) Deliver model orientation; set expectations for Day 1 outcomes; explain the Quarterly Risk Review
Planning Blocks Visit each team at least once per block; answer process questions; do NOT make planning decisions for teams
Readouts Timekeep strictly; capture cross-team dependencies on the Quarterly Risk Review; flag scope risks to senior leader
Day 2 Opening Summarize Day 1 themes; highlight unresolved dependencies
Confidence Vote Explain the mechanics; collect scores; identify teams below 70% and facilitate rescoping
Post-QP Drive the close-out checklist (Section 14.8)

Managing Overruns

Situation Action
Team stuck after 90 min in planning block Facilitator intervenes: identify the blocker, escalate to senior leader during the block (not at readout)
Readout running over time Enforce 10-min timebox for scope negotiation at readout; table remaining items for offline follow-up
Epic not ready for commitment Remove unready Epics from the quarter plan rather than stalling the event — they become candidates for next quarter
Dependency between two teams unresolved Use the dependency negotiation process below

Cross-Team Dependency Negotiation

When two teams identify a dependency during QP, follow this decision tree:

  1. Both teams align on scope and timing → Log as a Risk Owned item on the Quarterly Risk Review with both team names. No further action needed during QP.
  2. Teams disagree on scope or timing → Escalate to the facilitator, who brings both teams' leads together (with the senior leader if scope must change), drives a resolution during the planning block rather than at readout, and logs the outcome on the Quarterly Risk Review.

Dependency Log Template

Dependency ID Requesting Team Providing Team What Is Needed By When (Sprint) Status Resolution
DEP-001 (example) (example) API endpoint for X Sprint 3 Aligned Risk Owned
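
Once dependencies are logged this way, they become queryable during execution. A hedged sketch using Jira's issue-search API (via the third-party requests library) follows; the project key (PORT), the Risk issue type, and the label are assumptions mirroring the Quarterly Risk Review setup described earlier.

import requests

JIRA_SITE = "https://example.atlassian.net"    # hypothetical
AUTH = ("facilitator@example.com", "api-token")

# Assumes dependencies live as Risk items labelled "cross-team-dependency".
JQL = ('project = PORT AND issuetype = Risk AND labels = "cross-team-dependency" '
       'AND statusCategory != Done ORDER BY duedate ASC')

resp = requests.get(f"{JIRA_SITE}/rest/api/2/search",
                    params={"jql": JQL, "fields": "summary,duedate,status"},
                    auth=AUTH, timeout=30)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    fields = issue["fields"]
    print(issue["key"], fields["status"]["name"], fields.get("duedate"), fields["summary"])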

Hybrid/Virtual Notes

For hybrid or remote QP: test cameras, screen share, and audio before Day 1 (see the logistics checklist in Section 14.4); give each team its own virtual breakout room; keep the Quarterly Risk Review on a shared virtual whiteboard visible from every room; and have the facilitator rotate through breakout rooms during planning blocks rather than waiting to be asked.

14.8 Post-QP Close-Out Checklist

Complete all items within the first week after QP Day 2.

Item Owner When
Confirm all team plans are entered in JIRA Portfolio Tech Leads / PMs Day 2 + 1
Log all risks from the Quarterly Risk Review into JIRA (Risk type) Facilitator Day 2 + 1
Record confidence vote scores for each team Facilitator Day 2
Update JPD Timeline with committed Initiatives and Epics PMs Day 2 + 2
Send QP summary email (key decisions, committed Epics, top risks) Facilitator Day 2 + 2
Schedule Sprint 1 Planning for each team Tech Leads / EMs Day 2 + 3
Archive capacity numbers used during QP Tech Leads Day 2 + 3
Apply FixVersion (QP#) to all committed Epics in JIRA Tech Leads / PMs Day 2 + 3
Document actual investment allocation per team PMs / Director Day 2 + 5
Store readout PowerPoint in team shared drive PMs Day 2 + 3
Schedule first Leadership Sync of the quarter Director / VP Day 2 + 5

14.9 Primer for First-Time Attendees

If this is your first Quarterly Planning event, this section answers the most common questions. Share this with your team before QP Day 1.

Question Answer
What's the goal of QP? Every team leaves with a committed delivery plan for the next 6 sprints, with risks and dependencies visible to the entire organization.
What do I bring? Your laptop with JIRA access verified, your team's draft epic list, capacity numbers (available days, known PTO), and any known risks or dependencies.
What if our plan isn't ready by the end of Day 1? That's expected. Day 1 produces a draft plan. Day 2 is for refinement, dependency resolution, and final commitment.
What's the confidence vote? Each team member rates their confidence (1–5) that the plan is achievable. The team needs an average of 3.5+ (70%) for the plan to be accepted. See Section 14.6.
What if we can't reach 70% confidence? The team works with leadership to rescope — remove or defer Epics until the plan is realistic. Low confidence is a signal, not a failure.
What happens after QP? Sprint 1 starts. Your team executes against the committed plan using standard sprint ceremonies. See Section 5: Day-to-Day Execution.
What if we have no velocity data? Use the proxy approaches in Section 14.5. Plan conservatively — under-commit and build a real baseline this quarter.
Will this get easier? Yes. The first QP is the hardest because everything is new. By the second quarter, you will have velocity data, tool familiarity, and process muscle memory.

15. Customer Ideas Pipeline

This section documents how customer enhancement requests and capability suggestions flow from capture through evaluation into the roadmap. It covers the end-to-end lifecycle of an idea in Jira Product Discovery (JPD) — from initial capture to promotion into an Initiative and eventually an Epic.

Scope: This pipeline handles enhancement requests and capability suggestions only. Customer-reported defects (Bugs) follow their own SLA-driven lifecycle documented in Section 6: JIRA Structure & Tooling.

15.1 Who This Section Is For

Audience Use Case Cross-Reference
Product Manager Owns the idea backlog, facilitates review meetings, makes promotion decisions Section 7: Governance, Section 9: Metrics
Director / VP Sets Customer Voice allocation, approves high-impact promotions, reviews pipeline health Section 3: Pre-Planning, Section 7: Governance
CSM / Support Lead Captures ideas from support tickets and customer calls, provides demand signal Section 6: JIRA Structure & Tooling
Sales / Pre-Sales Submits feature requests from prospects and customers, attaches revenue context Section 3: Pre-Planning
Tech Lead Assesses technical feasibility and effort during evaluation Section 6: JIRA Structure & Tooling, Section 7: Governance

15.2 Idea Sources

Ideas can originate from any customer-facing or internal channel. The principle is: capture everything; filtering happens during evaluation.

Source Description Who Captures How It Enters JPD
Support Tickets Enhancement requests and feature suggestions from customer support cases CSM / Support Lead CSM creates idea in JPD, links to support ticket
Customer Calls / QBRs Feature requests and pain points surfaced during customer meetings or quarterly business reviews PM / CSM PM or CSM creates idea in JPD after the meeting with notes
Sales Feedback Feature gaps identified during sales cycles, prospect requests, competitive gaps Sales / Pre-Sales Sales submits via a JPD intake form or Slack channel; PM triages into JPD
Internal Observation Product team identifies UX improvements, workflow gaps, or capability opportunities PM / UX Designer PM creates idea directly in JPD
Product Analytics Usage data reveals underused features, drop-off points, or adoption barriers PM PM creates idea in JPD with supporting data
Executive / Strategic Top-down directives from leadership based on market shifts, partnerships, or strategic bets VP / Director Director or PM creates idea in JPD tagged as "Strategic"

Tip: Establish a shared Slack channel (e.g., #product-ideas) where anyone can submit raw ideas. The PM triages these into JPD on a weekly basis to prevent ideas from being lost.

15.3 Idea Lifecycle in JPD

Ideas move through a defined set of statuses in JPD. Every status transition should be documented with a comment explaining the rationale.

                                  ┌──────────┐
                                  │  Parked  │
                                  └──────────┘
                                       ▲
                                       │ (viable but not now)
                                       │
┌─────┐     ┌──────────────┐     ┌───────────┐     ┌──────────┐     ┌────────────────────────┐
│ New │────>│ Under Review │────>│ Evaluated │────>│ Accepted │────>│ Promoted to Initiative │
└─────┘     └──────────────┘     └───────────┘     └──────────┘     └────────────────────────┘
                                       │
                                       │ (does not meet criteria)
                                       ▼
                                  ┌──────────┐
                                  │ Declined │
                                  └──────────┘

Status Definitions

Status Description Who Moves It Criteria to Advance
New Idea has been captured but not yet reviewed Auto-set on creation PM picks it up for triage
Under Review PM is actively gathering context — checking for duplicates, adding customer data, requesting feasibility input PM Sufficient context exists to score the idea
Evaluated Idea has been scored against evaluation criteria (Section 15.4) PM with Tech Lead input Score meets Accepted, Parked, or Declined threshold
Accepted Idea is approved for promotion but awaiting capacity or a planning window PM / Director Capacity is available or next QP includes it
Promoted to Initiative Idea has been converted into an Initiative in JPD and enters the standard roadmap process PM / Director Initiative created, idea linked, status is terminal
Parked Idea is viable but deprioritized — will be revisited in a future review cycle PM Rationale documented; review date set
Declined Idea does not meet criteria and will not be pursued PM / Director Rationale documented and communicated to submitter

Transparency rule: Every Parked or Declined idea must have a documented rationale. When the idea originated from a customer, the CSM or PM communicates the decision back so the customer knows their input was heard and evaluated.
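
For teams automating the pipeline, the lifecycle can be expressed as a small transition map. This sketch mirrors the diagram above; the assumption that Parked ideas re-enter via Under Review is an illustrative reading of "revisited in a future review cycle".

TRANSITIONS = {
    "New": {"Under Review"},
    "Under Review": {"Evaluated"},
    "Evaluated": {"Accepted", "Parked", "Declined"},
    "Accepted": {"Promoted to Initiative"},
    "Parked": {"Under Review"},        # assumed re-entry on revisit
    "Promoted to Initiative": set(),    # terminal
    "Declined": set(),                  # terminal
}

def move(current, target, rationale):
    """Validate a transition; every move requires a documented rationale."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current!r} -> {target!r}")
    if not rationale.strip():
        raise ValueError("every status transition needs a rationale comment")
    return target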

15.4 Evaluation Criteria

Ideas are scored on a 1–5 scale across six weighted criteria. The weighted average determines the disposition.

Criterion Weight 1 (Low) 3 (Medium) 5 (High)
Customer Impact High Affects a single user workflow Affects a segment of customers Affects most or all customers significantly
Strategic Alignment High No connection to current strategies Loosely related to a strategy Directly advances a top-3 strategy
Revenue / Retention Signal Medium No revenue or retention link Mentioned in a few renewals or deals Blocking multiple deals or cited as churn risk
Frequency / Demand Medium Single request 3–5 independent requests 10+ requests or validated by analytics
Effort Estimate Medium > 1 quarter of team effort 1 quarter for a team < 1 sprint for a team
Technical Feasibility Low Requires new platform capabilities or major re-architecture Moderate changes across multiple services Straightforward extension of existing architecture

Decision Thresholds

Disposition Criteria
Promote (→ Initiative) Weighted average ≥ 3.5 and both High-weight criteria ≥ 3
Accept (→ queue for promotion) Weighted average ≥ 3.0
Park Weighted average 2.0–2.9
Decline Weighted average < 2.0 or any High-weight criterion = 1

Note: Scores are a decision aid, not a formula. The review team may override thresholds with documented justification — for example, a strategically critical idea with low demand today may still be promoted.
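
A minimal sketch of the scoring arithmetic follows. The numeric weights (High=3, Medium=2, Low=1) are assumptions; the thresholds come from the table above.

WEIGHTS = {                             # assumed High=3, Medium=2, Low=1
    "customer_impact": 3, "strategic_alignment": 3,
    "revenue_retention": 2, "frequency_demand": 2, "effort": 2,
    "technical_feasibility": 1,
}

def disposition(scores):
    """Weighted average (1-5) and the threshold disposition."""
    avg = sum(scores[c] * w for c, w in WEIGHTS.items()) / sum(WEIGHTS.values())
    high = (scores["customer_impact"], scores["strategic_alignment"])
    if avg < 2.0 or 1 in high:
        return avg, "Decline"
    if avg >= 3.5 and min(high) >= 3:
        return avg, "Promote"
    if avg >= 3.0:
        return avg, "Accept"
    return avg, "Park"

print(disposition({"customer_impact": 4, "strategic_alignment": 5,
                   "revenue_retention": 3, "frequency_demand": 3,
                   "effort": 2, "technical_feasibility": 4}))
# -> (3.6153846153846154, 'Promote')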

15.5 Review Cadence

Idea Review Meeting

Agenda:

  1. Triage new ideas (15 min) — Move New → Under Review, merge duplicates, assign context owners
  2. Evaluate queued ideas (25 min) — Score ideas that are Under Review, apply thresholds
  3. Promotion decisions (10 min) — Confirm Accepted ideas ready for Initiative creation
  4. Backlog health check (10 min) — Review Parked ideas older than 2 quarters, check overall pipeline age

How Ideas Surface in Other Governance Meetings

| Governance Meeting | How Ideas Appear | Cross-Reference |
|---|---|---|
| Weekly Leadership Sync | PM reports pipeline summary: new idea count, promotions this month, backlog age trend | Section 7: Governance |
| Bi-Weekly Epic Refinement | Newly promoted Initiatives are broken into Epics and enter the refinement queue | Section 7: Governance |
| Quarterly Planning | Promoted ideas (now Initiatives with scoped Epics) compete for capacity alongside other roadmap items | Section 4: Quarterly Planning |

15.6 Idea to Initiative to Epic — Promotion Process

When an idea is Accepted and ready for promotion, it follows these steps:

| Step | Action | Owner |
|---|---|---|
| 1 | Accept the idea — Confirm the idea meets promotion criteria and has Director approval | PM |
| 2 | Create an Initiative in JPD — Link the Initiative to the originating idea and the relevant Strategy | PM |
| 3 | Scope the Initiative — Define outcomes, success metrics, and high-level requirements | PM + Tech Lead |
| 4 | Assign to a quarter — Slot the Initiative into the next available QP based on capacity and priority | PM + Director |
| 5 | Break into Epics — Decompose the Initiative into delivery-sized Epics in the Portfolio Project | PM + Tech Lead |
| 6 | Enter Epic Refinement — Epics are refined, estimated, and dependency-mapped in the bi-weekly Epic Refinement meeting | PM + Tech Lead |
| 7 | Plan at QP — Epics are committed to teams during Quarterly Planning | Full team |

RACI for Idea Promotion

| Activity | PM | Tech Lead | Director | CSM |
|---|---|---|---|---|
| Capture idea | A, R | — | — | R |
| Evaluate and score | R | C | I | C |
| Approve promotion | R | C | A | I |
| Create Initiative | R | C | I | — |
| Scope and break into Epics | A, R | R | I | — |
| Assign to quarter | R | C | A | — |
| Communicate decision to customer | I | — | — | R |

R = Responsible, A = Accountable, C = Consulted, I = Informed

15.7 Connecting to the Customer Voice Investment Category

Not all customer ideas consume Customer Voice capacity. The investment category depends on the nature of the idea, not its origin.

| Idea Type | Investment Category | Rationale |
|---|---|---|
| Enhancement requests (feature improvements, UX improvements) | Customer Voice | Directly driven by customer demand |
| Strategy-aligned capability suggestions | New Product | Funded as part of strategic roadmap even though customer-originated |
| Architecture or platform requests (API access, integrations) | Architecture / Tech Debt | Technical investment regardless of customer origin |
| Customer-reported defects | KTLO — separate SLA-driven lifecycle | Not part of this pipeline — see Section 6 |

Demand vs. Supply: This pipeline manages the demand side — which ideas deserve investment. Section 3: Pre-Planning manages the supply side — how much capacity each investment category receives. The two connect during QP when promoted ideas (as Epics) compete for allocated capacity.

15.8 JPD Fields for Ideas

Configure the following fields in JPD for the Idea issue type to support the pipeline process:

| Field | Type | Description |
|---|---|---|
| Summary | Text | Title of the idea — concise and outcome-oriented |
| Description | Rich text | Detailed description including the customer problem, desired outcome, and any supporting evidence |
| Source | Dropdown | Where the idea originated: Support Ticket, Customer Call/QBR, Sales Feedback, Internal, Analytics, Strategic |
| Customer(s) | Multi-select / text | Customer name(s) associated with the idea — supports demand tracking |
| Submitter | User picker | The person who captured the idea in JPD |
| Status | Workflow | Auto-managed: New → Under Review → Evaluated → Accepted / Parked / Declined → Promoted to Initiative |
| Impact Score | Number | Weighted average from the evaluation criteria (1.0–5.0) |
| Linked Initiative | Link | JPD link to the Initiative created when the idea is promoted |
| Decline / Park Reason | Text | Required when status is Declined or Parked — explains the rationale |
| Votes / Demand Count | Number | Count of independent requests or customer mentions for the same idea |
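
Teams that script against these fields (exports, reporting, dashboard feeds) can mirror them in a small record type. The sketch below is illustrative only; the names follow the table above rather than any official JPD API schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Source(Enum):
    """Where the idea originated (the Source dropdown above)."""
    SUPPORT_TICKET = "Support Ticket"
    CUSTOMER_CALL_QBR = "Customer Call/QBR"
    SALES_FEEDBACK = "Sales Feedback"
    INTERNAL = "Internal"
    ANALYTICS = "Analytics"
    STRATEGIC = "Strategic"

@dataclass
class Idea:
    summary: str                                # concise, outcome-oriented title
    description: str                            # problem, desired outcome, evidence
    source: Source
    submitter: str                              # who captured the idea in JPD
    customers: list[str] = field(default_factory=list)  # supports demand tracking
    status: str = "New"                         # managed by the workflow
    impact_score: Optional[float] = None        # weighted average, 1.0-5.0
    linked_initiative: Optional[str] = None     # set when the idea is promoted
    decline_park_reason: Optional[str] = None   # required for Declined / Parked
    votes: int = 1                              # independent requests for this idea
```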

15.9 Metrics and Health Signals

Track these metrics to ensure the pipeline is healthy and ideas are flowing through the system without stalling.

| Signal | Green | Amber | Red |
|---|---|---|---|
| Idea Backlog Age | 90% of ideas < 60 days old | 70–89% < 60 days | > 30% of ideas older than 60 days |
| Idea-to-Initiative Conversion Rate | 15–30% of evaluated ideas promoted per quarter | 10–14% or 31–50% | < 10% (too selective) or > 50% (not filtering) |
| Customer Voice Allocation Utilization | 80–100% of allocated Customer Voice capacity consumed | 60–79% consumed | < 60% (pipeline starved) or > 100% (over-committed) |
| Feedback Loop Closure | 90%+ of declined/parked ideas have customer communication within 2 weeks | 70–89% communicated | < 70% — customers are not hearing back |
| Idea Review Cadence Adherence | Review meeting held every scheduled cycle | 1 missed meeting per quarter | 2+ missed meetings per quarter |

Cross-reference: Customer Voice allocation utilization connects directly to the investment allocation metric in Section 9: Metrics & KPIs. If utilization is consistently low, either the pipeline is under-producing promotions or the allocation percentage should be redistributed.
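
As a concrete example, the backlog-age signal can be computed from a JPD export in a few lines; the other signals follow the same pattern. The sketch below mirrors the thresholds in the table, and the export format is an assumption.

```python
from datetime import date

def backlog_age_signal(created: list[date], today: date) -> str:
    """RAG status for Idea Backlog Age per the thresholds above."""
    fresh = sum(1 for d in created if (today - d).days < 60)
    pct_fresh = 100 * fresh / len(created)
    if pct_fresh >= 90:        # 90% of ideas < 60 days old
        return "Green"
    if pct_fresh >= 70:        # 70-89% < 60 days
        return "Amber"
    return "Red"               # > 30% of ideas older than 60 days

print(backlog_age_signal([date(2026, 1, 5), date(2025, 9, 1)], date(2026, 2, 1)))  # Red
```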


16. Tools Summary

| Tool | Purpose |
|---|---|
| JIRA | Work management, sprint tracking, team projects |
| JIRA Portfolio | Epic tracking, quarterly delivery plans |
| Jira Product Discovery (JPD) | Strategies, Initiatives, customer ideas, roadmap views, readout dashboards |
| ScriptRunner (JIRA) | SLA automation for Bugs and Vulnerabilities |
| Dragonboat | Roadmapping (alternative) |
| Miro | Roadmapping, collaboration, visual planning |
| PowerPoint | Quarterly Planning readout presentations |
| Jellyfish | Engineering metrics and intelligence |
| GitHub Copilot | AI coding assistant — inline suggestions, chat, code completion |
| Claude Code | AI pair programming — code generation, refactoring, debugging, test writing |
| CodeRabbit | AI-powered code review — automated PR analysis and feedback |
| AI Testing Tools | AI test generation — unit tests, integration tests, visual regression |


Appendix A: Operating Model — End-to-End Flow

┌─────────────────────────────────────────────────────────────────────────────────────────┐
│                         R&D OPERATING MODEL — END-TO-END FLOW                           │
└─────────────────────────────────────────────────────────────────────────────────────────┘

  ┌──────────────────────┐     ┌────────────────────┐     ┌───────────────────────────┐     ┌──────────────────┐
  │    PRE-PLANNING      │     │ QUARTERLY PLANNING │     │       EXECUTION           │     │  BRIDGE SPRINT   │
  │  (3 sprints before)  │────>│    (2 days)        │────>│   (Sprints 1-6)           │────>│   (Buffer)       │
  └──────────────────────┘     └────────────────────┘     └───────────────────────────┘     └──────────────────┘
         │                            │                            │                             │
         ├─ Build Roadmap             ├─ Day 1:                    ├─ Sprint Ceremonies:         ├─ Hackathon
         ├─ Investment Mix            │  ├─ Leadership Welcome     │  ├─ Sprint Planning         ├─ Spikes
         │  ├─ New Product            │  ├─ 4hr Team Planning      │  ├─ Daily Stand-Up          ├─ KTLO
         │  ├─ Architecture           │  ├─ Risk/Dep ID (JIRA)     │  ├─ Backlog Grooming        ├─ Metrics Review
         │  ├─ KTLO                   │  ├─ Leadership Readout     │  ├─ Sprint Demo             │  ├─ Commitment Reliability Ratio
         │  └─ Customer Voice         │  └─ Quarterly Risk Review  │  └─ Sprint Retro            │  ├─ DORA
         ├─ Capacity Planning         │                            │                             │  ├─ Quality
         ├─ Velocity Review           ├─ Day 2:                    ├─ Governance:                │  └─ Jellyfish
         ├─ Incoming Rates            │  ├─ Continue Planning      │  ├─ Weekly Leadership Sync  ├─ Final Pre-Planning
         └─ Cognitive Load            │  ├─ Confidence Vote (≥70%) │  ├─ Bi-Weekly Epic Refine   │    (next quarter)
                                      │  └─ Re-plan if < 70%       │  └─ Quarterly Risk Review   │
                                      │                            │     Updates                 │
                                      └─ Artifacts:                ├─ Quarterly Demo             │
                                         ├─ JIRA Portfolio Plans   │  (End of Sprint 6)          │
                                         └─ PowerPoint Readouts    │                             │
                                                                   └─ JIRA Tracking:             │
                                                                      ├─ FixVersion (Commitment  │
                                                                      │  Reliability Ratio)      │
                                                                      ├─ SLA via ScriptRunner    │
                                                                      └─ Quarterly Risk Review   │
                                                                         (Risks)                 │
                                                                                                 │
  ┌──────────────────────────────────────────────────────────────────────────────────────────────┘
  │  REPEAT ──> Next Quarter Pre-Planning ──> Next QP ──> ...
  └──────────────────────────────────────────────────────────────────────────────────────────────

Appendix B: Quarterly Calendar View (13-Week Quarter)

┌─────────────────────────────────────────────────────────────────────────────────────────┐
│                     QUARTERLY CALENDAR VIEW — 13 WEEKS                                  │
├──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┤
│ Wk 1 │ Wk 2 │ Wk 3 │ Wk 4 │ Wk 5 │ Wk 6 │ Wk 7 │ Wk 8 │ Wk 9 │Wk 10 │Wk 11 │Wk 12 │Wk 13 │
├──────┴──────┼──────┴──────┼──────┴──────┼──────┴──────┼──────┴──────┼──────┴──────┼──────┤
│  Sprint 1   │  Sprint 2   │  Sprint 3   │  Sprint 4   │  Sprint 5   │  Sprint 6   │ SP 7 │
├─────────────┼─────────────┼─────────────┼─────────────┼─────────────┼─────────────┼──────┤
│  FEATURE    │  FEATURE    │  FEATURE    │  FEATURE    │  FEATURE    │ ENABLEMENT  │      │
│  DEVELOPMENT│  DEVELOPMENT│  DEVELOPMENT│  DEVELOPMENT│  COMPLETE   │ No new      │      │
│             │             │             │             │  Deploy!    │ features    │      │
├─────────────┼─────────────┼─────────────┼─────────────┼─────────────┼─────────────┼──────┤
│             │             │             │             │Pre-Plan Next│ Docs/Train  │      │
│             │             │             │             │Quarter Start│ QD Prep     │      │
├─────────────┼─────────────┼─────────────┼─────────────┼─────────────┼─────────────┼──────┤
│ MEETINGS    │ MEETINGS    │ MEETINGS    │ MEETINGS    │ MEETINGS    │ MEETINGS    │      │
│ LS  ER      │ LS          │ LS  ER      │ LS          │ LS  ER      │ LS  ER  QD  │      │
└─────────────┴─────────────┴─────────────┴─────────────┴─────────────┴─────────────┴──────┘

KEY:  LS = Leadership Sync (weekly)    ER = Epic Refinement (bi-weekly)
      QD = Quarterly Demo              SP 7 = Bridge Sprint (Buffer)
      Pre-Plan = Pre-planning for NEXT quarter begins Sprint 4-5 of current quarter
      Sprint 5 = Feature Complete milestone (deploy to customers)
      Sprint 6 = Enablement (docs, support training, marketing, hardening)

BRIDGE SPRINT BREAKDOWN:
┌──────────────────────────────────────────────┐
│  Bridge Sprint (1 week)                      │
│  ├─ Mon-Tue: Hackathon                       │
│  ├─ Wed: Metrics Review & Retro              │
│  ├─ Thu: Final Pre-Planning / Epic Refinement│
│  ├─ Fri: QP Prep & Logistics                 │
│  └─ Following Mon-Tue: QUARTERLY PLANNING    │
└──────────────────────────────────────────────┘

Appendix C: JIRA Hierarchy Diagram

┌──────────────────────────────────────────────────────────────────────────────┐
│                    JIRA PRODUCT DISCOVERY (JPD)                              │
│                    Strategic Planning & Ideation                             │
│                                                                              │
│   ┌─────────────────────────────────────────────────────────────────────┐    │
│   │  STRATEGIES                                                         │    │
│   │  ┌─────────────────────┐       ┌─────────────────────┐              │    │
│   │  │ Expand Mid-Market   │       │ Platform Modernize  │              │    │
│   │  └──────────┬──────────┘       └──────────┬──────────┘              │    │
│   └─────────────┼──────────────────────────────┼─────────────────────────┘    │
│                 │                             │                              │
│   ┌─────────────┴────────────┐    ┌───────────┴────────────┐                 │
│   │  INITIATIVES             │    │  INITIATIVES           │                 │
│   │  ┌─────────────────────┐ │    │  ┌───────────────────┐ │                 │
│   │  │ Self-Service Onboard│ │    │  │ API Modernization │ │                 │
│   │  └─────────────────────┘ │    │  └───────────────────┘ │                 │
│   │  ┌─────────────────────┐ │    │                        │                 │
│   │  │ Inventory Overhaul  │ │    │                        │                 │
│   │  └─────────────────────┘ │    │                        │                 │
│   └──────────────────────────┘    └────────────────────────┘                 │
│                                                                              │
│   ┌─────────────────────────────────────────────────────────────────────┐    │
│   │  CUSTOMER IDEAS & SUGGESTIONS                                       │    │
│   │  ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐            │    │
│   │  │ Idea #201 │ │ Idea #202 │ │ Idea #203 │ │ Idea #204 │            │    │
│   │  └───────────┘ └───────────┘ └───────────┘ └───────────┘            │    │
│   │  Ideas are evaluated, prioritized, and promoted to Initiatives      │    │
│   └─────────────────────────────────────────────────────────────────────┘    │
└──────────────────────────────────────────────────────────────────────────────┘
                                    │
                                    │ Initiatives link to Epics
                                    ▼
┌──────────────────────────────────────────────────────────────────────────────┐
│                    PORTFOLIO PROJECT (R&D Delivery)                          │
│                    Epic-Level Execution Tracking                             │
│                                                                              │
│   ┌─────────────────────┐  ┌─────────────────────┐  ┌──────────────────┐     │
│   │  Epic A (Onboarding)│  │  Epic C (Inventory) │  │  Epic E (API v2) │     │
│   │  Epic B (Checkout)  │  │  Epic D (Search)    │  │  Epic F (Infra)  │     │
│   └──────────┬──────────┘  └──────────┬──────────┘  └────────┬─────────┘     │
└──────────────┼────────────────────────┼──────────────────────┼───────────────┘
               │                        │                      │
               ▼                        ▼                      ▼
┌─────────────────────┐  ┌─────────────────────┐  ┌─────────────────────┐
│ TEAM PROJECT 1      │  │ TEAM PROJECT 2      │  │ TEAM PROJECT 3      │
│ (Team Alpha)        │  │ (Team Beta)         │  │ (Team Gamma)        │
│─────────────────────│  │─────────────────────│  │─────────────────────│
│ Story               │  │ Story               │  │ Story               │
│ Task                │  │ Task                │  │ Task                │
│ Bug                 │  │ Bug                 │  │ Bug                 │
│ Internal Bug        │  │ Internal Bug        │  │ Internal Bug        │
│ Vulnerability       │  │ Vulnerability       │  │ Vulnerability       │
│ Risk ───────────────────────────────────────────────────────────┐     │
└─────────────────────┘  └─────────────────────┘  └───────────────│─────┘
                                                                  ▼
                                                    ┌─────────────────────────┐
                                                    │  QUARTERLY RISK REVIEW  │
                                                    │   (by Product Line)     │
                                                    │                         │
                                                    │  Resolved | Owned       │
                                                    │  Accepted | Mitigated   │
                                                    └─────────────────────────┘

Appendix D: Templates

Working Agreements Template

See separate file: templates/working-agreements-template.html

Definition of Done Template

See separate file: templates/definition-of-done-template.html

Appendix E: JPD Timeline & Swimlane View

This is the view used during Quarterly Planning readouts and weekly Leadership Sync meetings. It shows epics and features as horizontal bars across the six-sprint timeline, grouped by team into swimlanes, with status coloring.

┌──────────────────────────────────────────────────────────────────────────────────────────────┐
│  JPD TIMELINE VIEW — QP2 2026                                                                │
│  Grouped by Team (Swimlanes) | Status: ■ Done  ░ In Progress  □ Not Started  ▓ At Risk     │
├──────────────────────────────────────────────────────────────────────────────────────────────┤
│                    │ Sprint 1  │ Sprint 2  │ Sprint 3  │ Sprint 4  │ Sprint 5  │ Sprint 6   │
│                    │           │           │           │           │ FEATURE   │ ENABLEMENT │
│                    │           │           │           │           │ COMPLETE  │            │
├────────────────────┼───────────┼───────────┼───────────┼───────────┼───────────┼────────────┤
│                    │           │           │           │           │           │            │
│  TEAM ALPHA        │           │           │           │           │           │            │
│  (Inventory)       │           │           │           │           │           │            │
│                    │           │           │           │           │           │            │
│  Epic A: Guided    │ ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■│           │           │            │
│  Setup Wizard      │ ══════════════════════════════════│           │           │            │
│                    │           │           │           │           │           │            │
│  Epic B: Bulk      │           │ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░│            │
│  Import Tool       │           │ ══════════════════════════════════════════════│            │
│                    │           │           │           │           │           │            │
├────────────────────┼───────────┼───────────┼───────────┼───────────┼───────────┼────────────┤
│                    │           │           │           │           │           │            │
│  TEAM BETA         │           │           │           │           │           │            │
│  (Checkout)        │           │           │           │           │           │            │
│                    │           │           │           │           │           │            │
│  Epic C: Express   │ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░│           │            │
│  Checkout Flow     │ ══════════════════════════════════════════════│           │            │
│                    │           │           │           │           │           │            │
│  Epic D: Payment   │           │           │ □□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□□│            │
│  Gateway v2        │           │           │ ══════════════════════════════════│            │
│                    │           │           │           │           │           │            │
├────────────────────┼───────────┼───────────┼───────────┼───────────┼───────────┼────────────┤
│                    │           │           │           │           │           │            │
│  TEAM GAMMA        │           │           │           │           │           │            │
│  (Platform)        │           │           │           │           │           │            │
│                    │           │           │           │           │           │            │
│  Epic E: API v2    │ ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│            │
│  Migration         │ ═════════════════════════════════════════════════════════│ AT RISK    │
│                    │           │           │           │           │           │            │
│  Epic F: Infra     │ ■■■■■■■■■■■■■■■■■■■■■│           │           │           │            │
│  Hardening         │ ═════════════════════│           │           │           │            │
│                    │           │           │           │           │           │            │
├────────────────────┼───────────┼───────────┼───────────┼───────────┼───────────┼────────────┤
│                    │           │           │           │           │           │            │
│  SUMMARY           │           │           │           │           │           │            │
│  Total Epics: 6    │           │           │           │           │           │            │
│  ■ Done: 2         │           │           │           │           │           │            │
│  ░ In Progress: 2  │           │           │           │           │           │            │
│  □ Not Started: 1  │           │           │           │           │           │            │
│  ▓ At Risk: 1      │           │           │           │           │           │            │
│                    │           │           │           │           │           │            │
└────────────────────┴───────────┴───────────┴───────────┴───────────┴───────────┴────────────┘

USAGE:
  • QP Readouts: Each team walks through their swimlane, explains status and confidence
  • Leadership Sync: Review full board weekly, focus on ▓ At Risk and ░ In Progress items
  • Sprint 5 checkpoint: All bars must end by Sprint 5 column (feature complete)
  • Sprint 6 column: Should only show enablement activities, not feature development


Appendix F: Scaling Quick Reference Card

A condensed reference for choosing the right level of governance based on organization size.

┌──────────────────────────────────────────────────────────────────────────────────┐
│                     SCALING QUICK REFERENCE CARD                                 │
├──────────────────────────────────────────────────────────────────────────────────┤
│                                                                                  │
│  HOW MANY TEAMS?     2-4             5-10              11-20+                    │
│  ───────────────     ───             ────              ──────                    │
│                                                                                  │
│  QP Duration         1 day           2 days            2.5-3 days                │
│  Leadership Sync     Bi-weekly       Weekly            Area + Cross-Area         │
│  Epic Refinement     As-needed       Bi-weekly         Per-area + cross-area     │
│  JIRA Layers         2 (Port+Team)   3 (JPD+Port+Team) 3 + Program rollup        │
│  Key New Roles       —               Architect, TPM    + RTE, PM, Chapter Lead   │
│  Coordination        Ad-hoc          Structured        Layered                   │
│  Metrics             CRR + DORA      Full suite        Full + Area + Program     │
│                                                                                  │
│  SCALE UP WHEN:                        SCALE DOWN WHEN:                          │
│  • Cross-team blocks cause misses      • Meetings have low engagement            │
│  • Leadership lacks visibility         • Roles add no value                      │
│  • Risks surface too late              • Process slows teams down                │
│  • QP feels rushed                     • Decisions take longer                   │
│  • Teams duplicate work                • Metrics collected but not used          │
│                                                                                  │
│  ALWAYS KEEP (any size):  Well-formed teams │ Quarterly cadence │ Capacity       │
│                           planning │ Sprint ceremonies │ CRR                     │
│                                                                                  │
│  CRR = Commitment Reliability Ratio.  See Section 10 for full details.           │
└──────────────────────────────────────────────────────────────────────────────────┘

Appendix G: AI Tool Evaluation Template

Use this template when evaluating a new AI tool for adoption. The evaluation should be completed by the proposing engineer or team lead and reviewed by the Engineering Manager and Security team before procurement.

Evaluation Criteria Summary

| # | Criterion | Weight | Score (1-5) | Notes |
|---|---|---|---|---|
| 1 | Security & Privacy — Data handling, SOC2/ISO compliance, data residency | Critical | ___ | |
| 2 | IP & Licensing — Code ownership, license compatibility, indemnification | Critical | ___ | |
| 3 | Integration — IDE, CI/CD, JIRA compatibility; setup complexity | High | ___ | |
| 4 | Accuracy & Quality — Suggestion quality, false positive rate, relevance | High | ___ | |
| 5 | Cost — Per-seat pricing, usage tiers, volume discounts, hidden costs | Medium | ___ | |
| 6 | Adoption Friction — Onboarding effort, learning curve, workflow disruption | Medium | ___ | |
| 7 | Vendor Stability — Company maturity, funding, enterprise support, roadmap | Medium | ___ | |

Decision Thresholds

| Result | Criteria |
|---|---|
| Approve | No Critical criterion scores below 3; weighted average ≥ 3.5 |
| Conditional | One Critical criterion scores 2–3; weighted average ≥ 3.0; requires mitigation plan |
| Reject | Any Critical criterion scores 1, or weighted average < 3.0 |

Pilot Process

  1. Scope: 1–2 teams for 2–4 weeks
  2. Measure: Productivity metrics (Section 11.4) + developer satisfaction survey
  3. Decide: Expand, adjust, or retire based on pilot results (see the ROI sketch after this list)
  4. Communicate: Share findings with all teams before org-wide rollout
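
When writing up pilot results, the ROI formula from the glossary (Appendix H) applies directly, as sketched below. The units are assumptions: time saved is hours per engineer per week, and tool cost is an annual per-seat price; all figures in the example are hypothetical.

```python
def ai_tool_roi(hours_saved_per_week: float, engineer_hourly_cost: float,
                headcount: int, tool_cost_per_seat: float, seats: int,
                implementation_cost: float) -> float:
    """ROI = (Time Saved x Engineer Cost x Headcount x 52)
           / (Tool Cost x Seats + Implementation Cost)."""
    annual_benefit = hours_saved_per_week * engineer_hourly_cost * headcount * 52
    annual_cost = tool_cost_per_seat * seats + implementation_cost
    return annual_benefit / annual_cost

# Hypothetical pilot: 2 hrs/week saved, $100/hr, 10 engineers,
# $240/seat/year for 10 seats, $5,000 one-time implementation.
print(round(ai_tool_roi(2, 100, 10, 240, 10, 5_000), 1))  # -> 14.1
```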

Full template: Use the interactive AI Tool Evaluation Template for a structured evaluation with scoring and recommendations.


Appendix H: Glossary of Terms

A quick-reference glossary of acronyms, abbreviations, and named concepts used throughout this operating model.

| Term | Definition |
|---|---|
| Acceptance Criteria | Conditions that must be met for a work item (Story, Bug) to be considered Done. Written at creation time and verified during review. |
| Adjusted Velocity | Historical team velocity scaled by current capacity. Formula: Average Velocity × (Actual Capacity ÷ Full Capacity). Used during sprint planning when the team is not at full strength. |
| ADR | Architecture Decision Record — A lightweight document capturing a significant architectural decision, its context, and consequences. Generated during Epic Refinement or spike work. |
| Bridge Sprint | A shorter transition sprint between the end of quarterly execution (Sprint 6) and the next quarter's planning. Used for hackathons, spikes, KTLO catch-up, metrics review, and final pre-planning. |
| Cognitive Load | The mental burden on a team from the complexity of their work, the systems they maintain, and the processes they navigate. Assessed across three types: intrinsic, extraneous, and germane. |
| Commitment Reliability Ratio | A commitment-accuracy metric: the percentage of work committed at Quarterly Planning that was actually delivered. Calculated via FixVersion. Target: ≥ 80%. |
| Confidence Vote | A team-level assessment during Quarterly Planning readouts. Each team must reach ≥ 70% confidence in their plan before commitments are finalized. |
| CSM | Customer Success Manager — Partner role that receives feature walkthroughs, talking points, and adoption playbooks to support customer enablement. |
| Customer Voice | An investment category (10–20% of capacity) covering customer-reported defects and enhancement requests. Also refers to the flow of customer ideas from JPD into the roadmap. |
| CVSS | Common Vulnerability Scoring System — Industry-standard scoring for security vulnerabilities (0–10 scale). Drives remediation SLA tiers in the model. |
| Dark Launch | A deployment strategy where a feature is released to production but not yet visible to all customers. Used to validate stability before General Availability. |
| DoD | Definition of Done — A checklist of conditions that must be satisfied before a work item (Story, Epic) can be marked Done. Teams start from a standard template and customize via working agreements. |
| DORA | DevOps Research and Assessment — Four industry-standard metrics for software delivery performance: Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Mean Time to Restore (MTTR). |
| EM | Engineering Manager — Responsible for people decisions (hiring, performance, growth), team structure, and working agreements within their team(s). |
| Epic Closing Ceremony | A live meeting where the team walks through every item on the Epic Definition of Done checklist. This is the formal gate before an Epic moves to Done status (typically Sprint 5). |
| Epic Refinement | A bi-weekly governance meeting (1.5–2 hours) where senior leaders, PMs, Tech Leads, and Architecture review upcoming epics, assess readiness, and resolve dependencies. |
| Feature Freeze | The Sprint 5 deadline by which all new features committed to the quarter must be code complete, tested, and deployed to customers (GA or Beta). |
| FixVersion | A JIRA field used for quarterly delivery tracking (values: QP1, QP2, QP3, QP4). Drives Commitment Reliability Ratio calculations by recording what was committed vs. delivered. |
| Focus Factor | A multiplier (recommended 0.70–0.80) applied to available hours to account for meetings, context-switching, and other non-productive overhead during capacity planning. |
| GA | General Availability — Full production release visible to all customers. The target deployment status for features completing in Sprint 5. |
| GTM | Go-to-Market — Cross-functional readiness activities (launch communications, collateral, customer messaging) coordinated with Marketing and User Learning before a feature reaches GA. |
| Incoming Rate | The trend of customer-reported defects, security vulnerabilities, and production incidents per sprint. Used during pre-planning to calibrate the Unplanned Work Reserve. |
| Investment Categories | The four budget allocation buckets for engineering capacity: New Product (40–50%), Architecture / Tech Debt (10–20%), KTLO (20–30%), and Customer Voice (10–20%). |
| JPD | Jira Product Discovery — The strategic planning and ideation layer. Strategies, Initiatives, and customer ideas live here; it is the roadmap source of truth. |
| KTLO | Keep The Lights On — Maintenance and operational stability work including minor fixes, infrastructure upkeep, and routine operational tasks. Typically 20–30% of capacity. |
| Leadership Sync | A weekly governance meeting where senior leaders review RAG status across initiatives, address cross-team blockers, and make steering decisions. |
| MTTR | Mean Time to Restore — The average time to recover from a production failure. One of the four DORA metrics. Elite/High target: < 1 day. |
| Onboarding Ramp | A reduced-capacity period for newly onboarded team members (plan at 50% capacity for the first 1–2 sprints). Factored into the Available Days calculation during capacity planning. |
| PM | Product Manager — Core team role (1 per team) responsible for backlog prioritization, stakeholder alignment, and quarterly planning commitments. |
| Portfolio Project | A JIRA project containing all quarterly Epics across teams. Provides the delivery-level view of what teams are building each quarter and feeds portfolio dashboards. |
| QE | Quality Engineer — Core team role (1 per team, automation preferred) responsible for test strategy, automated test suites, and quality gates. |
| QP | Quarterly Planning — A 2-day event where all R&D teams, senior stakeholders, and partners commit to what they will deliver in the upcoming quarter (6 sprints). |
| Quarterly Demo | End-of-quarter showcase where each team presents a pre-recorded video (5–10 min) of completed features to the broader organization. |
| Quarterly Risk Review | A consolidated risk-tracking view (JIRA Risk issue type) across all teams. Reviewed during Leadership Sync and Epic Refinement to surface and manage risks. |
| RAG | Red / Amber / Green — Status indicators used across dashboards and governance meetings. Green = on track, Amber = at risk, Red = blocked or off track. |
| Risk Status (R/O/A/M) | Resolved, Owned, Accepted, Mitigated — The 4-value risk classification field applied during the Quarterly Risk Review. Resolved = no longer a concern; Owned = actively being worked; Accepted = acknowledged but not mitigated; Mitigated = actions in place to reduce impact. |
| ROI | Return on Investment — Used in the model to evaluate AI tool adoption: (Time Saved × Engineer Cost × Headcount × 52) ÷ (Tool Cost × Seats + Implementation Cost). |
| RTE | Release Train Engineer — A coordination role (used in larger-scale deployments) responsible for cross-team planning logistics, dependency tracking, and process facilitation. |
| ScriptRunner | A JIRA automation plugin used to calculate and enforce SLA due dates on Bug and Vulnerability items based on severity/CVSS thresholds. |
| SDLC | Software Development Lifecycle — The end-to-end process from requirements through deployment. The model maps AI tool integration points across all SDLC phases. |
| SLA | Service Level Agreement — Time-bound commitments for work-item resolution (e.g., Critical bugs within 72 hours, High vulnerabilities within 30 days). Enforced via ScriptRunner automation. |
| Spike | A time-boxed research or proof-of-concept effort to reduce technical uncertainty. Commonly scheduled during Bridge Sprints or as part of Architecture investment. |
| SRE | Site Reliability Engineering — A partnership role supporting one or more teams with production reliability, incident response, and infrastructure automation. |
| TPM | Technical Program Manager — A partner role responsible for delivery coordination, cross-team dependency resolution, process improvements, and risk management. |
| Unplanned Work Reserve | A capacity allocation (15–20% per sprint) held for unexpected work: customer-reported defects, production incidents, support escalations, and urgent security patches. |
| Well-Formed Team | The atomic unit of delivery in the model. A cross-functional team with defined core roles (PM, Tech Lead, Engineers, QE, UX, DevOps) operating as an autonomous Scrum team. |
| Working Agreements | Team-defined norms covering collaboration, communication, and process (e.g., core hours, PR review SLA, Definition of Done). Built from a standard template with team autonomy to customize. |