AI Tool Evaluation Template

R&D Operating Model — Evaluate AI tools before adoption (see Section 11.3)

1. Security & Privacy (Critical)

Where does the data go? Is it used for model training? Which compliance certifications does the vendor hold?

| Question | Answer |
|---|---|
| Where is data sent? | |
| Is code/data used for model training? | |
| SOC 2 / ISO 27001 certified? | |
| Encryption in transit and at rest? | |
| Data residency options? | |


2. IP & Licensing (Critical)

Code ownership, license compatibility, indemnification for IP claims.

| Question | Answer |
|---|---|
| Who owns AI-generated code? | |
| License compatible with our projects? | |
| IP indemnification provided? | |
| Clear enterprise ToS? | |


3. Integration (High)

IDE support, CI/CD, JIRA, SSO — how well does it fit our toolchain?

| Question | Answer |
|---|---|
| Supported IDEs / platforms | |
| CI/CD integration? | |
| SSO / SAML support? | |
| Setup complexity | |


4. Accuracy & Quality (High)

How good are the outputs? Acceptance rate, false positives, relevance.

| Question | Answer |
|---|---|
| Suggestion acceptance rate (if known) | |
| False positive rate (review/test tools) | |
| Coverage of our tech stack | |


5. Cost (Medium)

Pricing model, per-seat cost, volume discounts, hidden costs.

| Question | Answer |
|---|---|
| Pricing model | |
| Cost per seat (monthly) | |
| Volume discounts? | |
| Hidden costs? | |


6. Adoption Friction (Medium)

How easy is onboarding? Does it disrupt existing workflows?

| Question | Answer |
|---|---|
| Time to onboard one engineer | |
| Learning curve | |
| Workflow disruption | |


7. Vendor Stability (Medium)

Company maturity, enterprise support, product roadmap.

| Question | Answer |
|---|---|
| Company age / funding | |
| Enterprise customer references | |
| Enterprise support SLA? | |
| Product roadmap visibility | |


Evaluation Results

Score each of the 7 criteria from 1.0 to 5.0, then record the results below.

| Metric | Value |
|---|---|
| Weighted average | |
| Min Critical score | |
| Criteria scored | 0 / 7 |
| Decision | Not scored |

Decision bands (weighted average): 1.0–3.0 Reject, 3.0–3.5 Conditional, 3.5–5.0 Approve.

Recommendation: Score all 7 criteria to see the evaluation result.
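The evaluation logic can be sketched in code. Note the specifics here are assumptions, not taken from the template: the priority weights (Critical = 3, High = 2, Medium = 1) and the rule that a low score on any Critical criterion forces a Reject are illustrative choices; see Appendix G for the actual scoring guide.

```python
# Sketch of the evaluation scoring logic. Weight values and the
# Critical-floor rule below are assumptions for illustration only.
WEIGHTS = {"Critical": 3, "High": 2, "Medium": 1}

CRITERIA = {  # criterion name -> priority level
    "Security & Privacy": "Critical",
    "IP & Licensing": "Critical",
    "Integration": "High",
    "Accuracy & Quality": "High",
    "Cost": "Medium",
    "Adoption Friction": "Medium",
    "Vendor Stability": "Medium",
}

def evaluate(scores: dict) -> tuple:
    """Return (weighted average, decision) for 1.0-5.0 criterion scores."""
    if set(scores) != set(CRITERIA):
        return 0.0, "Not scored"  # all 7 criteria must be scored first
    total_weight = sum(WEIGHTS[p] for p in CRITERIA.values())
    avg = sum(scores[c] * WEIGHTS[CRITERIA[c]] for c in CRITERIA) / total_weight
    # Assumed rule: a weak Critical criterion rejects regardless of average.
    min_critical = min(scores[c] for c, p in CRITERIA.items() if p == "Critical")
    if avg < 3.0 or min_critical < 3.0:
        return avg, "Reject"
    if avg < 3.5:
        return avg, "Conditional"
    return avg, "Approve"
```

For example, a tool scoring 3.0 on every criterion lands exactly at the Conditional boundary, while a single Critical score below 3.0 yields Reject even with an otherwise high average.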

Pilot Plan

Complete this section if the decision is Approve or Conditional.

R&D Operating Model — AI Tool Evaluation Template v1.0 | See Section 11.3 for governance framework and Appendix G for scoring guide.