Decision Matrix Builder
The Decision Matrix Builder creates a structured, weighted evaluation framework for comparing multiple options against defined criteria. It replaces gut-feel decisions with a transparent, repeatable process that you can share with stakeholders to build consensus.
This template is for managers evaluating vendors, product teams choosing between feature approaches, founders selecting tools or strategies, and anyone else facing a decision with three or more options and multiple trade-offs. It works at any scale, from choosing a CRM to selecting a market entry strategy.
The prompt goes beyond a simple pros/cons list by introducing weighted criteria (some factors matter more than others), numerical scoring (forcing precise evaluation), and sensitivity analysis (checking whether the winner changes if you adjust the weights). This three-layer approach catches scenarios where a "gut winner" actually loses on the factors that matter most.
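The weighted-score arithmetic behind the matrix is simple to verify by hand or in code. Here is a minimal Python sketch; the criterion names, weights, and 1-5 scores are illustrative placeholders, not values from the template:

```python
# Illustrative decision matrix: weights sum to 1.0,
# scores are 1-5 ratings for each option on each criterion.
weights = {"adoption": 0.30, "features": 0.30, "price": 0.20, "integrations": 0.20}

scores = {
    "Option A": {"adoption": 4, "features": 5, "price": 3, "integrations": 4},
    "Option B": {"adoption": 2, "features": 5, "price": 2, "integrations": 5},
}

def weighted_score(option_scores, weights):
    """Sum of (criterion weight x 1-5 score) for one option."""
    return sum(weights[c] * option_scores[c] for c in weights)

# Rank options from highest weighted score to lowest.
ranking = sorted(scores, key=lambda o: weighted_score(scores[o], weights), reverse=True)
for option in ranking:
    print(f"{option}: {weighted_score(scores[option], weights):.2f}")
```

Because each score is 1-5 and the weights sum to 1.0, every weighted total also lands on the 1-5 scale, which makes results easy to compare across decisions.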
The Prompt
Build a decision matrix for the following choice:

**Decision to Make**: [WHAT YOU ARE DECIDING, e.g., "Which project management tool to adopt for our 25-person engineering team"]

**Options** (2-5):

```
[LIST YOUR OPTIONS, e.g.,
1. Linear - built for engineering teams
2. Jira - industry standard, already used by some teams
3. Asana - general purpose, strong integrations
4. Notion Projects - already use Notion for docs]
```

**Criteria That Matter** (list what factors are important):

```
[LIST YOUR CRITERIA, e.g.,
- Ease of adoption (team is resistant to change)
- Engineering-specific features (sprints, releases, git integration)
- Price (budget is $500/month max)
- Integration with existing tools (Slack, GitHub, Figma)
- Customization flexibility
- Long-term scalability]
```

**Context**: [ADDITIONAL CONTEXT, e.g., "We are migrating from spreadsheets. The team has used Jira before and some had negative experiences. Decision needs to be made this week."]

Generate:

### 1. Criteria Weighting

Assign weights to each criterion (must total 100%). Explain the rationale for each weight based on the context provided. If the context suggests some factors are more important, weight them accordingly.

| Criterion | Weight | Rationale |
|-----------|--------|-----------|

### 2. Scoring Matrix

Rate each option on each criterion from 1-5 (1 = poor, 5 = excellent). Provide a brief justification for each score.

| Criterion (Weight) | Option 1 | Option 2 | Option 3 | Option 4 |
|--------------------|----------|----------|----------|----------|
| ... | Score + reason | Score + reason | ... | ... |

### 3. Weighted Results

| Option | Weighted Score | Rank |
|--------|----------------|------|
| ... | X.XX | #N |

### 4. Sensitivity Analysis

Test the robustness of the winner:
- What happens if you increase the top criterion's weight by 15%?
- What happens if you decrease it by 15%?
- Is there any reasonable weighting where a different option wins?

### 5. Recommendation

- Clear recommendation with the primary reasoning (2-3 sentences)
- Key risk or trade-off to monitor after choosing
- One condition that should trigger re-evaluation

### 6. Stakeholder Summary

A 3-sentence summary suitable for presenting this decision to leadership: what was decided, why, and what it costs.
Usage Tips
- List ALL criteria you care about: Even if some feel "soft" (like team morale or adoption resistance), include them. The weighting system handles their relative importance.
- Challenge the weights: The weights are the most subjective part. If price got 30% weight but your team's main complaint is usability, adjust before accepting the result.
- Use the sensitivity analysis: If the winner changes when you adjust one weight by 15%, the decision is close and you should invest more time in validating the scores for that criterion.
- Share the matrix with stakeholders: The structured format makes it easy to present your reasoning. Even if people disagree with a specific score, the framework channels the debate productively.
- Revisit in 90 days: After implementing the choice, score the actual experience against your predictions. This calibrates your scoring for future decisions.
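The sensitivity check in the tips above can also be run mechanically: shift one criterion's weight, renormalize the rest, and see whether the winner changes. This is a minimal sketch under assumed inputs; the option names, scores, and the proportional-renormalization choice are all illustrative, not part of the template:

```python
# Sensitivity check: does the winner survive a +/-0.15 shift
# in one criterion's weight? (names and values are illustrative)
weights = {"adoption": 0.30, "features": 0.30, "price": 0.20, "integrations": 0.20}
scores = {
    "Linear": {"adoption": 4, "features": 5, "price": 3, "integrations": 4},
    "Jira":   {"adoption": 2, "features": 5, "price": 3, "integrations": 5},
}

def weighted_score(option_scores, w):
    return sum(w[c] * option_scores[c] for c in w)

def reweight(w, criterion, delta):
    """Shift one weight by delta (clamped at 0), then renormalize
    all weights proportionally so they sum to 1.0 again."""
    shifted = dict(w)
    shifted[criterion] = max(0.0, shifted[criterion] + delta)
    total = sum(shifted.values())
    return {c: v / total for c, v in shifted.items()}

def winner(w):
    return max(scores, key=lambda o: weighted_score(scores[o], w))

base = winner(weights)
for delta in (+0.15, -0.15):
    adjusted = reweight(weights, "adoption", delta)
    if winner(adjusted) != base:
        print(f"Winner flips to {winner(adjusted)} when adoption shifts {delta:+.2f}")
```

If no flip occurs across the tested shifts, the recommendation is robust; if it does, that criterion's scores deserve closer validation before committing.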