assumption-challenger

44 stars · 10 forks

Identify and challenge implicit assumptions in plans, proposals, and technical decisions. Use when strategic-cto-mentor needs to surface hidden assumptions and wishful thinking before they become costly mistakes.

Tags: technical-assumptions, claude-ai, resource-assumptions, assumptions, cto-office, challenge, cto, roadmap, claude-code
Also in: ci-cd

Third-party agent skill: review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.

Installation

First time? Install the Skilz CLI: pip install skilz

skilz install alirezarezvani/claude-cto-team/assumption-challenger
skilz install alirezarezvani/claude-cto-team/assumption-challenger --agent opencode
skilz install alirezarezvani/claude-cto-team/assumption-challenger --agent codex
skilz install alirezarezvani/claude-cto-team/assumption-challenger --agent gemini

Use the --agent flag to target a specific assistant.

Works with 22+ AI coding assistants

Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...

Manual installation

Option A: download the agent skill ZIP, extract it, copy it to ~/.claude/skills/, then restart Claude Desktop.

Option B: clone the repository and copy the skill directory:

git clone https://github.com/alirezarezvani/claude-cto-team
cp -r claude-cto-team/skills/assumption-challenger ~/.claude/skills/


Agentic Skill Details

  • Stars: 44
  • Forks: 10
  • Type: Non-Technical
  • Meta-Domain: development
  • Primary Domain: github
  • Market Score: 0

Agent Skill Grade

Grade: B (score: 81/100)

Score Breakdown

  • Spec Compliance: 12/15
  • PDA Architecture: 22/30
  • Ease of Use: 20/25
  • Writing Style: 7/10
  • Utility: 17/20
  • Modifiers: +3

Areas to Improve

  • Missing TOC for long files
  • Broken reference link
  • Excessive main file length

Recommendations

  • Address 2 high-severity issues first
  • Add trigger phrases to description for discoverability
  • Add table of contents for files over 100 lines

Graded: 2026-01-24

Developer Feedback

I was intrigued by how assumption-challenger frames validation as a collaborative debugging tool rather than just error-catching—the scoring reflects solid execution, but I'm curious about the edge cases where assumptions hide in plain sight.


The TL;DR

You're at 81/100, solid B-grade territory, graded against Anthropic's PDA (Progressive Disclosure Architecture) best practices and the 5-pillar rubric. Your strongest area is Utility (17/20): the framework actually solves real problems. Your weakest are Writing Style (7/10) and PDA Architecture (22/30), mostly because the main file is bloated and references a skill file that doesn't exist.

What's Working Well

  • Problem-solving power is real: The categorization (Technical, Market, Organizational, Process, Resource) actually maps to how assumptions break in the real world. That's not accidental design.
  • Templates are comprehensive: The verdict system (Valid/Questionable/Invalid/Unknown) and the 4-step process give people a clear path without hand-holding them to death.
  • Trigger phrases are tight: "validating assumptions before launching", "reviewing strategic decisions"—these are specific enough that developers know exactly when to pull this skill in.
  • Examples hit the mark: The real-world assumption examples (market size, technology viability, team capability) feel grounded, not theoretical.
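To make the verdict system concrete, here's a hypothetical filled-in entry. Only the category (Market) and the verdict labels (Valid/Questionable/Invalid/Unknown) come from the skill; the field names and the log filename are assumed for illustration, not the skill's actual template:

```shell
# Hypothetical assumption-log entry using the skill's verdict labels.
# Field names (Category, Evidence, Next step) are assumptions.
cat <<'EOF' > assumption-log.md
## Assumption: "Enterprises will adopt this without a security review"
- Category: Market
- Evidence: two anecdotes from design partners, no procurement data
- Verdict: Questionable
- Next step: ask three prospects how they bought their last dev tool
EOF

grep 'Verdict:' assumption-log.md   # prints "- Verdict: Questionable"
```

The point of writing the verdict down next to its evidence is that "Questionable" entries become a to-do list, not a shrug.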

The Big One

Your main SKILL.md file is 356 lines without a table of contents, and you're referencing wishful-thinking-patterns.md, which doesn't exist. This kills your PDA score immediately because:

  1. Long files = poor navigation = token waste
  2. Broken references = users get stuck
  3. Violates the "one reference ...
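Both failure modes are mechanically checkable before you ship a skill. A minimal shell sketch follows; the fixture directory and file contents are made up to mirror the broken wishful-thinking-patterns.md reference, so point SKILL_DIR at a real skill folder instead:

```shell
# Demo fixture: a skill dir whose SKILL.md references a missing file,
# mirroring the issue above. Replace with a real skill directory.
SKILL_DIR=$(mktemp -d)
printf 'See wishful-thinking-patterns.md for the pattern list.\n' \
  > "$SKILL_DIR/SKILL.md"

# Check 1: flag a main file long enough to need a table of contents.
lines=$(wc -l < "$SKILL_DIR/SKILL.md")
if [ "$lines" -gt 100 ]; then echo "SKILL.md is $lines lines: add a TOC"; fi

# Check 2: flag referenced .md files missing from the skill directory.
grep -o '[A-Za-z0-9._-]*\.md' "$SKILL_DIR/SKILL.md" | sort -u |
while read -r ref; do
  [ "$ref" = "SKILL.md" ] || [ -f "$SKILL_DIR/$ref" ] \
    || echo "broken reference: $ref"
done
```

On the fixture above, check 2 reports the wishful-thinking-patterns.md reference as broken; running it against the real skill would catch the same problem in CI before a user hits it.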

