call-review-kit

31 stars 7 forks

Facilitates structured call review sessions with agendas, scorecards, and follow-ups.


Third-Party Agent Skill: Review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.

Installing the Agent Skill

skilz install gtmagents/gtm-agents/call-review-kit
skilz install gtmagents/gtm-agents/call-review-kit --agent opencode
skilz install gtmagents/gtm-agents/call-review-kit --agent codex
skilz install gtmagents/gtm-agents/call-review-kit --agent gemini

First time? Install Skilz: pip install skilz

Works with 22+ AI coding assistants

Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...

Download Agent Skill ZIP

Extract the ZIP, copy the skill directory to ~/.claude/skills/, then restart Claude Desktop.
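The ZIP route above can be sketched as a few shell commands. The archive name and extraction path here are assumptions, not the actual download details; substitute the file you received:

```shell
# Sketch of the ZIP install route. The archive name and temp path are
# assumptions -- substitute the file you actually downloaded.
SKILLS_DIR="$HOME/.claude/skills"
mkdir -p "$SKILLS_DIR"
# unzip ~/Downloads/call-review-kit.zip -d /tmp/call-review-kit-extract
# cp -r /tmp/call-review-kit-extract/call-review-kit "$SKILLS_DIR/"
echo "Target skills directory: $SKILLS_DIR"
```

The unzip/copy lines are commented out above because they depend on your download; uncomment them once the paths match your system.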

1. Clone the repository:
git clone https://github.com/gtmagents/gtm-agents
2. Copy the agent skill directory:
cp -r gtm-agents/plugins/sales-calls/skills/call-review-kit ~/.claude/skills/

Need detailed installation help? Check the platform-specific guides.


Agentic Skill Details

Repository: gtm-agents
Stars: 31
Forks: 7
Type: Non-Technical
Meta-Domain: development
Primary Domain: ci cd
Market Score: 28

Agent Skill Grade

Grade: F
Score: 52/100

Score Breakdown

Spec Compliance: 11/15
PDA Architecture: 12/30
Ease of Use: 12/25
Writing Style: 7/10
Utility: 9/20
Modifiers: +1

Areas to Improve

  • Description needs trigger phrases
  • Missing Reference Files
  • Vague description lacks trigger phrases
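The first and third items both point at the skill's description frontmatter. As a hedged illustration (the description text below is an invented example of adding trigger phrases, not the skill's actual copy), a SKILL.md frontmatter with explicit triggers might look like:

```yaml
# Hypothetical SKILL.md frontmatter. The name matches the skill; the
# description is an invented example showing trigger phrases, not real copy.
---
name: call-review-kit
description: >
  Facilitates structured sales call review sessions. Use when the user asks
  to "review a call", "score a sales call", "run a call review", or
  "prepare a call review agenda, scorecard, or follow-up".
---
```

Concrete "use when" phrasing like this is what helps an assistant decide when to activate the skill.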

Recommendations

  • Focus on improving PDA Architecture (currently 12/30)
  • Focus on improving Ease of Use (currently 12/25)
  • Focus on improving Utility (currently 9/20)

Graded: 2026-01-24

Developer Feedback

I found your call-review-kit while evaluating some emerging skill patterns, and I'm curious about the design choice to focus on review workflows, which is a pretty specific problem space. At 52, there's a solid foundation here, but there are gaps around progressive disclosure and spec alignment that, once addressed, could really unlock its potential.


The TL;DR

You're at 52/100, which puts you in F territory. This is based on Anthropic's progressive disclosure architecture and skill spec best practices. Your strongest area is Spec Compliance (11/15)—the frontmatter and naming conventions are solid. But Progressive Disclosure Architecture (12/30) and Utility (9/20) are dragging you down hard. The core issue: you're promising templates and workflows that don't actually exist in the skill, which tanks both discoverability and practical usefulness.

What's Working Well

  • Clean YAML frontmatter – Valid structure with required fields present
  • Logical framework organization – The 5-part framework (Call Selection, Agenda, Scorecard, Facilitation, Follow-up) is conceptually sound
  • Good grep-friendly structure – Clear headers and sections make the content scannable, even at <100 lines
  • Stays objective – No marketing fluff; purely instructional tone

The Big One: Missing Reference Files

Here's what's killing your score: You mention "Templates" and reference specific deliverables (Call review agenda, Scorecard worksheet, Follow-up email template) but none of these files actually exist. This tanks your Progressive Disclosure Architecture (supposed to be 30 points, you got 12) and your Utility score.

The fix: Create a references/ directo...
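Concretely, scaffolding the suggested references/ layout could look like the sketch below. The file names are assumptions inferred from the deliverables named above (agenda, scorecard worksheet, follow-up email template), and SKILL_DIR defaults to a temporary directory so the sketch is safe to try:

```shell
# Scaffold sketch for the suggested references/ directory. File names are
# inferred from the deliverables the skill mentions -- adjust to taste.
SKILL_DIR="${SKILL_DIR:-$(mktemp -d)}"   # point this at the real skill directory
mkdir -p "$SKILL_DIR/references"
touch "$SKILL_DIR/references/call-review-agenda.md" \
      "$SKILL_DIR/references/scorecard-worksheet.md" \
      "$SKILL_DIR/references/follow-up-email-template.md"
ls "$SKILL_DIR/references"
```

Each stub would then be filled with the actual template content, and SKILL.md would link to them so the main file stays short and the details load on demand.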
