launch-tiering


Use when sizing go-to-market launches by impact, resources, and governance needs.


Third-Party Agent Skill: Review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.

Installation

skilz install gtmagents/gtm-agents/launch-tiering
skilz install gtmagents/gtm-agents/launch-tiering --agent opencode
skilz install gtmagents/gtm-agents/launch-tiering --agent codex
skilz install gtmagents/gtm-agents/launch-tiering --agent gemini

First time? Install Skilz: pip install skilz

Works with 22+ AI coding assistants

Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...

Download Agent Skill ZIP

Extract and copy to ~/.claude/skills/, then restart Claude Desktop.

1. Clone the repository:
git clone https://github.com/gtmagents/gtm-agents
2. Copy the agent skill directory:
cp -r gtm-agents/plugins/product-launch-orchestration/skills/launch-tiering ~/.claude/skills/



Agentic Skill Details

Repository: gtm-agents
Stars: 31
Forks: 7
Type: Non-Technical
Meta-Domain: general
Primary Domain: general
Market Score: 28

Agent Skill Grade

D (62/100)

Score Breakdown

Spec Compliance: 12/15
PDA Architecture: 15/30
Ease of Use: 16/25
Writing Style: 8/10
Utility: 10/20
Modifiers: +1

Areas to Improve

  • Missing Reference Files for Templates
  • No Executable Workflow
  • Vague Trigger Terms

Recommendations

  • Focus on improving PDA Architecture (currently 15/30)
  • Focus on improving Utility (currently 10/20)
  • Address 2 high-severity issues first

Graded: 2026-01-24

Developer Feedback

I took a look at your launch-tiering skill and noticed the spec could use more rigor around the edge cases—like what happens when tier thresholds overlap or when you're dealing with dynamic pricing models that shift mid-campaign.


TL;DR

You're at 62/100, squarely in D territory. This is graded against Anthropic's skill best practices. Your strongest area is Spec Compliance (12/15)—the YAML frontmatter and naming conventions are solid. The weak spots are Progressive Disclosure Architecture (15/30) and Utility (10/20)—basically, you need better structure and more actionable decision logic.

What's Working Well

  • Clean metadata—Frontmatter is valid YAML with all required fields; name follows hyphen-case correctly
  • Consistent terminology—You use "Tier 1/2/3" and "tiering" consistently throughout, which makes the skill easier to follow
  • Objective tone—No marketing fluff; stays instructional and professional without overselling
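For reference, minimal SKILL.md frontmatter of the kind being credited here might look like the following (a sketch of the usual name/description fields; values are illustrative, not copied from the actual skill file):

```yaml
---
name: launch-tiering
description: Use when sizing go-to-market launches by impact, resources, and governance needs.
---
```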

The Big One: Missing Reference Files & Templates

This is your main blocker. You mention templates for "tier scoring matrix", "tier playbook", and "approval checklist" but never actually provide them. This violates Progressive Disclosure Architecture (PDA)—the whole point is to keep SKILL.md lean while offloading details to references.

The fix: Create three new files:

  • references/tier-scoring-matrix.md — Actual criteria, weights, scoring thresholds
  • references/tier-playbook.md — Workstreams, artifacts, checkpoints for each tier
  • references/approval-checklist.md — Gate criteria for tier changes

Then link them from SKILL.md. This alone could add +7 points and fix your biggest PDA gap.
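As a starting point, the three files could be scaffolded like this (the headings and comments are placeholders to fill in, not content from the existing skill):

```shell
# Sketch: scaffold the three reference files recommended above.
mkdir -p references

cat > references/tier-scoring-matrix.md <<'EOF'
# Tier Scoring Matrix
<!-- criteria, weights, and score thresholds for Tier 1/2/3 -->
EOF

cat > references/tier-playbook.md <<'EOF'
# Tier Playbook
<!-- workstreams, artifacts, and checkpoints per tier -->
EOF

cat > references/approval-checklist.md <<'EOF'
# Approval Checklist
<!-- gate criteria for moving a launch between tiers -->
EOF
```

Each file then needs a one-line link from SKILL.md so the model knows when to load it, keeping the main file lean.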

Other Things Worth Fixing

  1. *...
