quality-review-checklist

31 stars 7 forks

Checklist covering accuracy, style, accessibility, and localization requirements for documentation releases.


Third-Party Agent Skill: Review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.

Agent Skill Installation

skilz install gtmagents/gtm-agents/quality-review-checklist
skilz install gtmagents/gtm-agents/quality-review-checklist --agent opencode
skilz install gtmagents/gtm-agents/quality-review-checklist --agent codex
skilz install gtmagents/gtm-agents/quality-review-checklist --agent gemini

First time? Install Skilz: pip install skilz

Works with 22+ AI coding assistants

Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...

Download Agent Skill ZIP

Extract the ZIP and copy its contents to ~/.claude/skills/, then restart Claude Desktop.

1. Clone the repository:
git clone https://github.com/gtmagents/gtm-agents
2. Copy the agent skill directory:
cp -r gtm-agents/plugins/technical-writing/skills/quality-review-checklist ~/.claude/skills/

Need detailed installation help? See the platform-specific guides in the repository.


Agentic Skill Details

Repository
gtm-agents
Stars
31
Forks
7
Type
Non-Technical
Meta-Domain
productivity
Primary Domain
markdown
Market Score
28

Agent Skill Grade

D
Score: 68/100

Score Breakdown

Spec Compliance
11/15
PDA Architecture
20/30
Ease of Use
16/25
Writing Style
9/10
Utility
11/20
Modifiers: +1

Areas to Improve

  • Description lacks trigger phrases, which hurts discoverability
  • No reference files for the templates it mentions

Recommendations

  • Focus on improving Utility (currently 11/20)
  • Address the two high-severity issues first
  • Add trigger phrases to description for discoverability

Graded: 2026-01-24

Developer Feedback

Found an interesting approach to code review standardization here—I'm curious why you opted for a checklist-based pattern instead of leveraging automated linting or static analysis as a foundation?


The TL;DR

You're at 68/100, D grade territory—solid writing style keeps you from tanking, but utility and discoverability are dragging you down. Your strongest area is Writing Style (9/10), but Utility (11/20) is where you're losing the most points. The framework is there, but it needs teeth.

What's Working Well

  • Writing clarity is tight. Your 32 lines pack value—no fluff, imperative voice throughout, zero marketing speak. That's why you scored 9/10 on style.
  • 5-category framework is sensible. Accuracy, Style, Accessibility, Localization, and Compliance cover the right surface area for documentation QA.
  • Metadata is valid. Your YAML frontmatter is clean and follows conventions (correct hyphen-case naming, required fields present).
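As an illustration of that five-category framework, one section of the checklist might be structured like the sketch below. The category headings come from the review above; the individual line items are hypothetical examples, not the skill's actual contents:

```markdown
## Accuracy
- [ ] All code samples run against the current product release
- [ ] Version numbers and CLI flags match what actually ships

## Accessibility
- [ ] Every image has descriptive alt text
- [ ] Heading levels are sequential (no skipped levels)

## Localization
- [ ] Source strings avoid idioms and culture-specific references
- [ ] Dates, units, and currency use locale-neutral formats
```

Keeping each item a single verifiable yes/no check is what makes a checklist like this usable by an agent rather than just a style guide in disguise.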

The Big One: Missing Trigger Phrases

This is what's hurting discoverability. Your description reads like a Wikipedia entry:

"Checklist covering accuracy, style, accessibility, and localization requirements..."

But agents need to know when to invoke you. Add specific activation triggers:

description: "Pre-publication QA checklist for documentation. Use when asked to 'review documentation quality', 'QA docs', 'check doc accuracy', 'pre-publication review', or 'audit content'. Covers accuracy, style, accessibility, and localization."

This alone bumps you +2-3 points on discoverability and makes the skill actually findable.
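Put together, the skill's frontmatter with the improved description might look like the following sketch. The `name` and `description` fields follow the common Claude skill convention; treat the exact schema, and the block-scalar formatting, as assumptions rather than the author's actual file:

```yaml
---
name: quality-review-checklist
description: >-
  Pre-publication QA checklist for documentation. Use when asked to
  "review documentation quality", "QA docs", "check doc accuracy",
  "pre-publication review", or "audit content". Covers accuracy,
  style, accessibility, and localization.
---
```

The `>-` block scalar folds the description onto one line while keeping the source readable, which matters when agents match the description text against user requests.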

Other Things Worth Fixing

  1. Templates mentioned but not delivered. You refer...
