performance-tracking


Use when establishing measurement frameworks, dashboards, and optimization rhythms for live campaigns.


Third-Party Agent Skill: Review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.

Installing the Agentic Skill

skilz install gtmagents/gtm-agents/performance-tracking
skilz install gtmagents/gtm-agents/performance-tracking --agent opencode
skilz install gtmagents/gtm-agents/performance-tracking --agent codex
skilz install gtmagents/gtm-agents/performance-tracking --agent gemini

First time? Install Skilz: pip install skilz

Works with 22+ AI coding assistants

Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...

Download Agent Skill ZIP

Extract the archive, copy the skill folder to ~/.claude/skills/, then restart Claude Desktop.

1. Clone the repository:
git clone https://github.com/gtmagents/gtm-agents
2. Copy the agent skill directory:
cp -r gtm-agents/plugins/campaign-orchestration/skills/performance-tracking ~/.claude/skills/
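With either manual method, a quick sanity check that the copy landed where Claude Desktop expects it (a sketch; the path comes from the steps above):

```shell
# Sketch: confirm the skill directory contains a SKILL.md after copying.
SKILLS_DIR="${SKILLS_DIR:-$HOME/.claude/skills}"
if [ -f "$SKILLS_DIR/performance-tracking/SKILL.md" ]; then
  echo "performance-tracking installed"
else
  echo "SKILL.md not found under $SKILLS_DIR/performance-tracking - re-copy the folder"
fi
```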

Need detailed installation help? Check the platform-specific guides.


Agentic Skill Details

Repository: gtm-agents
Stars: 31
Forks: 7
Type: Non-Technical
Meta-Domain: media
Primary Domain: monitoring
Market Score: 28

Agent Skill Grade

C (Score: 70/100)

Score Breakdown

Spec Compliance: 12/15
Progressive Disclosure (PDA) Architecture: 18/30
Ease of Use: 18/25
Writing Style: 8/10
Utility: 13/20
Modifiers: +1

Areas to Improve

  • Missing Reference Files
  • No Actionable Workflow
  • Missing Validation Patterns

Recommendations

  • Address 2 high-severity issues first
  • Add trigger phrases to description for discoverability
  • Add table of contents for files over 100 lines

Graded: 2026-01-24

Developer Feedback

I noticed you're tackling performance tracking: it's one of those problems that seems simple until you start handling real-world edge cases. Your implementation scores 70/100 (Grade C), so there's a solid foundation here, but I spotted some opportunities to tighten the progressive disclosure and a few rough edges in how developers interact with the core APIs.


The TL;DR

You're sitting at 70/100, which is solid C-territory (70-79 is "adequate, with gaps"). This is based on Anthropic's skill best practices framework. Your strongest area is Spec Compliance (12/15) — the frontmatter and naming conventions are locked in. Weakest is Progressive Disclosure (18/30) — you've put everything into SKILL.md when reference files would chunk it down and save tokens for users.

What's Working Well

  • Frontmatter and naming — Valid YAML with all required fields, proper hyphen-case format. No friction there.
  • Clear logical flow — Your framework steps (Metric Hierarchy → Dashboard → Monitoring → Optimization) follow a sensible progression that matches how people actually think about measurement.
  • Consistent terminology — You're not switching between "KPI" and "metric" randomly, which saves cognitive load.
  • grep-friendly structure — The skill has a clean, scannable format (+1 bonus point).
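The frontmatter praised above follows the SKILL.md convention of a YAML block with `name` and `description` fields. A minimal illustrative example (the description wording and trigger phrases are hypothetical, adapted from this skill's listing):

```yaml
---
name: performance-tracking
description: >
  Establish measurement frameworks, dashboards, and optimization rhythms
  for live campaigns. Use when asked to "track campaign performance",
  "build a KPI dashboard", or "set an optimization cadence".
---
```

Baking trigger phrases into the description like this also addresses the discoverability recommendation above.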

The Big One: Missing Reference Architecture

Here's what's holding you back the most: everything is stuffed into SKILL.md. You mention templates (KPI tree worksheet, dashboard spec, optimization log) but they're just labels with no actual content. This kills your PDA score because users pay token cost for all that text, but get no working artifact.

The fix: Create a references/ directory…
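One way to start is to scaffold that directory and move each template into its own file (a sketch; the file names are hypothetical, derived from the templates the skill already names):

```shell
# Sketch: scaffold a references/ directory next to SKILL.md.
# File names are hypothetical, based on the templates mentioned in the skill
# (KPI tree worksheet, dashboard spec, optimization log).
SKILL_DIR="${SKILL_DIR:-performance-tracking}"
mkdir -p "$SKILL_DIR/references"
for f in kpi-tree-worksheet.md dashboard-spec.md optimization-log.md; do
  # create an empty placeholder only if it does not exist yet
  [ -f "$SKILL_DIR/references/$f" ] || : > "$SKILL_DIR/references/$f"
done
ls "$SKILL_DIR/references"
```

SKILL.md can then link to each file instead of inlining it, so users only pay the token cost for the template they actually need.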
