sentiment-feedback-loop

31 stars 7 forks

Process for capturing qualitative feedback and injecting it into CS playbooks.


Third-Party Agent Skill: Review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.

Installation

skilz install gtmagents/gtm-agents/sentiment-feedback-loop
skilz install gtmagents/gtm-agents/sentiment-feedback-loop --agent opencode
skilz install gtmagents/gtm-agents/sentiment-feedback-loop --agent codex
skilz install gtmagents/gtm-agents/sentiment-feedback-loop --agent gemini

First time? Install Skilz: pip install skilz

Works with 22+ AI coding assistants

Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...

Download Agent Skill ZIP

Extract and copy to ~/.claude/skills/, then restart Claude Desktop.

1. Clone the repository:
git clone https://github.com/gtmagents/gtm-agents
2. Copy the agent skill directory:
cp -r gtm-agents/plugins/customer-success/skills/sentiment-feedback-loop ~/.claude/skills/

Need detailed installation help? Check the platform-specific guides.


Agentic Skill Details

Repository: gtm-agents
Stars: 31
Forks: 7
Type: Non-Technical
Meta-Domain: development
Primary Domain: github
Market Score: 28

Agent Skill Grade

D
Score: 69/100

Score Breakdown

Spec Compliance: 11/15
PDA Architecture: 20/30
Ease of Use: 17/25
Writing Style: 8/10
Utility: 12/20
Modifiers: +1

Areas to Improve

  • Description lacks trigger phrases and terms, hurting discoverability
  • Templates section lists artifacts but doesn't provide them

Recommendations

  • Address 2 high-severity issues first
  • Add trigger phrases to description for discoverability
  • Add table of contents for files over 100 lines

Graded: 2026-01-24

Developer Feedback

Looking at your sentiment-feedback-loop skill, I'm curious about the choice to implement feedback collection rather than just sentiment classification—seems like you're building toward continuous model improvement, but the 69/100 suggests there might be some architectural decisions worth revisiting.


The TL;DR

You're at 69/100, solidly D territory. This is based on Anthropic's skill evaluation rubric across five pillars. Your Writing Style scores best at 8/10—the prose is clear and purposeful. The weak spots are Utility (12/20) and Progressive Disclosure (20/30)—you've got a solid framework, but it needs more concrete templates and actual implementation depth to justify its weight class.

What's Working Well

  • Clean, consistent terminology – You use "sentiment," "feedback," and "VOC" consistently throughout, which makes the skill easy to follow
  • Solid five-step structure – The framework (capture → enrich → analyze → act → close loop) is logically sound and mirrors real CS workflows
  • Objective, practical tone – No marketing fluff, just straightforward instructions on how to run a feedback loop
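To make the five-step framework concrete, here is a minimal sketch of the loop in Python. Everything in it (the `FeedbackItem` class, the keyword-based stand-in classifier, the function names) is illustrative and assumed, not taken from the skill itself:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class FeedbackItem:
    """One piece of qualitative feedback captured from a ticket, call, or survey."""
    text: str
    source: str
    sentiment: str = ""  # filled in by the enrich step

# Stand-in markers; a real loop would use an NLP model or manual tagging.
NEGATIVE_MARKERS = ("slow", "broken", "confusing", "frustrated")


def enrich(item: FeedbackItem) -> FeedbackItem:
    # Enrich: attach a sentiment label to the captured verbatim.
    is_negative = any(m in item.text.lower() for m in NEGATIVE_MARKERS)
    item.sentiment = "negative" if is_negative else "positive"
    return item


def analyze(items: list) -> Counter:
    # Analyze: aggregate (source, sentiment) counts for the weekly VOC digest.
    return Counter((i.source, i.sentiment) for i in items)


def run_loop(raw_items: list) -> Counter:
    # Capture happens upstream; "act" and "close the loop" would open
    # follow-up tasks and notify customers. This sketch stops at the digest.
    return analyze([enrich(i) for i in raw_items])


digest = run_loop([
    FeedbackItem("Onboarding was confusing", "survey"),
    FeedbackItem("Love the new dashboard", "call"),
])
```

The point of the sketch is the shape of the pipeline, not the classifier: swapping the keyword check for a model or a human tagger leaves the capture → enrich → analyze structure intact.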

The Big One: Missing Templates Kill Your Utility Score

Your Templates section (lines 20-23) lists three things you could build but provides zero actual content:

- Sentiment tagging spreadsheet or Notion template.
- Weekly VOC digest format for leadership.
- Follow-up tracker for commitments back to customers.

This is a -4 point hit on Utility. Someone trying to use this skill right now has to reverse-engineer what you meant instead of copy-pasting working templates. Create three reference files:

  • `references/sentiment-tagging-template.md`...
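As a starting point for the first of those files, here is a sketch of what the tagging sheet's schema might look like. The column names are assumptions for illustration; the skill only names the template, so this is not the author's schema:

```python
import csv
import io

# Hypothetical columns for a sentiment-tagging sheet; adjust to your CS stack.
TAGGING_COLUMNS = [
    "date", "account", "source", "verbatim",
    "sentiment", "theme", "owner", "follow_up_due",
]


def blank_tagging_sheet(rows: int = 0) -> str:
    """Return a CSV header (plus optional empty rows) to paste into a sheet or Notion DB."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(TAGGING_COLUMNS)
    for _ in range(rows):
        writer.writerow([""] * len(TAGGING_COLUMNS))
    return buf.getvalue()
```

Shipping even a bare schema like this is what moves the skill from "lists templates" to "provides templates," which is where the Utility points are being lost.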
