hypothesis-library
Curated repository of experiment hypotheses, assumptions, and historical learnings.
Third-Party Agent Skill: Review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.
Installation for Agentic Skill
skilz install gtmagents/gtm-agents/hypothesis-library
skilz install gtmagents/gtm-agents/hypothesis-library --agent opencode
skilz install gtmagents/gtm-agents/hypothesis-library --agent codex
skilz install gtmagents/gtm-agents/hypothesis-library --agent gemini
First time? Install Skilz: pip install skilz
Works with 22+ AI coding assistants
Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...
Extract and copy to ~/.claude/skills/ then restart Claude Desktop
git clone https://github.com/gtmagents/gtm-agents
cp -r gtm-agents/plugins/growth-experiments/skills/hypothesis-library ~/.claude/skills/
Need detailed installation help? Check our platform-specific guides.
Related Agentic Skills
hive-mind-advanced
by ruvnet
Advanced Hive Mind collective intelligence system for queen-led multi-agent coordination with consensus mechanisms and persistent memory
material-component-doc
by bytedance
A dedicated skill for writing component documentation for the FlowGram material library, providing guidance and automation support for component doc generation, Story creation, translation, and more.
gerrit
by storj
kebab-maker
by kagent-dev
A skill that makes a kebab for the user.
Agentic Skill Details
- Repository
- gtm-agents
- Stars
- 31
- Forks
- 7
- Type
- Non-Technical
- Meta-Domain
- general
- Primary Domain
- general
- Market Score
- 28
Agent Skill Grade
F Score: 50/100
Score Breakdown
Areas to Improve
- Description needs trigger phrases
- Missing Reference Files for Templates
- Vague Trigger Terms
Recommendations
- Focus on improving Progressive Disclosure Architecture (currently 12/30)
- Focus on improving Ease Of Use (currently 11/25)
- Focus on improving Utility (currently 8/20)
Graded: 2026-01-24
Developer Feedback
I've been diving into property-based testing frameworks lately, and your hypothesis-library skill caught my attention—though the 50/100 score suggests there might be some gaps between the concept and execution that are worth digging into.
TL;DR
You're at 50/100, solidly in F grade territory. This is based on Anthropic's skill evaluation best practices across five pillars. Your Spec Compliance is actually solid (11/15)—the frontmatter is valid and naming conventions are right—but Progressive Disclosure Architecture (12/30) and Utility (8/20) are where you're losing the most points. The core issue: you've got a great framework concept, but it's missing the teeth to be actually useful.
What's Working Well
- Consistent terminology: You use 'hypothesis', 'learnings', and 'experiment' consistently throughout—no confusing terminology shifts that would make users stumble.
- Solid metadata schema thinking: The idea of using ID, theme, persona, funnel stage, and metrics is the right foundation for structured experimentation tracking.
- Logical section flow: "When to Use" → "Framework" → "Templates" → "Tips" follows a reasonable progression that's easy to scan.
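The metadata schema praised above (ID, theme, persona, funnel stage, metrics) could be sketched as a small record type. This is an illustrative sketch only — the field names and `tag` helper are assumptions, not the skill's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One entry in the hypothesis library (field names are illustrative)."""
    id: str                 # e.g. "HYP-042"
    theme: str              # e.g. "onboarding"
    persona: str            # e.g. "self-serve admin"
    funnel_stage: str       # e.g. "activation"
    metrics: list = field(default_factory=list)  # success metrics tracked

    def tag(self) -> str:
        # Compact tag for filtering a portfolio dashboard view
        return f"{self.id}:{self.theme}/{self.funnel_stage}"

h = Hypothesis("HYP-042", "onboarding", "self-serve admin",
               "activation", ["D7 retention"])
```

A structured record like this is what makes the portfolio dashboard searchable by theme, persona, or funnel stage.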
The Big One: Missing Reference Files Kills Progressive Disclosure
Your skill lists three templates—intake form, learning card, portfolio dashboard—but provides zero actual content. This is a critical gap. You're telling users "here are templates" without showing them what they look like, which means they either guess or bounce.
The fix: Create three reference files:
- references/intake-form.md – actual template with example fields
- references/learning-card.md – structured format showing context/...
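To make the gap concrete, here is one possible shape for a learning card, generated in Python purely to illustrate the structure — the section headings and function name are assumptions, not the skill's actual template:

```python
def render_learning_card(hypothesis_id: str, context: str,
                         result: str, learning: str) -> str:
    """Render a learning card as markdown (section names are illustrative)."""
    return (
        f"# Learning Card: {hypothesis_id}\n\n"
        f"## Context\n{context}\n\n"
        f"## Result\n{result}\n\n"
        f"## Learning\n{learning}\n"
    )

card = render_learning_card(
    "HYP-042",
    "Tested a shorter onboarding flow for self-serve admins.",
    "Activation rose, but D7 retention was flat.",
    "First-session speed helps activation; retention needs a separate lever.",
)
```

Shipping even a static version of this as `references/learning-card.md` would close the "zero actual content" gap the review calls out.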
Report Security Issue
Found a security vulnerability in this agent skill?
Thank you for helping keep SkillzWave secure. We'll review your report and take appropriate action.
Note: For critical security issues that require immediate attention, please also email security@skillzwave.ai directly.