hypothesis-generation
"Generate testable hypotheses. Formulate from observations, design experiments, explore competing explanations, develop predictions, propose mechanisms, for scientific inquiry across domains."
Third-Party Agent Skill: Review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.
Installation for Agentic Skill
skilz install jimmc414/Kosmos/hypothesis-generation
skilz install jimmc414/Kosmos/hypothesis-generation --agent opencode
skilz install jimmc414/Kosmos/hypothesis-generation --agent codex
skilz install jimmc414/Kosmos/hypothesis-generation --agent gemini
First time? Install Skilz: pip install skilz
Works with 14 AI coding assistants
Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...
Manual install: extract and copy to ~/.claude/skills/, then restart Claude Desktop:
git clone https://github.com/jimmc414/Kosmos
cp -r Kosmos/kosmos-reference/kosmos-claude-scientific-writer/.claude/skills/hypothesis-generation ~/.claude/skills/
Need detailed installation help? Check our platform-specific guides:
Related Agentic Skills
data-export-excel
by Starlitnightly: Export analysis results, data tables, and formatted spreadsheets to Excel files using openpyxl. Works with ANY LLM provider (GPT, Gemini, Claude, etc....
hypothesis-generation
by K-Dense-AI: Generate testable hypotheses. Formulate from observations, design experiments, explore competing explanations, develop predictions, propose mechanisms...
design-excellence
by wasintoh: Design system and anti-patterns for professional UI. Ensures apps don't look "AI generated". Defines color palettes, typography, spacing, animations,...
document-processing-xlsx
by korallis: Process, parse, create, and manipulate Excel spreadsheets (.xlsx, .xls) using libraries like xlsx, exceljs, or SheetJS for data import/export and spre...
Agentic Skill Details
- Repository
- Kosmos
- Type
- Technical
- Meta-Domain
- productivity
- Primary Domain
- excel
- Market Score
- 44.2
Agent Skill Grade
B
Score: 85/100
Score Breakdown
Areas to Improve
- No trigger phrases
- SKILL.md at 156 lines would benefit from a table of contents for quick navigation
- At 506 lines, this reference file is disproportionately long compared to others and contains redundant content
Recommendations
- Add trigger phrases to description for discoverability
- Add table of contents for files over 100 lines
Graded: 1/5/2026
Developer Feedback
I took a look at your hypothesis-generation skill and wanted to share some thoughts.
Links:
The TL;DR
You're at 85/100, solid B territory. This is graded against Anthropic's skill best practices. Your strongest area is Progressive Disclosure Architecture (27/30) — the way you layered everything across SKILL.md plus three reference files is textbook stuff. The weakest spot is Spec Compliance (11/15), mainly because you're missing trigger phrases in the description.
What's Working Well
- Solid file architecture — Main SKILL.md at 156 lines, with ~1300 lines of detail tucked into references. That's the right balance. The 4-file structure keeps the main entry point clean without hiding depth.
- Comprehensive workflow — Your 8-step numbered process with sub-bullets and quality evaluation checklists is genuinely useful. Step 5 (quality assessment) does real validation work.
- Strong template game — That 303-line output template for hypothesis statements is practical and gives users a clear target. Reference files include worked examples for experimental designs.
- Consistent vocabulary — Terms like "testable", "falsifiable", and "mechanism" are used throughout, which makes this feel coherent.
The Big One: Add Trigger Phrases to Your Description
Right now your description is:
"Generate testable hypotheses. Formulate from observations, design experiments, explore competing exp..."
It reads like an abstract rather than actionable metadata. Trigger phrases are how Claude finds skills — they're the keywords someone searches for. You need explicit triggers like:
description: Performs hypothesis generation operations. Use when asked to "generate hypotheses", "design experiment", "test predictions", or "competing explanations". Works across biomedical, physics, and experimental research domains.
This fixes three things at once: tells Claude when to activate you, makes you discoverable, and gives context about your domain specificity. You'll pick up about +2 points here.
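As a sketch, the SKILL.md frontmatter carrying that description might look like the following (the `name` value is assumed from this listing; adjust field values to your actual skill):

```markdown
---
name: hypothesis-generation
description: >-
  Performs hypothesis generation operations. Use when asked to "generate
  hypotheses", "design experiment", "test predictions", or "competing
  explanations". Works across biomedical, physics, and experimental
  research domains.
---
```

Keeping the quoted trigger phrases verbatim in the description is what makes them matchable against user requests.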
Other Things Worth Fixing
Add a table of contents to SKILL.md: at 156 lines, readers would benefit from a quick TOC linking to Workflow, Quality Standards, and Resources. Low effort, +1 point.
Trim the literature search file: references/literature_search_strategies.md is 506 lines and has some repetitive examples. Condense it to ~250 lines by consolidating similar techniques and removing duplicate worked examples. This tightens Progressive Disclosure Architecture and gains +2 points.
Tighten the voice in "When to Use": "This skill should be used when..." is passive. Switch to the imperative, such as "Activate for:" or simply "Use when:", followed by the trigger list. Minor, but +1 toward writing style.
Make the description domain-specific: "scientific inquiry across domains" is true but vague. Try "for biomedical, experimental, and mechanistic research questions" instead. More specific, easier to discover.
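A minimal TOC at the top of SKILL.md could look like this (section names are assumed from the review above; the anchor slugs must match your actual headings):

```markdown
## Contents
- [Workflow](#workflow)
- [Quality Standards](#quality-standards)
- [Resources](#resources)
```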
Quick Wins
- Add trigger phrases to YAML frontmatter description (+2 points)
- Condense the literature search reference to ~250 lines (+2 points)
- Add TOC to SKILL.md (+1 point)
- Tighten voice in "When to Use" section (+1 point)
That's a potential 88-90/100 with focused edits. The structure is already there — this is polish work.
Check out your skill here: SkillzWave.ai | SpillWave. We have an agentic skill installer that installs skills across 14+ coding agent platforms. Check out this guide on how to improve your agentic skills.
Browse Category
More productivity Agentic Skills
Report Security Issue
Found a security vulnerability in this agent skill?