quantizing-models-bitsandbytes


"Quantizes LLMs to 8-bit or 4-bit for a 50-75% memory reduction with minimal accuracy loss. Use when GPU memory is limited, when you need to fit larger models, or when you want faster inference. Supports INT8, NF4, and FP4 formats, QLoRA training, and 8-bit optimizers. Works with HuggingFace Transformers."
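The 50-75% figure follows directly from bytes-per-parameter arithmetic on the weights alone (activations and optimizer state are extra). A quick sketch, using an illustrative 7B-parameter model:

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

params = 7e9  # illustrative 7B-parameter model

fp16 = weight_memory_gb(params, 16)  # 14.0 GB baseline
int8 = weight_memory_gb(params, 8)   # 7.0 GB
nf4 = weight_memory_gb(params, 4)    # 3.5 GB

print(f"INT8 saves {1 - int8 / fp16:.0%}")  # 50%
print(f"NF4 saves {1 - nf4 / fp16:.0%}")    # 75%
```

INT8 halves the 16-bit baseline (50% reduction) and 4-bit formats like NF4 quarter it (75%), which is where the advertised range comes from.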

Third-Party Agent Skill: review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.

Installing the Agent Skill

skilz install zechenzhangAGI/AI-research-SKILLs/quantizing-models-bitsandbytes
skilz install zechenzhangAGI/AI-research-SKILLs/quantizing-models-bitsandbytes --agent opencode
skilz install zechenzhangAGI/AI-research-SKILLs/quantizing-models-bitsandbytes --agent codex
skilz install zechenzhangAGI/AI-research-SKILLs/quantizing-models-bitsandbytes --agent gemini

First time? Install Skilz: pip install skilz

Works with 22+ AI coding assistants

Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...

Download Agent Skill ZIP

Extract the ZIP and copy it to ~/.claude/skills/, then restart Claude Desktop.

1. Clone the repository:
git clone https://github.com/zechenzhangAGI/AI-research-SKILLs
2. Copy the agent skill directory:
cp -r AI-research-SKILLs/10-optimization/bitsandbytes ~/.claude/skills/

Need detailed installation help? Check the platform-specific guides.


Agent Skill Details

Stars: 62
Forks: 2
Type: Technical
Meta-Domain: data ai
Primary Domain: machine learning
Market Score: 26

Report Security Issue: found a security vulnerability in this agent skill? Please report it.