llama-cpp

62 stars 2 forks

"Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable. Supports GGUF quantization (1.5-8 bit) for reduced memory and 4-10× speedup vs PyTorch on CPU."


Third-Party Agent Skill: Review the code before installing. Agent skills execute in your AI assistant's environment and can access your files.

Installation for Agentic Skill

skilz install zechenzhangAGI/AI-research-SKILLs/llama-cpp
skilz install zechenzhangAGI/AI-research-SKILLs/llama-cpp --agent opencode
skilz install zechenzhangAGI/AI-research-SKILLs/llama-cpp --agent codex
skilz install zechenzhangAGI/AI-research-SKILLs/llama-cpp --agent gemini

First time? Install Skilz: pip install skilz

Works with 22+ AI coding assistants

Cursor, Aider, Copilot, Windsurf, Qwen, Kimi, and more...

Download Agent Skill ZIP

Extract the archive and copy it to ~/.claude/skills/, then restart Claude Desktop.

Or install manually:

1. Clone the repository:
git clone https://github.com/zechenzhangAGI/AI-research-SKILLs
2. Copy the agent skill directory:
cp -r AI-research-SKILLs/12-inference-serving/llama-cpp ~/.claude/skills/

Need detailed installation help? See the platform-specific guides.


Agentic Skill Details

Stars: 62
Forks: 2
Type: Technical
Meta-Domain: data ai
Primary Domain: machine learning
Market Score: 26
