
Agentic Skills by zechenzhangAGI
blip-2-vision-language
by zechenzhangAGI
Vision-language pre-training framework bridging frozen image encoders and LLMs. Use when you need image captioning, visual question answering, imag...
instructor
by zechenzhangAGI
Extract structured data from LLM responses with Pydantic validation, retry failed extractions automatically, parse complex JSON with type safety, a...
outlines
by zechenzhangAGI
Guarantee valid JSON/XML/code structure during generation, use Pydantic models for type-safe outputs, support local models (Transformers, vLLM), an...
mlflow
by zechenzhangAGI
Track ML experiments, manage model registry with versioning, deploy models to production, and reproduce experiments with MLflow - framework-agnosti...
long-context
by zechenzhangAGI
Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32...
dspy
by zechenzhangAGI
Build complex AI systems with declarative programming, optimize prompts automatically, create modular RAG systems and agents with DSPy - Stanford N...
tensorboard
by zechenzhangAGI
Visualize training metrics, debug models with histograms, compare experiments, visualize model graphs, and profile performance with TensorBoard - G...
moe-training
by zechenzhangAGI
Train Mixture of Experts (MoE) models using DeepSpeed or HuggingFace. Use when training large-scale models with limited compute (5× cost reduction ...
audiocraft-audio-generation
by zechenzhangAGI
PyTorch library for audio generation including text-to-music (MusicGen) and text-to-sound (AudioGen). Use when you need to generate music from text...
guidance
by zechenzhangAGI
Control LLM output with regex and grammars, guarantee valid JSON/XML/code generation, enforce structured formats, and build multi-step workflows wi...
pyvene-interventions
by zechenzhangAGI
Provides guidance for performing causal interventions on PyTorch models using pyvene's declarative intervention framework. Use when conducting caus...
huggingface-accelerate
by zechenzhangAGI
Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automati...
axolotl
by zechenzhangAGI
Expert guidance for fine-tuning LLMs with Axolotl - YAML configs, 100+ models, LoRA/QLoRA, DPO/KTO/ORPO/GRPO, multimodal support
quantizing-models-bitsandbytes
by zechenzhangAGI
Quantizes LLMs to 8-bit or 4-bit for 50-75% memory reduction with minimal accuracy loss. Use when GPU memory is limited, need to fit larger models...
chroma
by zechenzhangAGI
Open-source embedding database for AI applications. Store embeddings and metadata, perform vector and full-text search, filter by metadata. Simple...
clip
by zechenzhangAGI
OpenAI's model connecting vision and language. Enables zero-shot image classification, image-text matching, and cross-modal retrieval. Trained on ...
constitutional-ai
by zechenzhangAGI
Anthropic's method for training harmless AI through self-improvement. Two-phase approach: supervised learning with self-critique/revision, then RL...
deepspeed
by zechenzhangAGI
Expert guidance for distributed training with DeepSpeed - ZeRO optimization stages, pipeline parallelism, FP16/BF16/FP8, 1-bit Adam, sparse attent...
evaluating-llms-harness
by zechenzhangAGI
Evaluates LLMs across 60+ academic benchmarks (MMLU, HumanEval, GSM8K, TruthfulQA, HellaSwag). Use when benchmarking model quality, comparing mode...
faiss
by zechenzhangAGI
Facebook's library for efficient similarity search and clustering of dense vectors. Supports billions of vectors, GPU acceleration, and various in...
fine-tuning-with-trl
by zechenzhangAGI
Fine-tune LLMs using reinforcement learning with TRL: SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, ...
optimizing-attention-flash
by zechenzhangAGI
Optimizes transformer attention with Flash Attention for 2-4x speedup and 10-20x memory reduction. Use when training/running transformers with lon...
gptq
by zechenzhangAGI
Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use for deploying large models (70B, 405B) on consumer GPUs, when you need 4...
grpo-rl-training
by zechenzhangAGI
Expert guidance for GRPO/RL fine-tuning with TRL for reasoning and task-specific model training
huggingface-tokenizers
by zechenzhangAGI
Fast tokenizers optimized for research and production. Rust-based implementation tokenizes 1GB in <20 seconds. Supports BPE, WordPiece, and Unigra...
implementing-llms-litgpt
by zechenzhangAGI
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when need clean m...
langchain
by zechenzhangAGI
Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integr...
llama-cpp
by zechenzhangAGI
Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or wh...
llama-factory
by zechenzhangAGI
Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support
llamaguard
by zechenzhangAGI
Meta's 7-8B specialized moderation model for LLM input/output filtering. 6 safety categories: violence/hate, sexual content, weapons, substances, ...
llamaindex
by zechenzhangAGI
Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vecto...
llava
by zechenzhangAGI
Large Language and Vision Assistant. Enables visual instruction tuning and image-based conversations. Combines CLIP vision encoder with Vicuna/LLa...
mamba-architecture
by zechenzhangAGI
State-space model with O(n) complexity vs Transformers' O(n²). 5× faster inference, million-token sequences, no KV cache. Selective SSM with hardw...
training-llms-megatron
by zechenzhangAGI
Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use when training models >1B pa...
nanogpt
by zechenzhangAGI
Educational GPT implementation in ~300 lines. Reproduces GPT-2 (124M) on OpenWebText. Clean, hackable code for learning transformers. By Andrej Ka...
nemo-curator
by zechenzhangAGI
GPU-accelerated data curation for LLM training. Supports text/image/video/audio. Features: fuzzy deduplication (16× faster), quality filtering (30...
nemo-guardrails
by zechenzhangAGI
NVIDIA's runtime safety framework for LLM applications. Features: jailbreak detection, input/output validation, fact-checking, hallucination detec...
openrlhf-training
by zechenzhangAGI
High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, DPO training of large models (7B-70B+). Built on Ray, vLLM, Z...
pinecone
by zechenzhangAGI
Managed vector database for production AI applications. Fully managed, auto-scaling, with hybrid search (dense + sparse), metadata filtering, and ...
pytorch-fsdp
by zechenzhangAGI
Expert guidance for Fully Sharded Data Parallel training with PyTorch FSDP - parameter sharding, mixed precision, CPU offloading, FSDP2
pytorch-lightning
by zechenzhangAGI
High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. S...
ray-data
by zechenzhangAGI
Scalable data processing for ML workloads. Streaming execution across CPU/GPU, supports Parquet/CSV/JSON/images. Integrates with Ray Train, PyTorc...
ray-train
by zechenzhangAGI
Distributed training orchestration across clusters. Scales PyTorch/TensorFlow/HuggingFace from laptop to 1000s of nodes. Built-in hyperparameter t...
rwkv-architecture
by zechenzhangAGI
RNN+Transformer hybrid with O(n) inference. Linear time, infinite context, no KV cache. Train like GPT (parallel), infer like RNN (sequential). Li...
sentence-transformers
by zechenzhangAGI
Framework for state-of-the-art sentence, text, and image embeddings. Provides 5000+ pre-trained models for semantic similarity, clustering, and re...
sentencepiece
by zechenzhangAGI
Language-independent tokenizer treating text as raw Unicode. Supports BPE and Unigram algorithms. Fast (50k sentences/sec), lightweight (6MB memor...
serving-llms-vllm
by zechenzhangAGI
Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference...
sglang
by zechenzhangAGI
Fast structured generation and serving for LLMs with RadixAttention prefix caching. Use for JSON/regex outputs, constrained decoding, agentic work...
simpo-training
by zechenzhangAGI
Simple Preference Optimization for LLM alignment. Reference-free alternative to DPO with better performance (+6.4 points on AlpacaEval 2.0). No re...
tensorrt-llm
by zechenzhangAGI
Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), ...
unsloth
by zechenzhangAGI
Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization
whisper
by zechenzhangAGI
OpenAI's general-purpose speech recognition model. Supports 99 languages, transcription, translation to English, and language identification. Six ...
crewai-multi-agent
by zechenzhangAGI
Multi-agent orchestration framework for autonomous AI collaboration. Use when building teams of specialized agents working together on complex task...
nnsight-remote-interpretability
by zechenzhangAGI
Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when needing to...
sparse-autoencoder-training
by zechenzhangAGI
Provides guidance for training and analyzing Sparse Autoencoders (SAEs) using SAELens to decompose neural network activations into interpretable fe...
speculative-decoding
by zechenzhangAGI
Accelerate LLM inference using speculative decoding, Medusa multiple heads, and lookahead decoding techniques. Use when optimizing inference speed ...
weights-and-biases
by zechenzhangAGI
Track ML experiments with automatic logging, visualize training in real-time, optimize hyperparameters with sweeps, and manage model registry with ...
model-merging
by zechenzhangAGI
Merge multiple fine-tuned models using mergekit to combine capabilities without retraining. Use when creating specialized models by blending domain...
transformer-lens-interpretability
by zechenzhangAGI
Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints an...
qdrant-vector-search
by zechenzhangAGI
High-performance vector similarity search engine for RAG and semantic search. Use when building production RAG systems requiring fast nearest neigh...
segment-anything-model
by zechenzhangAGI
Foundation model for image segmentation with zero-shot transfer. Use when you need to segment any object in images using points, boxes, or masks as...
evaluating-code-models
by zechenzhangAGI
Evaluates code generation models across HumanEval, MBPP, MultiPL-E, and 15+ benchmarks with pass@k metrics. Use when benchmarking code models, comp...
stable-diffusion-image-generation
by zechenzhangAGI
State-of-the-art text-to-image generation with Stable Diffusion models via HuggingFace Diffusers. Use when generating images from text prompts, per...
model-pruning
by zechenzhangAGI
Reduce LLM size and accelerate inference using pruning techniques like Wanda and SparseGPT. Use when compressing models without retraining, achievi...
peft-fine-tuning
by zechenzhangAGI
Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, wh...
lambda-labs-gpu-cloud
by zechenzhangAGI
Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persist...
knowledge-distillation
by zechenzhangAGI
Compress large language models using knowledge distillation from teacher to student models. Use when deploying smaller models with retained perform...
hqq-quantization
by zechenzhangAGI
Half-Quadratic Quantization for LLMs without calibration data. Use when quantizing models to 4/3/2-bit precision without needing calibration datase...
phoenix-observability
by zechenzhangAGI
Open-source AI observability platform for LLM tracing, evaluation, and monitoring. Use when debugging LLM applications with detailed traces, runnin...
gguf-quantization
by zechenzhangAGI
GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware, Apple Silicon, or when need...
langsmith-observability
by zechenzhangAGI
LLM observability platform for tracing, evaluation, and monitoring. Use when debugging LLM applications, evaluating model outputs against datasets,...
skypilot-multi-cloud-orchestration
by zechenzhangAGI
Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds...
autogpt-agents
by zechenzhangAGI
Autonomous AI agent platform for building and deploying continuous agents. Use when creating visual workflow agents, deploying persistent autonomou...
modal-serverless-gpu
by zechenzhangAGI
Serverless GPU cloud platform for running ML workloads. Use when you need on-demand GPU access without infrastructure management, deploying ML mode...
awq-quantization
by zechenzhangAGI
Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) ...