
ComfyUI Worker

Production-ready, Docker-based ComfyUI worker with CUDA support, optimized for serverless environments and high-throughput generation workloads.

Dockerized
CUDA Support
RunPod Ready

┌─────────────────────────────────────────┐
│            ComfyUI Worker               │
│           (Docker Container)            │
├─────────────────────────────────────────┤
│  ┌──────────────┐     ┌──────────────┐  │
│  │   ComfyUI    │     │ Custom Nodes │  │
│  │    Core      │◄────┤  (Stable)    │  │
│  └──────┬───────┘     └──────────────┘  │
│         │                               │
│         ▼                               │
│  ┌──────────────┐     ┌──────────────┐  │
│  │  Model / LoRA│     │ Python venv  │  │
│  │    Cache     │     │ Requirements │  │
│  └──────┬───────┘     └──────────────┘  │
│         │                               │
├─────────┼───────────────────────────────┤
│         ▼                               │
│    NVIDIA CUDA Drivers / PyTorch        │
└─────────────────────────────────────────┘

Version Control

Strict version pinning for ComfyUI, custom nodes, and models to ensure reproducible generations across all environments.
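A minimal sketch of what build-time pinning can look like; the ref value and the `requirements.lock` filename below are illustrative placeholders, not the project's actual pinned versions:

```shell
# Illustrative build step: pin ComfyUI to an exact ref so rebuilds are
# reproducible. COMFYUI_REF is a placeholder, not the real pinned version.
set -euo pipefail

COMFYUI_REF="v0.3.10"   # exact tag or commit hash

git clone https://github.com/comfyanonymous/ComfyUI /comfyui
git -C /comfyui checkout "$COMFYUI_REF"

# Pin Python dependencies the same way: install from a lock file
# (hypothetical requirements.lock) instead of an unpinned requirements.txt.
python3 -m venv /comfyui/venv
/comfyui/venv/bin/pip install --no-cache-dir -r /comfyui/requirements.lock
```

Checking out an explicit ref instead of `main` is what makes the image rebuildable months later with identical behavior.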

Serverless Ready

Optimized for fast cold-starts on RunPod and Google Cloud Platform. Pre-cached models reduce startup time.
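Pre-caching usually means baking weights into the image at build time so a cold-started worker never downloads them. A sketch, with a placeholder URL and filename:

```shell
# Bake model weights into the image so cold starts skip the download.
# The URL and filename are placeholders for illustration only.
set -euo pipefail

MODELS_DIR=/comfyui/models/checkpoints
mkdir -p "$MODELS_DIR"

# -c resumes a partial download if the build layer is retried.
wget -c -O "$MODELS_DIR/base-model.safetensors" \
  "https://example.com/path/to/base-model.safetensors"
```

The trade-off is a larger image, but on serverless platforms pulling a cached image layer is typically far faster than fetching multi-gigabyte checkpoints at boot.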

Custom Nodes

Pre-installed and configured set of essential custom nodes for advanced workflows (ControlNet, IPAdapter, AnimateDiff).
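Custom nodes are plain git checkouts under `custom_nodes/`, so they can be pinned the same way as the core. A sketch with a hypothetical helper; the repo URL shown is one commonly used ControlNet node pack, and `"$PINNED_REF"` stands in for a real commit hash:

```shell
# Illustrative helper for installing custom nodes at pinned refs.
set -euo pipefail

NODES_DIR=/comfyui/custom_nodes

clone_pinned() {  # usage: clone_pinned <git-url> <ref>
  local url=$1 ref=$2 dest
  dest="$NODES_DIR/$(basename "$url" .git)"
  git clone "$url" "$dest"
  git -C "$dest" checkout "$ref"
}

clone_pinned https://github.com/Fannovel16/comfyui_controlnet_aux "$PINNED_REF"
```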

Built for Production

This project solves the "it works on my machine" problem for AI generation. By containerizing the entire ComfyUI environment with strict versioning, we ensure that every generation server behaves exactly the same.

  • Automated health checks and self-healing scripts
  • Optimized PyTorch & xFormers for maximum CUDA performance
  • Integrated API mode for headless operation
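ComfyUI serves an HTTP API on port 8188; its standard endpoints include `GET /system_stats` (useful as a health probe) and `POST /prompt` (queue a workflow). A sketch of both, assuming a local worker and a `workflow_api.json` exported from the UI in API format:

```shell
# Health probe: ComfyUI answers GET /system_stats once it is ready.
curl -fsS http://localhost:8188/system_stats > /dev/null && echo "healthy"

# Headless generation: POST an API-format workflow to /prompt.
# workflow_api.json is assumed to be a workflow saved via "Save (API Format)".
curl -fsS -X POST http://localhost:8188/prompt \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": $(cat workflow_api.json)}"
```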

Deployment Example

# Run with NVIDIA GPU
docker run -d \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)/models:/comfyui/models" \
  oohegor/comfyui-worker:latest