| Category | LLM APIs & Inference | Automation Platforms |
| --- | --- | --- |
| Pricing | Free tier available, pay-per-token for production | Free (self-hosted), cloud from $20/mo |
| GitHub Stars | — | 52K+ |
| Platforms | Web | macOS, Linux, Windows, Docker |
| Key Features | ✓ LPU hardware: custom chips designed for inference, not repurposed GPUs<br>✓ GPT OSS 120B at 500 tok/s ($0.15/M input)<br>✓ GPT OSS 20B at 1000 tok/s ($0.075/M input)<br>✓ Llama 4 Scout 17B at 750 tok/s with 131K context + vision<br>✓ Qwen3-32B at 400 tok/s with 131K context<br>✓ Compound AI systems with web search + code execution<br>✓ Whisper transcription ($0.04-0.11/hour)<br>✓ OpenAI-compatible API, a drop-in replacement (see the sketch below the table)<br>✓ Free developer tier: 250-300K TPM, 1K RPM | ✓ 400+ integrations (APIs, databases, SaaS)<br>✓ Native AI nodes (LLM, vector store, RAG chains)<br>✓ Visual drag-and-drop workflow builder<br>✓ Self-hostable via Docker (full data control)<br>✓ Webhook triggers, cron schedules, event-driven execution (see the example below the table)<br>✓ JavaScript/Python code nodes for custom logic<br>✓ Credential management and encryption<br>✓ Active community (52K+ GitHub stars) |
| Pros | + Fastest inference available (500-1000 tok/s)<br>+ Free tier with generous limits (250K+ tokens/min)<br>+ OpenAI-compatible API: swap one line of code<br>+ Latest open-source models (GPT OSS, Llama 4, Qwen3)<br>+ Compound AI for agentic workflows (search + code execution) | + Self-hostable (full data control)<br>+ 400+ integrations<br>+ Visual workflow builder<br>+ Native AI/LLM nodes<br>+ Active community |
| Cons | − Cloud-only; the LPU hardware cannot be self-hosted<br>− Rate limits on the free tier (1K RPM)<br>− Smaller model catalog than running models locally via Ollama | − Resource-heavy to self-host<br>− Learning curve for complex workflows |
| Tags | inference, fast, free, hardware | automation, workflow, no-code, self-hosted, integrations |
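
The "OpenAI-compatible API" claim is worth making concrete: migrating an existing OpenAI integration is typically a one-line change to the client's base URL. Here is a minimal sketch using the official `openai` Python SDK; the endpoint URL, environment variable name, and model id are illustrative placeholders, not confirmed values from any provider's documentation.

```python
import os

from openai import OpenAI

# Point the stock OpenAI client at an OpenAI-compatible endpoint.
# base_url and the env var name below are placeholders; substitute
# the values from your provider's documentation.
client = OpenAI(
    base_url="https://api.example-inference.com/openai/v1",  # hypothetical endpoint
    api_key=os.environ["INFERENCE_API_KEY"],                 # hypothetical env var
)

# Everything below is unchanged from a standard OpenAI integration.
response = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # placeholder model id
    messages=[
        {"role": "user", "content": "Explain LPU inference in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

Because only the client constructor changes, the rest of the application code stays identical, which is what makes the swap low-risk.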
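
On the automation side, webhook triggers mean any system that can send an HTTP request can start a workflow. A minimal sketch, assuming a hypothetical webhook URL (the real URL is generated by the platform's webhook trigger node once the workflow is activated):

```python
import requests

# Hypothetical webhook URL; the real path is shown on the webhook
# trigger node after the workflow is activated.
WEBHOOK_URL = "https://automation.example.com/webhook/new-signup"

# Arbitrary JSON payload; downstream nodes in the workflow read
# these fields from the trigger item's data.
payload = {"email": "user@example.com", "plan": "pro"}

resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
resp.raise_for_status()
print("Workflow triggered:", resp.status_code)
```

The JSON body becomes the trigger item's payload, so the rest of the workflow (code nodes, integrations, AI nodes) can branch on those fields.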