## Core Models

These are our recommended models for most users. They are automatically routed to the best underlying architecture.

### Pro

The most capable model for complex reasoning and creative tasks.

### Fast

Highly optimized for latency and quick interactions.

### Default

The standard balance of speed and capability for general use.
## Chat Models

Our chat models are designed for conversational AI, coding, and reasoning.

| ID | Provider | Tier | Description |
|---|---|---|---|
| default | llm.kiwi | Free | Auto-selects the best model for your request |
| fast | llm.kiwi | Free | Low-latency, speed-optimized |
| pro | llm.kiwi | Pro | High-capability model routing |
| gpt-4.1-nano-* | OpenAI | Pro | GPT-4.1 Nano: fast and efficient |
| gpt-5-mini | OpenAI | Pro | GPT-5 Mini: advanced reasoning |
| deepseek-v3.1 | DeepSeek | Pro | Excellent for coding |
| mistral-small-3.1-* | Mistral | Pro | Balanced performance |
| codestral-* | Mistral | Pro | Code generation specialist |
| ministral-8b-* | Mistral | Pro | Compact and fast |
| meta-llama/* | Meta | Pro | Open-source power |
| gemini-2.5-flash-lite | Google | Free | Ultra-fast reasoning |
| gemini-search | Google | Pro | Web-grounded responses |
| glm-4.5-flash | Zhipu | Pro | Chinese/English bilingual |
| bidara | Bidara | Free | Biomimicry design assistant |
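Any of the IDs above can be passed as the model name in a chat request. The sketch below assumes an OpenAI-compatible chat-completions endpoint; the URL, header names, and payload shape are assumptions, so check the llm.kiwi API reference for the actual contract.

```python
import json
import urllib.request

API_URL = "https://llm.kiwi/v1/chat/completions"  # assumed endpoint

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a chat-completion payload for a model ID from the table above."""
    return {
        "model": model,  # "default", "fast", "pro", or a provider-specific ID
        "messages": [{"role": "user", "content": prompt}],
    }

def send(payload: dict, api_key: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_chat_request("default", "Summarize this changelog.")
```

Swapping the model is a one-string change: replace "default" with "fast" or "pro" without touching the rest of the integration.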
## Image & Media Models

State-of-the-art models for creative and multimodal tasks.

| ID | Provider | Tier | Description |
|---|---|---|---|
| flux | Flux | Pro | High-quality image generation |
| whisper | OpenAI | Pro | Industry-standard speech-to-text |
## Model Updates

We continuously evaluate and update the underlying architecture of our models. Using the static slugs (`pro`, `fast`, `default`) ensures that your integration remains stable while automatically benefiting from the latest AI advancements.
> [!TIP]
> Use the `pro` model for tasks requiring high precision, and switch to `fast` for UI-critical elements where speed is paramount.
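This routing advice can be captured in a tiny helper. The function name and signature below are illustrative only, not part of the llm.kiwi API: it just maps UX requirements onto the stable slugs.

```python
def pick_model(latency_critical: bool, high_precision: bool) -> str:
    """Map UX requirements to one of the stable model slugs."""
    if latency_critical:
        return "fast"     # UI-critical paths where speed is paramount
    if high_precision:
        return "pro"      # complex reasoning, precision-sensitive tasks
    return "default"      # balanced choice for everything else

print(pick_model(True, False))   # -> fast
```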