The Tabbed Interface
When creating or editing an assistant, you’ll see three main tabs.
What is the LLM?
The LLM (Large Language Model) is the “brain” of your assistant. It understands what callers say and generates smart, helpful responses.
Key Settings
How do Fallbacks Work?
If your primary LLM provider fails (for example, due to a rate limit or downtime), the system automatically falls back to your backup providers in the order you list them. This keeps your assistant responsive even when a single provider has issues.
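As a rough illustration of this pattern, the sketch below tries each configured provider in order and moves to the next one on failure. The provider names, the complete() function, and the ProviderError exception are placeholders for this example, not part of the actual configuration API.

```python
# Minimal sketch of ordered provider fallback.
# Provider names, complete(), and ProviderError are illustrative placeholders.

class ProviderError(Exception):
    """Raised when a provider is rate-limited or unreachable."""

def complete(provider: str, prompt: str) -> str:
    """Placeholder for a real LLM call to the given provider."""
    raise ProviderError(f"{provider} is unavailable")

def generate_reply(prompt: str, providers: list[str]) -> str:
    # Try the primary provider first, then each backup in the order listed.
    last_error = None
    for provider in providers:
        try:
            return complete(provider, prompt)
        except ProviderError as err:
            last_error = err  # record the failure and try the next provider
    raise RuntimeError(f"All providers failed: {last_error}")

# Example: primary provider first, backups after it.
# reply = generate_reply("Hello, caller!", ["provider-a", "provider-b", "provider-c"])
```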
Best Practices
- Start simple: Use recommended defaults, then experiment with advanced settings
- Test with real calls: Try different voices, models, and fallback setups
- Document your changes: Keep track of what works best for your use case
What’s Next?
📞 Call Management
Configure interruption handling, timeouts, and conversation flow control
🎙️ STT Advanced Settings
Fine-tune speech detection timing and audio processing
🔊 TTS Provider Details
Deep dive into voice options and audio optimization
🛠️ Tools & Custom Actions
Built-in call actions plus custom integrations with APIs, Python functions, and AWS Lambda