Playground
The Playground is a built-in testing environment inside the SynapsAI console. It lets you send messages to your agent exactly as an end user would — but with full visibility into how the agent retrieves knowledge and generates responses.
Why use the Playground?
Before embedding your agent on a website or connecting it to WhatsApp, use the Playground to:
- Validate knowledge base coverage — ask questions and check whether the agent finds the right documents.
- Refine the system prompt — adjust tone, guardrails, and fallback behavior based on real responses.
- Test edge cases — try off-topic questions, ambiguous queries, or languages the agent may not support.
- Preview rich components — see how carousels, buttons, forms, and rating prompts render in a live conversation.
How to access it
1. Open the SynapsAI console at console.synapsai.app.
2. Select an agent from the Agents list.
3. Click Playground in the agent sidebar.
Sending messages
Type a message in the input field and press Enter (or click the send button). The agent will:
1. Search the knowledge base for relevant content (RAG retrieval).
2. Pass the retrieved context and your system prompt to the selected LLM.
3. Return a generated response in the chat window.
Each response appears in real time, so you can evaluate latency alongside quality.
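The three steps above can be sketched in miniature. This is a hypothetical illustration of the retrieve-then-generate shape, not SynapsAI's actual pipeline: `retrieve` and `build_prompt` are made-up names, and naive keyword overlap stands in for real vector search.

```python
import re

def retrieve(knowledge_base: dict[str, str], query: str) -> list[str]:
    """Naive keyword-overlap lookup standing in for real RAG retrieval."""
    terms = set(re.findall(r"\w+", query.lower()))
    return [doc for doc in knowledge_base.values()
            if terms & set(re.findall(r"\w+", doc.lower()))]

def build_prompt(system_prompt: str, context: list[str], message: str) -> str:
    """Assemble what reaches the LLM: system prompt + retrieved context + user message."""
    context_block = "\n".join(context) if context else "(no matching documents)"
    return f"{system_prompt}\n\nContext:\n{context_block}\n\nUser: {message}"

kb = {"pricing.md": "Our pricing starts at $10 per month."}
docs = retrieve(kb, "What is your pricing?")
print(build_prompt("You are a helpful support agent.", docs, "What is your pricing?"))
```

The Playground's value is that it shows you the output of this whole chain at once, so a bad answer could come from retrieval (step 1) or generation (steps 2 and 3) and you can probe both.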
Conversation context
The Playground maintains conversation history within a session, meaning the agent can reference earlier messages. This is useful for testing:
- Follow-up questions — “Tell me more about that” or “What about pricing?”
- Context retention — does the agent remember the user’s name or topic across turns?
- Conversation flow — multi-step interactions like booking or form filling.
To start fresh, click the New Conversation button to clear the history.
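Conceptually, session history is an append-only list of turns that the agent sees on every request, and New Conversation simply clears it. The sketch below is illustrative only; the class and method names are hypothetical and not part of any SynapsAI API.

```python
class PlaygroundSession:
    """Toy model of session-scoped conversation history."""

    def __init__(self) -> None:
        self.history: list[dict[str, str]] = []

    def send(self, message: str) -> list[dict[str, str]]:
        """Record a user turn and return the full context the model would see."""
        self.history.append({"role": "user", "content": message})
        return list(self.history)

    def record_reply(self, reply: str) -> None:
        """Record the agent's answer so follow-ups can reference it."""
        self.history.append({"role": "assistant", "content": reply})

    def new_conversation(self) -> None:
        """Equivalent of the New Conversation button: start from a clean slate."""
        self.history.clear()

session = PlaygroundSession()
session.send("My name is Dana.")
session.record_reply("Nice to meet you, Dana!")
context = session.send("What did I say my name was?")
print(len(context))  # three turns of accumulated context
```

This is why a follow-up like "Tell me more about that" can work: the earlier turns travel with every new message until you reset the session.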
Testing tips
| Goal | What to try |
|---|---|
| Check knowledge accuracy | Ask a specific question from your uploaded documents and compare the answer to the source. |
| Test fallback behavior | Ask something the knowledge base doesn’t cover — the agent should follow your system prompt instructions (e.g., “I don’t have information about that”). |
| Evaluate tone | Ask the same question in different ways (formal, casual, frustrated) and check that the agent responds consistently. |
| Stress test limits | Send very long messages, rapid consecutive questions, or queries in unsupported languages. |
| Verify extensions | If you’ve enabled extensions (e.g., Stripe, Cal.com), trigger extension-related queries to see if the agent calls the right tools. |
Iterating on your agent
The Playground works best as part of an iterative loop:
1. Test — send a batch of representative questions.
2. Identify gaps — note wrong answers, missing knowledge, or tone issues.
3. Improve — update the knowledge base, adjust the system prompt, or change the model.
4. Re-test — go back to the Playground and verify improvements.
Repeat until you’re confident in the agent’s quality, then deploy to a live channel.
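If you keep a fixed batch of representative questions, the test and re-test steps can be semi-automated. A minimal sketch, assuming you wrap your agent in some callable: `ask` below is a stub, not a real SynapsAI client, and the expected-phrase check is a deliberately simple pass/fail heuristic.

```python
def ask(question: str) -> str:
    """Stub agent: replace with a real call to your deployed agent."""
    canned = {"What is your refund policy?": "Refunds are available within 30 days."}
    return canned.get(question, "I don't have information about that.")

def run_batch(cases: list[tuple[str, str]]) -> list[str]:
    """Return the questions whose answers lack the expected phrase (the gaps)."""
    return [q for q, expected in cases if expected.lower() not in ask(q).lower()]

gaps = run_batch([
    ("What is your refund policy?", "30 days"),
    ("Do you ship to Mars?", "don't have information"),  # fallback expected
])
print(f"{len(gaps)} gap(s) found")
```

Running the same batch before and after each change makes it easy to see whether a knowledge base or prompt edit actually closed the gaps you identified.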
Next steps
- Configure behavior — fine-tune the system prompt that shapes your agent’s responses.
- Add documents — expand the knowledge base to improve answer coverage.
- Live Chat — embed the agent on your website once you’re satisfied with Playground results.