Playground

The Playground is a built-in testing environment inside the SynapsAI console. It lets you send messages to your agent exactly as an end user would — but with full visibility into how the agent retrieves knowledge and generates responses.

Before embedding your agent on a website or connecting it to WhatsApp, use the Playground to:

  • Validate knowledge base coverage — ask questions and check whether the agent finds the right documents.
  • Refine the system prompt — adjust tone, guardrails, and fallback behavior based on real responses.
  • Test edge cases — try off-topic questions, ambiguous queries, or languages the agent may not support.
  • Preview rich components — see how carousels, buttons, forms, and rating prompts render in a live conversation.
To open the Playground:

  1. Open the SynapsAI console at console.synapsai.app.
  2. Select an agent from the Agents list.
  3. Click Playground in the agent sidebar.

Type a message in the input field and press Enter (or click the send button). The agent will:

  1. Search the knowledge base for relevant content (RAG retrieval).
  2. Pass the retrieved context and your system prompt to the selected LLM.
  3. Return a generated response in the chat window.

Each response appears in real time, so you can evaluate latency alongside quality.
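The three-step flow above can be sketched in Python. The knowledge base, keyword-overlap retriever, and `generate()` stub below are illustrative stand-ins, not SynapsAI internals — real RAG retrieval uses embeddings, and the model call goes to the LLM you selected.

```python
import re

# Tiny stand-in knowledge base (illustrative only).
KNOWLEDGE_BASE = [
    "Pricing: the Pro plan costs $29 per month.",
    "Support: email support is available 24/7.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, k: int = 1) -> list[str]:
    """Step 1: rank documents by naive keyword overlap (a stand-in for
    embedding-based RAG retrieval)."""
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(tokens(query) & tokens(doc)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(system_prompt: str, context: list[str], user_message: str) -> str:
    """Step 2: combine the system prompt, retrieved context, and user message."""
    return f"{system_prompt}\nContext:\n" + "\n".join(context) + f"\nUser: {user_message}"

def generate(prompt: str) -> str:
    """Step 3: stand-in for the LLM call — echoes the prompt it was given."""
    return "Answer based on: " + prompt

question = "What does the Pro plan cost?"
context = retrieve(question)
prompt = build_prompt("You are a helpful support agent.", context, question)
print(generate(prompt))
```

Swapping `generate()` for a real model call is all that separates this sketch from the production flow: retrieval quality and prompt assembly are what the Playground lets you inspect.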

The Playground maintains conversation history within a session, meaning the agent can reference earlier messages. This is useful for testing:

  • Follow-up questions — “Tell me more about that” or “What about pricing?”
  • Context retention — does the agent remember the user’s name or topic across turns?
  • Conversation flow — multi-step interactions like booking or form filling.

To start fresh, click the New Conversation button to clear the history.
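Conceptually, session history is just an ordered list of turns that gets sent along with each new message; clicking New Conversation empties it. The `Session` class below is an illustrative sketch of that behavior, not the SynapsAI data model.

```python
# Sketch of per-session conversation state: prior turns are kept so the agent
# can resolve follow-ups like "What about pricing?"; New Conversation clears it.
# This class is illustrative, not SynapsAI's actual implementation.

class Session:
    def __init__(self) -> None:
        self.history: list[dict[str, str]] = []

    def send(self, user_message: str) -> list[dict[str, str]]:
        """Append the user turn; the full history is what the model sees."""
        self.history.append({"role": "user", "content": user_message})
        # In the real flow, retrieved context plus this history go to the LLM,
        # and the assistant's reply is appended in turn.
        self.history.append({"role": "assistant", "content": f"(reply to: {user_message})"})
        return self.history

    def new_conversation(self) -> None:
        """Equivalent of the New Conversation button: start from a clean slate."""
        self.history.clear()

session = Session()
session.send("What plans do you offer?")
session.send("What about pricing?")   # resolvable only because of the earlier turn
print(len(session.history))           # 4 turns: two user + two assistant
session.new_conversation()
print(len(session.history))           # 0
```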

Things to try in the Playground, organized by goal:

  • Check knowledge accuracy — ask a specific question from your uploaded documents and compare the answer to the source.
  • Test fallback behavior — ask something the knowledge base doesn’t cover; the agent should follow your system prompt instructions (e.g., “I don’t have information about that”).
  • Evaluate tone — ask the same question in different ways (formal, casual, frustrated) and check that the agent responds consistently.
  • Stress-test limits — send very long messages, rapid consecutive questions, or queries in unsupported languages.
  • Verify extensions — if you’ve enabled extensions (e.g., Stripe, Cal.com), trigger extension-related queries to confirm the agent calls the right tools.

The Playground works best as part of an iterative loop:

  1. Test — send a batch of representative questions.
  2. Identify gaps — note wrong answers, missing knowledge, or tone issues.
  3. Improve — update the knowledge base, adjust the system prompt, or change the model.
  4. Re-test — go back to the Playground and verify improvements.

Repeat until you’re confident in the agent’s quality, then deploy to a live channel.
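The test step of that loop can be scripted as a batch of representative questions with expected answer fragments. In the sketch below, `run_agent()` is a stand-in for sending a Playground message, and the cases and expected substrings are hypothetical examples, not a SynapsAI API.

```python
# Sketch of the test -> identify gaps -> improve -> re-test loop as a script.
# run_agent() and the canned answers are illustrative stand-ins.

def run_agent(question: str) -> str:
    """Stand-in agent: replace with a real call to your agent."""
    canned = {
        "What does the Pro plan cost?": "The Pro plan costs $29 per month.",
        "Do you support SSO?": "I don't have information about that.",
    }
    return canned.get(question, "I don't have information about that.")

# Representative questions paired with a substring the answer must contain.
CASES = [
    ("What does the Pro plan cost?", "$29"),
    ("Do you support SSO?", "don't have information"),  # fallback behavior
]

def run_batch(cases: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return the failing cases so you know where to improve the agent."""
    gaps = []
    for question, expected in cases:
        answer = run_agent(question)
        if expected not in answer:
            gaps.append((question, answer))
    return gaps

print(run_batch(CASES))  # [] means every case passed; otherwise fix and re-test
```

Each failing case points at a concrete fix: a missing document, a system prompt tweak, or a different model.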

Next steps:

  • Configure behavior — fine-tune the system prompt that shapes your agent’s responses.
  • Add documents — expand the knowledge base to improve answer coverage.
  • Live Chat — embed the agent on your website once you’re satisfied with Playground results.