Before building a chat interface or generating summarized answers, you need to enable the feature, configure your indexes, and create a workspace. This setup is shared across all conversational search use cases.

Enable the chat completions feature

Enable chat completions from your Meilisearch Cloud project in one of two ways:
  • Go to your project’s Settings page and enable it under Experimental features
  • Or open the Chat tab in your project and activate the feature directly from there
For self-hosted instances, enable the feature through the experimental features API by sending a PATCH request with chatCompletions set to true:
curl \
  -X PATCH 'MEILISEARCH_URL/experimental-features/' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "chatCompletions": true
  }'
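To confirm the feature is active, query the experimental features endpoint and check that chatCompletions is true:

```shell
# List the current state of all experimental features
curl \
  -X GET 'MEILISEARCH_URL/experimental-features/' \
  -H 'Authorization: Bearer MASTER_KEY'
```

The response is a JSON object of feature flags; chatCompletions should read true after the PATCH above.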

Find your chat API key

Meilisearch automatically generates a “Default Chat API Key” that combines the chatCompletions and search permissions on all indexes. Conversational search requires both actions: chatCompletions authorizes the LLM call, and search authorizes the retrieval step that feeds documents to the model. Any key you use with the /chats routes must carry both actions, so prefer the default chat API key unless you have a specific reason to create a custom one. Check whether you have the key using:
curl \
  -X GET 'MEILISEARCH_URL/keys' \
  -H 'Authorization: Bearer MASTER_KEY'
Look for the key with the description “Default Chat API Key”.
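If you have jq installed, you can filter the response for the default key directly. This sketch assumes the key still carries its auto-generated description:

```shell
# Extract only the default chat API key from the key list
curl -s \
  -X GET 'MEILISEARCH_URL/keys' \
  -H 'Authorization: Bearer MASTER_KEY' \
  | jq '.results[] | select(.description == "Default Chat API Key")'
```

The key field of the matching object is the value to use as the bearer token in chat requests.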

Restrict chat access to specific indexes

Chat queries only search the indexes that the API key can access. The default chat API key is scoped to all indexes. To limit which indexes a chat client can reach, you have two options:
  • Create a new API key with both chatCompletions and search actions, scoped to the exact indexes you want exposed. See manage API keys for the full workflow.
  • Generate a tenant token from the default chat API key. Tenant tokens inherit both the chatCompletions and search actions from their parent key and let you narrow index access or attach search rules per user.
A tenant token cannot grant access to an index its parent API key does not already cover. Make sure the parent key is scoped to every index the token should be allowed to reach.

Troubleshooting: Missing default chat API key

If your instance does not have a Default Chat API Key, create one manually:
curl \
  -X POST 'MEILISEARCH_URL/keys' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "name": "Chat API Key",
    "description": "API key for chat completions",
    "actions": ["search", "chatCompletions"],
    "indexes": ["*"],
    "expiresAt": null
  }'

Configure your indexes

Configure the chat settings for each index you want to make available to the conversational search agent:
curl \
  -X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/chat' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "description": "A movie database containing titles, genres, release dates, keywords, and plot overviews to help users find films to watch",
    "documentTemplate": "A movie titled '\''{{doc.title}}'\'' that released in {{ doc.release_date | date: '\''%Y'\'' }}. The movie genres are: {{doc.genres}}. The key themes include: {{doc.keywords}}. The storyline is about: {{doc.overview|truncatewords: 100}}",
    "documentTemplateMaxBytes": 400
  }'
  • description tells the LLM what the index contains. A good description helps the agent decide which index to search and improves answer relevance. See optimize chat prompts for tips on writing effective descriptions
  • documentTemplate is a Liquid template that defines the text representation of each document sent to the LLM. Write it as natural language so the model can extract relevant information easily. Consult the document template best practices article for more guidance
  • documentTemplateMaxBytes sets a size limit on the text generated from the template. If the rendered text exceeds this limit, it is truncated. The default of 400 bytes balances context quality and speed
You can also configure searchParameters to control how the LLM searches the index (hybrid search, result limits, sorting, etc.). See configure index chat settings for all available options.
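As a sketch of what searchParameters can look like, the following request caps each retrieval at ten results and enables hybrid search. The field values are illustrative assumptions — in particular, the embedder name "default" must match an embedder you have actually configured on the index:

```shell
# Illustrative only: adjust limit, semanticRatio, and the embedder name
# to match your own index configuration
curl \
  -X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/chat' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "searchParameters": {
      "limit": 10,
      "hybrid": {
        "embedder": "default",
        "semanticRatio": 0.5
      }
    }
  }'
```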

Configure a workspace

A workspace holds your LLM provider configuration and system prompt. Each workspace can:
  • Connect to a different LLM provider (OpenAI, Azure OpenAI, Mistral, vLLM, or any OpenAI-compatible provider)
  • Define its own system prompt and conversation context
  • Access a specific set of indexes
The specific model to use is chosen per request in the /chat/completions call, not in the workspace settings.
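For reference, a per-request model choice might look like the following minimal sketch. The route follows the OpenAI chat completions shape, and the model name gpt-4o-mini is only a placeholder for whatever model your configured provider supports:

```shell
# Hypothetical example request: the model value depends on your provider
curl \
  -X POST 'MEILISEARCH_URL/chats/WORKSPACE_NAME/chat/completions' \
  -H 'Authorization: Bearer CHAT_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Which movies are about space travel?" }
    ],
    "stream": true
  }'
```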

On Meilisearch Cloud

Your project comes with a single default workspace named cloud. Use cloud as the WORKSPACE_NAME in all API calls:
curl \
  -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "PROVIDER_API_KEY",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
If your use case requires multiple workspaces, contact us. This limit may change in the future.

On self-hosted instances

You can create as many workspaces as you need. Choose any name for WORKSPACE_NAME — if the workspace does not exist, Meilisearch creates it automatically:
curl \
  -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "PROVIDER_API_KEY",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
baseUrl is required for all providers except OpenAI; for OpenAI it is optional and only needed when you use a custom endpoint. See the workspace settings API reference for all available fields.
The prompts.system field gives the agent its baseline instructions. For guidance on writing effective prompts, see configure guardrails and optimize chat prompts.
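As an example of a non-OpenAI provider, a workspace pointing at a self-hosted vLLM server could look like this sketch — the source value, local URL, and port are illustrative assumptions, so check the workspace settings reference for the exact enum values your version accepts:

```shell
# Hypothetical vLLM workspace: baseUrl points at your own server
curl \
  -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "vLlm",
    "baseUrl": "http://localhost:8000/v1",
    "apiKey": "PROVIDER_API_KEY",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```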

Next steps

Your conversational search setup is complete. Choose how you want to use it:

Build a chat interface

Create a multi-turn conversational interface where users ask follow-up questions.

Generate summarized answers

Display concise AI-generated answers alongside traditional search results.