The `dictionary` setting allows you to instruct Meilisearch to treat groups of strings as single terms by supplying a supplementary dictionary of user-defined entries. Entries in the dictionary override the default tokenizer, so Meilisearch recognizes them as indivisible tokens during both indexing and search.
## When to use a custom dictionary
A custom dictionary is particularly useful in two situations:

- Datasets with many domain-specific terms, and languages that do not separate words with whitespace, such as Japanese. Adding domain terms or uninterrupted character sequences to the dictionary ensures Meilisearch treats them as single units instead of fragmenting them during tokenization.
- Space-separated languages that contain names or abbreviations with interleaved dots and spaces, such as "J. R. R. Tolkien" and "W. E. B. Du Bois". Without a custom dictionary, the default tokenizer splits these names into separate letters and periods, which makes them hard to match as a cohesive term.
## Check current dictionary
Retrieve the current `dictionary` setting for an index:
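A minimal request sketch, assuming a Meilisearch instance running at `localhost:7700` and an index named `books` (both hypothetical placeholders; substitute your own host and index UID):

```shell
# Fetch the dictionary setting for the "books" index
curl \
  -X GET 'http://localhost:7700/indexes/books/settings/dictionary'
```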
The default value is `[]`, meaning no custom dictionary entries are configured.
## Update the dictionary
Add custom terms to the dictionary. The following update registers "J. R. R." and "W. E. B." as single tokens. Queries for "J. R. R. Tolkien" will then match documents where the name appears exactly as spelled, instead of being broken into separate characters.
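A request sketch using the same hypothetical `localhost:7700` instance and `books` index. The request body is a JSON array of the terms to preserve:

```shell
# Register "J. R. R." and "W. E. B." as indivisible tokens
curl \
  -X PUT 'http://localhost:7700/indexes/books/settings/dictionary' \
  -H 'Content-Type: application/json' \
  --data-binary '["J. R. R.", "W. E. B."]'
```

Like other settings updates, this returns a summarized task object; the new dictionary takes effect once the task has been processed.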
## Reset the dictionary
Clear the custom dictionary and return to the default tokenizer behavior:
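A reset sketch against the same hypothetical instance and index:

```shell
# Reset the dictionary setting to its default value, []
curl \
  -X DELETE 'http://localhost:7700/indexes/books/settings/dictionary'
```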
## Dictionary vs. synonyms vs. stop words

- Use `synonyms` to map different words to the same concept (for example, `NYC` to `New York City`).
- Use `stopWords` to ignore common terms that add no signal to search.
- Use `dictionary` to preserve multi-character or whitespace-containing sequences that the default tokenizer would otherwise split.
For the full API reference, see get dictionary.