Tokenizer v0.2.7 uses chars instead of graphemes when computing matched byte lengths, so the `unicode-segmentation` dependency is no longer needed. Also oxidised the highlight code :) Co-authored-by: many <maxime@meilisearch.com>
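A minimal sketch of the idea (not Meilisearch's actual highlight code; `highlight_prefix` and its parameters are illustrative names): byte offsets of a matched prefix can be taken from `char_indices()` on the word itself, so no grapheme segmentation from the `unicode-segmentation` crate is required.

```rust
// Illustrative only: split a word after its first `n_matching_chars` characters,
// using char boundaries rather than grapheme clusters to find the byte offset.
fn highlight_prefix(word: &str, n_matching_chars: usize) -> (&str, &str) {
    // `char_indices` yields the byte offset of each char, so the split point
    // is always a valid UTF-8 boundary.
    let byte_len = word
        .char_indices()
        .nth(n_matching_chars)
        .map(|(idx, _)| idx)
        .unwrap_or(word.len());
    word.split_at(byte_len)
}

fn main() {
    let (matched, rest) = highlight_prefix("héllo", 3);
    println!("<em>{}</em>{}", matched, rest); // prints: <em>hél</em>lo
}
```

The trade-off is that a char is a Unicode scalar value, not a grapheme cluster, so combining sequences may be split differently than with grapheme-based matching; the commit accepts that in exchange for dropping the extra dependency.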