A lightning-fast search API that fits effortlessly into your apps, websites, and workflow
Merge #426
426: Fix search highlight for non-unicode chars r=ManyTheFish a=Samyak2

# Pull Request

## What does this PR do?
Fixes https://github.com/meilisearch/MeiliSearch/issues/1480

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

## Changes

The `matching_bytes` function now takes a `&Token` and:
- gets the number of bytes to highlight (unchanged).
- uses `Token.num_graphemes_from_bytes` to get the number of grapheme clusters to highlight.

In essence, the `matching_bytes` function now returns the number of matching grapheme clusters instead of bytes.
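For illustration, here is a minimal sketch of that idea using the `unicode-segmentation` crate (not the actual milli code; the function name `matching_graphemes` is made up here):

```rust
use unicode_segmentation::UnicodeSegmentation;

// Count how many grapheme clusters are covered by the first
// `matching_bytes` bytes of `word`.
fn matching_graphemes(word: &str, matching_bytes: usize) -> usize {
    word.grapheme_indices(true)
        .take_while(|(byte_index, _)| *byte_index < matching_bytes)
        .count()
}

fn main() {
    // "é" is 2 bytes but a single grapheme cluster, so counting bytes
    // and counting graphemes give different highlight lengths.
    assert_eq!(matching_graphemes("écrit", 2), 1); // highlights "é"
    assert_eq!(matching_graphemes("écrit", 3), 2); // highlights "éc"
}
```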

Added proper highlighting in the HTTP UI:
- requires a dependency on `unicode-segmentation` to extract grapheme clusters from tokens
- the `<mark>` tag is now put around only the matched part (see the sketch below)
    - before this change, the entire word was highlighted even if only a part of it matched
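
A hypothetical sketch of that wrapping step (again, not the actual http-ui code), given a grapheme count like the one computed above:

```rust
use unicode_segmentation::UnicodeSegmentation;

// Wrap only the first `matching_graphemes` grapheme clusters of `word`
// in a <mark> tag, leaving the rest of the word untouched.
fn highlight(word: &str, matching_graphemes: usize) -> String {
    let mut graphemes = word.graphemes(true);
    let matched: String = graphemes.by_ref().take(matching_graphemes).collect();
    let rest: String = graphemes.collect();
    format!("<mark>{}</mark>{}", matched, rest)
}

// highlight("écrit", 1) == "<mark>é</mark>crit"
```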

## Questions

Since `matching_bytes` no longer returns a number of bytes but a number of grapheme clusters, should it be renamed to something like `matching_chars` or `matching_graphemes`? Will this break the API?

Thank you very much `@ManyTheFish` for helping 😄 

Co-authored-by: Samyak S Sarnayak <samyak201@gmail.com>

the milli logo

a concurrent indexer combined with fast and relevant search algorithms

Introduction

This repository contains the core engine used in MeiliSearch.

It contains a library that manages one and only one index; MeiliSearch itself handles the multi-index logic. Milli does not store pending updates: that is the job of the layer above it, which is why milli can only process one update at a time.

This repository contains crates to quickly debug the engine:

  • There are benchmarks located in the benchmarks crate.
  • The http-ui crate is a simple HTTP dashboard to test the features for real!
  • The infos crate is used to dump the internal data structures and ensure correctness.
  • The search crate is a simple command-line tool that helps run flamegraphs on top of the engine.
  • The helpers crate is only used to modify the database in place, sometimes.

Compile and run the HTTP debug server

You can specify the number of threads to use when indexing documents, along with many other settings:

cd http-ui
cargo run --release -- --db my-database.mdb -vvv --indexing-jobs 8

Index your documents

It can index a massive number of documents in not much time; I was already able to index:

  • 115M songs (song and artist names) in ~48 min, taking 81 GiB on disk.
  • 12M cities (name, timezone and country ID) in ~4 min, taking 6 GiB on disk.

These metrics were measured on a MacBook Pro with an M1 processor.

You can feed the engine with your CSV (comma-separated, yes) data like this:

printf "id,name,age\n1,hello,32\n2,kiki,24\n" | http POST 127.0.0.1:9700/documents content-type:text/csv

Don't forget to specify the id of the documents. Also note that the engine supports JSON and JSON streaming (NDJSON): you can send them by using the content-type:application/json and content-type:application/x-ndjson headers, respectively.
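
For example, the same two documents as in the CSV example above could be sent as NDJSON like this (illustrative only, reusing the endpoint and header mentioned above):

printf '{"id":1,"name":"hello","age":32}\n{"id":2,"name":"kiki","age":24}\n' | http POST 127.0.0.1:9700/documents content-type:application/x-ndjson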

Querying the engine via the website

You can query the engine by opening the HTML page it serves in your browser (127.0.0.1:9700 in the example above).

Contributing

You can set up a git hook to stop you from committing too hastily. It'll stop you if:

  • Any of the workspaces does not build
  • Your code is not well-formatted

These two things are also checked in the CI, so ignoring the hook won't help you merge your code. But if you need to, you can still add --no-verify when creating your commit to ignore the hook.
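
For reference, the checks performed by such a hook boil down to roughly these two commands (a sketch only; the actual script lives in script/pre-commit and may differ):

cargo check --workspace --all-targets
cargo fmt --all -- --check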

To enable the hook, run the following command from the root of the project:

cp script/pre-commit .git/hooks/pre-commit