5000: Implement the experimental drop-search-after and nb-searches-per-core parameters r=Kerollmops a=irevoire
# Pull Request
## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/4997
Fixes https://github.com/meilisearch/meilisearch/issues/4995
## What does this PR do?
- Add an experimental parameter to decide how many searches can be processed per core
- Add an experimental parameter to configure after how many seconds a search should be dropped
Co-authored-by: Tamo <tamo@meilisearch.com>
4999: Update version for the next release (v1.10.3) in Cargo.toml r=irevoire a=meili-bot
⚠️ This PR is automatically generated. Check the new version is the expected one and Cargo.lock has been updated before merging.
Co-authored-by: irevoire <irevoire@users.noreply.github.com>
4949: Fix Swedish language support v1.10 r=Kerollmops a=ManyTheFish
# Pull Request
Cherry-picked commits from https://github.com/meilisearch/meilisearch/pull/4945 for v1.10.2
Co-authored-by: ManyTheFish <many@meilisearch.com>
4951: Update version for the next release (v1.10.2) in Cargo.toml r=dureuill a=ManyTheFish
# Pull Request
Update Meilisearch v1.10.2
Co-authored-by: ManyTheFish <ManyTheFish@users.noreply.github.com>
4905: Do not fail the whole batch when a single document deletion by filter fails r=dureuill a=irevoire
# Pull Request
## Related issue
Fixes a small bug introduced by https://github.com/meilisearch/meilisearch/pull/4901 where a document deletion by filter could fail a whole batch of document deletion tasks.
## What does this PR do?
- When a document deletion by filter contains an invalid filter, only that task fails instead of the whole batch (see the sketch below)
- Adds a big test with multiple document deletions batched together ensuring everything works well
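A hypothetical, self-contained sketch of the new behavior (the types below are illustrative, not the scheduler's actual ones):
```rust
// Hypothetical sketch: an invalid filter now fails only its own task,
// and the rest of the batch keeps processing.
#[derive(Debug)]
enum Status {
    Succeeded,
    Failed(String),
}

struct Task {
    filter: String,
    status: Option<Status>,
}

// Stand-in for the real filter parser.
fn parse_filter(s: &str) -> Result<&str, String> {
    if s.contains('=') { Ok(s) } else { Err(format!("invalid filter: {s}")) }
}

fn main() {
    let mut batch = vec![
        Task { filter: "genre = comedy".into(), status: None },
        Task { filter: "oops".into(), status: None },
    ];
    for task in &mut batch {
        task.status = Some(match parse_filter(&task.filter) {
            Ok(_filter) => Status::Succeeded, // the deletion would run here
            Err(e) => Status::Failed(e),      // fail this task only
        });
    }
    for task in &batch {
        println!("{:?} -> {:?}", task.filter, task.status);
    }
}
```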
Co-authored-by: Tamo <tamo@meilisearch.com>
4901: Autobatch document deletion by filter r=dureuill a=irevoire
# Pull Request
## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/4897
## What does this PR do?
- Enable autobatching of document deletion by filter with:
  - Document deletion by filter
  - Document deletion
  - Document clear
  - Index deletion
Co-authored-by: Tamo <tamo@meilisearch.com>
4899: stop trying to process searches after one minute r=ManyTheFish a=irevoire
# Pull Request
## Related issue
May be related to #4654 and https://github.com/meilisearch/meilisearch-support/issues/350
## What does this PR do?
- If we've been waiting for one whole minute for a search to be processed, we cancel it (a minimal sketch follows below)
- Ideally we should check whether the connection was closed instead, but that’s not currently possible: https://github.com/actix/actix-web/issues/3462
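A minimal sketch of the idea, assuming a tokio runtime and a hypothetical `process_search` future standing in for the real search work:
```rust
use std::time::Duration;
use tokio::time::timeout;

// Hypothetical future standing in for the real search processing.
async fn process_search(query: &str) -> String {
    format!("results for {query}")
}

#[tokio::main]
async fn main() {
    // Give up on the search if it has not completed within one minute.
    match timeout(Duration::from_secs(60), process_search("hello")).await {
        Ok(results) => println!("{results}"),
        Err(_elapsed) => eprintln!("search dropped after one minute"),
    }
}
```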
Co-authored-by: Tamo <tamo@meilisearch.com>
4898: Explicitly drop the search permits r=ManyTheFish a=irevoire
# Pull Request
## Related issue
May be related to #4654 and https://github.com/meilisearch/meilisearch-support/issues/350
## What does this PR do?
- Stop spawning a tokio task that is not immediately scheduled and instead explicitly drop the search permit (sketched below)
This should make new search requests be scheduled more quickly than before and reduce the overall load on tokio
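A minimal sketch of the change, using a plain tokio semaphore permit as a stand-in for the search permit (not the engine's actual types):
```rust
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    let semaphore = Semaphore::new(1);
    let permit = semaphore.acquire().await.unwrap();

    // ... perform the search ...

    // Before: the permit was moved into a freshly spawned task, so it was
    // only released once tokio got around to scheduling that task:
    // tokio::spawn(async move { drop(permit) });

    // After: dropping it explicitly releases the permit immediately,
    // letting the next queued search start sooner.
    drop(permit);
}
```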
Co-authored-by: Tamo <tamo@meilisearch.com>
4895: Update version for the next release (v1.10.1) in Cargo.toml r=irevoire a=meili-bot
⚠️ This PR is automatically generated. Check the new version is the expected one and Cargo.lock has been updated before merging.
Co-authored-by: irevoire <irevoire@users.noreply.github.com>
4896: Make sure the index scheduler never stops running r=dureuill a=irevoire
# Pull Request
## Related issue
Fixes #4748 for v1.10.1
I cherry-picked the commits from https://github.com/meilisearch/meilisearch/pull/4861
Co-authored-by: Tamo <tamo@meilisearch.com>
4893: Only spawn one search queue in actix-web r=dureuill a=irevoire
# Pull Request
## Related issue
May be related to #4654 and https://github.com/meilisearch/meilisearch-support/issues/350
## What does this PR do?
- We noticed a bug where multiple search queues were spawned instead of one
Co-authored-by: Tamo <tamo@meilisearch.com>
4881: Infer locales from index settings r=curquiza a=ManyTheFish
# Pull Request
## Related issue
Fixes #4828
Fixes #4816
## What does this PR do?
- Add some test using `AttributesToSearchOn`
- Make the search infer the language based on the index settings when the `locales` field is not specified
CI is now working:
https://github.com/meilisearch/meilisearch/actions/runs/10490050545/job/29055955667
Co-authored-by: ManyTheFish <many@meilisearch.com>
4845: Fix perf regression facet strings r=ManyTheFish a=dureuill
Benchmarks between v1.9 and v1.10 show a performance regression of about x2 (+3dB regression) for most indexing workloads (+44s for hackernews).
[Benchmark interpretation in the engine weekly meeting](https://www.notion.so/meilisearch/Engine-weekly-4d49560d374c4a87b4e3d126a261d4a0?pvs=4#98a709683276450295fcfe1f8ea5cef3).
- Initial investigation pointed to #4819 as the origin of the regression.
- Further investigation points towards the hypernormalization of each facet value in `extract_facet_string_docids`
- Most of the slowdown is in `normalize_facet_strings`, and precisely in `detection.language()`.
This PR improves the situation (-10s compared with `main` for hackernews, so only a +34s regression compared with `v1.9`) by skipping normalization whenever possible.
I'm not sure how to fix the root cause though. Should we skip facet locale normalization for now? Cc `@ManyTheFish`
---
Tentative resolution options:
1. remove locale normalization from facets. I'm not sure why this is required; I believe we weren't doing this before, so maybe we can go back to not doing it.
2. don't do language detection when it can be avoided: this won't help with the benchmark regressions, but maybe we can skip language detection when the locales contain only one language? (see the sketch after this list)
3. use a faster language detection library: `@Kerollmops` told me about https://github.com/quickwit-oss/whichlang which boasts 10x to 100x the throughput of whatlang. Should we consider replacing whatlang with whichlang? Now I understand whichlang supports fewer languages than whatlang, so I also suggest:
4. use whichlang when the list of locales is empty (autodetection), or when it only contains locales that whichlang can detect. If the list of locales contains locales that whichlang *cannot* detect, **then** use whatlang instead.
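A minimal sketch of option 2, assuming the allowed locales are available as whatlang `Lang`s (the helper below is hypothetical, not the engine's actual code):
```rust
use whatlang::Lang;

/// Hypothetical helper: when the index settings allow a single locale,
/// use it directly and skip detection entirely; otherwise fall back to
/// whatlang's detector.
fn facet_value_language(text: &str, allowed_locales: &[Lang]) -> Option<Lang> {
    match allowed_locales {
        // Exactly one allowed locale: no detection needed.
        [single] => Some(*single),
        // Zero or several allowed locales: detect as before.
        _ => whatlang::detect(text).map(|info| info.lang()),
    }
}

fn main() {
    println!("{:?}", facet_value_language("Det här är svenska", &[Lang::Swe]));
}
```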
---
> [!CAUTION]
> this PR contains a commit that adds detailed spans, which were used to detect which part of `extract_facet_string_docids` was taking too much time. As these spans are called too often and add 7s of overhead, the commit should be removed before landing.
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
4864: Don't remove facet value when multiple original values map to the same normalized value r=ManyTheFish a=dureuill
# Pull Request
## Related issue
Fixes #4860
> [!WARNING]
> This PR contains a fix for the immediate issue, but it looks like the underlying data model is faulty: only one "original" value is stored for each normalized value in a facet of a document, while because of array values (or manually written nested fields, if you're evil) it is technically possible to have multiple distinct original values mapping to the same normalized value (illustrated below).
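A hypothetical document illustrating the problem, sketched with `serde_json`: both array entries normalize to `cafe`, yet only one original value can be stored for that normalized value:
```rust
use serde_json::json;

fn main() {
    // Hypothetical document: "Café" and "cafe" are distinct original
    // values, but both normalize to the single facet value "cafe".
    let document = json!({
        "id": 1,
        "tags": ["Café", "cafe"],
    });
    println!("{document}");
}
```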
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
4858: also intersect the universe for searchOnAttributes r=irevoire a=dureuill
# Pull Request
## Related issue
Fixes #4857
## What does this PR do?
- intersect with the universe (which does not contain the filtered-out ids) when looking up documents for words, even when using `searchOnAttributes` (sketched below)
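A minimal sketch of the fix with the `roaring` crate (variable names are illustrative, not the engine's actual code):
```rust
use roaring::RoaringBitmap;

fn main() {
    // Documents that survived the filters.
    let universe = RoaringBitmap::from_iter([1u32, 2, 3]);
    // Documents containing the word in the restricted attributes,
    // including some that the filters already excluded.
    let mut word_docids = RoaringBitmap::from_iter([2u32, 3, 4]);

    // The fix: intersect with the universe so filtered-out ids never
    // reappear when `searchOnAttributes` is used.
    word_docids &= &universe;
    assert_eq!(word_docids.iter().collect::<Vec<_>>(), vec![2, 3]);
}
```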
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
4846: Add OpenAI tests r=dureuill a=dureuill
# Pull Request
## Related issue
Part of fixing #4757
## What does this PR do?
- OpenAI embedder: don't pass apiKey when it is empty (slightly improves error messages)
- rest embedder and rest-based embedders: specialize the authorization denied error message depending on the configuration source
- fix existing tests
- Adds assets containing prerecorded texts to embed and the embeddings obtained from OpenAI
- Adds an asset containing a tokenized long document and the embedding obtained from OpenAI for this token
- Uses the wiremock crate to mock the OpenAI API: parse the OpenAI request, look up the response in the assets, and craft an OpenAI response (sketched below)
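A minimal sketch of the wiremock approach; the endpoint and the simplified OpenAI-shaped payload below are illustrative, not the PR's actual assets:
```rust
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

#[tokio::main]
async fn main() {
    // Spawn a local mock server standing in for api.openai.com.
    let server = MockServer::start().await;

    // Answer embedding requests with a canned, OpenAI-shaped payload.
    Mock::given(method("POST"))
        .and(path("/v1/embeddings"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
            "object": "list",
            "data": [{ "object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.3] }],
            "model": "text-embedding-ada-002",
            "usage": { "prompt_tokens": 1, "total_tokens": 1 }
        })))
        .mount(&server)
        .await;

    // The embedder under test would be pointed at `server.uri()`.
    println!("mock OpenAI listening at {}", server.uri());
}
```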
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
4853: Fix rhai deletion r=irevoire a=dureuill
# Pull Request
## Related issue
Fixes #4849
## What does this PR do?
- insert into the bitmap instead of pushing onto it (illustrated below).
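The likely shape of the bug, sketched with the `roaring` crate (assumed here, not the exact engine code): `push` silently rejects out-of-order ids, while `insert` accepts them:
```rust
use roaring::RoaringBitmap;

fn main() {
    let mut docids = RoaringBitmap::new();
    docids.insert(42);

    // `push` only appends values strictly greater than the current
    // maximum; out-of-order ids are dropped (it returns false).
    assert!(!docids.push(7));

    // `insert` handles ids in any order, which is what deletion needs.
    assert!(docids.insert(7));
    assert_eq!(docids.len(), 2);
}
```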
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
4850: Use a fixed date format regardless of features r=irevoire a=dureuill
# Pull Request
## Related issue
Fixes #4844
## What does this PR do?
Given the following script:
```sh
cargo run -- --db-path meili.ms
sleep 3
curl -s -X POST http://127.0.0.1:7700/indexes -H 'Content-Type: application/json' --data-binary '{"uid": "movies", "primaryKey": "id"}'
sleep 3
cargo run -p meilisearch -- --db-path meili.ms
sleep 3
curl -s -X POST http://127.0.0.1:7700/indexes/movies/search -H 'Content-Type: application/json' --data-binary '{}'
```
- Before this PR, the final search returns a decoding error.
- After this PR, the search completes successfully.
### Technical standpoint
This PR fixes two locations where the formatting of dates was dependent on the feature set of the `time` crate.
1. The `IndexStats` had two fields without the serialization format specified
2. More subtly, the index dates (`createdAt`, `updatedAt`) were using value remapping in the main DB to `SerdeJson<OffsetDateTime>`, which used whatever default format was available. This was fixed by creating a local `OffsetDateTime` wrapper that specifies the serialization format (sketched below)
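A minimal sketch of the fix, assuming the `time` crate's `serde-well-known` feature; `IndexStats` here is a simplified stand-in:
```rust
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;

// Pinning the format with `#[serde(with = ...)]` makes the on-disk
// representation independent of which `time` features the final
// binary happens to enable.
#[derive(Serialize, Deserialize)]
struct IndexStats {
    #[serde(with = "time::serde::rfc3339")]
    created_at: OffsetDateTime,
    #[serde(with = "time::serde::rfc3339")]
    updated_at: OffsetDateTime,
}
```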
Co-authored-by: Louis Dureuil <louis@meilisearch.com>