728: Add some integration tests on the sort criterion r=ManyTheFish a=loiclec
This is simply an integration test ensuring that the sort criterion works properly.
However, only one version of the algorithm is tested here (the iterative one). To test the version that uses the facet DB, one has to manually set the `CANDIDATES_THRESHOLD` constant to `0`. I have done that and ensured that the test still succeeds. However, in the future, we will probably want to have an option to force which algorithm is used at runtime, for testing purposes.
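For reference, a generic sketch of the kind of check such a test makes (not the actual test code):

```rust
/// After searching with an ascending sort on a numeric field, the values
/// extracted from the returned documents must be non-decreasing.
fn assert_sorted_ascending(values: &[f64]) {
    assert!(
        values.windows(2).all(|w| w[0] <= w[1]),
        "search results are not sorted in ascending order"
    );
}
```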
Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
3236: Improves clarity of the code that receives payloads r=Kerollmops a=Kerollmops
This PR makes small changes to #3164. It improves the clarity and simplicity of some parts of the code.
Co-authored-by: Kerollmops <clement@meilisearch.com>
740: Fix two nightly errors r=Kerollmops a=irevoire
Currently, we have these two errors on Rust nightly. It would be nice to help rustc understand what's going on:
```
error[E0658]: anonymous lifetimes in `impl Trait` are unstable
--> filter-parser/src/lib.rs:173:53
|
173 | fn ws<'a, O>(inner: impl FnMut(Span<'a>) -> IResult<O>) -> impl FnMut(Span<'a>) -> IResult<O> {
| ^ expected named lifetime parameter
|
= help: add `#![feature(anonymous_lifetime_in_impl_trait)]` to the crate attributes to enable
help: consider introducing a named lifetime parameter
|
173 | fn ws<'a, 'a, O>(inner: impl FnMut(Span<'a>) -> IResult<'a, O>) -> impl FnMut(Span<'a>) -> IResult<O> {
| +++ +++
error[E0658]: anonymous lifetimes in `impl Trait` are unstable
--> filter-parser/src/error.rs:36:49
|
36 | mut parser: impl FnMut(Span<'a>) -> IResult<O>,
| ^ expected named lifetime parameter
|
= help: add `#![feature(anonymous_lifetime_in_impl_trait)]` to the crate attributes to enable
help: consider introducing a named lifetime parameter
|
35 ~ pub fn cut_with_err<'a, 'a, O>(
36 ~ mut parser: impl FnMut(Span<'a>) -> IResult<'a, O>,
|
For more information about this error, try `rustc --explain E0658`.
error: could not compile `filter-parser` due to 2 previous errors
```
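The fix consists of naming the previously-anonymous lifetime. A minimal, self-contained sketch of the idea, using stand-in types rather than the real `filter-parser` definitions:

```rust
// Stand-in aliases so the example compiles on its own; the real `Span` and
// `IResult` come from nom and filter-parser.
type Span<'a> = &'a str;
type IResult<'a, O> = Result<(Span<'a>, O), ()>;

// Before (rejected on nightly): the lifetime inside `IResult<O>` was anonymous.
// fn ws<'a, O>(inner: impl FnMut(Span<'a>) -> IResult<O>) -> impl FnMut(Span<'a>) -> IResult<O>

// After: the `'a` parameter is written explicitly everywhere it is used.
fn ws<'a, O>(
    mut inner: impl FnMut(Span<'a>) -> IResult<'a, O>,
) -> impl FnMut(Span<'a>) -> IResult<'a, O> {
    move |input| inner(input.trim_start())
}
```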
Co-authored-by: Tamo <tamo@meilisearch.com>
3164: Improve the way we receive the documents payload r=Kerollmops a=jiangbo212
# Pull Request
## Related issue
Fixes #3037
## What does this PR do?
- writing the payload to a temporary file via `BufWriter` (see the sketch after this list)
- deserialising the JSON temporary file to an array of `Object`s by means of a memory map
- deserialising the CSV temporary file by means of a memory map
- adapting some `read_json` tests
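A minimal sketch of that flow, assuming `tempfile`, `memmap2`, and `serde_json` as dependencies (the helper name and error handling are illustrative, not the PR's exact code):

```rust
use std::io::{BufWriter, Write};

use memmap2::Mmap;
use serde_json::{Map, Value};

fn read_json_payload(payload: &[u8]) -> Result<Vec<Map<String, Value>>, Box<dyn std::error::Error>> {
    // Stream the payload into a temporary file through a BufWriter.
    let file = tempfile::tempfile()?;
    let mut writer = BufWriter::new(&file);
    writer.write_all(payload)?;
    writer.flush()?;
    drop(writer);

    // Memory-map the temporary file and deserialise it as an array of objects,
    // instead of holding the whole payload in memory.
    let mmap = unsafe { Mmap::map(&file)? };
    let documents: Vec<Map<String, Value>> = serde_json::from_slice(&mmap)?;
    Ok(documents)
}
```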
## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to Meilisearch!
Co-authored-by: jiangbo212 <peiyaoliukuan@gmail.com>
Co-authored-by: jiangbo212 <peiyaoliukuan@126.com>
737: Fix typo initial candidates computation r=Kerollmops a=ManyTheFish
When the `Typo` criterion came after a criterion other than `Words`, and that previous criterion returned no candidates at the first iteration of the bucket sort, the `initial_candidates` were lost.
Now `Typo` ensures that the `initial_candidates` are kept between iterations.
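A rough sketch of the idea (not milli's actual types): union each bucket's candidates into the running set instead of overwriting it, so an empty first iteration cannot erase it.

```rust
use roaring::RoaringBitmap;

struct Typo {
    // Kept across bucket-sort iterations instead of being recomputed each time.
    initial_candidates: RoaringBitmap,
}

impl Typo {
    fn record_bucket(&mut self, bucket_candidates: RoaringBitmap) {
        // Accumulate rather than replace: even if the first bucket is empty,
        // previously recorded candidates are preserved.
        self.initial_candidates |= bucket_candidates;
    }
}
```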
related to https://github.com/meilisearch/meilisearch/issues/3200#issuecomment-1345179578
related to https://github.com/meilisearch/meilisearch/issues/3228
Co-authored-by: ManyTheFish <many@meilisearch.com>
734: Fix bug 2945/3021 (missing key in documents database) r=Kerollmops a=loiclec
# Pull Request
## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/2945 (partially: the fix only reaches users once the new milli bump is integrated into meilisearch).
**Note that a dump will not be sufficient to upgrade from meilisearch v0.30.2 to meilisearch v0.30.3 due to this fix** because the bug could have caused the `documents` database to be corrupted. Instead, a full manual reimport of the documents will be necessary.
## What does this PR do?
There was a bug happening when:
1. A few documents are added to the index
2. Some of these documents are soft-deleted
3. New documents are added, replacing existing ones and triggering a hard-deletion
The `IndexDocuments::execute` method would then perform the hard-deletion but forget to update the `external_documents_ids` structure appropriately. As a result, `external_documents_ids` would contain keys corresponding to documents that do not exist anymore.
To fix this bug, I split the `DeleteDocuments::execute` method into two: `execute_inner` and `execute`.
- `execute_inner` returns a `DetailedDocumentDeletionResult` which says whether soft-deletion was used or not
- `execute` keeps the exact same signature and behaviour
Then, when deleting replaced documents inside `IndexDocuments::execute`, we call `DeleteDocuments::execute_inner` instead of `DeleteDocuments::execute`. If soft-deletion was used, nothing more is done. But if hard-deletion was used, we remove every reference to soft-deleted documents in the new `external_documents_ids` structure.
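A simplified, self-contained sketch of that split (everything except the name `DetailedDocumentDeletionResult` is a stand-in, not milli's exact API):

```rust
pub struct DocumentDeletionResult {
    pub deleted_documents: u64,
}

pub struct DetailedDocumentDeletionResult {
    pub deleted_documents: u64,
    // Lets `IndexDocuments::execute` know whether it must also clean up
    // `external_documents_ids` after a hard deletion.
    pub soft_deletion_used: bool,
}

pub struct DeleteDocuments;

impl DeleteDocuments {
    /// Keeps the exact same signature and behaviour as before the split.
    pub fn execute(self) -> DocumentDeletionResult {
        let detailed = self.execute_inner();
        DocumentDeletionResult { deleted_documents: detailed.deleted_documents }
    }

    /// Performs the deletion and additionally reports whether soft-deletion was used.
    pub fn execute_inner(self) -> DetailedDocumentDeletionResult {
        // ... actual deletion logic elided ...
        DetailedDocumentDeletionResult { deleted_documents: 0, soft_deletion_used: true }
    }
}
```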
## Correctness
- Every other test still passes
- The reproduction test case now passes
- In a different branch ([`update-fuzz-test`](https://github.com/meilisearch/milli/pull/735)), I created a fuzz test that reproduces the past two bugs. This fuzz test cannot find this bug through any combination of some hand-selected `DocumentAddition / DocumentDeletion / DocumentClear / SettingsUpdate` operations. In that test, each relevant operation can be executed with or without soft-deletion, and document additions can be done in batches, replacing or updating existing documents.
Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
3223: Bring back release-v0.30.2 changes into main r=irevoire a=curquiza
Only bring back the necessary changes from `release-v0.30.2` to `main`, following the v0.30.2 release.
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: curquiza <clementine@meilisearch.com>
3224: Fix update-cargo-toml-version.yml r=curquiza a=mohitsaxenaknoldus
# Pull Request
## Related issue
Fixes #3219
## What does this PR do?
- ...
## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to Meilisearch!
Co-authored-by: Mohit Saxena <76725454+mohitsaxenaknoldus@users.noreply.github.com>
3229: Add a nightly CI: create every day a `nightly` Docker tag based on the latest commit on `main` r=Kerollmops a=curquiza
Also, fixes #3195
Easy to follow with the commits
- In the Docker CI:
  - create a `nightly` Docker tag every day, based on the latest commit on `main`
  - check if the release is the latest one before creating the `latest` Docker tag. A script has been added.
  - add the `workflow_dispatch` event to trigger the CI to build the `nightly` tag whenever we want (always on the latest commit on `main`)
- In multiple CIs: replace the `released` type with `published`, see [here](https://stackoverflow.com/questions/59319281/github-action-different-between-release-created-and-published) why. Will not impact anything, but will prevent our future automation from failing.
- Remove a useless CI (code coverage, not used for 1 year)
- Remove useless lines (comments and CI logic) that don't have any impact
Co-authored-by: curquiza <clementine@meilisearch.com>
We were writing the instance-uid as bytes instead of as a string in `data.ms`, and thus we were unable to parse it later.
It was also less practical for our users to retrieve it and send it to us.
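A hypothetical sketch of the fix (path handling and helper name are illustrative only): write the uuid's hyphenated string form instead of its raw bytes, so it can be parsed back and easily copied by users.

```rust
use std::{fs, io, path::Path};

use uuid::Uuid;

fn write_instance_uid(path: &Path, uid: Uuid) -> io::Result<()> {
    // Before: fs::write(path, uid.as_bytes()) stored 16 raw bytes that cannot
    // be read back as text. Write the string representation instead.
    fs::write(path, uid.to_string())
}
```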