2357: chore(dump): add dump tests r=Kerollmops a=irevoire
Add tests on the import of dump v1, v2, v3 and v4.
Since the dumps are slow to decompress, I made the `flate2` crate always compile in optimized mode.
And since they're also slow to index, I made the `milli` crate always compile in optimized mode as well. What do you think of this, `@MarinPostma`?
Should we keep milli unoptimized in case it could help us debug some things? 👀
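A minimal sketch of one way such per-dependency optimization can be expressed in a workspace `Cargo.toml` (the exact profile and `opt-level` are assumptions, not the actual Meilisearch configuration):
```toml
# Hypothetical profile override: always compile these dependencies with
# optimizations, even in dev/test builds, so decompressing and indexing
# the test dumps stays fast.
[profile.dev.package.flate2]
opt-level = 3

[profile.dev.package.milli]
opt-level = 3
```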
Co-authored-by: Tamo <tamo@meilisearch.com>
2404: Bring `release-v0.27.1` into `main` r=curquiza a=curquiza
Following the v0.27.1 hotfixes
Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2313: fix(search): remove the back and forth between the IndexMap and the serde_json::Map r=irevoire a=irevoire
This is OK because we're using the `preserve_order` feature of `serde_json`, which already uses an `IndexMap` internally.
See https://github.com/meilisearch/meilisearch/pull/2298#discussion_r845228412_
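A small sketch of the property this relies on (assuming the `preserve_order` feature is enabled on `serde_json`; this is not Meilisearch code):
```rust
// With serde_json's `preserve_order` feature enabled, `serde_json::Map` is backed
// by an IndexMap and keeps insertion order, so documents can stay as
// `Map<String, Value>` without an extra IndexMap <-> Map round trip.
use serde_json::{json, Map, Value};

fn main() {
    let mut doc = Map::new();
    doc.insert("id".to_string(), json!(1));
    doc.insert("title".to_string(), json!("Shazam!"));
    doc.insert("genre".to_string(), json!("comics"));

    // Iteration follows insertion order (this only holds with `preserve_order`;
    // without the feature, the map is a sorted BTreeMap).
    let keys: Vec<&str> = doc.keys().map(|k| k.as_str()).collect();
    assert_eq!(keys, ["id", "title", "genre"]);

    println!("{}", serde_json::to_string_pretty(&Value::Object(doc)).unwrap());
}
```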
Co-authored-by: Tamo <tamo@meilisearch.com>
2297: Feat(Search): Enhance formatting of search results r=ManyTheFish a=ManyTheFish
Add new settings and change crop_len behavior to count words instead of characters.
- [x] `highlightPreTag`
- [x] `highlightPostTag`
- [x] `cropMarker`
- [x] `cropLength` counts words instead of characters (see the sketch below)
- [x] a `cropLength` of 0 is now treated as no `cropLength`
- [ ] ~smart crop finding the best matches interval~ (postponed)
Partially fixes #2214. (no smart crop)
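An illustrative sketch of the word-based cropping behavior (function name, marker handling, and window placement are assumptions, not the milli implementation):
```rust
// Crop a field to `crop_length` *words* around the first matched word and
// surround the crop with `crop_marker`.
fn crop_around(text: &str, matched_word: &str, crop_length: usize, crop_marker: &str) -> String {
    // A `cropLength` of 0 now means "do not crop at all".
    if crop_length == 0 {
        return text.to_string();
    }
    let words: Vec<&str> = text.split_whitespace().collect();
    let pos = words
        .iter()
        .position(|w| w.eq_ignore_ascii_case(matched_word))
        .unwrap_or(0);
    let start = pos.saturating_sub(crop_length / 2);
    let end = (start + crop_length).min(words.len());
    let mut cropped = words[start..end].join(" ");
    if start > 0 {
        cropped = format!("{crop_marker}{cropped}");
    }
    if end < words.len() {
        cropped.push_str(crop_marker);
    }
    cropped
}

fn main() {
    let text = "The quick brown fox jumps over the lazy dog near the river bank";
    // Prints "…over the lazy dog near…"
    println!("{}", crop_around(text, "lazy", 5, "…"));
}
```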
Co-authored-by: ManyTheFish <many@meilisearch.com>
2207: Fix: avoid embedding the user input into the error response. r=Kerollmops a=CNLHC
# Pull Request
## What does this PR do?
Fix #2107.
The problem is that Meilisearch embeds the user input in the error message.
The reason is that `milli` throws a `serde_json::Error`, whose `Display` implementation does this embedding.
In this PR, I solve the problem by manually implementing the `Display` trait for `DocumentFormatError` instead of deriving it automatically.
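A hedged sketch of the approach (the type and message below are illustrative, not the exact Meilisearch definitions): implement `Display` by hand so the message reports the error kind and position, but never echoes the user's payload back.
```rust
use std::fmt;

#[derive(Debug)]
#[allow(dead_code)]
enum PayloadType {
    Json,
    Ndjson,
    Csv,
}

#[derive(Debug)]
struct DocumentFormatError {
    format: PayloadType,
    source: serde_json::Error,
}

impl fmt::Display for DocumentFormatError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Report what went wrong and where, without forwarding serde_json's
        // message (which may contain the offending user input).
        write!(
            f,
            "The `{:?}` payload is malformed ({:?} error at line {}, column {}).",
            self.format,
            self.source.classify(),
            self.source.line(),
            self.source.column(),
        )
    }
}

fn main() {
    let source = serde_json::from_str::<serde_json::Value>("{ not json").unwrap_err();
    let err = DocumentFormatError { format: PayloadType::Json, source };
    // No trace of the user's input in the message.
    println!("{err}");
}
```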
## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to Meilisearch!
Co-authored-by: Liu Hancheng <cn_lhc@qq.com>
Co-authored-by: LiuHanCheng <2463765697@qq.com>
2281: Hard limit the number of results returned by a search r=Kerollmops a=Kerollmops
This PR fixes #2133 by hard-limiting the number of results that a search request can return at any time. I would like the guidance of `@MarinPostma` to test this: should I use a mocking test here, or should I do something else?
I talked about touching the _nb_hits_ value with `@qdequele` and we concluded that it was not correct to do so.
Could you please confirm that it is the right place to change that?
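An illustrative sketch of the clamping idea (the constant name and value are assumptions, not the actual Meilisearch code): the requested window is capped so that `offset + limit` can never exceed the hard limit, whatever the client asks for.
```rust
const HARD_RESULT_LIMIT: usize = 1000;

// Clamp the requested (offset, limit) window against the hard limit.
fn clamp_window(offset: usize, limit: usize) -> (usize, usize) {
    let end = (offset + limit).min(HARD_RESULT_LIMIT);
    let start = offset.min(end);
    (start, end - start)
}

fn main() {
    assert_eq!(clamp_window(0, 20), (0, 20));
    assert_eq!(clamp_window(990, 50), (990, 10)); // only 10 hits left before the cap
    assert_eq!(clamp_window(2000, 20), (1000, 0)); // past the cap: empty page
    println!("ok");
}
```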
Co-authored-by: Kerollmops <clement@meilisearch.com>
2244: chore(all): bump milli r=curquiza a=MarinPostma
continues the work initiated by `@psvnlsaikumar` in #2228
Co-authored-by: Sai Kumar <psvnlsaikumar@gmail.com>
2173: chore(all): replace chrono with time r=irevoire a=irevoire
Chrono has been unmaintained for a few months now and there is a CVE on it.
I also updated all the error messages related to the API key, as you can see here: https://github.com/meilisearch/specifications/pull/114
Fix #2172
Co-authored-by: Irevoire <tamo@meilisearch.com>
2098: feat(dump): Provide the same cli options as the snapshots r=MarinPostma a=irevoire
Add two cli options for the dump:
- `--ignore-missing-dump`
- `--ignore-dump-if-db-exists`
Fix #2087
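An illustrative sketch of the intended semantics of the two flags (the function name and error messages are assumptions, not the actual Meilisearch code):
```rust
use std::path::Path;

// Decide whether a dump should be imported, given the two new CLI options.
fn should_import_dump(
    dump_path: &Path,
    db_path: &Path,
    ignore_missing_dump: bool,
    ignore_dump_if_db_exists: bool,
) -> Result<bool, String> {
    if db_path.exists() {
        if ignore_dump_if_db_exists {
            // A database is already there: keep it and skip the import.
            return Ok(false);
        }
        return Err(format!("a database already exists at {db_path:?}"));
    }
    if !dump_path.exists() {
        if ignore_missing_dump {
            // No dump to import: start with an empty database instead of failing.
            return Ok(false);
        }
        return Err(format!("no dump found at {dump_path:?}"));
    }
    Ok(true)
}

fn main() {
    let decision = should_import_dump(
        Path::new("/dumps/latest.dump"),
        Path::new("/data.ms"),
        true,
        true,
    );
    println!("{decision:?}");
}
```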
Co-authored-by: Tamo <tamo@meilisearch.com>
2101: chore(all): update actix-web dependency to 4.0.0-beta.21 r=MarinPostma a=robjtede
# Pull Request
## What does this PR do?
I don't expect any more breaking changes to Actix Web that will affect Meilisearch so bump to latest beta.
Fixes #N/A?
## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to MeiliSearch!
Co-authored-by: Rob Ede <robjtede@icloud.com>
2095: feat(error): Update the error message when you have no version file r=MarinPostma a=irevoire
Following this [issue](https://github.com/meilisearch/meilisearch-kubernetes/issues/95) we decided to change the error message from:
```
Version file is missing or the previous MeiliSearch engine version was below 0.24.0. Use a dump to update MeiliSearch.
```
to
```
Version file is missing or the previous MeiliSearch engine version was below 0.25.0. Use a dump to update MeiliSearch.
```
Co-authored-by: Tamo <tamo@meilisearch.com>
2075: Allow payloads with no documents r=irevoire a=MarinPostma
Accept additions with 0 documents.
0-byte payloads are still refused, since they are not valid JSON / JSON lines / CSV anyway...
close #1987
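An illustrative sketch of the accepted and refused cases (the function and error message are assumptions, not the actual extractor code): an empty JSON array is a valid "zero documents" addition, while a 0-byte body is not valid JSON at all.
```rust
// Validate a JSON documents payload and return how many documents it contains.
fn validate_json_payload(body: &[u8]) -> Result<usize, String> {
    if body.is_empty() {
        // A 0-byte body is still refused.
        return Err("missing payload".to_string());
    }
    let documents: Vec<serde_json::Map<String, serde_json::Value>> =
        serde_json::from_slice(body).map_err(|_| "malformed json payload".to_string())?;
    // 0 documents is fine: the addition simply has nothing to index.
    Ok(documents.len())
}

fn main() {
    assert_eq!(validate_json_payload(b"[]"), Ok(0));
    assert_eq!(validate_json_payload(br#"[{"id": 1}]"#), Ok(1));
    assert!(validate_json_payload(b"").is_err());
    println!("ok");
}
```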
Co-authored-by: mpostma <postma.marin@protonmail.com>
2068: chore(http): migrate from structopt to clap3 r=Kerollmops a=MarinPostma
migrate from structopt to clap3
This fixes the long-lasting issue of flags requiring a value, such as `--no-analytics` or `--schedule-snapshot`.
All flag arguments now take NO value, i.e.:
`meilisearch --schedule-snapshot true` becomes `meilisearch --schedule-snapshot`
As per https://docs.rs/clap/latest/clap/struct.Arg.html#method.env, the env variable is defined as:
> A false literal is n, no, f, false, off or 0. An absent environment variable will also be considered as false. Anything else will be considered as true.
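A plain-Rust restatement of the rule quoted above, just to make the behavior concrete (clap implements this itself; the env variable names are only examples):
```rust
// Interpret an environment variable as a boolean flag, per the quoted rule.
fn env_flag(name: &str) -> bool {
    match std::env::var(name) {
        // An absent environment variable is considered false.
        Err(_) => false,
        // The listed literals are false; anything else is true.
        Ok(value) => !matches!(value.as_str(), "n" | "no" | "f" | "false" | "off" | "0"),
    }
}

fn main() {
    std::env::set_var("MEILI_SCHEDULE_SNAPSHOT", "off");
    assert!(!env_flag("MEILI_SCHEDULE_SNAPSHOT"));
    std::env::set_var("MEILI_SCHEDULE_SNAPSHOT", "yes");
    assert!(env_flag("MEILI_SCHEDULE_SNAPSHOT"));
    assert!(!env_flag("SOME_FLAG_THAT_IS_NOT_SET"));
    println!("ok");
}
```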
`@gmourier`
`@curquiza`
`@meilisearch/docs-team`
Co-authored-by: mpostma <postma.marin@protonmail.com>
2057: fix(dump): Uncompress the dump IN the data.ms r=irevoire a=irevoire
When loading a dump with docker, we had two problems.
After creating a temporary directory, uncompressing and re-indexing the dump:
1. We try to `move` the new `data.ms` onto the currently present
one. The problem is that the `data.ms` is often a mount point, because
that's what people usually do with Docker. We can't overwrite
a mount point, and thus we were throwing an error.
2. The tempdir is created in `/tmp`, which is usually quite small AND may not
be on the same partition as the `data.ms`. This means that when we tried to move
the dump over the `data.ms`, it also failed because we can't move data
between two partitions.
------------------
1 was fixed by deleting the *content* of the `data.ms` and moving the *content*
of the tempdir *inside* the `data.ms`. If someone creates volumes inside
the `data.ms`, that's their problem, not ours.
2 was fixed by creating the tempdir *inside* the `data.ms`. If a user mounted
their `data.ms` on a large partition, there is no reason they couldn't load a big
dump just because their `/tmp` is too small. This solves the issue: the dump is now
extracted and indexed on the same partition the `data.ms` will live on.
fix #1833
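A minimal sketch of the fix, under assumptions (it uses the `tempfile` crate; the paths and helper names are illustrative, not the actual Meilisearch code):
```rust
use std::{fs, io, path::Path};

fn load_dump_into(db_path: &Path, dump: &Path) -> io::Result<()> {
    // 1. Extract and index inside `data.ms` itself, so everything stays on the
    //    same partition as the final database.
    let tmp = tempfile::tempdir_in(db_path)?;
    extract_and_index(dump, tmp.path())?;

    // 2. Don't try to replace `data.ms` (it may be a mount point): empty its
    //    *content*, then move the *content* of the temp dir into it.
    for entry in fs::read_dir(db_path)? {
        let entry = entry?;
        if entry.path() != tmp.path() {
            if entry.file_type()?.is_dir() {
                fs::remove_dir_all(entry.path())?;
            } else {
                fs::remove_file(entry.path())?;
            }
        }
    }
    for entry in fs::read_dir(tmp.path())? {
        let entry = entry?;
        fs::rename(entry.path(), db_path.join(entry.file_name()))?;
    }
    // The now-empty temp dir is removed when `tmp` is dropped.
    Ok(())
}

fn extract_and_index(_dump: &Path, _dst: &Path) -> io::Result<()> {
    // Placeholder for uncompressing the dump and re-indexing it into `_dst`.
    Ok(())
}
```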
Co-authored-by: Tamo <tamo@meilisearch.com>
2008: bug(lib): fix get dumps bad error code r=curquiza a=MarinPostma
Fix the bad error code returned when getting a dump status, and add a test.
close #1994
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
- Add API keys in snapshots
- Add API keys in dumps
- Rename action indexes.add to indexes.create
- fix QA #1979, fix #1995, fix #2001, fix #2003
related to #1890