2173: chore(all): replace chrono with time r=irevoire a=irevoire
Chrono has been unmaintained for a few months now, and there is a CVE on it.
Also I updated all the error messages related to the API key as you can see here: https://github.com/meilisearch/specifications/pull/114
fix #2172
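For illustration, here is a minimal sketch of what swapping chrono for the `time` crate looks like for a typical RFC 3339 timestamp (assuming `time` 0.3 with its `formatting` feature; this is not code taken from the Meilisearch codebase):
```rust
// Sketch of the chrono -> time migration pattern; illustrative only.
use time::format_description::well_known::Rfc3339;
use time::OffsetDateTime;

fn main() -> Result<(), time::error::Format> {
    // Before (chrono): chrono::Utc::now().to_rfc3339()
    // After (time 0.3): format the current UTC time with the well-known RFC 3339 description.
    let now = OffsetDateTime::now_utc();
    let formatted = now.format(&Rfc3339)?;
    println!("{formatted}");
    Ok(())
}
```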
Co-authored-by: Irevoire <tamo@meilisearch.com>
2098: feat(dump): Provide the same cli options as the snapshots r=MarinPostma a=irevoire
Add two cli options for the dump:
- `--ignore-missing-dump`
- `--ignore-dump-if-db-exists`
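A minimal sketch of how such boolean flags are typically declared with clap's derive API (field names and the surrounding struct are assumptions for illustration, not Meilisearch's actual options struct):
```rust
use clap::Parser;

// Illustrative only; assumes clap 3 with the `derive` feature.
#[derive(Debug, Parser)]
struct DumpOpts {
    /// Dump to import at launch.
    #[clap(long)]
    import_dump: Option<std::path::PathBuf>,

    /// Don't fail when the dump passed to --import-dump is missing.
    #[clap(long)]
    ignore_missing_dump: bool,

    /// Skip the import when a database already exists.
    #[clap(long)]
    ignore_dump_if_db_exists: bool,
}

fn main() {
    let opts = DumpOpts::parse();
    println!("{opts:?}");
}
```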
Fix #2087
Co-authored-by: Tamo <tamo@meilisearch.com>
2101: chore(all): update actix-web dependency to 4.0.0-beta.21 r=MarinPostma a=robjtede
# Pull Request
## What does this PR do?
I don't expect any more breaking changes to Actix Web that will affect Meilisearch so bump to latest beta.
Fixes #N/A?
## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to MeiliSearch!
Co-authored-by: Rob Ede <robjtede@icloud.com>
2095: feat(error): Update the error message when you have no version file r=MarinPostma a=irevoire
Following this [issue](https://github.com/meilisearch/meilisearch-kubernetes/issues/95) we decided to change the error message from:
```
Version file is missing or the previous MeiliSearch engine version was below 0.24.0. Use a dump to update MeiliSearch.
```
to
```
Version file is missing or the previous MeiliSearch engine version was below 0.25.0. Use a dump to update MeiliSearch.
```
Co-authored-by: Tamo <tamo@meilisearch.com>
2075: Allow payloads with no documents r=irevoire a=MarinPostma
Accept additions with 0 documents.
0-byte payloads are still refused, since they are not valid json/jsonlines/csv anyway...
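A minimal sketch of the distinction for the JSON case (illustrative only, using serde_json; not the actual Meilisearch payload-handling code):
```rust
// An empty payload is rejected, but a payload that parses to zero documents
// (e.g. `[]`) is accepted.
fn parse_json_documents(body: &[u8]) -> Result<Vec<serde_json::Value>, String> {
    if body.is_empty() {
        // A 0-byte payload is not valid JSON at all.
        return Err("missing payload".to_string());
    }
    // `[]` parses to an empty Vec: a valid addition of 0 documents.
    serde_json::from_slice(body).map_err(|e| format!("malformed payload: {e}"))
}

fn main() {
    assert!(parse_json_documents(b"").is_err());
    assert_eq!(parse_json_documents(b"[]").unwrap().len(), 0);
}
```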
close #1987
Co-authored-by: mpostma <postma.marin@protonmail.com>
2068: chore(http): migrate from structopt to clap3 r=Kerollmops a=MarinPostma
migrate from structopt to clap3
This fixes the long-lasting issue with flags requiring a value, such as `--no-analytics` or `--schedule-snapshot`.
All flag arguments now take NO value, i.e.:
`meilisearch --schedule-snapshot true` becomes `meilisearch --schedule-snapshot`
As per https://docs.rs/clap/latest/clap/struct.Arg.html#method.env, the env variable is defined as:
> A false literal is n, no, f, false, off or 0. An absent environment variable will also be considered as false. Anything else will considered as true.
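A minimal sketch of what such a flag looks like with clap 3's derive API (assuming clap's `derive` and `env` features; the struct and environment variable names are illustrative, not Meilisearch's actual options):
```rust
use clap::Parser;

// Illustrative only; not the real Meilisearch Opt struct.
#[derive(Debug, Parser)]
struct Opt {
    /// Present on the command line => true; absent => false.
    /// Can also be driven by the environment variable, following the
    /// true/false literals quoted from the clap documentation above.
    #[clap(long, env = "MEILI_SCHEDULE_SNAPSHOT")]
    schedule_snapshot: bool,
}

fn main() {
    // `meilisearch --schedule-snapshot` or `MEILI_SCHEDULE_SNAPSHOT=true meilisearch`
    // both enable it; no value is passed to the flag itself.
    let opt = Opt::parse();
    println!("schedule snapshot: {}", opt.schedule_snapshot);
}
```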
`@gmourier`
`@curquiza`
`@meilisearch/docs-team`
Co-authored-by: mpostma <postma.marin@protonmail.com>
2057: fix(dump): Uncompress the dump IN the data.ms r=irevoire a=irevoire
When loading a dump with docker, we had two problems.
After creating a temp directory, uncompressing, and re-indexing the dump:
1. We try to `move` the new “data.ms” onto the currently present
one. The problem is that the `data.ms` is often a mount point,
because that's what people usually do with Docker. We can't
overwrite a mount point, and thus we were throwing an error.
2. The tempdir is created in `/tmp`, which is usually quite small AND may not
be on the same partition as the `data.ms`. This means that when we tried to move
the dump over the `data.ms`, it also failed because we can't move data
between two partitions.
------------------
1 was fixed by deleting the *content* of the `data.ms` and moving the *content*
of the tempdir *inside* the `data.ms`. If someone tries to create volumes inside
the `data.ms`, that's their problem, not ours.
2 was fixed by creating the tempdir *inside* the `data.ms`. If a user mounted
their `data.ms` on a large partition, there is no reason they couldn't load a big
dump just because their `/tmp` was too small. This solves the issue; now the dump is
extracted and indexed on the same partition where the `data.ms` will live.
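A minimal sketch of the approach described above (illustrative only, assuming the `tempfile` crate; `extract_and_index` is a hypothetical placeholder, not the actual Meilisearch code):
```rust
use std::fs;
use std::path::Path;

fn load_dump(dump: &Path, db_path: &Path) -> std::io::Result<()> {
    // 1. Create the tempdir *inside* `data.ms`, so it lives on the same partition.
    let tmp = tempfile::tempdir_in(db_path)?;
    extract_and_index(dump, tmp.path())?; // hypothetical placeholder

    // 2. Delete the *content* of `data.ms`, not the directory itself
    //    (which may be a Docker mount point we cannot replace).
    for entry in fs::read_dir(db_path)? {
        let entry = entry?;
        if entry.path() == tmp.path() {
            continue;
        }
        if entry.file_type()?.is_dir() {
            fs::remove_dir_all(entry.path())?;
        } else {
            fs::remove_file(entry.path())?;
        }
    }

    // 3. Move the new content into place; same partition, so a plain rename works.
    for entry in fs::read_dir(tmp.path())? {
        let entry = entry?;
        fs::rename(entry.path(), db_path.join(entry.file_name()))?;
    }
    Ok(())
}

// Hypothetical stand-in for the real extraction/indexing step.
fn extract_and_index(_dump: &Path, _dst: &Path) -> std::io::Result<()> {
    Ok(())
}
```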
fix #1833
Co-authored-by: Tamo <tamo@meilisearch.com>
2008: bug(lib): fix get dumps bad error code r=curquiza a=MarinPostma
Fix the bad error code being returned when getting a dump status, and add a test.
close #1994
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
- Add API keys in snapshots
- Add API keys in dumps
- Rename action `indexes.add` to `indexes.create`
- fix QA #1979

fix #1979
fix #1995
fix #2001
fix #2003
related to #1890
1965: Reintroduce engine version file r=MarinPostma a=irevoire
Right now, if you boot up MeiliSearch and point it to a DB directory created with a previous version of MeiliSearch, the existing indexes will be deleted. This [used to be](51d7c84e73) prevented by a startup check which would compare the current engine version against what was stored in the DB directory's version file, but this functionality seems to have been lost after a few refactorings of the code.
In order to go back to the old behavior we'll need to reintroduce the `VERSION` file that used to be present; I considered reusing the `metadata.json` file used in the dumps feature, but this seemed like the simpler and more straightforward approach. As the intent is just to restore functionality, the implementation is quite basic. I imagine that in the future we could build on this and do things like compatibility across major/minor versions and even migrating between formats.
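A minimal sketch of such a startup check (the file name, error handling, and major/minor matching rule here are assumptions for illustration, not the actual implementation):
```rust
use std::fs;
use std::io::ErrorKind;
use std::path::Path;

// Hypothetical file name; the real implementation may use a different one.
const VERSION_FILE_NAME: &str = "VERSION";

fn major_minor(v: &str) -> (&str, &str) {
    let mut parts = v.trim().splitn(3, '.');
    (parts.next().unwrap_or(""), parts.next().unwrap_or(""))
}

/// Refuse to start on a major/minor mismatch instead of silently wiping the
/// existing indexes; create the file when it does not exist yet.
fn check_version_file(db_path: &Path) -> Result<(), String> {
    let current = env!("CARGO_PKG_VERSION"); // e.g. "0.25.2"
    let version_path = db_path.join(VERSION_FILE_NAME);

    match fs::read_to_string(&version_path) {
        Ok(stored) if major_minor(&stored) != major_minor(current) => Err(format!(
            "version mismatch: expected {}, found {}",
            current,
            stored.trim()
        )),
        Ok(_) => Ok(()),
        // No version file yet: record the current engine version.
        Err(e) if e.kind() == ErrorKind::NotFound => {
            fs::write(&version_path, current).map_err(|e| e.to_string())
        }
        Err(e) => Err(e.to_string()),
    }
}
```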
This PR was made thanks to `@mbStavola` and is basically a port of his PR #1860 after a big refactor of the code in #1796.
Closes #1840
Co-authored-by: Matt Stavola <m.freitas@offensive-security.com>