<p align="center">
<img alt="the milli logo" src="http-ui/public/logo-black.svg">
</p>

<p align="center">a concurrent indexer combined with fast and relevant search algorithms</p>
## Introduction

This engine is a prototype; do not use it in production.
This is one of the most advanced search engines I have worked on.
It currently only supports the proximity criterion.

### Compile and Run the server

You can specify the number of threads to use when indexing documents, along with many other settings:
```bash
cd http-ui
cargo run --release -- --db my-database.mdb -vvv --indexing-jobs 8
```
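To discover the other available settings, you can ask the binary itself for its help output (Cargo passes everything after `--` through to the binary; the exact flag list depends on your version):

```bash
cd http-ui
cargo run --release -- --help
```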
### Index your documents

It can index a massive number of documents in very little time; I have already managed to index:
- 115m songs (song and artist name) in ~1h, taking 107GB on disk.
- 12m cities (name, timezone and country ID) in 15min, taking 10GB on disk.

All of that on a $39/month machine with 4 cores.

You can feed the engine with your CSV (comma-separated, yes) data like this:
```bash
printf "name,age\nhello,32\nkiki,24\n" | http POST 127.0.0.1:9700/documents content-type:text/csv
```
Here, ids will be automatically generated as UUID v4 if they are missing from some or all of the documents.

Note that it also supports JSON and JSON streaming; you can send them to the engine by using
the `content-type:application/json` and `content-type:application/x-ndjson` headers respectively.
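
For example, a sketch assuming the same `/documents` route accepts an array of objects for JSON and one object per line for JSON streaming:

```bash
# JSON: an array of documents
echo '[{"name":"hello","age":32},{"name":"kiki","age":24}]' | http POST 127.0.0.1:9700/documents content-type:application/json

# JSON streaming (ndjson): one document per line
printf '{"name":"hello","age":32}\n{"name":"kiki","age":24}\n' | http POST 127.0.0.1:9700/documents content-type:application/x-ndjson
```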
### Querying the engine via the website

You can query the engine by going to [the HTML page itself](http://127.0.0.1:9700).
## Contributing

You can set up a `git-hook` to stop you from making a commit too fast. It'll stop you if:
- Any of the workspaces does not build
- Your code is not well-formatted

These two things are also checked in the CI, so ignoring the hook won't help you merge your code.
But if you need to, you can still add `--no-verify` when creating your commit to ignore the hook.

To enable the hook, run the following command from the root of the project:
```bash
cp script/pre-commit .git/hooks/pre-commit
```
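If you want to run equivalent checks by hand before committing, a minimal sketch would be the following (the actual `script/pre-commit` may do more or differ):

```bash
# check that every workspace member builds
cargo check --workspace

# check that the code is well-formatted
cargo fmt --all -- --check
```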