# Development

This document will help you set up your Milvus development environment and run tests. Please [file an issue ](https://github.com/milvus-io/milvus/issues/new/choose ) if there's a problem.

# Table of contents
- [Development ](#development )
- [Table of contents ](#table-of-contents )
- [Building Milvus with Docker ](#building-milvus-with-docker )
- [Building Milvus on a local OS/shell environment ](#building-milvus-on-a-local-osshell-environment )
- [Hardware Requirements ](#hardware-requirements )
- [Software Requirements ](#software-requirements )
- [Prerequisites ](#prerequisites )
- [Installing Dependencies ](#installing-dependencies )
- [Caveats ](#caveats )
- [CMake \& Conan ](#cmake--conan )
- [Go ](#go )
- [Docker \& Docker Compose ](#docker--docker-compose )
- [Building Milvus ](#building-milvus )
- [Building Milvus v2.3.4 arm image to support ky10 sp3 ](#building-milvus-v234-arm-image-to-support-ky10-sp3 )
- [Software Requirements ](#software-requirements )
- [Install cmake ](#install-cmake )
- [Installing Dependencies ](#installing-dependencies )
- [Install conan ](#install-conan )
- [Install Go 1.21 ](#install-go-121 )
- [Download source code ](#download-source-code )
- [Check OS PAGESIZE ](#check-os-pagesize )
- [Modify the MILVUS_JEMALLOC_LG_PAGE setting ](#modify-the-milvus_jemalloc_lg_page-setting )
- [Build Image ](#build-image )
- [A Quick Start for Testing Milvus ](#a-quick-start-for-testing-milvus )
- [Pre-submission Verification ](#pre-submission-verification )
- [Unit Tests ](#unit-tests )
- [Code coverage ](#code-coverage )
- [E2E Tests ](#e2e-tests )
- [Test on local branch ](#test-on-local-branch )
- [With Linux and MacOS ](#with-linux-and-macos )
- [With docker ](#with-docker )
- [GitHub Flow ](#github-flow )
- [FAQs ](#faqs )
## Building Milvus with Docker

Our official Milvus versions are released as Docker images. To build a Milvus Docker image on your own, please follow [these instructions ](https://github.com/milvus-io/milvus/blob/master/build/README.md ).

## Building Milvus on a local OS/shell environment

The details below outline the hardware and software requirements for building on Linux and MacOS.

### Hardware Requirements

The following specification (either physical or virtual machine resources) is recommended for Milvus to build and run from source code.
```
- 8GB of RAM
- 50GB of free disk space
```
### Software Requirements

Any Linux distribution should work for Milvus development. However, the majority of our contributors work on Ubuntu or CentOS systems, with a small portion of Mac (both x86_64 and Apple Silicon) contributors. If you would like Milvus to build and run on other distributions, you are more than welcome to file an issue and contribute!

Here's a list of verified OS types where Milvus can successfully build and run:

- Debian/Ubuntu
- Amazon Linux
- MacOS (x86_64)
- MacOS (Apple Silicon)

### Compiler Setup
You can use VS Code to develop the C++ and Go code together. Update your user `settings.json` file with the following configuration (note that the hard-coded path in `go.buildFlags` should point to your own workspace):
```json
{
  "go.toolsEnvVars": {
    "PKG_CONFIG_PATH": "${env:PKG_CONFIG_PATH}:${workspaceFolder}/internal/core/output/lib/pkgconfig:${workspaceFolder}/internal/core/output/lib64/pkgconfig",
    "LD_LIBRARY_PATH": "${env:LD_LIBRARY_PATH}:${workspaceFolder}/internal/core/output/lib:${workspaceFolder}/internal/core/output/lib64",
    "RPATH": "${env:RPATH}:${workspaceFolder}/internal/core/output/lib:${workspaceFolder}/internal/core/output/lib64"
  },
  "go.testEnvVars": {
    "PKG_CONFIG_PATH": "${env:PKG_CONFIG_PATH}:${workspaceFolder}/internal/core/output/lib/pkgconfig:${workspaceFolder}/internal/core/output/lib64/pkgconfig",
    "LD_LIBRARY_PATH": "${env:LD_LIBRARY_PATH}:${workspaceFolder}/internal/core/output/lib:${workspaceFolder}/internal/core/output/lib64",
    "RPATH": "${env:RPATH}:${workspaceFolder}/internal/core/output/lib:${workspaceFolder}/internal/core/output/lib64"
  },
  "go.buildFlags": [
    "-ldflags=-r=/Users/zilliz/workspace/milvus/internal/core/output/lib"
  ],
  "terminal.integrated.env.linux": {
    "PKG_CONFIG_PATH": "${env:PKG_CONFIG_PATH}:${workspaceFolder}/internal/core/output/lib/pkgconfig:${workspaceFolder}/internal/core/output/lib64/pkgconfig",
    "LD_LIBRARY_PATH": "${env:LD_LIBRARY_PATH}:${workspaceFolder}/internal/core/output/lib:${workspaceFolder}/internal/core/output/lib64",
    "RPATH": "${env:RPATH}:${workspaceFolder}/internal/core/output/lib:${workspaceFolder}/internal/core/output/lib64"
  },
  "go.useLanguageServer": true,
  "gopls": {
    "formatting.gofumpt": true
  },
  "go.formatTool": "gofumpt",
  "go.lintTool": "golangci-lint",
  "go.testTags": "test,dynamic",
  "go.testTimeout": "10m"
}
```

#### Prerequisites

Linux systems (Ubuntu 20.04 or later recommended):

```bash
go: >= 1.21
cmake: >= 3.18
gcc: 7.5
conan: 1.61
```
MacOS systems with x86_64 (Big Sur 11.5 or later recommended):
```bash
go: >= 1.21
cmake: >= 3.18
llvm: >= 15
conan: 1.61
```
MacOS systems with Apple Silicon (Monterey 12.0.1 or later recommended):
```bash
go: >= 1.21 (Arch=ARM64)
cmake: >= 3.18
llvm: >= 15
conan: 1.61
```
#### Installing Dependencies

In the Milvus repository root, simply run:

```bash
$ ./scripts/install_deps.sh
```

#### Caveats

- [Google Test ](https://github.com/google/googletest.git ) is automatically cloned from GitHub, which in some cases could conflict with your local Google Test library.

Once you have finished, confirm that `gcc` and `make` are installed:
```shell
$ gcc --version
$ make --version
```

#### CMake & Conan
Milvus's algorithm library, Knowhere, is written in C++. CMake is required to compile Milvus. If you don't have it, please follow the instructions in [Installing CMake ](https://cmake.org/install/ ).
Confirm that cmake is available:
```shell
$ cmake --version
```

Note: CMake 3.25 or higher is required to build Milvus.
Milvus uses Conan to manage third-party dependencies for C++.
Install Conan:

```shell
pip install conan==1.64.1
```

Note: Conan 2.x is not currently supported; please use a 1.x release such as the version installed above.
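
You can confirm the installed Conan version:

```shell
$ conan --version
```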
#### Go
Milvus is written in [Go ](http://golang.org/ ). If you don't have a Go development environment, please follow the instructions in the [Go Getting Started guide ](https://golang.org/doc/install ).
Confirm that your `GOPATH` and `GOBIN` environment variables are correctly set as detailed in [How to Write Go Code ](https://golang.org/doc/code.html ) before proceeding.
```shell
$ go version
```
Note: go >= 1.21 is required to build Milvus.
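
If you are unsure whether `GOPATH` and `GOBIN` are set correctly, you can print them (a quick check, assuming a standard Go installation):

```shell
$ go env GOPATH GOBIN
```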
#### Docker & Docker Compose

Milvus depends on etcd, Pulsar, and MinIO. Using Docker Compose to manage these is an easy approach for local development. To install Docker and Docker Compose in your development environment, follow the instructions on the Docker website:

- Docker: https://docs.docker.com/get-docker/
- Docker Compose: https://docs.docker.com/compose/install/
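
Once installed, you can confirm that both tools are available (this assumes Docker Compose V2, which provides the `docker compose` subcommand):

```shell
$ docker --version
$ docker compose version
```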
### Building Milvus
To build the Milvus project, run the following command:
```shell
$ make
```
Milvus uses `conan` to manage 3rd-party dependencies. `conan` checks the consistency of these dependencies every time you run `make`. This process can take a considerable amount of time, especially on a slow network. If you are sure that the 3rd-party dependencies are already consistent, you can skip this check with the following command:
```shell
$ make SKIP_3RDPARTY=1
```
If this command succeeds, you will now have an executable at `bin/milvus` in your Milvus project directory.
If you want to run the `bin/milvus` executable on the host machine, you need to set `LD_LIBRARY_PATH` temporarily:
```shell
$ LD_LIBRARY_PATH=./internal/core/output/lib:lib:$LD_LIBRARY_PATH ./bin/milvus
```

If you want to regenerate the proto files before `make`, use the following command:

```shell
$ make generated-proto-go
```
If you want to know more, read the Makefile.
## Building Milvus v2.3.4 arm image to support ky10 sp3
### Software Requirements
The details below outline the software requirements for building on Ubuntu 20.04.
#### Install cmake
```bash
apt update
wget https://github.com/Kitware/CMake/releases/download/v3.27.9/cmake-3.27.9-linux-aarch64.tar.gz
tar zxf cmake-3.27.9-linux-aarch64.tar.gz
mv cmake-3.27.9-linux-aarch64 /usr/local/cmake
# Add the following export line to /etc/profile, then reload it
vi /etc/profile
export PATH=$PATH:/usr/local/cmake/bin
source /etc/profile
cmake --version
```
#### Installing Dependencies
```bash
sudo apt install -y clang-format clang-tidy ninja-build gcc g++ curl zip unzip tar
```
#### Install conan
```bash
# Verify the python3 version; python3 > 3.8 and <= 3.11 is required
python3 --version
# Install Conan 1.64.1
pip3 install conan==1.64.1
```
#### Install Go 1.21
```bash
wget https://go.dev/dl/go1.21.11.linux-arm64.tar.gz
tar zxf go1.21.11.linux-arm64.tar.gz
mv ./go /usr/local
# Add the following export line to /etc/profile, then reload it
vi /etc/profile
export PATH=$PATH:/usr/local/go/bin
source /etc/profile
go version
```
#### Download source code
```bash
git clone https://github.com/milvus-io/milvus.git
cd ./milvus
git checkout v2.3.4
```
#### Check OS PAGESIZE
```bash
getconf PAGESIZE
```
The PAGESIZE for the ky10 SP3 operating system is 65536, which is 64KB.
#### Modify the MILVUS_JEMALLOC_LG_PAGE setting
The `MILVUS_JEMALLOC_LG_PAGE` variable's primary function is to specify the size of large pages during the compilation of jemalloc. Jemalloc is a memory allocator designed to enhance the performance and efficiency of applications in a multi-threaded environment. By specifying the size of large pages, memory management and access can be optimized, thereby improving performance.
Large page support allows the operating system to manage and allocate memory in larger blocks, reducing the number of page table entries, thereby decreasing the time for page table lookups and improving the efficiency of memory access. This is particularly important when processing large amounts of data, as it can significantly reduce page faults and Translation Lookaside Buffer (TLB) misses, enhancing application performance.
On ARM64 architectures, different systems may support different page sizes, such as 4KB and 64KB. The `MILVUS_JEMALLOC_LG_PAGE` setting allows developers to customize the compilation of jemalloc for the target platform, ensuring it can efficiently operate on systems with varying page sizes. By specifying the `--with-lg-page` configuration option, jemalloc can utilize the optimal page size supported by the system when managing memory.
For example, if a system supports a 64KB page size, by setting `MILVUS_JEMALLOC_LG_PAGE` to the corresponding value (the power of 2, 64KB is 2 to the 16th power, so the value is 16), jemalloc can allocate and manage memory in 64KB units, which can improve the performance of applications running on that system.
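
A quick way to derive this value is from the page size reported by the OS. The following is a small sketch (not part of the Milvus build scripts) that prints the exponent to pass as `MILVUS_JEMALLOC_LG_PAGE`:

```bash
# 65536 = 2^16, so this prints 16 on ky10 SP3 (and 12 on a typical 4KB-page system)
python3 -c "import os; print(os.sysconf('SC_PAGESIZE').bit_length() - 1)"
```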
Modify the make configuration file, located at `./milvus/scripts/core_build.sh`, with the following changes:
```diff
arch=$(uname -m)
CMAKE_CMD="cmake \
${CMAKE_EXTRA_ARGS} \
-DBUILD_UNIT_TEST=${BUILD_UNITTEST} \
-DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
-DCMAKE_CUDA_COMPILER=${CUDA_COMPILER} \
-DCMAKE_LIBRARY_ARCHITECTURE=${arch} \
-DBUILD_COVERAGE=${BUILD_COVERAGE} \
-DMILVUS_GPU_VERSION=${GPU_VERSION} \
-DMILVUS_CUDA_ARCH=${CUDA_ARCH} \
-DEMBEDDED_MILVUS=${EMBEDDED_MILVUS} \
-DBUILD_DISK_ANN=${BUILD_DISK_ANN} \
+ -DMILVUS_JEMALLOC_LG_PAGE=16 \
-DUSE_ASAN=${USE_ASAN} \
-DUSE_DYNAMIC_SIMD=${USE_DYNAMIC_SIMD} \
-DCPU_ARCH=${CPU_ARCH} \
-DINDEX_ENGINE=${INDEX_ENGINE} "
if [ -z "$BUILD_WITHOUT_AZURE" ]; then
CMAKE_CMD=${CMAKE_CMD}"-DAZURE_BUILD_DIR=${AZURE_BUILD_DIR} \
-DVCPKG_TARGET_TRIPLET=${VCPKG_TARGET_TRIPLET} "
fi
CMAKE_CMD=${CMAKE_CMD}"${CPP_SRC_DIR}"
```
The `-DMILVUS_JEMALLOC_LG_PAGE=16` compilation option is used because it specifies the size of "large pages" as 2 to the 16th power bytes, which equals 65536 bytes or 64KB. This value is set to optimize memory management and improve performance, especially on systems that support or prefer using large pages to reduce the overhead of page table management.
Specifying `-DMILVUS_JEMALLOC_LG_PAGE=16` during the compilation of jemalloc informs jemalloc to assume the system's large page size is 64KB. This allows jemalloc to work more efficiently with the operating system's memory manager, using large pages to optimize performance. This is crucial for ensuring optimal performance on systems with different default page sizes, particularly in environments that might have different memory management needs due to varying hardware or system configurations.
### Build Image
```bash
cd ./milvus
cp build/docker/milvus/ubuntu20.04/Dockerfile .
```
Modify the Dockerfile as follows:
```dockerfile
# Copyright (C) 2019-2022 Zilliz. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under the License.
FROM ubuntu:focal-20220426
ARG TARGETARCH
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates libaio-dev libgomp1 && \
    apt-get remove --purge -y && \
rm -rf /var/lib/apt/lists/*
COPY ./bin/ /milvus/bin/
COPY ./configs/ /milvus/configs/
COPY ./internal/core/output/lib/ /milvus/lib/
ENV PATH=/milvus/bin:$PATH
ENV LD_LIBRARY_PATH=/milvus/lib:$LD_LIBRARY_PATH:/usr/lib
ENV LD_PRELOAD=/milvus/lib/libjemalloc.so
ENV MALLOC_CONF=background_thread:true
# Add Tini
ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini-$TARGETARCH /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
WORKDIR /milvus/
```
Build command: `docker build -t ghostbaby/milvus:v2.3.4_arm64 .`
Verify the image: `docker run ghostbaby/milvus:v2.3.4_arm64 milvus run proxy`

## A Quick Start for Testing Milvus

### Pre-submission Verification

Pre-submission verification provides a battery of checks and tests to give your pull request the best chance of being accepted. Developers need to run as many verification tests as possible locally.

To run all pre-submission verification tests, use this command:

```shell
$ make verifiers
```
### Unit Tests

All pull request candidates are required to pass all Milvus unit tests.

Before running unit tests, you need to bring up the Milvus deployment environment first.
You may set up a local docker environment with our docker compose yaml file to start unit testing.
For Apple Silicon users (Apple M1):

```shell
$ cd deployments/docker/dev
$ docker compose -f docker-compose-apple-silicon.yml up -d
$ cd ../../../
$ make unittest
```

For others:

```shell
$ cd deployments/docker/dev
$ docker compose up -d
$ cd ../../../
$ make unittest
```

To run only cpp test:

```shell
$ make test-cpp
```

To run only go test:

```shell
$ make test-go
```

To run a single test case (TestSearchTask in /internal/proxy directory, for example):

```shell
$ source scripts/setenv.sh && go test -v ./internal/proxy/ -test.run TestSearchTask
```
If you are using a Mac with Apple Silicon (M1):

```shell
$ source scripts/setenv.sh && go test -tags=dynamic -v ./internal/proxy/ -test.run TestSearchTask
```

### Code coverage

Before submitting your pull request, make sure your code change is covered by unit tests. Use the following commands to check the code coverage rate.

Run unit test and generate code coverage report:

```shell
$ make codecov
```

This command generates HTML reports for Golang and C++ respectively.
For the Golang report, open `go_coverage.html` under the Milvus project path.
For the C++ report, open `cpp_coverage/index.html` under the Milvus project path.
You can also generate the Golang coverage report alone with:

```shell
$ make codecov-go
```

Or the C++ coverage report with:

```shell
$ make codecov-cpp
```

### E2E Tests

Milvus uses the Python SDK to write test cases that verify the correctness of Milvus functions. Before running E2E tests, you need a running Milvus. There are two deployment modes: Milvus Standalone and Milvus Cluster. Milvus Standalone operates independently as a single instance. Milvus Cluster operates across multiple nodes, with all Milvus instances clustered together to form a unified system that supports larger volumes of data and higher traffic loads.
Both include three components:
1. Milvus: the core functional component.
2. Etcd: the metadata engine, which stores and provides access to the metadata of Milvus' internal components.
3. MinIO: the storage engine, responsible for data persistence in Milvus.
Milvus Cluster includes a further component, Pulsar, which distributes messages through a Pub/Sub mechanism.

```shell
# Running Milvus cluster
$ cd deployments/docker/dev
$ docker compose up -d
$ cd ../../../
$ ./scripts/start_cluster.sh

# Or running Milvus standalone
$ cd deployments/docker/dev
$ docker compose up -d
$ cd ../../../
$ ./scripts/start_standalone.sh
```
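
Before running the tests, you can optionally confirm that the local Milvus instance is healthy. This check assumes the default HTTP metrics port 9091 on localhost:

```shell
$ curl http://localhost:9091/healthz
```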

To run E2E tests, use these commands:

```shell
$ cd tests/python_client
$ pip install -r requirements.txt
$ pytest --tags=L0 -n auto
```

### Test on local branch

#### With Linux and MacOS

After preparing the deployment environment, you can start the cluster on your host machine:

```shell
$ ./scripts/start_cluster.sh
```
#### With docker

Build Milvus and its latest Docker image on your host machine:

```shell
$ ./build/builder.sh make install   # build milvus
$ ./build/build_image.sh            # build milvus latest docker image
$ docker images                     # check if milvus latest image is ready
REPOSITORY          TAG       IMAGE ID       CREATED          SIZE
milvusdb/milvus     latest    63c62ff7c1b7   52 minutes ago   570MB
```

## GitHub Flow

To check out code to work on, please refer to the [GitHub Flow ](https://guides.github.com/introduction/flow/ ).

## FAQs

Q: The go building phase fails on Apple Silicon (Mac M1) machines.
A: Please double-check that you have the [right Go version ](https://go.dev/dl/ ) installed, i.e. with OS=macOS and Arch=ARM64.
---

Q: "make" fails with "_ld: library not found for -lSystem_" on MacOS.
A: There are a couple of things you could try:

1. Use **Software Update** (from **About this Mac** -> **Overview** ) to install updates.
2. Try the following commands:

```bash
sudo rm -rf /Library/Developer/CommandLineTools
sudo xcode-select --install
```
---

Q: Rocksdb fails to compile with "_ld: warning: object file was built for newer macOS version (11.6) than being linked (11.0)._" on MacOS.
A: Use **Software Update** (from **About this Mac** -> **Overview** ) to install updates.
---

Q: Some Go unit tests failed.
A: We are aware that some tests can be flaky occasionally. If there's something you believe is abnormal (i.e. tests that fail every single time), you are more than welcome to [file an issue ](https://github.com/milvus-io/milvus/issues/new/choose )!

---
Q: Brew: Unexpected Disconnect while reading sideband packet
```bash
==> Tapping homebrew/core
remote: Enumerating objects: 1107077, done.
remote: Counting objects: 100% (228/228), done.
remote: Compressing objects: 100% (157/157), done.
error: 545 bytes of body are still expected.44 MiB | 341.00 KiB/s
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: index-pack failed
Failed during: git fetch --force origin refs/heads/master:refs/remotes/origin/master
```
A: Try increasing the HTTP post buffer:
```bash
git config --global http.postBuffer 1M
```
---
Q: Brew: "command not found" after installation
A: Set up your git config:
```bash
git config --global user.email xxx
git config --global user.name xxx
```
---
Q: Docker: error getting credentials - err: exit status 1, out: ``
A: Remove the `"credsStore"` entry from `~/.docker/config.json`.
---
Q: ModuleNotFoundError: No module named 'imp'
A: Python 3.12 has removed the `imp` module; please downgrade to Python 3.11 for now.
---
Q: Conan: Unrecognized arguments: --install-folder conan
A: The Conan version is not correct. Please change to version 1.61 for now.
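For example, assuming a pip-based installation:
```bash
pip3 install conan==1.61.0
```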
---
Q: Conan command not found
A: Export the directory that contains the `conan` executable (typically the Python user bin directory) to your `PATH`.
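A minimal sketch, assuming Conan was installed with `pip3 install --user` and a bash shell:
```bash
export PATH="$PATH:$(python3 -m site --user-base)/bin"
```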
---
Q: LLVM: use of undeclared identifier 'kSecFormatOpenSSL'
A: Reinstall llvm@15:
```bash
brew reinstall llvm@15
export LDFLAGS="-L/opt/homebrew/opt/llvm@15/lib"
export CPPFLAGS="-I/opt/homebrew/opt/llvm@15/include"
```