issue: #36621
- For simple types in a struct, add "string" to the JSON tag for
automatic string conversion during JSON encoding.
- For complex types in a struct, replace "int64" with "string" (see the
sketch below).
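A minimal sketch of both approaches (the struct and field names are
hypothetical, not Milvus's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SegmentInfo illustrates the two approaches: the simple int64 field
// carries the ",string" tag option so encoding/json emits it as a quoted
// string, while the field used by complex types is declared as string.
type SegmentInfo struct {
	SegmentID    int64  `json:"segment_id,string"` // encoded as "123", not 123
	CollectionID string `json:"collection_id"`     // int64 replaced with string
}

func main() {
	out, _ := json.Marshal(SegmentInfo{SegmentID: 123, CollectionID: "456"})
	fmt.Println(string(out)) // {"segment_id":"123","collection_id":"456"}
}
```

This keeps 64-bit IDs intact for JavaScript clients, whose Number type
cannot represent all int64 values exactly.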
Signed-off-by: jaime <yun.zhang@zilliz.com>
issue: #37172
- add a redo interceptor to implement append context refresh (making a
new timetick).
- add a create segment handler for the flusher.
- make empty segments flushable and transition them directly to dropped.
- add a create segment message to the WAL when creating a new growing
segment.
- make the insert operation follow the sequence: createSegment ->
insert -> insert -> flushSegment.
- make manual flush follow the sequence: flushTs -> flushSegment ->
flushSegment -> manualFlush.
---------
Signed-off-by: chyezh <chyezh@outlook.com>
issue: https://github.com/milvus-io/milvus/issues/27576
# Main Goals
1. Create and describe collections with geospatial fields, enabling both
client and server to recognize and process geo fields.
2. Insert geospatial data as payload values in the insert binlog, and
print the values for verification.
3. Load segments containing geospatial data into memory.
4. Ensure query outputs can display geospatial data.
5. Support filtering on GIS functions for geospatial columns.
# Solution
1. **Add Type**: Modify the Milvus core by adding a Geospatial type in
both the C++ and Go code layers, defining the Geospatial data structure
and the corresponding interfaces.
2. **Dependency Libraries**: Introduce necessary geospatial data
processing libraries. In the C++ source code, use Conan package
management to include the GDAL library. In the Go source code, add the
go-geom library to the go.mod file.
3. **Protocol Interface**: Revise the Milvus protocol to provide
mechanisms for Geospatial message serialization and deserialization.
4. **Data Pipeline**: Facilitate interaction between the client and
proxy using the WKT format for geospatial data. The proxy will convert
all data into WKB format for downstream processing, providing column
data interfaces, segment encapsulation, segment loading, payload
writing, and cache block management (see the WKT-to-WKB sketch after
this list).
5. **Query Operators**: Implement simple display and support for filter
queries. Initially, focus on filtering based on spatial relationships
for a single column of geospatial literal values, providing parsing and
execution for query expressions.
6. **Client Modification**: Enable the client to handle user input for
geospatial data and facilitate end-to-end testing. Check the
modifications in pymilvus.
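The WKT-to-WKB conversion in item 4 can be sketched with the go-geom
library introduced in item 2; the surrounding flow is illustrative,
while the `wkt`/`wkb` calls are go-geom's actual API:

```go
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/twpayne/go-geom/encoding/wkb"
	"github.com/twpayne/go-geom/encoding/wkt"
)

func main() {
	// Client <-> proxy: geospatial values travel as WKT text.
	g, err := wkt.Unmarshal("POINT (30 10)")
	if err != nil {
		panic(err)
	}
	// Proxy -> downstream: values are converted to WKB bytes before
	// payload writing and segment loading.
	raw, err := wkb.Marshal(g, binary.LittleEndian)
	if err != nil {
		panic(err)
	}
	fmt.Printf("WKB payload: %x\n", raw)
}
```

Downstream components then store, load, and filter on the compact WKB
bytes rather than re-parsing text.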
---------
Signed-off-by: tasty-gumi <1021989072@qq.com>
issue: #36672
Expressions support filling in elements through templates, which
reduces the overhead of parsing the elements (see the sketch below).
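A hypothetical sketch of why templates cut this overhead (none of these
names are Milvus APIs): the expression containing a placeholder is
parsed once, and the element list is bound as a typed value instead of
being parsed element by element:

```go
package main

import "fmt"

// plan stands in for a parsed expression such as "id in {ids}": the
// parser sees only the placeholder name, never the element values.
type plan struct{ placeholder string }

// parseOnce is a stand-in for the expression parser; real parsing is
// elided. Large element arrays never pass through it.
func parseOnce(expr string) plan { return plan{placeholder: "ids"} }

// bind supplies the elements as a typed value at execution time.
func bind(p plan, params map[string][]int64) []int64 {
	return params[p.placeholder]
}

func main() {
	p := parseOnce("id in {ids}")
	fmt.Println(bind(p, map[string][]int64{"ids": {1, 2, 3}})) // [1 2 3]
}
```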
---------
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
issue: #36621
1. Add API to access task runtime metrics, including:
- build index task
- compaction task
- import task
- balance (including load/release of segments/channels and some leader
tasks on querycoord)
- sync task
2. Add a debug mode to the web page; use debug=true or debug=false in
the URL query parameters to enable or disable it.
Signed-off-by: jaime <yun.zhang@zilliz.com>
Native support for Google Cloud Storage using the Google Cloud Storage
client libraries. Authentication is performed using GCS service account
credentials JSON.
Currently, Milvus supports Google Cloud Storage using S3-compatible APIs
via the AWS SDK. This approach has the following limitations:
1. Overhead: Translating requests between S3-compatible APIs and GCS can
introduce additional overhead.
2. Compatibility Limitations: Some features of the original S3 API may
not fully translate or work as expected with GCS.
To address these limitations, this enhancement is needed.
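A minimal sketch of what native access looks like (bucket name and
credentials path are illustrative; the calls are from the official
`cloud.google.com/go/storage` library):

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// Authenticate with a GCS service account credentials JSON file.
	client, err := storage.NewClient(ctx,
		option.WithCredentialsFile("/path/to/service-account.json"))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// A native GCS call: no S3-compatibility translation in between.
	attrs, err := client.Bucket("milvus-bucket").Attrs(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("bucket location:", attrs.Location)
}
```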
Related Issue: #36212
See #36122
This PR is designed to enhance log performance through two improvements:
1. Optimize JSON encoding by switching the JSON serializer to
`json-iterator`.
2. Add support for lazy initialization via `WithLazy`.
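The first improvement follows json-iterator's drop-in replacement
pattern; a minimal sketch (the package-level variable name is the
conventional one, not necessarily Milvus's):

```go
package main

import (
	"fmt"

	jsoniter "github.com/json-iterator/go"
)

// Drop-in replacement: this config mimics encoding/json's behavior, so
// existing json.Marshal call sites keep working but run on json-iterator.
var json = jsoniter.ConfigCompatibleWithStandardLibrary

func main() {
	out, _ := json.Marshal(map[string]int64{"segmentID": 42})
	fmt.Println(string(out)) // {"segmentID":42}
}
```

For the second improvement, zap, the logging library Milvus builds on,
provides `Logger.WithLazy`, which defers evaluating child-logger fields
until the first log write instead of at construction time.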
---------
Signed-off-by: Ted Xu <ted.xu@zilliz.com>
issue: #33285
- use the streaming service in insert/upsert/flush/delete/querynode
- fix flusher bugs and refactor the flush operation
- enable the streaming service for DML and DDL
- pass the e2e tests when enabling the streaming service
- pass the integration tests when enabling the streaming service
---------
Signed-off-by: chyezh <chyezh@outlook.com>
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
See also #34746
This PR adds a segment level field to the responses of
`GetPersistentSegmentInfo` and `GetQuerySegmentInfo`.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #32995
To speed up the construction and querying of Bloom filters, we chose a
blocked Bloom filter instead of a basic Bloom filter implementation.
WARN: This PR remains compatible with the old Bloom filter
implementation, but after falling back to an older Milvus version,
Bloom filter deserialization may fail.
In single Bloom filter test cases with a capacity of 1,000,000 and a
false positive rate (FPR) of 0.001, the blocked Bloom filter is 5 times
faster than the basic Bloom filter in both querying and construction, at
the cost of a 30% increase in memory usage.
- Block BF construct time {"time": "54.128131ms"}
- Block BF size {"size": 3021578}
- Block BF Test cost {"time": "55.407352ms"}
- Basic BF construct time {"time": "210.262183ms"}
- Basic BF size {"size": 2396308}
- Basic BF Test cost {"time": "192.596229ms"}
In multi Bloom filter test cases with a capacity of 100,000, an FPR of
0.001, and 100 Bloom filters, we reuse the primary key locations for all
Bloom filters to avoid repeated hash computations. As a result, the
blocked Bloom filter is also 5 times faster than the basic Bloom filter
in querying.
- Block BF TestLocation cost {"time": "529.97183ms"}
- Basic BF TestLocation cost {"time": "3.197430181s"}
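For illustration, a sketch of the blocked design using
`greatroar/blobloom`, one blocked Bloom filter implementation in Go
(the key and parameters mirror the single-filter benchmark above):

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
	"github.com/greatroar/blobloom"
)

func main() {
	// Size like the single-filter benchmark: 1,000,000 keys at FPR 0.001.
	f := blobloom.NewOptimized(blobloom.Config{
		Capacity: 1_000_000,
		FPRate:   0.001,
	})

	// The caller hashes the primary key once; a blocked filter derives all
	// probe positions within one cache-line-sized block of that hash, so
	// Add/Has touch a single cache line instead of k random ones. Reusing
	// the same hash across many filters is what the multi-filter
	// "TestLocation" case above exploits.
	h := xxhash.Sum64([]byte("pk-12345"))
	f.Add(h)
	fmt.Println(f.Has(h)) // true
}
```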
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Signed-off-by: shaoting-huang <shaoting-huang@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/32982
# Background
Go 1.21 introduces several improvements and changes over Go 1.20, which
is quite stable now. According to the
[Go 1.21 Release Notes](https://tip.golang.org/doc/go1.21), the biggest
change in Go 1.21 is enabling Profile-Guided Optimization (PGO) by
default, which can improve performance by around 2-14%. Here is a
summary of the PGO workflow:
1. Build the initial binary (without PGO).
2. Deploy it to the production environment.
3. Run the program and collect performance analysis data (CPU pprof).
4. Analyze the collected data and select a performance profile for PGO.
5. Place the performance analysis file in the main package directory
and name it default.pgo.
6. `go build` detects the default.pgo file and enables PGO.
7. Build and release the updated binary (with PGO).
8. Iterate and repeat the above steps.
<img width="657" alt="Screenshot 2024-05-14 at 15 57 01"
src="https://github.com/milvus-io/milvus/assets/167743503/b08d4300-0be1-44dc-801f-ce681dabc581">
# What does this PR do
There are three experiments, search benchmark by Zilliz test platform,
search benchmark by open-source
[VectorDBBench](https://github.com/zilliztech/VectorDBBench?tab=readme-ov-file),
and search benchmark with PGO. We do both search benchmarks by Zilliz
test platform and by VectorDBBench to reduce reliance on a single
experimental result. Besides, we validate the performance enhancement
with PGO.
## Search Benchmark Report by Zilliz Test Platform
An upgrade to Go 1.21 was conducted on a Milvus Standalone server,
equipped with 16 CPUs and 64GB of memory. The search performance was
evaluated using a 1 million entry local dataset with an L2 metric type
in a 768-dimensional space. The system was tested for concurrent
searches with 50 concurrent tasks for 1 hour, each with a 20-second
interval. The reason for using one server rather than two servers to
compare is to guarantee the same data source and same segment state
after compaction.
Test Sequence:
1. Go 1.20 Initial Run: Insert data, build index, load index, and
search.
2. Go 1.20 Rebuild: Rebuild the index with the same dataset, load index,
and search.
3. Go 1.21 Load: Upgrade the server to Go 1.21. Then load the index
from the second run, and search.
4. Go 1.21 Rebuild: Rebuild the index with the same dataset, load index,
and search.
Search Metrics:
| Metric | Go 1.20 | Go 1.20 Rebuild Index | Go 1.21 | Go 1.21 Rebuild Index |
|----------------------|------------|-----------------------|------------|-----------------------|
| `search requests` | 10,942,683 | 16,131,726 | 16,200,887 | 16,331,052 |
| `search fails` | 0 | 0 | 0 | 0 |
| `search RT_avg` (ms) | 16.44 | 11.15 | 11.11 | 11.02 |
| `search RT_min` (ms) | 1.30 | 1.28 | 1.31 | 1.26 |
| `search RT_max` (ms) | 446.61 | 233.22 | 235.90 | 147.93 |
| `search TP50` (ms) | 11.74 | 10.46 | 10.43 | 10.35 |
| `search TP99` (ms) | 92.30 | 25.76 | 25.36 | 25.23 |
| `search RPS` | 3,039 | 4,481 | 4,500 | 4,536 |
### Key Findings
The benchmark tests show negligible variance in index construction,
with index build times of 340.39 ms on Go 1.20 and 337.60 ms on Go
1.21. However, Go 1.21 offers slightly better performance in search
operations compared to Go 1.20, with improvements in handling
concurrent tasks and reduced response times.
## Search Benchmark Report by VectorDBBench
Following
[VectorDBBench](https://github.com/zilliztech/VectorDBBench?tab=readme-ov-file),
we created a VectorDBBench test for Go 1.20 and Go 1.21. We test the
search performance with Go 1.20 and Go 1.21 (without PGO) on the Milvus
Standalone system. The tests were conducted using the Cohere dataset
with 1 million entries in a 768-dimensional space, utilizing the COSINE
metric type.
Search Metrics:
| Metric | Go 1.20 | Go 1.21 without PGO |
|--------|---------|---------------------|
| Load Duration (seconds) | 1195.95 | 976.37 |
| Queries Per Second (QPS) | 841.62 | 875.89 |
| 99th Percentile Serial Latency (seconds) | 0.0047 | 0.0076 |
| Recall | 0.9487 | 0.9489 |
### Key Findings
Go 1.21 shows faster index loading and handles a higher search QPS.
## PGO Performance Test
Milvus has already added
[net/http/pprof](https://pkg.go.dev/net/http/pprof) in the metrics. So
we can curl the CPU profile directly by running
`curl -o default.pgo
"http://${MILVUS_SERVER_IP}:${MILVUS_SERVER_PORT}/debug/pprof/profile?seconds=${TIME_SECOND}"`
to collect the profile as default.pgo during the first search. Then
we build Milvus with PGO and use the same index to run the search again.
The result is as below:
Search Metrics:
| Metric | Go 1.21 Without PGO | Go 1.21 With PGO | Change (%) |
|---------------------------------------------|------------------|-----------------|------------|
| `search Requests` | 2,644,583 | 2,837,726 | +7.30% |
| `search Fails` | 0 | 0 | N/A |
| `search RT_avg` (ms) | 11.34 | 10.57 | -6.78% |
| `search RT_min` (ms) | 1.39 | 1.32 | -5.18% |
| `search RT_max` (ms) | 349.72 | 143.72 | -58.91% |
| `search TP50` (ms) | 10.57 | 9.93 | -6.05% |
| `search TP99` (ms) | 26.14 | 24.16 | -7.56% |
| `search RPS` | 4,407 | 4,729 | +7.30% |
### Key Findings
PGO led to a notable enhancement in search performance, particularly in
reducing the maximum response time by 58% and increasing the search QPS
by 7.3%.
### Further Analysis
Generate a diff flame graph between the two CPU profiles by running `go
tool pprof -http=:8000 -normalize -diff_base nopgo.pgo pgo.pgo`
<img width="1894" alt="goprofiling"
src="https://github.com/milvus-io/milvus/assets/167743503/ab9e91eb-95c7-4963-acd9-d1c3c73ee010">
Further insight into HnswIndexNode and the Milvus search handler:
<img width="1906" alt="hnsw"
src="https://github.com/milvus-io/milvus/assets/167743503/a04cf4a0-7c97-4451-b3cf-98afc20a0b05">
<img width="1873" alt="search_handler"
src="https://github.com/milvus-io/milvus/assets/167743503/5f4d3982-18dd-4115-8e76-460f7f534c7f">
After applying PGO to the Milvus server, the CPU utilization of the
faiss::fvec_L2 function decreased. This optimization significantly
enhances the performance of the
[HnswIndexNode::Search::searchKnn](e0c9c41aa2/src/index/hnsw/hnsw.cc (L203))
method, which is frequently invoked by Knowhere during high-concurrency
searches. As explained in the Go release notes, the function might be
more aggressively inlined by the Go compiler during the second build,
guided by the CPU profile collected from the first run. As a result,
the search handler efficiency within the Milvus DataNode has improved,
allowing the server to process a higher number of search queries per
second (QPS).
# Conclusion
The combination of Go 1.21 and PGO has led to substantial enhancements
in search performance for Milvus server, particularly in terms of search
QPS and response times, making it more efficient for handling
high-concurrency search operations.
Signed-off-by: shaoting-huang <shaoting.huang@zilliz.com>
See also #33062
This PR:
- Add `lock.RWMutex` & `lock.Mutex` aliases to switch implementations
based on build flags
- When the build flags include `test`, use `go-deadlock` to detect
possible deadlocks (see the sketch below)
- Replace all `sync.RWMutex` & `sync.Mutex` in the datacoord pkg
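A minimal sketch of the alias pattern, assuming `go-deadlock`'s drop-in
mutex types (file and package layout are illustrative):

```go
// lock/mutex_prod.go
//go:build !test

package lock

import "sync"

// Production builds: plain standard-library mutexes, zero overhead.
type (
	Mutex   = sync.Mutex
	RWMutex = sync.RWMutex
)
```

```go
// lock/mutex_deadlock.go
//go:build test

package lock

import "github.com/sasha-s/go-deadlock"

// Test builds: go-deadlock's API-compatible types, which report
// lock-order inversions and locks held past a timeout.
type (
	Mutex   = deadlock.Mutex
	RWMutex = deadlock.RWMutex
)
```

Callers simply declare `lock.Mutex` fields; building with `-tags test`
swaps in the deadlock-detecting implementation with no call-site
changes.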
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Bump version to v2.4.2; also bump milvus-proto version to v2.4.3.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>