Due to the removal of injection and syncSegments from compaction, we
need to ensure that no compaction is successfully executed during a
rolling upgrade. This PR renames Compaction to CompactionV2, with the
following effects:
- New datacoord + old datanode: Utilizes the CompactionV2 interface,
resulting in the datanode error "CompactionV2 not implemented," causing
compaction to fail;
- Old datacoord + new datanode: Utilizes the CompactionV1 interface,
resulting in the datanode error "CompactionV1 not implemented," causing
compaction to fail.
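To illustrate why both mixed-version pairings fail, here is a minimal hypothetical sketch (the request type and method signatures are placeholders, not the actual Milvus datanode service): the new node only serves the renamed method and answers the legacy one with an unimplemented error.

```go
// Hypothetical sketch of the rename's effect during a rolling upgrade.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// compactionRequest is a placeholder for the real compaction plan request.
type compactionRequest struct{ PlanID int64 }

type newDataNode struct{}

// CompactionV2 is the only compaction entry point the new datanode serves.
func (n *newDataNode) CompactionV2(ctx context.Context, req *compactionRequest) error {
	fmt.Println("executing compaction plan", req.PlanID)
	return nil
}

// Compaction (the V1 interface) is deliberately unimplemented, so an old
// datacoord calling it during a rolling upgrade gets an error instead of
// running a compaction that would miss injection/syncSegments handling.
func (n *newDataNode) Compaction(ctx context.Context, req *compactionRequest) error {
	return status.Error(codes.Unimplemented, "CompactionV1 not implemented")
}

func main() {
	n := &newDataNode{}
	// Old datacoord + new datanode: the legacy call fails, compaction is skipped.
	if err := n.Compaction(context.Background(), &compactionRequest{PlanID: 1}); err != nil {
		fmt.Println("compaction failed:", err)
	}
	// The symmetric case (new datacoord + old datanode) fails the same way,
	// because the old node has no CompactionV2 method at all.
}
```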
issue: https://github.com/milvus-io/milvus/issues/32809
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
The default LLVM toolchain version on Ubuntu 20.04 is 10, while Ubuntu
22.04 does not ship `clang-tidy-10` or `clang-format-10` by default.
issue: #33142
Signed-off-by: Patrick Weizhi Xu <weizhi.xu@zilliz.com>
Signed-off-by: Yinzuo Jiang <jiangyinzuo@foxmail.com>
See also #33561
This PR:
- Use zero copy when buffering insert messages
- Make `storage.InsertCodec` support serializing multiple insert data
chunks into the same batch of binlog files
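A minimal, purely illustrative sketch of the multi-chunk idea (the types below are made up for the example and are not the real `storage.InsertCodec` API, and the zero-copy buffering is not shown): several buffered chunks of the same field are appended to one shared writer, so they land in a single batch of binlog files.

```go
// Illustrative only: append multiple insert-data chunks into one binlog batch.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// insertChunk stands in for one buffered chunk of rows for a single field.
type insertChunk struct {
	fieldID int64
	rows    []int64
}

// serializeChunks writes every chunk of the same field into one shared buffer
// (one "binlog file" per field) instead of flushing a separate file per chunk.
func serializeChunks(chunks []insertChunk) map[int64][]byte {
	writers := make(map[int64]*bytes.Buffer)
	for _, c := range chunks {
		buf, ok := writers[c.fieldID]
		if !ok {
			buf = &bytes.Buffer{}
			writers[c.fieldID] = buf
		}
		for _, v := range c.rows {
			_ = binary.Write(buf, binary.LittleEndian, v)
		}
	}
	out := make(map[int64][]byte, len(writers))
	for fieldID, buf := range writers {
		out[fieldID] = buf.Bytes()
	}
	return out
}

func main() {
	blobs := serializeChunks([]insertChunk{
		{fieldID: 100, rows: []int64{1, 2}},
		{fieldID: 100, rows: []int64{3, 4}}, // second chunk, same binlog file
	})
	fmt.Println(len(blobs[100])) // 32: four int64 rows serialized into one file
}
```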
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #29419
* sparse float vector to support raw data mmap
For getting vectors from the chunk cache, I added a unit test but marked
it as skipped due to a known issue; I have tested it locally.
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
issue: #32698
This PR adds two REST APIs for component stop and status check:
1. `/management/stop?role=querynode` can stop the specified component
2. `/management/check/ready?role=rootcoord` can check whether the target
component is serviceable
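A minimal usage sketch of the two endpoints; the port (9091) and the use of plain GET requests are assumptions for illustration, not guaranteed by this PR.

```go
// Query the management endpoints added by this PR (host/port assumed).
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Check whether rootcoord is serviceable.
	resp, err := http.Get("http://localhost:9091/management/check/ready?role=rootcoord")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println("ready check:", string(body))

	// Ask the specified querynode component to stop.
	resp, err = http.Get("http://localhost:9091/management/stop?role=querynode")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("stop request status:", resp.Status)
}
```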
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
See also #33608
Make `fixDefaultDBIDConsistency` also write back the collection dbid
modification when a non-DB-id collection is found.
This fix shall prevent dropped collections of this kind from showing up
again after a drop and restart.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The data coordinator computes the appropriate number of import segments,
so when importing in the data node, a segment can be selected at random.
issue: https://github.com/milvus-io/milvus/issues/33604
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
1. Fill the log ID of stats logs from import
2. Add a check to validate the log ID before writing to meta
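A hypothetical sketch of the added guard (names are illustrative, not the actual meta code): reject any binlog whose log ID was never filled in before it reaches meta.

```go
// Illustrative validation of binlog log IDs before persisting to meta.
package main

import (
	"errors"
	"fmt"
)

type binlog struct {
	LogID   int64
	LogPath string
}

// validateLogIDs fails fast if any stats/insert binlog still carries a zero
// log ID, e.g. one produced by the import path before this fix.
func validateLogIDs(logs []binlog) error {
	for _, l := range logs {
		if l.LogID == 0 {
			return errors.New("invalid binlog: log ID not filled before writing to meta")
		}
	}
	return nil
}

func main() {
	fmt.Println(validateLogIDs([]binlog{{LogID: 0, LogPath: "stats/0"}}))
}
```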
issue: https://github.com/milvus-io/milvus/issues/33476
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
The `UseDefaultConsistency` param is crucial for debugging slow-query
problems. It could be confusing when the guarantee timestamp is 1 while
this param is not logged.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #27675
Store a pk-to-minimal-timestamp mapping in `inData` instead of a bloom
filter to check whether a delete entry hits the current insert batch.
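A hypothetical sketch of the idea (field and method names are illustrative, and the timestamp boundary is approximate): keep the minimal insert timestamp per primary key in the batch and use it, rather than a bloom filter, to decide whether a delete entry can hit the batch.

```go
// Illustrative pk -> minimal timestamp check for delete entries.
package main

import "fmt"

// inData stands in for the buffered insert batch.
type inData struct {
	pkMinTs map[int64]uint64 // minimal insert timestamp per primary key
}

// deleteHits reports whether a delete record could affect rows in this batch:
// the pk must exist here, and the delete must not be older than every insert
// of that pk (exact boundary semantics are illustrative).
func (d *inData) deleteHits(pk int64, deleteTs uint64) bool {
	minTs, ok := d.pkMinTs[pk]
	if !ok {
		return false // pk not present in this insert batch
	}
	return deleteTs >= minTs
}

func main() {
	batch := &inData{pkMinTs: map[int64]uint64{42: 100}}
	fmt.Println(batch.deleteHits(42, 99))  // false: delete predates every insert of pk 42
	fmt.Println(batch.deleteHits(42, 150)) // true
}
```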
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #33483
Wrap `C.InitTrace` & `C.SetTrace` with a timeout, preventing otlp
initialization from hanging forever when the endpoint is not set correctly.
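A minimal sketch of the wrapping pattern (the cgo call is replaced by a placeholder): run the possibly-hanging initialization in a goroutine and give up after a timeout instead of blocking startup forever.

```go
// Timeout wrapper around a potentially-hanging initialization call.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// initTrace stands in for the cgo call (C.InitTrace / C.SetTrace) that can
// hang when the otlp endpoint is misconfigured.
func initTrace() { select {} } // never returns, simulating a hang

func initTraceWithTimeout(timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	done := make(chan struct{})
	go func() {
		initTrace() // in this sketch the goroutine is abandoned on timeout
		close(done)
	}()

	select {
	case <-done:
		return nil
	case <-ctx.Done():
		return errors.New("init trace timed out, check otlp endpoint config")
	}
}

func main() {
	fmt.Println(initTraceWithTimeout(100 * time.Millisecond))
}
```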
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The query coord sends a load request to the delegator; the delegator
loads the BF first, then forwards the load request to the QN worker. But
when the QN worker has no more memory, it returns load failure
immediately, and the delegator rolls back the loaded BF. The query coord
will then retry the load request, and the delegator will load and roll
back the BF again and again.
This PR delays the BF loading step until the segment load succeeds in the
worker.
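A hypothetical sketch of the reordering (types and callbacks are illustrative): the delegator touches the bloom filter only after the worker reports a successful segment load, so a worker OOM no longer triggers repeated BF load/rollback cycles.

```go
// Illustrative reordering: forward to the worker first, load the BF only on success.
package main

import (
	"errors"
	"fmt"
)

type delegator struct{ bfLoaded map[int64]bool }

func (d *delegator) loadSegment(segID int64, workerLoad func(int64) error) error {
	// Forward to the worker first; do NOT touch the bloom filter yet.
	if err := workerLoad(segID); err != nil {
		return err // nothing to roll back on the delegator side
	}
	// Only after the worker succeeds, load the BF on the delegator.
	d.bfLoaded[segID] = true
	return nil
}

func main() {
	d := &delegator{bfLoaded: map[int64]bool{}}
	oom := func(int64) error { return errors.New("worker out of memory") }
	// Load fails, no BF was loaded, so there is nothing to roll back on retry.
	fmt.Println(d.loadSegment(1, oom), d.bfLoaded[1])
}
```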
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #32995
To speed up the construction and querying of Bloom filters, we chose a
blocked Bloom filter instead of a basic Bloom filter implementation.
WARN: This PR is compatible with the old-version BF implementation, but
if you fall back to an older Milvus version, bloom filter deserialization
may fail.
In single Bloom filter test cases with a capacity of 1,000,000 and a
false positive rate (FPR) of 0.001, the blocked Bloom filter is 5 times
faster than the basic Bloom filter in both querying and construction, at
the cost of a 30% increase in memory usage.
- Block BF construct time {"time": "54.128131ms"}
- Block BF size {"size": 3021578}
- Block BF Test cost {"time": "55.407352ms"}
- Basic BF construct time {"time": "210.262183ms"}
- Basic BF size {"size": 2396308}
- Basic BF Test cost {"time": "192.596229ms"}
In multi Bloom filter test cases with a capacity of 100,000, an FPR of
0.001, and 100 Bloom filters, we reuse the primary key locations for all
Bloom filters to avoid repeated hash computations. As a result, the
blocked Bloom filter is also 5 times faster than the basic Bloom filter
in querying.
- Block BF TestLocation cost {"time": "529.97183ms"}
- Basic BF TestLocation cost {"time": "3.197430181s"}
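A hypothetical sketch of the location-reuse idea in the multi-filter case (the toy filter below is not the blocked bloom filter implementation used by Milvus): hash each primary key once into its probe locations, then test those same locations against every segment's filter that shares the same parameters.

```go
// Illustrative location reuse across multiple bloom filters.
package main

import (
	"fmt"
	"hash/fnv"
)

const k = 7 // probe locations per key (illustrative)

type toyBloomFilter struct {
	bits []bool
}

// locations hashes the primary key once and derives k probe locations via
// double hashing, so multiple filters can be tested without re-hashing.
func locations(pk string) [k]uint64 {
	h := fnv.New64a()
	h.Write([]byte(pk))
	h1 := h.Sum64()
	h2 := h1>>32 | h1<<32
	var locs [k]uint64
	for i := 0; i < k; i++ {
		locs[i] = h1 + uint64(i)*h2
	}
	return locs
}

func (f *toyBloomFilter) addLocations(locs [k]uint64) {
	for _, l := range locs {
		f.bits[l%uint64(len(f.bits))] = true
	}
}

func (f *toyBloomFilter) testLocations(locs [k]uint64) bool {
	for _, l := range locs {
		if !f.bits[l%uint64(len(f.bits))] {
			return false
		}
	}
	return true
}

func main() {
	// Two segment filters with identical parameters can share the locations.
	filters := []*toyBloomFilter{
		{bits: make([]bool, 1<<20)},
		{bits: make([]bool, 1<<20)},
	}
	locs := locations("pk-42") // hashed exactly once
	filters[0].addLocations(locs)
	for i, f := range filters {
		fmt.Printf("filter %d hit: %v\n", i, f.testLocations(locs))
	}
}
```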
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/33489
Update the knowhere version to the latest. Remove usage of `seed_ef`, as
it has been replaced by the existing `ef`.
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
related: #33137
Add `has_more_result_tag` to the reduce at various levels to rectify
`reduce_stop_for_best`.
Signed-off-by: MrPresent-Han <chun.han@zilliz.com>
See also #33442
This fix shall prevent the group checker from repeatedly printing the
"some node(s) haven't received input" error message after a collection is
released.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #30040
This PR introduces two database-level props:
1. database.replica.number
2. database.resource_groups
Users can set these two database props via the AlterDatabase API, then
load collections without specifying replica_num and resource groups; the
database-level load params will be used when loading those collections.
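A minimal illustration of how these two props could be encoded as key/value pairs for AlterDatabase; the helper function and the comma-separated resource-group encoding are assumptions for the example, not the exact API.

```go
// Illustrative key/value encoding of the database-level load props.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// databaseLoadProps builds the properties passed to AlterDatabase; the helper
// and the comma-separated resource-group encoding are illustrative.
func databaseLoadProps(replicaNum int, resourceGroups []string) map[string]string {
	return map[string]string{
		"database.replica.number":  strconv.Itoa(replicaNum),
		"database.resource_groups": strings.Join(resourceGroups, ","),
	}
}

func main() {
	props := databaseLoadProps(2, []string{"rg1", "rg2"})
	fmt.Println(props)
	// Collections loaded without an explicit replica_num / resource groups
	// then fall back to these database-level values.
}
```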
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
See also #32642
`LocationCache` used a map to store different locations for different K,
which may cost lots of CPU time when locations are fetched many times.
This PR changes the implementation of LocationCache to store only the
locations for the largest K used, completely removing the map access
operation.
See the pprof from @XuanYang-cn's test:
![image](https://github.com/milvus-io/milvus/assets/84113973/ad17cff8-62ad-4d78-9bb0-f6df0512f4ea)
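A hypothetical sketch of the reworked cache (not the actual LocationCache code): compute and keep only the locations for the largest K seen so far, and serve smaller K by slicing that array, so the hot path never touches a map.

```go
// Illustrative largest-K location cache without a map on the hot path.
package main

import (
	"fmt"
	"hash/fnv"
)

// locationCache keeps only the locations computed for the largest k requested.
type locationCache struct {
	pk   string
	locs []uint64
}

// Locations returns k probe locations, reusing a prefix of the cached slice
// whenever possible, so the hot path involves no map lookup at all.
func (c *locationCache) Locations(k int) []uint64 {
	if k <= len(c.locs) {
		return c.locs[:k]
	}
	c.locs = computeLocations(c.pk, k)
	return c.locs
}

func computeLocations(pk string, k int) []uint64 {
	h := fnv.New64a()
	h.Write([]byte(pk))
	h1 := h.Sum64()
	h2 := h1>>32 | h1<<32
	locs := make([]uint64, k)
	for i := range locs {
		locs[i] = h1 + uint64(i)*h2
	}
	return locs
}

func main() {
	c := &locationCache{pk: "pk-1"}
	fmt.Println(len(c.Locations(7)), len(c.Locations(4))) // 7 4; second call reuses the cache
}
```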
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #32165
There were some frequent scans in metacache:
- List all segments whose start positions are not synced
- List compacted segments
Those scans cost lots of CPU time when the number of flushed segments is
large, while `Flushed` segments can be skipped in both scenarios.
This PR makes the following changes:
- Add a segment state shortcut in metacache
- List start positions only for states before `Flushed`
- Set compacted segments' state to `Dropped` and use the `Dropped` state
while scanning them
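A hypothetical sketch of the state shortcut (types are illustrative, not the real metacache): maintain per-state ID sets alongside the segment map so listings by state no longer scan every segment.

```go
// Illustrative per-state index ("shortcut") next to the segment map.
package main

import "fmt"

type segmentState int

const (
	growing segmentState = iota
	flushed
	dropped // compacted segments are moved here instead of carrying a separate flag
)

type metaCache struct {
	segments map[int64]segmentState
	byState  map[segmentState]map[int64]struct{} // the shortcut
}

func (m *metaCache) setState(id int64, st segmentState) {
	if old, ok := m.segments[id]; ok {
		delete(m.byState[old], id) // no-op if the old set doesn't exist yet
	}
	m.segments[id] = st
	if m.byState[st] == nil {
		m.byState[st] = map[int64]struct{}{}
	}
	m.byState[st][id] = struct{}{}
}

// listByState is O(#segments in that state), not O(#all segments).
func (m *metaCache) listByState(st segmentState) []int64 {
	ids := make([]int64, 0, len(m.byState[st]))
	for id := range m.byState[st] {
		ids = append(ids, id)
	}
	return ids
}

func main() {
	mc := &metaCache{segments: map[int64]segmentState{}, byState: map[segmentState]map[int64]struct{}{}}
	mc.setState(1, growing)
	mc.setState(2, flushed)
	mc.setState(3, dropped)
	fmt.Println(mc.listByState(dropped)) // [3]
}
```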
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #32901
PR #32814 introduced a compatibility issue: when upgrading to the latest
Milvus, the query coord may skip updating the distribution because
lastModifyTs doesn't change. But for an old-version querynode, the
lastModifyTs in GetDataDistributionResponse is always 0, which makes the
QC skip updating the distribution; the QC then keeps retrying the task to
watch the channel again and again.
This PR adds compatibility with old-version querynodes: when lastModifyTs
is 0, the QC will update its data distribution.
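A minimal sketch of the compatibility check (function name and comparison are illustrative): treat a lastModifyTs of 0 as coming from an old querynode and always refresh the distribution in that case.

```go
// Illustrative compatibility check for lastModifyTs == 0.
package main

import "fmt"

func shouldUpdateDistribution(respLastModifyTs, cachedLastModifyTs int64) bool {
	if respLastModifyTs == 0 {
		// Old-version querynode never sets the field: always refresh.
		return true
	}
	return respLastModifyTs > cachedLastModifyTs // exact comparison is illustrative
}

func main() {
	fmt.Println(shouldUpdateDistribution(0, 42))  // true: old node, refresh anyway
	fmt.Println(shouldUpdateDistribution(42, 42)) // false: unchanged, skip
}
```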
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #33099 #32837 #32419
1. len(search result) may be nq * topk; we need to return all results
rather than only topk
2. keep the error code in the RESTful response payload the same as the
Milvus error code
Signed-off-by: PowderLi <min.li@zilliz.com>
1. use a small warmup pool to reduce the impact of warmup
2. change the warmup pool to nonblocking mode
3. disable warmup by default
4. remove the maximum size limit of 16 for the load pool
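A hypothetical sketch of a small, non-blocking warmup pool (not the actual pool used in Milvus): submissions are dropped when no worker slot is free, so warmup can never back-pressure the load path.

```go
// Illustrative non-blocking warmup pool: drop work instead of queueing it.
package main

import (
	"fmt"
	"sync"
)

type warmupPool struct {
	slots chan struct{}
	wg    sync.WaitGroup
}

func newWarmupPool(size int) *warmupPool {
	return &warmupPool{slots: make(chan struct{}, size)}
}

// TrySubmit runs task only if a worker slot is free; otherwise it returns
// false immediately instead of blocking the caller.
func (p *warmupPool) TrySubmit(task func()) bool {
	select {
	case p.slots <- struct{}{}:
		p.wg.Add(1)
		go func() {
			defer func() { <-p.slots; p.wg.Done() }()
			task()
		}()
		return true
	default:
		return false // pool busy: skip warmup for this segment
	}
}

func main() {
	p := newWarmupPool(1)
	block := make(chan struct{})
	ok1 := p.TrySubmit(func() { <-block }) // occupies the only worker slot
	ok2 := p.TrySubmit(func() {})          // rejected instead of queued
	close(block)
	p.wg.Wait()
	fmt.Println(ok1, ok2) // true false
}
```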
issue: https://github.com/milvus-io/milvus/issues/32772
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Co-authored-by: xiaofanluan <xiaofan.luan@zilliz.com>