Add a freshness checker:
insert/upsert --> query: measure the time until the inserted data can be queried
delete --> query: measure the time until the deleted data can no longer be queried
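A minimal sketch of the polling idea in Go, assuming a hypothetical
canQuery helper that wraps a query by primary key (the real checker
lives in the test framework):

```go
package freshness

import (
	"context"
	"time"
)

// canQuery is a stand-in for a query-by-primary-key call against Milvus;
// it reports whether the entity is currently visible to queries.
type canQuery func(ctx context.Context, pk int64) (bool, error)

// waitVisible polls until a freshly inserted/upserted entity can be queried
// and returns how long that took; the delete direction is the same loop with
// the condition inverted.
func waitVisible(ctx context.Context, pk int64, q canQuery) (time.Duration, error) {
	start := time.Now()
	for {
		ok, err := q(ctx, pk)
		if err != nil {
			return 0, err
		}
		if ok {
			return time.Since(start), nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(50 * time.Millisecond):
		}
	}
}
```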
Signed-off-by: zhuwenxing <wenxing.zhu@zilliz.com>
issue: #25639
/kind improvement
When the number of vector columns increases, the number of rows per
segment will decrease. In order to reduce the impact on vector indexing
performance, it is necessary to increase the maximum segment size limit.
If a collection has multiple vector fields with memory and disk indices
on different vector fields, the size limit after segment compaction is
the minimum of segment.maxSize and segment.diskSegmentMaxSize.
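For illustration, the resulting cap is simply the smaller of the two
configured values; a tiny Go sketch with illustrative parameter names:

```go
package example

// effectiveCompactionLimit returns the size cap applied after compaction when
// a collection mixes memory and disk indexes on different vector fields.
// maxSize and diskSegmentMaxSize stand for segment.maxSize and
// segment.diskSegmentMaxSize from the config.
func effectiveCompactionLimit(maxSize, diskSegmentMaxSize int64) int64 {
	if maxSize < diskSegmentMaxSize {
		return maxSize
	}
	return diskSegmentMaxSize
}
```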
Signed-off-by: xige-16 <xi.ge@zilliz.com>
issue: #30102, #30225
we should read MetricType from SearchResult, because the query node never
1. reads metricType from LoadMeta
2. stores it to the collection
3. sets SearchRequest.MetricType
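Conceptually, the proxy-side fix boils down to preferring the metric
type carried back in the result; a hedged Go sketch with plain strings
instead of the real proto types:

```go
package example

// resolveMetricType shows the intent: trust the metric type carried back in
// the SearchResult and only fall back to whatever the request carried.
func resolveMetricType(resultMetricType, requestMetricType string) string {
	if resultMetricType != "" {
		return resultMetricType
	}
	return requestMetricType
}
```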
Signed-off-by: PowderLi <min.li@zilliz.com>
Adjust the config source to support config events, so that dynamic
config can use paramtable without deadlocking.
relate: https://github.com/milvus-io/milvus/issues/29807
Signed-off-by: aoiasd <zhicheng.yue@zilliz.com>
issue: #29772
The shardLeaders cache does not actively expire, so update the cache
when a search/query fails.
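A rough Go sketch of the resulting retry pattern, with hypothetical
getShardLeaders/invalidateShardLeaders/doSearch stand-ins for the
proxy's meta cache and search path:

```go
package example

import "context"

// searchWithCacheRefresh issues a search against the cached shard leaders and,
// on failure, drops the cached entry and retries once with a fresh view.
func searchWithCacheRefresh(
	ctx context.Context,
	collection string,
	getShardLeaders func(ctx context.Context, collection string) ([]string, error),
	invalidateShardLeaders func(collection string),
	doSearch func(ctx context.Context, leaders []string) error,
) error {
	leaders, err := getShardLeaders(ctx, collection)
	if err != nil {
		return err
	}
	if err := doSearch(ctx, leaders); err == nil {
		return nil
	}
	// The cached leaders may be stale; expire them and retry with fresh ones.
	invalidateShardLeaders(collection)
	leaders, err = getShardLeaders(ctx, collection)
	if err != nil {
		return err
	}
	return doSearch(ctx, leaders)
}
```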
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
See also #30211
After fixing the initialization problem, distributed components do not
have their role set. This causes the logger & tracing to miss the
component service info when recording information.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
After supporting global control of sync parallelism in datanode, we
shall change the default value to one more suitable for most use cases.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
After #28873, PartitionID and CollectionID should be filled in
CompactionSegmentBinlog so that DataNode can compose the correct
logPath. However, some places were left without this information filled
in, causing DataNode to download `xxx/0/0/xxxx/xxxx` binlogs during
compaction.
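For illustration only, a hedged Go sketch of how such a log path is
composed; IDs that are never filled in default to 0, which yields the
bogus paths above (layout simplified, not the actual storage code):

```go
package example

import (
	"path"
	"strconv"
)

// binlogPath sketches how an insert-binlog object key is assembled from the
// IDs carried in CompactionSegmentBinlog. If CollectionID or PartitionID are
// never filled in, they stay 0 and the DataNode ends up fetching keys like
// ".../0/0/<segmentID>/<fieldID>/<logID>".
func binlogPath(rootPath string, collectionID, partitionID, segmentID, fieldID, logID int64) string {
	return path.Join(rootPath, "insert_log",
		strconv.FormatInt(collectionID, 10),
		strconv.FormatInt(partitionID, 10),
		strconv.FormatInt(segmentID, 10),
		strconv.FormatInt(fieldID, 10),
		strconv.FormatInt(logID, 10),
	)
}
```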
See also: #30213
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
See also #30273
This PR:
- Rename confusing `LoadIndexInfo` to `UpdateIndexInfo` for LocalSegment
- Use `DynamicPool` instead of `LoadPool` for `UpdateSealedSegmentIndex`
- Fix cgo call missing pool control
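A rough Go sketch of the pool-control idea, using a plain
buffered-channel semaphore rather than the actual DynamicPool
implementation:

```go
package example

// cgoPool caps how many goroutines may be inside a cgo call at once, so that
// heavy calls such as updating a sealed segment's index cannot exhaust OS
// threads.
type cgoPool struct {
	slots chan struct{}
}

func newCgoPool(size int) *cgoPool {
	return &cgoPool{slots: make(chan struct{}, size)}
}

// Submit runs fn (which wraps the actual cgo call) under the pool's limit.
func (p *cgoPool) Submit(fn func() error) error {
	p.slots <- struct{}{}
	defer func() { <-p.slots }()
	return fn()
}
```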
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Allows proactive warming up of the chunk cache. Original vector data will be
asynchronously loaded into the chunk cache during the load process. It
has the potential to significantly reduce query/search latency for a
certain duration after the load, albeit with a concurrent increase in
disk usage.
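Conceptually the warmup is a best-effort background prefetch of the raw
vector data after load; a hedged Go sketch with hypothetical
fieldIDs/prefetch stand-ins for the segment's vector fields and the
chunk-cache read path:

```go
package example

import (
	"context"
	"log"
)

// warmUpChunkCacheAsync kicks off background reads of the raw vector data so
// the chunk cache is already populated when the first search/query arrives.
func warmUpChunkCacheAsync(ctx context.Context, fieldIDs []int64, prefetch func(ctx context.Context, fieldID int64) error) {
	go func() {
		for _, fid := range fieldIDs {
			if err := prefetch(ctx, fid); err != nil {
				// Warmup is best-effort; a failure only means the first
				// queries pay the original cold-read latency.
				log.Printf("chunk cache warmup failed for field %d: %v", fid, err)
			}
		}
	}()
}
```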
issue: https://github.com/milvus-io/milvus/issues/30181
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Before this, every time the index chunk data was written to disk,
there were 4 I/O operations:
- open the file
- seek to the offset
- write the data
- close the file
This PR optimizes this to open the file only once and continuously
write all the data. It also loads the files from object storage
concurrently.
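A simplified Go sketch of the open-once write path (not the actual disk
index writer):

```go
package example

import "os"

// writeChunksOpenOnce appends every chunk through one file handle instead of
// re-opening, seeking and closing the file for each chunk.
func writeChunksOpenOnce(path string, chunks [][]byte) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	for _, chunk := range chunks {
		if _, err := f.Write(chunk); err != nil {
			return err
		}
	}
	return nil
}
```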
Signed-off-by: yah01 <yang.cen@zilliz.com>
See also #30250
This PR adds a reQuery flag to the query task. When the reQuery flag is
true, the query task shall skip partition name conversion and use the
pre-calculated partitionIDs passed from the search task.
TODO: hybrid search does not have partition ID information; we shall
apply the same logic to hybrid search later.
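A hedged Go sketch of the intended branch, with an illustrative
convertPartitionNames stand-in for the proxy helper:

```go
package example

import "context"

// resolvePartitionIDs shows the intended branch: a requery issued by a search
// task reuses the partitionIDs the search already resolved, while a normal
// query still converts partition names to IDs.
func resolvePartitionIDs(
	ctx context.Context,
	reQuery bool,
	precalculatedIDs []int64,
	partitionNames []string,
	convertPartitionNames func(ctx context.Context, names []string) ([]int64, error),
) ([]int64, error) {
	if reQuery {
		return precalculatedIDs, nil
	}
	return convertPartitionNames(ctx, partitionNames)
}
```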
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
1. modify test case: test_search_repeatedly_ivf_index_different_limit
2. update pymilvus version from 2.4.0rc19 to 2.4.0rc24
3. Before, insert returned a pk list. In the latest milvus client,
insert returns the number of entities inserted successfully.
4. In the latest milvus client, flush and num_entities have been removed.
5. Before, the default consistency level of a new collection was Strong.
In the latest milvus client, it becomes Bounded, so related cases have
been modified correspondingly; otherwise an immediate search after
insert would return no results.
6. In the latest pymilvus, the new data types FLOAT16_VECTOR and
BFLOAT16_VECTOR have been added.
Signed-off-by: nico <cheng.yuan@zilliz.com>
See also #30111
Segments can be marked "Flushed" only by the `FlushSegments` grpc call
from datacoord by design. There are two possible reasons for one
segment to get flushed multiple times:
- The segment is in flushing state during multiple epochs in the flowgraph
- The segment is flushed by both flushTs and FlushSegments
So this PR fixes:
- Remove the state change logic from the FlushTs policy
- Change segment flushing into a three-stage process,
Sealed->Flushing->Flushed, preventing multiple Flushed=true operations.
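A hedged Go sketch of the three-stage idea with a simplified state
type; the point is that only a Flushing segment can become Flushed, so
a repeated flush is rejected instead of setting Flushed=true again:

```go
package example

import "fmt"

type segmentState int

const (
	sealed segmentState = iota
	flushing
	flushed
)

// advanceFlush enforces Sealed -> Flushing -> Flushed; attempting to advance
// an already Flushed segment returns an error instead of silently re-flushing.
func advanceFlush(cur segmentState) (segmentState, error) {
	switch cur {
	case sealed:
		return flushing, nil
	case flushing:
		return flushed, nil
	default:
		return cur, fmt.Errorf("segment already flushed")
	}
}
```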
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #30176
Move paramtable.Init after env setup in roles.Run. Also introduce a
flag for the mixture run to set the role correctly in mixture mode.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>