1. Remove the `compactTo` field in `SegmentInfo`.
2. Remove the target-segment-mismatch check and its retry logic in
`SyncManager`.
issue: https://github.com/milvus-io/milvus/issues/32809
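
A minimal sketch of the intent, using hypothetical simplified types (the real `SegmentInfo` and `SyncManager` carry many more fields and live in the datanode metacache/syncmgr packages): compaction lineage is no longer tracked on the segment itself, and a sync task whose target segment is unknown fails immediately instead of retrying against a recompacted target.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical, simplified stand-in for the real metacache segment info.
// The compactTo field that used to record "this segment was compacted
// into segment X" is gone; compaction lineage is resolved elsewhere.
type SegmentInfo struct {
	SegmentID int64
	State     string // e.g. "Growing", "Sealed", "Flushed"
}

var errSegmentNotFound = errors.New("target segment not found")

// submitSync sketches the simplified behaviour: if the target segment is
// unknown, the task fails right away; there is no "target segment does not
// match, retry with the compacted-to segment" loop anymore.
func submitSync(cache map[int64]*SegmentInfo, segmentID int64) error {
	if _, ok := cache[segmentID]; !ok {
		return fmt.Errorf("sync segment %d: %w", segmentID, errSegmentNotFound)
	}
	// ... build and run the sync task for segmentID ...
	return nil
}

func main() {
	cache := map[int64]*SegmentInfo{100: {SegmentID: 100, State: "Sealed"}}
	fmt.Println(submitSync(cache, 100)) // <nil>
	fmt.Println(submitSync(cache, 200)) // error, no retry
}
```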
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
See also #30111
By design, segments can be marked "Flushed" only by the `FlushSegments` gRPC
call from datacoord. There are two possible reasons why one segment could get
flushed multiple times:
- The segment stays in the flushing state across multiple flowgraph epochs
- The segment is flushed by both the flushTs policy and the FlushSegments call
So this PR fixes the issue by:
- Removing the state-change logic from the FlushTs policy
- Changing segment flush into a three-stage transition, Sealed->Flushing->Flushed,
preventing multiple Flushed=true operations.
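
A rough sketch of the three-stage transition, assuming a simple state enum (names are illustrative, not the actual `commonpb.SegmentState` handling): a segment must pass through Flushing exactly once, so a second flush trigger for an already Flushing or Flushed segment becomes a no-op instead of a second Flushed=true write.

```go
package main

import "fmt"

// Illustrative segment states for the sketch.
type SegmentState int

const (
	Sealed SegmentState = iota
	Flushing
	Flushed
)

type Segment struct {
	ID    int64
	State SegmentState
}

// startFlush moves Sealed -> Flushing. Calling it again while the segment
// is already Flushing or Flushed does nothing, which prevents a second
// path (e.g. the flushTs policy) from re-triggering the flush.
func (s *Segment) startFlush() bool {
	if s.State != Sealed {
		return false
	}
	s.State = Flushing
	return true
}

// finishFlush moves Flushing -> Flushed, the only place Flushed is set.
func (s *Segment) finishFlush() bool {
	if s.State != Flushing {
		return false
	}
	s.State = Flushed
	return true
}

func main() {
	seg := &Segment{ID: 1, State: Sealed}
	fmt.Println(seg.startFlush())  // true:  Sealed -> Flushing
	fmt.Println(seg.startFlush())  // false: already flushing, ignored
	fmt.Println(seg.finishFlush()) // true:  Flushing -> Flushed
	fmt.Println(seg.finishFlush()) // false: already flushed
}
```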
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #28736, #28748
See also #27675
Previous PR: #28646
This PR fixes the `SegmentNotFound` issue that occurs when compaction happens
multiple times and the buffer of a first-generation segment is synced due to
the stale policy.
Now the `CompactSegments` API of metacache updates the compactTo field of
segmentInfo when the compactTo segment is itself compacted, keeping the
compaction lineage clean.
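
To illustrate the transitive update (a sketch over a hypothetical in-memory map, not the actual metacache code): when segment 1 was compacted into segment 10 and segment 10 is later compacted into segment 20, segment 1's compactTo is rewritten to point at 20, so every stale entry always names the newest generation.

```go
package main

import "fmt"

// compactTo maps an old segment ID to the segment it was compacted into.
// It is a hypothetical stand-in for the compactTo field on segmentInfo.
func compactSegments(compactTo map[int64]int64, compactedFrom []int64, target int64) {
	for _, from := range compactedFrom {
		compactTo[from] = target
	}
	// If any earlier-generation segment pointed at one of the segments we
	// just compacted, redirect it to the new target to keep lineage clean.
	for old, to := range compactTo {
		for _, from := range compactedFrom {
			if to == from {
				compactTo[old] = target
			}
		}
	}
}

func main() {
	compactTo := map[int64]int64{}
	compactSegments(compactTo, []int64{1, 2}, 10)  // 1,2 -> 10
	compactSegments(compactTo, []int64{10, 3}, 20) // 10,3 -> 20; 1,2 redirected to 20
	fmt.Println(compactTo) // map[1:20 2:20 3:20 10:20]
}
```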
Also, add the `CompactedSegment` SyncPolicy to sync compacted segments as
soon as possible, keeping the metacache clean.
Now `SyncPolicy` is an interface instead of a function type, so that when it
selects segments to sync, we can log the reason and the target segments.
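
The shape of the change might look like the following sketch (interface and method names are assumptions for illustration, not the exact syncmgr/writebuffer API): each policy returns the segments it selects plus a human-readable reason so the caller can log both, and one implementation picks compacted segments so they are synced and evicted from the metacache promptly.

```go
package main

import (
	"fmt"
	"log"
)

// Hypothetical segment view used by the policies below.
type segment struct {
	ID        int64
	Compacted bool
}

// SyncPolicy as an interface instead of a bare func type: SelectSegments
// returns the chosen segment IDs and Reason explains why, so the caller
// can log both together.
type SyncPolicy interface {
	SelectSegments(segments []segment) []int64
	Reason() string
}

// compactedSegmentPolicy selects segments that have been compacted, so
// their buffers are synced as soon as possible and the cache stays clean.
type compactedSegmentPolicy struct{}

func (compactedSegmentPolicy) SelectSegments(segments []segment) []int64 {
	var ids []int64
	for _, s := range segments {
		if s.Compacted {
			ids = append(ids, s.ID)
		}
	}
	return ids
}

func (compactedSegmentPolicy) Reason() string { return "segment compacted" }

func main() {
	segments := []segment{{ID: 1}, {ID: 2, Compacted: true}}
	policies := []SyncPolicy{compactedSegmentPolicy{}}
	for _, p := range policies {
		if ids := p.SelectSegments(segments); len(ids) > 0 {
			log.Printf("sync segments %v, reason: %s", ids, p.Reason())
		}
	}
	fmt.Println("done")
}
```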
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
This PR brings the previously merged datanode refactoring online:
- Use the write node to replace the insert/delete nodes
- Use the write buffer manager to control all buffers
- Use the sync manager instead of the flush manager to control sync tasks
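
A very compressed sketch of how these pieces are meant to fit together (all names and signatures here are illustrative, not the real writebuffer/syncmgr interfaces): the write node feeds both inserts and deletes into one write buffer per channel, the write buffer manager owns those buffers, and when a segment's buffer needs to be persisted the buffer hands a sync task to the sync manager rather than to the old flush manager.

```go
package main

import "fmt"

// Illustrative types; the real components are considerably richer.
type SyncTask struct {
	ChannelName string
	SegmentID   int64
}

type SyncManager interface {
	SyncData(task SyncTask) error
}

type printSyncManager struct{}

func (printSyncManager) SyncData(t SyncTask) error {
	fmt.Printf("syncing segment %d on channel %s\n", t.SegmentID, t.ChannelName)
	return nil
}

// writeBuffer buffers both inserts and deletes for one channel; the write
// node writes into it instead of going through separate insert/delete nodes.
type writeBuffer struct {
	channel  string
	syncMgr  SyncManager
	segments map[int64]int // segmentID -> buffered rows (stand-in for real buffers)
}

func (wb *writeBuffer) BufferRows(segmentID int64, rows int) {
	wb.segments[segmentID] += rows
}

// SealSegments asks the sync manager to persist the chosen segments,
// replacing the old flush-manager path.
func (wb *writeBuffer) SealSegments(segmentIDs []int64) {
	for _, id := range segmentIDs {
		_ = wb.syncMgr.SyncData(SyncTask{ChannelName: wb.channel, SegmentID: id})
		delete(wb.segments, id)
	}
}

// BufferManager owns one write buffer per channel.
type BufferManager struct {
	buffers map[string]*writeBuffer
	syncMgr SyncManager
}

func (m *BufferManager) Register(channel string) *writeBuffer {
	wb := &writeBuffer{channel: channel, syncMgr: m.syncMgr, segments: map[int64]int{}}
	m.buffers[channel] = wb
	return wb
}

func main() {
	mgr := &BufferManager{buffers: map[string]*writeBuffer{}, syncMgr: printSyncManager{}}
	wb := mgr.Register("by-dev-rootcoord-dml_0")
	wb.BufferRows(100, 128)
	wb.SealSegments([]int64{100})
}
```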
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>