This PR introduces new manager roles for importv2 (sketched below):
1. ImportMeta: To manage all the import tasks;
2. ImportScheduler: To process tasks and modify their states;
3. ImportChecker: To check whether all tasks have completed and to trigger
the relevant follow-up operations.
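As a rough sketch of how the three roles could fit together (the method sets, the TaskState values, and the in-memory store below are illustrative assumptions, not the actual Milvus interfaces):

```go
package main

import "fmt"

// TaskState is a simplified import-task lifecycle (the real state set may differ).
type TaskState int

const (
	Pending TaskState = iota
	InProgress
	Completed
)

// ImportTask is a hypothetical minimal task record.
type ImportTask struct {
	ID    int64
	State TaskState
}

// ImportMeta manages all import tasks (method set is illustrative).
type ImportMeta interface {
	Add(task *ImportTask)
	List() []*ImportTask
	UpdateState(id int64, state TaskState)
}

// ImportScheduler processes tasks and moves them through their states.
type ImportScheduler interface {
	Schedule(meta ImportMeta)
}

// ImportChecker checks the completion of all tasks and triggers follow-ups.
type ImportChecker interface {
	Check(meta ImportMeta)
}

// memImportMeta is a toy in-memory ImportMeta.
type memImportMeta struct{ tasks map[int64]*ImportTask }

func (m *memImportMeta) Add(t *ImportTask) { m.tasks[t.ID] = t }
func (m *memImportMeta) List() []*ImportTask {
	out := make([]*ImportTask, 0, len(m.tasks))
	for _, t := range m.tasks {
		out = append(out, t)
	}
	return out
}
func (m *memImportMeta) UpdateState(id int64, s TaskState) { m.tasks[id].State = s }

func main() {
	meta := &memImportMeta{tasks: map[int64]*ImportTask{}}
	meta.Add(&ImportTask{ID: 1})
	meta.UpdateState(1, Completed)
	fmt.Println("tasks tracked:", len(meta.List()))
}
```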
issue: https://github.com/milvus-io/milvus/issues/28521
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
If DC restarts, the compaction tasks it no longer knows about will never
get a callback in DN, so the segments in those compaction tasks stay
locked, unable to sync or be compacted again, which blocks checkpoint
advance and compaction execution.
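For illustration only, the shape of a fix could be a reconciliation pass that drops the plans a restarted coordinator no longer tracks, unlocking their segments on the datanode (every name here is a hypothetical stand-in, not the actual change):

```go
package main

import "fmt"

// dropUnknownPlans releases compaction plans the restarted coordinator no
// longer tracks, so their segments can sync and compact again.
// Hypothetical helper, not the actual Milvus implementation.
func dropUnknownPlans(nodePlans []int64, coordPlans map[int64]struct{}, release func(planID int64)) {
	for _, planID := range nodePlans {
		if _, known := coordPlans[planID]; !known {
			release(planID) // unlock the plan's segments on the datanode
		}
	}
}

func main() {
	known := map[int64]struct{}{101: {}}
	dropUnknownPlans([]int64{100, 101}, known, func(id int64) {
		fmt.Println("released orphaned plan", id)
	})
}
```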
See also: #30137
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
This PR decouples importing segments from the flush process by the
following two changes, sketched below:
1. Exclude the importing segments from the flush policy; this approach
avoids notifying the datanode to flush an importing segment, which may
not exist there.
2. When RootCoord calls Flush, DataCoord directly sets the importing
segment state to `Flushed`.
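A minimal sketch of the two changes, assuming simplified segment states and hypothetical helper names:

```go
package main

import "fmt"

type SegmentState int

const (
	Growing SegmentState = iota
	Importing
	Flushed
)

type Segment struct {
	ID    int64
	State SegmentState
}

// flushCandidates mirrors change 1: importing segments never reach the
// datanode flush path, because the corresponding buffer may not exist there.
func flushCandidates(segments []*Segment) []*Segment {
	var out []*Segment
	for _, s := range segments {
		if s.State == Importing {
			continue // excluded from the flush policy
		}
		out = append(out, s)
	}
	return out
}

// markImportingFlushed mirrors change 2: on a Flush RPC, DataCoord flips
// importing segments straight to Flushed without involving the datanode.
func markImportingFlushed(segments []*Segment) {
	for _, s := range segments {
		if s.State == Importing {
			s.State = Flushed
		}
	}
}

func main() {
	segs := []*Segment{{ID: 1, State: Growing}, {ID: 2, State: Importing}}
	fmt.Println("segments to flush:", len(flushCandidates(segs)))
	markImportingFlushed(segs)
	fmt.Println("segment 2 state:", segs[1].State)
}
```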
issue: https://github.com/milvus-io/milvus/issues/30359
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
syncMgr.Block() locks the segments while a compaction is executing.
The previous implementation was unable to Unblock those segments when a
compaction failed. If the next compaction over the same segments arrived,
it would get stuck forever and block all later compaction tasks.
This PR makes sure the compaction executor unblocks these segments
after a failed compaction (see the sketch after the list below).
Apart from that, this PR also refines some logs and cleans up some code
in compaction and compactor:
1. Log segment count instead of segmentIDs to avoid logging too many
segments
2. Flush RPC returns L1 segments only, skip L0 and L2
3. CompactionType is checked in `Compaction`, no need to check again
inside compactor
4. Use a lighter method to replace `getSegmentMeta`
5. Log information for L0 compaction when it encounters an error
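Returning to the Block/Unblock fix above, a minimal sketch of the idea (the syncManager interface and all signatures are assumptions; the point is the deferred Unblock on every exit path):

```go
package main

import (
	"errors"
	"fmt"
)

// syncManager stands in for syncMgr; Block/Unblock signatures are assumptions.
type syncManager interface {
	Block(segmentID int64)
	Unblock(segmentID int64)
}

// executeCompaction blocks the plan's segments for the duration of the run
// and, crucially, unblocks them on *every* exit path, including failures,
// so a later compaction on the same segments cannot be stuck forever.
func executeCompaction(mgr syncManager, segmentIDs []int64, compact func() error) error {
	for _, id := range segmentIDs {
		mgr.Block(id)
	}
	defer func() {
		for _, id := range segmentIDs {
			mgr.Unblock(id)
		}
	}()
	return compact()
}

type logMgr struct{}

func (logMgr) Block(id int64)   { fmt.Println("block", id) }
func (logMgr) Unblock(id int64) { fmt.Println("unblock", id) }

func main() {
	err := executeCompaction(logMgr{}, []int64{1, 2}, func() error {
		return errors.New("compaction failed")
	})
	fmt.Println("result:", err) // segments 1 and 2 are unblocked regardless
}
```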
See also: #30213
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
This PR introduces new importv2 roles for the datanode (sketched below):
1. Executor: To execute tasks; an import task is divided into the
following steps: read data -> hash data -> sync data;
2. Manager: To manage all the tasks.
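A toy sketch of the read -> hash -> sync pipeline (the row shape, FNV bucketing, and bucket count are illustrative assumptions):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Row is a stand-in for one imported record.
type Row struct {
	PK    int64
	Value string
}

// readData stands in for the "read data" step (source format is an assumption).
func readData() []Row {
	return []Row{{1, "a"}, {2, "b"}, {3, "c"}}
}

// hashData stands in for the "hash data" step: bucket rows by primary key
// so each bucket maps to one target segment/channel.
func hashData(rows []Row, buckets int) map[uint32][]Row {
	out := make(map[uint32][]Row)
	for _, r := range rows {
		h := fnv.New32a()
		fmt.Fprintf(h, "%d", r.PK)
		b := h.Sum32() % uint32(buckets)
		out[b] = append(out[b], r)
	}
	return out
}

// syncData stands in for the "sync data" step: persist each bucket.
func syncData(buckets map[uint32][]Row) {
	for b, rows := range buckets {
		fmt.Printf("sync bucket %d with %d rows\n", b, len(rows))
	}
}

func main() {
	syncData(hashData(readData(), 2))
}
```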
issue: https://github.com/milvus-io/milvus/issues/28521
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
See also #27675
This PR adds back the MemoryHighSyncPolicy implementation. It also changes
MinSegmentSize & CheckInterval into configurable param items.
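A sketch of what a memory-high sync policy with these knobs might look like (struct and function names are assumptions; only MinSegmentSize and CheckInterval come from this PR):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// Config mirrors the idea of MinSegmentSize and CheckInterval becoming
// configurable param items (field names here are illustrative).
type Config struct {
	MinSegmentSize int64
	CheckInterval  time.Duration // the caller evaluates the policy at this cadence
	HighWatermark  int64
}

type SegmentBuffer struct {
	SegmentID int64
	Size      int64
}

// memoryHighSyncPolicy picks segments to sync when total buffered memory
// exceeds the watermark: largest buffers first, skipping buffers below
// MinSegmentSize. A sketch of the policy's shape, not the real code.
func memoryHighSyncPolicy(cfg Config, buffers []SegmentBuffer) []int64 {
	var total int64
	for _, b := range buffers {
		total += b.Size
	}
	if total < cfg.HighWatermark {
		return nil
	}
	sort.Slice(buffers, func(i, j int) bool { return buffers[i].Size > buffers[j].Size })
	var toSync []int64
	for _, b := range buffers {
		if b.Size < cfg.MinSegmentSize {
			continue
		}
		toSync = append(toSync, b.SegmentID)
	}
	return toSync
}

func main() {
	cfg := Config{MinSegmentSize: 8, CheckInterval: time.Second, HighWatermark: 64}
	fmt.Println(memoryHighSyncPolicy(cfg, []SegmentBuffer{{1, 40}, {2, 30}, {3, 4}}))
}
```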
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #30167
After adding support for OpenTelemetry tracing, we want to have the
traceID as well; this PR adds util functions to set the traceID with a
span and to propagate the traceID between different contexts.
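For illustration, such helpers can be built on the OpenTelemetry Go API roughly as follows (the actual util functions added by this PR may differ in name and shape):

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/trace"
)

// TraceIDFromContext extracts the traceID carried by the current span, if any.
func TraceIDFromContext(ctx context.Context) (string, bool) {
	sc := trace.SpanContextFromContext(ctx)
	if !sc.HasTraceID() {
		return "", false
	}
	return sc.TraceID().String(), true
}

// PropagateTraceID copies the span context (and thus the traceID) from src
// into dst, so a detached context keeps the same trace.
func PropagateTraceID(dst, src context.Context) context.Context {
	return trace.ContextWithSpanContext(dst, trace.SpanContextFromContext(src))
}

func main() {
	ctx := context.Background()
	if id, ok := TraceIDFromContext(ctx); ok {
		fmt.Println("traceID:", id)
	} else {
		fmt.Println("no active trace") // true for a bare background context
	}
	_ = PropagateTraceID(context.Background(), ctx)
}
```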
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Resolves #30167
This PR adds tracing for all compactions, from the task start in datacoord
through the execution procedures in datanode.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
This PR discontinues the subscription to the mq and instead employs the
channel checkpoint as the DML and start position for the import
segments.
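A minimal sketch of the idea, with Position and the segment fields as simplified stand-ins for the real msgstream/segment types:

```go
package main

import "fmt"

// Position is a simplified msgstream position (fields are illustrative).
type Position struct {
	ChannelName string
	Timestamp   uint64
}

type ImportSegment struct {
	ID            int64
	StartPosition *Position
	DmlPosition   *Position
}

// assignImportPositions mirrors the change: no mq subscription and seek;
// the channel checkpoint serves as both the start and DML position.
func assignImportPositions(seg *ImportSegment, channelCheckpoint *Position) {
	seg.StartPosition = channelCheckpoint
	seg.DmlPosition = channelCheckpoint
}

func main() {
	cp := &Position{ChannelName: "by-dev-dml-0", Timestamp: 449296}
	seg := &ImportSegment{ID: 7}
	assignImportPositions(seg, cp)
	fmt.Printf("segment %d starts at ts %d on %s\n", seg.ID, seg.StartPosition.Timestamp, cp.ChannelName)
}
```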
issue: https://github.com/milvus-io/milvus/issues/30106
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
This PR also fixes a bug in the L0 compactor where
L0 results would never be removed from the datanode.
See also: #30099
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
Don't store the logPath in meta, to reduce memory usage; when the service
gets a SegmentInfo, generate the log path from the logID.
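For illustration, the path could be regenerated on read roughly like this (the exact layout is an assumption modeled on the usual Milvus binlog path convention, not taken from this PR):

```go
package main

import (
	"fmt"
	"path"
	"strconv"
)

// buildInsertLogPath reconstructs a binlog path from its IDs on demand, so
// the meta store only needs to keep the logID. The layout below
// (rootPath/insert_log/collection/partition/segment/field/logID) is an
// assumption for illustration.
func buildInsertLogPath(rootPath string, collectionID, partitionID, segmentID, fieldID, logID int64) string {
	return path.Join(rootPath, "insert_log",
		strconv.FormatInt(collectionID, 10),
		strconv.FormatInt(partitionID, 10),
		strconv.FormatInt(segmentID, 10),
		strconv.FormatInt(fieldID, 10),
		strconv.FormatInt(logID, 10))
}

func main() {
	fmt.Println(buildInsertLogPath("files", 1, 2, 3, 100, 4567))
}
```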
#28885
Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
See also #28924
A compaction task generated before the datanode finishes the
`SaveBinlogPath` grpc call may contain segments which are still in Growing
state. The DataNode shall verify each non-levelzero segment before
submitting the compaction task to the executor.
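A hedged sketch of that verification step (type names and the error text are illustrative):

```go
package main

import "fmt"

type SegmentLevel int

const (
	L0 SegmentLevel = iota
	L1
	L2
)

type SegmentState int

const (
	Growing SegmentState = iota
	Flushed
)

type SegmentView struct {
	ID    int64
	Level SegmentLevel
	State SegmentState
}

// verifyPlanSegments rejects a plan that still references growing non-L0
// segments, which can happen when the plan was generated before the
// datanode finished saving binlog paths.
func verifyPlanSegments(segments []SegmentView) error {
	for _, s := range segments {
		if s.Level == L0 {
			continue // level-zero segments are exempt from this check
		}
		if s.State == Growing {
			return fmt.Errorf("segment %d is still in Growing state", s.ID)
		}
	}
	return nil
}

func main() {
	err := verifyPlanSegments([]SegmentView{{ID: 1, Level: L1, State: Growing}})
	fmt.Println("reject plan:", err)
}
```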
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
- Change flowgraphManager to fgManagerImpl
- Change close to stop
- Change execute to controlMemWaterLevel
- Change method name of fgManager for readability
- Add mockery for fgmanager
Issue: #28853
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
This PR:
- Separates the compaction scheduler and the check-results loop, so that
slowness in the check loop doesn't influence execution (see the sketch
after this list).
- Cleans up compaction tasks when dropping a vchannel, so a dropped
channel's compaction tasks won't be checked over and over again.
- Skips the meta change when the meta has already changed, to avoid a panic.
- Removes the unused injectDone(bool) parameter.
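A sketch of the separation (intervals, names, and loop bodies are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// compactionRunner separates scheduling from result checking, so a slow
// check pass cannot delay plan execution. Structure is a sketch only.
type compactionRunner struct {
	schedule func()
	check    func()
}

func (r *compactionRunner) start(ctx context.Context) {
	go r.loop(ctx, 50*time.Millisecond, r.schedule) // execution loop
	go r.loop(ctx, 200*time.Millisecond, r.check)   // independent check loop
}

func (r *compactionRunner) loop(ctx context.Context, interval time.Duration, fn func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			fn()
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
	defer cancel()
	r := &compactionRunner{
		schedule: func() { fmt.Println("schedule pending plans") },
		check:    func() { fmt.Println("check plan results, skip dropped vchannels") },
	}
	r.start(ctx)
	<-ctx.Done()
}
```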
See also: #28628, #28209
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
Remove some unnecessary assignments, since commonpbutil.NewMsgBase
already provides default values.
Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
See also #28628
A previous compaction task blocked the segment sync task and could block
the flowgraph when the sync task was generated by the auto-sync policy:
the `BlockAll` call would block forever and leave the whole flowgraph stuck.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
This PR brings the previously merged datanode refactoring online:
- Use write node to replace insert/delete node
- Use write buffer manager to control all buffers
- Use sync manager to control sync tasks instead of flush manager
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The compaction plan result used to contain one segment per plan. Since an
L0 compaction can write to multiple segments, this PR expands the number
of segments in plan results (sketched below) and refactors some names for
readability:
- CompactionStateResult -> CompactionPlanResult
- CompactionResult -> CompactionSegment
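Roughly, the reshaped result could look like this (fields are trimmed for illustration; the real messages live in the proto definitions):

```go
package main

import "fmt"

// CompactionSegment is one output segment of a plan (previously the single
// "CompactionResult"); the field set here is trimmed down for illustration.
type CompactionSegment struct {
	SegmentID int64
	NumRows   int64
}

// CompactionPlanResult (previously "CompactionStateResult") now carries a
// slice of segments, since an L0 compaction can write to several of them.
type CompactionPlanResult struct {
	PlanID   int64
	Segments []CompactionSegment
}

func main() {
	res := CompactionPlanResult{
		PlanID: 42,
		Segments: []CompactionSegment{
			{SegmentID: 7, NumRows: 1000},
			{SegmentID: 8, NumRows: 2000}, // L0 compaction may emit multiple segments
		},
	}
	fmt.Printf("plan %d produced %d segments\n", res.PlanID, len(res.Segments))
}
```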
See also: #27606
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
Pass initCtx to all IO funcs in newDataSyncService, so that when the
context is canceled, newDataSyncService returns promptly.
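A minimal sketch of the pattern (everything except the newDataSyncService name is illustrative):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// loadWithCtx stands in for one of the IO funcs in newDataSyncService: every
// blocking call observes initCtx, so cancelling it aborts construction.
func loadWithCtx(ctx context.Context) error {
	select {
	case <-time.After(time.Second): // simulated slow IO
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// newDataSyncService sketch: propagate the caller's initCtx into all IO so a
// cancelled init returns promptly instead of hanging.
func newDataSyncService(initCtx context.Context) error {
	if err := loadWithCtx(initCtx); err != nil {
		return fmt.Errorf("init aborted: %w", err)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // cancel immediately; newDataSyncService must return right away
	err := newDataSyncService(ctx)
	fmt.Println(errors.Is(err, context.Canceled)) // true
}
```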
See also: #25309
Signed-off-by: yangxuan <xuan.yang@zilliz.com>