fix: Reduce the writer batch size to flush binlogs in a timely manner (#37692)

issue: #37579 

If the schema includes large varchar fields, a few thousand rows can
reach hundreds of MB. Therefore, if the batch size of the segment
writer is large, it produces correspondingly large `binlogs`, which
can cause the datanode to run out of memory (OOM) during compaction.
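For intuition, a rough back-of-envelope sketch of how batch size bounds
binlog size (the 64 KB average row width here is an assumption for
illustration, not a figure from the patch):

```go
package main

import "fmt"

func main() {
	const avgRowBytes = 64 * 1024 // assumed: one large varchar field per row, ~64 KB

	// With the old batch size, a single binlog batch can reach hundreds of MB.
	oldBatch := 10000
	fmt.Printf("old batch: ~%d MB per binlog\n", oldBatch*avgRowBytes/(1<<20)) // ~625 MB

	// With the new batch size, each binlog stays small and flushes promptly.
	newBatch := 100
	fmt.Printf("new batch: ~%d MB per binlog\n", newBatch*avgRowBytes/(1<<20)) // ~6 MB
}
```

Cutting the batch size from 10000 to 100 shrinks each in-flight batch,
and hence each binlog, by the same factor of 100.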

Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
cai.zhang 2024-11-15 10:14:31 +08:00 committed by GitHub
parent 5ae347aba0
commit b9357e4716

@@ -51,7 +51,7 @@ import (
 var _ task = (*statsTask)(nil)
 
-const statsBatchSize = 10000
+const statsBatchSize = 100
 
 type statsTask struct {
 	ident string
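The excerpt above shows only the constant change. As a hedged sketch of
the general pattern such a batch-size constant gates (the `Row` and
`Writer` types and the `flushEvery` helper below are hypothetical
illustrations, not Milvus's actual API):

```go
package binlog

// Row and Writer are illustrative stand-ins, not Milvus types.
type Row struct{ data []byte }

type Writer interface {
	Write(Row) error
	Flush() error
}

// flushEvery writes rows and flushes a binlog every batchSize rows,
// bounding the memory held by any single in-flight batch.
func flushEvery(rows []Row, w Writer, batchSize int) error {
	for i, r := range rows {
		if err := w.Write(r); err != nil {
			return err
		}
		// Flush once a full batch has accumulated so each binlog stays small.
		if (i+1)%batchSize == 0 {
			if err := w.Flush(); err != nil {
				return err
			}
		}
	}
	// Flush any trailing partial batch.
	return w.Flush()
}
```

With this shape, a smaller `batchSize` trades more frequent flushes for
a lower peak memory footprint, which is the trade-off the commit makes.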