issue: #29672
The storage account needs at least the privileges for the action
`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/*`.
Signed-off-by: PowderLi <min.li@zilliz.com>
This PR defines the new import reader interfaces and implements a binlog
reader for import.
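For illustration, a minimal Go sketch of what such a reader contract could look like; the `reader`, `record`, and `binlogReader` names are hypothetical stand-ins, not the actual Milvus interfaces:
```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// record stands in for a batch of imported rows; the real reader yields
// Milvus storage data, this is only an illustration.
type record struct {
	rows int
}

// reader is a hypothetical import reader interface: a source-specific
// implementation (binlog, JSON, ...) streams batches until io.EOF.
type reader interface {
	Read() (*record, error)
	Close() error
}

// binlogReader sketches a binlog-backed implementation of the interface.
type binlogReader struct {
	remainingBatches int
}

func (r *binlogReader) Read() (*record, error) {
	if r.remainingBatches == 0 {
		return nil, io.EOF
	}
	r.remainingBatches--
	return &record{rows: 1024}, nil
}

func (r *binlogReader) Close() error { return nil }

func main() {
	var r reader = &binlogReader{remainingBatches: 2}
	defer r.Close()
	for {
		rec, err := r.Read()
		if errors.Is(err, io.EOF) {
			break
		}
		fmt.Println("read batch with", rec.rows, "rows")
	}
}
```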
issue: https://github.com/milvus-io/milvus/issues/28521
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
See also #29699
QueryCoord panicked when it tried to pop from an empty heap. We assume the
heap should never be empty, but in some branches the candidate was never
pushed back.
This PR wraps the pop & push in a closure and adds a defer call to push the
item back.
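A minimal sketch of the pattern, using a toy heap rather than the real QueryCoord code: the defer guarantees the popped candidate is pushed back on every return path of the closure.
```go
package main

import (
	"container/heap"
	"fmt"
)

// intHeap is a toy min-heap standing in for the candidate heap.
type intHeap []int

func (h intHeap) Len() int            { return len(h) }
func (h intHeap) Less(i, j int) bool  { return h[i] < h[j] }
func (h intHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *intHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
func (h *intHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// withCandidate pops the best candidate, runs fn, and always pushes the
// candidate back thanks to defer, no matter which branch fn takes.
func withCandidate(h *intHeap, fn func(candidate int)) {
	if h.Len() == 0 {
		return // nothing to pop, avoid the panic
	}
	candidate := heap.Pop(h).(int)
	defer heap.Push(h, candidate)
	fn(candidate)
}

func main() {
	h := &intHeap{3, 1, 2}
	heap.Init(h)
	withCandidate(h, func(c int) { fmt.Println("best candidate:", c) })
	fmt.Println("heap size after call:", h.Len()) // still 3
}
```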
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
`Allocator.Alloc` and `Allocator.AllocOne` might be invoked multiple
times if multiple blobs are set in one sync task.
This PR adds pre-fetch logic for all blobs and caches the logIDs in the sync
task, so that at most one call to the allocator is needed.
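A rough sketch of the idea with hypothetical `idAllocator` and `syncTask` types (not the real sync code): allocate the whole ID range once and let each blob consume a cached ID.
```go
package main

import "fmt"

// idAllocator stands in for the remote allocator; Alloc returns a
// contiguous range of `count` IDs starting at `start`.
type idAllocator struct{ next int64 }

func (a *idAllocator) Alloc(count uint32) (start int64, end int64, err error) {
	start = a.next
	a.next += int64(count)
	return start, a.next, nil
}

// syncTask caches the pre-fetched log IDs so each blob just consumes one
// instead of calling the allocator once per blob.
type syncTask struct {
	ids []int64
}

func (t *syncTask) prefetchIDs(alloc *idAllocator, blobCount int) error {
	start, end, err := alloc.Alloc(uint32(blobCount)) // single allocator call
	if err != nil {
		return err
	}
	for id := start; id < end; id++ {
		t.ids = append(t.ids, id)
	}
	return nil
}

func (t *syncTask) nextID() int64 {
	id := t.ids[0]
	t.ids = t.ids[1:]
	return id
}

func main() {
	task := &syncTask{}
	_ = task.prefetchIDs(&idAllocator{next: 100}, 3)
	for i := 0; i < 3; i++ {
		fmt.Println("logID for blob", i, "=", task.nextID())
	}
}
```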
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #29113
The collection schema is crucial when performing search/query, but some
of its derived information is recalculated for every request.
This PR changes the schema field of the cached collection info into a utility
`schemaInfo` type that stores the stable results, e.g. the pk field,
partitionKeyEnabled, etc., and provides a field name to field ID map for the
search/query services.
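A hedged sketch of what such a cached wrapper can look like; the struct and field names are illustrative rather than the exact proxy code:
```go
package main

import "fmt"

// fieldSchema is a simplified stand-in for the schemapb field schema.
type fieldSchema struct {
	ID             int64
	Name           string
	IsPrimaryKey   bool
	IsPartitionKey bool
}

// schemaInfo caches results that every search/query request needs, so
// they are computed once when the collection info is cached.
type schemaInfo struct {
	fields              []fieldSchema
	pkField             *fieldSchema
	partitionKeyEnabled bool
	nameToID            map[string]int64
}

func newSchemaInfo(fields []fieldSchema) *schemaInfo {
	info := &schemaInfo{fields: fields, nameToID: make(map[string]int64)}
	for i := range fields {
		f := &fields[i]
		info.nameToID[f.Name] = f.ID
		if f.IsPrimaryKey {
			info.pkField = f
		}
		if f.IsPartitionKey {
			info.partitionKeyEnabled = true
		}
	}
	return info
}

func main() {
	info := newSchemaInfo([]fieldSchema{
		{ID: 100, Name: "pk", IsPrimaryKey: true},
		{ID: 101, Name: "vector"},
	})
	fmt.Println("pk field:", info.pkField.Name)
	fmt.Println("field id of vector:", info.nameToID["vector"])
}
```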
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #29625
This PR:
- Add a new implementation of `DeleteBuffer`: listDeleteBuffer (see the sketch after this list)
  - holds a slice of cache blocks
  - the `Put` method appends new delete data into the last block
  - when a block is full, a new block is appended to the list
- Add a `TryDiscard` method to the `DeleteBuffer` interface
  - for doubleCacheBuffer, it does nothing
  - for listDeleteBuffer, it tries to evict "old" blocks, i.e. the blocks
before the first block whose start ts is behind the provided ts
- Add a checkpoint field to the `UpdateVersion` sync action, which shall be
used to discard old cached delete blocks
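A simplified sketch of the list-based buffer, with hypothetical block and entry types instead of the actual querynode code:
```go
package main

import "fmt"

// deleteBlock stands in for a cache block of delete records; startTs is
// the timestamp of the first entry it holds.
type deleteBlock struct {
	startTs uint64
	entries []string
	full    bool
}

// listDeleteBuffer keeps an ordered slice of blocks; Put appends to the
// last block (or opens a new one when it is full), TryDiscard evicts
// blocks that are entirely older than the given checkpoint ts.
type listDeleteBuffer struct {
	blocks []*deleteBlock
}

func (b *listDeleteBuffer) Put(ts uint64, entry string) {
	if len(b.blocks) == 0 || b.blocks[len(b.blocks)-1].full {
		b.blocks = append(b.blocks, &deleteBlock{startTs: ts})
	}
	last := b.blocks[len(b.blocks)-1]
	last.entries = append(last.entries, entry)
	last.full = len(last.entries) >= 2 // tiny block size for the demo
}

// TryDiscard drops every block that precedes the newest block whose
// startTs is still <= the provided ts.
func (b *listDeleteBuffer) TryDiscard(ts uint64) {
	keepFrom := 0
	for i := len(b.blocks) - 1; i >= 0; i-- {
		if b.blocks[i].startTs <= ts {
			keepFrom = i
			break
		}
	}
	b.blocks = b.blocks[keepFrom:]
}

func main() {
	buf := &listDeleteBuffer{}
	for ts := uint64(1); ts <= 6; ts++ {
		buf.Put(ts, fmt.Sprintf("delete@%d", ts))
	}
	buf.TryDiscard(4) // the block holding ts 1-2 can be evicted
	fmt.Println("blocks kept:", len(buf.blocks))
}
```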
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
Fix a logic problem introduced by #29413: the serializer tries to merge the
statslog list while level zero segments do not have statslogs, which results
in an error being returned. `writeBufferBase` ignores this error, but it
should only ignore `ErrSegmentNotFound`.
This PR adds a check on the segment level before merging the statslog list,
and adds an error type check for getSyncTask failures.
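For illustration, a small Go sketch of the intended error handling; `getSyncTask` here is a hypothetical stand-in, only the `errors.Is` pattern is the point:
```go
package main

import (
	"errors"
	"fmt"
)

// ErrSegmentNotFound mirrors the sentinel error used by the write buffer.
var ErrSegmentNotFound = errors.New("segment not found")

// getSyncTask is a hypothetical stand-in that may fail for various reasons.
func getSyncTask(segmentID int64) error {
	if segmentID == 42 {
		return ErrSegmentNotFound
	}
	return fmt.Errorf("merge statslog failed for segment %d", segmentID)
}

func sync(segmentIDs []int64) error {
	for _, id := range segmentIDs {
		if err := getSyncTask(id); err != nil {
			// Only "segment not found" is benign; every other error type
			// has to be returned to the caller instead of being swallowed.
			if errors.Is(err, ErrSegmentNotFound) {
				continue
			}
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(sync([]int64{42})) // <nil>: not-found is ignored
	fmt.Println(sync([]int64{7}))  // real error propagates
}
```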
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
For `GetRecoveryInfo` & `GetRecoveryInfoV2`, level zero segment IDs shall be
specified in the vchan info so that QueryCoord can re-fetch the current
segment info during the watch procedure without carrying all segment info in
the response.
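A hedged illustration of the shape of the payload; the struct and field names below are hypothetical mirrors of the recovery-info message, not the actual proto definitions:
```go
package main

import "fmt"

// vchannelInfo is a hypothetical mirror of the recovery-info payload; the
// real message lives in the data coord proto.
type vchannelInfo struct {
	ChannelName         string
	FlushedSegmentIDs   []int64
	UnflushedSegmentIDs []int64
	LevelZeroSegmentIDs []int64 // carried so QueryCoord can re-fetch them later
}

// segmentIDsToWatch returns every segment ID the watcher may need to
// resolve lazily, without shipping full segment infos in the response.
func segmentIDsToWatch(info vchannelInfo) []int64 {
	ids := append([]int64{}, info.FlushedSegmentIDs...)
	ids = append(ids, info.UnflushedSegmentIDs...)
	ids = append(ids, info.LevelZeroSegmentIDs...)
	return ids
}

func main() {
	info := vchannelInfo{
		ChannelName:         "by-dev-rootcoord-dml_0",
		FlushedSegmentIDs:   []int64{1001},
		LevelZeroSegmentIDs: []int64{2001, 2002},
	}
	fmt.Println("segments to resolve during watch:", segmentIDsToWatch(info))
}
```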
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The tests need to call a private method. Milvus uses `#define` to replace
`private` with `public`; the hack works but would break if the include order
changed.
This PR uses `friend` declarations to make everything work well.
Signed-off-by: yah01 <yang.cen@zilliz.com>
Signed-off-by: yah01 <yah2er0ne@outlook.com>
issue: #29494
1. link with the install path's libblob-chunk-manager
2. the performance of `ShouldBindWith` is better than `ShouldBindBodyWith`
3. the middleware shouldn't read the unrefreshed parameter repeatedly
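For point 2, a minimal Gin handler contrasting the two calls: `ShouldBindBodyWith` buffers the request body in the context so it can be re-bound later, while `ShouldBindWith` skips that extra copy when the body is only bound once (route and struct names are illustrative):
```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/gin-gonic/gin/binding"
)

type insertReq struct {
	CollectionName string `json:"collectionName"`
}

func main() {
	r := gin.Default()

	// ShouldBindBodyWith copies the body into the context so it can be
	// bound again later; when the body is only bound once, ShouldBindWith
	// avoids that extra buffering and is cheaper.
	r.POST("/insert", func(c *gin.Context) {
		var req insertReq
		if err := c.ShouldBindWith(&req, binding.JSON); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		c.JSON(http.StatusOK, gin.H{"collection": req.CollectionName})
	})

	_ = r.Run(":8080")
}
```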
Signed-off-by: PowderLi <min.li@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/27704
Add an inverted index for some data types in Milvus. This index type can
save a lot of memory compared to loading all data into RAM, and speeds up
term queries and range queries.
Supported: `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT`, `DOUBLE`, `BOOL`
and `VARCHAR`.
Not supported: `ARRAY` and `JSON`.
Note:
- The inverted index for `VARCHAR` is not designed to serve full-text
search for now. We treat every row as a whole keyword instead of
tokenizing it into multiple terms.
- The inverted index doesn't support retrieval well, so if you create an
inverted index for a field, operations which depend on the raw data will
fall back to chunk storage, which brings some performance loss, for
example comparisons between two columns and retrieval of output fields.
The inverted index is easy to use.
Taking the collection below as an example:
```python
fields = [
    FieldSchema(name="pk", dtype=DataType.VARCHAR, is_primary=True, auto_id=False, max_length=100),
    FieldSchema(name="int8", dtype=DataType.INT8),
    FieldSchema(name="int16", dtype=DataType.INT16),
    FieldSchema(name="int32", dtype=DataType.INT32),
    FieldSchema(name="int64", dtype=DataType.INT64),
    FieldSchema(name="float", dtype=DataType.FLOAT),
    FieldSchema(name="double", dtype=DataType.DOUBLE),
    FieldSchema(name="bool", dtype=DataType.BOOL),
    FieldSchema(name="varchar", dtype=DataType.VARCHAR, max_length=1000),
    FieldSchema(name="random", dtype=DataType.DOUBLE),
    FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=dim),
]
schema = CollectionSchema(fields)
collection = Collection("demo", schema)
```
Then we can simply create an inverted index on a field via:
```python
index_type = "INVERTED"
collection.create_index("int8", {"index_type": index_type})
collection.create_index("int16", {"index_type": index_type})
collection.create_index("int32", {"index_type": index_type})
collection.create_index("int64", {"index_type": index_type})
collection.create_index("float", {"index_type": index_type})
collection.create_index("double", {"index_type": index_type})
collection.create_index("bool", {"index_type": index_type})
collection.create_index("varchar", {"index_type": index_type})
```
Then term queries and range queries on the field can be sped up
automatically by the inverted index:
```python
result = collection.query(expr='int64 in [1, 2, 3]', output_fields=["pk"])
result = collection.query(expr='int64 < 5', output_fields=["pk"])
result = collection.query(expr='int64 > 2997', output_fields=["pk"])
result = collection.query(expr='1 < int64 < 5', output_fields=["pk"])
```
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
Memory management is broken because the chunk manager is not getting
cleaned up.
Fixing the following error:
```
SIGSEGV: segmentation violation
PC=0x10fc4aa0c m=6 sigcode=2
```
Related: https://github.com/milvus-io/milvus/issues/29507
Signed-off-by: yiwangdr <yiwangdr@gmail.com>
See also #27675
For now, level zero segments shall always be synced as `Flushed` ones.
This PR fixes the case where level zero segments selected by policies other
than the flush ts policy were synced in growing state.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #29575
Add a `getCollectionTarget` method which is atomic when the scope is
`CurrentTargetFirst` or `NextTargetFirst`.
Also return an error when the executor finds no channel in the target manager.
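A hedged sketch of the idea behind an atomic scoped lookup; `targetManager`, the scope constants, and the error text are simplifications, not the real QueryCoord code:
```go
package main

import (
	"fmt"
	"sync"
)

type targetScope int

const (
	currentTargetFirst targetScope = iota
	nextTargetFirst
)

type collectionTarget struct{ channels []string }

// targetManager holds current and next targets; getCollectionTarget reads
// both under one lock so the "first choice, then fallback" lookup cannot
// observe a target being swapped out between the two reads.
type targetManager struct {
	mu      sync.RWMutex
	current map[int64]*collectionTarget
	next    map[int64]*collectionTarget
}

func (m *targetManager) getCollectionTarget(scope targetScope, collectionID int64) (*collectionTarget, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()

	first, second := m.current, m.next
	if scope == nextTargetFirst {
		first, second = m.next, m.current
	}
	if t, ok := first[collectionID]; ok {
		return t, nil
	}
	if t, ok := second[collectionID]; ok {
		return t, nil
	}
	return nil, fmt.Errorf("collection %d has no channel in target manager", collectionID)
}

func main() {
	m := &targetManager{
		current: map[int64]*collectionTarget{},
		next:    map[int64]*collectionTarget{1: {channels: []string{"dml_0"}}},
	}
	t, err := m.getCollectionTarget(currentTargetFirst, 1)
	fmt.Println(t.channels, err)
}
```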
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Since Milvus's build procedure runs in the corresponding docker environment,
it is not bound to a particular host OS; in turn this allows Milvus to be
built on any OS, including the latest Ubuntu.
Background:
our self-hosted runners only use the ubuntu-latest OS.
Benefit:
this lets the GitHub workflow obtain its resources from the self-hosted
runner.
Signed-off-by: Sammy Huang <sammy.huang@zilliz.com>
The array type can't be compacted; the system can continue with the inserted
segments, but these segments can never be compacted.
fix #29503
---------
Signed-off-by: yah01 <yah2er0ne@outlook.com>
issue: #29523
The readable shard leader should still be the old one during channel
balance if the new shard leader is not ready.
This PR fixes QueryCoord choosing the wrong shard leader during channel
balance.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
See also #29526
A previous PR removed the flushed segment info from the request, which caused
the pipeline to fail to exclude flushed segments.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>