Commit Graph

53 Commits

Author SHA1 Message Date
congqixia
b0bd290a6e
enhance: Use internal json(sonic) to replace std json lib (#37708)
Related to #35020

Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
2024-11-18 10:46:31 +08:00
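For context on the swap described in the commit above: sonic (github.com/bytedance/sonic) exposes `Marshal`/`Unmarshal` with the same shape as `encoding/json`, which is why it can act as a drop-in replacement at most call sites. A minimal sketch, using an illustrative record type rather than any actual Milvus struct:

```go
package main

import (
	"fmt"

	"github.com/bytedance/sonic"
)

// record is an illustrative struct, not a Milvus type.
type record struct {
	ID  int64     `json:"id"`
	Vec []float32 `json:"vec"`
}

func main() {
	// sonic.Marshal / sonic.Unmarshal mirror encoding/json's signatures.
	buf, err := sonic.Marshal(record{ID: 1, Vec: []float32{0.1, 0.2}})
	if err != nil {
		panic(err)
	}

	var r record
	if err := sonic.Unmarshal(buf, &r); err != nil {
		panic(err)
	}
	fmt.Println(r.ID, r.Vec)
}
```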
zhenshan.cao
63843dce33
fix: Fix conan gdal building problem (#37338)
issue: https://github.com/milvus-io/milvus/issues/27576

Signed-off-by: zhenshan.cao <zhenshan.cao@zilliz.com>
2024-10-31 21:04:16 +08:00
Hao Tan
67c4340565
feat: Geospatial Data Type and GIS Function Support for milvus server (#35990)
issue: https://github.com/milvus-io/milvus/issues/27576

# Main Goals
1. Create and describe collections with geospatial fields, enabling both
client and server to recognize and process geo fields.
2. Insert geospatial data as payload values in the insert binlog, and
print the values for verification.
3. Load segments containing geospatial data into memory.
4. Ensure query outputs can display geospatial data.
5. Support filtering on GIS functions for geospatial columns.

# Solution
1. **Add Type**: Modify the Milvus core by adding a Geospatial type in
both the C++ and Go code layers, defining the Geospatial data structure
and the corresponding interfaces.
2. **Dependency Libraries**: Introduce necessary geospatial data
processing libraries. In the C++ source code, use Conan package
management to include the GDAL library. In the Go source code, add the
go-geom library to the go.mod file.
3. **Protocol Interface**: Revise the Milvus protocol to provide
mechanisms for Geospatial message serialization and deserialization.
4. **Data Pipeline**: Facilitate interaction between the client and
proxy using the WKT format for geospatial data. The proxy will convert
all data into WKB format for downstream processing, providing column
data interfaces, segment encapsulation, segment loading, payload
writing, and cache block management.
5. **Query Operators**: Implement simple display and support for filter
queries. Initially, focus on filtering based on spatial relationships
for a single column of geospatial literal values, providing parsing and
execution for query expressions.
6. **Client Modification**: Enable the client to handle user input for
geospatial data and facilitate end-to-end testing. See the corresponding
modification in pymilvus.

---------

Signed-off-by: tasty-gumi <1021989072@qq.com>
2024-10-31 20:58:20 +08:00
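To illustrate the WKT/WKB split described in the data pipeline above, here is a hedged sketch using the go-geom dependency the commit introduces; it is only the library calls such a conversion could rest on, not the Milvus pipeline code itself:

```go
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/twpayne/go-geom"
	"github.com/twpayne/go-geom/encoding/wkb"
	"github.com/twpayne/go-geom/encoding/wkt"
)

func main() {
	// A point geometry stands in for a geospatial field value.
	point := geom.NewPointFlat(geom.XY, []float64{116.4, 39.9})

	// WKT: the human-readable text form exchanged with the client.
	text, err := wkt.Marshal(point)
	if err != nil {
		panic(err)
	}
	fmt.Println(text)

	// WKB: the binary form the proxy would hand to downstream components.
	blob, err := wkb.Marshal(point, binary.LittleEndian)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes of WKB\n", len(blob))
}
```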
yihao.dai
b45cf2d49f
enhance: Add max length check for csv import (#37077)
1. Add max length check for csv import.
2. Tidy import options.
3. Tidy common import util functions.

issue: https://github.com/milvus-io/milvus/issues/34150

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-10-25 14:37:29 +08:00
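As a rough illustration of the kind of validation the commit above adds, here is a hedged sketch of a per-cell length check; the function name, the byte-length comparison, and the error wording are illustrative, not the actual importer code:

```go
package main

import "fmt"

// checkVarcharLength is an illustrative helper: it compares a CSV cell's
// byte length against the varchar field's max_length from the schema.
func checkVarcharLength(fieldName, value string, maxLength int) error {
	if len(value) > maxLength {
		return fmt.Errorf("field %q: value length %d exceeds max_length %d",
			fieldName, len(value), maxLength)
	}
	return nil
}

func main() {
	fmt.Println(checkVarcharLength("name", "milvus", 8))            // <nil>
	fmt.Println(checkVarcharLength("name", "a-very-long-value", 8)) // error
}
```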
OxalisCu
60e51f1076
fix: unicode replacement character (0xFFFD) is not supported as csv delimiter (#36310)
https://github.com/milvus-io/milvus/issues/36309

Signed-off-by: OxalisCu <2127298698@qq.com>
2024-10-17 14:45:40 +08:00
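For background on the commit above: Go's `encoding/csv` documents that the `Comma` delimiter must not be `\r`, `\n`, or the Unicode replacement character, so an import path has to reject such delimiters before building the reader. A hedged sketch of that validation (the function name is illustrative):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// validateDelimiter is an illustrative check mirroring encoding/csv's
// documented constraints on the Comma field.
func validateDelimiter(d rune) error {
	if d == '\r' || d == '\n' || d == utf8.RuneError {
		return fmt.Errorf("unsupported CSV delimiter: %q", d)
	}
	return nil
}

func main() {
	fmt.Println(validateDelimiter(','))      // <nil>
	fmt.Println(validateDelimiter('\uFFFD')) // error
}
```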
smellthemoon
463c47ced1
enhance: support default value in import (#36700)
https://github.com/milvus-io/milvus/issues/31728

Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
2024-10-17 12:05:24 +08:00
Buqian Zheng
82c5cf2fa2
feat: add bulk insert support for Functions (#36715)
issue: https://github.com/milvus-io/milvus/issues/35853 and
https://github.com/milvus-io/milvus/issues/35856

Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
2024-10-12 17:19:20 +08:00
smellthemoon
b60164b882
enhance: support null in bulk insert of binlog to help backup null (#36526)
https://github.com/milvus-io/milvus/issues/36341

Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
2024-09-26 14:35:14 +08:00
smellthemoon
89397d1e66
enhance: adjust parquet reader type check with null type (#36266)
#36252 
Remove the unnecessary type check: if users write parquet with a null-type
writer, the import should still succeed.

Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
2024-09-19 18:43:10 +08:00
smellthemoon
fc1bdd4c84
fix: to forbid bulk insert with nullable field in numpy files (#36246)
#36241

Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
2024-09-14 15:35:07 +08:00
OxalisCu
3a381bc247
enhance: Bulkinsert supports null in csv formats (#35912)
see details in this issue
https://github.com/milvus-io/milvus/issues/35911

---------

Signed-off-by: OxalisCu <2127298698@qq.com>
2024-09-09 19:17:07 +08:00
cai.zhang
2c9bb4dfa3
feat: Support stats task to sort segment by PK (#35054)
issue: #33744 

This PR includes the following changes:
1. Added a new task type to the task scheduler in datacoord: stats task,
which sorts segments by primary key.
2. Implemented segment sorting in indexnode.
3. Added a new field `FieldStatsLog` to SegmentInfo to store token index
information.

---------

Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
2024-09-02 14:19:03 +08:00
Xiaofan
50fcfe8ef1
enhance: add nan and inf check (#35683)
fix #35594
Add NaN/Inf checks for float values in imported files.

Signed-off-by: xiaofanluan <xiaofan.luan@zilliz.com>
2024-08-25 15:22:57 +08:00
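A hedged sketch of the float check the commit above describes, using the standard library's `math.IsNaN` and `math.IsInf`; the function name is illustrative rather than the actual Milvus validator:

```go
package main

import (
	"fmt"
	"math"
)

// checkFloat rejects NaN and ±Inf values, which are not valid field data.
func checkFloat(v float64) error {
	if math.IsNaN(v) {
		return fmt.Errorf("value is NaN")
	}
	if math.IsInf(v, 0) {
		return fmt.Errorf("value is infinite: %v", v)
	}
	return nil
}

func main() {
	fmt.Println(checkFloat(3.14))         // <nil>
	fmt.Println(checkFloat(math.NaN()))   // error
	fmt.Println(checkFloat(math.Inf(-1))) // error
}
```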
OxalisCu
ed4eaffc9d
enhance: add csv support for bulkinsert (#34938)
See this issue for details: #34937

---------

Signed-off-by: OxalisCu <2127298698@qq.com>
2024-08-21 17:47:01 +08:00
Ted Xu
41646c8439
feat: integrate new deltalog format (#35522)
See #34123

---------

Signed-off-by: Ted Xu <ted.xu@zilliz.com>
2024-08-20 19:06:56 +08:00
smellthemoon
80a7c78f28
enhance: import supports null in parquet and json formats (#35558)
#31728

---------

Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
2024-08-20 16:50:55 +08:00
nish112022
3948bd4e79
fix: Added check for validating varchar, array max length (#35499)
issue : https://github.com/milvus-io/milvus/issues/34150

This applies to the numpy, parquet, and json readers.

---------

Signed-off-by: Nischay Yadav <nischay.yadav@ibm.com>
2024-08-20 11:42:55 +08:00
yihao.dai
b71e058bc5
enhance: Add import option to skip disk quota check (#35274)
Add an option to skip the disk quota check for backup-restore import.

issue: https://github.com/milvus-io/milvus/issues/33775

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-08-05 16:40:16 +08:00
yihao.dai
678c0a708b
enhance: Refine import error log (#35067)
issue: https://github.com/milvus-io/milvus/issues/35060

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-08-05 15:30:16 +08:00
zhenshan.cao
aa247f192d
enhance: remove unused code for StorageV2 (#35132)
issue: https://github.com/milvus-io/milvus/issues/34168

Signed-off-by: zhenshan.cao <zhenshan.cao@zilliz.com>
2024-08-01 12:08:13 +08:00
shaoting-huang
88b373b024
enhance: binlog primary key turn off dict encoding (#34358)
issue: #34357 

Go Parquet uses dictionary encoding by default, and it falls back to
plain encoding if the dictionary size exceeds the dictionary page size
limit. Users can specify a custom fallback encoding by using
`parquet.WithEncoding(ENCODING_METHOD)` in the writer properties. However,
Go Parquet [falls back to plain
encoding](e65c1e295d/go/parquet/file/column_writer_types.gen.go.tmpl (L238))
rather than the custom encoding method users provide. Therefore, this patch
only turns off dictionary encoding for the primary key.

With a 5 million auto ID primary key benchmark, the parquet file size
improves from 13.93 MB to 8.36 MB when dictionary encoding is turned
off, reducing primary key storage space by 40%.

Signed-off-by: shaoting-huang <shaoting.huang@zilliz.com>
2024-07-17 17:47:44 +08:00
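As a rough illustration of the change described above, here is a hedged sketch using writer-property options from the Arrow Go parquet package; the module version, the exact option names, and the column name "pk" are assumptions rather than details taken from the patch:

```go
package main

import (
	"github.com/apache/arrow/go/v12/parquet"
)

func main() {
	// Dictionary encoding stays on by default, but is disabled for the
	// primary-key column so it is written with plain encoding directly
	// instead of falling back after the dictionary page overflows.
	props := parquet.NewWriterProperties(
		parquet.WithDictionaryDefault(true),
		parquet.WithDictionaryFor("pk", false),
	)
	_ = props // pass props to the parquet / pqarrow file writer
}
```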
congqixia
3333160b8d
enhance: Fix lint issues from recent PRs (#34482)
See also #34483
Some lint issues were introduced due to the lack of a static-check run. This PR
fixes them.

---------

Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
2024-07-09 10:06:24 +08:00
congqixia
2f691f1e67
enhance: Unify DeleteLog parsing code (#34009)
See also #33787

The delete log parsing logic is scattered across many places, which is not
recommended and hard to maintain.

This PR abstracts the common parsing logic into the `DeleteLog.Parse` method to
unify the implementation and make it easier to replace the json parsing lib.

Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>

---------

Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
2024-06-21 16:54:01 +08:00
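A hedged sketch of the unification idea from the commit above: one `Parse` method owns the JSON decoding, so later swapping the JSON library (as the sonic commit at the top of this log does) touches a single place. The field names here are illustrative, not the exact Milvus `DeleteLog` layout:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DeleteLog sketches a delete-log record; real field names may differ.
type DeleteLog struct {
	PK uint64 `json:"pk"`
	TS uint64 `json:"ts"`
}

// Parse centralizes the string-to-struct decoding in one method.
func (dl *DeleteLog) Parse(val string) error {
	return json.Unmarshal([]byte(val), dl)
}

func main() {
	var dl DeleteLog
	if err := dl.Parse(`{"pk": 7, "ts": 100}`); err != nil {
		panic(err)
	}
	fmt.Println(dl.PK, dl.TS)
}
```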
smellthemoon
2a1356985d
enhance: support null in go payload (#32296)
#31728

---------

Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
2024-06-19 17:08:00 +08:00
congqixia
512ea6be5f
enhance: Avoid merging insert data when buffering insert msgs (#33562)
See also #33561

This PR:
- Use zero copy when buffering insert messages
- Make `storage.InsertCodec` support serializing multiple insert data
chunks into the same batch of binlog files

Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>

---------

Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
2024-06-13 11:15:56 +08:00
Cai Yudong
9d4535ce0b
enhance: Handle Float16Vector/BFloat16Vector numpy bulk insert the same as BinaryVector (#33760)
Issue: #22837

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-06-12 17:17:55 +08:00
yihao.dai
b1d46eb34b
fix: Fix multiple vector fields import (#33723)
1. Fix dim mismatch with multi-vector fields and JSON import
2. Enhance: do not display file ID in GetImportResponse.

issue: https://github.com/milvus-io/milvus/issues/33681,
https://github.com/milvus-io/milvus/issues/33682

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-06-10 21:57:54 +08:00
yihao.dai
3540eee977
enhance: Support L0 import (#33514)
issue: https://github.com/milvus-io/milvus/issues/33157

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-06-07 14:17:20 +08:00
Cai Yudong
4004e4c545
enhance: Optimize bulk insert unittest (#33224)
Issue: #22837

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-05-24 10:23:41 +08:00
Cai Yudong
cb480d17c8
fix: Fix SparseFloatVector data parse error for parquet (#33187)
Issue: #22837

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-05-21 15:09:39 +08:00
smellthemoon
89ad3eb0ca
enhance: reduce memory when read field (#33195)
Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
2024-05-20 22:25:38 +08:00
Cai Yudong
b560602885
enhance: Store SparseFloatVector into parquet as JSON string (#33101)
Issue: #22837

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-05-17 15:01:37 +08:00
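To make the storage idea above concrete: a sparse float vector can be serialized as a JSON object string mapping dimension index to value, and that string is what the parquet column holds. A minimal sketch; the exact key/value representation Milvus uses may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Sparse vector: only non-zero dimensions are kept.
	sparse := map[uint32]float32{3: 0.5, 17: 1.25}

	// encoding/json serializes integer map keys as strings.
	data, err := json.Marshal(sparse)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // {"17":1.25,"3":0.5}
}
```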
Cai Yudong
4ef163fb70
enhance: Support readable JSON file import for Float16/BFloat16/SparseFloat (#33064)
Issue: #22837

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-05-16 14:47:35 +08:00
Cai Yudong
dc89c6f810
enhance: remove duplicated data generation APIs for bulk insert test (#32889)
Issue: #22837

including the following changes:
1. Add APIs CreateInsertData() and BuildArrayData() in
internal/util/testutil
2. Remove duplicated test APIs from the importutilv2 unittest and the bulk
insert integration test

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-05-10 15:27:31 +08:00
Cai Yudong
8bb58d0460
enhance: optimize vector offsets handling for parquet (#32822)
Issue: #22837

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-05-09 14:43:30 +08:00
Cai Yudong
bcdbd1966e
feat: Support sparse float vector bulk insert for binlog/json/parquet (#32649)
Issue: #22837

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-05-07 18:43:30 +08:00
yihao.dai
4de063ae14
fix: Make the dynamic column optional in parquet import (#32738)
issue: https://github.com/milvus-io/milvus/issues/32729

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-05-07 11:21:29 +08:00
yihao.dai
1594122c0a
enhance: Make the dynamic field file optional during numpy import (#32596)
1. Make the dynamic field file optional during numpy import
2. Add an integration import test with the dynamic field
3. Disallow a pk file when autoID=true during numpy import

issue: https://github.com/milvus-io/milvus/issues/32542

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-04-28 19:39:25 +08:00
chyezh
2586c2f1b3
enhance: use WalkWithPrefix api for oss, enable pipelined file gc (#31740)
issue: #19095, #29655, #31718

- Change `ListWithPrefix` to `WalkWithPrefix` for object storage, walking
in a pipeline mode.

- File garbage collection is performed in a separate goroutine.

- Segment index recycling cleans up index files too.

---------

Signed-off-by: chyezh <chyezh@outlook.com>
2024-04-25 20:41:27 +08:00
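A hedged sketch of the pipeline shape the commit above describes: a walk callback streams object keys into a channel instead of materializing a full listing, and garbage collection consumes the channel in its own goroutine. All names are illustrative stand-ins, not the Milvus object-storage API:

```go
package main

import (
	"context"
	"fmt"
)

// walkWithPrefix stands in for an object-storage walk: it invokes fn for
// each key under prefix and stops early if fn returns false.
func walkWithPrefix(ctx context.Context, prefix string, fn func(key string) bool) error {
	for _, key := range []string{prefix + "/seg1/idx", prefix + "/seg2/idx"} {
		if ctx.Err() != nil {
			return ctx.Err()
		}
		if !fn(key) {
			return nil
		}
	}
	return nil
}

func main() {
	ctx := context.Background()
	keys := make(chan string, 16)

	// Producer: walk the prefix and stream keys into the channel.
	go func() {
		defer close(keys)
		_ = walkWithPrefix(ctx, "files/index", func(key string) bool {
			keys <- key
			return true
		})
	}()

	// Consumer: garbage-collect files as keys arrive, concurrently with the walk.
	for key := range keys {
		fmt.Println("recycling", key)
	}
}
```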
Cai Yudong
5fc439c600
feat: Bulk insert support fp16/bf16 (#32157)
Issue: #22837

Signed-off-by: Cai Yudong <yudong.cai@zilliz.com>
2024-04-22 10:05:22 +08:00
yihao.dai
aa96843d31
fix: Fix import hanging and improve logging output (#32166)
Fix import hanging when the previous import task failed, and improve
parquet import logging output.

issue: https://github.com/milvus-io/milvus/issues/31834

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-04-13 22:03:23 +08:00
yihao.dai
273df98e20
enhance: Add binlog import integration test (#32112)
issue: https://github.com/milvus-io/milvus/issues/28521

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-04-11 10:31:18 +08:00
yihao.dai
1b5554c8cb
enhance: Support $meta key for json import (#32013)
During JSON import:
1. Allow the specification of the $meta key
2. Prohibit duplicated keys within the $meta field, for instance,
`{"id": 1, "vector": [], "x": 6, "$meta": {"x": 8}}`

issue: https://github.com/milvus-io/milvus/issues/31835

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-04-10 17:27:17 +08:00
yihao.dai
4e264003bf
enhance: Ensure ImportV2 waits for the index to be built and refine some logic (#31629)
Feature Introduced:
1. Ensure ImportV2 waits for the index to be built

Enhancements Introduced:
1. Utilization of local time for timeout ts instead of allocating ts
from rootcoord.
2. Enhanced input file length check for binlog import.
3. Removal of duplicated manager in datanode.
4. Renaming of executor to scheduler in datanode.
5. Utilization of a thread pool in the scheduler in datanode.

issue: https://github.com/milvus-io/milvus/issues/28521

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-04-01 20:09:13 +08:00
yihao.dai
31cf849f68
enhance: Support retrieving file size from importutilv2.Reader (#31533)
To reduce the overhead caused by listing the S3 objects, add an
interface to importutil.Reader to retrieve file sizes.

issue: https://github.com/milvus-io/milvus/issues/31532,
https://github.com/milvus-io/milvus/issues/28521

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-03-25 20:29:07 +08:00
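A hedged sketch of the interface shape implied by the commit above; the method names and signatures are illustrative, not the exact `importutilv2.Reader` definition:

```go
package importsketch

// Reader sketches an import reader that can report the size of the file
// it already has open, sparing the caller an extra object-storage
// list/stat round trip.
type Reader interface {
	// Read returns the next decoded batch of rows from the file.
	Read() (any, error)
	// Size reports the size of the underlying file in bytes.
	Size() (int64, error)
	Close() error
}
```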
yihao.dai
9a13b9822f
enhance: Return more fields in import progress response (#31539)
Return more fields in the import progress response, including importedRows and
totalRows. Additionally, ensure compatibility with the old import
progress response by retaining the create timestamp and row count fields.

issue: https://github.com/milvus-io/milvus/issues/31448
https://github.com/milvus-io/milvus/issues/31237
https://github.com/milvus-io/milvus/issues/28521

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-03-24 21:57:06 +08:00
yihao.dai
87b3c25b15
fix: Fix binlog import (#31205)
1. File type validation is omitted during binlog import.
2. System fields are appended to the schema during binlog import.

issue: https://github.com/milvus-io/milvus/issues/28521

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-03-13 10:35:04 +08:00
cai.zhang
de2c95d00c
enhance: Constraint dynamic field as key-value format (#31183)
issue: #31051

Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
2024-03-12 12:45:03 +08:00
yihao.dai
c5918290e6
feat: Add import executor and manager for datanode (#29438)
This PR introduces new importv2 roles for datanode:
1. Executor: executes tasks; an import task is divided into the
following steps: read data -> hash data -> sync data;
2. Manager: manages all the tasks.

issue: https://github.com/milvus-io/milvus/issues/28521

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-01-31 20:45:04 +08:00
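A hedged sketch of the read -> hash -> sync flow described above; the types and helpers are illustrative stand-ins, not the datanode implementation:

```go
package main

import "fmt"

type batch []string

// readData stands in for decoding rows from the import files.
func readData() []batch {
	return []batch{{"row1", "row2", "row3"}, {"row4"}}
}

// hashData stands in for distributing rows to shards/segments by hash.
func hashData(b batch, shardNum int) map[int]batch {
	out := make(map[int]batch)
	for i, row := range b {
		shard := i % shardNum
		out[shard] = append(out[shard], row)
	}
	return out
}

// syncData stands in for flushing a shard's rows to storage.
func syncData(shard int, b batch) {
	fmt.Printf("sync shard %d: %v\n", shard, b)
}

func main() {
	for _, b := range readData() {
		for shard, rows := range hashData(b, 2) {
			syncData(shard, rows)
		}
	}
}
```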
yihao.dai
3d07b6682c
feat: Add import reader for numpy (#29253)
This PR implements a new numpy reader for import.

issue: https://github.com/milvus-io/milvus/issues/28521

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2024-01-08 19:42:49 +08:00