fix changelog conflict

yhz 2019-11-25 11:40:40 +08:00
commit 35a25e01eb
127 changed files with 2348 additions and 2491 deletions


@ -17,6 +17,10 @@ Please mark all change in change log and use the ticket from JIRA.
- \#399 - Create partition should fail if the partition tag already exists
- \#412 - Message returned is confusing when a partition is created with a null partition name
- \#416 - Dropping the same partition repeatedly reports success
- \#440 - Query API in customization still uses old version
- \#440 - Server cannot start up with gpu_resource_config.enable=false in GPU version
- \#458 - Index data is not compatible between 0.5 and 0.6
- \#486 - GPU not used during index building
## Feature
- \#12 - Pure CPU version for Milvus
@ -26,6 +30,7 @@ Please mark all change in change log and use the ticket from JIRA.
- \#227 - Support new index types SPTAG-KDT and SPTAG-BKT
- \#346 - Support build index with multiple gpu
- \#420 - Update shards merge part to match v0.5.3
- \#488 - Add log in scheduler/optimizer
## Improvement
- \#255 - Add ivfsq8 test report detailed version
@ -40,6 +45,9 @@ Please mark all change in change log and use the ticket from JIRA.
- \#358 - Add more information in build.sh and install.md
- \#404 - Add virtual method Init() in Pass abstract class
- \#409 - Add a Fallback pass in optimizer
- \#433 - C++ SDK query result is not easy to use
- \#449 - Add ShowPartitions example for C++ SDK
- \#470 - Index should not be built for small raw files
## Task


@ -5,10 +5,10 @@
![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen)
![Language](https://img.shields.io/badge/language-C%2B%2B-blue)
[![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master)
![Release](https://img.shields.io/badge/release-v0.5.1-yellowgreen)
![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen)
![Release_date](https://img.shields.io/badge/release%20date-November-yellowgreen)
[中文版](README_CN.md)
[中文版](README_CN.md) | [日本語版](README_JP.md)
## What is Milvus
@ -18,7 +18,7 @@ For more detailed introduction of Milvus and its architecture, see [Milvus overv
Milvus provides stable [Python](https://github.com/milvus-io/pymilvus), [Java](https://github.com/milvus-io/milvus-sdk-java) and [C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIs.
Keep up-to-date with newest releases and latest updates by reading Milvus [release notes](https://www.milvus.io/docs/en/release/v0.5.0/).
Keep up-to-date with newest releases and latest updates by reading Milvus [release notes](https://www.milvus.io/docs/en/release/v0.5.3/).
## Get started
@ -52,11 +52,13 @@ We use [GitHub issues](https://github.com/milvus-io/milvus/issues) to track issu
To connect with other users and contributors, welcome to join our [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).
## Thanks
## Contributors
We greatly appreciate the help of the following people.
Below is a list of Milvus contributors. We greatly appreciate your contributions!
- [akihoni](https://github.com/akihoni) found a broken link and a small typo in the README file.
- [akihoni](https://github.com/akihoni) provided the CN version of README, and found a broken link in the doc.
- [goodhamgupta](https://github.com/goodhamgupta) fixed a filename typo in the bootcamp doc.
- [erdustiggen](https://github.com/erdustiggen) changed from std::cout to LOG for error messages, and fixed a clang format issue as well as some grammatical errors.
## Resources
@ -64,6 +66,8 @@ We greatly appreciate the help of the following people.
- [Milvus bootcamp](https://github.com/milvus-io/bootcamp)
- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs)
- [Milvus Medium](https://medium.com/@milvusio)
- [Milvus CSDN](https://zilliz.blog.csdn.net/)
@ -74,6 +78,4 @@ We greatly appreciate the help of the following people.
## License
[Apache License 2.0](LICENSE)
[Apache License 2.0](LICENSE)


@ -1,155 +1,35 @@
![Milvuslogo](https://raw.githubusercontent.com/milvus-io/docs/master/assets/milvus_logo.png)
[![Slack](https://img.shields.io/badge/Join-Slack-orange)](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk)
![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen)
![Language](https://img.shields.io/badge/language-C%2B%2B-blue)
[![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master)
![Release](https://img.shields.io/badge/release-v0.5.0-orange)
![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen)
![Release_date](https://img.shields.io/badge/release_date-October-yellowgreen)
- [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk)
- [Twitter](https://twitter.com/milvusio)
- [Facebook](https://www.facebook.com/io.milvus.5)
- [Blog](https://www.milvus.io/blog/)
- [CSDN](https://zilliz.blog.csdn.net/)
- [Chinese official website](https://www.milvus.io/zh-CN/)
# Welcome to Milvus
## What is Milvus
Milvus is an open-source similarity search engine for massive-scale feature vectors. Built on a heterogeneous many-core computing framework, it offers lower cost and better performance. With limited computing resources, it can search one billion vectors with millisecond response.
Milvus provides stable [Python](https://github.com/milvus-io/pymilvus), [Java](https://github.com/milvus-io/milvus-sdk-java) and [C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIs.
For a detailed introduction to Milvus and its architecture, see [Milvus overview](https://www.milvus.io/docs/zh-CN/aboutmilvus/overview/).
See the [release notes](https://milvus.io/docs/zh-CN/release/v0.5.0/) for the latest Milvus release.
Milvus provides stable [Python](https://github.com/milvus-io/pymilvus), [Java](https://github.com/milvus-io/milvus-sdk-java) and C++ APIs.
- Heterogeneous many-core
Milvus is built on a heterogeneous many-core computing framework, offering lower cost and better performance.
- Multiple index types
Milvus supports a variety of indexing methods, using algorithms such as quantization-based, tree-based, and graph-based indexes.
- Intelligent resource management
Milvus automatically tunes query computation and index building according to the actual data size and available resources.
- Horizontal scalability
Milvus supports online/offline scaling; compute and storage nodes can be scaled elastically with simple commands.
- High availability
Milvus integrates with Kubernetes, effectively avoiding single points of failure.
- Ease of use
Milvus is easy to install and use, letting you focus on your feature vectors.
- Visualized monitoring
You can use Prometheus-based graphical monitoring to track system performance in real time.
## Overall architecture
![Milvus_arch](https://github.com/milvus-io/docs/blob/master/assets/milvus_arch.png)
See the [release notes](https://milvus.io/docs/zh-CN/release/v0.5.3/) for the features and updates in the latest release.
## Get started with Milvus
### Hardware requirements
See the [Milvus installation guide](https://www.milvus.io/docs/zh-CN/userguide/install_milvus/) to install Milvus with a Docker container. To build from source, see [Build from source](install.md).
| Hardware | Recommended configuration |
| -------- | ------------------------------------- |
| CPU | Intel CPU Haswell or higher |
| GPU | NVIDIA Pascal series or higher |
| Memory | 8 GB or more (depending on data volume) |
| Storage | SATA 3.0 SSD or higher |
### Use Docker
You can easily install Milvus with Docker. For details, see the [Milvus installation guide](https://milvus.io/docs/zh-CN/userguide/install_milvus/).
### Build from source
#### Software requirements
- Ubuntu 18.04 or higher
- CMake 3.14 or higher
- CUDA 10.0 or higher
- NVIDIA driver 418 or higher
#### Compilation
##### Step 1: Install dependencies
```shell
$ cd [Milvus sourcecode path]/core
$ ./ubuntu_build_deps.sh
```
##### Step 2: Build
```shell
$ cd [Milvus sourcecode path]/core
$ ./build.sh -t Debug
or
$ ./build.sh -t Release
```
After a successful build, all required Milvus components are installed under `[Milvus root path]/core/milvus`.
##### Start Milvus server
```shell
$ cd [Milvus root path]/core/milvus
```
Add the `lib/` directory to `LD_LIBRARY_PATH`:
```shell
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/milvus/lib
```
Start the Milvus server:
```shell
$ cd scripts
$ ./start_server.sh
```
To stop the Milvus server, run:
```shell
$ ./stop_server.sh
```
To modify the Milvus configuration files `conf/server_config.yaml` and `conf/log_config.conf`, see [Milvus configuration](https://milvus.io/docs/zh-CN/reference/milvus_config/).
To change Milvus settings, see [Milvus configuration](https://www.milvus.io/docs/zh-CN/reference/milvus_config/).
### Try your first Milvus program
#### Run Python example code
You can try running the Milvus example code with [Python](https://www.milvus.io/docs/en/userguide/example_code/) or the [Java example code](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples).
Make sure your system's Python version is [Python 3.5](https://www.python.org/downloads/) or higher.
Install the Milvus Python SDK.
```shell
# Install Milvus Python SDK
$ pip install pymilvus==0.2.3
```
Create a file named `example.py` and add the [Python example code](https://github.com/milvus-io/pymilvus/blob/master/examples/advanced_example.py) to it.
Run the example code:
```shell
# Run Milvus Python example
$ python3 example.py
```
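For reference, below is a minimal sketch of what `example.py` could look like. It is an illustration under stated assumptions, not the official example: the table name, dimension, and parameter values are made up, and the calls follow the table-based pymilvus 0.2.x API (`create_table`, `add_vectors`, `search_vectors`), whose exact signatures may differ between versions; the linked Python example code above remains authoritative.
```python
# Hypothetical example.py sketch for pymilvus 0.2.x (table-based API).
# Names and parameter values below are illustrative assumptions only.
import random

from milvus import Milvus, MetricType

client = Milvus()
client.connect(host='127.0.0.1', port='19530')  # default Milvus port

# Create a 16-dimensional table that uses L2 (Euclidean) distance.
client.create_table({
    'table_name': 'demo_table',   # arbitrary table name
    'dimension': 16,
    'index_file_size': 1024,      # size (MB) of each raw-data segment
    'metric_type': MetricType.L2,
})

# Insert 20 random vectors; Milvus returns an ID for each row.
vectors = [[random.random() for _ in range(16)] for _ in range(20)]
status, ids = client.add_vectors(table_name='demo_table', records=vectors)

# Query the 5 nearest neighbors of the first vector.
status, results = client.search_vectors(
    table_name='demo_table', query_records=vectors[:1], top_k=5, nprobe=16)
print(status, results)

client.disconnect()
```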
#### Run C++ example code
To run the C++ example code, use the following commands:
@ -157,41 +37,44 @@ $ python3 example.py
```shell
# Run Milvus C++ example
$ ./sdk_simple
```
#### Run Java example code
## Roadmap
Make sure your system's Java version is Java 8 or higher.
Get the Java example code from [here](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples).
Read our [roadmap](https://milvus.io/docs/zh-CN/roadmap/) to learn more about upcoming features.
## Contribution guidelines
Contributions are warmly welcome. For details on the contribution process, see the [contribution guidelines](https://github.com/milvus-io/milvus/blob/master/CONTRIBUTING.md). This project follows the Milvus [code of conduct](https://github.com/milvus-io/milvus/blob/master/CODE_OF_CONDUCT.md). If you wish to participate, please abide by it.
We use [GitHub issues](https://github.com/milvus-io/milvus/issues/new/choose) to track issues and patches. To ask a question or start a discussion, please join our community.
We use [GitHub issues](https://github.com/milvus-io/milvus/issues) to track issues and patches. To ask a question or start a discussion, please join our community.
## Join the Milvus community
To connect with other users and contributors, join our [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).
## Milvus roadmap
## Contributors
Read our [roadmap](https://milvus.io/docs/zh-CN/roadmap/) for more upcoming features.
Below is a list of Milvus contributors; we deeply appreciate their contributions:
- [akihoni](https://github.com/akihoni) provided the CN version of README and found a broken link in the doc.
- [goodhamgupta](https://github.com/goodhamgupta) found and fixed a filename typo in the bootcamp doc.
- [erdustiggen](https://github.com/erdustiggen) changed error messages from std::cout to LOG, and fixed a clang format issue as well as some grammatical errors.
## Related links
[Milvus official website](https://www.milvus.io/)
- [Milvus.io](https://www.milvus.io)
[Milvus documentation](https://www.milvus.io/docs/en/userguide/install_milvus/)
- [Milvus bootcamp](https://github.com/milvus-io/bootcamp)
[Milvus bootcamp](https://github.com/milvus-io/bootcamp)
- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs)
[Milvus blog](https://www.milvus.io/blog/)
- [Milvus Medium](https://medium.com/@milvusio)
[Milvus CSDN](https://zilliz.blog.csdn.net/)
- [Milvus CSDN](https://zilliz.blog.csdn.net/)
[Milvus roadmap](https://milvus.io/docs/en/roadmap/)
- [Milvus Twitter](https://twitter.com/milvusio)
- [Milvus Facebook](https://www.facebook.com/io.milvus.5)
## License
[Apache License 2.0](https://github.com/milvus-io/milvus/blob/master/LICENSE)

README_JP.md (new file, 75 lines)

@ -0,0 +1,75 @@
![Milvuslogo](https://github.com/milvus-io/docs/blob/master/assets/milvus_logo.png)
[![Slack](https://img.shields.io/badge/Join-Slack-orange)](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk)
![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen)
![Language](https://img.shields.io/badge/language-C%2B%2B-blue)
[![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master)
![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen)
![Release_date](https://img.shields.io/badge/release%20date-November-yellowgreen)
# Welcome to Milvus
## Overview
Milvus is the world's fastest similarity search engine for feature vectors. Built on a heterogeneous computing architecture, it maximizes efficiency: searching for targets among billions of vectors takes only a few milliseconds and requires minimal computing resources.
Milvus provides stable [Python](https://github.com/milvus-io/pymilvus), [Java](https://github.com/milvus-io/milvus-sdk-java) and [C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIs.
Read the Milvus [release notes](https://milvus.io/docs/en/release/v0.5.3/) to keep up with the latest versions and updates.
## Get started
Installing Milvus with Docker is easy. See the [Milvus installation guide](https://milvus.io/docs/en/userguide/install_milvus/). To build Milvus from source, see [Build from source](install.md).
To configure Milvus, read [Milvus configuration](https://github.com/milvus-io/docs/blob/master/reference/milvus_config.md).
### Try your first Milvus program
Try running the Milvus example code in [Python](https://www.milvus.io/docs/en/userguide/example_code/) or [Java](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples).
To run the C++ example code, use the following commands:
```shell
# Run Milvus C++ example
$ cd [Milvus root path]/core/milvus/bin
$ ./sdk_simple
```
## Milvus roadmap
Read the [roadmap](https://milvus.io/docs/en/roadmap/) to learn about features planned for future releases.
## Contribution guidelines
Contributions to this project are deeply appreciated. If you would like to contribute to Milvus, please read the [contribution guidelines](CONTRIBUTING.md). This project follows the Milvus [code of conduct](CODE_OF_CONDUCT.md); if you wish to participate, please abide by it.
Use [GitHub issues](https://github.com/milvus-io/milvus/issues) to report problems and bugs. For general questions, join the Milvus community.
## Join the Milvus community
To connect with other contributors, join the Milvus [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).
## Resources
- [Milvus.io](https://www.milvus.io)
- [Milvus bootcamp](https://github.com/milvus-io/bootcamp)
- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs)
- [Milvus Medium](https://medium.com/@milvusio)
- [Milvus CSDN](https://zilliz.blog.csdn.net/)
- [Milvus Twitter](https://twitter.com/milvusio)
- [Milvus Facebook](https://www.facebook.com/io.milvus.5)
## License
[Apache License 2.0](LICENSE)


@ -50,37 +50,37 @@ pipeline {
}
stages {
stage("Run GPU Version Build") {
stage("Run Build") {
agent {
kubernetes {
label "${BINRARY_VERSION}-build"
label "${env.BINRARY_VERSION}-build"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/milvus-gpu-version-build-env-pod.yaml'
}
}
stages {
stage('GPU Version Build') {
stage('Build') {
steps {
container('milvus-build-env') {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/build.groovy"
}
}
}
}
stage('GPU Version Code Coverage') {
stage('Code Coverage') {
steps {
container('milvus-build-env') {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/coverage.groovy"
}
}
}
}
stage('Upload GPU Version Package') {
stage('Upload Package') {
steps {
container('milvus-build-env') {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/package.groovy"
}
@ -90,17 +90,17 @@ pipeline {
}
}
stage("Publish GPU Version docker images") {
stage("Publish docker images") {
agent {
kubernetes {
label "${BINRARY_VERSION}-publish"
label "${env.BINRARY_VERSION}-publish"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/docker-pod.yaml'
}
}
stages {
stage('Publish GPU Version') {
stage('Publish') {
steps {
container('publish-images'){
script {
@ -112,17 +112,22 @@ pipeline {
}
}
stage("Deploy GPU Version to Development") {
stage("Deploy to Development") {
environment {
FROMAT_SEMVER = "${env.SEMVER}".replaceAll(".", "-")
HELM_RELEASE_NAME = "${env.PIPELINE_NAME}-${env.FROMAT_SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}".toLowerCase()
}
agent {
kubernetes {
label "${BINRARY_VERSION}-dev-test"
label "${env.BINRARY_VERSION}-dev-test"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/testEnvironment.yaml'
}
}
stages {
stage("Deploy GPU Version to Dev") {
stage("Deploy to Dev") {
steps {
container('milvus-test-env') {
script {
@ -132,7 +137,7 @@ pipeline {
}
}
stage("GPU Version Dev Test") {
stage("Dev Test") {
steps {
container('milvus-test-env') {
script {
@ -147,7 +152,7 @@ pipeline {
}
}
stage ("Cleanup GPU Version Dev") {
stage ("Cleanup Dev") {
steps {
container('milvus-test-env') {
script {
@ -180,37 +185,37 @@ pipeline {
}
stages {
stage("Run CPU Version Build") {
stage("Run Build") {
agent {
kubernetes {
label "${BINRARY_VERSION}-build"
label "${env.BINRARY_VERSION}-build"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/milvus-cpu-version-build-env-pod.yaml'
}
}
stages {
stage('Build CPU Version') {
stage('Build') {
steps {
container('milvus-build-env') {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/build.groovy"
}
}
}
}
stage('CPU Version Code Coverage') {
stage('Code Coverage') {
steps {
container('milvus-build-env') {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/coverage.groovy"
}
}
}
}
stage('Upload CPU Version Package') {
stage('Upload Package') {
steps {
container('milvus-build-env') {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/package.groovy"
}
@ -220,17 +225,17 @@ pipeline {
}
}
stage("Publish CPU Version docker images") {
stage("Publish docker images") {
agent {
kubernetes {
label "${BINRARY_VERSION}-publish"
label "${env.BINRARY_VERSION}-publish"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/docker-pod.yaml'
}
}
stages {
stage('Publish CPU Version') {
stage('Publish') {
steps {
container('publish-images'){
script {
@ -242,17 +247,22 @@ pipeline {
}
}
stage("Deploy CPU Version to Development") {
stage("Deploy to Development") {
environment {
FROMAT_SEMVER = "${env.SEMVER}".replaceAll(".", "-")
HELM_RELEASE_NAME = "${env.PIPELINE_NAME}-${env.FROMAT_SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}".toLowerCase()
}
agent {
kubernetes {
label "${BINRARY_VERSION}-dev-test"
label "${env.BINRARY_VERSION}-dev-test"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/testEnvironment.yaml'
}
}
stages {
stage("Deploy CPU Version to Dev") {
stage("Deploy to Dev") {
steps {
container('milvus-test-env') {
script {
@ -262,7 +272,7 @@ pipeline {
}
}
stage("CPU Version Dev Test") {
stage("Dev Test") {
steps {
container('milvus-test-env') {
script {
@ -277,7 +287,7 @@ pipeline {
}
}
stage ("Cleanup CPU Version Dev") {
stage ("Cleanup Dev") {
steps {
container('milvus-test-env') {
script {


@ -7,7 +7,7 @@ metadata:
componet: cpu-build-env
spec:
containers:
- name: milvus-build-env
- name: milvus-cpu-build-env
image: registry.zilliz.com/milvus/milvus-cpu-build-env:v0.6.0-ubuntu18.04
env:
- name: POD_IP


@ -7,7 +7,7 @@ metadata:
componet: gpu-build-env
spec:
containers:
- name: milvus-build-env
- name: milvus-gpu-build-env
image: registry.zilliz.com/milvus/milvus-gpu-build-env:v0.6.0-ubuntu18.04
env:
- name: POD_IP


@ -1,12 +1,12 @@
try {
def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true
def helmResult = sh script: "helm status ${env.HELM_RELEASE_NAME}", returnStatus: true
if (!helmResult) {
sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}"
sh "helm del --purge ${env.HELM_RELEASE_NAME}"
}
} catch (exc) {
def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true
def helmResult = sh script: "helm status ${env.HELM_RELEASE_NAME}", returnStatus: true
if (!helmResult) {
sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}"
sh "helm del --purge ${env.HELM_RELEASE_NAME}"
}
throw exc
}


@ -3,7 +3,7 @@ sh 'helm repo update'
dir ('milvus-helm') {
checkout([$class: 'GitSCM', branches: [[name: "0.6.0"]], userRemoteConfigs: [[url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.6.0:refs/remotes/origin/0.6.0"]]])
dir ("milvus") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/sqlite_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/sqlite_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
}
}


@ -2,6 +2,7 @@ timeout(time: 5, unit: 'MINUTES') {
dir ("ci/jenkins/scripts") {
sh "pip3 install -r requirements.txt"
sh "./yaml_processor.py merge -f /opt/milvus/conf/server_config.yaml -m ../yaml/update_server_config.yaml -i && rm /opt/milvus/conf/server_config.yaml.bak"
sh "sed -i 's/\\/tmp\\/milvus/\\/opt\\/milvus/g' /opt/milvus/conf/log_config.conf"
}
sh "tar -zcvf ./${PROJECT_NAME}-${PACKAGE_VERSION}.tar.gz -C /opt/ milvus"
withCredentials([usernamePassword(credentialsId: "${params.JFROG_CREDENTIALS_ID}", usernameVariable: 'JFROG_USERNAME', passwordVariable: 'JFROG_PASSWORD')]) {


@ -1,47 +1,45 @@
container('publish-images') {
timeout(time: 15, unit: 'MINUTES') {
dir ("docker/deploy/${env.BINRARY_VERSION}/${env.OS_NAME}") {
def binaryPackage = "${PROJECT_NAME}-${PACKAGE_VERSION}.tar.gz"
timeout(time: 15, unit: 'MINUTES') {
dir ("docker/deploy/${env.BINRARY_VERSION}/${env.OS_NAME}") {
def binaryPackage = "${PROJECT_NAME}-${PACKAGE_VERSION}.tar.gz"
withCredentials([usernamePassword(credentialsId: "${params.JFROG_CREDENTIALS_ID}", usernameVariable: 'JFROG_USERNAME', passwordVariable: 'JFROG_PASSWORD')]) {
def downloadStatus = sh(returnStatus: true, script: "curl -u${JFROG_USERNAME}:${JFROG_PASSWORD} -O ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage}")
withCredentials([usernamePassword(credentialsId: "${params.JFROG_CREDENTIALS_ID}", usernameVariable: 'JFROG_USERNAME', passwordVariable: 'JFROG_PASSWORD')]) {
def downloadStatus = sh(returnStatus: true, script: "curl -u${JFROG_USERNAME}:${JFROG_PASSWORD} -O ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage}")
if (downloadStatus != 0) {
error("\" Download \" ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage} \" failed!")
}
if (downloadStatus != 0) {
error("\" Download \" ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage} \" failed!")
}
sh "tar zxvf ${binaryPackage}"
def imageName = "${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
sh "tar zxvf ${binaryPackage}"
def imageName = "${PROJECT_NAME}/engine:${DOCKER_VERSION}"
try {
def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null")
if (isExistSourceImage == 0) {
def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}")
}
def customImage = docker.build("${imageName}")
def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null")
if (isExistTargeImage == 0) {
def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}")
}
docker.withRegistry("https://${params.DOKCER_REGISTRY_URL}", "${params.DOCKER_CREDENTIALS_ID}") {
customImage.push()
}
} catch (exc) {
throw exc
} finally {
def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null")
if (isExistSourceImage == 0) {
def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}")
}
def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null")
if (isExistTargeImage == 0) {
def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}")
}
try {
def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null")
if (isExistSourceImage == 0) {
def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}")
}
}
def customImage = docker.build("${imageName}")
def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null")
if (isExistTargeImage == 0) {
def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}")
}
docker.withRegistry("https://${params.DOKCER_REGISTRY_URL}", "${params.DOCKER_CREDENTIALS_ID}") {
customImage.push()
}
} catch (exc) {
throw exc
} finally {
def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null")
if (isExistSourceImage == 0) {
def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}")
}
def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null")
if (isExistTargeImage == 0) {
def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}")
}
}
}
}


@ -1,10 +1,10 @@
timeout(time: 90, unit: 'MINUTES') {
dir ("tests/milvus_python_test") {
sh 'python3 -m pip install -r requirements.txt'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-milvus-engine.milvus.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
load "ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
if (!fileExists('milvus-helm')) {
dir ("milvus-helm") {
@ -13,10 +13,10 @@ timeout(time: 90, unit: 'MINUTES') {
}
dir ("milvus-helm") {
dir ("milvus") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
}
}
dir ("tests/milvus_python_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-milvus-engine.milvus.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
}
}


@ -1,11 +1,11 @@
timeout(time: 60, unit: 'MINUTES') {
dir ("tests/milvus_python_test") {
sh 'python3 -m pip install -r requirements.txt'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-milvus-engine.milvus.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
}
// mysql database backend test
// load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
// load "ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
// if (!fileExists('milvus-helm')) {
// dir ("milvus-helm") {
@ -14,10 +14,10 @@ timeout(time: 60, unit: 'MINUTES') {
// }
// dir ("milvus-helm") {
// dir ("milvus") {
// sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
// sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
// }
// }
// dir ("tests/milvus_python_test") {
// sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-milvus-engine.milvus.svc.cluster.local"
// sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
// }
}


@ -1,13 +0,0 @@
try {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
}
} catch (exc) {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
}
throw exc
}


@ -1,13 +0,0 @@
try {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster"
}
} catch (exc) {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster"
}
throw exc
}


@ -1,24 +0,0 @@
try {
sh 'helm init --client-only --skip-refresh --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts'
sh 'helm repo add milvus https://registry.zilliz.com/chartrepo/milvus'
sh 'helm repo update'
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("milvus/milvus-cluster") {
sh "helm install --wait --timeout 300 --set roServers.image.tag=${DOCKER_VERSION} --set woServers.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP -f ci/values.yaml --name ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster --namespace milvus-cluster --version 0.5.0 . "
}
}
/*
timeout(time: 2, unit: 'MINUTES') {
waitUntil {
def result = sh script: "nc -z -w 3 ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster-milvus-cluster-proxy.milvus-cluster.svc.cluster.local 19530", returnStatus: true
return !result
}
}
*/
} catch (exc) {
echo 'Helm running failed!'
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster"
throw exc
}


@ -1,12 +0,0 @@
timeout(time: 25, unit: 'MINUTES') {
try {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements_cluster.txt'
sh "pytest . --alluredir=cluster_test_out --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster-milvus-cluster-proxy.milvus-cluster.svc.cluster.local"
}
} catch (exc) {
echo 'Milvus Cluster Test Failed !'
throw exc
}
}


@ -1,16 +0,0 @@
try {
sh 'helm init --client-only --skip-refresh --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts'
sh 'helm repo add milvus https://registry.zilliz.com/chartrepo/milvus'
sh 'helm repo update'
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/values.yaml --namespace milvus-1 --version 0.5.0 ."
}
}
} catch (exc) {
echo 'Helm running failed!'
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
throw exc
}


@ -1,16 +0,0 @@
try {
sh 'helm init --client-only --skip-refresh --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts'
sh 'helm repo add milvus https://registry.zilliz.com/chartrepo/milvus'
sh 'helm repo update'
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.repository=\"zilliz.azurecr.cn/milvus/engine\" --set engine.image.tag=${DOCKER_VERSION} --set expose.type=loadBalancer --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/values.yaml --namespace milvus-1 --version 0.5.0 ."
}
}
} catch (exc) {
echo 'Helm running failed!'
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
throw exc
}


@ -1,28 +0,0 @@
timeout(time: 30, unit: 'MINUTES') {
try {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements.txt -i http://pypi.douban.com/simple --trusted-host pypi.douban.com'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-1.svc.cluster.local --internal=true"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
if (!fileExists('milvus-helm')) {
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
}
}
dir ("milvus-helm") {
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/db_backend/mysql_values.yaml --namespace milvus-2 --version 0.5.0 ."
}
}
dir ("${PROJECT_NAME}_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-2.svc.cluster.local --internal=true"
}
} catch (exc) {
echo 'Milvus Test Failed !'
throw exc
}
}


@ -1,29 +0,0 @@
timeout(time: 60, unit: 'MINUTES') {
try {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements.txt -i http://pypi.douban.com/simple --trusted-host pypi.douban.com'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-1.svc.cluster.local --internal=true"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
if (!fileExists('milvus-helm')) {
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
}
}
dir ("milvus-helm") {
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/db_backend/mysql_values.yaml --namespace milvus-2 --version 0.4.0 ."
}
}
dir ("${PROJECT_NAME}_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-2.svc.cluster.local --internal=true"
}
} catch (exc) {
echo 'Milvus Test Failed !'
throw exc
}
}


@ -1,30 +0,0 @@
container('milvus-build-env') {
timeout(time: 120, unit: 'MINUTES') {
gitlabCommitStatus(name: 'Build Engine') {
dir ("milvus_engine") {
try {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption',disableSubmodules: false,parentCredentials: true,recursiveSubmodules: true,reference: '',trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("core") {
sh "git config --global user.email \"test@zilliz.com\""
sh "git config --global user.name \"test\""
withCredentials([usernamePassword(credentialsId: "${params.JFROG_USER}", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
sh "./build.sh -l"
sh "rm -rf cmake_build"
sh "export JFROG_ARTFACTORY_URL='${params.JFROG_ARTFACTORY_URL}' \
&& export JFROG_USER_NAME='${USERNAME}' \
&& export JFROG_PASSWORD='${PASSWORD}' \
&& export FAISS_URL='http://192.168.1.105:6060/jinhai/faiss/-/archive/branch-0.3.0/faiss-branch-0.3.0.tar.gz' \
&& ./build.sh -t ${params.BUILD_TYPE} -d /opt/milvus -j -u -c"
sh "./coverage.sh -u root -p 123456 -t \$POD_IP"
}
}
} catch (exc) {
updateGitlabCommitStatus name: 'Build Engine', state: 'failed'
throw exc
}
}
}
}
}


@ -1,28 +0,0 @@
container('milvus-build-env') {
timeout(time: 120, unit: 'MINUTES') {
gitlabCommitStatus(name: 'Build Engine') {
dir ("milvus_engine") {
try {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption',disableSubmodules: false,parentCredentials: true,recursiveSubmodules: true,reference: '',trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("core") {
sh "git config --global user.email \"test@zilliz.com\""
sh "git config --global user.name \"test\""
withCredentials([usernamePassword(credentialsId: "${params.JFROG_USER}", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
sh "./build.sh -l"
sh "rm -rf cmake_build"
sh "export JFROG_ARTFACTORY_URL='${params.JFROG_ARTFACTORY_URL}' \
&& export JFROG_USER_NAME='${USERNAME}' \
&& export JFROG_PASSWORD='${PASSWORD}' \
&& export FAISS_URL='http://192.168.1.105:6060/jinhai/faiss/-/archive/branch-0.3.0/faiss-branch-0.3.0.tar.gz' \
&& ./build.sh -t ${params.BUILD_TYPE} -j -d /opt/milvus"
}
}
} catch (exc) {
updateGitlabCommitStatus name: 'Build Engine', state: 'failed'
throw exc
}
}
}
}
}


@ -1,38 +0,0 @@
container('publish-docker') {
timeout(time: 15, unit: 'MINUTES') {
gitlabCommitStatus(name: 'Publish Engine Docker') {
try {
dir ("${PROJECT_NAME}_build") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:build/milvus_build.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("docker/deploy/ubuntu16.04/free_version") {
sh "curl -O -u anonymous: ftp://192.168.1.126/data/${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}/${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz"
sh "tar zxvf ${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz"
try {
def customImage = docker.build("${PROJECT_NAME}/engine:${DOCKER_VERSION}")
docker.withRegistry('https://registry.zilliz.com', "${params.DOCKER_PUBLISH_USER}") {
customImage.push()
}
docker.withRegistry('https://zilliz.azurecr.cn', "${params.AZURE_DOCKER_PUBLISH_USER}") {
customImage.push()
}
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'success'
echo "Docker Pull Command: docker pull registry.zilliz.com/${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
} catch (exc) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'canceled'
throw exc
} finally {
sh "docker rmi ${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
}
}
} catch (exc) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'failed'
echo 'Publish docker failed!'
throw exc
}
}
}
}


@ -1,44 +0,0 @@
container('milvus-build-env') {
timeout(time: 5, unit: 'MINUTES') {
dir ("milvus_engine") {
dir ("core") {
gitlabCommitStatus(name: 'Packaged Engine') {
if (fileExists('milvus')) {
try {
sh "tar -zcvf ./${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz ./milvus"
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz", "${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Download Milvus Engine Binary Viewer \"http://192.168.1.126:8080/${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}/${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz\""
}
} catch (exc) {
updateGitlabCommitStatus name: 'Packaged Engine', state: 'failed'
throw exc
}
} else {
updateGitlabCommitStatus name: 'Packaged Engine', state: 'failed'
error("Milvus binary directory don't exists!")
}
}
gitlabCommitStatus(name: 'Packaged Engine lcov') {
if (fileExists('lcov_out')) {
try {
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("lcov_out/", "${PROJECT_NAME}/lcov/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Milvus lcov out Viewer \"http://192.168.1.126:8080/${PROJECT_NAME}/lcov/${JOB_NAME}-${BUILD_ID}/lcov_out/\""
}
} catch (exc) {
updateGitlabCommitStatus name: 'Packaged Engine lcov', state: 'failed'
throw exc
}
} else {
updateGitlabCommitStatus name: 'Packaged Engine lcov', state: 'failed'
error("Milvus lcov out directory don't exists!")
}
}
}
}
}
}


@ -1,26 +0,0 @@
container('milvus-build-env') {
timeout(time: 5, unit: 'MINUTES') {
dir ("milvus_engine") {
dir ("core") {
gitlabCommitStatus(name: 'Packaged Engine') {
if (fileExists('milvus')) {
try {
sh "tar -zcvf ./${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz ./milvus"
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz", "${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Download Milvus Engine Binary Viewer \"http://192.168.1.126:8080/${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}/${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz\""
}
} catch (exc) {
updateGitlabCommitStatus name: 'Packaged Engine', state: 'failed'
throw exc
}
} else {
updateGitlabCommitStatus name: 'Packaged Engine', state: 'failed'
error("Milvus binary directory don't exists!")
}
}
}
}
}
}


@ -1,35 +0,0 @@
container('publish-docker') {
timeout(time: 15, unit: 'MINUTES') {
gitlabCommitStatus(name: 'Publish Engine Docker') {
try {
dir ("${PROJECT_NAME}_build") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:build/milvus_build.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("docker/deploy/ubuntu16.04/free_version") {
sh "curl -O -u anonymous: ftp://192.168.1.126/data/${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}/${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz"
sh "tar zxvf ${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz"
try {
def customImage = docker.build("${PROJECT_NAME}/engine:${DOCKER_VERSION}")
docker.withRegistry('https://registry.zilliz.com', "${params.DOCKER_PUBLISH_USER}") {
customImage.push()
}
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'success'
echo "Docker Pull Command: docker pull registry.zilliz.com/${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
} catch (exc) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'canceled'
throw exc
} finally {
sh "docker rmi ${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
}
}
} catch (exc) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'failed'
echo 'Publish docker failed!'
throw exc
}
}
}
}


@ -1,31 +0,0 @@
timeout(time: 40, unit: 'MINUTES') {
try {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements.txt'
def service_ip = sh (script: "kubectl get svc --namespace milvus-1 ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine --template \"{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}\"",returnStdout: true).trim()
sh "pytest . --alluredir=\"test_out/staging/single/sqlite\" --ip ${service_ip}"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_staging.groovy"
if (!fileExists('milvus-helm')) {
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
}
}
dir ("milvus-helm") {
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.repository=\"zilliz.azurecr.cn/milvus/engine\" --set engine.image.tag=${DOCKER_VERSION} --set expose.type=loadBalancer --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/db_backend/mysql_values.yaml --namespace milvus-2 --version 0.5.0 ."
}
}
dir ("${PROJECT_NAME}_test") {
def service_ip = sh (script: "kubectl get svc --namespace milvus-2 ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine --template \"{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}\"",returnStdout: true).trim()
sh "pytest . --alluredir=\"test_out/staging/single/mysql\" --ip ${service_ip}"
}
} catch (exc) {
echo 'Milvus Test Failed !'
throw exc
}
}


@ -1,14 +0,0 @@
timeout(time: 5, unit: 'MINUTES') {
dir ("${PROJECT_NAME}_test") {
if (fileExists('cluster_test_out')) {
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("cluster_test_out/", "${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Milvus Dev Test Out Viewer \"ftp://192.168.1.126/data/${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}\""
}
} else {
error("Milvus Dev Test Out directory don't exists!")
}
}
}


@ -1,13 +0,0 @@
timeout(time: 5, unit: 'MINUTES') {
dir ("${PROJECT_NAME}_test") {
if (fileExists('test_out/dev')) {
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("test_out/dev/", "${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Milvus Dev Test Out Viewer \"ftp://192.168.1.126/data/${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}\""
}
} else {
error("Milvus Dev Test Out directory don't exists!")
}
}
}


@ -1,13 +0,0 @@
timeout(time: 5, unit: 'MINUTES') {
dir ("${PROJECT_NAME}_test") {
if (fileExists('test_out/staging')) {
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("test_out/staging/", "${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Milvus Dev Test Out Viewer \"ftp://192.168.1.126/data/${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}\""
}
} else {
error("Milvus Dev Test Out directory don't exists!")
}
}
}


@ -1,396 +0,0 @@
pipeline {
agent none
options {
timestamps()
}
environment {
PROJECT_NAME = "milvus"
LOWER_BUILD_TYPE = BUILD_TYPE.toLowerCase()
SEMVER = "${env.gitlabSourceBranch == null ? params.ENGINE_BRANCH.substring(params.ENGINE_BRANCH.lastIndexOf('/') + 1) : env.gitlabSourceBranch}"
GITLAB_AFTER_COMMIT = "${env.gitlabAfter == null ? null : env.gitlabAfter}"
SUFFIX_VERSION_NAME = "${env.gitlabAfter == null ? null : env.gitlabAfter.substring(0, 6)}"
DOCKER_VERSION_STR = "${env.gitlabAfter == null ? "${SEMVER}-${LOWER_BUILD_TYPE}" : "${SEMVER}-${LOWER_BUILD_TYPE}-${SUFFIX_VERSION_NAME}"}"
}
stages {
stage("Ubuntu 16.04") {
environment {
PACKAGE_VERSION = VersionNumber([
versionNumberString : '${SEMVER}-${LOWER_BUILD_TYPE}-${BUILD_DATE_FORMATTED, "yyyyMMdd"}'
]);
DOCKER_VERSION = VersionNumber([
versionNumberString : '${DOCKER_VERSION_STR}'
]);
}
stages {
stage("Run Build") {
agent {
kubernetes {
cloud 'build-kubernetes'
label 'build'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
name: milvus-build-env
labels:
app: milvus
componet: build-env
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.13
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
command:
- cat
tty: true
resources:
limits:
memory: "28Gi"
cpu: "10.0"
nvidia.com/gpu: 1
requests:
memory: "14Gi"
cpu: "5.0"
- name: milvus-mysql
image: mysql:5.6
env:
- name: MYSQL_ROOT_PASSWORD
value: 123456
ports:
- containerPort: 3306
name: mysql
"""
}
}
stages {
stage('Build') {
steps {
gitlabCommitStatus(name: 'Build') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/milvus_build.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/packaged_milvus.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Build', state: 'canceled'
echo "Milvus Build aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Build', state: 'failed'
echo "Milvus Build failure !"
}
}
}
}
stage("Publish docker and helm") {
agent {
kubernetes {
label 'publish'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: publish
componet: docker
spec:
containers:
- name: publish-docker
image: registry.zilliz.com/library/zilliz_docker:v1.0.0
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Publish Docker') {
steps {
gitlabCommitStatus(name: 'Publish Docker') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/publish_docker.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'canceled'
echo "Milvus Publish Docker aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'failed'
echo "Milvus Publish Docker failure !"
}
}
}
}
stage("Deploy to Development") {
parallel {
stage("Single Node") {
agent {
kubernetes {
label 'dev-test'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
componet: test
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.2
command:
- cat
tty: true
volumeMounts:
- name: kubeconf
mountPath: /root/.kube/
readOnly: true
volumes:
- name: kubeconf
secret:
secretName: test-cluster-config
"""
}
}
stages {
stage("Deploy to Dev") {
steps {
gitlabCommitStatus(name: 'Deloy to Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/deploy2dev.groovy"
}
}
}
}
}
stage("Dev Test") {
steps {
gitlabCommitStatus(name: 'Deloy Test') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/dev_test.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_test_out.groovy"
}
}
}
}
}
stage ("Cleanup Dev") {
steps {
gitlabCommitStatus(name: 'Cleanup Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
}
}
}
post {
always {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
success {
script {
echo "Milvus Single Node CI/CD success !"
}
}
aborted {
script {
echo "Milvus Single Node CI/CD aborted !"
}
}
failure {
script {
echo "Milvus Single Node CI/CD failure !"
}
}
}
}
// stage("Cluster") {
// agent {
// kubernetes {
// label 'dev-test'
// defaultContainer 'jnlp'
// yaml """
// apiVersion: v1
// kind: Pod
// metadata:
// labels:
// app: milvus
// componet: test
// spec:
// containers:
// - name: milvus-testframework
// image: registry.zilliz.com/milvus/milvus-test:v0.2
// command:
// - cat
// tty: true
// volumeMounts:
// - name: kubeconf
// mountPath: /root/.kube/
// readOnly: true
// volumes:
// - name: kubeconf
// secret:
// secretName: test-cluster-config
// """
// }
// }
// stages {
// stage("Deploy to Dev") {
// steps {
// gitlabCommitStatus(name: 'Deloy to Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_deploy2dev.groovy"
// }
// }
// }
// }
// }
// stage("Dev Test") {
// steps {
// gitlabCommitStatus(name: 'Deloy Test') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_dev_test.groovy"
// load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_cluster_test_out.groovy"
// }
// }
// }
// }
// }
// stage ("Cleanup Dev") {
// steps {
// gitlabCommitStatus(name: 'Cleanup Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// }
// }
// }
// post {
// always {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// success {
// script {
// echo "Milvus Cluster CI/CD success !"
// }
// }
// aborted {
// script {
// echo "Milvus Cluster CI/CD aborted !"
// }
// }
// failure {
// script {
// echo "Milvus Cluster CI/CD failure !"
// }
// }
// }
// }
}
}
}
}
}
post {
always {
script {
if (env.gitlabAfter != null) {
if (!currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
// Send an email only if the build status has changed from green/unstable to red
emailext subject: '$DEFAULT_SUBJECT',
body: '$DEFAULT_CONTENT',
recipientProviders: [
[$class: 'DevelopersRecipientProvider'],
[$class: 'RequesterRecipientProvider']
],
replyTo: '$DEFAULT_REPLYTO',
to: '$DEFAULT_RECIPIENTS'
}
}
}
}
success {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'success'
echo "Milvus CI/CD success !"
}
}
aborted {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'canceled'
echo "Milvus CI/CD aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'failed'
echo "Milvus CI/CD failure !"
}
}
}
}


@ -1,396 +0,0 @@
pipeline {
agent none
options {
timestamps()
}
environment {
PROJECT_NAME = "milvus"
LOWER_BUILD_TYPE = BUILD_TYPE.toLowerCase()
SEMVER = "${env.gitlabSourceBranch == null ? params.ENGINE_BRANCH.substring(params.ENGINE_BRANCH.lastIndexOf('/') + 1) : env.gitlabSourceBranch}"
GITLAB_AFTER_COMMIT = "${env.gitlabAfter == null ? null : env.gitlabAfter}"
SUFFIX_VERSION_NAME = "${env.gitlabAfter == null ? null : env.gitlabAfter.substring(0, 6)}"
DOCKER_VERSION_STR = "${env.gitlabAfter == null ? "${SEMVER}-${LOWER_BUILD_TYPE}" : "${SEMVER}-${LOWER_BUILD_TYPE}-${SUFFIX_VERSION_NAME}"}"
}
stages {
stage("Ubuntu 16.04") {
environment {
PACKAGE_VERSION = VersionNumber([
versionNumberString : '${SEMVER}-${LOWER_BUILD_TYPE}-${BUILD_DATE_FORMATTED, "yyyyMMdd"}'
]);
DOCKER_VERSION = VersionNumber([
versionNumberString : '${DOCKER_VERSION_STR}'
]);
}
stages {
stage("Run Build") {
agent {
kubernetes {
cloud 'build-kubernetes'
label 'build'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
name: milvus-build-env
labels:
app: milvus
componet: build-env
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.13
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
command:
- cat
tty: true
resources:
limits:
memory: "28Gi"
cpu: "10.0"
nvidia.com/gpu: 1
requests:
memory: "14Gi"
cpu: "5.0"
- name: milvus-mysql
image: mysql:5.6
env:
- name: MYSQL_ROOT_PASSWORD
value: 123456
ports:
- containerPort: 3306
name: mysql
"""
}
}
stages {
stage('Build') {
steps {
gitlabCommitStatus(name: 'Build') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/milvus_build_no_ut.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/packaged_milvus_no_ut.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Build', state: 'canceled'
echo "Milvus Build aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Build', state: 'failed'
echo "Milvus Build failure !"
}
}
}
}
stage("Publish docker and helm") {
agent {
kubernetes {
label 'publish'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: publish
component: docker
spec:
containers:
- name: publish-docker
image: registry.zilliz.com/library/zilliz_docker:v1.0.0
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Publish Docker') {
steps {
gitlabCommitStatus(name: 'Publish Docker') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/publish_docker.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'canceled'
echo "Milvus Publish Docker aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'failed'
echo "Milvus Publish Docker failure !"
}
}
}
}
stage("Deploy to Development") {
parallel {
stage("Single Node") {
agent {
kubernetes {
label 'dev-test'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
component: test
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.2
command:
- cat
tty: true
volumeMounts:
- name: kubeconf
mountPath: /root/.kube/
readOnly: true
volumes:
- name: kubeconf
secret:
secretName: test-cluster-config
"""
}
}
stages {
stage("Deploy to Dev") {
steps {
gitlabCommitStatus(name: 'Deploy to Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/deploy2dev.groovy"
}
}
}
}
}
stage("Dev Test") {
steps {
gitlabCommitStatus(name: 'Deploy Test') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/dev_test.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_test_out.groovy"
}
}
}
}
}
stage ("Cleanup Dev") {
steps {
gitlabCommitStatus(name: 'Cleanup Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
}
}
}
post {
always {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
success {
script {
echo "Milvus Single Node CI/CD success !"
}
}
aborted {
script {
echo "Milvus Single Node CI/CD aborted !"
}
}
failure {
script {
echo "Milvus Single Node CI/CD failure !"
}
}
}
}
// stage("Cluster") {
// agent {
// kubernetes {
// label 'dev-test'
// defaultContainer 'jnlp'
// yaml """
// apiVersion: v1
// kind: Pod
// metadata:
// labels:
// app: milvus
// component: test
// spec:
// containers:
// - name: milvus-testframework
// image: registry.zilliz.com/milvus/milvus-test:v0.2
// command:
// - cat
// tty: true
// volumeMounts:
// - name: kubeconf
// mountPath: /root/.kube/
// readOnly: true
// volumes:
// - name: kubeconf
// secret:
// secretName: test-cluster-config
// """
// }
// }
// stages {
// stage("Deploy to Dev") {
// steps {
// gitlabCommitStatus(name: 'Deploy to Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_deploy2dev.groovy"
// }
// }
// }
// }
// }
// stage("Dev Test") {
// steps {
// gitlabCommitStatus(name: 'Deploy Test') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_dev_test.groovy"
// load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_cluster_test_out.groovy"
// }
// }
// }
// }
// }
// stage ("Cleanup Dev") {
// steps {
// gitlabCommitStatus(name: 'Cleanup Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// }
// }
// }
// post {
// always {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// success {
// script {
// echo "Milvus Cluster CI/CD success !"
// }
// }
// aborted {
// script {
// echo "Milvus Cluster CI/CD aborted !"
// }
// }
// failure {
// script {
// echo "Milvus Cluster CI/CD failure !"
// }
// }
// }
// }
}
}
}
}
}
post {
always {
script {
if (env.gitlabAfter != null) {
if (!currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
// Send an email only if the build status has changed from green/unstable to red
emailext subject: '$DEFAULT_SUBJECT',
body: '$DEFAULT_CONTENT',
recipientProviders: [
[$class: 'DevelopersRecipientProvider'],
[$class: 'RequesterRecipientProvider']
],
replyTo: '$DEFAULT_REPLYTO',
to: '$DEFAULT_RECIPIENTS'
}
}
}
}
success {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'success'
echo "Milvus CI/CD success !"
}
}
aborted {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'canceled'
echo "Milvus CI/CD aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'failed'
echo "Milvus CI/CD failure !"
}
}
}
}

@ -1,478 +0,0 @@
pipeline {
agent none
options {
timestamps()
}
environment {
PROJECT_NAME = "milvus"
LOWER_BUILD_TYPE = BUILD_TYPE.toLowerCase()
SEMVER = "${env.gitlabSourceBranch == null ? params.ENGINE_BRANCH.substring(params.ENGINE_BRANCH.lastIndexOf('/') + 1) : env.gitlabSourceBranch}"
GITLAB_AFTER_COMMIT = "${env.gitlabAfter == null ? null : env.gitlabAfter}"
SUFFIX_VERSION_NAME = "${env.gitlabAfter == null ? null : env.gitlabAfter.substring(0, 6)}"
DOCKER_VERSION_STR = "${env.gitlabAfter == null ? '${SEMVER}-${LOWER_BUILD_TYPE}-${BUILD_DATE_FORMATTED, \"yyyyMMdd\"}' : '${SEMVER}-${LOWER_BUILD_TYPE}-${SUFFIX_VERSION_NAME}'}"
}
stages {
stage("Ubuntu 16.04") {
environment {
PACKAGE_VERSION = VersionNumber([
versionNumberString : '${SEMVER}-${LOWER_BUILD_TYPE}-${BUILD_DATE_FORMATTED, "yyyyMMdd"}'
]);
DOCKER_VERSION = VersionNumber([
versionNumberString : '${DOCKER_VERSION_STR}'
]);
}
stages {
stage("Run Build") {
agent {
kubernetes {
cloud 'build-kubernetes'
label 'build'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
name: milvus-build-env
labels:
app: milvus
component: build-env
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.13
command:
- cat
tty: true
resources:
limits:
memory: "28Gi"
cpu: "10.0"
nvidia.com/gpu: 1
requests:
memory: "14Gi"
cpu: "5.0"
"""
}
}
stages {
stage('Build') {
steps {
gitlabCommitStatus(name: 'Build') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/milvus_build.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/packaged_milvus.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Build', state: 'canceled'
echo "Milvus Build aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Build', state: 'failed'
echo "Milvus Build failure !"
}
}
}
}
stage("Publish docker and helm") {
agent {
kubernetes {
label 'publish'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: publish
component: docker
spec:
containers:
- name: publish-docker
image: registry.zilliz.com/library/zilliz_docker:v1.0.0
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Publish Docker') {
steps {
gitlabCommitStatus(name: 'Publish Docker') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/nightly_publish_docker.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'canceled'
echo "Milvus Publish Docker aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'failed'
echo "Milvus Publish Docker failure !"
}
}
}
}
stage("Deploy to Development") {
parallel {
stage("Single Node") {
agent {
kubernetes {
label 'dev-test'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
component: test
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.2
command:
- cat
tty: true
volumeMounts:
- name: kubeconf
mountPath: /root/.kube/
readOnly: true
volumes:
- name: kubeconf
secret:
secretName: test-cluster-config
"""
}
}
stages {
stage("Deploy to Dev") {
steps {
gitlabCommitStatus(name: 'Deploy to Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/deploy2dev.groovy"
}
}
}
}
}
stage("Dev Test") {
steps {
gitlabCommitStatus(name: 'Deploy Test') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/dev_test_all.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_test_out.groovy"
}
}
}
}
}
stage ("Cleanup Dev") {
steps {
gitlabCommitStatus(name: 'Cleanup Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
}
}
}
post {
always {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
success {
script {
echo "Milvus Deploy to Dev Single Node CI/CD success !"
}
}
aborted {
script {
echo "Milvus Deploy to Dev Single Node CI/CD aborted !"
}
}
failure {
script {
echo "Milvus Deploy to Dev Single Node CI/CD failure !"
}
}
}
}
// stage("Cluster") {
// agent {
// kubernetes {
// label 'dev-test'
// defaultContainer 'jnlp'
// yaml """
// apiVersion: v1
// kind: Pod
// metadata:
// labels:
// app: milvus
// component: test
// spec:
// containers:
// - name: milvus-testframework
// image: registry.zilliz.com/milvus/milvus-test:v0.2
// command:
// - cat
// tty: true
// volumeMounts:
// - name: kubeconf
// mountPath: /root/.kube/
// readOnly: true
// volumes:
// - name: kubeconf
// secret:
// secretName: test-cluster-config
// """
// }
// }
// stages {
// stage("Deploy to Dev") {
// steps {
// gitlabCommitStatus(name: 'Deploy to Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_deploy2dev.groovy"
// }
// }
// }
// }
// }
// stage("Dev Test") {
// steps {
// gitlabCommitStatus(name: 'Deploy Test') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_dev_test.groovy"
// load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_cluster_test_out.groovy"
// }
// }
// }
// }
// }
// stage ("Cleanup Dev") {
// steps {
// gitlabCommitStatus(name: 'Cleanup Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// }
// }
// }
// post {
// always {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// success {
// script {
// echo "Milvus Deploy to Dev Cluster CI/CD success !"
// }
// }
// aborted {
// script {
// echo "Milvus Deploy to Dev Cluster CI/CD aborted !"
// }
// }
// failure {
// script {
// echo "Milvus Deploy to Dev Cluster CI/CD failure !"
// }
// }
// }
// }
}
}
stage("Deploy to Staging") {
parallel {
stage("Single Node") {
agent {
kubernetes {
label 'dev-test'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
component: test
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.2
command:
- cat
tty: true
volumeMounts:
- name: kubeconf
mountPath: /root/.kube/
readOnly: true
volumes:
- name: kubeconf
secret:
secretName: aks-gpu-cluster-config
"""
}
}
stages {
stage("Deploy to Staging") {
steps {
gitlabCommitStatus(name: 'Deploy to Staging') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/deploy2staging.groovy"
}
}
}
}
}
stage("Staging Test") {
steps {
gitlabCommitStatus(name: 'Staging Test') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/staging_test.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/upload_staging_test_out.groovy"
}
}
}
}
}
stage ("Cleanup Staging") {
steps {
gitlabCommitStatus(name: 'Cleanup Staging') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_staging.groovy"
}
}
}
}
}
}
post {
always {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_staging.groovy"
}
}
}
success {
script {
echo "Milvus Deploy to Staging Single Node CI/CD success !"
}
}
aborted {
script {
echo "Milvus Deploy to Staging Single Node CI/CD aborted !"
}
}
failure {
script {
echo "Milvus Deploy to Staging Single Node CI/CD failure !"
}
}
}
}
}
}
}
}
}
post {
always {
script {
if (!currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
// Send an email only if the build status has changed from green/unstable to red
emailext subject: '$DEFAULT_SUBJECT',
body: '$DEFAULT_CONTENT',
recipientProviders: [
[$class: 'DevelopersRecipientProvider'],
[$class: 'RequesterRecipientProvider']
],
replyTo: '$DEFAULT_REPLYTO',
to: '$DEFAULT_RECIPIENTS'
}
}
}
success {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'success'
echo "Milvus CI/CD success !"
}
}
aborted {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'canceled'
echo "Milvus CI/CD aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'failed'
echo "Milvus CI/CD failure !"
}
}
}
}

@ -1,13 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
component: build-env
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.9
command:
- cat
tty: true

@ -1,22 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
labels:
app: publish
component: docker
spec:
containers:
- name: publish-docker
image: registry.zilliz.com/library/zilliz_docker:v1.0.0
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock

@ -116,7 +116,7 @@ for test in `ls ${DIR_UNITTEST}`; do
if [ $? -ne 0 ]; then
echo ${args}
echo ${DIR_UNITTEST}/${test} "run failed"
exit -1
exit 1
fi
done
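# Note: shell exit statuses are unsigned 8-bit values, so "exit -1" is reported as
# 255; the switch to small positive codes (1, 2) keeps each failure status explicit.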
@ -143,7 +143,7 @@ ${LCOV_CMD} -r "${FILE_INFO_OUTPUT}" -o "${FILE_INFO_OUTPUT_NEW}" \
if [ $? -ne 0 ]; then
echo "gen ${FILE_INFO_OUTPUT_NEW} failed"
exit -2
exit 2
fi
# gen html report

@ -74,7 +74,7 @@ function(ExternalProject_Use_Cache project_name package_file install_path)
${CMAKE_COMMAND} -E echo
"Extracting ${package_file} to ${install_path}"
COMMAND
${CMAKE_COMMAND} -E tar xzvf ${package_file} ${install_path}
${CMAKE_COMMAND} -E tar xzf ${package_file} ${install_path}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
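# Dropping "v" (verbose) from "tar xzvf" only removes the per-file listing; the
# extracted contents are identical, and CI logs stay much quieter.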

@ -0,0 +1,3 @@
alter table Tables drop column owner_table;
alter table Tables drop column partition_tag;
alter table Tables drop column version;

@ -0,0 +1,7 @@
CREATE TABLE 'TempTables' ( 'id' INTEGER PRIMARY KEY NOT NULL , 'table_id' TEXT UNIQUE NOT NULL , 'state' INTEGER NOT NULL , 'dimension' INTEGER NOT NULL , 'created_on' INTEGER NOT NULL , 'flag' INTEGER DEFAULT 0 NOT NULL , 'index_file_size' INTEGER NOT NULL , 'engine_type' INTEGER NOT NULL , 'nlist' INTEGER NOT NULL , 'metric_type' INTEGER NOT NULL);
INSERT INTO TempTables SELECT id, table_id, state, dimension, created_on, flag, index_file_size, engine_type, nlist, metric_type FROM Tables;
DROP TABLE Tables;
ALTER TABLE TempTables RENAME TO Tables;
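-- Note: the create/copy/drop/rename sequence above is the standard SQLite workaround
-- for removing columns, since older SQLite versions lack "ALTER TABLE ... DROP COLUMN";
-- the direct DROP COLUMN migration in the previous file presumably targets MySQL.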

@ -25,6 +25,7 @@
namespace milvus {
namespace cache {
#ifdef MILVUS_GPU_VERSION
std::mutex GpuCacheMgr::mutex_;
std::unordered_map<uint64_t, GpuCacheMgrPtr> GpuCacheMgr::instance_;
@ -76,6 +77,7 @@ GpuCacheMgr::GetIndex(const std::string& key) {
DataObjPtr obj = GetItem(key);
return obj;
}
#endif
} // namespace cache
} // namespace milvus

@ -25,6 +25,7 @@
namespace milvus {
namespace cache {
#ifdef MILVUS_GPU_VERSION
class GpuCacheMgr;
using GpuCacheMgrPtr = std::shared_ptr<GpuCacheMgr>;
@ -42,6 +43,7 @@ class GpuCacheMgr : public CacheMgr<DataObjPtr> {
static std::mutex mutex_;
static std::unordered_map<uint64_t, GpuCacheMgrPtr> instance_;
};
#endif
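// With MILVUS_GPU_VERSION undefined (the pure CPU build), this whole declaration
// compiles away, so CPU-only binaries carry no GPU cache manager symbols.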
} // namespace cache
} // namespace milvus

@ -838,6 +838,25 @@ DBImpl::BackgroundBuildIndex() {
// ENGINE_LOG_TRACE << "Background build index thread exit";
}
Status
DBImpl::GetFilesToBuildIndex(const std::string& table_id, const std::vector<int>& file_types,
meta::TableFilesSchema& files) {
files.clear();
auto status = meta_ptr_->FilesByType(table_id, file_types, files);
// only build index for files whose row count is greater than a certain threshold
for (auto it = files.begin(); it != files.end();) {
if ((*it).file_type_ == static_cast<int>(meta::TableFileSchema::RAW) &&
(*it).row_count_ < meta::BUILD_INDEX_THRESHOLD) {
it = files.erase(it);
} else {
it++;
}
}
return Status::OK();
}
Status
DBImpl::GetFilesToSearch(const std::string& table_id, const std::vector<size_t>& file_ids, const meta::DatesT& dates,
meta::TableFilesSchema& files) {
@ -946,18 +965,18 @@ DBImpl::BuildTableIndexRecursively(const std::string& table_id, const TableIndex
}
// get files to build index
std::vector<std::string> file_ids;
auto status = meta_ptr_->FilesByType(table_id, file_types, file_ids);
meta::TableFilesSchema table_files;
auto status = GetFilesToBuildIndex(table_id, file_types, table_files);
int times = 1;
while (!file_ids.empty()) {
while (!table_files.empty()) {
ENGINE_LOG_DEBUG << "Non index files detected! Will build index " << times;
if (index.engine_type_ != (int)EngineType::FAISS_IDMAP) {
status = meta_ptr_->UpdateTableFilesToIndex(table_id);
}
std::this_thread::sleep_for(std::chrono::milliseconds(std::min(10 * 1000, times * 100)));
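// Illustrative timing: the wait grows linearly (100 ms, 200 ms, ...) and is capped
// at 10 seconds, polling until no qualifying file remains to be indexed.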
status = meta_ptr_->FilesByType(table_id, file_types, file_ids);
GetFilesToBuildIndex(table_id, file_types, table_files);
times++;
}

@ -152,6 +152,10 @@ class DBImpl : public DB {
Status
MemSerialize();
Status
GetFilesToBuildIndex(const std::string& table_id, const std::vector<int>& file_types,
meta::TableFilesSchema& files);
Status
GetFilesToSearch(const std::string& table_id, const std::vector<size_t>& file_ids, const meta::DatesT& dates,
meta::TableFilesSchema& files);

@ -151,6 +151,7 @@ ExecutionEngineImpl::HybridLoad() const {
return;
}
#ifdef MILVUS_GPU_VERSION
const std::string key = location_ + ".quantizer";
server::Config& config = server::Config::GetInstance();
@ -205,6 +206,7 @@ ExecutionEngineImpl::HybridLoad() const {
auto cache_quantizer = std::make_shared<CachedQuantizer>(quantizer);
cache::GpuCacheMgr::GetInstance(best_device_id)->InsertItem(key, cache_quantizer);
}
#endif
}
void
@ -342,6 +344,7 @@ ExecutionEngineImpl::CopyToGpu(uint64_t device_id, bool hybrid) {
}
#endif
#ifdef MILVUS_GPU_VERSION
auto index = std::static_pointer_cast<VecIndex>(cache::GpuCacheMgr::GetInstance(device_id)->GetIndex(location_));
bool already_in_cache = (index != nullptr);
if (already_in_cache) {
@ -364,16 +367,19 @@ ExecutionEngineImpl::CopyToGpu(uint64_t device_id, bool hybrid) {
if (!already_in_cache) {
GpuCache(device_id);
}
#endif
return Status::OK();
}
Status
ExecutionEngineImpl::CopyToIndexFileToGpu(uint64_t device_id) {
#ifdef MILVUS_GPU_VERSION
gpu_num_ = device_id;
auto to_index_data = std::make_shared<ToIndexData>(PhysicalSize());
cache::DataObjPtr obj = std::static_pointer_cast<cache::DataObj>(to_index_data);
milvus::cache::GpuCacheMgr::GetInstance(device_id)->InsertItem(location_, obj);
#endif
return Status::OK();
}
@ -584,15 +590,17 @@ ExecutionEngineImpl::Cache() {
Status
ExecutionEngineImpl::GpuCache(uint64_t gpu_id) {
#ifdef MILVUS_GPU_VERSION
cache::DataObjPtr obj = std::static_pointer_cast<cache::DataObj>(index_);
milvus::cache::GpuCacheMgr::GetInstance(gpu_id)->InsertItem(location_, obj);
#endif
return Status::OK();
}
// TODO(linxj): remove.
Status
ExecutionEngineImpl::Init() {
#ifdef MILVUS_GPU_VERSION
server::Config& config = server::Config::GetInstance();
std::vector<int64_t> gpu_ids;
Status s = config.GetGpuResourceConfigBuildIndexResources(gpu_ids);
@ -604,6 +612,9 @@ ExecutionEngineImpl::Init() {
std::string msg = "Invalid gpu_num";
return Status(SERVER_INVALID_ARGUMENT, msg);
#else
return Status::OK();
#endif
}
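// In the CPU edition, Init() is therefore a no-op that returns OK, since there are
// no GPU build-index resources to validate.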
} // namespace engine

@ -109,8 +109,7 @@ class Meta {
FilesToIndex(TableFilesSchema&) = 0;
virtual Status
FilesByType(const std::string& table_id, const std::vector<int>& file_types,
std::vector<std::string>& file_ids) = 0;
FilesByType(const std::string& table_id, const std::vector<int>& file_types, TableFilesSchema& table_files) = 0;
virtual Status
Size(uint64_t& result) = 0;

@ -32,6 +32,13 @@ const size_t H_SEC = 60 * M_SEC;
const size_t D_SEC = 24 * H_SEC;
const size_t W_SEC = 7 * D_SEC;
// This value is used to ignore small raw files when building index.
// The reasons are:
// 1. Brute-force search on a small raw file can perform better than search on a small index file.
// 2. Small raw files can be merged into larger files, which reduces the number of fragmented files.
// The value was chosen based on tests with small raw/index files.
const size_t BUILD_INDEX_THRESHOLD = 5000;
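// Illustrative effect: a RAW file with 3,000 rows stays in brute-force search (and
// may later be merged into a larger file), while one with 8,000 rows becomes TO_INDEX.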
} // namespace meta
} // namespace engine
} // namespace milvus

@ -959,6 +959,7 @@ MySQLMetaImpl::UpdateTableFilesToIndex(const std::string& table_id) {
updateTableFilesToIndexQuery << "UPDATE " << META_TABLEFILES
<< " SET file_type = " << std::to_string(TableFileSchema::TO_INDEX)
<< " WHERE table_id = " << mysqlpp::quote << table_id
<< " AND row_count >= " << std::to_string(meta::BUILD_INDEX_THRESHOLD)
<< " AND file_type = " << std::to_string(TableFileSchema::RAW) << ";";
ENGINE_LOG_DEBUG << "MySQLMetaImpl::UpdateTableFilesToIndex: " << updateTableFilesToIndexQuery.str();
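// Illustrative generated SQL (identifiers and enum values shown symbolically):
//   UPDATE <META_TABLEFILES> SET file_type = TO_INDEX
//   WHERE table_id = 'my_table' AND row_count >= 5000 AND file_type = RAW;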
@ -1527,13 +1528,13 @@ MySQLMetaImpl::FilesToIndex(TableFilesSchema& files) {
Status
MySQLMetaImpl::FilesByType(const std::string& table_id, const std::vector<int>& file_types,
std::vector<std::string>& file_ids) {
TableFilesSchema& table_files) {
if (file_types.empty()) {
return Status(DB_ERROR, "file types array is empty");
}
try {
file_ids.clear();
table_files.clear();
mysqlpp::StoreQueryResult res;
{
@ -1553,9 +1554,10 @@ MySQLMetaImpl::FilesByType(const std::string& table_id, const std::vector<int>&
mysqlpp::Query hasNonIndexFilesQuery = connectionPtr->query();
// since table_id is a unique column we just need to check whether it exists or not
hasNonIndexFilesQuery << "SELECT file_id, file_type"
<< " FROM " << META_TABLEFILES << " WHERE table_id = " << mysqlpp::quote << table_id
<< " AND file_type in (" << types << ");";
hasNonIndexFilesQuery
<< "SELECT id, engine_type, file_id, file_type, file_size, row_count, date, created_on"
<< " FROM " << META_TABLEFILES << " WHERE table_id = " << mysqlpp::quote << table_id
<< " AND file_type in (" << types << ");";
ENGINE_LOG_DEBUG << "MySQLMetaImpl::FilesByType: " << hasNonIndexFilesQuery.str();
@ -1566,9 +1568,18 @@ MySQLMetaImpl::FilesByType(const std::string& table_id, const std::vector<int>&
int raw_count = 0, new_count = 0, new_merge_count = 0, new_index_count = 0;
int to_index_count = 0, index_count = 0, backup_count = 0;
for (auto& resRow : res) {
std::string file_id;
resRow["file_id"].to_string(file_id);
file_ids.push_back(file_id);
TableFileSchema file_schema;
file_schema.id_ = resRow["id"];
file_schema.table_id_ = table_id;
file_schema.engine_type_ = resRow["engine_type"];
resRow["file_id"].to_string(file_schema.file_id_);
file_schema.file_type_ = resRow["file_type"];
file_schema.file_size_ = resRow["file_size"];
file_schema.row_count_ = resRow["row_count"];
file_schema.date_ = resRow["date"];
file_schema.created_on_ = resRow["created_on"];
table_files.emplace_back(file_schema);
int32_t file_type = resRow["file_type"];
switch (file_type) {

@ -108,7 +108,7 @@ class MySQLMetaImpl : public Meta {
Status
FilesByType(const std::string& table_id, const std::vector<int>& file_types,
std::vector<std::string>& file_ids) override;
TableFilesSchema& table_files) override;
Status
Archive() override;

@ -58,7 +58,7 @@ HandleException(const std::string& desc, const char* what = nullptr) {
} // namespace
inline auto
StoragePrototype(const std::string &path) {
StoragePrototype(const std::string& path) {
return make_storage(path,
make_table(META_TABLES,
make_column("id", &TableSchema::id_, primary_key()),
@ -160,7 +160,7 @@ SqliteMetaImpl::Initialize() {
}
Status
SqliteMetaImpl::CreateTable(TableSchema &table_schema) {
SqliteMetaImpl::CreateTable(TableSchema& table_schema) {
try {
server::MetricCollector metric;
@ -188,20 +188,20 @@ SqliteMetaImpl::CreateTable(TableSchema &table_schema) {
try {
auto id = ConnectorPtr->insert(table_schema);
table_schema.id_ = id;
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when create table", e.what());
}
ENGINE_LOG_DEBUG << "Successfully create table: " << table_schema.table_id_;
return utils::CreateTablePath(options_, table_schema.table_id_);
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when create table", e.what());
}
}
Status
SqliteMetaImpl::DescribeTable(TableSchema &table_schema) {
SqliteMetaImpl::DescribeTable(TableSchema& table_schema) {
try {
server::MetricCollector metric;
@ -218,7 +218,7 @@ SqliteMetaImpl::DescribeTable(TableSchema &table_schema) {
&TableSchema::partition_tag_,
&TableSchema::version_),
where(c(&TableSchema::table_id_) == table_schema.table_id_
and c(&TableSchema::state_) != (int) TableSchema::TO_DELETE));
and c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
if (groups.size() == 1) {
table_schema.id_ = std::get<0>(groups[0]);
@ -236,7 +236,7 @@ SqliteMetaImpl::DescribeTable(TableSchema &table_schema) {
} else {
return Status(DB_NOT_FOUND, "Table " + table_schema.table_id_ + " not found");
}
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when describe table", e.what());
}
@ -244,20 +244,20 @@ SqliteMetaImpl::DescribeTable(TableSchema &table_schema) {
}
Status
SqliteMetaImpl::HasTable(const std::string &table_id, bool &has_or_not) {
SqliteMetaImpl::HasTable(const std::string& table_id, bool& has_or_not) {
has_or_not = false;
try {
server::MetricCollector metric;
auto tables = ConnectorPtr->select(columns(&TableSchema::id_),
where(c(&TableSchema::table_id_) == table_id
and c(&TableSchema::state_) != (int) TableSchema::TO_DELETE));
and c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
if (tables.size() == 1) {
has_or_not = true;
} else {
has_or_not = false;
}
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when lookup table", e.what());
}
@ -265,7 +265,7 @@ SqliteMetaImpl::HasTable(const std::string &table_id, bool &has_or_not) {
}
Status
SqliteMetaImpl::AllTables(std::vector<TableSchema> &table_schema_array) {
SqliteMetaImpl::AllTables(std::vector<TableSchema>& table_schema_array) {
try {
server::MetricCollector metric;
@ -281,8 +281,8 @@ SqliteMetaImpl::AllTables(std::vector<TableSchema> &table_schema_array) {
&TableSchema::owner_table_,
&TableSchema::partition_tag_,
&TableSchema::version_),
where(c(&TableSchema::state_) != (int) TableSchema::TO_DELETE));
for (auto &table : selected) {
where(c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
for (auto& table : selected) {
TableSchema schema;
schema.id_ = std::get<0>(table);
schema.table_id_ = std::get<1>(table);
@ -299,7 +299,7 @@ SqliteMetaImpl::AllTables(std::vector<TableSchema> &table_schema_array) {
table_schema_array.emplace_back(schema);
}
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when lookup all tables", e.what());
}
@ -307,7 +307,7 @@ SqliteMetaImpl::AllTables(std::vector<TableSchema> &table_schema_array) {
}
Status
SqliteMetaImpl::DropTable(const std::string &table_id) {
SqliteMetaImpl::DropTable(const std::string& table_id) {
try {
server::MetricCollector metric;
@ -317,13 +317,13 @@ SqliteMetaImpl::DropTable(const std::string &table_id) {
//soft delete table
ConnectorPtr->update_all(
set(
c(&TableSchema::state_) = (int) TableSchema::TO_DELETE),
c(&TableSchema::state_) = (int)TableSchema::TO_DELETE),
where(
c(&TableSchema::table_id_) == table_id and
c(&TableSchema::state_) != (int) TableSchema::TO_DELETE));
c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
ENGINE_LOG_DEBUG << "Successfully delete table, table id = " << table_id;
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when delete table", e.what());
}
@ -331,7 +331,7 @@ SqliteMetaImpl::DropTable(const std::string &table_id) {
}
Status
SqliteMetaImpl::DeleteTableFiles(const std::string &table_id) {
SqliteMetaImpl::DeleteTableFiles(const std::string& table_id) {
try {
server::MetricCollector metric;
@ -341,14 +341,14 @@ SqliteMetaImpl::DeleteTableFiles(const std::string &table_id) {
//soft delete table files
ConnectorPtr->update_all(
set(
c(&TableFileSchema::file_type_) = (int) TableFileSchema::TO_DELETE,
c(&TableFileSchema::file_type_) = (int)TableFileSchema::TO_DELETE,
c(&TableFileSchema::updated_time_) = utils::GetMicroSecTimeStamp()),
where(
c(&TableFileSchema::table_id_) == table_id and
c(&TableFileSchema::file_type_) != (int) TableFileSchema::TO_DELETE));
c(&TableFileSchema::file_type_) != (int)TableFileSchema::TO_DELETE));
ENGINE_LOG_DEBUG << "Successfully delete table files, table id = " << table_id;
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when delete table files", e.what());
}
@ -356,7 +356,7 @@ SqliteMetaImpl::DeleteTableFiles(const std::string &table_id) {
}
Status
SqliteMetaImpl::CreateTableFile(TableFileSchema &file_schema) {
SqliteMetaImpl::CreateTableFile(TableFileSchema& file_schema) {
if (file_schema.date_ == EmptyDate) {
file_schema.date_ = utils::GetDate();
}
@ -389,7 +389,7 @@ SqliteMetaImpl::CreateTableFile(TableFileSchema &file_schema) {
ENGINE_LOG_DEBUG << "Successfully create table file, file id = " << file_schema.file_id_;
return utils::CreateTableFilePath(options_, file_schema);
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when create table file", e.what());
}
@ -398,8 +398,8 @@ SqliteMetaImpl::CreateTableFile(TableFileSchema &file_schema) {
// TODO(myh): Delete single vector by id
Status
SqliteMetaImpl::DropDataByDate(const std::string &table_id,
const DatesT &dates) {
SqliteMetaImpl::DropDataByDate(const std::string& table_id,
const DatesT& dates) {
if (dates.empty()) {
return Status::OK();
}
@ -440,7 +440,7 @@ SqliteMetaImpl::DropDataByDate(const std::string &table_id,
}
ENGINE_LOG_DEBUG << "Successfully drop data by date, table id = " << table_schema.table_id_;
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when drop partition", e.what());
}
@ -448,9 +448,9 @@ SqliteMetaImpl::DropDataByDate(const std::string &table_id,
}
Status
SqliteMetaImpl::GetTableFiles(const std::string &table_id,
const std::vector<size_t> &ids,
TableFilesSchema &table_files) {
SqliteMetaImpl::GetTableFiles(const std::string& table_id,
const std::vector<size_t>& ids,
TableFilesSchema& table_files) {
try {
table_files.clear();
auto files = ConnectorPtr->select(columns(&TableFileSchema::id_,
@ -463,7 +463,7 @@ SqliteMetaImpl::GetTableFiles(const std::string &table_id,
&TableFileSchema::created_on_),
where(c(&TableFileSchema::table_id_) == table_id and
in(&TableFileSchema::id_, ids) and
c(&TableFileSchema::file_type_) != (int) TableFileSchema::TO_DELETE));
c(&TableFileSchema::file_type_) != (int)TableFileSchema::TO_DELETE));
TableSchema table_schema;
table_schema.table_id_ = table_id;
auto status = DescribeTable(table_schema);
@ -472,7 +472,7 @@ SqliteMetaImpl::GetTableFiles(const std::string &table_id,
}
Status result;
for (auto &file : files) {
for (auto& file : files) {
TableFileSchema file_schema;
file_schema.table_id_ = table_id;
file_schema.id_ = std::get<0>(file);
@ -495,13 +495,13 @@ SqliteMetaImpl::GetTableFiles(const std::string &table_id,
ENGINE_LOG_DEBUG << "Get table files by id";
return result;
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when lookup table files", e.what());
}
}
Status
SqliteMetaImpl::UpdateTableFlag(const std::string &table_id, int64_t flag) {
SqliteMetaImpl::UpdateTableFlag(const std::string& table_id, int64_t flag) {
try {
server::MetricCollector metric;
@ -512,7 +512,7 @@ SqliteMetaImpl::UpdateTableFlag(const std::string &table_id, int64_t flag) {
where(
c(&TableSchema::table_id_) == table_id));
ENGINE_LOG_DEBUG << "Successfully update table flag, table id = " << table_id;
} catch (std::exception &e) {
} catch (std::exception& e) {
std::string msg = "Encounter exception when update table flag: table_id = " + table_id;
return HandleException(msg, e.what());
}
@ -521,7 +521,7 @@ SqliteMetaImpl::UpdateTableFlag(const std::string &table_id, int64_t flag) {
}
Status
SqliteMetaImpl::UpdateTableFile(TableFileSchema &file_schema) {
SqliteMetaImpl::UpdateTableFile(TableFileSchema& file_schema) {
file_schema.updated_time_ = utils::GetMicroSecTimeStamp();
try {
server::MetricCollector metric;
@ -534,14 +534,14 @@ SqliteMetaImpl::UpdateTableFile(TableFileSchema &file_schema) {
//if the table has been deleted, just mark the table file as TO_DELETE
//clean thread will delete the file later
if (tables.size() < 1 || std::get<0>(tables[0]) == (int) TableSchema::TO_DELETE) {
if (tables.size() < 1 || std::get<0>(tables[0]) == (int)TableSchema::TO_DELETE) {
file_schema.file_type_ = TableFileSchema::TO_DELETE;
}
ConnectorPtr->update(file_schema);
ENGINE_LOG_DEBUG << "Update single table file, file id = " << file_schema.file_id_;
} catch (std::exception &e) {
} catch (std::exception& e) {
std::string msg = "Exception update table file: table_id = " + file_schema.table_id_
+ " file_id = " + file_schema.file_id_;
return HandleException(msg, e.what());
@ -550,7 +550,7 @@ SqliteMetaImpl::UpdateTableFile(TableFileSchema &file_schema) {
}
Status
SqliteMetaImpl::UpdateTableFiles(TableFilesSchema &files) {
SqliteMetaImpl::UpdateTableFiles(TableFilesSchema& files) {
try {
server::MetricCollector metric;
@ -558,13 +558,13 @@ SqliteMetaImpl::UpdateTableFiles(TableFilesSchema &files) {
std::lock_guard<std::mutex> meta_lock(meta_mutex_);
std::map<std::string, bool> has_tables;
for (auto &file : files) {
for (auto& file : files) {
if (has_tables.find(file.table_id_) != has_tables.end()) {
continue;
}
auto tables = ConnectorPtr->select(columns(&TableSchema::id_),
where(c(&TableSchema::table_id_) == file.table_id_
and c(&TableSchema::state_) != (int) TableSchema::TO_DELETE));
and c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
if (tables.size() >= 1) {
has_tables[file.table_id_] = true;
} else {
@ -573,7 +573,7 @@ SqliteMetaImpl::UpdateTableFiles(TableFilesSchema &files) {
}
auto commited = ConnectorPtr->transaction([&]() mutable {
for (auto &file : files) {
for (auto& file : files) {
if (!has_tables[file.table_id_]) {
file.file_type_ = TableFileSchema::TO_DELETE;
}
@ -589,7 +589,7 @@ SqliteMetaImpl::UpdateTableFiles(TableFilesSchema &files) {
}
ENGINE_LOG_DEBUG << "Update " << files.size() << " table files";
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when update table files", e.what());
}
return Status::OK();
@ -613,7 +613,7 @@ SqliteMetaImpl::UpdateTableIndex(const std::string& table_id, const TableIndex&
&TableSchema::partition_tag_,
&TableSchema::version_),
where(c(&TableSchema::table_id_) == table_id
and c(&TableSchema::state_) != (int) TableSchema::TO_DELETE));
and c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
if (tables.size() > 0) {
meta::TableSchema table_schema;
@ -639,11 +639,11 @@ SqliteMetaImpl::UpdateTableIndex(const std::string& table_id, const TableIndex&
//set all backup file to raw
ConnectorPtr->update_all(
set(
c(&TableFileSchema::file_type_) = (int) TableFileSchema::RAW,
c(&TableFileSchema::file_type_) = (int)TableFileSchema::RAW,
c(&TableFileSchema::updated_time_) = utils::GetMicroSecTimeStamp()),
where(
c(&TableFileSchema::table_id_) == table_id and
c(&TableFileSchema::file_type_) == (int) TableFileSchema::BACKUP));
c(&TableFileSchema::file_type_) == (int)TableFileSchema::BACKUP));
ENGINE_LOG_DEBUG << "Successfully update table index, table id = " << table_id;
} catch (std::exception& e) {
@ -655,7 +655,7 @@ SqliteMetaImpl::UpdateTableIndex(const std::string& table_id, const TableIndex&
}
Status
SqliteMetaImpl::UpdateTableFilesToIndex(const std::string &table_id) {
SqliteMetaImpl::UpdateTableFilesToIndex(const std::string& table_id) {
try {
server::MetricCollector metric;
@ -664,13 +664,14 @@ SqliteMetaImpl::UpdateTableFilesToIndex(const std::string &table_id) {
ConnectorPtr->update_all(
set(
c(&TableFileSchema::file_type_) = (int) TableFileSchema::TO_INDEX),
c(&TableFileSchema::file_type_) = (int)TableFileSchema::TO_INDEX),
where(
c(&TableFileSchema::table_id_) == table_id and
c(&TableFileSchema::file_type_) == (int) TableFileSchema::RAW));
c(&TableFileSchema::row_count_) >= meta::BUILD_INDEX_THRESHOLD and
c(&TableFileSchema::file_type_) == (int)TableFileSchema::RAW));
ENGINE_LOG_DEBUG << "Update files to to_index, table id = " << table_id;
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when update table files to to_index", e.what());
}
@ -686,7 +687,7 @@ SqliteMetaImpl::DescribeTableIndex(const std::string& table_id, TableIndex& inde
&TableSchema::nlist_,
&TableSchema::metric_type_),
where(c(&TableSchema::table_id_) == table_id
and c(&TableSchema::state_) != (int) TableSchema::TO_DELETE));
and c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
if (groups.size() == 1) {
index.engine_type_ = std::get<0>(groups[0]);
@ -713,20 +714,20 @@ SqliteMetaImpl::DropTableIndex(const std::string& table_id) {
//soft delete index files
ConnectorPtr->update_all(
set(
c(&TableFileSchema::file_type_) = (int) TableFileSchema::TO_DELETE,
c(&TableFileSchema::file_type_) = (int)TableFileSchema::TO_DELETE,
c(&TableFileSchema::updated_time_) = utils::GetMicroSecTimeStamp()),
where(
c(&TableFileSchema::table_id_) == table_id and
c(&TableFileSchema::file_type_) == (int) TableFileSchema::INDEX));
c(&TableFileSchema::file_type_) == (int)TableFileSchema::INDEX));
//set all backup file to raw
ConnectorPtr->update_all(
set(
c(&TableFileSchema::file_type_) = (int) TableFileSchema::RAW,
c(&TableFileSchema::file_type_) = (int)TableFileSchema::RAW,
c(&TableFileSchema::updated_time_) = utils::GetMicroSecTimeStamp()),
where(
c(&TableFileSchema::table_id_) == table_id and
c(&TableFileSchema::file_type_) == (int) TableFileSchema::BACKUP));
c(&TableFileSchema::file_type_) == (int)TableFileSchema::BACKUP));
//set table index type to raw
ConnectorPtr->update_all(
@ -738,7 +739,7 @@ SqliteMetaImpl::DropTableIndex(const std::string& table_id) {
c(&TableSchema::table_id_) == table_id));
ENGINE_LOG_DEBUG << "Successfully drop table index, table id = " << table_id;
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when delete table index files", e.what());
}
@ -746,7 +747,9 @@ SqliteMetaImpl::DropTableIndex(const std::string& table_id) {
}
Status
SqliteMetaImpl::CreatePartition(const std::string& table_id, const std::string& partition_name, const std::string& tag) {
SqliteMetaImpl::CreatePartition(const std::string& table_id,
const std::string& partition_name,
const std::string& tag) {
server::MetricCollector metric;
TableSchema table_schema;
@ -757,7 +760,7 @@ SqliteMetaImpl::CreatePartition(const std::string& table_id, const std::string&
}
// not allow create partition under partition
if(!table_schema.owner_table_.empty()) {
if (!table_schema.owner_table_.empty()) {
return Status(DB_ERROR, "Nested partition is not allowed");
}
@ -769,7 +772,7 @@ SqliteMetaImpl::CreatePartition(const std::string& table_id, const std::string&
// not allow duplicated partition
std::string exist_partition;
GetPartitionName(table_id, valid_tag, exist_partition);
if(!exist_partition.empty()) {
if (!exist_partition.empty()) {
return Status(DB_ERROR, "Duplicate partition is not allowed");
}
@ -805,16 +808,16 @@ SqliteMetaImpl::ShowPartitions(const std::string& table_id, std::vector<meta::Ta
server::MetricCollector metric;
auto partitions = ConnectorPtr->select(columns(&TableSchema::table_id_),
where(c(&TableSchema::owner_table_) == table_id
and c(&TableSchema::state_) != (int) TableSchema::TO_DELETE));
for(size_t i = 0; i < partitions.size(); i++) {
where(c(&TableSchema::owner_table_) == table_id
and c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
for (size_t i = 0; i < partitions.size(); i++) {
std::string partition_name = std::get<0>(partitions[i]);
meta::TableSchema partition_schema;
partition_schema.table_id_ = partition_name;
DescribeTable(partition_schema);
partiton_schema_array.emplace_back(partition_schema);
}
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when show partitions", e.what());
}
@ -832,14 +835,15 @@ SqliteMetaImpl::GetPartitionName(const std::string& table_id, const std::string&
server::StringHelpFunctions::TrimStringBlank(valid_tag);
auto name = ConnectorPtr->select(columns(&TableSchema::table_id_),
where(c(&TableSchema::owner_table_) == table_id
and c(&TableSchema::partition_tag_) == valid_tag));
where(c(&TableSchema::owner_table_) == table_id
and c(&TableSchema::partition_tag_) == valid_tag
and c(&TableSchema::state_) != (int)TableSchema::TO_DELETE));
if (name.size() > 0) {
partition_name = std::get<0>(name[0]);
} else {
return Status(DB_NOT_FOUND, "Table " + table_id + "'s partition " + valid_tag + " not found");
}
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when get partition name", e.what());
}
@ -1032,7 +1036,7 @@ SqliteMetaImpl::FilesToMerge(const std::string& table_id, DatePartionedTableFile
}
Status
SqliteMetaImpl::FilesToIndex(TableFilesSchema &files) {
SqliteMetaImpl::FilesToIndex(TableFilesSchema& files) {
files.clear();
try {
@ -1048,13 +1052,13 @@ SqliteMetaImpl::FilesToIndex(TableFilesSchema &files) {
&TableFileSchema::engine_type_,
&TableFileSchema::created_on_),
where(c(&TableFileSchema::file_type_)
== (int) TableFileSchema::TO_INDEX));
== (int)TableFileSchema::TO_INDEX));
std::map<std::string, TableSchema> groups;
TableFileSchema table_file;
Status ret;
for (auto &file : selected) {
for (auto& file : selected) {
table_file.id_ = std::get<0>(file);
table_file.table_id_ = std::get<1>(file);
table_file.file_id_ = std::get<2>(file);
@ -1090,48 +1094,66 @@ SqliteMetaImpl::FilesToIndex(TableFilesSchema &files) {
ENGINE_LOG_DEBUG << "Collect " << selected.size() << " to-index files";
}
return ret;
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when iterate raw files", e.what());
}
}
Status
SqliteMetaImpl::FilesByType(const std::string &table_id,
const std::vector<int> &file_types,
std::vector<std::string> &file_ids) {
SqliteMetaImpl::FilesByType(const std::string& table_id,
const std::vector<int>& file_types,
TableFilesSchema& table_files) {
if (file_types.empty()) {
return Status(DB_ERROR, "file types array is empty");
}
try {
file_ids.clear();
auto selected = ConnectorPtr->select(columns(&TableFileSchema::file_id_,
&TableFileSchema::file_type_),
table_files.clear();
auto selected = ConnectorPtr->select(columns(&TableFileSchema::id_,
&TableFileSchema::file_id_,
&TableFileSchema::file_type_,
&TableFileSchema::file_size_,
&TableFileSchema::row_count_,
&TableFileSchema::date_,
&TableFileSchema::engine_type_,
&TableFileSchema::created_on_),
where(in(&TableFileSchema::file_type_, file_types)
and c(&TableFileSchema::table_id_) == table_id));
if (selected.size() >= 1) {
int raw_count = 0, new_count = 0, new_merge_count = 0, new_index_count = 0;
int to_index_count = 0, index_count = 0, backup_count = 0;
for (auto &file : selected) {
file_ids.push_back(std::get<0>(file));
switch (std::get<1>(file)) {
case (int) TableFileSchema::RAW:raw_count++;
for (auto& file : selected) {
TableFileSchema file_schema;
file_schema.table_id_ = table_id;
file_schema.id_ = std::get<0>(file);
file_schema.file_id_ = std::get<1>(file);
file_schema.file_type_ = std::get<2>(file);
file_schema.file_size_ = std::get<3>(file);
file_schema.row_count_ = std::get<4>(file);
file_schema.date_ = std::get<5>(file);
file_schema.engine_type_ = std::get<6>(file);
file_schema.created_on_ = std::get<7>(file);
switch (file_schema.file_type_) {
case (int)TableFileSchema::RAW:raw_count++;
break;
case (int) TableFileSchema::NEW:new_count++;
case (int)TableFileSchema::NEW:new_count++;
break;
case (int) TableFileSchema::NEW_MERGE:new_merge_count++;
case (int)TableFileSchema::NEW_MERGE:new_merge_count++;
break;
case (int) TableFileSchema::NEW_INDEX:new_index_count++;
case (int)TableFileSchema::NEW_INDEX:new_index_count++;
break;
case (int) TableFileSchema::TO_INDEX:to_index_count++;
case (int)TableFileSchema::TO_INDEX:to_index_count++;
break;
case (int) TableFileSchema::INDEX:index_count++;
case (int)TableFileSchema::INDEX:index_count++;
break;
case (int) TableFileSchema::BACKUP:backup_count++;
case (int)TableFileSchema::BACKUP:backup_count++;
break;
default:break;
}
table_files.emplace_back(file_schema);
}
ENGINE_LOG_DEBUG << "Table " << table_id << " currently has raw files:" << raw_count
@ -1139,13 +1161,12 @@ SqliteMetaImpl::FilesByType(const std::string &table_id,
<< " new_index files:" << new_index_count << " to_index files:" << to_index_count
<< " index files:" << index_count << " backup files:" << backup_count;
}
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when check non index files", e.what());
}
return Status::OK();
}
// TODO(myh): Support swap to cloud storage
Status
SqliteMetaImpl::Archive() {
@ -1166,11 +1187,11 @@ SqliteMetaImpl::Archive() {
ConnectorPtr->update_all(
set(
c(&TableFileSchema::file_type_) = (int) TableFileSchema::TO_DELETE),
c(&TableFileSchema::file_type_) = (int)TableFileSchema::TO_DELETE),
where(
c(&TableFileSchema::created_on_) < (int64_t) (now - usecs) and
c(&TableFileSchema::file_type_) != (int) TableFileSchema::TO_DELETE));
} catch (std::exception &e) {
c(&TableFileSchema::created_on_) < (int64_t)(now - usecs) and
c(&TableFileSchema::file_type_) != (int)TableFileSchema::TO_DELETE));
} catch (std::exception& e) {
return HandleException("Encounter exception when update table files", e.what());
}
@ -1218,15 +1239,15 @@ SqliteMetaImpl::CleanUp() {
std::lock_guard<std::mutex> meta_lock(meta_mutex_);
std::vector<int> file_types = {
(int) TableFileSchema::NEW,
(int) TableFileSchema::NEW_INDEX,
(int) TableFileSchema::NEW_MERGE
(int)TableFileSchema::NEW,
(int)TableFileSchema::NEW_INDEX,
(int)TableFileSchema::NEW_MERGE
};
auto files =
ConnectorPtr->select(columns(&TableFileSchema::id_), where(in(&TableFileSchema::file_type_, file_types)));
auto commited = ConnectorPtr->transaction([&]() mutable {
for (auto &file : files) {
for (auto& file : files) {
ENGINE_LOG_DEBUG << "Remove table file type as NEW";
ConnectorPtr->remove<TableFileSchema>(std::get<0>(file));
}
@ -1240,7 +1261,7 @@ SqliteMetaImpl::CleanUp() {
if (files.size() > 0) {
ENGINE_LOG_DEBUG << "Clean " << files.size() << " files";
}
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when clean table file", e.what());
}
@ -1265,7 +1286,7 @@ SqliteMetaImpl::CleanUpFilesWithTTL(uint16_t seconds) {
&TableFileSchema::date_),
where(
c(&TableFileSchema::file_type_) ==
(int) TableFileSchema::TO_DELETE
(int)TableFileSchema::TO_DELETE
and
c(&TableFileSchema::updated_time_)
< now - seconds * US_PS));
@ -1354,7 +1375,7 @@ SqliteMetaImpl::CleanUpFilesWithTTL(uint16_t seconds) {
}
Status
SqliteMetaImpl::Count(const std::string &table_id, uint64_t &result) {
SqliteMetaImpl::Count(const std::string& table_id, uint64_t& result) {
try {
server::MetricCollector metric;
@ -1414,14 +1435,14 @@ SqliteMetaImpl::DiscardFiles(int64_t to_discard_size) {
auto selected = ConnectorPtr->select(columns(&TableFileSchema::id_,
&TableFileSchema::file_size_),
where(c(&TableFileSchema::file_type_)
!= (int) TableFileSchema::TO_DELETE),
!= (int)TableFileSchema::TO_DELETE),
order_by(&TableFileSchema::id_),
limit(10));
std::vector<int> ids;
TableFileSchema table_file;
for (auto &file : selected) {
for (auto& file : selected) {
if (to_discard_size <= 0) break;
table_file.id_ = std::get<0>(file);
table_file.file_size_ = std::get<1>(file);
@ -1437,7 +1458,7 @@ SqliteMetaImpl::DiscardFiles(int64_t to_discard_size) {
ConnectorPtr->update_all(
set(
c(&TableFileSchema::file_type_) = (int) TableFileSchema::TO_DELETE,
c(&TableFileSchema::file_type_) = (int)TableFileSchema::TO_DELETE,
c(&TableFileSchema::updated_time_) = utils::GetMicroSecTimeStamp()),
where(
in(&TableFileSchema::id_, ids)));
@ -1448,7 +1469,7 @@ SqliteMetaImpl::DiscardFiles(int64_t to_discard_size) {
if (!commited) {
return HandleException("DiscardFiles error: sqlite transaction failed");
}
} catch (std::exception &e) {
} catch (std::exception& e) {
return HandleException("Encounter exception when discard table file", e.what());
}

@ -108,7 +108,7 @@ class SqliteMetaImpl : public Meta {
Status
FilesByType(const std::string& table_id, const std::vector<int>& file_types,
std::vector<std::string>& file_ids) override;
TableFilesSchema& table_files) override;
Status
Size(uint64_t& result) override;

@ -733,7 +733,16 @@ macro(build_faiss)
if (USE_JFROG_CACHE STREQUAL "ON")
string(MD5 FAISS_COMBINE_MD5 "${FAISS_MD5}${LAPACK_MD5}${OPENBLAS_MD5}")
set(FAISS_CACHE_PACKAGE_NAME "faiss_${FAISS_COMBINE_MD5}.tar.gz")
if (KNOWHERE_GPU_VERSION)
set(FAISS_COMPUTE_TYPE "gpu")
else ()
set(FAISS_COMPUTE_TYPE "cpu")
endif()
if (FAISS_WITH_MKL)
set(FAISS_CACHE_PACKAGE_NAME "faiss_${FAISS_COMPUTE_TYPE}_mkl_${FAISS_COMBINE_MD5}.tar.gz")
else ()
set(FAISS_CACHE_PACKAGE_NAME "faiss_${FAISS_COMPUTE_TYPE}_openblas_${FAISS_COMBINE_MD5}.tar.gz")
endif()
set(FAISS_CACHE_URL "${JFROG_ARTFACTORY_CACHE_URL}/${FAISS_CACHE_PACKAGE_NAME}")
set(FAISS_CACHE_PACKAGE_PATH "${THIRDPARTY_PACKAGE_CACHE}/${FAISS_CACHE_PACKAGE_NAME}")
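# Illustrative result: a GPU build using MKL now caches "faiss_gpu_mkl_<md5>.tar.gz",
# so CPU/GPU and MKL/OpenBLAS variants no longer share one JFrog cache package name.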

@ -33,7 +33,7 @@ FaissBaseIndex::SerializeImpl() {
try {
faiss::Index* index = index_.get();
SealImpl();
// SealImpl();
MemoryIOWriter writer;
faiss::write_index(index, &writer);
@ -60,6 +60,8 @@ FaissBaseIndex::LoadImpl(const BinarySet& index_binary) {
faiss::Index* index = faiss::read_index(&reader);
index_.reset(index);
SealImpl();
}
void

@ -86,9 +86,6 @@ GPUIVF::SerializeImpl() {
faiss::Index* index = index_.get();
faiss::Index* host_index = faiss::gpu::index_gpu_to_cpu(index);
// TODO(linxj): support seal
// SealImpl();
faiss::write_index(host_index, &writer);
delete host_index;
}

@ -97,7 +97,6 @@ IVF::Serialize() {
}
std::lock_guard<std::mutex> lk(mutex_);
Seal();
return SerializeImpl();
}

@ -59,9 +59,9 @@ print_banner() {
#endif
<< " library." << std::endl;
#ifdef MILVUS_CPU_VERSION
std::cout << "You are using Milvus CPU version" << std::endl;
std::cout << "You are using Milvus CPU edition" << std::endl;
#else
std::cout << "You are using Milvus GPU version" << std::endl;
std::cout << "You are using Milvus GPU edition" << std::endl;
#endif
std::cout << std::endl;
}

@ -54,36 +54,40 @@ load_simple_config() {
// get resources
#ifdef MILVUS_GPU_VERSION
bool enable_gpu = false;
server::Config& config = server::Config::GetInstance();
std::vector<int64_t> gpu_ids;
config.GetGpuResourceConfigSearchResources(gpu_ids);
std::vector<int64_t> build_gpu_ids;
config.GetGpuResourceConfigBuildIndexResources(build_gpu_ids);
auto pcie = Connection("pcie", 12000);
config.GetGpuResourceConfigEnable(enable_gpu);
if (enable_gpu) {
std::vector<int64_t> gpu_ids;
config.GetGpuResourceConfigSearchResources(gpu_ids);
std::vector<int64_t> build_gpu_ids;
config.GetGpuResourceConfigBuildIndexResources(build_gpu_ids);
auto pcie = Connection("pcie", 12000);
std::vector<int64_t> not_find_build_ids;
for (auto& build_id : build_gpu_ids) {
bool find_gpu_id = false;
for (auto& gpu_id : gpu_ids) {
if (gpu_id == build_id) {
find_gpu_id = true;
break;
std::vector<int64_t> not_find_build_ids;
for (auto& build_id : build_gpu_ids) {
bool find_gpu_id = false;
for (auto& gpu_id : gpu_ids) {
if (gpu_id == build_id) {
find_gpu_id = true;
break;
}
}
if (not find_gpu_id) {
not_find_build_ids.emplace_back(build_id);
}
}
if (not find_gpu_id) {
not_find_build_ids.emplace_back(build_id);
for (auto& gpu_id : gpu_ids) {
ResMgrInst::GetInstance()->Add(ResourceFactory::Create(std::to_string(gpu_id), "GPU", gpu_id, true, true));
ResMgrInst::GetInstance()->Connect("cpu", std::to_string(gpu_id), pcie);
}
}
for (auto& gpu_id : gpu_ids) {
ResMgrInst::GetInstance()->Add(ResourceFactory::Create(std::to_string(gpu_id), "GPU", gpu_id, true, true));
ResMgrInst::GetInstance()->Connect("cpu", std::to_string(gpu_id), pcie);
}
for (auto& not_find_id : not_find_build_ids) {
ResMgrInst::GetInstance()->Add(
ResourceFactory::Create(std::to_string(not_find_id), "GPU", not_find_id, true, true));
ResMgrInst::GetInstance()->Connect("cpu", std::to_string(not_find_id), pcie);
for (auto& not_find_id : not_find_build_ids) {
ResMgrInst::GetInstance()->Add(
ResourceFactory::Create(std::to_string(not_find_id), "GPU", not_find_id, true, true));
ResMgrInst::GetInstance()->Connect("cpu", std::to_string(not_find_id), pcie);
}
}
#endif
}
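// Illustrative topology: with search gpus {0} and build gpus {0, 1}, the resource
// manager gains gpu0 and gpu1, each connected to "cpu" over the "pcie" link, and
// gpu1 is registered even though it only serves index building.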

@ -102,11 +102,35 @@ class OptimizerInst {
if (instance == nullptr) {
std::vector<PassPtr> pass_list;
#ifdef MILVUS_GPU_VERSION
pass_list.push_back(std::make_shared<BuildIndexPass>());
pass_list.push_back(std::make_shared<FaissFlatPass>());
pass_list.push_back(std::make_shared<FaissIVFFlatPass>());
pass_list.push_back(std::make_shared<FaissIVFSQ8Pass>());
pass_list.push_back(std::make_shared<FaissIVFSQ8HPass>());
bool enable_gpu = false;
server::Config& config = server::Config::GetInstance();
config.GetGpuResourceConfigEnable(enable_gpu);
if (enable_gpu) {
std::vector<int64_t> build_gpus;
std::vector<int64_t> search_gpus;
int64_t gpu_search_threshold;
config.GetGpuResourceConfigBuildIndexResources(build_gpus);
config.GetGpuResourceConfigSearchResources(search_gpus);
config.GetEngineConfigGpuSearchThreshold(gpu_search_threshold);
std::string build_msg = "Build index gpu:";
for (auto build_id : build_gpus) {
build_msg.append(" gpu" + std::to_string(build_id));
}
SERVER_LOG_DEBUG << build_msg;
std::string search_msg = "Search gpu:";
for (auto search_id : search_gpus) {
search_msg.append(" gpu" + std::to_string(search_id));
}
search_msg.append(". gpu_search_threshold:" + std::to_string(gpu_search_threshold));
SERVER_LOG_DEBUG << search_msg;
pass_list.push_back(std::make_shared<BuildIndexPass>());
pass_list.push_back(std::make_shared<FaissFlatPass>());
pass_list.push_back(std::make_shared<FaissIVFFlatPass>());
pass_list.push_back(std::make_shared<FaissIVFSQ8Pass>());
pass_list.push_back(std::make_shared<FaissIVFSQ8HPass>());
}
#endif
pass_list.push_back(std::make_shared<FallbackPass>());
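// FallbackPass is appended unconditionally and last, so a task that no GPU-specific
// pass claims (including any task in a GPU-disabled or CPU-only build) still gets a
// resource label.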
instance = std::make_shared<Optimizer>(pass_list);

@ -106,41 +106,6 @@ Action::SpecifiedResourceLabelTaskScheduler(const ResourceMgrPtr& res_mgr, Resou
std::shared_ptr<LoadCompletedEvent> event) {
auto task_item = event->task_table_item_;
auto task = event->task_table_item_->task;
// if (resource->type() == ResourceType::DISK) {
// // step 1: calculate shortest path per resource, from disk to compute resource
// auto compute_resources = res_mgr->GetComputeResources();
// std::vector<std::vector<std::string>> paths;
// std::vector<uint64_t> transport_costs;
// for (auto& res : compute_resources) {
// std::vector<std::string> path;
// uint64_t transport_cost = ShortestPath(resource, res, res_mgr, path);
// transport_costs.push_back(transport_cost);
// paths.emplace_back(path);
// }
// if (task->job_.lock()->type() == JobType::BUILD) {
// // step2: Read device id in config
// // get build index gpu resource
// server::Config& config = server::Config::GetInstance();
// int32_t build_index_gpu;
// Status stat = config.GetResourceConfigIndexBuildDevice(build_index_gpu);
//
// bool find_gpu_res = false;
// if (res_mgr->GetResource(ResourceType::GPU, build_index_gpu) != nullptr) {
// for (uint64_t i = 0; i < compute_resources.size(); ++i) {
// if (compute_resources[i]->name() ==
// res_mgr->GetResource(ResourceType::GPU, build_index_gpu)->name()) {
// find_gpu_res = true;
// Path task_path(paths[i], paths[i].size() - 1);
// task->path() = task_path;
// break;
// }
// }
// }
// if (not find_gpu_res) {
// task->path() = Path(paths[0], paths[0].size() - 1);
// }
// }
// }
if (resource->name() == task->path().Last()) {
resource->WakeupExecutor();

View File

@ -25,12 +25,13 @@ namespace scheduler {
void
BuildIndexPass::Init() {
#ifdef MILVUS_GPU_VERSION
server::Config& config = server::Config::GetInstance();
std::vector<int64_t> build_resources;
Status s = config.GetGpuResourceConfigBuildIndexResources(build_resources);
Status s = config.GetGpuResourceConfigBuildIndexResources(build_gpu_ids_);
if (!s.ok()) {
throw;
}
#endif
}
bool
@ -38,13 +39,16 @@ BuildIndexPass::Run(const TaskPtr& task) {
if (task->Type() != TaskType::BuildIndexTask)
return false;
if (build_gpu_ids_.empty())
if (build_gpu_ids_.empty()) {
SERVER_LOG_WARNING << "BuildIndexPass cannot get build index gpu!";
return false;
}
ResourcePtr res_ptr;
res_ptr = ResMgrInst::GetInstance()->GetResource(ResourceType::GPU, build_gpu_ids_[specified_gpu_id_]);
auto label = std::make_shared<SpecResLabel>(std::weak_ptr<Resource>(res_ptr));
task->label() = label;
SERVER_LOG_DEBUG << "Specify gpu" << specified_gpu_id_ << " to build index!";
specified_gpu_id_ = (specified_gpu_id_ + 1) % build_gpu_ids_.size();
return true;
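
The rotation at the end of `Run()` spreads successive build tasks over the GPUs listed in `gpu_resource_config.build_index_resources`. A standalone sketch of that round-robin selection follows; `RoundRobinPicker` is a hypothetical helper, not part of this commit, and callers must check for an empty list first, exactly as `BuildIndexPass::Run` does.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Mirrors the specified_gpu_id_ / build_gpu_ids_ state kept by BuildIndexPass.
class RoundRobinPicker {
 public:
    explicit RoundRobinPicker(std::vector<int64_t> gpu_ids) : gpu_ids_(std::move(gpu_ids)) {
    }

    // Precondition: the id list is non-empty.
    int64_t
    Next() {
        int64_t id = gpu_ids_[cursor_];
        cursor_ = (cursor_ + 1) % gpu_ids_.size();  // wrap around the configured list
        return id;
    }

 private:
    uint64_t cursor_ = 0;
    std::vector<int64_t> gpu_ids_;
};
```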

View File

@ -45,7 +45,7 @@ class BuildIndexPass : public Pass {
private:
uint64_t specified_gpu_id_ = 0;
std::vector<int32_t> build_gpu_ids_;
std::vector<int64_t> build_gpu_ids_;
};
using BuildIndexPassPtr = std::shared_ptr<BuildIndexPass>;

View File

@ -29,6 +29,7 @@ namespace scheduler {
void
FaissFlatPass::Init() {
#ifdef MILVUS_GPU_VERSION
server::Config& config = server::Config::GetInstance();
Status s = config.GetEngineConfigGpuSearchThreshold(threshold_);
if (!s.ok()) {
@ -38,6 +39,7 @@ FaissFlatPass::Init() {
if (!s.ok()) {
throw;
}
#endif
}
bool
@ -54,9 +56,11 @@ FaissFlatPass::Run(const TaskPtr& task) {
auto search_job = std::static_pointer_cast<SearchJob>(search_task->job_.lock());
ResourcePtr res_ptr;
if (search_job->nq() < threshold_) {
SERVER_LOG_DEBUG << "FaissFlatPass: nq < gpu_search_threshold, specify cpu to search!";
res_ptr = ResMgrInst::GetInstance()->GetResource("cpu");
} else {
auto best_device_id = count_ % gpus.size();
SERVER_LOG_DEBUG << "FaissFlatPass: nq > gpu_search_threshold, specify gpu" << best_device_id << " to search!";
count_++;
res_ptr = ResMgrInst::GetInstance()->GetResource(ResourceType::GPU, best_device_id);
}

View File

@ -29,6 +29,7 @@ namespace scheduler {
void
FaissIVFFlatPass::Init() {
#ifdef MILVUS_GPU_VERSION
server::Config& config = server::Config::GetInstance();
Status s = config.GetEngineConfigGpuSearchThreshold(threshold_);
if (!s.ok()) {
@ -38,6 +39,7 @@ FaissIVFFlatPass::Init() {
if (!s.ok()) {
throw;
}
#endif
}
bool
@ -54,9 +56,12 @@ FaissIVFFlatPass::Run(const TaskPtr& task) {
auto search_job = std::static_pointer_cast<SearchJob>(search_task->job_.lock());
ResourcePtr res_ptr;
if (search_job->nq() < threshold_) {
SERVER_LOG_DEBUG << "FaissIVFFlatPass: nq < gpu_search_threshold, specify cpu to search!";
res_ptr = ResMgrInst::GetInstance()->GetResource("cpu");
} else {
auto best_device_id = count_ % gpus.size();
SERVER_LOG_DEBUG << "FaissIVFFlatPass: nq > gpu_search_threshold, specify gpu" << best_device_id
<< " to search!";
count_++;
res_ptr = ResMgrInst::GetInstance()->GetResource(ResourceType::GPU, best_device_id);
}

View File

@ -29,12 +29,14 @@ namespace scheduler {
void
FaissIVFSQ8HPass::Init() {
#ifdef MILVUS_GPU_VERSION
server::Config& config = server::Config::GetInstance();
Status s = config.GetEngineConfigGpuSearchThreshold(threshold_);
if (!s.ok()) {
threshold_ = std::numeric_limits<int64_t>::max();
}
s = config.GetGpuResourceConfigSearchResources(gpus);
#endif
}
bool
@ -51,9 +53,12 @@ FaissIVFSQ8HPass::Run(const TaskPtr& task) {
auto search_job = std::static_pointer_cast<SearchJob>(search_task->job_.lock());
ResourcePtr res_ptr;
if (search_job->nq() < threshold_) {
SERVER_LOG_DEBUG << "FaissIVFSQ8HPass: nq < gpu_search_threshold, specify cpu to search!";
res_ptr = ResMgrInst::GetInstance()->GetResource("cpu");
} else {
auto best_device_id = count_ % gpus.size();
SERVER_LOG_DEBUG << "FaissIVFSQ8HPass: nq > gpu_search_threshold, specify gpu" << best_device_id
<< " to search!";
count_++;
res_ptr = ResMgrInst::GetInstance()->GetResource(ResourceType::GPU, best_device_id);
}

View File

@ -29,6 +29,7 @@ namespace scheduler {
void
FaissIVFSQ8Pass::Init() {
#ifdef MILVUS_GPU_VERSION
server::Config& config = server::Config::GetInstance();
Status s = config.GetEngineConfigGpuSearchThreshold(threshold_);
if (!s.ok()) {
@ -38,6 +39,7 @@ FaissIVFSQ8Pass::Init() {
if (!s.ok()) {
throw;
}
#endif
}
bool
@ -54,9 +56,12 @@ FaissIVFSQ8Pass::Run(const TaskPtr& task) {
auto search_job = std::static_pointer_cast<SearchJob>(search_task->job_.lock());
ResourcePtr res_ptr;
if (search_job->nq() < threshold_) {
SERVER_LOG_DEBUG << "FaissIVFSQ8Pass: nq < gpu_search_threshold, specify cpu to search!";
res_ptr = ResMgrInst::GetInstance()->GetResource("cpu");
} else {
auto best_device_id = count_ % gpus.size();
SERVER_LOG_DEBUG << "FaissIVFSQ8Pass: nq > gpu_search_threshold, specify gpu" << best_device_id
<< " to search!";
count_++;
res_ptr = ResMgrInst::GetInstance()->GetResource(ResourceType::GPU, best_device_id);
}

View File

@ -33,6 +33,7 @@ FallbackPass::Run(const TaskPtr& task) {
return false;
}
// NEVER be empty
SERVER_LOG_DEBUG << "FallbackPass!";
auto cpu = ResMgrInst::GetInstance()->GetCpuResources()[0];
auto label = std::make_shared<SpecResLabel>(cpu);
task->label() = label;

View File

@ -85,7 +85,7 @@ XBuildIndexTask::Load(milvus::scheduler::LoadType type, uint8_t device_id) {
size_t file_size = to_index_engine_->PhysicalSize();
std::string info = "Load file id:" + std::to_string(file_->id_) +
std::string info = "Load file id:" + std::to_string(file_->id_) + " " + type_str +
" file type:" + std::to_string(file_->file_type_) + " size:" + std::to_string(file_size) +
" bytes from location: " + file_->location_ + " totally cost";
double span = rc.ElapseFromBegin(info);

View File

@ -93,6 +93,15 @@ ClientTest::Test(const std::string& address, const std::string& port) {
std::cout << "CreatePartition function call status: " << stat.message() << std::endl;
milvus_sdk::Utils::PrintPartitionParam(partition_param);
}
{ // show partitions
milvus::PartitionList partition_array;
stat = conn->ShowPartitions(TABLE_NAME, partition_array);
std::cout << partition_array.size() << " partitions created:" << std::endl;
for (auto& partition : partition_array) {
std::cout << "\t" << partition.partition_name << "\t tag = " << partition.partition_tag << std::endl;
}
}
{ // insert vectors
@ -148,6 +157,7 @@ ClientTest::Test(const std::string& address, const std::string& port) {
}
{ // wait until build index finishes
milvus_sdk::TimeRecorder rc("Create index");
std::cout << "Wait until create all index done" << std::endl;
milvus::IndexParam index1 = BuildIndexParam();
milvus_sdk::Utils::PrintIndexParam(index1);

View File

@ -150,6 +150,7 @@ ClientTest::Test(const std::string& address, const std::string& port) {
}
{ // wait until build index finishes
milvus_sdk::TimeRecorder rc("Create index");
std::cout << "Wait until create all index done" << std::endl;
milvus::IndexParam index1 = BuildIndexParam();
milvus_sdk::Utils::PrintIndexParam(index1);

View File

@ -157,18 +157,20 @@ void
Utils::PrintSearchResult(const std::vector<std::pair<int64_t, milvus::RowRecord>>& search_record_array,
const milvus::TopKQueryResult& topk_query_result) {
BLOCK_SPLITER
size_t nq = topk_query_result.row_num;
size_t topk = topk_query_result.ids.size() / nq;
std::cout << "Returned result count: " << nq * topk << std::endl;
std::cout << "Returned result count: " << topk_query_result.size() << std::endl;
int32_t index = 0;
for (size_t i = 0; i < nq; i++) {
auto search_id = search_record_array[index].first;
index++;
std::cout << "No." << index << " vector " << search_id << " top " << topk << " search result:" << std::endl;
if (topk_query_result.size() != search_record_array.size()) {
std::cout << "ERROR: Returned result count dones equal nq" << std::endl;
return;
}
for (size_t i = 0; i < topk_query_result.size(); i++) {
const milvus::QueryResult& one_result = topk_query_result[i];
size_t topk = one_result.ids.size();
auto search_id = search_record_array[i].first;
std::cout << "No." << i << " vector " << search_id << " top " << topk << " search result:" << std::endl;
for (size_t j = 0; j < topk; j++) {
size_t idx = i * topk + j;
std::cout << "\t" << topk_query_result.ids[idx] << "\t" << topk_query_result.distances[idx] << std::endl;
std::cout << "\t" << one_result.ids[j] << "\t" << one_result.distances[j] << std::endl;
}
}
BLOCK_SPLITER
@ -178,12 +180,11 @@ void
Utils::CheckSearchResult(const std::vector<std::pair<int64_t, milvus::RowRecord>>& search_record_array,
const milvus::TopKQueryResult& topk_query_result) {
BLOCK_SPLITER
size_t nq = topk_query_result.row_num;
size_t result_k = topk_query_result.ids.size() / nq;
int64_t index = 0;
size_t nq = topk_query_result.size();
for (size_t i = 0; i < nq; i++) {
auto result_id = topk_query_result.ids[i * result_k];
auto search_id = search_record_array[index++].first;
const milvus::QueryResult& one_result = topk_query_result[i];
auto search_id = search_record_array[i].first;
int64_t result_id = one_result.ids[0];
if (result_id != search_id) {
std::cout << "The top 1 result is wrong: " << result_id << " vs. " << search_id << std::endl;
} else {
@ -198,9 +199,7 @@ Utils::DoSearch(std::shared_ptr<milvus::Connection> conn, const std::string& tab
const std::vector<std::string>& partiton_tags, int64_t top_k, int64_t nprobe,
const std::vector<std::pair<int64_t, milvus::RowRecord>>& search_record_array,
milvus::TopKQueryResult& topk_query_result) {
topk_query_result.distances.clear();
topk_query_result.ids.clear();
topk_query_result.row_num = 0;
topk_query_result.clear();
std::vector<milvus::Range> query_range_array;
milvus::Range rg;

View File

@ -250,12 +250,17 @@ ClientProxy::Search(const std::string& table_name, const std::vector<std::string
Status status = client_ptr_->Search(result, search_param);
// step 4: convert result array
topk_query_result.row_num = result.row_num();
topk_query_result.ids.resize(result.ids().size());
memcpy(topk_query_result.ids.data(), result.ids().data(), result.ids().size() * sizeof(int64_t));
topk_query_result.distances.resize(result.distances().size());
memcpy(topk_query_result.distances.data(), result.distances().data(),
result.distances().size() * sizeof(float));
topk_query_result.reserve(result.row_num());
int64_t nq = result.row_num();
int64_t topk = result.ids().size() / nq;
for (int64_t i = 0; i < result.row_num(); i++) {
milvus::QueryResult one_result;
one_result.ids.resize(topk);
one_result.distances.resize(topk);
memcpy(one_result.ids.data(), result.ids().data() + topk * i, topk * sizeof(int64_t));
memcpy(one_result.distances.data(), result.distances().data() + topk * i, topk * sizeof(float));
topk_query_result.emplace_back(one_result);
}
return status;
} catch (std::exception& ex) {

View File

@ -81,11 +81,11 @@ struct RowRecord {
/**
* @brief TopK query result
*/
struct TopKQueryResult {
int64_t row_num;
struct QueryResult {
std::vector<int64_t> ids;
std::vector<float> distances;
};
using TopKQueryResult = std::vector<QueryResult>;
/**
* @brief index parameters
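
With `TopKQueryResult` now holding one `QueryResult` per query vector, callers iterate per query instead of doing `row_num`/offset arithmetic. A hedged usage sketch — only `QueryResult` and `TopKQueryResult` come from the header above, and the printing loop is illustrative:

```cpp
#include <cstddef>
#include <iostream>

// Assumes the C++ SDK header declaring milvus::QueryResult is included.
void
PrintResults(const milvus::TopKQueryResult& results) {
    for (size_t i = 0; i < results.size(); i++) {
        const milvus::QueryResult& one = results[i];  // all hits for query i
        for (size_t j = 0; j < one.ids.size(); j++) {
            std::cout << "query " << i << " hit " << j << ": id=" << one.ids[j]
                      << " distance=" << one.distances[j] << std::endl;
        }
    }
}
```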

View File

@ -182,6 +182,7 @@ Config::ValidateConfig() {
return s;
}
#ifdef MILVUS_GPU_VERSION
int64_t engine_gpu_search_threshold;
s = GetEngineConfigGpuSearchThreshold(engine_gpu_search_threshold);
if (!s.ok()) {
@ -189,35 +190,36 @@ Config::ValidateConfig() {
}
/* gpu resource config */
#ifdef MILVUS_GPU_VERSION
bool gpu_resource_enable;
s = GetGpuResourceConfigEnable(gpu_resource_enable);
if (!s.ok()) {
return s;
}
int64_t resource_cache_capacity;
s = GetGpuResourceConfigCacheCapacity(resource_cache_capacity);
if (!s.ok()) {
return s;
}
if (gpu_resource_enable) {
int64_t resource_cache_capacity;
s = GetGpuResourceConfigCacheCapacity(resource_cache_capacity);
if (!s.ok()) {
return s;
}
float resource_cache_threshold;
s = GetGpuResourceConfigCacheThreshold(resource_cache_threshold);
if (!s.ok()) {
return s;
}
float resource_cache_threshold;
s = GetGpuResourceConfigCacheThreshold(resource_cache_threshold);
if (!s.ok()) {
return s;
}
std::vector<int64_t> search_resources;
s = GetGpuResourceConfigSearchResources(search_resources);
if (!s.ok()) {
return s;
}
std::vector<int64_t> search_resources;
s = GetGpuResourceConfigSearchResources(search_resources);
if (!s.ok()) {
return s;
}
std::vector<int64_t> index_build_resources;
s = GetGpuResourceConfigBuildIndexResources(index_build_resources);
if (!s.ok()) {
return s;
std::vector<int64_t> index_build_resources;
s = GetGpuResourceConfigBuildIndexResources(index_build_resources);
if (!s.ok()) {
return s;
}
}
#endif
@ -323,13 +325,13 @@ Config::ResetDefaultConfig() {
return s;
}
#ifdef MILVUS_GPU_VERSION
/* gpu resource config */
s = SetEngineConfigGpuSearchThreshold(CONFIG_ENGINE_GPU_SEARCH_THRESHOLD_DEFAULT);
if (!s.ok()) {
return s;
}
/* gpu resource config */
#ifdef MILVUS_GPU_VERSION
s = SetGpuResourceConfigEnable(CONFIG_GPU_RESOURCE_ENABLE_DEFAULT);
if (!s.ok()) {
return s;
@ -630,6 +632,7 @@ Config::CheckEngineConfigOmpThreadNum(const std::string& value) {
return Status::OK();
}
#ifdef MILVUS_GPU_VERSION
Status
Config::CheckEngineConfigGpuSearchThreshold(const std::string& value) {
if (!ValidationUtil::ValidateStringIsNumber(value).ok()) {
@ -759,6 +762,7 @@ Config::CheckGpuResourceConfigBuildIndexResources(const std::vector<std::string>
return Status::OK();
}
#endif
////////////////////////////////////////////////////////////////////////////////
ConfigNode&
@ -979,6 +983,7 @@ Config::GetEngineConfigOmpThreadNum(int64_t& value) {
return Status::OK();
}
#ifdef MILVUS_GPU_VERSION
Status
Config::GetEngineConfigGpuSearchThreshold(int64_t& value) {
std::string str =
@ -1095,6 +1100,7 @@ Config::GetGpuResourceConfigBuildIndexResources(std::vector<int64_t>& value) {
}
return Status::OK();
}
#endif
///////////////////////////////////////////////////////////////////////////////
/* server config */
@ -1282,6 +1288,8 @@ Config::SetEngineConfigOmpThreadNum(const std::string& value) {
return Status::OK();
}
#ifdef MILVUS_GPU_VERSION
/* gpu resource config */
Status
Config::SetEngineConfigGpuSearchThreshold(const std::string& value) {
Status s = CheckEngineConfigGpuSearchThreshold(value);
@ -1292,7 +1300,6 @@ Config::SetEngineConfigGpuSearchThreshold(const std::string& value) {
return Status::OK();
}
/* gpu resource config */
Status
Config::SetGpuResourceConfigEnable(const std::string& value) {
Status s = CheckGpuResourceConfigEnable(value);
@ -1346,6 +1353,7 @@ Config::SetGpuResourceConfigBuildIndexResources(const std::string& value) {
SetConfigValueInMem(CONFIG_GPU_RESOURCE, CONFIG_GPU_RESOURCE_BUILD_INDEX_RESOURCES, value);
return Status::OK();
} // namespace server
#endif
} // namespace server
} // namespace milvus

View File

@ -170,6 +170,8 @@ class Config {
CheckEngineConfigUseBlasThreshold(const std::string& value);
Status
CheckEngineConfigOmpThreadNum(const std::string& value);
#ifdef MILVUS_GPU_VERSION
Status
CheckEngineConfigGpuSearchThreshold(const std::string& value);
@ -184,6 +186,7 @@ class Config {
CheckGpuResourceConfigSearchResources(const std::vector<std::string>& value);
Status
CheckGpuResourceConfigBuildIndexResources(const std::vector<std::string>& value);
#endif
std::string
GetConfigStr(const std::string& parent_key, const std::string& child_key, const std::string& default_value = "");
@ -239,6 +242,8 @@ class Config {
GetEngineConfigUseBlasThreshold(int64_t& value);
Status
GetEngineConfigOmpThreadNum(int64_t& value);
#ifdef MILVUS_GPU_VERSION
Status
GetEngineConfigGpuSearchThreshold(int64_t& value);
@ -253,6 +258,7 @@ class Config {
GetGpuResourceConfigSearchResources(std::vector<int64_t>& value);
Status
GetGpuResourceConfigBuildIndexResources(std::vector<int64_t>& value);
#endif
public:
/* server config */
@ -300,6 +306,8 @@ class Config {
SetEngineConfigUseBlasThreshold(const std::string& value);
Status
SetEngineConfigOmpThreadNum(const std::string& value);
#ifdef MILVUS_GPU_VERSION
Status
SetEngineConfigGpuSearchThreshold(const std::string& value);
@ -314,6 +322,7 @@ class Config {
SetGpuResourceConfigSearchResources(const std::string& value);
Status
SetGpuResourceConfigBuildIndexResources(const std::string& value);
#endif
private:
std::unordered_map<std::string, std::unordered_map<std::string, std::string>> config_map_;

View File

@ -183,7 +183,11 @@ Server::Start() {
// print version information
SERVER_LOG_INFO << "Milvus " << BUILD_TYPE << " version: v" << MILVUS_VERSION << ", built at " << BUILD_TIME;
#ifdef MILVUS_CPU_VERSION
SERVER_LOG_INFO << "CPU edition";
#else
SERVER_LOG_INFO << "GPU edition";
#endif
server::Metrics::GetInstance().Init();
server::SystemInfo::GetInstance().Init();

View File

@ -90,8 +90,8 @@ GrpcBaseRequest::SetStatus(ErrorCode error_code, const std::string& error_msg) {
std::string
GrpcBaseRequest::TableNotExistMsg(const std::string& table_name) {
return "Table " + table_name +
" not exist. Use milvus.has_table to verify whether the table exists. You also can check if the table name "
"exists.";
" does not exist. Use milvus.has_table to verify whether the table exists. "
"You also can check whether the table name exists.";
}
Status

View File

@ -30,9 +30,13 @@ class StringHelpFunctions {
StringHelpFunctions() = default;
public:
// trim blanks from begin and end
// " a b c " => "a b c"
static void
TrimStringBlank(std::string& string);
// trim quotes from begin and end
// "'abc'" => "abc"
static void
TrimStringQuote(std::string& string, const std::string& qoute);
@ -46,6 +50,8 @@ class StringHelpFunctions {
static void
SplitStringByDelimeter(const std::string& str, const std::string& delimeter, std::vector<std::string>& result);
// merge strings with delimeter
// "a", "b", "c" => "a,b,c"
static void
MergeStringWithDelimeter(const std::vector<std::string>& strs, const std::string& delimeter, std::string& result);
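
A short usage sketch for the helpers documented above; the signatures and the `milvus::server` namespace are taken from this header and the unit tests in this commit, while the literal values are illustrative:

```cpp
#include <string>
#include <vector>

#include "utils/StringHelpFunctions.h"

void
StringHelpersDemo() {
    std::string s = "  a b c  ";
    milvus::server::StringHelpFunctions::TrimStringBlank(s);  // s == "a b c"

    std::vector<std::string> parts;
    milvus::server::StringHelpFunctions::SplitStringByDelimeter("a,b,c", ",", parts);  // {"a", "b", "c"}

    std::string merged;
    milvus::server::StringHelpFunctions::MergeStringWithDelimeter(parts, ",", merged);  // merged == "a,b,c"
}
```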

View File

@ -218,10 +218,9 @@ ValidationUtil::ValidateGpuIndex(int32_t gpu_index) {
return Status::OK();
}
#ifdef MILVUS_GPU_VERSION
Status
ValidationUtil::GetGpuMemory(int32_t gpu_index, size_t& memory) {
#ifdef MILVUS_GPU_VERSION
cudaDeviceProp deviceProp;
auto cuda_err = cudaGetDeviceProperties(&deviceProp, gpu_index);
if (cuda_err) {
@ -232,10 +231,9 @@ ValidationUtil::GetGpuMemory(int32_t gpu_index, size_t& memory) {
}
memory = deviceProp.totalGlobalMem;
#endif
return Status::OK();
}
#endif
Status
ValidationUtil::ValidateIpAddress(const std::string& ip_address) {

View File

@ -64,8 +64,10 @@ class ValidationUtil {
static Status
ValidateGpuIndex(int32_t gpu_index);
#ifdef MILVUS_GPU_VERSION
static Status
GetGpuMemory(int32_t gpu_index, size_t& memory);
#endif
static Status
ValidateIpAddress(const std::string& ip_address);

View File

@ -37,6 +37,16 @@ constexpr int64_t M_BYTE = 1024 * 1024;
Status
KnowhereResource::Initialize() {
#ifdef MILVUS_GPU_VERSION
Status s;
bool enable_gpu = false;
server::Config& config = server::Config::GetInstance();
s = config.GetGpuResourceConfigEnable(enable_gpu);
if (!s.ok())
return s;
if (not enable_gpu)
return Status::OK();
struct GpuResourceSetting {
int64_t pinned_memory = 300 * M_BYTE;
int64_t temp_memory = 300 * M_BYTE;
@ -44,10 +54,8 @@ KnowhereResource::Initialize() {
};
using GpuResourcesArray = std::map<int64_t, GpuResourceSetting>;
GpuResourcesArray gpu_resources;
Status s;
// get build index gpu resource
server::Config& config = server::Config::GetInstance();
std::vector<int64_t> build_index_gpus;
s = config.GetGpuResourceConfigBuildIndexResources(build_index_gpus);
if (!s.ok())

View File

@ -305,24 +305,30 @@ TEST_F(DBTest, SEARCH_TEST) {
// test FAISS_IVFSQ8H optimizer
index.engine_type_ = (int)milvus::engine::EngineType::FAISS_IVFSQ8H;
db_->CreateIndex(TABLE_NAME, index); // wait until build index finish
std::vector<std::string> partition_tag;
milvus::engine::ResultIds result_ids;
milvus::engine::ResultDistances result_dists;
{
milvus::engine::QueryResults results;
stat = db_->Query(TABLE_NAME, k, nq, 10, xq.data(), results);
result_ids.clear();
result_dists.clear();
stat = db_->Query(TABLE_NAME, partition_tag, k, nq, 10, xq.data(), result_ids, result_dists);
ASSERT_TRUE(stat.ok());
}
{
milvus::engine::QueryResults large_nq_results;
stat = db_->Query(TABLE_NAME, k, 200, 10, xq.data(), large_nq_results);
result_ids.clear();
result_dists.clear();
stat = db_->Query(TABLE_NAME, partition_tag, k, 200, 10, xq.data(), result_ids, result_dists);
ASSERT_TRUE(stat.ok());
}
{ // search by specify index file
milvus::engine::meta::DatesT dates;
std::vector<std::string> file_ids = {"1", "2", "3", "4", "5", "6"};
milvus::engine::QueryResults results;
stat = db_->Query(TABLE_NAME, file_ids, k, nq, 10, xq.data(), dates, results);
result_ids.clear();
result_dists.clear();
stat = db_->QueryByFileID(TABLE_NAME, file_ids, k, nq, 10, xq.data(), dates, result_ids, result_dists);
ASSERT_TRUE(stat.ok());
}

View File

@ -306,9 +306,9 @@ TEST_F(MetaTest, TABLE_FILES_TEST) {
ASSERT_EQ(dated_files[table_file.date_].size(), 0);
std::vector<int> file_types;
std::vector<std::string> file_ids;
status = impl_->FilesByType(table.table_id_, file_types, file_ids);
ASSERT_TRUE(file_ids.empty());
milvus::engine::meta::TableFilesSchema table_files;
status = impl_->FilesByType(table.table_id_, file_types, table_files);
ASSERT_TRUE(table_files.empty());
ASSERT_FALSE(status.ok());
file_types = {
@ -317,11 +317,11 @@ TEST_F(MetaTest, TABLE_FILES_TEST) {
milvus::engine::meta::TableFileSchema::INDEX, milvus::engine::meta::TableFileSchema::RAW,
milvus::engine::meta::TableFileSchema::BACKUP,
};
status = impl_->FilesByType(table.table_id_, file_types, file_ids);
status = impl_->FilesByType(table.table_id_, file_types, table_files);
ASSERT_TRUE(status.ok());
uint64_t total_cnt = new_index_files_cnt + new_merge_files_cnt + backup_files_cnt + new_files_cnt + raw_files_cnt +
to_index_files_cnt + index_files_cnt;
ASSERT_EQ(file_ids.size(), total_cnt);
ASSERT_EQ(table_files.size(), total_cnt);
status = impl_->DeleteTableFiles(table_id);
ASSERT_TRUE(status.ok());

View File

@ -169,9 +169,9 @@ TEST_F(MySqlMetaTest, ARCHIVE_TEST_DAYS) {
std::vector<int> file_types = {
(int)milvus::engine::meta::TableFileSchema::NEW,
};
std::vector<std::string> file_ids;
status = impl.FilesByType(table_id, file_types, file_ids);
ASSERT_FALSE(file_ids.empty());
milvus::engine::meta::TableFilesSchema table_files;
status = impl.FilesByType(table_id, file_types, table_files);
ASSERT_FALSE(table_files.empty());
status = impl.UpdateTableFilesToIndex(table_id);
ASSERT_TRUE(status.ok());
@ -326,9 +326,9 @@ TEST_F(MySqlMetaTest, TABLE_FILES_TEST) {
ASSERT_EQ(dated_files[table_file.date_].size(), 0);
std::vector<int> file_types;
std::vector<std::string> file_ids;
status = impl_->FilesByType(table.table_id_, file_types, file_ids);
ASSERT_TRUE(file_ids.empty());
milvus::engine::meta::TableFilesSchema table_files;
status = impl_->FilesByType(table.table_id_, file_types, table_files);
ASSERT_TRUE(table_files.empty());
ASSERT_FALSE(status.ok());
file_types = {
@ -337,11 +337,11 @@ TEST_F(MySqlMetaTest, TABLE_FILES_TEST) {
milvus::engine::meta::TableFileSchema::INDEX, milvus::engine::meta::TableFileSchema::RAW,
milvus::engine::meta::TableFileSchema::BACKUP,
};
status = impl_->FilesByType(table.table_id_, file_types, file_ids);
status = impl_->FilesByType(table.table_id_, file_types, table_files);
ASSERT_TRUE(status.ok());
uint64_t total_cnt = new_index_files_cnt + new_merge_files_cnt + backup_files_cnt + new_files_cnt + raw_files_cnt +
to_index_files_cnt + index_files_cnt;
ASSERT_EQ(file_ids.size(), total_cnt);
ASSERT_EQ(table_files.size(), total_cnt);
status = impl_->DeleteTableFiles(table_id);
ASSERT_TRUE(status.ok());

View File

@ -132,8 +132,8 @@ BaseTest::SetUp() {
void
BaseTest::TearDown() {
milvus::cache::CpuCacheMgr::GetInstance()->ClearCache();
milvus::cache::GpuCacheMgr::GetInstance(0)->ClearCache();
#ifdef MILVUS_GPU_VERSION
milvus::cache::GpuCacheMgr::GetInstance(0)->ClearCache();
knowhere::FaissGpuResourceMgr::GetInstance().Free();
#endif
}

View File

@ -98,24 +98,25 @@ class SchedulerTest : public testing::Test {
protected:
void
SetUp() override {
res_mgr_ = std::make_shared<ResourceMgr>();
ResourcePtr disk = ResourceFactory::Create("disk", "DISK", 0, true, false);
ResourcePtr cpu = ResourceFactory::Create("cpu", "CPU", 0, true, false);
disk_resource_ = res_mgr_->Add(std::move(disk));
cpu_resource_ = res_mgr_->Add(std::move(cpu));
#ifdef MILVUS_GPU_VERSION
constexpr int64_t cache_cap = 1024 * 1024 * 1024;
cache::GpuCacheMgr::GetInstance(0)->SetCapacity(cache_cap);
cache::GpuCacheMgr::GetInstance(1)->SetCapacity(cache_cap);
ResourcePtr disk = ResourceFactory::Create("disk", "DISK", 0, true, false);
ResourcePtr cpu = ResourceFactory::Create("cpu", "CPU", 0, true, false);
ResourcePtr gpu_0 = ResourceFactory::Create("gpu0", "GPU", 0);
ResourcePtr gpu_1 = ResourceFactory::Create("gpu1", "GPU", 1);
res_mgr_ = std::make_shared<ResourceMgr>();
disk_resource_ = res_mgr_->Add(std::move(disk));
cpu_resource_ = res_mgr_->Add(std::move(cpu));
gpu_resource_0_ = res_mgr_->Add(std::move(gpu_0));
gpu_resource_1_ = res_mgr_->Add(std::move(gpu_1));
auto PCIE = Connection("IO", 11000.0);
res_mgr_->Connect("cpu", "gpu0", PCIE);
res_mgr_->Connect("cpu", "gpu1", PCIE);
#endif
scheduler_ = std::make_shared<Scheduler>(res_mgr_);
@ -138,17 +139,6 @@ class SchedulerTest : public testing::Test {
std::shared_ptr<Scheduler> scheduler_;
};
void
insert_dummy_index_into_gpu_cache(uint64_t device_id) {
MockVecIndex* mock_index = new MockVecIndex();
mock_index->ntotal_ = 1000;
engine::VecIndexPtr index(mock_index);
cache::DataObjPtr obj = std::static_pointer_cast<cache::DataObj>(index);
cache::GpuCacheMgr::GetInstance(device_id)->InsertItem("location", obj);
}
class SchedulerTest2 : public testing::Test {
protected:
void
@ -157,16 +147,13 @@ class SchedulerTest2 : public testing::Test {
ResourcePtr cpu0 = ResourceFactory::Create("cpu0", "CPU", 0, true, false);
ResourcePtr cpu1 = ResourceFactory::Create("cpu1", "CPU", 1, true, false);
ResourcePtr cpu2 = ResourceFactory::Create("cpu2", "CPU", 2, true, false);
ResourcePtr gpu0 = ResourceFactory::Create("gpu0", "GPU", 0, true, true);
ResourcePtr gpu1 = ResourceFactory::Create("gpu1", "GPU", 1, true, true);
res_mgr_ = std::make_shared<ResourceMgr>();
disk_ = res_mgr_->Add(std::move(disk));
cpu_0_ = res_mgr_->Add(std::move(cpu0));
cpu_1_ = res_mgr_->Add(std::move(cpu1));
cpu_2_ = res_mgr_->Add(std::move(cpu2));
gpu_0_ = res_mgr_->Add(std::move(gpu0));
gpu_1_ = res_mgr_->Add(std::move(gpu1));
auto IO = Connection("IO", 5.0);
auto PCIE1 = Connection("PCIE", 11.0);
auto PCIE2 = Connection("PCIE", 20.0);
@ -174,8 +161,15 @@ class SchedulerTest2 : public testing::Test {
res_mgr_->Connect("cpu0", "cpu1", IO);
res_mgr_->Connect("cpu1", "cpu2", IO);
res_mgr_->Connect("cpu0", "cpu2", IO);
#ifdef MILVUS_GPU_VERSION
ResourcePtr gpu0 = ResourceFactory::Create("gpu0", "GPU", 0, true, true);
ResourcePtr gpu1 = ResourceFactory::Create("gpu1", "GPU", 1, true, true);
gpu_0_ = res_mgr_->Add(std::move(gpu0));
gpu_1_ = res_mgr_->Add(std::move(gpu1));
res_mgr_->Connect("cpu1", "gpu0", PCIE1);
res_mgr_->Connect("cpu2", "gpu1", PCIE2);
#endif
scheduler_ = std::make_shared<Scheduler>(res_mgr_);

View File

@ -175,6 +175,7 @@ TEST(CacheTest, CPU_CACHE_TEST) {
cpu_mgr->PrintInfo();
}
#ifdef MILVUS_GPU_VERSION
TEST(CacheTest, GPU_CACHE_TEST) {
auto gpu_mgr = milvus::cache::GpuCacheMgr::GetInstance(0);
@ -202,6 +203,7 @@ TEST(CacheTest, GPU_CACHE_TEST) {
gpu_mgr->ClearCache();
ASSERT_EQ(gpu_mgr->ItemCount(), 0);
}
#endif
TEST(CacheTest, INVALID_TEST) {
{

View File

@ -25,6 +25,8 @@
#include "utils/StringHelpFunctions.h"
#include "utils/ValidationUtil.h"
#include <limits>
namespace {
static constexpr uint64_t KB = 1024;
@ -63,9 +65,21 @@ TEST_F(ConfigTest, CONFIG_TEST) {
int64_t port = server_config.GetInt64Value("port");
ASSERT_NE(port, 0);
server_config.SetValue("test", "2.5");
double test = server_config.GetDoubleValue("test");
ASSERT_EQ(test, 2.5);
server_config.SetValue("float_test", "2.5");
double dbl = server_config.GetDoubleValue("float_test");
ASSERT_LE(std::abs(dbl - 2.5), std::numeric_limits<double>::epsilon());
float flt = server_config.GetFloatValue("float_test");
ASSERT_LE(std::abs(flt - 2.5), std::numeric_limits<float>::epsilon());
server_config.SetValue("bool_test", "true");
bool blt = server_config.GetBoolValue("bool_test");
ASSERT_TRUE(blt);
server_config.SetValue("int_test", "34");
int32_t it32 = server_config.GetInt32Value("int_test");
ASSERT_EQ(it32, 34);
int64_t it64 = server_config.GetInt64Value("int_test");
ASSERT_EQ(it64, 34);
milvus::server::ConfigNode fake;
server_config.AddChild("fake", fake);
@ -236,6 +250,7 @@ TEST_F(ConfigTest, SERVER_CONFIG_VALID_TEST) {
ASSERT_TRUE(s.ok());
ASSERT_TRUE(int64_val == engine_omp_thread_num);
#ifdef MILVUS_GPU_VERSION
int64_t engine_gpu_search_threshold = 800;
s = config.SetEngineConfigGpuSearchThreshold(std::to_string(engine_gpu_search_threshold));
ASSERT_TRUE(s.ok());
@ -251,7 +266,6 @@ TEST_F(ConfigTest, SERVER_CONFIG_VALID_TEST) {
ASSERT_TRUE(s.ok());
ASSERT_TRUE(bool_val == resource_enable_gpu);
#ifdef MILVUS_GPU_VERSION
int64_t gpu_cache_capacity = 1;
s = config.SetGpuResourceConfigCacheCapacity(std::to_string(gpu_cache_capacity));
ASSERT_TRUE(s.ok());
@ -389,6 +403,7 @@ TEST_F(ConfigTest, SERVER_CONFIG_INVALID_TEST) {
s = config.SetEngineConfigOmpThreadNum("10000");
ASSERT_FALSE(s.ok());
#ifdef MILVUS_GPU_VERSION
s = config.SetEngineConfigGpuSearchThreshold("-1");
ASSERT_FALSE(s.ok());
@ -396,7 +411,6 @@ TEST_F(ConfigTest, SERVER_CONFIG_INVALID_TEST) {
s = config.SetGpuResourceConfigEnable("ok");
ASSERT_FALSE(s.ok());
#ifdef MILVUS_GPU_VERSION
s = config.SetGpuResourceConfigCacheCapacity("a");
ASSERT_FALSE(s.ok());
s = config.SetGpuResourceConfigCacheCapacity("128");

View File

@ -313,6 +313,9 @@ TEST_F(RpcHandlerTest, TABLES_TEST) {
std::vector<std::vector<float>> record_array;
BuildVectors(0, VECTOR_COUNT, record_array);
::milvus::grpc::VectorIds vector_ids;
for (int64_t i = 0; i < VECTOR_COUNT; i++) {
vector_ids.add_vector_id_array(i);
}
// Insert vectors
// test invalid table name
handler->Insert(&context, &request, &vector_ids);

View File

@ -120,7 +120,13 @@ TEST(UtilTest, STRINGFUNCTIONS_TEST) {
milvus::server::StringHelpFunctions::SplitStringByDelimeter(str, ",", result);
ASSERT_EQ(result.size(), 3UL);
std::string merge_str;
milvus::server::StringHelpFunctions::MergeStringWithDelimeter(result, ",", merge_str);
ASSERT_EQ(merge_str, "a,b,c");
result.clear();
milvus::server::StringHelpFunctions::MergeStringWithDelimeter(result, ",", merge_str);
ASSERT_TRUE(merge_str.empty());
auto status = milvus::server::StringHelpFunctions::SplitStringByQuote(str, ",", "\"", result);
ASSERT_TRUE(status.ok());
ASSERT_EQ(result.size(), 3UL);
@ -211,6 +217,11 @@ TEST(UtilTest, STATUS_TEST) {
str = status.ToString();
ASSERT_FALSE(str.empty());
status = milvus::Status(milvus::DB_INVALID_PATH, "mistake");
ASSERT_EQ(status.code(), milvus::DB_INVALID_PATH);
str = status.ToString();
ASSERT_FALSE(str.empty());
status = milvus::Status(milvus::DB_META_TRANSACTION_FAILED, "mistake");
ASSERT_EQ(status.code(), milvus::DB_META_TRANSACTION_FAILED);
str = status.ToString();
@ -261,6 +272,10 @@ TEST(ValidationUtilTest, VALIDATE_TABLENAME_TEST) {
table_name = std::string(10000, 'a');
status = milvus::server::ValidationUtil::ValidateTableName(table_name);
ASSERT_EQ(status.code(), milvus::SERVER_INVALID_TABLE_NAME);
table_name = "";
status = milvus::server::ValidationUtil::ValidatePartitionName(table_name);
ASSERT_EQ(status.code(), milvus::SERVER_INVALID_TABLE_NAME);
}
TEST(ValidationUtilTest, VALIDATE_DIMENSION_TEST) {

View File

@ -16,25 +16,25 @@
### 软硬件环境
操作系统: CentOS Linux release 7.6.1810 (Core)
操作系统：CentOS Linux release 7.6.1810 (Core)
CPU: Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
CPU：Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
GPU0: GeForce GTX 1080
GPU0：GeForce GTX 1080
GPU1: GeForce GTX 1080
GPU1：GeForce GTX 1080
内存: 503GB
内存：503GB
Docker版本: 18.09
Docker版本：18.09
NVIDIA Driver版本: 430.34
NVIDIA Driver版本：430.34
Milvus版本: 0.5.3
Milvus版本：0.5.3
SDK接口: Python 3.6.8
SDK接口：Python 3.6.8
pymilvus版本: 0.2.5
pymilvus版本：0.2.5
@ -51,7 +51,7 @@ pymilvus版本: 0.2.5
### 测试指标
- Query Elapsed Time: 数据库查询所有向量的时间以秒计。影响Query Elapsed Time的变量:
- Query Elapsed Time：数据库查询所有向量的时间以秒计。影响Query Elapsed Time的变量：
- nq (被查询向量的数量)
@ -59,7 +59,7 @@ pymilvus版本: 0.2.5
>
> 被查询向量的数量nq将按照 [1, 5, 10, 200, 400, 600, 800, 1000]的数量分组。
- Recall: 实际返回的正确结果占总数之比 . 影响Recall的变量:
- Recall：实际返回的正确结果占总数之比。影响Recall的变量：
- nq (被查询向量的数量)
- topk (单条查询中最相似的K个结果)
@ -76,7 +76,7 @@ pymilvus版本: 0.2.5
### 测试环境
数据集: sift1b-1,000,000,000向量, 128维
数据集：sift1b-1,000,000,000向量，128维
表格属性:
@ -143,7 +143,7 @@ search_resources: cpu, gpu0
| nq=800 | 23.24 |
| nq=1000 | 27.41 |
当nq为1000时，GPU模式下查询一条128维向量需要耗时约27毫秒。
当nq为1000时，CPU模式下查询一条128维向量需要耗时约27毫秒。

View File

@ -139,7 +139,7 @@ topk = 100
**总结**
当nq小于1200时，查询耗时随nq的增长快速增大；当nq大于1200时，查询耗时的增大则缓慢许多。这是因为gpu_search_threshold这一参数的值被设为1200，当nq<1200时选择CPU进行操作，否则选择GPU进行操作，与CPU
当nq小于1200时，查询耗时随nq的增长快速增大；当nq大于1200时，查询耗时的增大则缓慢许多。这是因为gpu_search_threshold这一参数的值被设为1200，当nq小于1200时选择CPU进行操作，否则选择GPU进行操作。
在GPU模式下的查询耗时由两部分组成：（1）索引从CPU到GPU的拷贝时间；（2）所有分桶的查询时间。当nq小于500时，索引从CPU到GPU的拷贝时间无法被有效均摊，此时CPU模式是一个更优的选择；当nq大于500时，选择GPU模式更合理。和CPU相比，GPU具有更多的核数和更强的算力。当nq较大时，GPU在计算上的优势能被更好地体现。
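
The dispatch rule summarized here is what the scheduler passes earlier in this commit (`FaissFlatPass` and its siblings) implement. Below is a condensed, illustrative form of that decision; `PickSearchDevice` is a hypothetical name and the function is a sketch, not the production code:

```cpp
#include <cstdint>
#include <vector>

// Returns -1 to search on CPU, otherwise the GPU id to use. Small batches stay
// on the CPU because the index copy into GPU memory cannot be amortized; large
// batches are rotated across the configured search GPUs.
int64_t
PickSearchDevice(int64_t nq, int64_t gpu_search_threshold,
                 const std::vector<int64_t>& search_gpus, uint64_t& counter) {
    if (search_gpus.empty() || nq < gpu_search_threshold) {
        return -1;
    }
    int64_t gpu_id = search_gpus[counter % search_gpus.size()];
    ++counter;
    return gpu_id;
}
```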

View File

@ -54,7 +54,7 @@ Follow below steps to start a standalone Milvus instance with Mishards from sour
3. Start Milvus server.
```shell
$ sudo nvidia-docker run --rm -d -p 19530:19530 -v /tmp/milvus/db:/opt/milvus/db milvusdb/milvus:0.5.0-d102119-ede20b
$ sudo nvidia-docker run --rm -d -p 19530:19530 -v /tmp/milvus/db:/opt/milvus/db milvusdb/milvus
```
4. Update path permissions.

View File

@ -48,7 +48,7 @@ Python 版本为3.6及以上。
3. 启动 Milvus 服务。
```shell
$ sudo nvidia-docker run --rm -d -p 19530:19530 -v /tmp/milvus/db:/opt/milvus/db milvusdb/milvus:0.5.0-d102119-ede20b
$ sudo nvidia-docker run --rm -d -p 19530:19530 -v /tmp/milvus/db:/opt/milvus/db milvusdb/milvus
```
4. 更改目录权限。

View File

@ -3,14 +3,15 @@ services:
milvus_wr:
runtime: nvidia
restart: always
image: milvusdb/milvus:0.5.0-d102119-ede20b
image: milvusdb/milvus
volumes:
- /tmp/milvus/db:/opt/milvus/db
- ./wr_server.yml:/opt/milvus/conf/server_config.yaml
milvus_ro:
runtime: nvidia
restart: always
image: milvusdb/milvus:0.5.0-d102119-ede20b
image: milvusdb/milvus
volumes:
- /tmp/milvus/db:/opt/milvus/db
- ./ro_server.yml:/opt/milvus/conf/server_config.yaml

View File

@ -12,7 +12,7 @@ db_config:
# Keep 'dialect://:@:/', and replace other texts with real values
# Replace 'dialect' with 'mysql' or 'sqlite'
insert_buffer_size: 4 # GB, maximum insert buffer size allowed
insert_buffer_size: 1 # GB, maximum insert buffer size allowed
# sum of insert_buffer_size and cpu_cache_capacity cannot exceed total memory
preload_table: # preload data at startup, '*' means load all tables, empty value means no preload
@ -25,14 +25,14 @@ metric_config:
port: 8080 # port prometheus uses to fetch metrics
cache_config:
cpu_cache_capacity: 16 # GB, CPU memory used for cache
cpu_cache_capacity: 4 # GB, CPU memory used for cache
cpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered
gpu_cache_capacity: 4 # GB, GPU memory used for cache
gpu_cache_capacity: 1 # GB, GPU memory used for cache
gpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered
cache_insert_data: false # whether to load inserted data into cache
engine_config:
use_blas_threshold: 20 # if nq < use_blas_threshold, use SSE, faster with fluctuated response times
use_blas_threshold: 800 # if nq < use_blas_threshold, use SSE, faster with fluctuated response times
# if nq >= use_blas_threshold, use OpenBlas, slower with stable response times
resource_config:

View File

@ -0,0 +1,41 @@
server_config:
address: 0.0.0.0 # milvus server ip address (IPv4)
port: 19530 # port range: 1025 ~ 65534
deploy_mode: cluster_writable # deployment type: single, cluster_readonly, cluster_writable
time_zone: UTC+8
db_config:
primary_path: /opt/milvus # path used to store data and meta
secondary_path: # path used to store data only, split by semicolon
backend_url: sqlite://:@:/ # URI format: dialect://username:password@host:port/database
# Keep 'dialect://:@:/', and replace other texts with real values
# Replace 'dialect' with 'mysql' or 'sqlite'
insert_buffer_size: 2 # GB, maximum insert buffer size allowed
# sum of insert_buffer_size and cpu_cache_capacity cannot exceed total memory
preload_table: # preload data at startup, '*' means load all tables, empty value means no preload
# you can specify preload tables like this: table1,table2,table3
metric_config:
enable_monitor: false # enable monitoring or not
collector: prometheus # prometheus
prometheus_config:
port: 8080 # port prometheus uses to fetch metrics
cache_config:
cpu_cache_capacity: 2 # GB, CPU memory used for cache
cpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered
gpu_cache_capacity: 2 # GB, GPU memory used for cache
gpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered
cache_insert_data: false # whether to load inserted data into cache
engine_config:
use_blas_threshold: 800 # if nq < use_blas_threshold, use SSE, faster with fluctuated response times
# if nq >= use_blas_threshold, use OpenBlas, slower with stable response times
resource_config:
search_resources: # define the GPUs used for search computation, valid value: gpux
- gpu0
index_build_device: gpu0 # GPU used for building index

View File

@ -1,7 +1,7 @@
DEBUG=True
WOSERVER=tcp://127.0.0.1:19530
SERVER_PORT=19532
SERVER_PORT=19535
SERVER_TEST_PORT=19888
#SQLALCHEMY_DATABASE_URI=mysql+pymysql://root:root@127.0.0.1:3306/milvus?charset=utf8mb4
@ -19,7 +19,7 @@ TRACER_CLASS_NAME=jaeger
TRACING_SERVICE_NAME=fortest
TRACING_SAMPLER_TYPE=const
TRACING_SAMPLER_PARAM=1
TRACING_LOG_PAYLOAD=True
TRACING_LOG_PAYLOAD=False
#TRACING_SAMPLER_TYPE=probabilistic
#TRACING_SAMPLER_PARAM=0.5

Some files were not shown because too many files have changed in this diff.