### Engine configuration

Covers CPU and GPU configurations for the PaddlePaddle, PyTorch, MXNet, and TensorFlow engines, using either automatic or manual configuration. With automatic configuration, the native engine is downloaded from Amazon servers during the first run, which may be slow. With manual configuration, the native library is downloaded from a Maven repository (which can point to a domestic mirror), which is much faster. In addition, the native libraries are bundled with the application, so they do not need to be downloaded at runtime after deployment.
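To confirm which engine and native library were actually picked up (for example, after the first-run download has finished), you can query DJL's `Engine` API. The snippet below is a minimal sketch, not part of the original document; the class name `EngineCheck` is only illustrative.

```java
import ai.djl.engine.Engine;

// Minimal sketch: print the default engine that DJL resolved from the classpath.
// With automatic configuration, the first call to Engine.getInstance() is the
// point where the native library is downloaded.
public class EngineCheck {
    public static void main(String[] args) {
        Engine engine = Engine.getInstance();
        System.out.println("Default engine: " + engine.getEngineName());
        System.out.println("Engine version: " + engine.getVersion());
    }
}
```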
#### Supported engines and operating systems:

- MXNet - full support
- PyTorch - full support
- TensorFlow - supports inference and NDArray operations
- ONNX Runtime - supports basic inference
- PaddlePaddle - supports basic inference
- TFLite - supports basic inference
- TensorRT - supports basic inference
- DLR - supports basic inference
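Which of these engines are actually available in a given application depends on the dependencies on the classpath. As an illustrative sketch (the class name `ListEngines` is hypothetical, not from the original document), the registered engines and their visible GPUs can be listed like this:

```java
import ai.djl.engine.Engine;

// Sketch: enumerate the engines DJL discovered on the classpath.
// Note that Engine.getEngine(name) initializes the engine, which may trigger
// a native-library download when automatic configuration is used.
public class ListEngines {
    public static void main(String[] args) {
        for (String name : Engine.getAllEngines()) {
            Engine engine = Engine.getEngine(name);
            System.out.println(name + " " + engine.getVersion()
                    + ", visible GPUs: " + engine.getGpuCount());
        }
    }
}
```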
- Additional information: the following operating system versions are supported directly:
  - CentOS Linux release 7.9.2009 (Core) and above
  - Ubuntu 18.04 and above

#### Detailed configuration information (including CPU, GPU) reference links:

- [PaddlePaddle](http://docs.djl.ai/engines/paddlepaddle/paddlepaddle-engine/index.html)
- [PyTorch](http://docs.djl.ai/engines/pytorch/pytorch-engine/index.html)
- [MXNet](https://docs.djl.ai/engines/mxnet/mxnet-engine/index.html)
- [TensorFlow](https://docs.djl.ai/engines/tensorflow/tensorflow-engine/index.html)
- [ONNX Runtime](https://docs.djl.ai/engines/onnxruntime/onnxruntime-engine/index.html)
- [TensorFlow Lite](https://docs.djl.ai/engines/tflite/tflite-engine/index.html)
- [DLR](https://docs.djl.ai/engines/dlr/index.html)
- [TensorRT](https://docs.djl.ai/engines/tensorrt/index.html)

The TensorRT driver version can be found in the TRT_VERSION variable of the following Dockerfile:
https://github.com/deepjavalibrary/djl/blob/master/docker/tensorrt/Dockerfile

#### Configuration reference examples

#### Version 0.20.0 (the versions below may not be kept up to date; please refer to the documentation above)

#### 1. PaddlePaddle engine

1). Automatic configuration

```xml
<dependency>
    <groupId>ai.djl.paddlepaddle</groupId>
    <artifactId>paddlepaddle-engine</artifactId>
    <version>0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

2). macOS - CPU

```xml
<dependency>
    <groupId>ai.djl.paddlepaddle</groupId>
    <artifactId>paddlepaddle-native-cpu</artifactId>
    <classifier>osx-x86_64</classifier>
    <version>2.2.2</version>
    <scope>runtime</scope>
</dependency>
```

3). Linux - CPU

```xml
<dependency>
    <groupId>ai.djl.paddlepaddle</groupId>
    <artifactId>paddlepaddle-native-cpu</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>2.2.2</version>
    <scope>runtime</scope>
</dependency>
```

4). Linux - GPU

You need to set the LD_LIBRARY_PATH environment variable, for example in /etc/profile:

```text
LD_LIBRARY_PATH=$HOME/.djl.ai/paddle/x.x.x--linux-x86_64
```

```xml
<dependency>
    <groupId>ai.djl.paddlepaddle</groupId>
    <artifactId>paddlepaddle-native-cu102</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>2.2.2</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.paddlepaddle</groupId>
    <artifactId>paddlepaddle-native-cu112</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>2.2.2</version>
    <scope>runtime</scope>
</dependency>
```

- CUDA supported versions (artifact pattern paddlepaddle-native-cuXXX; search for ai.djl.paddlepaddle on Maven Central):

```text
CUDA 11.2: paddlepaddle-native-cu112
CUDA 11.0: paddlepaddle-native-cu110
CUDA 10.2: paddlepaddle-native-cu102
CUDA 10.1: paddlepaddle-native-cu101
```

5). Windows - CPU

```xml
<dependency>
    <groupId>ai.djl.paddlepaddle</groupId>
    <artifactId>paddlepaddle-native-cpu</artifactId>
    <classifier>win-x86_64</classifier>
    <version>2.2.2</version>
    <scope>runtime</scope>
</dependency>
```

6). Windows - GPU

```xml
<dependency>
    <groupId>ai.djl.paddlepaddle</groupId>
    <artifactId>paddlepaddle-native-cu112</artifactId>
    <classifier>win-x86_64</classifier>
    <version>2.2.2</version>
    <scope>runtime</scope>
</dependency>
```

- CUDA supported versions: same as listed under Linux - GPU above (paddlepaddle-native-cuXXX).

#### 2. PyTorch engine

1). Automatic configuration

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-engine</artifactId>
    <version>0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

##### If you need to specify the PyTorch version explicitly, you can use the following configuration (see http://docs.djl.ai/engines/pytorch/pytorch-engine/index.html):
| PyTorch engine version | PyTorch native library version |
|------------------------|--------------------------------|
| pytorch-engine:0.20.0  | 1.11.0, 1.12.1, 1.13.0         |
| pytorch-engine:0.19.0  | 1.10.0, 1.11.0, 1.12.1         |
| pytorch-engine:0.18.0  | 1.9.1, 1.10.0, 1.11.0          |
| pytorch-engine:0.17.0  | 1.9.1, 1.10.0, 1.11.0          |
| ......                 | ......                         |

2). macOS - CPU

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cpu</artifactId>
    <classifier>osx-x86_64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

3). macOS - M1

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cpu</artifactId>
    <classifier>osx-aarch64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

4). Linux - CPU

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cpu</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

5). For aarch64 build

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cpu-precxx11</artifactId>
    <classifier>linux-aarch64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

6). For Pre-CXX11 build

Packages are also provided for systems such as CentOS 7 / Ubuntu 14.04 with GLIBC >= 2.17. All packages were built with GCC 7, and a newer libstdc++.so.6.24 containing CXXABI_1.3.9 is included in the package so that it can be used successfully. For example:

```text
ai.djl.pytorch:pytorch-jni:1.11.0-0.17.0
ai.djl.pytorch:pytorch-native-cu113-precxx11:1.11.0:linux-x86_64 - CUDA 11.3
ai.djl.pytorch:pytorch-native-cpu-precxx11:1.11.0:linux-x86_64 - CPU
```

GPU (CUDA 11.7):

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cu117-precxx11</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

CPU:

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cpu-precxx11</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

7). Linux - GPU

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cu117</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

- CUDA supported versions (artifact pattern pytorch-native-cuXXX; search for ai.djl.pytorch on Maven Central):

```text
CUDA 11.7: pytorch-native-cu117
CUDA 11.6: pytorch-native-cu116
CUDA 11.3: pytorch-native-cu113
CUDA 11.1: pytorch-native-cu111
CUDA 11.0: pytorch-native-cu110
CUDA 10.2: pytorch-native-cu102
CUDA 10.1: pytorch-native-cu101
CUDA 9.2: pytorch-native-cu92
```

8). Windows - CPU

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cpu</artifactId>
    <classifier>win-x86_64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

9). Windows - GPU

```xml
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cu117</artifactId>
    <classifier>win-x86_64</classifier>
    <version>1.13.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.13.0-0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

- CUDA supported versions: same as listed under Linux - GPU above (pytorch-native-cuXXX).

#### 3. MXNet engine

1). Automatic configuration

```xml
<dependency>
    <groupId>ai.djl.mxnet</groupId>
    <artifactId>mxnet-engine</artifactId>
    <version>0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

2). macOS - CPU

```xml
<dependency>
    <groupId>ai.djl.mxnet</groupId>
    <artifactId>mxnet-native-mkl</artifactId>
    <classifier>osx-x86_64</classifier>
    <version>1.9.1</version>
    <scope>runtime</scope>
</dependency>
```

3). Linux - CPU

```xml
<dependency>
    <groupId>ai.djl.mxnet</groupId>
    <artifactId>mxnet-native-mkl</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>1.9.1</version>
    <scope>runtime</scope>
</dependency>
```

4). Linux - GPU

```xml
<dependency>
    <groupId>ai.djl.mxnet</groupId>
    <artifactId>mxnet-native-cu112mkl</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>1.9.1</version>
    <scope>runtime</scope>
</dependency>
```

- CUDA supported versions (artifact pattern mxnet-native-cuXXXmkl; search for ai.djl.mxnet on Maven Central):

```text
CUDA 11.2: mxnet-native-cu112mkl
CUDA 11.0: mxnet-native-cu110mkl
CUDA 10.2: mxnet-native-cu102mkl
CUDA 10.1: mxnet-native-cu101mkl
CUDA 9.2: mxnet-native-cu92mkl
```

5). Windows - CPU

```xml
<dependency>
    <groupId>ai.djl.mxnet</groupId>
    <artifactId>mxnet-native-mkl</artifactId>
    <classifier>win-x86_64</classifier>
    <version>1.9.1</version>
    <scope>runtime</scope>
</dependency>
```
6). Windows - GPU

```xml
<dependency>
    <groupId>ai.djl.mxnet</groupId>
    <artifactId>mxnet-native-cu112mkl</artifactId>
    <classifier>win-x86_64</classifier>
    <version>1.9.1</version>
    <scope>runtime</scope>
</dependency>
```

- CUDA supported versions: same as listed under Linux - GPU above (mxnet-native-cuXXXmkl).

#### 4. TensorFlow engine

1). Automatic configuration

```xml
<dependency>
    <groupId>ai.djl.tensorflow</groupId>
    <artifactId>tensorflow-engine</artifactId>
    <version>0.20.0</version>
    <scope>runtime</scope>
</dependency>
```

2). macOS - CPU

```xml
<dependency>
    <groupId>ai.djl.tensorflow</groupId>
    <artifactId>tensorflow-native-cpu</artifactId>
    <classifier>osx-x86_64</classifier>
    <version>2.7.0</version>
    <scope>runtime</scope>
</dependency>
```

3). Linux - CPU

```xml
<dependency>
    <groupId>ai.djl.tensorflow</groupId>
    <artifactId>tensorflow-native-cpu</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>2.7.0</version>
    <scope>runtime</scope>
</dependency>
```

4). Linux - GPU

```xml
<dependency>
    <groupId>ai.djl.tensorflow</groupId>
    <artifactId>tensorflow-native-cu113</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>2.7.0</version>
    <scope>runtime</scope>
</dependency>
```

- CUDA supported versions (artifact pattern tensorflow-native-cuXXX; search for ai.djl.tensorflow on Maven Central):

```text
CUDA 11.3: tensorflow-native-cu113
CUDA 11.0: tensorflow-native-cu110
CUDA 10.1: tensorflow-native-cu101
```

5). Windows - CPU

```xml
<dependency>
    <groupId>ai.djl.tensorflow</groupId>
    <artifactId>tensorflow-native-cpu</artifactId>
    <classifier>win-x86_64</classifier>
    <version>2.7.0</version>
    <scope>runtime</scope>
</dependency>
```

6). Windows - GPU

```xml
<dependency>
    <groupId>ai.djl.tensorflow</groupId>
    <artifactId>tensorflow-native-cu113</artifactId>
    <classifier>win-x86_64</classifier>
    <version>2.7.0</version>
    <scope>runtime</scope>
</dependency>
```

- CUDA supported versions: same as listed under Linux - GPU above (tensorflow-native-cuXXX).
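When several engines are on the classpath, a model can be pinned to a specific engine at load time. The sketch below is an illustration under stated assumptions, not taken from the original document: the model directory `build/model` is a placeholder, and the engine name should match one of the engines configured above.

```java
import java.nio.file.Paths;

import ai.djl.inference.Predictor;
import ai.djl.ndarray.NDList;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;

// Sketch: pin model loading to a specific engine via Criteria.optEngine.
public class EnginePinningExample {
    public static void main(String[] args) throws Exception {
        Criteria<NDList, NDList> criteria = Criteria.builder()
                .setTypes(NDList.class, NDList.class)
                .optModelPath(Paths.get("build/model")) // placeholder: point at your own model directory
                .optEngine("PyTorch")                   // e.g. "PyTorch", "TensorFlow", "MXNet", "PaddlePaddle"
                .build();
        try (ZooModel<NDList, NDList> model = criteria.loadModel();
             Predictor<NDList, NDList> predictor = model.newPredictor()) {
            // predictor.predict(input) would run inference on the pinned engine
            System.out.println("Loaded with engine: "
                    + model.getNDManager().getEngine().getEngineName());
        }
    }
}
```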