diff --git a/README.md b/README.md index f7d9deb05..f00d4144d 100644 --- a/README.md +++ b/README.md @@ -24,18 +24,19 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz --- -## 支持功能 +### 支持功能 当前ModelLink支撑大模型使用功能: * [制作预训练数据集](#jump11)/[制作指令微调数据集](#jump12) * [预训练](#jump13)/[全参微调](#jump14)/[低参微调](#jump15) -* [推理(人机对话)](#jump16) -* [评估基线数据集(Benchmark)](#jump17) -* [使用加速特性(加速算法+融合算子)](#jump18) +* [流式推理/人机对话](#jump16) +* [评估基线数据集](#jump17) +* [加速算法/融合算子/并行策略](#jump18) * [基于昇腾芯片采集Profiling数据](#jump19) +* [Huggingface与Megatron-LM权重转换](#jump20) 强化学习等特性持续研发中.... -## 支持模型 +### 支持模型 当前ModelLink支持下列模型的预训练以及微调: @@ -43,10 +44,12 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - + + + + - @@ -54,240 +57,294 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + - - + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + - - - - - + + + + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + +
模型 参数 微调 预训练 推理 LoRA SFT 对话 评估 数据集 贡献方
Aquila 7B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
Aquila2 7B pretrain generate -- 对话 评估 alpaca_data.json 【社区贡献模型】 -- -- eval 【社区贡献】
Baichuan 7B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
13B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
Baichuan2 7B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
13B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
Bloom 7B1 pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
176B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
CodeLlama 34B pretrain generate -- 对话 评估 alpaca_data.json 【社区贡献模型】 -- -- eval 【社区贡献】
InternLM 7B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
65B pretrain -- -- -- alpaca_data.json 【昇腾贡献模型】 -- -- 【昇腾贡献】
LLaMA 7B pretrain generate lora 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
13B pretrain generate lora 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
33B pretrain generate lora 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
65B pretrain generate lora 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
LLaMA2 7B pretrain generate lora 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
13B lora 对话 评估 alpaca_data.json 【昇腾贡献模型】 pretrain generate lora -- -- eval 【昇腾贡献】
34B pretrain generate lora 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
70B pretrain generate lora 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
LLaMA3 8B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- chat eval 【社区贡献】
70B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【社区贡献】
Qwen 7B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
14B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
72B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
Yi 34B pretrain generate -- 对话 评估 alpaca_data.json 【社区贡献模型】 -- -- eval 【社区贡献】
Mixtral 8x7B pretrain generate -- 对话 评估 alpaca_data.json 【昇腾贡献模型】 -- -- eval 【昇腾贡献】
-## 脚本命名规则 -| 脚本 | 规则 | -|:-------------------------:|:--------:| -| pretrain_xxx.sh | 预训练脚本 | -| tune_xxx.sh | 微调脚本 | -| generate_xxx.sh | 推理脚本 | -| evaluation_xxx.sh | 评估脚本 | +### 脚本命名规则 + +| 脚本 | 规则 | +|:-----------------:|:------:| +| pretrain_xxx.sh | 预训练脚本 | +| tune_xxx.sh | LoRA脚本 | +| generate_xxx.sh | 推理脚本 | +| xxxx_chat_xx.sh | 对话脚本 | +| evaluation_xxx.sh | 评估脚本 | --- -# 模型使用指导与版本说明 +## 模型版本与性能说明 上述列表中支持的模型,我们在[examples](./examples/)文件夹中提供了各模型的训练脚本和readme说明,里面有详细的模型训练、推理、评估流程。 @@ -296,14 +353,14 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz | 软件 | [版本](https://www.hiascend.com/zh/) | | :-----------------------: |:----------------------------------:| | Python | 3.8 | -| driver | Ascend HDK 23.0.0 | -| firmware | Ascend HDK 23.0.0 | -| CANN | CANN 7.0.0 | -| torch | 2.1.0 | -| torch_npu | release v5.0.0 | +| driver | Ascend HDK 23.0.0 | +| firmware | Ascend HDK 23.0.0 | +| CANN | CANN 7.0.0 | +| torch | 2.1.0、2.2.0 | +| torch_npu | release v5.0.0 | -【基于现版本megatron我们实测的性能情况统计如下】 +【基于现版本我们实测的性能情况统计如下】 @@ -314,7 +371,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -325,7 +381,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -334,7 +389,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -343,7 +397,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -351,7 +404,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -360,7 +412,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -368,7 +419,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -377,7 +427,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -385,7 +434,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -394,7 +442,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -403,7 +450,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -411,7 +457,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -420,7 +465,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -428,7 +472,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -436,7 +479,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -446,7 +488,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -455,7 +496,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -463,7 +503,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -471,7 +510,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -479,7 +517,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -488,7 +525,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -496,7 +532,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -505,7 +540,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -513,7 +547,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -521,7 +554,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -530,7 +562,6 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz - @@ -539,494 +570,13 @@ ModelLink旨在为华为 [昇腾芯片](https://open.codehub.huawei.com/OpenBaiz -
精度模式 性能 参考性能 脚本
BF16 2849 2874 训练
Aquila2 FP16 3323 2673 训练
Baichuan FP16 2685 2036 训练
13B FP16 1213 862 训练
Baichuan2 BF16 2664 3969 训练
13B BF16 1668 2062 训练
Bloom FP16 2034 2525 训练
176B BF16 100 107 训练
CodeLlama BF16 837 762 训练
InternLM BF16 2776 2854 训练
65B BF16 341 414 训练
LLaMA FP16 3600 3804 训练
13B FP16 1895 2012 训练
33B FP16 621 776 训练
65B BF16 348 426 训练
LLaMA2 BF16 4200 3850 训练
13B BF16 1990 1920 训练
34B BF16 690 796 训练
70B BF16 350 339 训练
LLaMA3 BF16 2483 2674 训练
70B BF16 283 -- 训练
Qwen BF16 2499 2867 训练
14B BF16 1560 1578 训练
72B BF16 285 345 训练
Yi BF16 809 730 训练
Mixtral BF16 1054 1139 训练
+--- - - -# 功能使用指导 - -## 制作预训练数据集/制作指令微调数据集 - -#### 快速开始 -使用[preprocess_data.py](tools/preprocess_data.py)数据预处理工具将raw数据处理为用于训练的二进制格式数据,下面是一个处理alpaca数据集的样例: - -```bash -# 对于llama, 可以下载alpaca数据集, 比如 -wget https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet - -# 下载 tokenizer 配置, 地址: -# https://huggingface.co/yahma/llama-7b-hf/tree/main -# 这里要将tokenizer_config.json中的"LLaMATokenizer"修改为"LlamaTokenizer"(这是huggingface的一个bug) -mkdir dataset -python tools/preprocess_data.py --input train-00000-of-00001-a09b74b3ef9c3b56.parquet \ - --output-prefix dataset/alpaca \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path llama-7b-hf \ - --tokenizer-not-use-fast \ - --handler-name GeneralInstructionHandler -``` -输出将是两个文件,在本例中名为alpaca_packed_input_ids_document.bin和alpaca_packed_input_ids_document.idx,后面的训练中指定--data-path的是完整路径和新文件名,但不带文件扩展名。使用--tokenizer-type指定模型对应的数据预处理方法,使用--tokenizer-name-or-path指定tokenizer模型路径,通常是与开源项目中的预训练权重一起下载,--handler-name指定数据集的指令数据构造方法。 - -#### 制作预训练数据集 - -##### wikipedia 数据集 - -+ 下载 [wikipedia](https://huggingface.co/datasets/wikipedia/tree/main) 数据集到 WORKSPACE/wikipedia 目录 -+ 下载 [llama tokenizer](https://huggingface.co/yahma/llama-7b-hf/tree/main) 配置到 WORKSPACE/llama-7b-hf 目录 -+ 再使用如下脚本处理数据集 - -```shell -# 这里认为 数据集 和 tokenizer 已经下载放到了 WORKSPACE. -cd WORKSPACE -mkdir wikipedia_preprocessed - -hf_config_json="./hf_config_json.json" -cat < $hf_config_json -{ - "path": "WORKSPACE/wikipedia", - "name": "20220301.en", - "streaming: True, - "split": "train" -} -EOT - -python tools/preprocess_data.py \ - --input "WORKSPACE/wikipedia" \ - --hf-datasets-params ${hf_config_json} \ - --output-prefix WORKSPACE/wikipedia_preprocessed/wikipedia \ - --dataset-impl mmap \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path WORKSPACE/llama-7b-hf \ - --tokenizer-not-use-fast \ - --streaming \ - --workers 8 -``` - -处理完后, `WORKSPACE/wikipedia_preprocessed` 文件夹下会有 `wikipedia_text_document.bin` 和 `wikipedia_text_document.idx` 文件, 我们便可以使用 `--data-path WORKSPACE/wikipedia_preprocessed/wikipedia_text_document` 标志训练模型了 - -请注意huggingface中的数据集格式是[这样](https://huggingface.co/datasets/wikipedia/viewer/20220301.en/train)的. 
我们处理数据时利用的数据列可以通过 `--json-key` 标志设置,默认为 `text`, -比如,wikipedia数据集有四列, 包括 `id`, `url`, `title` 和 `text`, 我们就可以通过 `--json-key` 标志选择一列处理该数据集 - -##### alpaca 数据集 - -此外, 我们也可以使用 [alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet) 数据集用于预训练如下: - -```shell -python tools/preprocess_data.py --input WORKSPACE/train-00000-of-00001-a09b74b3ef9c3b56.parquet \ - --output-prefix WORKSPACE/alpaca_preprocessed/alpaca \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path WORKSPACE/llama-7b-hf \ - --tokenizer-not-use-fast \ - --json-key text -``` - - -#### 制作指令微调数据集 -##### alpaca 数据集 -```bash -# 数据集:wget https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet - -cd WORKSPACE -mkdir alpaca_preprocessed -python tools/preprocess_data.py --input WORKSPACE/alpaca/train-00000-of-00001-a09b74b3ef9c3b56.parquet \ - --output-prefix WORKSPACE/alpaca_preprocessed/alpaca \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path WORKSPACE/llama-7b-hf \ - --tokenizer-not-use-fast \ - --handler-name GeneralInstructionHandler \ - --append-eod -``` - -在处理后,`WORKSPACE/alpaca_preprocessed` 文件夹下会有3个 `bin` 文件 和 3个 `idx` 文件,我们便可以通过添加 `--data-path WORKSPACE/alpaca_preprocessed/alpaca` 和 `--is-instruction-dataset` 标志来进行指令微调。 -此外,基于指令数据集,我们还可以通过加上 `--variable-seq-lengths` 标志使用动态序列长度训练模型。 - -请注意,使用 `--handler-name GeneralInstructionHandler` 标志的指令数据集,在处理时会从 `modellink/data/data_handler.py` 中选择 `GeneralInstructionHandler` 类来制作prompt。如果你处理的是 alpaca 格式风格的数据集,即包含 `instruction`, `input` 和 `output` 列的数据集,可以直接使用 `--handler-name GeneralInstructionHandler` 标志。 -此外,`BelleMultiTurnInstructionHandler` 可以被用于处理 [belle](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) 格式的数据集,`MOSSInstructionHandler` 可以被用于处理 [MOSS](https://huggingface.co/datasets/fnlp/moss-003-sft-data) 格式的数据集,`LeetcodePythonInstructionHandler` 可以被用于处理 [Leetcode](https://huggingface.co/datasets/mhhmm/leetcode-solutions-python) 风格的数据集 - -### 预训练 -```shell - # 配置LLaMA-7B 预训练脚本: pretrain_llama_7b.sh - # 根据实际情况配置词表、数据集、模型参数保存路径 - TOKENIZER_PATH=WORKSPACE/llama-7b-hf/tokenizer.model #词表路径 - DATA_PATH=WORKSPACE/alpaca_preprocessed/alpaca_text_document #预训练数据集路径 -``` - -启动 LLaMA-7B 预训练脚本: examples/llama/pretrain_llama_7b_ptd.sh -```shell - bash examples/llama2/pretrain_llama_7b_ptd.sh -``` - -### 全参微调 -```shell - # 在预训练脚本的基础上,给出预训练权重路径,数据集使用指令数据集路径,使能微调开关--finetune - LOAD_CHECKPOINT_PATH="your init model weight load path" - DATA_PATH=WORKSPACE/alpaca_preprocessed/alpaca_text_document #指令微调数据集路径 - - torchrun $DISTRIBUTED_ARGS pretrain_gpt.py \ - --load ${LOAD_CHECKPOINT_PATH} \ - --finetune \ - ... \ - ... 
-``` - - -### 低参微调 -#### Lora - -当前 ModelLink基于 peft 仓库支持对大模型的 Lora 微调功能: - -```shell -pip install peft==0.4.0 -``` -当torch==1.11.0的时候,你也可以选择直接从它Github仓库的 [源码安装](https://github.com/huggingface/peft/archive/refs/tags/v0.4.0.tar.gz), 通过修改它的setup.py文件来回避一些依赖问题。 - -之后,你仅仅只需要在启动脚本中使能如下标志便可以启动lora微调训练: - -```shell -# Llama example ---lora-target-modules query_key_value dense gate_proj dense_h_to_4h dense_4h_to_h \ -``` - -Lora有一些相关参数,在 [PEFT](https://github.com/huggingface/peft) 仓库中有详细介绍,比如: - -```shell -# Llama example ---lora-r 64 \ ---lora-alpha 128 \ ---lora-modules-to-save word_embeddings output_layer \ ---lora-register-forward-hook word_embeddings input_layernorm \ -``` - -在这些参数中,标志 `--lora-register-forward-hook` 被用于修复由PP造成的梯度链中断,它仅仅只需要在每一个PP阶段的输入层设置,并不会增加训练参数。 标志 `--lora-modules-to-save` 被用于扩展词表时的微调,若没此需求则无需传入此参数。 - -最后,Lora微调后保存的权重仅仅只会包含新增的Lora权重。相似的,当你加载一个Lora模型时,除了原始权重路径需要设置,还需要设置一个加载Lora权重的路径,如下: - -```shell ---load ${ORIGIN_CHECKPOINT} \ ---lora-load ${LORA_CHECKPOINT} \ -``` - -这个 [例子](examples/llama/tune_llama_ptd_13b.sh) 可以用于参考。 - -在使用 Lora 微调 Llama 模型以后,指令对话的效果如下: - -```shell -You >> Give three tips for staying healthy. - -ModelLink: - -- Start exercising regularly and eat healthy food. -- Get a good eight hours of sleep each night. -- Take medications regularly. -``` - -### 推理( 人机对话) -当前,我们支持使用如下策略训练的模型进行推理: -当前,我们支持使用如下并行策略训练的模型进行推理: -- 仅仅使用 PTD 策略训练的模型 -- 使用 Lora 策略微调的模型 - -【同时对于已经支持的模型,我们提供了样例,请参考下列快速开始】 - -#### 快速开始 - -1. 如果你尝试使用 huggingface 的模型权重,请首先进行权重转换, 以 Llama-7B 为例: - - PTD 策略的转换 - ```bash - python tools/checkpoint/convert_ckpt.py --model-type GPT \ - --loader llama2_hf \ - --saver megatron \ - --target-tensor-parallel-size 1 \ - --target-pipeline-parallel-size 8 \ - --load-dir ./model_from_hf/llama-7b-hf \ - --save-dir ./model_weights/llama-7b-tp1-pp8 \ - --tokenizer-model ./model_from_hf/llama-7b-hf/tokenizer.model - ``` - - -5. 下面脚本中的一些路径需要修改,比如:模型权重路径 和 词表路径. - - - 仅仅使用 PTD 策略训练的模型:在这种模式下,模型以 Megatron-LM 的风格被 流水并行 和 张量并行 切分 - ```bash - sh examples/llama/generate_llama_7b_ptd.sh - ``` - - 如果你仅仅使用 Lora, 可以参考: - ```bash - sh examples/llama/generate_llama_7b_lora_ptd.sh - ``` - -#### 使用手册 -这里列举了一些使用 [Chinese-LLaMA-Alpaca-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) 权重进行推理的例子, -同时依据下列步骤可以写出你自己的推理例子: - -##### 初始化分布式环境 -```python -initialize_megatron(args_defaults={'no_load_rng': True, 'no_load_optim': True}) -``` -##### 初始化模型和权重 - -```python -from modellink import get_args -from modellink.model import GPTModel -from modellink.arguments import core_transformer_config_from_args - - -def model_provider(pre_process=True, post_process=True): - """Build the model.""" - config = core_transformer_config_from_args(get_args()) - init_model = GPTModel( - config, - num_tokentypes=0, - parallel_output=False, - return_moe_loss=False, - pre_process=pre_process, - post_process=post_process - ) - return init_model - - -model = GPTModel.from_pretrained( - model_provider=model_provider, - pretrained_model_name_or_path="your model weight path" -) - -""" -This is an API for initializing model and loading weight. - -Parameters: ----------- -model_provider(`func`): - Function used to generate model objects which is similar to the training define. -pretrained_model_name_or_path(`str`, *optional*, defaults to None): - File path of Model weight in megatron format (TP, PP may be used). - If it is None, the random initialized weights will be used. 
-""" -``` -##### 以 HuggingFace 的风格生成文本 - -- Greedy Search - ```python - responses = model.generate( - "Write quick sort code in python", - max_new_tokens=512 - ) - ``` - - -- Do sample with top-k and top-p - ```python - responses = model.generate( - "Write quick sort code in python", - do_sample=True, - temperature=1.0, - top_k=50, - top_p=0.95, - max_new_tokens=512 - ) - ``` - - -- Beam search with top-k and top-p - ```python - responses = model.generate( - "Write quick sort code in python", - num_beams=4, - top_k=50, - top_p=0.95, - max_new_tokens=512 - ) - ``` - - -- Beam search with top-k and top-p sampling - ```python - responses = model.generate( - "Write quick sort code in python", - do_sample=True, - temperature=0.6, - num_beams=4, - top_k=50, - top_p=0.95, - max_new_tokens=512 - ) - ``` - - -### 评估基线数据集(Benchmark) - - - -#### 数据集评估结果参考 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
任务 验证集 模型 昇腾值 参考值 社区值
BBH test Llama7b 0.334 0.333 0.335
AGIEval test Llama7b 0.210 0.210 0.206
HumanEval test Llama7b 0.128 0.128 0.128
BoolQ test Llama7b 0.742 0.742 0.754
GSM8K test Llama7b 0.102 0.103 0.100
CEval val Llama7b 0.408 0.404 /
MMLU test Llama7b 0.333 0.324 0.351
- -#### 快速开始 -```bash -# 配置模型和词表路径 -# 词表路径地址:https://huggingface.co/yahma/llama-7b-hf -CHECKPOINT=../models/llama-7b-tp2-pp4/ -VOCAB_FILE=../models/llama7b-hf/ -# 配置任务和数据路径 -DATA_PATH="dataset/boolq/test" -TASK="boolq" -# 配置生成参数 -python -m torch.distributed.launch $DISTRIBUTED_ARGS evaluation.py \ - --task-data-path $DATA_PATH \ - --task $TASK\ - --seq-length 512 \ - --max-new-tokens 1 \ - --evaluation-batch-size 1 \ - --max-position-embeddings 512 \ - --tensor-model-parallel-size 2 \ - --pipeline-model-parallel-size 4 \ - --num-layers 32 \ - --hidden-size 4096 \ - --ffn-hidden-size 11008 \ - --load ${CHECKPOINT[images](sources%2Fimages)} \ - --num-attention-heads 32 \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path $VOCAB_FILE \ - --tokenizer-not-use-fast \ - --fp16 \ - --micro-batch-size 1 \ - --seed 42 | tee logs/train.log -# 开启评估 -bash examples/llama/evaluate_llama_7B_ptd.sh -``` - -最重要的评估参数是 `--max-new-tokens`, 它表示模型输出的生成长度,比如,多项选择问题的输出长度就会明显比编码任务的输出长度小,该参数也很大程度上影响了模型的评估性能。通过--evaluation-batch-size参数可以设置多batch推理,提升模型评估性能。 - -```bash -python -m torch.distributed.launch $DISTRIBUTED_ARGS evaluation.py \ - --task-data-path $DATA_PATH \ - --task $TASK\ - --seq-length 512 \ - --max-new-tokens 1 \ - --evaluation-batch-size 1 \ - --max-position-embeddings 512 \ - --tensor-model-parallel-size 2 \ - --pipeline-model-parallel-size 4 \ - --num-layers 32 \ - --hidden-size 4096 \ - --ffn-hidden-size 11008 \ - --load ${CHECKPOINT} \ - --num-attention-heads 32 \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path $VOCAB_FILE \ - --tokenizer-not-use-fast \ - --fp16 \ - --micro-batch-size 1 \ - --seed 42 | tee logs/train.log -``` -#### 评估脚本说明 - -#### 基线数据集介绍 - - -##### AGIEval -AGIEval 是一个用于评估大模型在人类认知和问题解决能力方面生成能力的基准数据集,它源于20个面向普通考生的官方、公开和高标准的入学和资格考试,相关参数可以设置为 `TASK="agieval"`, `--max-new-token=5`。 - -##### HumanEval -HumanEval 是一个用于挑战代码生成问题的数据集,具有164个编程问题,包含函数签名,文档,函数主体和单元测试等。该数据的所有问题都是手写的,以确保它们不在训练集中,由于答案包含长代码,相关参数可以设置为 `TASK="human_eval"`, `--max-new-token=200`。 - - -##### BoolQ - -BoolQ 是一个 yes/no 的问答数据集, 每一个问题包含了一个(问题,文章,答案)三元组,同时有文章的标题作为额外的选择性输入。BoolQ 数据集的评估相对简单,只需要配置 `TASK="boolq"`, `--max-new-token=1`。 -零样本评估的结果通常会被给定的 prompt 影响,可以尝试通过在 `evaluation.py` 中设置合适的 prompt 得到更高的分数, - -```bash -# 通过修改 template 更新prompt -template = {instruction} -``` - -##### Big-Bench-Hard -Big-bench-hard 数据集是 BIG-Bench 的一个子集,专注于有挑战性的23个 BIG-Bench 任务, 涵盖文本理解、推理、逻辑推理、数学推理和常识推理等多个领域,相关参数可以设置为 `TASK="bbh"`, `--max-new-token=32`,`--evaluation-batch-size=4`。 - -##### GSM8K -GSM8K 是一个有8.5k高质量小学数学应用题文本的数据集,每一个问题的回答是具体的数字。由于该数据集通常采用 few-shot 的形式进行评估,GSM8K的问题长度相对是比较长的,输出答案包含一整个思维链路,相关入参应该设置为 `TASK="gsm8k"`, `--max-new-token=200`. - -##### CEval -如 [C-Eval](https://cevalbenchmark.com/) 展示的, C-Eval 是一个针对大模型的综合中文评估数据集, 它由13948道多项选择题组成,涵盖52个不同学科和4个难度级别,划分为验证和测试集,验证集包含标签用于个人评估,测试集合的标签没有公开,如果想要知道模型得分,需要将结果 邮件发送给 [C-Eval](https://cevalbenchmark.com/),相关参数可以设置为 `TASK="ceval"`, `--max-new-token=1`。 - -##### MMLU -由于 MMLU 是一项多学科任务,并且需要进行 5-shot 评估,因此每个学科问题的长度差异很大。如果你想同时跑57个学科任务,可以尝试设置 `TASK="mmlu"`, `--max-new-token=1` 。 -在很多网站,MMLU 的精度会依据学科进行评估,57个学科主要属于四个大类, 因此该数据集也可以基于四个大类进行打分,[网站](https://github.com/hendrycks/test/blob/master/categories.py) 给出了具体的57个类别。 - -### 使用加速特性 +## 加速算法与融合算子 ModelLink支持张量并行、流水线并行、序列并行、重计算、分布式优化器等多种加速算法和融合算子,下表为各种加速特性对应的使能开关: | 加速特性 | 使能参数 | @@ -1067,6 +617,7 @@ torchrun $DISTRIBUTED_ARGS pretrain_gpt.py \ ... 
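# 常见加速特性使能开关示意(仅为参考示例,这里假设沿用 Megatron 风格的参数名,具体请以上文表格中的“使能参数”列为准):
#   --tensor-model-parallel-size 8 \
#   --pipeline-model-parallel-size 2 \
#   --sequence-parallel \
#   --use-distributed-optimizer \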
``` +--- ## 基于昇腾芯片采集Profiling数据 Modellink支持基于昇腾芯片采集profiling数据,以提供对模型运行情况的分析,主要API如下: diff --git a/README_en.md b/README_en.md index 451d9e392..c4adc21aa 100644 --- a/README_en.md +++ b/README_en.md @@ -23,7 +23,7 @@ ModelLink provides end-to-end solutions for large language models on Ascend chip --- -## supported features +### Supported features Current ModelLink supported features for large model usage: * [Dataset Preparation for Pre-training](#jump11)/[Fine-tuning Instruction Dataset Preparation](#jump12) * [Pre-training](#jump13)/[Full-parameter Fine-tuning](#jump14)/[Low-parameter Fine-tuning](#jump15) @@ -31,22 +31,24 @@ Current ModelLink supported features for large model usage: * [Evaluation with numerous benchmarks](#jump17) * [Utilizing Acceleration Features (Acceleration Algorithms + Fusion Operators)](#jump18) * [Profiling data based on Ascend chips](#jump19) +* [convert ckpt between huggingface and megatron](#jump19) More novel and useful features are developing for LLMs training on Ascend ... -## Supported Models +### Supported Models Current ModelLink supports pre-training and fine-tuning for the following models: - - - + + + + + - @@ -54,235 +56,288 @@ Current ModelLink supports pre-training and fine-tuning for the following models + + - - - - + + + + - - + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + - - + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + - - - - - + + + + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + + + - - - - + + + + - + + + - - - - + + + + + + - - - - + + + +
Model Parameters Fine-tuning Scale Pretrain Inference LoRA SFT Chat Evaluation Dataset Support Contributor
Aquila 7B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
Aquila2 7B Aquila2 7B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Community】 -- -- eval 【Community】
Baichuan 7B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
13B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
Baichuan2 7B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
13B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
Bloom 7B1 pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
176B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
CodeLlama 34B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Community】 -- -- eval 【Community】
InternLM 7B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
65B pretrain -- -- -- alpaca_data.json 【Model contributed by Ascend】 -- -- 【Ascend】
LLaMA 7B pretrain generate lora inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
13B pretrain generate lora inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
33B pretrain generate lora inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
65B pretrain generate lora inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
LLaMA2 7B pretrain generate lora inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
13B lora inference evaluation alpaca_data.json 【Model contributed by Ascend】 pretrain generate lora -- -- eval 【Ascend】
34B pretrain generate lora inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
70B pretrain generate lora inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
LLaMA3 8B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- chat eval 【Community】
70B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Community】
Qwen 7B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
14B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
72B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
Yi Yi 34B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Community】 -- -- eval 【Community】
Mixtral 8x7B pretrain generate -- inference evaluation alpaca_data.json 【Model contributed by Ascend】 -- -- eval 【Ascend】
-## Script Naming Rules -| Script | Rule | -|:-------------------------:|:--------:| -| pretrain_xxx.sh | Pre-training Script | -| tune_xxx.sh | Fine-tuning Script | -| generate_xxx.sh | Inference Script | -| evaluation_xxx.sh | Evaluation Script | +### Script Naming Rules +| Script | Rule | +|:-----------------:|:-------------------:| +| pretrain_xxx.sh | Pre-training Script | +| tune_xxx.sh | Fine-tuning Script | +| generate_xxx.sh | Inference Script | +| xxx_chat_xxx.sh | Chat Script | +| evaluation_xxx.sh | Evaluation Script | --- @@ -295,13 +350,13 @@ For the supported models listed above, we provide training scripts and readme in 【Please note the corresponding environment versions for model usage, as follows】 | Software | [Version](https://www.hiascend.com/zh/) | -| :-----------------------: |:----------------------------------:| -| Python | 3.8 | -| driver | Ascend HDK 23.0.0 | -| firmware | Ascend HDK 23.0.0 | -| CANN | CANN 7.0.0 | -| torch | 2.1.0 | -| torch_npu | release v5.0.0 | +| :-----------------------: |:---------------------------------------:| +| Python | 3.8 | +| driver | Ascend HDK 23.0.0 | +| firmware | Ascend HDK 23.0.0 | +| CANN | CANN 7.0.0 | +| torch | 2.1.0、2.2.0 | +| torch_npu | release v5.0.0 | 【Based on the current version of megatron, the performance statistics from our testing are as follows】 @@ -315,7 +370,6 @@ For the supported models listed above, we provide training scripts and readme in Precision Mode Performance Reference Performance - Scripts @@ -326,7 +380,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 2849 2874 - train Aquila2 @@ -335,7 +388,6 @@ For the supported models listed above, we provide training scripts and readme in FP16 3323 2673 - train Baichuan @@ -344,7 +396,6 @@ For the supported models listed above, we provide training scripts and readme in FP16 2685 2036 - train 13B @@ -352,7 +403,6 @@ For the supported models listed above, we provide training scripts and readme in FP16 1213 862 - train Baichuan2 @@ -361,7 +411,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 2664 3969 - train 13B @@ -369,7 +418,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 1668 2062 - train Bloom @@ -378,7 +426,6 @@ For the supported models listed above, we provide training scripts and readme in FP16 2034 2525 - train 176B @@ -386,7 +433,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 100 107 - train CodeLlama @@ -395,7 +441,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 837 762 - train InternLM @@ -404,7 +449,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 2776 2854 - train 65B @@ -412,7 +456,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 341 414 - train LLaMA @@ -421,7 +464,6 @@ For the supported models listed above, we provide training scripts and readme in FP16 3600 3804 - train 13B @@ -429,7 +471,6 @@ For the supported models listed above, we provide training scripts and readme in FP16 1895 2012 - train 33B @@ -437,7 +478,6 @@ For the supported models listed above, we provide training scripts and readme in FP16 621 776 - train 65B @@ -447,7 +487,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 348 426 - train LLaMA2 @@ -456,7 +495,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 
4200 3850 - train 13B @@ -464,7 +502,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 1990 1920 - train 34B @@ -472,7 +509,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 690 796 - train 70B @@ -480,7 +516,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 350 339 - train LLaMA3 @@ -489,7 +524,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 2483 2674 - train 70B @@ -497,7 +531,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 283 -- - train Qwen @@ -506,7 +539,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 2499 2867 - train 14B @@ -514,7 +546,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 1560 1578 - train 72B @@ -522,7 +553,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 285 345 - train Yi @@ -531,7 +561,6 @@ For the supported models listed above, we provide training scripts and readme in BF16 809 730 - train Mixtral @@ -540,499 +569,13 @@ For the supported models listed above, we provide training scripts and readme in BF16 1054 1139 - train +--- - -# Function Usage Guide - -## Instruction/Pretraining dataset support - -#### Quick Start -Use the [preprocess_data.py](tools/preprocess_data.py) data preprocessing tool to process raw data into binary format data for training. Below is an example of processing the Alpaca dataset: - -```bash -# for llama, download alpaca dataset, like -wget https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet - -# download tokenizer configs and (selective) weights from -# https://huggingface.co/yahma/llama-7b-hf/tree/main -# revise "LLaMATokenizer" as "LlamaTokenizer" in tokenizer_config.json (This is a bug of huggingface) -mkdir dataset -python tools/preprocess_data.py --input train-00000-of-00001-a09b74b3ef9c3b56.parquet \ - --output-prefix dataset/alpaca \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path llama-7b-hf \ - --tokenizer-not-use-fast \ - --handler-name GeneralInstructionHandler -``` - -The output will be two files, named alpaca_packed_input_ids_document.bin and alpaca_packed_input_ids_document.idx. In subsequent training, specify --data-path with the full path and new filename, but without the file extension. Use --tokenizer-type to specify the data preprocessing method corresponding to the model, --tokenizer-name-or-path to specify the tokenizer model path, usually downloaded along with the pre-trained weights in the open-source project, and --handler-name to specify the data set's instruction data construction method. - -#### reprocessing pretraining dataset - -##### wikipedia dataset - -+ download [wikipedia data](https://huggingface.co/datasets/wikipedia/tree/main) from huggingface to WORKSPACE/wikipedia -+ download [llama tokenizer model and config](https://huggingface.co/yahma/llama-7b-hf/tree/main) from huggingface to WORKSPACE/llama-7b-hf -+ use preprocessing script to preprocess wikipedia data - -```shell -# We assume that data and tokenizer has already been downloaded to WORKSPACE. 
-cd WORKSPACE -mkdir wikipedia_preprocessed - -# specify huggingface load_dataset parameters.(--input param will be ignored) -# these params will just be feed into datasets.load_dataset function -hf_config_json="./hf_config_json.json" -cat < $hf_config_json -{ - "path": "WORKSPACE/wikipedia", - "name": "20220301.en", - "streaming: True, - "split": "train" -} -EOT - -python tools/preprocess_data.py \ - --input "WORKSPACE/wikipedia" \ - --hf-datasets-params ${hf_config_json} \ - --output-prefix WORKSPACE/wikipedia_preprocessed/wikipedia \ - --dataset-impl mmap \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path WORKSPACE/llama-7b-hf \ - --tokenizer-not-use-fast \ - --streaming \ - --workers 8 -``` - -After preprocessing, there will be a `wikipedia_text_document.bin` and a `wikipedia_text_document.idx` in the `WORKSPACE/wikipedia_preprocessed` dictionary. -Then, we can train a model with `--data-path WORKSPACE/wikipedia_preprocessed/wikipedia_text_document` flag. - -Note that datasets in huggingface have a format like [this](https://huggingface.co/datasets/wikipedia/viewer/20220301.en/train). The name of the text field of the dataset can be changed by using the `--json-key` flag which default is `text`. -In wikipedia dataset, it has four columns, including `id`, `url`, `title` and `text`, where we can choose a column used for training by `--json-key` flag. - -##### alpaca dataset - -Besides, we can also use [alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet) dataset for pretraining as below. - -```shell -python tools/preprocess_data.py --input WORKSPACE/train-00000-of-00001-a09b74b3ef9c3b56.parquet \ - --output-prefix WORKSPACE/alpaca_preprocessed/alpaca \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path WORKSPACE/llama-7b-hf \ - --tokenizer-not-use-fast \ - --json-key text -``` - - -#### Preprocessing instruction dataset -##### alpaca dataset -```bash -# for llama, download alpaca dataset, like -# wget https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet - -# download tokenizer configs and (selective) weights from -# https://huggingface.co/yahma/llama-7b-hf/tree/main -# revise "LLaMATokenizer" as "LlamaTokenizer" in tokenizer_config.json (This is a bug of huggingface) - -cd WORKSPACE -mkdir alpaca_preprocessed -python tools/preprocess_data.py --input WORKSPACE/alpaca/train-00000-of-00001-a09b74b3ef9c3b56.parquet \ - --output-prefix WORKSPACE/alpaca_preprocessed/alpaca \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path WORKSPACE/llama-7b-hf \ - --tokenizer-not-use-fast \ - --handler-name GeneralInstructionHandler \ - --append-eod -``` - -After preprocessing, there will be three `bin` files and three `idx` files in the `WORKSPACE/alpaca_preprocessed` dictionary. Then, we can train a model with `--data-path WORKSPACE/alpaca_preprocessed/alpaca` and `--is-instruction-dataset` flags. -In addition, we have developed the dynamic padding function based on the instruction dataset, which can be implemented using the `--variable-seq-lengths` flag. - -Note that instruction dataset has a `--handler-name GeneralInstructionHandler` flag which will choose `GeneralInstructionHandler` class to create prompt in `modellink/data/data_handler.py`. -If you have an alpaca-style dataset which have `instruction`, `input` and `output` columns, just use `GeneralInstructionHandler`. 
-In addition, `BelleMultiTurnInstructionHandler` is used to handle [belle dataset](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M), -`MOSSInstructionHandler` is used to handle [MOSS dataset](https://huggingface.co/datasets/fnlp/moss-003-sft-data) and `LeetcodePythonInstructionHandler` is used to handle [Leetcode dataset](https://huggingface.co/datasets/mhhmm/leetcode-solutions-python). -### Pre-training -```shell - # Configure LLaMA-7B pre-training script: pretrain_llama_7b.sh - # Configure vocabulary, dataset, and model parameter saving path according to actual conditions - TOKENIZER_PATH=WORKSPACE/llama-7b-hf/tokenizer.model # Path to the vocabulary - DATA_PATH=WORKSPACE/alpaca_preprocessed/alpaca_text_document # Path to pre-training dataset -``` - -Launch LLaMA-7B pre-training script: examples/llama/pretrain_llama_7b_ptd.sh -```shell - bash examples/llama2/pretrain_llama_7b_ptd.sh -``` - -### Full-parameter Fine-tuning -```shell - # Based on the pre-training script, provide the pre-training weight path, use instruction dataset path, and enable fine-tuning switch --finetune - LOAD_CHECKPOINT_PATH="your init model weight load path" - DATA_PATH=WORKSPACE/alpaca_preprocessed/alpaca_text_document # Instruction fine-tuning dataset path - - torchrun $DISTRIBUTED_ARGS pretrain_gpt.py \ - --load ${LOAD_CHECKPOINT_PATH} \ - --finetune \ - ... \ - ... -``` - - -### Low-parameter fine-tuning -#### Lora - -Now, we support Lora to fine-tune your models. - -First, you need to install version 0.4.0 of the peft library, like this: -```shell -pip install peft==0.4.0 -``` -When torch==1.11.0, You can also choose to install from [the source package in the GitHub repository](https://github.com/huggingface/peft/archive/refs/tags/v0.4.0.tar.gz), so you can modify the setup.py file to avoid some dependency issues. - -Next, you just need to add this argument in your script to open Lora: - -```shell -# Llama example ---lora-target-modules query_key_value dense gate_proj dense_h_to_4h dense_4h_to_h \ -``` - -There are other Lora related arguments here, you can find their definitions in the [PEFT](https://github.com/huggingface/peft) library. - -```shell -# Llama example ---lora-r 64 \ ---lora-alpha 128 \ ---lora-modules-to-save word_embeddings output_layer \ ---lora-register-forward-hook word_embeddings input_layernorm \ -``` - -Among them, the argument `--lora-register-forward-hook` is used to repair the gradient chain break caused by PP. It only needs to be set to the input layer of each PP stage, and the repair will not increase the trainable parameters. The argument `--lora-modules-to-save` is used for fine-tuning when expanding the vocabulary. If there is no need for this, there is no need to pass in this argument. - -Finally, only Lora's parameters are saved after turning on Lora. Similarly, when loading a model, you need to specify the original model weight path and the Lora weight path. Parameters such as the optimizer are subject to those in the Lora weight path. - -```shell ---load ${ORIGIN_CHECKPOINT} \ ---lora-load ${LORA_CHECKPOINT} \ -``` - -There is an [example](examples/llama/tune_llama_ptd_13b.sh) could be referred. - -After using Lora to fine-tune the Llama model, the instruction dialogue effect is as follows: - -```shell -You >> Give three tips for staying healthy. - -ModelLink: - -- Start exercising regularly and eat healthy food. -- Get a good eight hours of sleep each night. -- Take medications regularly. 
-``` - -### Inference: human-machine dialogue -Currently, we support the following four cases of inference: -- PTD -- Model fine-tuned with lora - -【For supported models, we also provide examples. Please refer to the following quick start】 - -#### Quick Start - -***Please Note that:*** -1. If you want to use the weight from huggingface, please run the weight conversion script first. - Take Llama-7B, for example: - - - PTD only - ```bash - python tools/checkpoint/convert_ckpt.py --model-type GPT \ - --loader llama2_hf \ - --saver megatron \ - --target-tensor-parallel-size 1 \ - --target-pipeline-parallel-size 8 \ - --load-dir ./model_from_hf/llama-7b-hf \ - --save-dir ./model_weights/llama-7b-tp1-pp8 \ - --tokenizer-model ./model_from_hf/llama-7b-hf/tokenizer.model - ``` - -5. You need to modify some variables in the shell script such as **model weight path** and **vocab path**. - - - **PTD only:** In this mode, the model is split by pipeline parallel and tensor parallel mode in megatron ways. - ```bash - sh examples/llama/generate_llama_7B_tp2_pp2.sh - ``` - - **If you want to use lora model**, for details, refer to: - ```bash - sh examples/llama/generate_llama_7b_lora_ptd.sh - ``` - -#### Usage Guide -***Some examples with [Chinese-LLaMA-Alpaca-13B weights](https://github.com/ymcui/Chinese-LLaMA-Alpaca) is as below***: - -##### Initializing the Distributed Environment -```python -initialize_megatron(args_defaults={'no_load_rng': True, 'no_load_optim': True}) -``` -##### Initializing model and loading weights - -```python -from modellink import get_args -from modellink.model import GPTModel -from modellink.arguments import core_transformer_config_from_args - - -def model_provider(pre_process=True, post_process=True): - """Build the model.""" - config = core_transformer_config_from_args(get_args()) - init_model = GPTModel( - config, - num_tokentypes=0, - parallel_output=False, - return_moe_loss=False, - pre_process=pre_process, - post_process=post_process - ) - return init_model - - -model = GPTModel.from_pretrained( - model_provider=model_provider, - pretrained_model_name_or_path="your model weight path" -) - -""" -This is an API for initializing model and loading weight. - -Parameters: ----------- -model_provider(`func`): - Function used to generate model objects which is similar to the training define. -pretrained_model_name_or_path(`str`, *optional*, defaults to None): - File path of Model weight in megatron format (TP, PP may be used). - If it is None, the random initialized weights will be used. 
-""" -``` -##### Generate text in HuggingFace-like ways - -- Greedy Search - ```python - responses = model.generate( - "Write quick sort code in python", - max_new_tokens=512 - ) - ``` - - -- Do sample with top-k and top-p - ```python - responses = model.generate( - "Write quick sort code in python", - do_sample=True, - temperature=1.0, - top_k=50, - top_p=0.95, - max_new_tokens=512 - ) - ``` - - -- Beam search with top-k and top-p - ```python - responses = model.generate( - "Write quick sort code in python", - num_beams=4, - top_k=50, - top_p=0.95, - max_new_tokens=512 - ) - ``` - - -- Beam search with top-k and top-p sampling - ```python - responses = model.generate( - "Write quick sort code in python", - do_sample=True, - temperature=0.6, - num_beams=4, - top_k=50, - top_p=0.95, - max_new_tokens=512 - ) - ``` - - -### Evaluation with Numerous Benchmarks - - - -#### Dataset Evaluation Results - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Task Subset Model Ascend Reference Benchmark
BBH test Llama7b 0.334 0.333 0.335
AGIEval test Llama7b 0.210 0.210 0.206
HumanEval test Llama7b 0.128 0.128 0.128
BoolQ test Llama7b 0.742 0.742 0.754
GSM8K test Llama7b 0.102 0.103 0.100
CEval val Llama7b 0.408 0.404 /
MMLU test Llama7b 0.333 0.324 0.351
- -#### Quick Start -```bash -# Configure model path and vocab_file path -# Vocab file can be downloaded from https://huggingface.co/yahma/llama-7b-hf -CHECKPOINT=../models/llama-7b-tp2-pp4/ -VOCAB_FILE=../models/llama7b-hf/ -# configure task and data path -DATA_PATH="dataset/boolq/test" -TASK="boolq" -# configure generation parameters -python -m torch.distributed.launch $DISTRIBUTED_ARGS evaluation.py \ - --task-data-path $DATA_PATH \ - --task $TASK\ - --seq-length 512 \ - --max-new-tokens 1 \ - --max-position-embeddings 512 \ - --tensor-model-parallel-size 2 \ - --pipeline-model-parallel-size 4 \ - --num-layers 32 \ - --hidden-size 4096 \ - --ffn-hidden-size 11008 \ - --load ${CHECKPOINT[images](sources%2Fimages)} \ - --num-attention-heads 32 \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path $VOCAB_FILE \ - --tokenizer-not-use-fast \ - --fp16 \ - --micro-batch-size 1 \ - --seed 42 | tee logs/train.log -# start evaluation -bash examples/llama/evaluate_llama_7B_ptd.sh -``` - -#### Task Introduction -The most important evaluation parameters must be `--max-new-tokens`, which means the output length of model generation. For example, multiple-choice -questions' output length is obviously shorter than coding tasks. Besides, this parameter largely decides the speed of model generation. - -```bash -python -m torch.distributed.launch $DISTRIBUTED_ARGS evaluation.py \ - --task-data-path $DATA_PATH \ - --task $TASK\ - --seq-length 512 \ - --max-new-tokens 1 \ - --evaluation-batch-size 1 \ - --max-position-embeddings 512 \ - --tensor-model-parallel-size 2 \ - --pipeline-model-parallel-size 4 \ - --num-layers 32 \ - --hidden-size 4096 \ - --ffn-hidden-size 11008 \ - --load ${CHECKPOINT} \ - --num-attention-heads 32 \ - --tokenizer-type PretrainedFromHF \ - --tokenizer-name-or-path $VOCAB_FILE \ - --tokenizer-not-use-fast \ - --fp16 \ - --micro-batch-size 1 \ - --seed 42 | tee logs/train.log -``` -#### Evaluation Script Instructions - -#### Baseline Dataset Introduction - - -##### MMLU -Since MMLU is a multidisciplinary task and 5 shots are performed, the length of each subject question varies greatly. If you want to run 57 subjects at the same time, you need to set `TASK="mmlu"`, `--seq-length=2048`, `--max-position-embeddings=2048`, `--max-new-token=2`. (`--max-new-tokens` can be set to between 2-4). -On many websites, the accuracy of the MMLU is evaluated according to disciplines. The 57 categories of single subjects belong to four main categories. Therefore, the statistics should be summarized according to the major categories of the subjects. The [website](https://github.com/hendrycks/test/blob/master/categories.py) gives the major categories of subjects for 57 categories of subjects. - - -##### GSM8K -GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. The answer of each question is a specific number. Since few shots are performed, the question length is relatively long in GSM8K, and the output answer contains a chain of thoughts, it is necessary to configure `TASK="gsm8k"`, `--seq-length=2048`, `--max-position-embeddings=2048`, `--max-new-token=128`. (`--max-new-tokens` can be set between 256-512). - -##### HumanEval -HumanEval dataset is a handcrafted set of 164 programming problems designed to challenge code generation models. The problems include a function signature, docstring, body, and several unit tests, all handwritten to ensure they're not included in the training set of code generation models. 
-Since the answer of HumanEval dataset contains long codes, it is necessary to configure `TASK="human_eval"`, `--seq-length=2048`, `--max-position-embeddings=2048`, `--max-new-token=1024`. - -##### AGIEval -AGIEval is a human-centric benchmark specifically designed to evaluate the general -abilities of foundation models in tasks pertinent to human cognition and problem-solving. This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams.Since the length of answers to different type of questions varies, we have to configure `TASK="agieval"`, `--seq-length=2048`, `--max-position-embeddings=2048`, `--max-new-token=1024` to fit the longest answer. - -##### Big-Bench-Hard -Big-bench-hard dataset is a subset of big bench, which is a diverse evaluation suite that focuses on a suite of 23 challenging BIG-Bench tasks. These are the task for which prior language model evaluations did not outperform the average human-rater. This dataset covers multiple areas including text understanding, reasoning, logical reasoning, mathematical reasoning, and common sense reasoning. -Except word_sorting, all datasets are multiple-choice questions. So we can set `TASK="bbh"`, `--seq-length=2048`, `--max-position-embeddings=2048`, `--max-new-token=32`. (`--max-new-tokens` can be set between 32-64). - -##### CEval -As [C-Eval](https://cevalbenchmark.com/) shows, C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels, as shown below. You may explore our dataset examples at Explore, or check our paper for more details. The dataset contains validation and test data, however, only validation data has label for auto-evaluation. If -you want to evaluate on test data, you should email your results to [C-Eval](https://cevalbenchmark.com/). - ### Acceleration Features ModelLink supports various acceleration algorithms such as tensor parallelism, pipeline parallelism, sequence parallelism, recomputation, distributed optimizer, and more. 
The table below shows the enable switches corresponding to each acceleration feature: diff --git a/examples/mixtral/README.md b/examples/mixtral/README.md index a48947a0c..805a493c5 100644 --- a/examples/mixtral/README.md +++ b/examples/mixtral/README.md @@ -24,9 +24,9 @@ 训练的最低硬件配置: -| 硬件 | 配置 | -| :--: | :--------------: | -| NPU | 16 x Ascend NPUs | +| 硬件 | 配置 | +| :--: |:----------------:| +| NPU | 32 x Ascend NPUs | 推理的推荐硬件配置: @@ -104,15 +104,15 @@ --loader mixtral_hf \ --saver mixtral \ --load-dir ./model_from_hf/Mixtral-8x7B/ \ - --save-dir ./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep2/ \ + --save-dir ./model_weights/Mixtral-8x7B-v0.1-tp8-pp4-ep1/ \ --tokenizer-model ./model_from_hf/Mixtral-8x7B/tokenizer.model \ - --target-tensor-parallel-size 1 \ - --target-pipeline-parallel-size 8 \ - --target-expert-parallel-size 2 + --target-tensor-parallel-size 8 \ + --target-pipeline-parallel-size 4 \ + --target-expert-parallel-size 1 ``` 任意并行切分策略的Megatron权重 --> 任意并行切分策略的Megatron权重 - ***(该场景一般用于重新配置切分后模型的权重,比如在双机16卡 EP2-PP8策略下训练完了,想在单机8卡 TP8上进行推理)*** + ***(该场景一般用于重新配置切分后模型的权重,比如在四机32卡 TP8-PP4策略下训练完了,想在单机8卡 TP8上进行推理)*** ```bash # 修改 ascend-toolkit 路径 @@ -123,10 +123,10 @@ --model-type GPT \ --loader mixtral_mg \ --saver mixtral \ - --load-dir ./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep2/ \ - --save-dir ./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep1/ \ - --target-tensor-parallel-size 1 \ - --target-pipeline-parallel-size 8 \ + --load-dir ./model_weights/Mixtral-8x7B-v0.1-tp8-pp4-ep1/ \ + --save-dir ./model_weights/Mixtral-8x7B-v0.1-tp8-pp1-ep1/ \ + --target-tensor-parallel-size 8 \ + --target-pipeline-parallel-size 1 \ --target-expert-parallel-size 1 ``` @@ -143,7 +143,7 @@ --loader mixtral_mg \ --saver mixtral \ --save-model-type huggingface \ - --load-dir ./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep2/ \ + --load-dir ./model_weights/Mixtral-8x7B-v0.1-tp8-pp4-ep1/ \ --save-dir ./model_from_hf/Mixtral-8x7B/ # <-- 需要填入原始HF模型路径,新权重会存于./model_from_hf/Mixtral-8x7B/mg2hg/ ``` @@ -184,14 +184,14 @@ GPUS_PER_NODE=8 MASTER_ADDR="your master node IP" MASTER_PORT=6000 - NNODES=2 + NNODES=4 NODE_RANK="current node id" WORLD_SIZE=$(($GPUS_PER_NODE * $NNODES)) # 训练并行策略 - TP=1 - PP=8 - EP=2 + TP=8 + PP=4 + EP=1 ``` 启动 Mixtral-8x7B 预训练脚本: ***examples/pretrain_mixtral_8x7b_ptd.sh*** @@ -245,13 +245,12 @@ ### 吞吐 -Mixtral-8x7B 在双机16卡上(ep2 pp8) **昇腾芯片** 和 **参考芯片** 上的性能对比: -*(当节点够多的情况下,ep越大吞吐越大,这里并非为最佳性能,仅供参考)* +Mixtral-8x7B 在四机32卡上(tp8 pp4) **昇腾芯片** 和 **参考芯片** 上的性能对比: | 设备 | 模型 | 迭代数 | 样本吞吐 (samples/step) | tokens吞吐 (tokens/s/p) | 单步迭代时间 (s/step) | -| :--: | :----------: | :----: | :---------------------: | :---------------------: | :-------------------: | -| NPUs | Mixtral-8x7B | 1000 | 4.11 | 1053.6 | 31.13 | -| 参考 | Mixtral-8x7B | 1000 | 4.45 | 1139.3 | 28.76 | +| :--: | :----------: | :----: |:-------------------:|:---------------------:|:---------------:| +| NPUs | Mixtral-8x7B | 1000 | 0.47 | 487 | 16.81 | +| 参考 | Mixtral-8x7B | 1000 | 0.59 | 610 | 13.41 | ## 模型推理 @@ -301,7 +300,7 @@ source /usr/local/Ascend/ascend-toolkit/set_env.sh # 修改模型参数路径和词表路径 TOKENIZER_PATH="./model_from_hf/Mixtral-8x7B/" #词表路径 -CHECKPOINT="./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep1" #模型路径 +CHECKPOINT="./model_weights/Mixtral-8x7B-v0.1-tp8-pp1-ep1" #模型路径 # 配置任务和数据集路径 DATA_PATH="./mmlu/test/" TASK="mmlu" diff --git a/examples/mixtral/README_en.md b/examples/mixtral/README_en.md index 80a0787ad..f5c9eb805 100644 --- a/examples/mixtral/README_en.md +++ b/examples/mixtral/README_en.md @@ -24,9 +24,9 @@ Minimum hardware requirements for 
training: -| Hardware | Configuration | -| :------: | :--------------: | -| NPU | 16 x Ascend NPUs | +| Hardware | Configuration | +| :------: |:----------------:| +| NPU | 32 x Ascend NPUs | Recommended hardware configuration for inference: @@ -105,11 +105,11 @@ Recommended hardware configuration for inference: --loader mixtral_hf \ --saver mixtral \ --load-dir ./model_from_hf/Mixtral-8x7B/ \ - --save-dir ./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep2/ \ + --save-dir ./model_weights/Mixtral-8x7B-v0.1-tp8-pp4-ep1/ \ --tokenizer-model ./model_from_hf/Mixtral-8x7B/tokenizer.model \ - --target-tensor-parallel-size 1 \ - --target-pipeline-parallel-size 8 \ - --target-expert-parallel-size 2 + --target-tensor-parallel-size 8 \ + --target-pipeline-parallel-size 4 \ + --target-expert-parallel-size 1 ``` Any Megatron weights with parallel slicing strategy --> Any Megatron weights with parallel slicing strategy @@ -124,10 +124,10 @@ Recommended hardware configuration for inference: --model-type GPT \ --loader mixtral_mg \ --saver mixtral \ - --load-dir ./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep2/ \ - --save-dir ./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep1/ \ - --target-tensor-parallel-size 1 \ - --target-pipeline-parallel-size 8 \ + --load-dir ./model_weights/Mixtral-8x7B-v0.1-tp8-pp4-ep1/ \ + --save-dir ./model_weights/Mixtral-8x7B-v0.1-tp8-pp1-ep1/ \ + --target-tensor-parallel-size 8 \ + --target-pipeline-parallel-size 1 \ --target-expert-parallel-size 1 ``` @@ -144,7 +144,7 @@ Recommended hardware configuration for inference: --loader mixtral_mg \ --saver mixtral \ --save-model-type huggingface \ - --load-dir ./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep2/ \ + --load-dir ./model_weights/Mixtral-8x7B-v0.1-tp8-pp4-ep1/ \ --save-dir ./model_from_hf/Mixtral-8x7B/ # <-- Fill in the original HF model path here, new weights will be saved in ./model_from_hf/Mixtral-8x7B/mg2hg/ ``` @@ -185,14 +185,14 @@ Recommended hardware configuration for inference: GPUS_PER_NODE=8 MASTER_ADDR="your master node IP" MASTER_PORT=6000 - NNODES=2 + NNODES=4 NODE_RANK="current node id" WORLD_SIZE=$(($GPUS_PER_NODE * $NNODES)) # Training parallel strategy - TP=1 - PP=8 - EP=2 + TP=8 + PP=4 + EP=1 ``` Start Mixtral-8x7B pre-training script: ***examples/pretrain_mixtral_8x7b_ptd.sh*** @@ -246,13 +246,12 @@ Recommended hardware configuration for inference: ### Throughput -Comparison of Mixtral-8x7B performance on 2 nodes and 16 chips with ep2 pp8: -**(When there are enough nodes, the larger the ep, the higher the throughput. 
This is not the optimal performance here, just for reference)** +Comparison of Mixtral-8x7B performance on 4 nodes and 32 chips with tp8 pp4: | Device | Model | Iterations | Sample Throughput (samples/step) | Tokens Throughput (tokens/s/p) | Single Step Iteration Time (s/step) | -| :-------: | :----------: | :--------: | :------------------------------: | :----------------------------: | :---------------------------------: | -| NPUs | Mixtral-8x7B | 1000 | 3.13 | 1053.63 | 31.13 | -| Reference | Mixtral-8x7B | 1000 | 4.45 | 1139.3 | 28.76 | +| :-------: | :----------: | :--------: |:--------------------------------:|:------------------------------:|:-----------------------------------:| +| NPUs | Mixtral-8x7B | 1000 | 0.47 | 487 | 16.81 | +| Reference | Mixtral-8x7B | 1000 | 0.59 | 610 | 13.41 | ## Model-Inference @@ -263,7 +262,7 @@ First, configure the inference script: ***examples/mixtral/generate_mixtral_8x7b source /usr/local/Ascend/ascend-toolkit/set_env.sh # Modify the model weight path and tokenizer path -CHECKPOINT="./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep1/" +CHECKPOINT="./model_weights/Mixtral-8x7B-v0.1-tp8-pp1-ep1/" TOKENIZER_MODEL="./model_from_hf/Mixtral-8x7B/" # Modify according to the actual loaded model weight the parallel configuration @@ -302,7 +301,7 @@ source /usr/local/Ascend/ascend-toolkit/set_env.sh # Modify the model parameter path and tokenizer path TOKENIZER_PATH="./model_from_hf/Mixtral-8x7B/" #tokenizer path -CHECKPOINT="./model_weights/Mixtral-8x7B-v0.1-tp1-pp8-ep1" #model path +CHECKPOINT="./model_weights/Mixtral-8x7B-v0.1-tp8-pp1-ep1" #model path # Configure tasks and dataset paths DATA_PATH="./mmlu/data/test/" diff --git a/examples/mixtral/evaluate_mixtral_8x7b_ptd.sh b/examples/mixtral/evaluate_mixtral_8x7b_ptd.sh index 1e43e25af..4abc5776d 100644 --- a/examples/mixtral/evaluate_mixtral_8x7b_ptd.sh +++ b/examples/mixtral/evaluate_mixtral_8x7b_ptd.sh @@ -41,7 +41,7 @@ GPT_ARGS=" --num-query-groups 8 \ --tokenizer-type PretrainedFromHF \ --tokenizer-name-or-path ${TOKENIZER_PATH} \ - --seq-length 4096 \ + --seq-length 32768 \ --max-position-embeddings 32768 \ --micro-batch-size 1 \ --make-vocab-size-divisible-by 1 \ diff --git a/examples/mixtral/generate_mixtral_8x7b_ptd.sh b/examples/mixtral/generate_mixtral_8x7b_ptd.sh index fe2c4fcca..aa667e7cc 100644 --- a/examples/mixtral/generate_mixtral_8x7b_ptd.sh +++ b/examples/mixtral/generate_mixtral_8x7b_ptd.sh @@ -38,7 +38,7 @@ GPT_ARGS=" --num-query-groups 8 \ --tokenizer-type PretrainedFromHF \ --tokenizer-name-or-path ${TOKENIZER_MODEL} \ - --seq-length 4096 \ + --seq-length 32768 \ --max-position-embeddings 32768 \ --micro-batch-size 1 \ --make-vocab-size-divisible-by 1 \ diff --git a/examples/mixtral/pretrain_mixtral_8x7b_ptd.sh b/examples/mixtral/pretrain_mixtral_8x7b_ptd.sh index 26ef5f435..22dc0baf5 100644 --- a/examples/mixtral/pretrain_mixtral_8x7b_ptd.sh +++ b/examples/mixtral/pretrain_mixtral_8x7b_ptd.sh @@ -6,7 +6,7 @@ export CUDA_DEVICE_MAX_CONNECTIONS=1 GPUS_PER_NODE=8 MASTER_ADDR="your master node IP" MASTER_PORT=6000 -NNODES=2 +NNODES=4 NODE_RANK=0 WORLD_SIZE=$(($GPUS_PER_NODE * $NNODES)) @@ -26,9 +26,9 @@ CKPT_SAVE_DIR="your model save ckpt path" CKPT_LOAD_DIR="your model ckpt path" -TP=1 -PP=8 -EP=2 +TP=8 +PP=4 +EP=1 NUM_LAYERS=32 MOE_ARGS=" @@ -56,13 +56,13 @@ GPT_ARGS=" --num-query-groups 8 \ --tokenizer-type PretrainedFromHF \ --tokenizer-name-or-path ${TOKENIZER_MODEL} \ - --seq-length 4096 \ + --seq-length 32768 \ --max-position-embeddings 32768 \ --micro-batch-size 1 \ - 
--global-batch-size 128 \ + --global-batch-size 8 \ --make-vocab-size-divisible-by 1 \ --lr 1.25e-6 \ - --train-iters 1000 \ + --train-iters 2000 \ --lr-decay-style cosine \ --untie-embeddings-and-output-weights \ --disable-bias-linear \ diff --git a/sources/images/logo.png b/sources/images/logo.png index ae0c89f32..a2849e3e5 100644 Binary files a/sources/images/logo.png and b/sources/images/logo.png differ
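For reference, a quick sanity check of the parallel layout implied by the updated Mixtral-8x7B pre-training settings above (a rough sketch, assuming Megatron's usual convention that the data-parallel size is the world size divided by TP × PP):

```bash
# 4 nodes x 8 NPUs per node, as set by NNODES=4 and GPUS_PER_NODE=8
WORLD_SIZE=$((4 * 8))            # 32 ranks in total
TP=8; PP=4; EP=1
DP=$((WORLD_SIZE / (TP * PP)))   # data-parallel size = 1; EP=1 divides it evenly
# global-batch-size 8 with micro-batch-size 1 and DP=1 implies 8 gradient-accumulation steps
echo "WORLD_SIZE=${WORLD_SIZE}, DP=${DP}"
```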