kingsleyandher
b5a0fc04a2
Optimizer Pipeline parallel
...
Author: 李冰聪
2023-07-07 11:41:45 +08:00
kingsleyandher
21609f3083
Submit llama model zero-shot 33B/65B adaptation code; add README.md file
2023-07-05 14:25:29 +08:00
wiyr
d87e921410
added trick
2023-06-30 11:00:38 +08:00
kingsleyandher
bc2a4a33d5
Submit VP algorithm
...
Author: 李冰聪/张梦阳
2023-06-29 10:22:17 +08:00
kingsleyandher
2c104a087e
Llama zero-shot task accuracy adaptation, aligned with the results reported in the original paper.
2023-06-25 09:34:42 +08:00
wiyr
6304cab765
remove useless code
2023-06-20 16:54:12 +08:00
machangjun
2d8c6fee9d
add bloom st and adapt new data load method
...
modify bloom st run
modify times
add new pretrain_bloom.py
add st
2023-06-17 17:36:17 +08:00
simon717
7047d75663
Align Llama model structure with huggingface; align forward inference results with huggingface
2023-06-20 11:27:15 +08:00
kingsleyandher
4e3b7cd992
LlamaTokenizer adaptation and pretraining script changes
2023-06-13 12:33:33 +08:00
wiyr
2f826f7351
can run with bloom7b and pass ci
2023-06-12 14:42:29 +08:00
wiyr
89bfaf64c2
added fused softmax, tokenizer, language, utils for bloom
2023-06-12 11:50:46 +08:00
wiyr
6d07c029fe
added func code
2023-06-12 10:39:51 +08:00
fengliangjun
37ba281c40
readme update
2023-06-10 11:26:55 +08:00
chenzomi
37cc0b949d
change megatron to ascendspeed
2023-06-10 21:26:01 +08:00
fengliangjun
106a415556
initial AscendSpeed
2023-06-09 16:15:23 +08:00
wangyixian
d55d341fe1
Adapt the bloom 7.1b model to the AscendSpeed framework, jointly completed by liulinfeng and wangyixian
2023-06-06 22:30:19 +08:00
chenzomi
ce6af59f73
remove unused parameters and models.
2023-05-26 10:53:07 +08:00
chenzomi
e4a120a662
fork megatron-deepspeed code.
2023-05-25 14:49:59 +08:00
王姜奔
ea6e3d2ceb
Initial commit
2023-05-25 02:15:25 +00:00