Commit Graph

49 Commits

Author SHA1 Message Date
fengliangjun
314e852a0a update .gitignore.
Signed-off-by: fengliangjun <fengliangjun@huawei.com>
2023-09-26 07:47:34 +00:00
fengliangjun
de42a65b6b update .gitignore.
Signed-off-by: fengliangjun <fengliangjun@huawei.com>
2023-09-26 07:37:14 +00:00
fengliangjun
9b87852c90 update ci/access_control_test.py.
Signed-off-by: fengliangjun <fengliangjun@huawei.com>
2023-09-26 07:20:59 +00:00
fengliangjun
964cc96582 update README.md.
Signed-off-by: fengliangjun <fengliangjun@huawei.com>
2023-09-26 06:50:01 +00:00
fengliangjun
7a21f0bf58 up 2023-09-26 14:30:47 +08:00
yangyuan667
be5d413ec6 Won't import ascendspeed.ops if unnecessary 2023-08-16 09:35:10 +08:00
xuqiang
8e0f2e9b1a sync parallel_state.py file 2023-08-15 10:45:26 +08:00
yangyuan667
ff9d62bcc7 Add gcc compiler args 2023-08-11 11:44:28 +08:00
machangjun
8e436e3a9a add ffts mode
del torch_trans

del torch_trans, resolve bloom ckpt, and add bloom ffts+

replace fused_adam with adam

del unused code
2023-07-25 14:14:28 +08:00
yangyuan667
2ae07e63ee [New] add FlashAttention adapter 2023-08-10 11:11:24 +08:00
xuqiang
eb3dcf3f02 update OWNERS 2023-08-10 09:40:28 +08:00
kingsleyandher
e4bee6c48a Layer Fusion for LLama 2023-07-25 15:25:55 +08:00
xuqiang
49f7bc726e update OWNERS 2023-08-07 09:46:16 +08:00
Mrtutu
4532812837 Update bloom README: bloom7b training on oscar-1G with a single machine and 8 cards 2023-07-26 14:10:51 +08:00
fengliangjun
260e8eea8f create megatron core 2023-07-24 15:00:57 +08:00
fengliangjun
b559dc6385 set log level 2023-07-20 22:23:33 +08:00
chenzomi
92c27d5e2a add a llama2 branch. 2023-07-21 15:20:25 +08:00
fengliangjun
db9c25bdd9 llama modify 2023-07-19 10:20:40 +08:00
machangjun
de85201818 replace baddbmm with bmm to accelerate 2023-07-14 15:41:00 +08:00
simon717
3a7d87c2b8 1. Revert the attention implementation in llama_model.py
2. Add the huggingface llama weight conversion script
3. Add the weight conversion script for the changed llama parallel training strategy
4. Resolve codecheck issues
2023-07-06 10:33:21 +08:00
simon717
fedb2127c0 1. Revert the attention implementation in llama_model.py
2. Add the huggingface llama weight conversion script
3. Add the weight conversion script for the changed llama parallel training strategy
2023-07-05 15:30:44 +08:00
liulinfeng
f6d7982b02 Address review comments 2023-07-14 15:34:07 +08:00
liulinfeng
243bfe5cfa Adapt the SP code for Bloom 2023-07-14 14:18:07 +08:00
chenzomi
937791fa6d format some code. 2023-07-21 01:42:26 +08:00
chenzomi
4455b80650 change the readme format. 2023-07-14 10:54:42 +08:00
kingsleyandher
31cf1ecdd0 merge code 2023-07-11 20:12:39 +08:00
liulinfeng
85e23be9f2 Delete the code that saves the initial weights 2023-07-10 11:16:43 +08:00
liulinfeng
8aa62e1049 Fix codecheck issues and remove redundant blank lines 2023-07-10 10:54:13 +08:00
liulinfeng
36f787bc89 Author: 刘林峰
Change notes:
1. Add the code for weight loading and inference text generation
2. Fix codecheck issues
3. Fix the hang when resuming training from a checkpoint
2023-07-07 15:03:48 +08:00
kingsleyandher
3afb525a97 Add the SP algorithm 2023-07-10 14:44:42 +08:00
kingsleyandher
b5a0fc04a2 Optimizer pipeline parallelism
Author: 李冰聪
2023-07-07 11:41:45 +08:00
kingsleyandher
21609f3083 Add llama zeroshot 33B/65B adaptation code; add the README.md file 2023-07-05 14:25:29 +08:00
wiyr
d87e921410 added trick 2023-06-30 11:00:38 +08:00
kingsleyandher
bc2a4a33d5 Add the VP algorithm
Author: 李冰聪/张梦阳
2023-06-29 10:22:17 +08:00
kingsleyandher
2c104a087e Adapt llama-zeroshot task accuracy to match the results in the original paper. 2023-06-25 09:34:42 +08:00
wiyr
6304cab765 remove useless code 2023-06-20 16:54:12 +08:00
machangjun
2d8c6fee9d add bloom st and adapt new data load method
modify bloom st run

modify times

add new pretrain_bloom.py

add st
2023-06-17 17:36:17 +08:00
simon717
7047d75663 Align the Llama model structure with huggingface; align forward inference results with huggingface 2023-06-20 11:27:15 +08:00
kingsleyandher
4e3b7cd992 Adapt LlamaTokenizer and update the pretraining scripts 2023-06-13 12:33:33 +08:00
wiyr
2f826f7351 can run with bloom7b and pass ci 2023-06-12 14:42:29 +08:00
wiyr
89bfaf64c2 added fused softmax, tokenizer, language, utils for bloom 2023-06-12 11:50:46 +08:00
wiyr
6d07c029fe added func code 2023-06-12 10:39:51 +08:00
fengliangjun
37ba281c40 readme update 2023-06-10 11:26:55 +08:00
chenzomi
37cc0b949d change megatron to ascendspeed 2023-06-10 21:26:01 +08:00
fengliangjun
106a415556 initial AscendSpeed 2023-06-09 16:15:23 +08:00
wangyixian
d55d341fe1 Adapt the bloom 7.1b model to the AscendSpeed framework, jointly completed by liulinfeng and wangyixian 2023-06-06 22:30:19 +08:00
chenzomi
ce6af59f73 remove unused parameters and models. 2023-05-26 10:53:07 +08:00
chenzomi
e4a120a662 fork megatron-deepspeed code. 2023-05-25 14:49:59 +08:00
王姜奔
ea6e3d2ceb Initial commit 2023-05-25 02:15:25 +00:00