{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Training a Chinese Reading Comprehension Model with paddlenlp and FastNLP\n",
"\n",
"This tutorial is part of the **`paddle examples` series of the `FastNLP v0.8 tutorial`**. In it, we show how to solve an advanced question-answering task in `FastNLP` with a custom `Metric` and a custom loss function.\n",
"\n",
"1. Background: the reading comprehension task in natural language processing\n",
"\n",
"2. Preparation: load the `DuReader-robust` dataset and preprocess it with the `tokenizer`\n",
"\n",
"3. Training: define your own evaluation `Metric` for more flexible task evaluation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Background: the reading comprehension task in natural language processing\n",
"\n",
"Reading comprehension, as the name suggests, asks a model to read a passage of text and understand its meaning. Most machine reading comprehension tasks are evaluated in question-answering style: natural-language questions about the passage are posed, and the model must understand each question and answer it from the passage. Unlike text classification, reading comprehension sometimes requires a *pair* of input sentences, one for the question and one for the context, and the answers come in several formats:\n",
"\n",
"- Multiple choice: the model picks the correct answer from several candidate options\n",
"- Span extraction: the answer is a span of the context, and the model must give its start and end positions\n",
"- Free-form: no restrictions; the model generates the answer itself\n",
"- Cloze: key words are blanked out of the passage for the model to fill in; this format usually needs no question\n",
"\n",
"If you are familiar with `transformers`, its `ModelForQuestionAnswering` family of models can be applied to this task. How well reading comprehension models generalize is a key measure of whether the technique can be deployed at scale in practice; despite recent progress, many models that score well on benchmark test sets still disappoint in real applications. In this tutorial we show how to train such a question-answering model.\n",
"\n",
"The most influential dataset in this area is `SQuAD`, the Stanford Question Answering Dataset. Each example is a `(question, context, answer)` triple, and the dataset is large (about 100k examples, with another 50k added in version 2.0); soon after its release it became one of the classic datasets for training question-answering models. `SQuAD` measures model performance with two metrics: `EM` (Exact Match) and `F1` (token overlap). The former counts how many predicted answers match the gold answer exactly, while the latter measures the overlap between the predicted and gold answers; higher is better for both.\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Preparation: load the DuReader-robust dataset and preprocess it with the tokenizer"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/remote-home/shxing/anaconda3/envs/fnlp-paddle/lib/python3.7/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"2.3.3\n"
]
}
],
"source": [
"import sys\n",
"sys.path.append(\"../\")\n",
"import paddle\n",
"import paddlenlp\n",
"\n",
"print(paddlenlp.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For data we use the Chinese `DuReader-robust` dataset. It is an extractive question-answering dataset in the `SQuAD` format, designed to evaluate how well models generalize in realistic application scenarios."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Reusing dataset dureader_robust (/remote-home/shxing/.cache/huggingface/datasets/dureader_robust/plain_text/1.0.0/d462ecadc8c010cee20f57632f1413f272867cd802a91a602df48c7d34eb0c27)\n",
"Reusing dataset dureader_robust (/remote-home/shxing/.cache/huggingface/datasets/dureader_robust/plain_text/1.0.0/d462ecadc8c010cee20f57632f1413f272867cd802a91a602df48c7d34eb0c27)\n",
"\u001b[32m[2022-06-27 19:22:46,998] [ INFO]\u001b[0m - Already cached /remote-home/shxing/.paddlenlp/models/ernie-1.0-base-zh/vocab.txt\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'id': '0a25cb4bc1ab6f474c699884e04601e4', 'title': '', 'context': '第35集雪见缓缓张开眼睛,景天又惊又喜之际,长卿和紫萱的仙船驶至,见众人无恙,也十分高兴。众人登船,用尽合力把自身的真气和水分输给她。雪见终于醒过来了,但却一脸木然,全无反应。众人向常胤求助,却发现人世界竟没有雪见的身世纪录。长卿询问清微的身世,清微语带双关说一切上了天界便有答案。长卿驾驶仙船,众人决定立马动身,往天界而去。众人来到一荒山,长卿指出,魔界和天界相连。由魔界进入通过神魔之井,便可登天。众人至魔界入口,仿若一黑色的蝙蝠洞,但始终无法进入。后来花楹发现只要有翅膀便能飞入。于是景天等人打下许多乌鸦,模仿重楼的翅膀,制作数对翅膀状巨物。刚佩戴在身,便被吸入洞口。众人摔落在地,抬头发现魔界守卫。景天和众魔套交情,自称和魔尊重楼相熟,众魔不理,打了起来。', 'question': '仙剑奇侠传3第几集上天界', 'answers': {'text': ['第35集'], 'answer_start': [0]}}\n",
"{'id': '7de192d6adf7d60ba73ba25cf590cc1e', 'title': '', 'context': '选择燃气热水器时,一定要关注这几个问题:1、出水稳定性要好,不能出现忽热忽冷的现象2、快速到达设定的需求水温3、操作要智能、方便4、安全性要好,要装有安全报警装置 市场上燃气热水器品牌众多,购买时还需多加对比和仔细鉴别。方太今年主打的磁化恒温热水器在使用体验方面做了全面升级:9秒速热,可快速进入洗浴模式;水温持久稳定,不会出现忽热忽冷的现象,并通过水量伺服技术将出水温度精确控制在±0.5℃,可满足家里宝贝敏感肌肤洗护需求;配备CO和CH4双气体报警装置更安全(市场上一般多为CO单气体报警)。另外,这款热水器还有智能WIFI互联功能,只需下载个手机APP即可用手机远程操作热水器,实现精准调节水温,满足家人多样化的洗浴需求。当然方太的磁化恒温系列主要的是增加磁化功能,可以有效吸附水中的铁锈、铁屑等微小杂质,防止细菌滋生,使沐浴水质更洁净,长期使用磁化水沐浴更利于身体健康。', 'question': '燃气热水器哪个牌子好', 'answers': {'text': ['方太'], 'answer_start': [110]}}\n",
"{'id': 'b9e74d4b9228399b03701d1fe6d52940', 'title': '', 'context': '迈克尔.乔丹在NBA打了15个赛季。他在84年进入nba,期间在1993年10月6日第一次退役改打棒球,95年3月18日重新回归,在99年1月13日第二次退役,后于2001年10月31日复出,在03年最终退役。迈克尔·乔丹(Michael Jordan),1963年2月17日生于纽约布鲁克林,美国著名篮球运动员,司职得分后卫,历史上最伟大的篮球运动员。1984年的NBA选秀大会,乔丹在首轮第3顺位被芝加哥公牛队选中。 1986-87赛季,乔丹场均得到37.1分,首次获得分王称号。1990-91赛季,乔丹连夺常规赛MVP和总决赛MVP称号,率领芝加哥公牛首次夺得NBA总冠军。 1997-98赛季,乔丹获得个人职业生涯第10个得分王,并率领公牛队第六次夺得总冠军。2009年9月11日,乔丹正式入选NBA名人堂。', 'question': '乔丹打了多少个赛季', 'answers': {'text': ['15个'], 'answer_start': [12]}}\n",
"Training set size: 14520\n",
"Validation set size: 1417\n"
]
}
],
"source": [
"from paddlenlp.datasets import load_dataset\n",
"train_dataset = load_dataset(\"PaddlePaddle/dureader_robust\", splits=\"train\")\n",
"val_dataset = load_dataset(\"PaddlePaddle/dureader_robust\", splits=\"validation\")\n",
"for i in range(3):\n",
"    print(train_dataset[i])\n",
"print(\"Training set size:\", len(train_dataset))\n",
"print(\"Validation set size:\", len(val_dataset))\n",
"\n",
"MODEL_NAME = \"ernie-1.0-base-zh\"\n",
"from paddlenlp.transformers import ErnieTokenizer\n",
"tokenizer = ErnieTokenizer.from_pretrained(MODEL_NAME)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.1 Processing the training set\n",
"\n",
"For reading comprehension, preprocessing the data is fairly involved. Below we walk through the processing function `_process_train` in detail; along the way we demonstrate more of what the `tokenizer` can do, to give you a deeper view of natural language processing tasks. Let's start by feeding the `tokenizer` a single example (wrapped in a list):"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2\n",
"dict_keys(['offset_mapping', 'input_ids', 'token_type_ids', 'overflow_to_sample'])\n"
]
}
],
"source": [
"result = tokenizer(\n",
"    [train_dataset[0][\"question\"]],\n",
"    [train_dataset[0][\"context\"]],\n",
"    stride=128,\n",
"    max_length=256,\n",
"    padding=\"max_length\",\n",
"    return_dict=False\n",
")\n",
"\n",
"print(len(result))\n",
"print(result[0].keys())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, and intuitively, the model must be given both the question (`question`) and the context (`context`) to do reading comprehension, so the two have to be tokenized together. Fortunately the `Tokenizer` supports this: when we call the `tokenizer`, its first parameter is named `text` and its second `text_pair`, letting us tokenize a pair of texts in one call. The `tokenizer` also has to record which parts of an example are the question and which are the context, and that is the job of `token_type_ids`: tokens from the first text (the question) are labeled `0` and tokens from the second text (the context) are labeled `1`, so the model can tell the two apart during training:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[1, 1034, 1189, 734, 2003, 241, 284, 131, 553, 271, 28, 125, 280, 2, 131, 1773, 271, 1097, 373, 1427, 1427, 501, 88, 662, 1906, 4, 561, 125, 311, 1168, 311, 692, 46, 430, 4, 84, 2073, 14, 1264, 3967, 5, 1034, 1020, 1829, 268, 4, 373, 539, 8, 154, 5210, 4, 105, 167, 59, 69, 685, 12043, 539, 8, 883, 1020, 4, 29, 720, 95, 90, 427, 67, 262, 5, 384, 266, 14, 101, 59, 789, 416, 237, 12043, 1097, 373, 616, 37, 1519, 93, 61, 15, 4, 255, 535, 7, 1529, 619, 187, 4, 62, 154, 451, 149, 12043, 539, 8, 253, 223, 3679, 323, 523, 4, 535, 34, 87, 8, 203, 280, 1186, 340, 9, 1097, 373, 5, 262, 203, 623, 704, 12043, 84, 2073, 1137, 358, 334, 702, 5, 262, 203, 4, 334, 702, 405, 360, 653, 129, 178, 7, 568, 28, 15, 125, 280, 518, 9, 1179, 487, 12043, 84, 2073, 1621, 1829, 1034, 1020, 4, 539, 8, 448, 91, 202, 466, 70, 262, 4, 638, 125, 280, 83, 299, 12043, 539, 8, 61, 45, 7, 1537, 176, 4, 84, 2073, 288, 39, 4, 889, 280, 14, 125, 280, 156, 538, 12043, 190, 889, 280, 71, 109, 124, 93, 292, 889, 46, 1248, 4, 518, 48, 883, 125, 12043, 539, 8, 268, 889, 280, 109, 270, 4, 1586, 845, 7, 669, 199, 5, 3964, 3740, 1084, 4, 255, 440, 616, 154, 72, 71, 109, 12043, 49, 61, 283, 3591, 34, 87, 297, 41, 9, 1993, 2602, 518, 52, 706, 109, 2]\n",
"['[CLS]', '仙', '剑', '奇', '侠', '传', '3', '第', '几', '集', '上', '天', '界', '[SEP]', '第', '35', '集', '雪', '见', '缓', '缓', '张', '开', '眼', '睛', ',', '景', '天', '又', '惊', '又', '喜', '之', '际', ',', '长', '卿', '和', '紫', '萱', '的', '仙', '船', '驶', '至', ',', '见', '众', '人', '无', '恙', ',', '也', '十', '分', '高', '兴', '。', '众', '人', '登', '船', ',', '用', '尽', '合', '力', '把', '自', '身', '的', '真', '气', '和', '水', '分', '输', '给', '她', '。', '雪', '见', '终', '于', '醒', '过', '来', '了', ',', '但', '却', '一', '脸', '木', '然', ',', '全', '无', '反', '应', '。', '众', '人', '向', '常', '胤', '求', '助', ',', '却', '发', '现', '人', '世', '界', '竟', '没', '有', '雪', '见', '的', '身', '世', '纪', '录', '。', '长', '卿', '询', '问', '清', '微', '的', '身', '世', ',', '清', '微', '语', '带', '双', '关', '说', '一', '切', '上', '了', '天', '界', '便', '有', '答', '案', '。', '长', '卿', '驾', '驶', '仙', '船', ',', '众', '人', '决', '定', '立', '马', '动', '身', ',', '往', '天', '界', '而', '去', '。', '众', '人', '来', '到', '一', '荒', '山', ',', '长', '卿', '指', '出', ',', '魔', '界', '和', '天', '界', '相', '连', '。', '由', '魔', '界', '进', '入', '通', '过', '神', '魔', '之', '井', ',', '便', '可', '登', '天', '。', '众', '人', '至', '魔', '界', '入', '口', ',', '仿', '若', '一', '黑', '色', '的', '蝙', '蝠', '洞', ',', '但', '始', '终', '无', '法', '进', '入', '。', '后', '来', '花', '楹', '发', '现', '只', '要', '有', '翅', '膀', '便', '能', '飞', '入', '[SEP]']\n",
"[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n"
]
}
],
"source": [
"print(result[0][\"input_ids\"])\n",
"print(tokenizer.convert_ids_to_tokens(result[0][\"input_ids\"]))\n",
"print(result[0][\"token_type_ids\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the output above we can see that the `tokenizer` marks the start of the data with `[CLS]` and separates the sentences with `[SEP]`. Using the string of 0s and 1s in `token_type_ids`, it is also easy to tell the question and the context apart. Incidentally, if an example is padded, the `padding` portion is labeled `0` as well.\n",
"\n",
"The output `keys` also contain an entry named `offset_mapping`. For each `token` produced by tokenization, it records the position of the corresponding character or word in the original text. For example, we can print it like this:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(0, 0), (0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 11), (11, 12), (0, 0), (0, 1), (1, 3), (3, 4), (4, 5), (5, 6), (6, 7)]\n",
"[1, 1034, 1189, 734, 2003, 241, 284, 131, 553, 271, 28, 125, 280, 2, 131, 1773, 271, 1097, 373, 1427]\n",
"['[CLS]', '仙', '剑', '奇', '侠', '传', '3', '第', '几', '集', '上', '天', '界', '[SEP]', '第', '35', '集', '雪', '见', '缓']\n"
]
}
],
"source": [
"print(result[0][\"offset_mapping\"][:20])\n",
"print(result[0][\"input_ids\"][:20])\n",
"print(tokenizer.convert_ids_to_tokens(result[0][\"input_ids\"])[:20])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`[CLS]` is a `token` that the `tokenizer` inserts itself to mark the data, so it has no counterpart in the original text and its reported span is `(0, 0)`. The second `token` corresponds to the first character, `仙`, so its mapped span is `(0, 1)`. Likewise, the later `[SEP]` corresponds to no text and maps to `(0, 0)`, while the `token` after it corresponds to the first character of the **context**, `第`, mapping to `(0, 1)`; the next `token` covers the two characters `35`, so its span is `(1, 3)`. This gives us a convenient way to line `token`s up with the original text.\n",
"\n",
"Finally, you may have noticed that the `result` we got back has length 2. This is because the tokenized text exceeded the `max_length` of 256, so the `tokenizer` split the example into two pieces. In reading comprehension we cannot simply truncate an example the way we can in text classification, since the answer may well sit in the part that would be thrown away; we therefore keep every piece (though you could also just drop the over-long examples). `overflow_to_sample` identifies which original example each piece belongs to:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[CLS]仙剑奇侠传3第几集上天界[SEP]第35集雪见缓缓张开眼睛,景天又惊又喜之际,长卿和紫萱的仙船驶至,见众人无恙,也十分高兴。众人登船,用尽合力把自身的真气和水分输给她。雪见终于醒过来了,但却一脸木然,全无反应。众人向常胤求助,却发现人世界竟没有雪见的身世纪录。长卿询问清微的身世,清微语带双关说一切上了天界便有答案。长卿驾驶仙船,众人决定立马动身,往天界而去。众人来到一荒山,长卿指出,魔界和天界相连。由魔界进入通过神魔之井,便可登天。众人至魔界入口,仿若一黑色的蝙蝠洞,但始终无法进入。后来花楹发现只要有翅膀便能飞入[SEP]\n",
"overflow_to_sample: 0\n",
"[CLS]仙剑奇侠传3第几集上天界[SEP]说一切上了天界便有答案。长卿驾驶仙船,众人决定立马动身,往天界而去。众人来到一荒山,长卿指出,魔界和天界相连。由魔界进入通过神魔之井,便可登天。众人至魔界入口,仿若一黑色的蝙蝠洞,但始终无法进入。后来花楹发现只要有翅膀便能飞入。于是景天等人打下许多乌鸦,模仿重楼的翅膀,制作数对翅膀状巨物。刚佩戴在身,便被吸入洞口。众人摔落在地,抬头发现魔界守卫。景天和众魔套交情,自称和魔尊重楼相熟,众魔不理,打了起来。[SEP][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD]\n",
"overflow_to_sample: 0\n"
]
}
],
"source": [
"for res in result:\n",
"    tokens = tokenizer.convert_ids_to_tokens(res[\"input_ids\"])\n",
"    print(\"\".join(tokens))\n",
"    print(\"overflow_to_sample: \", res[\"overflow_to_sample\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Printing both pieces shows that they each come from the example we passed in, and that they partially overlap. The `stride` parameter of the `tokenizer` sets the length of that overlap, which also helps the model make sense of data that has been split in two; the `0` in `overflow_to_sample` means both pieces come from example `0`.\n",
"\n",
"With this information, our plan for processing the training set is:\n",
"\n",
"1. Use `overflow_to_sample` to recover the original example\n",
"2. Use the original example's `answers` to find the character positions where the answer starts and ends\n",
"3. Use the mapping given by `offset_mapping` to find the answer's start and end positions in the tokenized data and record them in `start_pos` and `end_pos`; if the answer cannot be found (for example, because it was truncated away), both positions are set to the position of `[CLS]`.\n",
"\n",
"The `_process_train` function now follows naturally. We call `train_dataset.map` with the `batched` parameter set to `True`, so all the data is updated in batches. One thing to note: **the number of examples grows after processing**."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'offset_mapping': [(0, 0), (0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 11), (11, 12), (0, 0), (0, 1), (1, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 11), (11, 12), (12, 13), (13, 14), (14, 15), (15, 16), (16, 17), (17, 18), (18, 19), (19, 20), (20, 21), (21, 22), (22, 23), (23, 24), (24, 25), (25, 26), (26, 27), (27, 28), (28, 29), (29, 30), (30, 31), (31, 32), (32, 33), (33, 34), (34, 35), (35, 36), (36, 37), (37, 38), (38, 39), (39, 40), (40, 41), (41, 42), (42, 43), (43, 44), (44, 45), (45, 46), (46, 47), (47, 48), (48, 49), (49, 50), (50, 51), (51, 52), (52, 53), (53, 54), (54, 55), (55, 56), (56, 57), (57, 58), (58, 59), (59, 60), (60, 61), (61, 62), (62, 63), (63, 64), (64, 65), (65, 66), (66, 67), (67, 68), (68, 69), (69, 70), (70, 71), (71, 72), (72, 73), (73, 74), (74, 75), (75, 76), (76, 77), (77, 78), (78, 79), (79, 80), (80, 81), (81, 82), (82, 83), (83, 84), (84, 85), (85, 86), (86, 87), (87, 88), (88, 89), (89, 90), (90, 91), (91, 92), (92, 93), (93, 94), (94, 95), (95, 96), (96, 97), (97, 98), (98, 99), (99, 100), (100, 101), (101, 102), (102, 103), (103, 104), (104, 105), (105, 106), (106, 107), (107, 108), (108, 109), (109, 110), (110, 111), (111, 112), (112, 113), (113, 114), (114, 115), (115, 116), (116, 117), (117, 118), (118, 119), (119, 120), (120, 121), (121, 122), (122, 123), (123, 124), (124, 125), (125, 126), (126, 127), (127, 128), (128, 129), (129, 130), (130, 131), (131, 132), (132, 133), (133, 134), (134, 135), (135, 136), (136, 137), (137, 138), (138, 139), (139, 140), (140, 141), (141, 142), (142, 143), (143, 144), (144, 145), (145, 146), (146, 147), (147, 148), (148, 149), (149, 150), (150, 151), (151, 152), (152, 153), (153, 154), (154, 155), (155, 156), (156, 157), (157, 158), (158, 159), (159, 160), (160, 161), (161, 162), (162, 163), (163, 164), (164, 165), (165, 166), (166, 167), (167, 168), (168, 169), (169, 170), (170, 171), (171, 172), (172, 173), (173, 174), (174, 175), (175, 176), (176, 177), (177, 178), (178, 179), (179, 180), (180, 181), (181, 182), (182, 183), (183, 184), (184, 185), (185, 186), (186, 187), (187, 188), (188, 189), (189, 190), (190, 191), (191, 192), (192, 193), (193, 194), (194, 195), (195, 196), (196, 197), (197, 198), (198, 199), (199, 200), (200, 201), (201, 202), (202, 203), (203, 204), (204, 205), (205, 206), (206, 207), (207, 208), (208, 209), (209, 210), (210, 211), (211, 212), (212, 213), (213, 214), (214, 215), (215, 216), (216, 217), (217, 218), (218, 219), (219, 220), (220, 221), (221, 222), (222, 223), (223, 224), (224, 225), (225, 226), (226, 227), (227, 228), (228, 229), (229, 230), (230, 231), (231, 232), (232, 233), (233, 234), (234, 235), (235, 236), (236, 237), (237, 238), (238, 239), (239, 240), (240, 241), (241, 242), (0, 0)], 'input_ids': [1, 1034, 1189, 734, 2003, 241, 284, 131, 553, 271, 28, 125, 280, 2, 131, 1773, 271, 1097, 373, 1427, 1427, 501, 88, 662, 1906, 4, 561, 125, 311, 1168, 311, 692, 46, 430, 4, 84, 2073, 14, 1264, 3967, 5, 1034, 1020, 1829, 268, 4, 373, 539, 8, 154, 5210, 4, 105, 167, 59, 69, 685, 12043, 539, 8, 883, 1020, 4, 29, 720, 95, 90, 427, 67, 262, 5, 384, 266, 14, 101, 59, 789, 416, 237, 12043, 1097, 373, 616, 37, 1519, 93, 61, 15, 4, 255, 535, 7, 1529, 619, 187, 4, 62, 154, 451, 149, 12043, 539, 8, 253, 223, 3679, 323, 523, 4, 535, 34, 87, 8, 203, 280, 1186, 340, 9, 1097, 373, 5, 262, 203, 623, 704, 12043, 84, 2073, 1137, 358, 334, 702, 5, 262, 203, 4, 334, 702, 405, 360, 653, 129, 178, 7, 568, 28, 15, 125, 280, 518, 
9, 1179, 487, 12043, 84, 2073, 1621, 1829, 1034, 1020, 4, 539, 8, 448, 91, 202, 466, 70, 262, 4, 638, 125, 280, 83, 299, 12043, 539, 8, 61, 45, 7, 1537, 176, 4, 84, 2073, 288, 39, 4, 889, 280, 14, 125, 280, 156, 538, 12043, 190, 889, 280, 71, 109, 124, 93, 292, 889, 46, 1248, 4, 518, 48, 883, 125, 12043, 539, 8, 268, 889, 280, 109, 270, 4, 1586, 845, 7, 669, 199, 5, 3964, 3740, 1084, 4, 255, 440, 616, 154, 72, 71, 109, 12043, 49, 61, 283, 3591, 34, 87, 297, 41, 9, 1993, 2602, 518, 52, 706, 109, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'overflow_to_sample': 0, 'start_pos': 14, 'end_pos': 16}\n",
"处理后的训练集大小: 26198\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"max_length = 256\n",
|
||
"doc_stride = 128\n",
|
||
"def _process_train(data):\n",
|
||
"\n",
|
||
" contexts = [data[i][\"context\"] for i in range(len(data))]\n",
|
||
" questions = [data[i][\"question\"] for i in range(len(data))]\n",
|
||
"\n",
|
||
" tokenized_data_list = tokenizer(\n",
|
||
" questions,\n",
|
||
" contexts,\n",
|
||
" stride=doc_stride,\n",
|
||
" max_length=max_length,\n",
|
||
" padding=\"max_length\",\n",
|
||
" return_dict=False\n",
|
||
" )\n",
|
||
"\n",
|
||
" for i, tokenized_data in enumerate(tokenized_data_list):\n",
|
||
" # 获取 [CLS] 对应的位置\n",
|
||
" input_ids = tokenized_data[\"input_ids\"]\n",
|
||
" cls_index = input_ids.index(tokenizer.cls_token_id)\n",
|
||
"\n",
|
||
" # 在 tokenize 的过程中,汉字和 token 在位置上并非一一对应的\n",
|
||
" # 而 offset mapping 记录了每个 token 在原文中对应的起始位置\n",
|
||
" offsets = tokenized_data[\"offset_mapping\"]\n",
|
||
" # token_type_ids 记录了一条数据中哪些是问题,哪些是上下文\n",
|
||
" token_type_ids = tokenized_data[\"token_type_ids\"]\n",
|
||
"\n",
|
||
" # 一条数据可能因为长度过长而在 tokenized_data 中存在多个结果\n",
|
||
" # overflow_to_sample 表示了当前 tokenize_example 属于 data 中的哪一条数据\n",
|
||
" sample_index = tokenized_data[\"overflow_to_sample\"]\n",
|
||
" answers = data[sample_index][\"answers\"]\n",
|
||
"\n",
|
||
" # answers 和 answer_starts 均为长度为 1 的 list\n",
|
||
" # 我们可以计算出答案的结束位置\n",
|
||
" start_char = answers[\"answer_start\"][0]\n",
|
||
" end_char = start_char + len(answers[\"text\"][0])\n",
|
||
"\n",
|
||
" token_start_index = 0\n",
|
||
" while token_type_ids[token_start_index] != 1:\n",
|
||
" token_start_index += 1\n",
|
||
"\n",
|
||
" token_end_index = len(input_ids) - 1\n",
|
||
" while token_type_ids[token_end_index] != 1:\n",
|
||
" token_end_index -= 1\n",
|
||
" # 分词后一条数据的结尾一定是 [SEP],因此还需要减一\n",
|
||
" token_end_index -= 1\n",
|
||
"\n",
|
||
" if not (offsets[token_start_index][0] <= start_char and\n",
|
||
" offsets[token_end_index][1] >= end_char):\n",
|
||
" # 如果答案不在这条数据中,则将答案位置标记为 [CLS] 的位置\n",
|
||
" tokenized_data_list[i][\"start_pos\"] = cls_index\n",
|
||
" tokenized_data_list[i][\"end_pos\"] = cls_index\n",
|
||
" else:\n",
|
||
" # 否则,我们可以找到答案对应的 token 的起始位置,记录在 start_pos 和 end_pos 中\n",
|
||
" while token_start_index < len(offsets) and offsets[\n",
|
||
" token_start_index][0] <= start_char:\n",
|
||
" token_start_index += 1\n",
|
||
" tokenized_data_list[i][\"start_pos\"] = token_start_index - 1\n",
|
||
" while offsets[token_end_index][1] >= end_char:\n",
|
||
" token_end_index -= 1\n",
|
||
" tokenized_data_list[i][\"end_pos\"] = token_end_index + 1\n",
|
||
"\n",
|
||
" return tokenized_data_list\n",
|
||
"\n",
|
||
"train_dataset.map(_process_train, batched=True, num_workers=5)\n",
|
||
"print(train_dataset[0])\n",
|
||
"print(\"处理后的训练集大小:\", len(train_dataset))"
|
||
]
|
||
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.2 Processing the validation set\n",
"\n",
"Processing the validation set is much simpler: we only need to keep each original example's `id` and set the entries of `offset_mapping` that do not belong to the context to `None`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<paddlenlp.datasets.dataset.MapDataset at 0x7f697503d7d0>"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def _process_val(data):\n",
"\n",
"    contexts = [data[i][\"context\"] for i in range(len(data))]\n",
"    questions = [data[i][\"question\"] for i in range(len(data))]\n",
"\n",
"    tokenized_data_list = tokenizer(\n",
"        questions,\n",
"        contexts,\n",
"        stride=doc_stride,\n",
"        max_length=max_length,\n",
"        return_dict=False\n",
"    )\n",
"\n",
"    for i, tokenized_data in enumerate(tokenized_data_list):\n",
"        token_type_ids = tokenized_data[\"token_type_ids\"]\n",
"        # record the id of the originating example\n",
"        sample_index = tokenized_data[\"overflow_to_sample\"]\n",
"        tokenized_data_list[i][\"example_id\"] = data[sample_index][\"id\"]\n",
"\n",
"        # set offsets that do not belong to the context to None\n",
"        tokenized_data_list[i][\"offset_mapping\"] = [\n",
"            (o if token_type_ids[k] == 1 else None)\n",
"            for k, o in enumerate(tokenized_data[\"offset_mapping\"])\n",
"        ]\n",
"\n",
"    return tokenized_data_list\n",
"\n",
"val_dataset.map(_process_val, batched=True, num_workers=5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.3 DataLoader\n",
"\n",
"Finally, we simply wrap the datasets in `PaddleDataLoader`."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from fastNLP.core import PaddleDataLoader\n",
"\n",
"train_dataloader = PaddleDataLoader(train_dataset, batch_size=32, shuffle=True)\n",
"val_dataloader = PaddleDataLoader(val_dataset, batch_size=16)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Training: define your own evaluation Metric for more flexible task evaluation\n",
"\n",
"#### 3.1 The loss function\n",
"\n",
"For reading comprehension we use the `ErnieForQuestionAnswering` model. Given an input, it returns two values, `start_logits` and `end_logits`, both of shape `(batch_size, sequence_length)`, scoring how likely each token of each example is to be the start (or end) of the answer. We therefore need a custom loss function to compute the `loss`: `CrossEntropyLossForSquad` computes the cross entropy between the predicted and gold answer-start positions, does the same for the answer-end positions, and returns their mean as the final loss.\n",
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"class CrossEntropyLossForSquad(paddle.nn.Layer):\n",
"    def __init__(self):\n",
"        super(CrossEntropyLossForSquad, self).__init__()\n",
"\n",
"    def forward(self, start_logits, end_logits, start_pos, end_pos):\n",
"        start_pos = paddle.unsqueeze(start_pos, axis=-1)\n",
"        end_pos = paddle.unsqueeze(end_pos, axis=-1)\n",
"        start_loss = paddle.nn.functional.softmax_with_cross_entropy(\n",
"            logits=start_logits, label=start_pos)\n",
"        start_loss = paddle.mean(start_loss)\n",
"        end_loss = paddle.nn.functional.softmax_with_cross_entropy(\n",
"            logits=end_logits, label=end_pos)\n",
"        end_loss = paddle.mean(end_loss)\n",
"\n",
"        loss = (start_loss + end_loss) / 2\n",
"        return loss"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.2 Defining the model\n",
"\n",
"The core of the model is the `ernie-1.0-base-zh` pretrained checkpoint of `ErnieForQuestionAnswering`, wrapped with the `train_step` and `evaluate_step` functions that `FastNLP` expects. Note that `evaluate_step` does not directly return the batch's evaluation results the way a text-classification model would; we explain why below."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m[2022-06-27 19:00:15,825] [ INFO]\u001b[0m - Already cached /remote-home/shxing/.paddlenlp/models/ernie-1.0-base-zh/ernie_v1_chn_base.pdparams\u001b[0m\n",
"W0627 19:00:15.831080 21543 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 11.2, Runtime API Version: 11.2\n",
"W0627 19:00:15.843276 21543 gpu_context.cc:306] device: 0, cuDNN Version: 8.1.\n"
]
}
],
"source": [
"from paddlenlp.transformers import ErnieForQuestionAnswering\n",
"\n",
"class QAModel(paddle.nn.Layer):\n",
"    def __init__(self, model_checkpoint):\n",
"        super(QAModel, self).__init__()\n",
"        self.model = ErnieForQuestionAnswering.from_pretrained(model_checkpoint)\n",
"        self.loss_func = CrossEntropyLossForSquad()\n",
"\n",
"    def forward(self, input_ids, token_type_ids):\n",
"        start_logits, end_logits = self.model(input_ids, token_type_ids)\n",
"        return start_logits, end_logits\n",
"\n",
"    def train_step(self, input_ids, token_type_ids, start_pos, end_pos):\n",
"        start_logits, end_logits = self(input_ids, token_type_ids)\n",
"        loss = self.loss_func(start_logits, end_logits, start_pos, end_pos)\n",
"        return {\"loss\": loss}\n",
"\n",
"    def evaluate_step(self, input_ids, token_type_ids):\n",
"        start_logits, end_logits = self(input_ids, token_type_ids)\n",
"        return {\"start_logits\": start_logits, \"end_logits\": end_logits}\n",
"\n",
"model = QAModel(MODEL_NAME)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.3 A custom Metric for evaluating the data\n",
"\n",
"`paddlenlp` provides two functions for evaluating `SQuAD`-format datasets, `compute_prediction` and `squad_evaluate`:\n",
"- `compute_prediction` takes the original data `examples`, the processed data `features`, and the results for those `features`, `predictions` (a tuple holding the `start_logits` and `end_logits` of all the data)\n",
"- `squad_evaluate` takes the original data `examples` and the prediction results `all_predictions` (usually obtained from `compute_prediction`)\n",
"\n",
"To use these two functions we have to pass the datasets in, which, given `fastNLP`'s design, clearly cannot happen inside `evaluate_step`; and since `FastNLP` does not provide a `Metric` that computes `F1` and `EM`, we need to define our own evaluation `Metric`.\n",
"\n",
"Besides its initializer, a `Metric` needs to implement three functions:\n",
"\n",
"1. `reset` - called before iterating over the validation set, to clear state; in our custom `Metric` we empty `all_start_logits` and `all_end_logits` here, so the results of each `batch` can be collected afresh.\n",
"2. `update` - called after the results of each `batch` are obtained, to update the `Metric`'s state; its parameters are exactly what `evaluate_step` returns. Here we collect the incoming `start_logits` and `end_logits`.\n",
"3. `get_metric` - called once the dataset has been fully iterated, to compute the evaluation results. At this point we hold `all_start_logits` and `all_end_logits` for the whole validation set; we pass them into `compute_prediction` to get the predicted answers, then feed those to `squad_evaluate` for the evaluation results.\n",
"    - Note: `squad_evaluate` prints its results on its own; to keep it from interfering with `FastNLP`'s output, we silence its standard output with `contextlib.redirect_stdout(None)`.\n",
"\n",
"In short, `SquadEvaluateMetric` evaluates by collecting the `logits` of every example in the validation set and then passing them all at once into `compute_prediction` and `squad_evaluate`. It is worth noting that `paddlenlp.datasets.load_dataset` returns a `MapDataset`, whose `data` member holds the data as loaded and whose `new_data` member holds the data as updated by the `map` function; the two can therefore be passed in as `examples` and `features` respectively."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from fastNLP.core import Metric\n",
"from paddlenlp.metrics.squad import squad_evaluate, compute_prediction\n",
"import contextlib\n",
"\n",
"class SquadEvaluateMetric(Metric):\n",
"    def __init__(self, examples, features, testing=False):\n",
"        super(SquadEvaluateMetric, self).__init__(\"paddle\", False)\n",
"        self.examples = examples\n",
"        self.features = features\n",
"        self.all_start_logits = []\n",
"        self.all_end_logits = []\n",
"        self.testing = testing\n",
"\n",
"    def reset(self):\n",
"        self.all_start_logits = []\n",
"        self.all_end_logits = []\n",
"\n",
"    def update(self, start_logits, end_logits):\n",
"        for start, end in zip(start_logits, end_logits):\n",
"            self.all_start_logits.append(start.numpy())\n",
"            self.all_end_logits.append(end.numpy())\n",
"\n",
"    def get_metric(self):\n",
"        all_predictions, _, _ = compute_prediction(\n",
"            self.examples, self.features[:len(self.all_start_logits)],\n",
"            (self.all_start_logits, self.all_end_logits),\n",
"            False, 20, 30\n",
"        )\n",
"        with contextlib.redirect_stdout(None):\n",
"            result = squad_evaluate(\n",
"                examples=self.examples,\n",
"                preds=all_predictions,\n",
"                is_whitespace_splited=False\n",
"            )\n",
"\n",
"        if self.testing:\n",
"            self.print_predictions(all_predictions)\n",
"        return result\n",
"\n",
"    def print_predictions(self, preds):\n",
"        for i, data in enumerate(self.examples):\n",
"            if i >= 5:\n",
"                break\n",
"            print()\n",
"            print(\"Context:\", data[\"context\"])\n",
"            print(\"Question:\", data[\"question\"], \\\n",
"                  \"Predicted answer:\", preds[data[\"id\"]], \\\n",
"                  \"Gold answer:\", data[\"answers\"][\"text\"])\n",
"\n",
"metric = SquadEvaluateMetric(\n",
"    val_dataloader.dataset.data,\n",
"    val_dataloader.dataset.new_data,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.4 Training\n",
"\n",
"All the preparation is now done, and we can train with the `Trainer`. For the learning rate we again use the linear warmup schedule `LinearDecayWithWarmup`, with `AdamW` as the optimizer; for callbacks we choose `LRSchedCallback` to step the learning rate and `LoadBestModelCallback` to track the `f1` score of the evaluation results. Once the `Trainer` is initialized, hand the training process over to `FastNLP`.\n",
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">[19:04:54] </span><span style=\"color: #000080; text-decoration-color: #000080\">INFO </span> Running evaluator sanity check for <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span> batches. <a href=\"file://../fastNLP/core/controllers/trainer.py\"><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">trainer.py</span></a><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">:</span><a href=\"file://../fastNLP/core/controllers/trainer.py#631\"><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">631</span></a>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[2;36m[19:04:54]\u001b[0m\u001b[2;36m \u001b[0m\u001b[34mINFO \u001b[0m Running evaluator sanity check for \u001b[1;36m2\u001b[0m batches. \u001b]8;id=367046;file://../fastNLP/core/controllers/trainer.py\u001b\\\u001b[2mtrainer.py\u001b[0m\u001b]8;;\u001b\\\u001b[2m:\u001b[0m\u001b]8;id=96810;file://../fastNLP/core/controllers/trainer.py#631\u001b\\\u001b[2m631\u001b[0m\u001b]8;;\u001b\\\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
],
"text/plain": []
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">100</span> ----------------------------\n",
"</pre>\n"
],
"text/plain": [
"---------------------------- Eval. results on Epoch:\u001b[1;36m0\u001b[0m, Batch:\u001b[1;36m100\u001b[0m ----------------------------\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">49.25899788285109</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">66.55559127349602</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">49.25899788285109</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">66.55559127349602</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"exact#squad\"\u001b[0m: \u001b[1;36m49.25899788285109\u001b[0m,\n",
" \u001b[1;34m\"f1#squad\"\u001b[0m: \u001b[1;36m66.55559127349602\u001b[0m,\n",
" \u001b[1;34m\"total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[1;34m\"HasAns_exact#squad\"\u001b[0m: \u001b[1;36m49.25899788285109\u001b[0m,\n",
" \u001b[1;34m\"HasAns_f1#squad\"\u001b[0m: \u001b[1;36m66.55559127349602\u001b[0m,\n",
" \u001b[1;34m\"HasAns_total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">200</span> ----------------------------\n",
"</pre>\n"
],
"text/plain": [
"---------------------------- Eval. results on Epoch:\u001b[1;36m0\u001b[0m, Batch:\u001b[1;36m200\u001b[0m ----------------------------\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">57.37473535638673</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">70.93036525200617</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">57.37473535638673</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">70.93036525200617</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"exact#squad\"\u001b[0m: \u001b[1;36m57.37473535638673\u001b[0m,\n",
" \u001b[1;34m\"f1#squad\"\u001b[0m: \u001b[1;36m70.93036525200617\u001b[0m,\n",
" \u001b[1;34m\"total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[1;34m\"HasAns_exact#squad\"\u001b[0m: \u001b[1;36m57.37473535638673\u001b[0m,\n",
" \u001b[1;34m\"HasAns_f1#squad\"\u001b[0m: \u001b[1;36m70.93036525200617\u001b[0m,\n",
" \u001b[1;34m\"HasAns_total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">300</span> ----------------------------\n",
"</pre>\n"
],
"text/plain": [
"---------------------------- Eval. results on Epoch:\u001b[1;36m0\u001b[0m, Batch:\u001b[1;36m300\u001b[0m ----------------------------\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">63.86732533521524</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">78.62546663568186</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">63.86732533521524</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">78.62546663568186</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"exact#squad\"\u001b[0m: \u001b[1;36m63.86732533521524\u001b[0m,\n",
" \u001b[1;34m\"f1#squad\"\u001b[0m: \u001b[1;36m78.62546663568186\u001b[0m,\n",
" \u001b[1;34m\"total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[1;34m\"HasAns_exact#squad\"\u001b[0m: \u001b[1;36m63.86732533521524\u001b[0m,\n",
" \u001b[1;34m\"HasAns_f1#squad\"\u001b[0m: \u001b[1;36m78.62546663568186\u001b[0m,\n",
" \u001b[1;34m\"HasAns_total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">400</span> ----------------------------\n",
"</pre>\n"
],
"text/plain": [
"---------------------------- Eval. results on Epoch:\u001b[1;36m0\u001b[0m, Batch:\u001b[1;36m400\u001b[0m ----------------------------\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">64.92589978828511</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">79.36746074079691</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">64.92589978828511</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">79.36746074079691</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"exact#squad\"\u001b[0m: \u001b[1;36m64.92589978828511\u001b[0m,\n",
" \u001b[1;34m\"f1#squad\"\u001b[0m: \u001b[1;36m79.36746074079691\u001b[0m,\n",
" \u001b[1;34m\"total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[1;34m\"HasAns_exact#squad\"\u001b[0m: \u001b[1;36m64.92589978828511\u001b[0m,\n",
" \u001b[1;34m\"HasAns_f1#squad\"\u001b[0m: \u001b[1;36m79.36746074079691\u001b[0m,\n",
" \u001b[1;34m\"HasAns_total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">500</span> ----------------------------\n",
"</pre>\n"
],
"text/plain": [
"---------------------------- Eval. results on Epoch:\u001b[1;36m0\u001b[0m, Batch:\u001b[1;36m500\u001b[0m ----------------------------\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">65.70218772053634</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">80.33295482054824</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">65.70218772053634</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">80.33295482054824</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"exact#squad\"\u001b[0m: \u001b[1;36m65.70218772053634\u001b[0m,\n",
" \u001b[1;34m\"f1#squad\"\u001b[0m: \u001b[1;36m80.33295482054824\u001b[0m,\n",
" \u001b[1;34m\"total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[1;34m\"HasAns_exact#squad\"\u001b[0m: \u001b[1;36m65.70218772053634\u001b[0m,\n",
" \u001b[1;34m\"HasAns_f1#squad\"\u001b[0m: \u001b[1;36m80.33295482054824\u001b[0m,\n",
" \u001b[1;34m\"HasAns_total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">600</span> ----------------------------\n",
"</pre>\n"
],
"text/plain": [
"---------------------------- Eval. results on Epoch:\u001b[1;36m0\u001b[0m, Batch:\u001b[1;36m600\u001b[0m ----------------------------\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">65.41990119971771</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">79.7483487059053</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">65.41990119971771</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">79.7483487059053</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"exact#squad\"\u001b[0m: \u001b[1;36m65.41990119971771\u001b[0m,\n",
" \u001b[1;34m\"f1#squad\"\u001b[0m: \u001b[1;36m79.7483487059053\u001b[0m,\n",
" \u001b[1;34m\"total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[1;34m\"HasAns_exact#squad\"\u001b[0m: \u001b[1;36m65.41990119971771\u001b[0m,\n",
" \u001b[1;34m\"HasAns_f1#squad\"\u001b[0m: \u001b[1;36m79.7483487059053\u001b[0m,\n",
" \u001b[1;34m\"HasAns_total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">700</span> ----------------------------\n",
"</pre>\n"
],
"text/plain": [
"---------------------------- Eval. results on Epoch:\u001b[1;36m0\u001b[0m, Batch:\u001b[1;36m700\u001b[0m ----------------------------\n"
]
},
"metadata": {},
"output_type": "display_data"
},
|
||
{
|
||
"data": {
|
||
"text/html": [
|
||
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
|
||
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">66.61961891319689</span>,\n",
|
||
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">80.32432238994133</span>,\n",
|
||
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
|
||
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">66.61961891319689</span>,\n",
|
||
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">80.32432238994133</span>,\n",
|
||
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"exact#squad\"\u001b[0m: \u001b[1;36m66.61961891319689\u001b[0m,\n",
" \u001b[1;34m\"f1#squad\"\u001b[0m: \u001b[1;36m80.32432238994133\u001b[0m,\n",
" \u001b[1;34m\"total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[1;34m\"HasAns_exact#squad\"\u001b[0m: \u001b[1;36m66.61961891319689\u001b[0m,\n",
" \u001b[1;34m\"HasAns_f1#squad\"\u001b[0m: \u001b[1;36m80.32432238994133\u001b[0m,\n",
" \u001b[1;34m\"HasAns_total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">800</span> ----------------------------\n",
"</pre>\n"
],
"text/plain": [
"---------------------------- Eval. results on Epoch:\u001b[1;36m0\u001b[0m, Batch:\u001b[1;36m800\u001b[0m ----------------------------\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">65.84333098094567</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">79.23169801265415</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_exact#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">65.84333098094567</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_f1#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">79.23169801265415</span>,\n",
" <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"HasAns_total#squad\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"exact#squad\"\u001b[0m: \u001b[1;36m65.84333098094567\u001b[0m,\n",
" \u001b[1;34m\"f1#squad\"\u001b[0m: \u001b[1;36m79.23169801265415\u001b[0m,\n",
" \u001b[1;34m\"total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[1;34m\"HasAns_exact#squad\"\u001b[0m: \u001b[1;36m65.84333098094567\u001b[0m,\n",
" \u001b[1;34m\"HasAns_f1#squad\"\u001b[0m: \u001b[1;36m79.23169801265415\u001b[0m,\n",
" \u001b[1;34m\"HasAns_total#squad\"\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
],
"text/plain": []
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">[19:20:28] </span><span style=\"color: #000080; text-decoration-color: #000080\">INFO </span> Loading best model from fnlp-ernie-squad/ <a href=\"file://../fastNLP/core/callbacks/load_best_model_callback.py\"><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">load_best_model_callback.py</span></a><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">:</span><a href=\"file://../fastNLP/core/callbacks/load_best_model_callback.py#111\"><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">111</span></a>\n",
"<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\"> </span> <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2022</span>-<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">06</span>-<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">27</span>-19_00_15_388554/best_so_far <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\"> </span>\n",
"<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\"> </span> with f1#squad: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">80.33295482054824</span><span style=\"color: #808000; text-decoration-color: #808000\">...</span> <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\"> </span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[2;36m[19:20:28]\u001b[0m\u001b[2;36m \u001b[0m\u001b[34mINFO \u001b[0m Loading best model from fnlp-ernie-squad/ \u001b]8;id=163935;file://../fastNLP/core/callbacks/load_best_model_callback.py\u001b\\\u001b[2mload_best_model_callback.py\u001b[0m\u001b]8;;\u001b\\\u001b[2m:\u001b[0m\u001b]8;id=31503;file://../fastNLP/core/callbacks/load_best_model_callback.py#111\u001b\\\u001b[2m111\u001b[0m\u001b]8;;\u001b\\\n",
"\u001b[2;36m \u001b[0m \u001b[1;36m2022\u001b[0m-\u001b[1;36m06\u001b[0m-\u001b[1;36m27\u001b[0m-19_00_15_388554/best_so_far \u001b[2m \u001b[0m\n",
"\u001b[2;36m \u001b[0m with f1#squad: \u001b[1;36m80.33295482054824\u001b[0m\u001b[33m...\u001b[0m \u001b[2m \u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\"> </span><span style=\"color: #000080; text-decoration-color: #000080\">INFO </span> Deleting fnlp-ernie-squad/<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2022</span>-<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">06</span>-<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">27</span>-19_0 <a href=\"file://../fastNLP/core/callbacks/load_best_model_callback.py\"><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">load_best_model_callback.py</span></a><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">:</span><a href=\"file://../fastNLP/core/callbacks/load_best_model_callback.py#131\"><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">131</span></a>\n",
"<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\"> </span> 0_15_388554/best_so_far<span style=\"color: #808000; text-decoration-color: #808000\">...</span> <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\"> </span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[2;36m \u001b[0m\u001b[2;36m \u001b[0m\u001b[34mINFO \u001b[0m Deleting fnlp-ernie-squad/\u001b[1;36m2022\u001b[0m-\u001b[1;36m06\u001b[0m-\u001b[1;36m27\u001b[0m-19_0 \u001b]8;id=560859;file://../fastNLP/core/callbacks/load_best_model_callback.py\u001b\\\u001b[2mload_best_model_callback.py\u001b[0m\u001b]8;;\u001b\\\u001b[2m:\u001b[0m\u001b]8;id=573263;file://../fastNLP/core/callbacks/load_best_model_callback.py#131\u001b\\\u001b[2m131\u001b[0m\u001b]8;;\u001b\\\n",
"\u001b[2;36m \u001b[0m 0_15_388554/best_so_far\u001b[33m...\u001b[0m \u001b[2m \u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from fastNLP import Trainer, LRSchedCallback, LoadBestModelCallback\n",
|
||
"from paddlenlp.transformers import LinearDecayWithWarmup\n",
|
||
"\n",
|
||
"n_epochs = 1\n",
|
||
"num_training_steps = len(train_dataloader) * n_epochs\n",
|
||
"lr_scheduler = LinearDecayWithWarmup(3e-5, num_training_steps, 0.1)\n",
|
||
"optimizer = paddle.optimizer.AdamW(\n",
|
||
" learning_rate=lr_scheduler,\n",
|
||
" parameters=model.parameters(),\n",
|
||
")\n",
|
||
"callbacks=[\n",
|
||
" LRSchedCallback(lr_scheduler, step_on=\"batch\"),\n",
|
||
" LoadBestModelCallback(\"f1#squad\", larger_better=True, save_folder=\"fnlp-ernie-squad\")\n",
|
||
"]\n",
|
||
"trainer = Trainer(\n",
|
||
" model=model,\n",
|
||
" train_dataloader=train_dataloader,\n",
|
||
" evaluate_dataloaders=val_dataloader,\n",
|
||
" device=1,\n",
|
||
" optimizers=optimizer,\n",
|
||
" n_epochs=n_epochs,\n",
|
||
" callbacks=callbacks,\n",
|
||
" evaluate_every=100,\n",
|
||
" metrics={\"squad\": metric},\n",
|
||
")\n",
|
||
"trainer.run()"
]
},
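{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, the `LinearDecayWithWarmup(3e-5, num_training_steps, 0.1)` schedule used above ramps the learning rate up linearly from 0 to 3e-5 over the first 10% of training steps, then decays it linearly back to 0. The next cell is a minimal pure-Python sketch of that shape, for intuition only; it is not paddlenlp's actual implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def linear_decay_with_warmup(step, base_lr=3e-5, total_steps=1000, warmup=0.1):\n",
"    # illustrative sketch only, not paddlenlp's implementation\n",
"    warmup_steps = int(total_steps * warmup)\n",
"    if step < warmup_steps:\n",
"        # linear ramp from 0 up to base_lr\n",
"        return base_lr * step / max(1, warmup_steps)\n",
"    # linear decay from base_lr down to 0 over the remaining steps\n",
"    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))\n",
"\n",
"# a few sample points over a hypothetical 1000-step run\n",
"for s in (0, 50, 100, 500, 1000):\n",
"    print(s, linear_decay_with_warmup(s))"
]
},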
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.5 测试\n",
|
||
"\n",
|
||
"最后,我们可以使用 `Evaluator` 查看我们训练的结果。我们在之前为 `SquadEvaluateMetric` 设置了 `testing` 参数来在测试阶段进行输出,可以看到,训练的结果还是比较不错的。"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">原文: 爬行垫根据中间材料的不同可以分为:XPE爬行垫、EPE爬行垫、EVA爬行垫、PVC爬行垫;其中XPE爬\n",
"行垫、EPE爬行垫都属于PE材料加保鲜膜复合而成,都是无异味的环保材料,但是XPE爬行垫是品质较好的爬\n",
"行垫,韩国进口爬行垫都是这种爬行垫,而EPE爬行垫是国内厂家为了减低成本,使用EPE(珍珠棉)作为原料生\n",
"产的一款爬行垫,该材料弹性差,易碎,开孔发泡防水性弱。EVA爬行垫、PVC爬行垫是用EVA或PVC作为原材料\n",
"与保鲜膜复合的而成的爬行垫,或者把图案转印在原材料上,这两款爬行垫通常有异味,如果是图案转印的爬\n",
"行垫,油墨外露容易脱落。 \n",
"当时我儿子爬的时候,我们也买了垫子,但是始终有味。最后就没用了,铺的就的薄毯子让他爬。\n",
"</pre>\n"
],
"text/plain": [
"原文: 爬行垫根据中间材料的不同可以分为:XPE爬行垫、EPE爬行垫、EVA爬行垫、PVC爬行垫;其中XPE爬\n",
"行垫、EPE爬行垫都属于PE材料加保鲜膜复合而成,都是无异味的环保材料,但是XPE爬行垫是品质较好的爬\n",
"行垫,韩国进口爬行垫都是这种爬行垫,而EPE爬行垫是国内厂家为了减低成本,使用EPE(珍珠棉)作为原料生\n",
"产的一款爬行垫,该材料弹性差,易碎,开孔发泡防水性弱。EVA爬行垫、PVC爬行垫是用EVA或PVC作为原材料\n",
"与保鲜膜复合的而成的爬行垫,或者把图案转印在原材料上,这两款爬行垫通常有异味,如果是图案转印的爬\n",
"行垫,油墨外露容易脱落。 \n",
"当时我儿子爬的时候,我们也买了垫子,但是始终有味。最后就没用了,铺的就的薄毯子让他爬。\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">问题: 爬行垫什么材质的好 答案: EPE(珍珠棉 正确答案: ['XPE']\n",
"</pre>\n"
],
"text/plain": [
"问题: 爬行垫什么材质的好 答案: EPE(珍珠棉 正确答案: ['XPE']\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">原文: 真实情况是160-162。她平时谎报的168是因为不离脚穿高水台恨天高(15厘米) 图1她穿着高水台恨\n",
"天高和刘亦菲一样高,(刘亦菲对外报身高172)范冰冰礼服下厚厚的高水台暴露了她的心机,对比一下两者的\n",
"鞋子吧 图2 穿着高水台恨天高才和刘德华谢霆锋持平,如果她真的有168,那么加上鞋高,刘和谢都要有180?\n",
"明显是不可能的。所以刘德华对外报的身高174减去10-15厘米才是范冰冰的真实身高 图3,范冰冰有一次脱\n",
"鞋上场,这个最说明问题了,看看她的身体比例吧。还有目测一下她手上鞋子的鞋跟有多高多厚吧,至少超过\n",
"10厘米。\n",
"</pre>\n"
],
"text/plain": [
"原文: 真实情况是160-162。她平时谎报的168是因为不离脚穿高水台恨天高(15厘米) 图1她穿着高水台恨\n",
"天高和刘亦菲一样高,(刘亦菲对外报身高172)范冰冰礼服下厚厚的高水台暴露了她的心机,对比一下两者的\n",
"鞋子吧 图2 穿着高水台恨天高才和刘德华谢霆锋持平,如果她真的有168,那么加上鞋高,刘和谢都要有180?\n",
"明显是不可能的。所以刘德华对外报的身高174减去10-15厘米才是范冰冰的真实身高 图3,范冰冰有一次脱\n",
"鞋上场,这个最说明问题了,看看她的身体比例吧。还有目测一下她手上鞋子的鞋跟有多高多厚吧,至少超过\n",
"10厘米。\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">问题: 范冰冰多高真实身高 答案: 160-162 正确答案: ['160-162']\n",
"</pre>\n"
],
"text/plain": [
"问题: 范冰冰多高真实身高 答案: 160-162 正确答案: ['160-162']\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">原文: 防水作为目前高端手机的标配,特别是苹果也支持防水之后,国产大多数高端旗舰手机都已经支持防\n",
"水。虽然我们真的不会故意把手机放入水中,但是有了防水之后,用户心里会多一重安全感。那么近日最为\n",
"火热的小米6防水吗?小米6的防水级别又是多少呢? 小编查询了很多资料发现,小米6确实是防水的,但是为\n",
"了保持低调,同时为了不被别人说防水等级不够,很多资料都没有标注小米是否防水。根据评测资料显示,小\n",
"米6是支持IP68级的防水,是绝对能够满足日常生活中的防水需求的。\n",
"</pre>\n"
],
"text/plain": [
"原文: 防水作为目前高端手机的标配,特别是苹果也支持防水之后,国产大多数高端旗舰手机都已经支持防\n",
"水。虽然我们真的不会故意把手机放入水中,但是有了防水之后,用户心里会多一重安全感。那么近日最为\n",
"火热的小米6防水吗?小米6的防水级别又是多少呢? 小编查询了很多资料发现,小米6确实是防水的,但是为\n",
"了保持低调,同时为了不被别人说防水等级不够,很多资料都没有标注小米是否防水。根据评测资料显示,小\n",
"米6是支持IP68级的防水,是绝对能够满足日常生活中的防水需求的。\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">问题: 小米6防水等级 答案: IP68级 正确答案: ['IP68级']\n",
"</pre>\n"
],
"text/plain": [
"问题: 小米6防水等级 答案: IP68级 正确答案: ['IP68级']\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">原文: 这位朋友你好,女性出现妊娠反应一般是从6-12周左右,也就是女性怀孕1个多月就会开始出现反应,\n",
"第3个月的时候,妊辰反应基本结束。 而大部分女性怀孕初期都会出现恶心、呕吐的感觉,这些症状都是因\n",
"人而异的,除非恶心、呕吐的非常厉害,才需要就医,否则这些都是刚怀孕的的正常症状。1-3个月的时候可\n",
"以观察一下自己的皮肤,一般女性怀孕初期可能会产生皮肤色素沉淀或是腹壁产生妊娠纹,特别是在怀孕的\n",
"后期更加明显。 还有很多女性怀孕初期会出现疲倦、嗜睡的情况。怀孕三个月的时候,膀胱会受到日益胀\n",
"大的子宫的压迫,容量会变小,所以怀孕期间也会有尿频的现象出现。月经停止也是刚怀孕最容易出现的症\n",
"状,只要是平时月经正常的女性,在性行为后超过正常经期两周,就有可能是怀孕了。 如果你想判断自己是\n",
"否怀孕,可以看看自己有没有这些反应。当然这也只是多数人的怀孕表现,也有部分女性怀孕表现并不完全\n",
"是这样,如果你无法确定自己是否怀孕,最好去医院检查一下。\n",
"</pre>\n"
],
"text/plain": [
"原文: 这位朋友你好,女性出现妊娠反应一般是从6-12周左右,也就是女性怀孕1个多月就会开始出现反应,\n",
"第3个月的时候,妊辰反应基本结束。 而大部分女性怀孕初期都会出现恶心、呕吐的感觉,这些症状都是因\n",
"人而异的,除非恶心、呕吐的非常厉害,才需要就医,否则这些都是刚怀孕的的正常症状。1-3个月的时候可\n",
"以观察一下自己的皮肤,一般女性怀孕初期可能会产生皮肤色素沉淀或是腹壁产生妊娠纹,特别是在怀孕的\n",
"后期更加明显。 还有很多女性怀孕初期会出现疲倦、嗜睡的情况。怀孕三个月的时候,膀胱会受到日益胀\n",
"大的子宫的压迫,容量会变小,所以怀孕期间也会有尿频的现象出现。月经停止也是刚怀孕最容易出现的症\n",
"状,只要是平时月经正常的女性,在性行为后超过正常经期两周,就有可能是怀孕了。 如果你想判断自己是\n",
"否怀孕,可以看看自己有没有这些反应。当然这也只是多数人的怀孕表现,也有部分女性怀孕表现并不完全\n",
"是这样,如果你无法确定自己是否怀孕,最好去医院检查一下。\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">问题: 怀孕多久会有反应 答案: 6-12周左右 正确答案: ['6-12周左右', '6-12周', '1个多月']\n",
"</pre>\n"
],
"text/plain": [
"问题: 怀孕多久会有反应 答案: 6-12周左右 正确答案: ['6-12周左右', '6-12周', '1个多月']\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">原文: 【东奥会计在线——中级会计职称频道推荐】根据《关于提高科技型中小企业研究开发费用税前加计\n",
"扣除比例的通知》的规定,研发费加计扣除比例提高到75%。|财政部、国家税务总局、科技部发布《关于提\n",
"高科技型中小企业研究开发费用税前加计扣除比例的通知》。|通知称,为进一步激励中小企业加大研发投\n",
"入,支持科技创新,就提高科技型中小企业研究开发费用(以下简称研发费用)税前加计扣除比例有关问题发\n",
"布通知。|通知明确,科技型中小企业开展研发活动中实际发生的研发费用,未形成无形资产计入当期损益的\n",
",在按规定据实扣除的基础上,在2017年1月1日至2019年12月31日期间,再按照实际发生额的75%在税前加计\n",
"扣除;形成无形资产的,在上述期间按照无形资产成本的175%在税前摊销。|科技型中小企业享受研发费用税\n",
"前加计扣除政策的其他政策口径按照《财政部国家税务总局科技部关于完善研究开发费用税前加计扣除政\n",
"策的通知》(财税〔2015〕119号)规定执行。|科技型中小企业条件和管理办法由科技部、财政部和国家税\n",
"务总局另行发布。科技、财政和税务部门应建立信息共享机制,及时共享科技型中小企业的相关信息,加强\n",
"协调配合,保障优惠政策落实到位。|上一篇文章:关于2016年度企业研究开发费用税前加计扣除政策企业所\n",
"得税纳税申报问题的公告 下一篇文章:关于提高科技型中小企业研究开发费用税前加计扣除比例的通知\n",
"</pre>\n"
],
"text/plain": [
"原文: 【东奥会计在线——中级会计职称频道推荐】根据《关于提高科技型中小企业研究开发费用税前加计\n",
"扣除比例的通知》的规定,研发费加计扣除比例提高到75%。|财政部、国家税务总局、科技部发布《关于提\n",
"高科技型中小企业研究开发费用税前加计扣除比例的通知》。|通知称,为进一步激励中小企业加大研发投\n",
"入,支持科技创新,就提高科技型中小企业研究开发费用(以下简称研发费用)税前加计扣除比例有关问题发\n",
"布通知。|通知明确,科技型中小企业开展研发活动中实际发生的研发费用,未形成无形资产计入当期损益的\n",
",在按规定据实扣除的基础上,在2017年1月1日至2019年12月31日期间,再按照实际发生额的75%在税前加计\n",
"扣除;形成无形资产的,在上述期间按照无形资产成本的175%在税前摊销。|科技型中小企业享受研发费用税\n",
"前加计扣除政策的其他政策口径按照《财政部国家税务总局科技部关于完善研究开发费用税前加计扣除政\n",
"策的通知》(财税〔2015〕119号)规定执行。|科技型中小企业条件和管理办法由科技部、财政部和国家税\n",
"务总局另行发布。科技、财政和税务部门应建立信息共享机制,及时共享科技型中小企业的相关信息,加强\n",
"协调配合,保障优惠政策落实到位。|上一篇文章:关于2016年度企业研究开发费用税前加计扣除政策企业所\n",
"得税纳税申报问题的公告 下一篇文章:关于提高科技型中小企业研究开发费用税前加计扣除比例的通知\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">问题: 研发费用加计扣除比例 答案: 75% 正确答案: ['75%']\n",
"</pre>\n"
],
"text/plain": [
"问题: 研发费用加计扣除比例 答案: 75% 正确答案: ['75%']\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
],
"text/plain": []
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
" <span style=\"color: #008000; text-decoration-color: #008000\">'exact#squad'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">65.70218772053634</span>,\n",
" <span style=\"color: #008000; text-decoration-color: #008000\">'f1#squad'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">80.33295482054824</span>,\n",
" <span style=\"color: #008000; text-decoration-color: #008000\">'total#squad'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>,\n",
" <span style=\"color: #008000; text-decoration-color: #008000\">'HasAns_exact#squad'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">65.70218772053634</span>,\n",
" <span style=\"color: #008000; text-decoration-color: #008000\">'HasAns_f1#squad'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">80.33295482054824</span>,\n",
" <span style=\"color: #008000; text-decoration-color: #008000\">'HasAns_total#squad'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1417</span>\n",
"<span style=\"font-weight: bold\">}</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m{\u001b[0m\n",
" \u001b[32m'exact#squad'\u001b[0m: \u001b[1;36m65.70218772053634\u001b[0m,\n",
" \u001b[32m'f1#squad'\u001b[0m: \u001b[1;36m80.33295482054824\u001b[0m,\n",
" \u001b[32m'total#squad'\u001b[0m: \u001b[1;36m1417\u001b[0m,\n",
" \u001b[32m'HasAns_exact#squad'\u001b[0m: \u001b[1;36m65.70218772053634\u001b[0m,\n",
" \u001b[32m'HasAns_f1#squad'\u001b[0m: \u001b[1;36m80.33295482054824\u001b[0m,\n",
" \u001b[32m'HasAns_total#squad'\u001b[0m: \u001b[1;36m1417\u001b[0m\n",
"\u001b[1m}\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from fastNLP import Evaluator\n",
|
||
"evaluator = Evaluator(\n",
|
||
" model=model,\n",
|
||
" dataloaders=val_dataloader,\n",
|
||
" device=1,\n",
|
||
" metrics={\n",
|
||
" \"squad\": SquadEvaluateMetric(\n",
|
||
" val_dataloader.dataset.data,\n",
|
||
" val_dataloader.dataset.new_data,\n",
|
||
" testing=True,\n",
|
||
" ),\n",
|
||
" },\n",
|
||
")\n",
|
||
"result = evaluator.run()"
]
}
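,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The answers printed above are produced by span extraction over the model's start and end logits: decoding essentially picks the `(start, end)` token pair that maximizes `start_logit + end_logit`, subject to `start <= end` and a maximum answer length. The next cell is a small self-contained numpy sketch of that decoding step on random logits, for illustration only; the real post-processing additionally maps token positions back to character offsets in the original context."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def best_span(start_logits, end_logits, max_answer_len=50):\n",
"    # pick (s, e) maximizing start_logits[s] + end_logits[e]\n",
"    # with s <= e < s + max_answer_len; illustrative sketch only\n",
"    best, best_score = (0, 0), float(\"-inf\")\n",
"    for s in range(len(start_logits)):\n",
"        for e in range(s, min(s + max_answer_len, len(end_logits))):\n",
"            score = start_logits[s] + end_logits[e]\n",
"            if score > best_score:\n",
"                best_score, best = score, (s, e)\n",
"    return best, best_score\n",
"\n",
"# toy example with random logits for a 20-token sequence\n",
"rng = np.random.default_rng(0)\n",
"print(best_span(rng.normal(size=20), rng.normal(size=20)))"
]
}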
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.7.13 ('fnlp-paddle')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.13"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "31f2d9d3efc23c441973d7c4273acfea8b132b6a578f002629b6b44b8f65e720"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}