Merge pull request #193 from fastnlp/dev0.5.0

Dev0.4.5
ChenXin 2019-07-12 14:35:17 +08:00 committed by GitHub
commit d6326de8d2
361 changed files with 224192 additions and 5565 deletions

.gitignore (new file, 16 lines added)

@ -0,0 +1,16 @@
.gitignore
.DS_Store
.ipynb_checkpoints
*.pyc
__pycache__
*.swp
.vscode/
.idea/**
caches
# fitlog
.fitlog
logs/
.fitconfig

@@ -8,7 +8,7 @@ install:
 - pip install pytest-cov
 # command to run tests
 script:
-- pytest --cov=./
+- pytest --cov=./ test/
 after_success:
 - bash <(curl -s https://codecov.io/bash)

@@ -6,48 +6,69 @@
 ![Hex.pm](https://img.shields.io/hexpm/l/plug.svg)
 [![Documentation Status](https://readthedocs.org/projects/fastnlp/badge/?version=latest)](http://fastnlp.readthedocs.io/?badge=latest)
-fastNLP 是一款轻量级的 NLP 处理套件。你既可以使用它快速地完成一个命名实体识别NER、中文分词或文本分类任务 也可以使用他构建许多复杂的网络模型,进行科研。它具有如下的特性:
+fastNLP 是一款轻量级的 NLP 处理套件。你既可以使用它快速地完成一个序列标注([NER](reproduction/seqence_labelling/ner)、POS-Tagging等、中文分词、[文本分类](reproduction/text_classification)、[Matching](reproduction/matching)、[指代消解](reproduction/coreference_resolution)、[摘要](reproduction/Summarization)等任务; 也可以使用它构建许多复杂的网络模型,进行科研。它具有如下的特性:
-- 统一的Tabular式数据容器让数据预处理过程简洁明了。内置多种数据集的DataSet Loader省去预处理代码。
-- 各种方便的NLP工具例如预处理embedding加载; 中间数据cache等;
-- 详尽的中文文档以供查阅;
+- 统一的Tabular式数据容器让数据预处理过程简洁明了。内置多种数据集的DataSet Loader省去预处理代码;
+- 多种训练、测试组件例如训练器Trainer测试器Tester以及各种评测metrics等等;
+- 各种方便的NLP工具例如预处理embedding加载包括ELMo和BERT; 中间数据cache等;
+- 详尽的中文[文档](https://fastnlp.readthedocs.io/)、[教程](https://fastnlp.readthedocs.io/zh/latest/user/tutorials.html)以供查阅;
 - 提供诸多高级模块例如Variational LSTM, Transformer, CRF等;
-- 封装CNNTextBiaffine等模型可供直接使用;
+- 在序列标注、中文分词、文本分类、Matching、指代消解、摘要等任务上封装了各种模型可供直接使用详细内容见 [reproduction](reproduction) 部分;
 - 便捷且具有扩展性的训练器; 提供多种内置callback函数方便实验记录、异常捕获等。
 ## 安装指南
 fastNLP 依赖下包:
-+ numpy
-+ torch>=0.4.0
-+ tqdm
-+ nltk
++ numpy>=1.14.2
++ torch>=1.0.0
++ tqdm>=4.28.1
++ nltk>=3.4.1
++ requests
++ spacy
-其中torch的安装可能与操作系统及 CUDA 的版本相关,请参见 PyTorch 官网 。
-在依赖包安装完成的情况,您可以在命令行执行如下指令完成安装
+其中torch的安装可能与操作系统及 CUDA 的版本相关,请参见 [PyTorch 官网](https://pytorch.org/) 。
+在依赖包安装完成,您可以在命令行执行如下指令完成安装
 ```shell
 pip install fastNLP
+python -m spacy download en
 ```
+目前使用pip安装fastNLP的版本是0.4.1有较多功能仍未更新最新内容以master分支为准。
+fastNLP0.5.0版本将在近期推出,请密切关注。
-## 参考资源
-- [文档](https://fastnlp.readthedocs.io/zh/latest/)
-- [源码](https://github.com/fastnlp/fastNLP)
+## fastNLP教程
+- [0. 快速入门](https://fastnlp.readthedocs.io/zh/latest/user/quickstart.html)
+- [1. 使用DataSet预处理文本](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_1_data_preprocess.html)
+- [2. 使用DataSetLoader加载数据集](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_2_load_dataset.html)
+- [3. 使用Embedding模块将文本转成向量](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_3_embedding.html)
+- [4. 动手实现一个文本分类器I-使用Trainer和Tester快速训练和测试](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_4_loss_optimizer.html)
+- [5. 动手实现一个文本分类器II-使用DataSetIter实现自定义训练过程](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_5_datasetiter.html)
+- [6. 快速实现序列标注模型](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_6_seq_labeling.html)
+- [7. 使用Modules和Models快速搭建自定义模型](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_7_modules_models.html)
+- [8. 使用Metric快速评测你的模型](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_8_metrics.html)
+- [9. 使用Callback自定义你的训练过程](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_9_callback.html)
+- [10. 使用fitlog 辅助 fastNLP 进行科研](https://fastnlp.readthedocs.io/zh/latest/tutorials/tutorial_10_fitlog.html)
 ## 内置组件
-大部分用于的 NLP 任务神经网络都可以看做由编码encoder、聚合aggregator、解码decoder三种模块组成。
+大部分用于的 NLP 任务神经网络都可以看做由词嵌入embeddings和两种模块编码器encoder、解码器decoder组成。
+以文本分类任务为例下图展示了一个BiLSTM+Attention实现文本分类器的模型流程图
 ![](./docs/source/figures/text_classification.png)
-fastNLP 在 modules 模块中内置了三种模块的诸多组件,可以帮助用户快速搭建自己所需的网络。 三种模块的功能和常见组件如下:
+fastNLP 在 embeddings 模块中内置了几种不同的embedding静态embeddingGloVe、word2vec、上下文相关embedding
+ELMo、BERT、字符embedding基于CNN或者LSTM的CharEmbedding
+与此同时fastNLP 在 modules 模块中内置了两种模块的诸多组件,可以帮助用户快速搭建自己所需的网络。 两种模块的功能和常见组件如下:
 <table>
 <tr>
@@ -57,29 +78,17 @@ fastNLP 在 modules 模块中内置了三种模块的诸多组件,可以帮助
 </tr>
 <tr>
 <td> encoder </td>
-<td> 将输入编码为具有具 有表示能力的向量 </td>
+<td> 将输入编码为具有具有表示能力的向量 </td>
 <td> embedding, RNN, CNN, transformer
 </tr>
-<tr>
-<td> aggregator </td>
-<td> 从多个向量中聚合信息 </td>
-<td> self-attention, max-pooling </td>
-</tr>
 <tr>
 <td> decoder </td>
-<td> 将具有某种表示意义的 向量解码为需要的输出 形式 </td>
+<td> 将具有某种表示意义的向量解码为需要的输出形式 </td>
 <td> MLP, CRF </td>
 </tr>
 </table>
-## 完整模型
-fastNLP 为不同的 NLP 任务实现了许多完整的模型,它们都经过了训练和测试。
-你可以在以下两个地方查看相关信息
-- [介绍](reproduction/)
-- [源码](fastNLP/models/)
 ## 项目结构
 ![](./docs/source/figures/workflow.png)
@@ -93,7 +102,7 @@ fastNLP的大致工作流程如上图所示而项目结构如下
 </tr>
 <tr>
 <td><b> fastNLP.core </b></td>
 <td> 实现了核心功能,包括数据处理组件、训练器、测器等 </td>
 </tr>
 <tr>
 <td><b> fastNLP.models </b></td>
@@ -103,6 +112,10 @@ fastNLP的大致工作流程如上图所示而项目结构如下
 <td><b> fastNLP.modules </b></td>
 <td> 实现了用于搭建神经网络模型的诸多组件 </td>
 </tr>
+<tr>
+<td><b> fastNLP.embeddings </b></td>
+<td> 实现了将序列index转为向量序列的功能包括读取预训练embedding等 </td>
+</tr>
 <tr>
 <td><b> fastNLP.io </b></td>
 <td> 实现了读写功能,包括数据读入,模型读写等 </td>

@@ -19,6 +19,9 @@ apidoc:
 server:
 cd build/html && python -m http.server
+dev:
+rm -rf build/html && make html && make server
 .PHONY: help Makefile
 # Catch-all target: route all unknown targets to Sphinx using the new

docs/README.md (new file, 41 lines added)

@ -0,0 +1,41 @@
# 快速入门 fastNLP 文档编写
本教程为 fastNLP 文档编写者创建,文档编写者包括合作开发人员和文档维护人员。您在一般情况下属于前者,
只需要了解整个框架的部分内容即可。
## 合作开发人员
FastNLP的文档使用基于[reStructuredText标记语言](http://docutils.sourceforge.net/rst.html)的
[Sphinx](http://sphinx.pocoo.org/)工具生成,由[Read the Docs](https://readthedocs.org/)网站自动维护生成。
一般开发者只要编写符合reStructuredText语法规范的文档并通过[PR](https://help.github.com/en/articles/about-pull-requests)
就可以为fastNLP的文档贡献一份力量。
如果你想在本地编译文档并进行大段文档的编写您需要安装Sphinx工具以及sphinx-rtd-theme主题
```bash
fastNLP/docs> pip install sphinx
fastNLP/docs> pip install sphinx-rtd-theme
```
然后在本目录下执行 `make dev` 命令。该命令只支持Linux和MacOS系统期望看到如下输出
```bash
fastNLP/docs> make dev
rm -rf build/html && make html && make server
Running Sphinx v1.5.6
making output directory...
......
Build finished. The HTML pages are in build/html.
cd build/html && python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
现在您可以在浏览器访问 http://localhost:8000/ 查看文档。如果你在远程服务器上进行工作,则访问地址为 http://{服务器的ip地址}:8000/ 。
但您必须保证服务器的8000端口是开放的。如果您的电脑或远程服务器的8000端口被占用程序会顺延使用8001、8002……等端口。
当你结束访问时您可以使用Control(Ctrl) + C 来结束进程。
我们在[这里](./source/user/example.rst)列举了fastNLP文档经常用到的reStructuredText语法网页查看请结合Raw模式
您可以通过阅读它进行快速上手。FastNLP大部分的文档都是写在代码中通过Sphinx工具进行抽取生成的
您还可以参考这篇[未完成的文章](./source/user/docs_in_code.rst)了解代码内文档编写的规范。
## 文档维护人员
文档维护人员需要了解 Makefile 中全部命令的含义,并了解到目前的文档结构
是在 sphinx-apidoc 自动抽取的基础上进行手动修改得到的。
文档维护人员应进一步提升整个框架的自动化程度,并监督合作开发人员不要破坏文档项目的整体结构。


@ -1,36 +0,0 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
set SPHINXPROJ=fastNLP
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
:end
popd


@ -1,2 +0,0 @@
# FastNLP Quick Tutorial

@@ -24,9 +24,9 @@ copyright = '2018, xpqiu'
 author = 'xpqiu'
 # The short X.Y version
-version = '0.4'
+version = '0.4.5'
 # The full version, including alpha/beta/rc tags
-release = '0.4'
+release = '0.4.5'
 # -- General configuration ---------------------------------------------------

@@ -2,6 +2,6 @@ fastNLP.core.batch
==================
.. automodule:: fastNLP.core.batch
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.callback
=====================
.. automodule:: fastNLP.core.callback
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.const
==================
.. automodule:: fastNLP.core.const
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.dataset
====================
.. automodule:: fastNLP.core.dataset
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.field
==================
.. automodule:: fastNLP.core.field
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.instance
=====================
.. automodule:: fastNLP.core.instance
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.losses
===================
.. automodule:: fastNLP.core.losses
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.metrics
====================
.. automodule:: fastNLP.core.metrics
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.optimizer
======================
.. automodule:: fastNLP.core.optimizer
:members:
:undoc-members:
:show-inheritance:

@@ -2,15 +2,15 @@ fastNLP.core
 ============
 .. automodule:: fastNLP.core
 :members:
 :undoc-members:
 :show-inheritance:
 子模块
 ----------
 .. toctree::
-:titlesonly:
+:maxdepth: 1
 fastNLP.core.batch
 fastNLP.core.callback
@@ -26,4 +26,3 @@ fastNLP.core
 fastNLP.core.trainer
 fastNLP.core.utils
 fastNLP.core.vocabulary

@@ -2,6 +2,6 @@ fastNLP.core.sampler
====================
.. automodule:: fastNLP.core.sampler
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.tester
===================
.. automodule:: fastNLP.core.tester
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.trainer
====================
.. automodule:: fastNLP.core.trainer
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.utils
==================
.. automodule:: fastNLP.core.utils
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.core.vocabulary
=======================
.. automodule:: fastNLP.core.vocabulary
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,7 @@
fastNLP.embeddings.bert\_embedding
==================================
.. automodule:: fastNLP.embeddings.bert_embedding
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,7 @@
fastNLP.embeddings.char\_embedding
==================================
.. automodule:: fastNLP.embeddings.char_embedding
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,7 @@
fastNLP.embeddings.elmo\_embedding
==================================
.. automodule:: fastNLP.embeddings.elmo_embedding
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,7 @@
fastNLP.embeddings.embedding
============================
.. automodule:: fastNLP.embeddings.embedding
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,21 @@
fastNLP.embeddings
==================
.. automodule:: fastNLP.embeddings
:members:
:undoc-members:
:show-inheritance:
子模块
----------
.. toctree::
:maxdepth: 1
fastNLP.embeddings.bert_embedding
fastNLP.embeddings.char_embedding
fastNLP.embeddings.elmo_embedding
fastNLP.embeddings.embedding
fastNLP.embeddings.stack_embedding
fastNLP.embeddings.static_embedding
fastNLP.embeddings.utils


@ -0,0 +1,7 @@
fastNLP.embeddings.stack\_embedding
===================================
.. automodule:: fastNLP.embeddings.stack_embedding
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,7 @@
fastNLP.embeddings.static\_embedding
====================================
.. automodule:: fastNLP.embeddings.static_embedding
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,7 @@
fastNLP.embeddings.utils
========================
.. automodule:: fastNLP.embeddings.utils
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.io.base\_loader
=======================
.. automodule:: fastNLP.io.base_loader
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,7 @@
fastNLP.io.data\_loader
==========================
.. automodule:: fastNLP.io.data_loader
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.io.dataset\_loader
==========================
.. automodule:: fastNLP.io.dataset_loader
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.io.embed\_loader
========================
.. automodule:: fastNLP.io.embed_loader
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.io.model\_io
====================
.. automodule:: fastNLP.io.model_io
:members:
:undoc-members:
:show-inheritance:

@@ -2,18 +2,18 @@ fastNLP.io
 ==========
 .. automodule:: fastNLP.io
 :members:
 :undoc-members:
 :show-inheritance:
 子模块
 ----------
 .. toctree::
-:titlesonly:
+:maxdepth: 1
 fastNLP.io.base_loader
-fastNLP.io.dataset_loader
 fastNLP.io.embed_loader
+fastNLP.io.dataset_loader
+fastNLP.io.data_loader
 fastNLP.io.model_io

@@ -2,6 +2,6 @@ fastNLP.models.biaffine\_parser
===============================
.. automodule:: fastNLP.models.biaffine_parser
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.models.cnn\_text\_classification
========================================
.. automodule:: fastNLP.models.cnn_text_classification
:members:
:undoc-members:
:show-inheritance:

@@ -2,19 +2,18 @@ fastNLP.models
 ==============
 .. automodule:: fastNLP.models
 :members:
 :undoc-members:
 :show-inheritance:
 子模块
 ----------
 .. toctree::
-:titlesonly:
+:maxdepth: 1
 fastNLP.models.biaffine_parser
 fastNLP.models.cnn_text_classification
 fastNLP.models.sequence_labeling
 fastNLP.models.snli
 fastNLP.models.star_transformer

@@ -2,6 +2,6 @@ fastNLP.models.sequence\_labeling
=================================
.. automodule:: fastNLP.models.sequence_labeling
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.models.snli
===================
.. automodule:: fastNLP.models.snli
:members:
:undoc-members:
:show-inheritance:

@@ -2,6 +2,6 @@ fastNLP.models.star\_transformer
================================
.. automodule:: fastNLP.models.star_transformer
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.aggregator.attention
====================================
.. automodule:: fastNLP.modules.aggregator.attention
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.aggregator.pooling
==================================
.. automodule:: fastNLP.modules.aggregator.pooling
:members:
:undoc-members:
:show-inheritance:


@ -1,17 +0,0 @@
fastNLP.modules.aggregator
==========================
.. automodule:: fastNLP.modules.aggregator
:members:
:undoc-members:
:show-inheritance:
子模块
----------
.. toctree::
:titlesonly:
fastNLP.modules.aggregator.attention
fastNLP.modules.aggregator.pooling


@ -1,7 +0,0 @@
fastNLP.modules.decoder.CRF
===========================
.. automodule:: fastNLP.modules.decoder.crf
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.decoder.MLP
===========================
.. automodule:: fastNLP.modules.decoder.mlp
:members:
:undoc-members:
:show-inheritance:

@@ -2,17 +2,7 @@ fastNLP.modules.decoder
 =======================
 .. automodule:: fastNLP.modules.decoder
 :members:
 :undoc-members:
 :show-inheritance:
-子模块
-----------
-.. toctree::
-:titlesonly:
-fastNLP.modules.decoder.crf
-fastNLP.modules.decoder.mlp
-fastNLP.modules.decoder.utils


@ -1,7 +0,0 @@
fastNLP.modules.decoder.utils
=============================
.. automodule:: fastNLP.modules.decoder.utils
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.encoder.bert
============================
.. automodule:: fastNLP.modules.encoder.bert
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.encoder.char\_encoder
=====================================
.. automodule:: fastNLP.modules.encoder.char_encoder
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.encoder.conv\_maxpool
=====================================
.. automodule:: fastNLP.modules.encoder.conv_maxpool
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.encoder.embedding
=================================
.. automodule:: fastNLP.modules.encoder.embedding
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.encoder.lstm
============================
.. automodule:: fastNLP.modules.encoder.lstm
:members:
:undoc-members:
:show-inheritance:

@@ -2,22 +2,6 @@ fastNLP.modules.encoder
 =======================
 .. automodule:: fastNLP.modules.encoder
 :members:
 :undoc-members:
 :show-inheritance:
-子模块
-----------
-.. toctree::
-:titlesonly:
-fastNLP.modules.encoder.bert
-fastNLP.modules.encoder.char_encoder
-fastNLP.modules.encoder.conv_maxpool
-fastNLP.modules.encoder.embedding
-fastNLP.modules.encoder.lstm
-fastNLP.modules.encoder.star_transformer
-fastNLP.modules.encoder.transformer
-fastNLP.modules.encoder.variational_rnn


@ -1,7 +0,0 @@
fastNLP.modules.encoder.star\_transformer
=========================================
.. automodule:: fastNLP.modules.encoder.star_transformer
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.encoder.transformer
===================================
.. automodule:: fastNLP.modules.encoder.transformer
:members:
:undoc-members:
:show-inheritance:


@ -1,7 +0,0 @@
fastNLP.modules.encoder.variational\_rnn
========================================
.. automodule:: fastNLP.modules.encoder.variational_rnn
:members:
:undoc-members:
:show-inheritance:

@@ -2,16 +2,16 @@ fastNLP.modules
 ===============
 .. automodule:: fastNLP.modules
 :members:
 :undoc-members:
 :show-inheritance:
 子模块
 -----------
 .. toctree::
 :titlesonly:
+:maxdepth: 1
-fastNLP.modules.aggregator
 fastNLP.modules.decoder
 fastNLP.modules.encoder

@@ -2,19 +2,18 @@ API 文档
 ===============
 .. automodule:: fastNLP
 :members:
 :undoc-members:
 :show-inheritance:
 内部模块
 -----------
 .. toctree::
-:titlesonly:
-:maxdepth: 3
+:maxdepth: 1
-fastNLP.core
-fastNLP.io
-fastNLP.modules
-fastNLP.models
+fastNLP.core
+fastNLP.embeddings
+fastNLP.io
+fastNLP.models
+fastNLP.modules

Two binary image files changed (contents not shown): one grew from 72 KiB to 315 KiB, the other shrank from 328 KiB to 244 KiB.

@@ -1,61 +1,28 @@
 fastNLP 中文文档
 =====================
-fastNLP 是一款轻量级的 NLP 处理套件。你既可以使用它快速地完成一个命名实体识别NER、中文分词或文本分类任务
-也可以使用他构建许多复杂的网络模型,进行科研。它具有如下的特性:
+`fastNLP <https://github.com/fastnlp/fastNLP/>`_ 是一款轻量级的 NLP 处理套件。你既可以使用它快速地完成一个序列标注
+NER、POS-Tagging等、中文分词、文本分类、Matching、指代消解、摘要等任务
+(详见 `reproduction <https://github.com/fastnlp/fastNLP/tree/master/reproduction>`_
+也可以使用它构建许多复杂的网络模型,进行科研。它具有如下的特性:
-- 统一的Tabular式数据容器让数据预处理过程简洁明了。内置多种数据集的DataSet Loader省去预处理代码。
-- 各种方便的NLP工具例如预处理embedding加载; 中间数据cache等;
-- 详尽的中文文档以供查阅;
-- 提供诸多高级模块例如Variational LSTM, Transformer, CRF等;
-- 封装CNNTextBiaffine等模型可供直接使用;
-- 便捷且具有扩展性的训练器; 提供多种内置callback函数,方便实验记录、异常捕获等。
+- 统一的Tabular式数据容器让数据预处理过程简洁明了。内置多种数据集的 :mod:`~fastNLP.io.data_loader` ,省去预处理代码;
+- 多种训练、测试组件,例如训练器 :class:`~fastNLP.Trainer` ;测试器 :class:`~fastNLP.Tester` ;以及各种评测 :mod:`~fastNLP.core.metrics`等;
+- 各种方便的NLP工具例如预处理 :mod:`embedding<fastNLP.embeddings>` 加载包括ELMo和BERT; 中间数据存储 :func:`cache <fastNLP.cache_results>` 等;
+- 提供诸多高级模块 :mod:`~fastNLP.modules`,例如 :class:`~fastNLP.modules.VarLSTM` , :class:`Transformer<fastNLP.modules.TransformerEncoder>` , :class:`CRF<fastNLP.modules.ConditionalRandomField>` 等;
+- 在序列标注、中文分词、文本分类、Matching、指代消解、摘要等任务上封装了各种 :mod:`~fastNLP.models` 可供直接使用;
+- 训练器便捷且具有扩展性,提供多种内置 :mod:`~fastNLP.core.callback` 函数,方便实验记录、异常捕获等。
-内置组件
-------------
-大部分用于的 NLP 任务神经网络都可以看做由编码encoder、聚合aggregator、解码decoder三种模块组成。
-.. image:: figures/text_classification.png
-fastNLP 在 :mod:`~fastNLP.modules` 模块中内置了三种模块的诸多组件,可以帮助用户快速搭建自己所需的网络。
-三种模块的功能和常见组件如下:
-+-----------------------+-----------------------+-----------------------+
-| module type | functionality | example |
-+=======================+=======================+=======================+
-| encoder | 将输入编码为具有具 | embedding, RNN, CNN, |
-| | 有表示能力的向量 | transformer |
-+-----------------------+-----------------------+-----------------------+
-| aggregator | 从多个向量中聚合信息 | self-attention, |
-| | | max-pooling |
-+-----------------------+-----------------------+-----------------------+
-| decoder | 将具有某种表示意义的 | MLP, CRF |
-| | 向量解码为需要的输出 | |
-| | 形式 | |
-+-----------------------+-----------------------+-----------------------+
-内置模型
-----------------
-fastNLP 在 :mod:`~fastNLP.models` 模块中内置了如 :class:`~fastNLP.models.CNNText`
-:class:`~fastNLP.models.SeqLabeling` 等完整的模型,以供用户直接使用。
-.. todo::
-    这些模型的介绍如下表所示:(模型名称 + 介绍 + 任务上的结果)
 用户手册
 ----------------
 .. toctree::
-:maxdepth: 1
+:maxdepth: 2
-安装指南 <user/installation>
-快速入门 <user/quickstart>
-详细指南 <user/tutorial_one>
-科研指南 <user/with_fitlog>
+安装指南 </user/installation>
+快速入门 </user/quickstart>
+详细教程 </user/tutorials>
 API 文档
 -------------
@@ -68,11 +35,11 @@ API 文档
 fastNLP
-fitlog
-------
+fitlog文档
+----------
-用户可以 `点此 <https://fitlog.readthedocs.io/zh/latest/>`_ 查看fitlog的文档。
-fitlog 是由我们团队开发,用于帮助用户记录日志并管理代码的工具
+可以 `点此 <https://fitlog.readthedocs.io/zh/latest/>`_ 查看fitlog的文档。
+fitlog 是由我们团队开发的日志记录+代码管理的工具。
 索引与搜索
 ==================

@@ -1,6 +1,6 @@
-=================
-科研向导
-=================
+============================================
+使用fitlog 辅助 fastNLP 进行科研
+============================================
 本文介绍结合使用 fastNLP 和 fitlog 进行科研的方法。


@ -0,0 +1,156 @@
==============================
使用DataSet预处理文本
==============================
:class:`~fastNLP.DataSet` 是fastNLP中用于承载数据的容器。可以将DataSet看做是一个表格
每一行是一个sample (在fastNLP中被称为 :mod:`~fastNLP.core.instance` )
每一列是一个feature (在fastNLP中称为 :mod:`~fastNLP.core.field` )。
.. csv-table::
:header: "sentence", "words", "seq_len"
"This is the first instance .", "[This, is, the, first, instance, .]", 6
"Second instance .", "[Second, instance, .]", 3
"Third instance .", "[Third, instance, .]", 3
"...", "[...]", "..."
上面是一个样例数据中 DataSet 的存储结构。其中它的每一行是一个 :class:`~fastNLP.Instance` 对象; 每一列是一个 :class:`~fastNLP.FieldArray` 对象。
-----------------------------
数据集构建和删除
-----------------------------
我们使用传入字典的方式构建一个数据集,这是 :class:`~fastNLP.DataSet` 初始化的最基础的方式
.. code-block:: python
from fastNLP import DataSet
data = {'sentence':["This is the first instance .", "Second instance .", "Third instance ."],
'words': [['this', 'is', 'the', 'first', 'instance', '.'], ['Second', 'instance', '.'], ['Third', 'instance', '.']],
'seq_len': [6, 3, 3]}
dataset = DataSet(data)
# 传入的dict的每个key的value应该为具有相同长度的list
我们还可以使用 :func:`~fastNLP.DataSet.append` 方法向数据集内增加数据
.. code-block:: python
from fastNLP import DataSet
from fastNLP import Instance
dataset = DataSet()
instance = Instance(sentence="This is the first instance",
words=['this', 'is', 'the', 'first', 'instance', '.'],
seq_len=6)
dataset.append(instance)
# 可以继续append更多内容但是append的instance应该和前面的instance拥有完全相同的field
另外,我们还可以用 :class:`~fastNLP.Instance` 数组的方式构建数据集
.. code-block:: python
from fastNLP import DataSet
from fastNLP import Instance
dataset = DataSet([
Instance(sentence="This is the first instance",
words=['this', 'is', 'the', 'first', 'instance', '.'],
seq_len=6),
Instance(sentence="Second instance .",
words=['Second', 'instance', '.'],
seq_len=3)
])
在初步构建完数据集之后,我们可以通过 `for` 循环遍历 :class:`~fastNLP.DataSet` 中的内容。
.. code-block:: python
for instance in dataset:
# do something
FastNLP 同样提供了多种删除数据的方法,例如 :func:`~fastNLP.DataSet.drop` 、 :func:`~fastNLP.DataSet.delete_instance`:func:`~fastNLP.DataSet.delete_field`
.. code-block:: python
from fastNLP import DataSet
dataset = DataSet({'a': list(range(-5, 5))})
# 返回满足条件的instance,并放入DataSet中
dropped_dataset = dataset.drop(lambda ins:ins['a']<0, inplace=False)
# 在dataset中删除满足条件的instance
dataset.drop(lambda ins:ins['a']<0) # dataset的instance数量减少
# 删除第3个instance
dataset.delete_instance(2)
# 删除名为'a'的field
dataset.delete_field('a')
-----------------------------
简单的数据预处理
-----------------------------
因为 fastNLP 中的数据是按列存储的,所以大部分的数据预处理操作是以列( :mod:`~fastNLP.core.field` )为操作对象的。
首先,我们可以检查特定名称的 :mod:`~fastNLP.core.field` 是否存在,并对其进行改名。
.. code-block:: python
# 检查是否存在名为'a'的field
dataset.has_field('a') # 或 ('a' in dataset)
# 将名为'a'的field改名为'b'
dataset.rename_field('a', 'b')
# DataSet的长度
len(dataset)
其次,我们可以使用 :func:`~fastNLP.DataSet.apply`:func:`~fastNLP.DataSet.apply_field` 进行数据预处理操作。
这两个方法通过传入一个对单一 :mod:`~fastNLP.core.instance` 操作的函数,
自动地帮助你对一个 :mod:`~fastNLP.core.field` 中的每个 :mod:`~fastNLP.core.instance` 调用这个函数,完成整体的操作。
这个传入的函数可以是 lambda 匿名函数,也可以是完整定义的函数。同时,你还可以用 ``new_field_name`` 参数指定数据处理后存储的 :mod:`~fastNLP.core.field` 的名称。
.. code-block:: python
from fastNLP import DataSet
data = {'sentence':["This is the first instance .", "Second instance .", "Third instance ."]}
dataset = DataSet(data)
# 将句子分成单词形式, 详见DataSet.apply()方法
dataset.apply(lambda ins: ins['sentence'].split(), new_field_name='words')
# 或使用DataSet.apply_field()
dataset.apply_field(lambda sent:sent.split(), field_name='sentence', new_field_name='words')
# 除了匿名函数,也可以定义函数传递进去
def get_words(instance):
sentence = instance['sentence']
words = sentence.split()
return words
dataset.apply(get_words, new_field_name='words')
除了手动处理数据集之外,你还可以使用 fastNLP 提供的各种 :class:`~fastNLP.io.base_loader.DataSetLoader` 来进行数据处理。
详细请参考这篇教程 :doc:`使用DataSetLoader加载数据集 </tutorials/tutorial_2_load_dataset>`
-----------------------------
DataSet与pad
-----------------------------
在fastNLP里pad是与一个 :mod:`~fastNLP.core.field` 绑定的。即不同的 :mod:`~fastNLP.core.field` 可以使用不同的pad方式比如在英文任务中word需要的pad和
character的pad方式往往是不同的。fastNLP是通过一个叫做 :class:`~fastNLP.Padder` 的子类来完成的。
默认情况下所有field使用 :class:`~fastNLP.AutoPadder`
。可以通过使用以下方式设置Padder(如果将padder设置为None则该field不会进行pad操作)。
大多数情况下直接使用 :class:`~fastNLP.AutoPadder` 就可以了。
如果 :class:`~fastNLP.AutoPadder`:class:`~fastNLP.EngChar2DPadder` 无法满足需求,
也可以自己写一个 :class:`~fastNLP.Padder`
.. code-block:: python
from fastNLP import DataSet
from fastNLP import EngChar2DPadder
import random
dataset = DataSet()
max_chars, max_words, sent_num = 5, 10, 20
contents = [[
[random.randint(1, 27) for _ in range(random.randint(1, max_chars))]
for _ in range(random.randint(1, max_words))
] for _ in range(sent_num)]
# 初始化时传入
dataset.add_field('chars', contents, padder=EngChar2DPadder())
# 直接设置
dataset.set_padder('chars', EngChar2DPadder())
# 也可以设置pad的value
dataset.set_pad_val('chars', -1)
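
如果内置的 :class:`~fastNLP.AutoPadder`:class:`~fastNLP.EngChar2DPadder` 都不能满足需求,可以继承 :class:`~fastNLP.Padder` 自己实现。下面是一个极简的示意(其中 ``__call__`` 的参数名是按内置Padder的用法假设的具体请以所用版本的源码为准

.. code-block:: python

    import numpy as np
    from fastNLP import Padder

    class FixedLenPadder(Padder):
        """示意用的自定义Padder把每个instance的内容截断或补齐到固定长度5。"""
        def __call__(self, contents, field_name, field_ele_dtype):
            max_len = 5
            # 假设 self.pad_val 由基类的构造函数保存
            padded = np.full((len(contents), max_len), self.pad_val, dtype=int)
            for i, content in enumerate(contents):
                content = list(content)[:max_len]
                padded[i, :len(content)] = content
            return padded

    # 用法与内置Padder相同例如
    # dataset.set_padder('some_field', FixedLenPadder(pad_val=0))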


@ -0,0 +1,224 @@
=================================
使用DataSetLoader加载数据集
=================================
这一部分是一个关于如何加载数据集的教程
教程目录:
- `Part I: 数据集容器`_
- `Part II: 数据集的使用方式`_
- `Part III: 不同数据类型的DataSetLoader`_
- `Part IV: DataSetLoader举例`_
- `Part V: fastNLP封装好的数据集加载器`_
----------------------------
Part I: 数据集容器
----------------------------
在fastNLP中我们使用 :class:`~fastNLP.io.base_loader.DataBundle` 来存储数据集信息。
:class:`~fastNLP.io.base_loader.DataBundle` 类包含了两个重要内容: `datasets``vocabs`
`datasets` 是一个 `key` 为数据集名称(如 `train` `dev` ,和 `test` 等), `value`:class:`~fastNLP.DataSet` 的字典。
`vocabs` 是一个 `key` 为词表名称(如 :attr:`fastNLP.Const.INPUT` 表示输入文本的词表名称, :attr:`fastNLP.Const.TARGET` 表示目标
的真实标签词表的名称,等等), `value` 为词表内容( :class:`~fastNLP.Vocabulary` )的字典。
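
拿到一个 :class:`~fastNLP.io.base_loader.DataBundle` 之后,通常可以像下面这样取出其中的数据集和词表(变量名仅作示意;后文 Part IV 中 SNLILoader 的例子返回的就是这样的对象):

.. code-block:: python

    # data_bundle 为某个 DataSetLoader 的 process 函数返回的 DataBundle
    train_data = data_bundle.datasets['train']   # 一个 DataSet
    dev_data = data_bundle.datasets['dev']
    vocab = data_bundle.vocabs['words']          # 一个 Vocabulary
    print(len(train_data), len(vocab))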
----------------------------
Part II: 数据集的使用方式
----------------------------
在fastNLP中我们采用 :class:`~fastNLP.io.base_loader.DataSetLoader` 来作为加载数据集的基类。
:class:`~fastNLP.io.base_loader.DataSetLoader` 定义了各种DataSetLoader所需的API接口开发者应该继承它实现各种的DataSetLoader。
在各种数据集的DataSetLoader当中至少应该编写如下内容:
- _load 函数:从一个数据文件中读取数据到一个 :class:`~fastNLP.DataSet`
- load 函数(可以使用基类的方法):从一个或多个数据文件中读取数据到一个或多个 :class:`~fastNLP.DataSet`
- process 函数:从一个或多个数据文件中读取数据,并处理成可以训练的 :class:`~fastNLP.io.DataBundle`
**\*process函数中可以调用load函数或_load函数**
DataSetLoader的_load或者load函数返回的 :class:`~fastNLP.DataSet` 当中内容为数据集的文本信息process函数返回的
:class:`~fastNLP.io.DataBundle` 当中, `datasets` 的内容为已经index好的、可以直接被 :class:`~fastNLP.Trainer`
接受的内容。
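
下面给出一个自定义DataSetLoader的极简示意这里只实现了 _load 函数load 可复用基类实现);文件格式与字段名均为假设,仅用于说明接口的用法:

.. code-block:: python

    from fastNLP import DataSet, Instance
    from fastNLP.io.base_loader import DataSetLoader

    class MyLoader(DataSetLoader):
        """假设数据文件每行为 "句子<TAB>标签" 的格式(仅作示意)"""

        def _load(self, path):
            ds = DataSet()
            with open(path, 'r', encoding='utf-8') as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    sent, target = line.split('\t')
                    ds.append(Instance(raw_words=sent.split(), target=target))
            return ds

    # loader = MyLoader()
    # dataset = loader._load('path/to/train.txt')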
--------------------------------------------------------
Part III: 不同数据类型的DataSetLoader
--------------------------------------------------------
:class:`~fastNLP.io.dataset_loader.CSVLoader`
读取CSV类型的数据集文件。例子如下
.. code-block:: python
data_set_loader = CSVLoader(
headers=('words', 'target'), sep='\t'
)
# 表示将CSV文件中每一行的第一项填入'words' field第二项填入'target' field。
# 其中每两项之间由'\t'分割开来
data_set = data_set_loader._load('path/to/your/file')
数据集内容样例如下 ::
But it does not leave you with much . 1
You could hate it for the same reason . 1
The performances are an absolute joy . 4
:class:`~fastNLP.io.dataset_loader.JsonLoader`
读取Json类型的数据集文件数据必须按行存储每行是一个包含各类属性的Json对象。例子如下
.. code-block:: python
data_set_loader = JsonLoader(
fields={'sentence1': 'words1', 'sentence2': 'words2', 'gold_label': 'target'}
)
# 表示将Json对象中'sentence1'、'sentence2'和'gold_label'对应的值赋给'words1'、'words2'、'target'这三个fields
data_set = data_set_loader._load('path/to/your/file')
数据集内容样例如下 ::
{"annotator_labels": ["neutral"], "captionID": "3416050480.jpg#4", "gold_label": "neutral", "pairID": "3416050480.jpg#4r1n", "sentence1": "A person on a horse jumps over a broken down airplane.", "sentence1_binary_parse": "( ( ( A person ) ( on ( a horse ) ) ) ( ( jumps ( over ( a ( broken ( down airplane ) ) ) ) ) . ) )", "sentence1_parse": "(ROOT (S (NP (NP (DT A) (NN person)) (PP (IN on) (NP (DT a) (NN horse)))) (VP (VBZ jumps) (PP (IN over) (NP (DT a) (JJ broken) (JJ down) (NN airplane)))) (. .)))", "sentence2": "A person is training his horse for a competition.", "sentence2_binary_parse": "( ( A person ) ( ( is ( ( training ( his horse ) ) ( for ( a competition ) ) ) ) . ) )", "sentence2_parse": "(ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) (VP (VBG training) (NP (PRP$ his) (NN horse)) (PP (IN for) (NP (DT a) (NN competition))))) (. .)))"}
{"annotator_labels": ["contradiction"], "captionID": "3416050480.jpg#4", "gold_label": "contradiction", "pairID": "3416050480.jpg#4r1c", "sentence1": "A person on a horse jumps over a broken down airplane.", "sentence1_binary_parse": "( ( ( A person ) ( on ( a horse ) ) ) ( ( jumps ( over ( a ( broken ( down airplane ) ) ) ) ) . ) )", "sentence1_parse": "(ROOT (S (NP (NP (DT A) (NN person)) (PP (IN on) (NP (DT a) (NN horse)))) (VP (VBZ jumps) (PP (IN over) (NP (DT a) (JJ broken) (JJ down) (NN airplane)))) (. .)))", "sentence2": "A person is at a diner, ordering an omelette.", "sentence2_binary_parse": "( ( A person ) ( ( ( ( is ( at ( a diner ) ) ) , ) ( ordering ( an omelette ) ) ) . ) )", "sentence2_parse": "(ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) (PP (IN at) (NP (DT a) (NN diner))) (, ,) (S (VP (VBG ordering) (NP (DT an) (NN omelette))))) (. .)))"}
{"annotator_labels": ["entailment"], "captionID": "3416050480.jpg#4", "gold_label": "entailment", "pairID": "3416050480.jpg#4r1e", "sentence1": "A person on a horse jumps over a broken down airplane.", "sentence1_binary_parse": "( ( ( A person ) ( on ( a horse ) ) ) ( ( jumps ( over ( a ( broken ( down airplane ) ) ) ) ) . ) )", "sentence1_parse": "(ROOT (S (NP (NP (DT A) (NN person)) (PP (IN on) (NP (DT a) (NN horse)))) (VP (VBZ jumps) (PP (IN over) (NP (DT a) (JJ broken) (JJ down) (NN airplane)))) (. .)))", "sentence2": "A person is outdoors, on a horse.", "sentence2_binary_parse": "( ( A person ) ( ( ( ( is outdoors ) , ) ( on ( a horse ) ) ) . ) )", "sentence2_parse": "(ROOT (S (NP (DT A) (NN person)) (VP (VBZ is) (ADVP (RB outdoors)) (, ,) (PP (IN on) (NP (DT a) (NN horse)))) (. .)))"}
------------------------------------------
Part IV: DataSetLoader举例
------------------------------------------
以Matching任务为例子
:class:`~fastNLP.io.data_loader.MatchingLoader`
我们在fastNLP当中封装了一个Matching任务数据集的数据加载类 :class:`~fastNLP.io.data_loader.MatchingLoader` .
在MatchingLoader类当中我们封装了一个对数据集中的文本内容进行进一步的预处理的函数
:meth:`~fastNLP.io.data_loader.MatchingLoader.process`
这个函数具有各种预处理option
- 是否将文本转成全小写
- 是否需要序列长度信息,需要什么类型的序列长度信息
- 是否需要用BertTokenizer来获取序列的WordPiece信息
- 等等
具体内容参见 :meth:`fastNLP.io.MatchingLoader.process`
:class:`~fastNLP.io.data_loader.SNLILoader`
一个关于SNLI数据集的DataSetLoader。SNLI数据集来自
`SNLI Data Set <https://nlp.stanford.edu/projects/snli/snli_1.0.zip>`_ .
:class:`~fastNLP.io.data_loader.SNLILoader`:meth:`~fastNLP.io.data_loader.SNLILoader._load`
函数中,我们用以下代码将数据集内容从文本文件读入内存:
.. code-block:: python
data = SNLILoader().process(
paths='path/to/snli/data', to_lower=False, seq_len_type='seq_len',
get_index=True, concat=False,
)
print(data)
输出的内容是::
In total 3 datasets:
train has 549367 instances.
dev has 9842 instances.
test has 9824 instances.
In total 2 vocabs:
words has 43154 entries.
target has 3 entries.
这里的data是一个 :class:`~fastNLP.io.base_loader.DataBundle` ,取 ``datasets`` 字典里的内容即可直接传入
:class:`~fastNLP.Trainer` 或者 :class:`~fastNLP.Tester` 进行训练或者测试。
:class:`~fastNLP.io.data_loader.IMDBLoader`
以IMDB数据集为例:class:`~fastNLP.io.data_loader.IMDBLoader`:meth:`~fastNLP.io.data_loader.IMDBLoader._load`
函数中,我们用以下代码将数据集内容从文本文件读入内存:
.. code-block:: python
data = IMDBLoader().process(
paths={'train': 'path/to/train/file', 'test': 'path/to/test/file'}
)
print(data)
输出的内容是::
In total 3 datasets:
train has 22500 instances.
test has 25000 instances.
dev has 2500 instances.
In total 2 vocabs:
words has 82846 entries.
target has 2 entries.
这里将原来的train集按9:1的比例分成了训练集和验证集。
------------------------------------------
Part V: fastNLP封装好的数据集加载器
------------------------------------------
fastNLP封装好的数据集加载器可以适用于多种类型的任务
- `文本分类任务`_
- `序列标注任务`_
- `Matching任务`_
文本分类任务
-------------------
========================== ==================================================================
数据集名称 数据集加载器
-------------------------- ------------------------------------------------------------------
IMDb :class:`~fastNLP.io.data_loader.IMDBLoader`
-------------------------- ------------------------------------------------------------------
SST :class:`~fastNLP.io.data_loader.SSTLoader`
-------------------------- ------------------------------------------------------------------
SST-2 :class:`~fastNLP.io.data_loader.SST2Loader`
-------------------------- ------------------------------------------------------------------
Yelp Polarity :class:`~fastNLP.io.data_loader.YelpLoader`
-------------------------- ------------------------------------------------------------------
Yelp Full :class:`~fastNLP.io.data_loader.YelpLoader`
-------------------------- ------------------------------------------------------------------
MTL16 :class:`~fastNLP.io.data_loader.MTL16Loader`
========================== ==================================================================
序列标注任务
-------------------
========================== ==================================================================
数据集名称 数据集加载器
-------------------------- ------------------------------------------------------------------
Conll :class:`~fastNLP.io.data_loader.ConllLoader`
-------------------------- ------------------------------------------------------------------
Conll2003 :class:`~fastNLP.io.data_loader.Conll2003Loader`
-------------------------- ------------------------------------------------------------------
人民日报数据集 :class:`~fastNLP.io.data_loader.PeopleDailyCorpusLoader`
========================== ==================================================================
Matching任务
-------------------
========================== ==================================================================
数据集名称 数据集加载器
-------------------------- ------------------------------------------------------------------
SNLI :class:`~fastNLP.io.data_loader.SNLILoader`
-------------------------- ------------------------------------------------------------------
MultiNLI :class:`~fastNLP.io.data_loader.MNLILoader`
-------------------------- ------------------------------------------------------------------
QNLI :class:`~fastNLP.io.data_loader.QNLILoader`
-------------------------- ------------------------------------------------------------------
RTE :class:`~fastNLP.io.data_loader.RTELoader`
-------------------------- ------------------------------------------------------------------
Quora Pair Dataset :class:`~fastNLP.io.data_loader.QuoraLoader`
========================== ==================================================================


@ -0,0 +1,214 @@
=========================================
使用Embedding模块将文本转成向量
=========================================
这一部分是一个关于在fastNLP当中使用embedding的教程。
教程目录:
- `Part I: embedding介绍`_
- `Part II: 使用随机初始化的embedding`_
- `Part III: 使用预训练的静态embedding`_
- `Part IV: 使用预训练的Contextual Embedding(ELMo & BERT)`_
- `Part V: 使用character-level的embedding`_
- `Part VI: 叠加使用多个embedding`_
---------------------------------------
Part I: embedding介绍
---------------------------------------
与torch.nn.Embedding类似fastNLP的embedding接受的输入是一个被index好的序列输出的内容是这个序列的embedding结果。
fastNLP的embedding包括了预训练embedding和随机初始化embedding。
---------------------------------------
Part II: 使用随机初始化的embedding
---------------------------------------
使用随机初始化的embedding参见 :class:`~fastNLP.modules.encoder.embedding.Embedding`
可以传入词表大小和embedding维度
.. code-block:: python
embed = Embedding(10000, 50)
也可以传入一个初始化的参数矩阵:
.. code-block:: python
embed = Embedding(init_embed)
其中的init_embed可以是torch.FloatTensor、torch.nn.Embedding或者numpy.ndarray。
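
下面是一个可以直接运行的小例子(导入路径按本文引用的 :class:`~fastNLP.modules.encoder.embedding.Embedding` 假设输出的形状为(batch_size, seq_len, embed_dim)

.. code-block:: python

    import torch
    from fastNLP.modules.encoder.embedding import Embedding

    embed = Embedding(10000, 50)            # 词表大小为10000embedding维度为50
    words = torch.LongTensor([[3, 7, 25]])  # 一个batch内含一条长度为3的index序列
    print(embed(words).size())              # 预期输出 torch.Size([1, 3, 50])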
---------------------------------------
Part III: 使用预训练的静态embedding
---------------------------------------
在使用预训练的embedding之前需要根据数据集的内容构建一个词表 :class:`~fastNLP.core.vocabulary.Vocabulary` ,在
预训练embedding类初始化的时候需要将这个词表作为参数传入。
在fastNLP中我们提供了 :class:`~fastNLP.modules.encoder.embedding.StaticEmbedding` 这一个类。
通过 :class:`~fastNLP.modules.encoder.embedding.StaticEmbedding` 可以加载预训练好的静态
Embedding例子如下
.. code-block:: python
embed = StaticEmbedding(vocab, model_dir_or_name='en-glove-6b-50', requires_grad=True)
vocab为根据数据集构建的词表model_dir_or_name可以是一个路径也可以是embedding模型的名称
1 如果传入的是路径那么fastNLP将会根据该路径来读取预训练的权重文件并将embedding加载进来(glove
和word2vec类型的权重文件都支持)
2 如果传入的是模型名称那么fastNLP将会根据名称查找embedding模型如果在cache目录下找到模型则会
自动加载;如果找不到则会自动下载。可以通过环境变量 ``FASTNLP_CACHE_DIR`` 来自定义cache目录如::
$ FASTNLP_CACHE_DIR=~/fastnlp_cache_dir python your_python_file.py
这个命令表示fastNLP将会在 `~/fastnlp_cache_dir` 这个目录下寻找模型,找不到则会自动将模型下载到这个目录
目前支持的静态embedding模型有
========================== ================================
模型名称 模型
-------------------------- --------------------------------
en glove.840B.300d
-------------------------- --------------------------------
en-glove-840d-300 glove.840B.300d
-------------------------- --------------------------------
en-glove-6b-50 glove.6B.50d
-------------------------- --------------------------------
en-word2vec-300 谷歌word2vec 300维
-------------------------- --------------------------------
en-fasttext 英文fasttext 300维
-------------------------- --------------------------------
cn 腾讯中文词向量 200维
-------------------------- --------------------------------
cn-fasttext 中文fasttext 300维
========================== ================================
-----------------------------------------------------------
Part IV: 使用预训练的Contextual Embedding(ELMo & BERT)
-----------------------------------------------------------
在fastNLP中我们提供了ELMo和BERT的embedding :class:`~fastNLP.modules.encoder.embedding.ElmoEmbedding`
:class:`~fastNLP.modules.encoder.embedding.BertEmbedding`
与静态embedding类似ELMo的使用方法如下
.. code-block:: python
embed = ElmoEmbedding(vocab, model_dir_or_name='small', requires_grad=False)
目前支持的ElmoEmbedding模型有
========================== ================================
模型名称 模型
-------------------------- --------------------------------
small allennlp ELMo的small
-------------------------- --------------------------------
medium allennlp ELMo的medium
-------------------------- --------------------------------
original allennlp ELMo的original
-------------------------- --------------------------------
5.5b-original allennlp ELMo的5.5B original
========================== ================================
BERT-embedding的使用方法如下
.. code-block:: python
embed = BertEmbedding(
vocab, model_dir_or_name='en-base-cased', requires_grad=False, layers='4,-2,-1'
)
其中layers变量表示需要取哪几层的encode结果。
目前支持的BertEmbedding模型有
========================== ====================================
模型名称 模型
-------------------------- ------------------------------------
en bert-base-cased
-------------------------- ------------------------------------
en-base-uncased bert-base-uncased
-------------------------- ------------------------------------
en-base-cased bert-base-cased
-------------------------- ------------------------------------
en-large-uncased bert-large-uncased
-------------------------- ------------------------------------
en-large-cased bert-large-cased
-------------------------- ------------------------------------
-------------------------- ------------------------------------
en-large-cased-wwm bert-large-cased-whole-word-mask
-------------------------- ------------------------------------
en-large-uncased-wwm bert-large-uncased-whole-word-mask
-------------------------- ------------------------------------
en-base-cased-mrpc bert-base-cased-finetuned-mrpc
-------------------------- ------------------------------------
-------------------------- ------------------------------------
multilingual bert-base-multilingual-cased
-------------------------- ------------------------------------
multilingual-base-uncased bert-base-multilingual-uncased
-------------------------- ------------------------------------
multilingual-base-cased bert-base-multilingual-cased
========================== ====================================
-----------------------------------------------------
Part V: 使用character-level的embedding
-----------------------------------------------------
除了预训练的embedding以外fastNLP还提供了CharEmbedding :class:`~fastNLP.modules.encoder.embedding.CNNCharEmbedding`
:class:`~fastNLP.modules.encoder.embedding.LSTMCharEmbedding`
CNNCharEmbedding的使用例子如下
.. code-block:: python
embed = CNNCharEmbedding(vocab, embed_size=100, char_emb_size=50)
这表示这个CNNCharEmbedding当中character的embedding维度大小为50返回的embedding结果维度大小为100。
与CNNCharEmbedding类似LSTMCharEmbedding的使用例子如下
.. code-block:: python
embed = LSTMCharEmbedding(vocab, embed_size=100, char_emb_size=50)
这表示这个LSTMCharEmbedding当中character的embedding维度大小为50返回的embedding结果维度大小为100。
-----------------------------------------------------
Part VI: 叠加使用多个embedding
-----------------------------------------------------
在fastNLP中我们使用 :class:`~fastNLP.modules.encoder.embedding.StackEmbedding` 来叠加多个embedding
例子如下:
.. code-block:: python
embed_1 = StaticEmbedding(vocab, model_dir_or_name='en-glove-6b-50', requires_grad=True)
embed_2 = StaticEmbedding(vocab, model_dir_or_name='en-word2vec-300', requires_grad=True)
stack_embed = StackEmbedding([embed_1, embed_2])
StackEmbedding会把多个embedding的结果拼接起来如上面例子的stack_embed返回的embedding维度为350维。
除此以外还可以把静态embedding跟上下文相关的embedding拼接起来
.. code-block:: python
elmo_embedding = ElmoEmbedding(vocab, model_dir_or_name='medium', layers='0,1,2', requires_grad=False)
glove_embedding = StaticEmbedding(vocab, model_dir_or_name='en-glove-6b-50', requires_grad=True)
stack_embed = StackEmbedding([elmo_embedding, glove_embedding])


@ -0,0 +1,267 @@
==============================================================================
动手实现一个文本分类器I-使用Trainer和Tester快速训练和测试
==============================================================================
我们使用和 :doc:`/user/quickstart` 中一样的任务来进行详细的介绍。给出一段评价性文字预测其情感倾向是积极label=1
消极label=0还是中性label=2使用 :class:`~fastNLP.Trainer`:class:`~fastNLP.Tester` 来进行快速训练和测试。
--------------
数据处理
--------------
数据读入
我们可以使用 fastNLP :mod:`fastNLP.io` 模块中的 :class:`~fastNLP.io.SSTLoader` 轻松地读取SST数据集数据来源https://nlp.stanford.edu/sentiment/trainDevTestTrees_PTB.zip 
这里的 dataset 是 fastNLP 中 :class:`~fastNLP.DataSet` 类的对象。
.. code-block:: python
from fastNLP.io import SSTLoader
loader = SSTLoader()
#这里的all.txt是下载好数据后train.txt、dev.txt、test.txt的组合
dataset = loader.load("./trainDevTestTrees_PTB/trees/all.txt")
print(dataset[0])
输出数据如下::
{'words': ['It', "'s", 'a', 'lovely', 'film', 'with', 'lovely', 'performances', 'by', 'Buy', 'and', 'Accorsi', '.'] type=list,
'target': positive type=str}
除了读取数据外fastNLP 还提供了读取其它文件类型的 Loader 类、读取 Embedding的 Loader 等。详见 :doc:`/fastNLP.io`
数据处理
我们使用 :class:`~fastNLP.DataSet` 类的 :meth:`~fastNLP.DataSet.apply` 方法将 ``target`` :mod:`~fastNLP.core.field` 转化为整数。
.. code-block:: python
def label_to_int(x):
if x['target']=="positive":
return 1
elif x['target']=="negative":
return 0
else:
return 2
# 将label转为整数
dataset.apply(lambda x: label_to_int(x), new_field_name='target')
``words````target`` 已经足够用于 :class:`~fastNLP.models.CNNText` 的训练了,但我们从其文档
:class:`~fastNLP.models.CNNText` 中看到,在 :meth:`~fastNLP.models.CNNText.forward` 的时候,还可以传入可选参数 ``seq_len``
所以,我们再使用 :meth:`~fastNLP.DataSet.apply_field` 方法增加一个名为 ``seq_len``:mod:`~fastNLP.core.field`
.. code-block:: python
# 增加长度信息
dataset.apply_field(lambda x: len(x), field_name='words', new_field_name='seq_len')
观察可知: :meth:`~fastNLP.DataSet.apply_field`:meth:`~fastNLP.DataSet.apply` 类似,
但所传入的 `lambda` 函数是针对一个 :class:`~fastNLP.Instance` 中的一个 :mod:`~fastNLP.core.field` 的;
:meth:`~fastNLP.DataSet.apply` 所传入的 `lambda` 函数是针对整个 :class:`~fastNLP.Instance` 的。
.. note::
`lambda` 函数即匿名函数,是 Python 的重要特性。 ``lambda x: len(x)`` 和下面的这个函数的作用相同::
def func_lambda(x):
return len(x)
你也可以编写复杂的函数做为 :meth:`~fastNLP.DataSet.apply_field`:meth:`~fastNLP.DataSet.apply` 的参数
Vocabulary 的使用
我们再用 :class:`~fastNLP.Vocabulary` 类来统计数据中出现的单词,并使用 :meth:`~fastNLP.Vocabulary.index_dataset`
将单词序列转化为训练可用的数字序列。
.. code-block:: python
from fastNLP import Vocabulary
# 使用Vocabulary类统计单词并将单词序列转化为数字序列
vocab = Vocabulary(min_freq=2).from_dataset(dataset, field_name='words')
vocab.index_dataset(dataset, field_name='words',new_field_name='words')
print(dataset[0])
输出数据如下::
{'words': [27, 9, 6, 913, 16, 18, 913, 124, 31, 5715, 5, 1, 2] type=list,
'target': 1 type=int,
'seq_len': 13 type=int}
---------------------
使用内置模型训练
---------------------
内置模型的输入输出命名
fastNLP内置了一些完整的神经网络模型详见 :doc:`/fastNLP.models` , 我们使用其中的 :class:`~fastNLP.models.CNNText` 模型进行训练。
为了使用内置的 :class:`~fastNLP.models.CNNText`,我们必须修改 :class:`~fastNLP.DataSet`:mod:`~fastNLP.core.field` 的名称。
在这个例子中模型输入 (forward方法的参数) 为 ``words````seq_len`` ; 预测输出为 ``pred`` ;标准答案为 ``target``
具体的命名规范可以参考 :doc:`/fastNLP.core.const`
如果不想查看文档,您也可以使用 :class:`~fastNLP.Const` 类进行命名。下面的代码展示了给 :class:`~fastNLP.DataSet`
:mod:`~fastNLP.core.field` 改名的 :meth:`~fastNLP.DataSet.rename_field` 方法,以及 :class:`~fastNLP.Const` 类的使用方法。
.. code-block:: python
from fastNLP import Const
dataset.rename_field('words', Const.INPUT)
dataset.rename_field('seq_len', Const.INPUT_LEN)
dataset.rename_field('target', Const.TARGET)
print(Const.INPUT)
print(Const.INPUT_LEN)
print(Const.TARGET)
print(Const.OUTPUT)
输出结果为::
words
seq_len
target
pred
在给 :class:`~fastNLP.DataSet`:mod:`~fastNLP.core.field` 改名后,我们还需要设置训练所需的输入和目标,这里使用的是
:meth:`~fastNLP.DataSet.set_input`:meth:`~fastNLP.DataSet.set_target` 两个函数。
.. code-block:: python
#使用dataset的 set_input 和 set_target函数告诉模型dataset中哪些数据是输入哪些数据是标签目标输出
dataset.set_input(Const.INPUT, Const.INPUT_LEN)
dataset.set_target(Const.TARGET)
数据集分割
除了修改 :mod:`~fastNLP.core.field` 之外,我们还可以对 :class:`~fastNLP.DataSet` 进行分割,以供训练、开发和测试使用。
下面这段代码展示了 :meth:`~fastNLP.DataSet.split` 的使用方法
.. code-block:: python
train_dev_data, test_data = dataset.split(0.1)
train_data, dev_data = train_dev_data.split(0.1)
print(len(train_data), len(dev_data), len(test_data))
输出结果为::
9603 1067 1185
评价指标
训练模型需要提供一个评价指标。这里使用准确率做为评价指标。参数的 `命名规则` 跟上面类似。
``pred`` 参数对应的是模型的 forward 方法返回的 dict 中的一个 key 的名字。
``target`` 参数对应的是 :class:`~fastNLP.DataSet` 中作为标签的 :mod:`~fastNLP.core.field` 的名字。
.. code-block:: python
from fastNLP import AccuracyMetric
# metrics=AccuracyMetric() 在本例中与下面这行代码等价
metrics=AccuracyMetric(pred=Const.OUTPUT, target=Const.TARGET)
损失函数
训练模型需要提供一个损失函数。fastNLP中提供了直接可以导入使用的四种loss分别为
* :class:`~fastNLP.CrossEntropyLoss`包装了torch.nn.functional.cross_entropy()函数,返回交叉熵损失(可以运用于多分类场景)
* :class:`~fastNLP.BCELoss`包装了torch.nn.functional.binary_cross_entropy()函数,返回二分类的交叉熵
* :class:`~fastNLP.L1Loss`包装了torch.nn.functional.l1_loss()函数返回L1 损失
* :class:`~fastNLP.NLLLoss`包装了torch.nn.functional.nll_loss()函数,返回负对数似然损失
下面提供了一个在分类问题中常用的交叉熵损失。注意它的 **初始化参数**
``pred`` 参数对应的是模型的 forward 方法返回的 dict 中的一个 key 的名字。
``target`` 参数对应的是 :class:`~fastNLP.DataSet` 中作为标签的 :mod:`~fastNLP.core.field` 的名字。
这里我们用 :class:`~fastNLP.Const` 来辅助命名,如果你自己编写模型中 forward 方法的返回值或
数据集中 :mod:`~fastNLP.core.field` 的名字与本例不同, 你可以把 ``pred`` 参数和 ``target`` 参数设定符合自己代码的值。
.. code-block:: python
from fastNLP import CrossEntropyLoss
# loss = CrossEntropyLoss() 在本例中与下面这行代码等价
loss = CrossEntropyLoss(pred=Const.OUTPUT, target=Const.TARGET)
优化器
定义模型运行的时候使用的优化器可以使用fastNLP包装好的优化器
* :class:`~fastNLP.SGD` 包装了torch.optim.SGD优化器
* :class:`~fastNLP.Adam` 包装了torch.optim.Adam优化器
也可以直接使用torch.optim.Optimizer中的优化器并在实例化 :class:`~fastNLP.Trainer` 类的时候传入优化器实参
.. code-block:: python
import torch.optim as optim
from fastNLP import Adam
#使用 torch.optim 定义优化器
optimizer_1=optim.RMSprop(model_cnn.parameters(), lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)
#使用fastNLP中包装的 Adam 定义优化器
optimizer_2=Adam(lr=4e-3, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, model_params=model_cnn.parameters())
快速训练
现在我们可以导入 fastNLP 内置的文本分类模型 :class:`~fastNLP.models.CNNText` ,并使用 :class:`~fastNLP.Trainer` 进行训练,
除了使用 :class:`~fastNLP.Trainer`进行训练,我们也可以通过使用 :class:`~fastNLP.DataSetIter` 来编写自己的训练过程,具体见 :doc:`/tutorials/tutorial_5_datasetiter`
.. code-block:: python
from fastNLP import Trainer
from fastNLP.models import CNNText
#词嵌入的维度、训练的轮数和batch size
EMBED_DIM = 100
N_EPOCHS = 10
BATCH_SIZE = 16
#使用CNNText的时候第一个参数输入一个tuple,作为模型定义embedding的参数
#还可以传入 kernel_nums, kernel_sizes, padding, dropout的自定义值
model_cnn = CNNText((len(vocab),EMBED_DIM), num_classes=3, padding=2, dropout=0.1)
#如果在定义trainer的时候没有传入optimizer参数模型默认的优化器为torch.optim.Adam且learning rate为lr=4e-3
#这里只使用了optimizer_1作为优化器输入感兴趣可以尝试optimizer_2或者其他优化器作为输入
#这里只使用了loss作为损失函数输入感兴趣可以尝试其他损失函数输入
trainer = Trainer(model=model_cnn, train_data=train_data, dev_data=dev_data, loss=loss, metrics=metrics,
optimizer=optimizer_1,n_epochs=N_EPOCHS, batch_size=BATCH_SIZE)
trainer.train()
训练过程的输出如下::
input fields after batch(if batch size is 2):
words: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 40])
seq_len: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2])
target fields after batch(if batch size is 2):
target: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2])
training epochs started 2019-07-08-15-44-48
Evaluation at Epoch 1/10. Step:601/6010. AccuracyMetric: acc=0.59044
Evaluation at Epoch 2/10. Step:1202/6010. AccuracyMetric: acc=0.599813
Evaluation at Epoch 3/10. Step:1803/6010. AccuracyMetric: acc=0.508903
Evaluation at Epoch 4/10. Step:2404/6010. AccuracyMetric: acc=0.596064
Evaluation at Epoch 5/10. Step:3005/6010. AccuracyMetric: acc=0.47985
Evaluation at Epoch 6/10. Step:3606/6010. AccuracyMetric: acc=0.589503
Evaluation at Epoch 7/10. Step:4207/6010. AccuracyMetric: acc=0.311153
Evaluation at Epoch 8/10. Step:4808/6010. AccuracyMetric: acc=0.549203
Evaluation at Epoch 9/10. Step:5409/6010. AccuracyMetric: acc=0.581068
Evaluation at Epoch 10/10. Step:6010/6010. AccuracyMetric: acc=0.523899
In Epoch:2/Step:1202, got best dev performance:AccuracyMetric: acc=0.599813
Reloaded the best model.
快速测试
:class:`~fastNLP.Trainer` 对应fastNLP 也提供了 :class:`~fastNLP.Tester` 用于快速测试,用法如下
.. code-block:: python
from fastNLP import Tester
tester = Tester(test_data, model_cnn, metrics=AccuracyMetric())
tester.test()
测试过程的输出如下::
[tester]
AccuracyMetric: acc=0.565401


@ -0,0 +1,250 @@
==============================================================================
动手实现一个文本分类器II-使用DataSetIter实现自定义训练过程
==============================================================================
我们使用和 :doc:`/user/quickstart` 中一样的任务来进行详细的介绍。给出一段评价性文字预测其情感倾向是积极label=1
消极label=0还是中性label=2使用 :class:`~fastNLP.DataSetIter` 类来编写自己的训练过程。
自己编写训练过程之前的内容与 :doc:`/tutorials/tutorial_4_loss_optimizer` 中的完全一样,如已经阅读过可以跳过。
--------------
数据处理
--------------
数据读入
我们可以使用 fastNLP :mod:`fastNLP.io` 模块中的 :class:`~fastNLP.io.SSTLoader`轻松地读取SST数据集数据来源https://nlp.stanford.edu/sentiment/trainDevTestTrees_PTB.zip
这里的 dataset 是 fastNLP 中 :class:`~fastNLP.DataSet` 类的对象。
.. code-block:: python
from fastNLP.io import SSTLoader
loader = SSTLoader()
#这里的all.txt是下载好数据后train.txt、dev.txt、test.txt的组合
dataset = loader.load("./trainDevTestTrees_PTB/trees/all.txt")
print(dataset[0])
输出数据如下::
{'words': ['It', "'s", 'a', 'lovely', 'film', 'with', 'lovely', 'performances', 'by', 'Buy', 'and', 'Accorsi', '.'] type=list,
'target': positive type=str}
除了读取数据外fastNLP 还提供了读取其它文件类型的 Loader 类、读取 Embedding的 Loader 等。详见 :doc:`/fastNLP.io`
数据处理
我们使用 :class:`~fastNLP.DataSet` 类的 :meth:`~fastNLP.DataSet.apply` 方法将 ``target`` :mod:`~fastNLP.core.field` 转化为整数。
.. code-block:: python
def label_to_int(x):
if x['target']=="positive":
return 1
elif x['target']=="negative":
return 0
else:
return 2
# 将label转为整数
dataset.apply(lambda x: label_to_int(x), new_field_name='target')
``words````target`` 已经足够用于 :class:`~fastNLP.models.CNNText` 的训练了,但我们从其文档
:class:`~fastNLP.models.CNNText` 中看到,在 :meth:`~fastNLP.models.CNNText.forward` 的时候,还可以传入可选参数 ``seq_len``
所以,我们再使用 :meth:`~fastNLP.DataSet.apply_field` 方法增加一个名为 ``seq_len``:mod:`~fastNLP.core.field`
.. code-block:: python
# 增加长度信息
dataset.apply_field(lambda x: len(x), field_name='words', new_field_name='seq_len')
观察可知: :meth:`~fastNLP.DataSet.apply_field`:meth:`~fastNLP.DataSet.apply` 类似,
但所传入的 `lambda` 函数是针对一个 :class:`~fastNLP.Instance` 中的一个 :mod:`~fastNLP.core.field` 的;
:meth:`~fastNLP.DataSet.apply` 所传入的 `lambda` 函数是针对整个 :class:`~fastNLP.Instance` 的。
.. note::
`lambda` 函数即匿名函数,是 Python 的重要特性。 ``lambda x: len(x)`` 和下面的这个函数的作用相同::
def func_lambda(x):
return len(x)
你也可以编写复杂的函数做为 :meth:`~fastNLP.DataSet.apply_field`:meth:`~fastNLP.DataSet.apply` 的参数
Vocabulary 的使用
我们再用 :class:`~fastNLP.Vocabulary` 类来统计数据中出现的单词,并使用 :meth:`~fastNLP.Vocabulary.index_dataset`
将单词序列转化为训练可用的数字序列。
.. code-block:: python
from fastNLP import Vocabulary
# 使用Vocabulary类统计单词并将单词序列转化为数字序列
vocab = Vocabulary(min_freq=2).from_dataset(dataset, field_name='words')
vocab.index_dataset(dataset, field_name='words',new_field_name='words')
print(dataset[0])
输出数据如下::
{'words': [27, 9, 6, 913, 16, 18, 913, 124, 31, 5715, 5, 1, 2] type=list,
'target': 1 type=int,
'seq_len': 13 type=int}
---------------------
使用内置模型训练
---------------------
内置模型的输入输出命名
fastNLP内置了一些完整的神经网络模型详见 :doc:`/fastNLP.models` , 我们使用其中的 :class:`~fastNLP.models.CNNText` 模型进行训练。
为了使用内置的 :class:`~fastNLP.models.CNNText`,我们必须修改 :class:`~fastNLP.DataSet`:mod:`~fastNLP.core.field` 的名称。
在这个例子中模型输入 (forward方法的参数) 为 ``words````seq_len`` ; 预测输出为 ``pred`` ;标准答案为 ``target``
具体的命名规范可以参考 :doc:`/fastNLP.core.const`
如果不想查看文档,您也可以使用 :class:`~fastNLP.Const` 类进行命名。下面的代码展示了给 :class:`~fastNLP.DataSet`
:mod:`~fastNLP.core.field` 改名的 :meth:`~fastNLP.DataSet.rename_field` 方法,以及 :class:`~fastNLP.Const` 类的使用方法。
.. code-block:: python
from fastNLP import Const
dataset.rename_field('words', Const.INPUT)
dataset.rename_field('seq_len', Const.INPUT_LEN)
dataset.rename_field('target', Const.TARGET)
print(Const.INPUT)
print(Const.INPUT_LEN)
print(Const.TARGET)
print(Const.OUTPUT)
输出结果为::
words
seq_len
target
pred
在给 :class:`~fastNLP.DataSet`:mod:`~fastNLP.core.field` 改名后,我们还需要设置训练所需的输入和目标,这里使用的是
:meth:`~fastNLP.DataSet.set_input`:meth:`~fastNLP.DataSet.set_target` 两个函数。
.. code-block:: python
#使用dataset的 set_input 和 set_target函数告诉模型dataset中哪些数据是输入哪些数据是标签目标输出
dataset.set_input(Const.INPUT, Const.INPUT_LEN)
dataset.set_target(Const.TARGET)
数据集分割
除了修改 :mod:`~fastNLP.core.field` 之外,我们还可以对 :class:`~fastNLP.DataSet` 进行分割,以供训练、开发和测试使用。
下面这段代码展示了 :meth:`~fastNLP.DataSet.split` 的使用方法
.. code-block:: python
train_dev_data, test_data = dataset.split(0.1)
train_data, dev_data = train_dev_data.split(0.1)
print(len(train_data), len(dev_data), len(test_data))
输出结果为::
9603 1067 1185
评价指标
训练模型需要提供一个评价指标。这里使用准确率做为评价指标。参数的 `命名规则` 跟上面类似。
``pred`` 参数对应的是模型的 forward 方法返回的 dict 中的一个 key 的名字。
``target`` 参数对应的是 :class:`~fastNLP.DataSet` 中作为标签的 :mod:`~fastNLP.core.field` 的名字。
.. code-block:: python
from fastNLP import AccuracyMetric
# metrics=AccuracyMetric() 在本例中与下面这行代码等价
metrics=AccuracyMetric(pred=Const.OUTPUT, target=Const.TARGET)
--------------------------
自己编写训练过程
--------------------------
如果你想用类似 PyTorch 的使用方法,自己编写训练过程,你可以参考下面这段代码。
其中使用了 fastNLP 提供的 :class:`~fastNLP.DataSetIter` 来获得小批量训练的小批量数据,
使用 :class:`~fastNLP.BucketSampler` 做为 :class:`~fastNLP.DataSetIter` 的参数来选择采样的方式。
DataSetIter
fastNLP定义的 :class:`~fastNLP.DataSetIter` 用于定义一个batch并实现batch的多种功能在初始化时传入的参数有
* dataset: :class:`~fastNLP.DataSet` 对象, 数据集
* batch_size: 取出的batch大小
* sampler: 规定使用的 :class:`~fastNLP.Sampler` 若为 None, 使用 :class:`~fastNLP.SequentialSampler`Default: None
* as_numpy: 若为 True, 输出batch为 `numpy.array`否则为 `torch.Tensor`Default: False
* num_workers: 使用多少个进程来预处理数据Default: 0
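下面给出一个最小的使用示意(示意代码,沿用前文处理好的 ``train_data``不传入 sampler 时默认按顺序取出 batch
.. code-block:: python
    from fastNLP import DataSetIter
    # batch_x/batch_y 分别对应被 set_input 和 set_target 的 field
    data_iter = DataSetIter(dataset=train_data, batch_size=8)
    for batch_x, batch_y in data_iter:
        print(batch_x['words'].shape, batch_y['target'].shape)
        break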
sampler
fastNLP 实现的采样器有:
* :class:`~fastNLP.BucketSampler` 可以随机地取出长度相似的元素 【初始化参数: num_bucketsbucket的数量 batch_sizebatch大小 seq_len_field_namedataset中对应序列长度的 :mod:`~fastNLP.core.field` 的名字】
* :class:`~fastNLP.SequentialSampler` 顺序取出元素的采样器【无初始化参数】
* :class:`~fastNLP.RandomSampler` 随机化取元素的采样器【无初始化参数】
以下代码使用BucketSampler作为 :class:`~fastNLP.DataSetIter` 初始化的输入,运用 :class:`~fastNLP.DataSetIter` 自己写训练程序
.. code-block:: python
from fastNLP import BucketSampler
from fastNLP import DataSetIter
from fastNLP.models import CNNText
from fastNLP import Tester
import torch
import time
embed_dim = 100
model = CNNText((len(vocab),embed_dim), num_classes=3, padding=2, dropout=0.1)
def train(epoch, data, devdata):
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
lossfunc = torch.nn.CrossEntropyLoss()
batch_size = 32
# 定义一个Batch传入DataSet规定batch_size和取batch的规则。
# 顺序Sequential随机Random相似长度组成一个batchBucket
train_sampler = BucketSampler(batch_size=batch_size, seq_len_field_name='seq_len')
train_batch = DataSetIter(batch_size=batch_size, dataset=data, sampler=train_sampler)
start_time = time.time()
print("-"*5+"start training"+"-"*5)
for i in range(epoch):
loss_list = []
for batch_x, batch_y in train_batch:
optimizer.zero_grad()
output = model(batch_x['words'])
loss = lossfunc(output['pred'], batch_y['target'])
loss.backward()
optimizer.step()
loss_list.append(loss.item())
#这里verbose如果为0在调用Tester对象的test()函数时不输出任何信息,返回评估信息; 如果为1打印出验证结果返回评估信息
#在调用过Tester对象的test()函数后调用其_format_eval_results(res)函数,结构化输出验证结果
tester_tmp = Tester(devdata, model, metrics=AccuracyMetric(), verbose=0)
res=tester_tmp.test()
print('Epoch {:d} Avg Loss: {:.2f}'.format(i, sum(loss_list) / len(loss_list)),end=" ")
print(tester_tmp._format_eval_results(res),end=" ")
print('{:d}ms'.format(round((time.time()-start_time)*1000)))
loss_list.clear()
train(10, train_data, dev_data)
#使用tester进行快速测试
tester = Tester(test_data, model, metrics=AccuracyMetric())
tester.test()
这段代码的输出如下::
-----start training-----
Epoch 0 Avg Loss: 1.09 AccuracyMetric: acc=0.480787 58989ms
Epoch 1 Avg Loss: 1.00 AccuracyMetric: acc=0.500469 118348ms
Epoch 2 Avg Loss: 0.93 AccuracyMetric: acc=0.536082 176220ms
Epoch 3 Avg Loss: 0.87 AccuracyMetric: acc=0.556701 236032ms
Epoch 4 Avg Loss: 0.78 AccuracyMetric: acc=0.562324 294351ms
Epoch 5 Avg Loss: 0.69 AccuracyMetric: acc=0.58388 353673ms
Epoch 6 Avg Loss: 0.60 AccuracyMetric: acc=0.574508 412106ms
Epoch 7 Avg Loss: 0.51 AccuracyMetric: acc=0.589503 471097ms
Epoch 8 Avg Loss: 0.44 AccuracyMetric: acc=0.581068 529174ms
Epoch 9 Avg Loss: 0.39 AccuracyMetric: acc=0.572634 586216ms
[tester]
AccuracyMetric: acc=0.527426

View File

@ -0,0 +1,114 @@
=====================
快速实现序列标注模型
=====================
这一部分的内容主要展示如何使用fastNLP 实现序列标注任务。你可以使用fastNLP的各个组件快捷方便地完成序列标注任务达到出色的效果。
在阅读这篇Tutorial前希望你已经熟悉了fastNLP的基础使用包括基本数据结构、数据预处理以及embedding的使用等并对之前的教程有了进一步的掌握。
我们将对CoNLL-03的英文数据集进行处理展示如何完成命名实体标注任务的整个训练过程。
载入数据
===================================
fastNLP可以方便地载入各种类型的数据。同时针对常见的数据集我们已经预先实现了载入方法其中包含CoNLL-03数据集。
在设计dataloader时以DataSetLoader为基类可以改写并应用于其他数据集的载入。
.. code-block:: python
class Conll2003DataLoader(DataSetLoader):
def __init__(self, task:str='ner', encoding_type:str='bioes'):
assert task in ('ner', 'pos', 'chunk')
index = {'ner':3, 'pos':1, 'chunk':2}[task]
#ConllLoader是fastNLP内置的类
self._loader = ConllLoader(headers=['raw_words', 'target'], indexes=[0, index])
self._tag_converters = None
if task in ('ner', 'chunk'):
#iob2和iob2bioes会对tag进行统一标准化
self._tag_converters = [iob2]
if encoding_type == 'bioes':
self._tag_converters.append(iob2bioes)
def load(self, path: str):
dataset = self._loader.load(path)
def convert_tag_schema(tags):
for converter in self._tag_converters:
tags = converter(tags)
return tags
if self._tag_converters:
#使用apply_field应用convert_tag_schema函数这里实际上也支持匿名函数
dataset.apply_field(convert_tag_schema, field_name=Const.TARGET, new_field_name=Const.TARGET)
return dataset
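定义好 loader 之后,可以像下面这样载入数据(示意代码,其中的文件路径为假设的本地数据路径):
.. code-block:: python
    loader = Conll2003DataLoader(task='ner', encoding_type='bioes')
    dataset = loader.load('./conll2003/train.txt')
    print(dataset[0])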
输出数据格式如下::
{'raw_words': ['on', 'Friday', ':'] type=list,
'target': ['O', 'O', 'O'] type=list},
数据处理
----------------------------
我们进一步处理数据,将数据和词表封装在 :class:`~fastNLP.DataBundle` 类中下文中的 data 即为 DataBundle 的实例。
我们输入模型的数据包括char embedding以及word embedding。在数据处理部分我们尝试完成词表的构建。
使用fastNLP中的Vocabulary类来构建词表。
.. code-block:: python
word_vocab = Vocabulary(min_freq=2)
word_vocab.from_dataset(data.datasets['train'], field_name=Const.INPUT)
word_vocab.index_dataset(*data.datasets.values(),field_name=Const.INPUT, new_field_name=Const.INPUT)
处理后的data对象内部为::
    dataset
    vocabs
其中dataset以 :class:`~fastNLP.DataSet` 的形式保存了train和test中的数据
vocabs保存了wordsraw_words以及target对应的词表。
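words 词表的构建方式在上面的代码中已经给出target标签词表的构建与之类似下面是一段示意代码假设 data 即为前文的 DataBundle 实例):
.. code-block:: python
    from fastNLP import Vocabulary, Const
    # 标签词表不需要 <unk> 和 <pad>
    target_vocab = Vocabulary(unknown=None, padding=None)
    target_vocab.from_dataset(data.datasets['train'], field_name=Const.TARGET)
    target_vocab.index_dataset(*data.datasets.values(), field_name=Const.TARGET)
    data.vocabs[Const.TARGET] = target_vocab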
模型构建
--------------------------------
我们使用CNN-BiLSTM-CRF模型完成这一任务。在网络构建方面fastNLP的网络定义继承pytorch的 :class:`nn.Module` 类,
用户可以按照pytorch的方式自行定义网络。需要注意的是命名fastNLP的标准命名位于 :class:`~fastNLP.Const` 类。
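作为命名约定的示意,下面给出一个高度简化的序列标注网络骨架(省略了 char encoder 与 CRF并非 reproduction 中 CNN-BiLSTM-CRF 的实际实现它只用来说明forward 的参数名需与被 set_input 的 field 名一致,返回的 dict 需使用 :class:`~fastNLP.Const` 规定的 key。
.. code-block:: python
    import torch.nn as nn
    from fastNLP import Const
    class SimpleBiLSTMTagger(nn.Module):
        def __init__(self, vocab_size, embed_dim, hidden_size, num_tags):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_size // 2, batch_first=True, bidirectional=True)
            self.fc = nn.Linear(hidden_size, num_tags)
        def forward(self, words, seq_len=None, target=None):
            # words: (batch_size, max_len),参数名与 DataSet 中被 set_input 的 field 对应
            feats = self.fc(self.lstm(self.embed(words))[0])  # (batch_size, max_len, num_tags)
            output = {Const.OUTPUT: feats.argmax(dim=-1)}
            if target is not None:
                # 简化起见这里直接使用交叉熵且未对 padding 位置做 mask
                output[Const.LOSS] = nn.functional.cross_entropy(feats.transpose(1, 2), target)
            return output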
模型的训练
首先实例化模型导入所需的char embedding以及word embedding。Embedding的载入可以参考 :doc:`/tutorials/tutorial_3_embedding`
也可以查看 :mod:`~fastNLP.modules.encoder.embedding` 了解所需embedding的载入方法。
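下面给出一种可能的 embedding 构建示意(示意代码:类名、参数名与预训练权重名称均为假设,请以你安装版本的 embedding 文档为准):
.. code-block:: python
    from fastNLP.embeddings import StaticEmbedding, CNNCharEmbedding
    # word embedding从预训练词向量构建预训练名称为假设值
    word_embed = StaticEmbedding(word_vocab, model_dir_or_name='en-glove-6b-100d')
    # char embedding字符级 CNN 编码embed_size 为输出维度
    char_embed = CNNCharEmbedding(word_vocab, embed_size=30)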
fastNLP将模型的训练过程封装在了 :class:`~fastNLP.Trainer` 类中。
根据不同的任务调整trainer中的参数即可。通常一个trainer实例需要有指定的训练数据集、模型、优化器、loss函数、评测指标以及训练的epoch数、batch size等参数。
.. code-block:: python
#实例化模型
model = CNNBiLSTMCRF(word_embed, char_embed, hidden_size=200, num_layers=1, tag_vocab=data.vocabs[Const.TARGET], encoding_type=encoding_type)
#定义优化器
optimizer = Adam(model.parameters(), lr=0.005)
#定义评估指标
Metrics=SpanFPreRecMetric(tag_vocab=data.vocabs[Const.TARGET], encoding_type=encoding_type)
#实例化trainer
trainer = Trainer(train_data=data.datasets['train'], model=model, optimizer=optimizer, dev_data=data.datasets['test'], batch_size=10, metrics=Metrics,callbacks=callbacks, n_epochs=100)
#开始训练
trainer.train()
训练中会保存最优的参数配置。
训练的结果如下::
Evaluation on DataSet test:
SpanFPreRecMetric: f=0.727661, pre=0.732293, rec=0.723088
Evaluation at Epoch 1/100. Step:1405/140500. SpanFPreRecMetric: f=0.727661, pre=0.732293, rec=0.723088
Evaluation on DataSet test:
SpanFPreRecMetric: f=0.784307, pre=0.779371, rec=0.789306
Evaluation at Epoch 2/100. Step:2810/140500. SpanFPreRecMetric: f=0.784307, pre=0.779371, rec=0.789306
Evaluation on DataSet test:
SpanFPreRecMetric: f=0.810068, pre=0.811003, rec=0.809136
Evaluation at Epoch 3/100. Step:4215/140500. SpanFPreRecMetric: f=0.810068, pre=0.811003, rec=0.809136
Evaluation on DataSet test:
SpanFPreRecMetric: f=0.829592, pre=0.84153, rec=0.817989
Evaluation at Epoch 4/100. Step:5620/140500. SpanFPreRecMetric: f=0.829592, pre=0.84153, rec=0.817989
Evaluation on DataSet test:
SpanFPreRecMetric: f=0.828789, pre=0.837096, rec=0.820644
Evaluation at Epoch 5/100. Step:7025/140500. SpanFPreRecMetric: f=0.828789, pre=0.837096, rec=0.820644

View File

@ -0,0 +1,207 @@
======================================
使用Modules和Models快速搭建自定义模型
======================================
:mod:`~fastNLP.modules`:mod:`~fastNLP.models` 用于构建 fastNLP 所需的神经网络模型,它可以和 torch.nn 中的模型一起使用。
下面我们会分三节介绍编写构建模型的具体方法。
----------------------
使用 models 中的模型
----------------------
fastNLP 在 :mod:`~fastNLP.models` 模块中内置了如 :class:`~fastNLP.models.CNNText`
:class:`~fastNLP.models.SeqLabeling` 等完整的模型,以供用户直接使用。
:class:`~fastNLP.models.CNNText` 为例,我们看一个简单的文本分类的任务的实现过程。
首先是数据读入和处理部分,这里的代码和 :doc:`快速入门 </user/quickstart>` 中一致。
.. code-block:: python
from fastNLP.io import CSVLoader
from fastNLP import Vocabulary, CrossEntropyLoss, AccuracyMetric
loader = CSVLoader(headers=('raw_sentence', 'label'), sep='\t')
dataset = loader.load("./sample_data/tutorial_sample_dataset.csv")
dataset.apply(lambda x: x['raw_sentence'].lower(), new_field_name='sentence')
dataset.apply_field(lambda x: x.split(), field_name='sentence', new_field_name='words', is_input=True)
dataset.apply(lambda x: int(x['label']), new_field_name='target', is_target=True)
train_dev_data, test_data = dataset.split(0.1)
train_data, dev_data = train_dev_data.split(0.1)
vocab = Vocabulary(min_freq=2).from_dataset(train_data, field_name='words')
vocab.index_dataset(train_data, dev_data, test_data, field_name='words', new_field_name='words')
然后我们从 :mod:`~fastNLP.models` 中导入 ``CNNText`` 模型,用它进行训练
.. code-block:: python
from fastNLP.models import CNNText
from fastNLP import Trainer
model_cnn = CNNText((len(vocab),50), num_classes=5, padding=2, dropout=0.1)
trainer = Trainer(model=model_cnn, train_data=train_data, dev_data=dev_data,
loss=CrossEntropyLoss(), metrics=AccuracyMetric())
trainer.train()
在 IPython 环境输入 `model_cnn` ,我们可以看到 ``model_cnn`` 的网络结构
.. parsed-literal::
CNNText(
(embed): Embedding(
169, 50
(dropout): Dropout(p=0.0)
)
(conv_pool): ConvMaxpool(
(convs): ModuleList(
(0): Conv1d(50, 3, kernel_size=(3,), stride=(1,), padding=(2,))
(1): Conv1d(50, 4, kernel_size=(4,), stride=(1,), padding=(2,))
(2): Conv1d(50, 5, kernel_size=(5,), stride=(1,), padding=(2,))
)
)
(dropout): Dropout(p=0.1)
(fc): Linear(in_features=12, out_features=5, bias=True)
)
fastNLP 中内置的 models 如下表所示,您可以点击具体的名称查看详细的 API
.. csv-table::
:header: 名称, 介绍
:class:`~fastNLP.models.CNNText` , 使用 CNN 进行文本分类的模型
:class:`~fastNLP.models.SeqLabeling` , 简单的序列标注模型
:class:`~fastNLP.models.AdvSeqLabel` , 更大网络结构的序列标注模型
:class:`~fastNLP.models.ESIM` , ESIM 模型的实现
    :class:`~fastNLP.models.StarTransEnc` , 带 word-embedding 的 Star-Transformer 模型
:class:`~fastNLP.models.STSeqLabel` , 用于序列标注的 Star-Transformer 模型
:class:`~fastNLP.models.STNLICls` ,用于自然语言推断 (NLI) 的 Star-Transformer 模型
:class:`~fastNLP.models.STSeqCls` , 用于分类任务的 Star-Transformer 模型
:class:`~fastNLP.models.BiaffineParser` , Biaffine 依存句法分析网络的实现
----------------------------
使用 torch.nn 编写模型
----------------------------
fastNLP 完全支持使用 PyTorch 编写的模型,但与 PyTorch 中编写模型的常见方法不同,
用于 fastNLP 的模型中 forward 函数需要返回一个字典,字典中至少需要包含 ``pred`` 这个字段。
下面是使用 PyTorch 中的 torch.nn 模块编写的文本分类模型,注意观察代码中标注的向量维度。
由于 PyTorch 使用了约定俗成的维度设置forward 中需要多次处理维度顺序。
.. code-block:: python
import torch
import torch.nn as nn
class LSTMText(nn.Module):
def __init__(self, vocab_size, embedding_dim, output_dim, hidden_dim=64, num_layers=2, dropout=0.5):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=num_layers, bidirectional=True, dropout=dropout)
self.fc = nn.Linear(hidden_dim * 2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, words):
# (input) words : (batch_size, seq_len)
words = words.permute(1,0)
# words : (seq_len, batch_size)
embedded = self.dropout(self.embedding(words))
# embedded : (seq_len, batch_size, embedding_dim)
output, (hidden, cell) = self.lstm(embedded)
# output: (seq_len, batch_size, hidden_dim * 2)
# hidden: (num_layers * 2, batch_size, hidden_dim)
# cell: (num_layers * 2, batch_size, hidden_dim)
hidden = torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)
hidden = self.dropout(hidden)
# hidden: (batch_size, hidden_dim * 2)
pred = self.fc(hidden.squeeze(0))
# result: (batch_size, output_dim)
return {"pred":pred}
我们同样可以在 IPython 环境中查看这个模型的网络结构
.. parsed-literal::
LSTMText(
(embedding): Embedding(169, 50)
(lstm): LSTM(50, 64, num_layers=2, dropout=0.5, bidirectional=True)
(fc): Linear(in_features=128, out_features=5, bias=True)
(dropout): Dropout(p=0.5)
)
----------------------------
使用 modules 编写模型
----------------------------
下面我们使用 :mod:`fastNLP.modules` 中的组件来构建同样的网络。由于 fastNLP 统一把 ``batch_size`` 放在第一维,
在编写代码的过程中会有一定的便利。
.. code-block:: python
from fastNLP.modules import Embedding, LSTM, MLP
class Model(nn.Module):
def __init__(self, vocab_size, embedding_dim, output_dim, hidden_dim=64, num_layers=2, dropout=0.5):
super().__init__()
self.embedding = Embedding((vocab_size, embedding_dim))
self.lstm = LSTM(embedding_dim, hidden_dim, num_layers=num_layers, bidirectional=True)
self.mlp = MLP([hidden_dim*2,output_dim], dropout=dropout)
def forward(self, words):
embedded = self.embedding(words)
_,(hidden,_) = self.lstm(embedded)
pred = self.mlp(torch.cat((hidden[-1],hidden[-2]),dim=1))
return {"pred":pred}
我们自己编写模型的网络结构如下
.. parsed-literal::
Model(
(embedding): Embedding(
169, 50
(dropout): Dropout(p=0.0)
)
(lstm): LSTM(
(lstm): LSTM(50, 64, num_layers=2, batch_first=True, bidirectional=True)
)
(mlp): MLP(
(hiddens): ModuleList()
(output): Linear(in_features=128, out_features=5, bias=True)
(dropout): Dropout(p=0.5)
)
)
fastNLP 中包含的各种模块如下表,您可以点击具体的名称查看详细的 API也可以通过 :doc:`/fastNLP.modules` 进行了解。
.. csv-table::
:header: 名称, 介绍
:class:`~fastNLP.modules.ConvolutionCharEncoder` , char级别的卷积 encoder
:class:`~fastNLP.modules.LSTMCharEncoder` , char级别基于LSTM的 encoder
:class:`~fastNLP.modules.ConvMaxpool` , 结合了Convolution和Max-Pooling于一体的模块
:class:`~fastNLP.modules.LSTM` , LSTM模块, 轻量封装了PyTorch的LSTM
:class:`~fastNLP.modules.StarTransformer` , Star-Transformer 的encoder部分
:class:`~fastNLP.modules.TransformerEncoder` , Transformer的encoder模块不包含embedding层
:class:`~fastNLP.modules.VarRNN` , Variational Dropout RNN 模块
:class:`~fastNLP.modules.VarLSTM` , Variational Dropout LSTM 模块
:class:`~fastNLP.modules.VarGRU` , Variational Dropout GRU 模块
:class:`~fastNLP.modules.MaxPool` , Max-pooling模块
:class:`~fastNLP.modules.MaxPoolWithMask` , 带mask矩阵的max pooling。在做 max-pooling的时候不会考虑mask值为0的位置。
:class:`~fastNLP.modules.AvgPool` , Average-pooling模块
:class:`~fastNLP.modules.AvgPoolWithMask` , 带mask矩阵的average pooling。在做 average-pooling的时候不会考虑mask值为0的位置。
:class:`~fastNLP.modules.MultiHeadAttention` , MultiHead Attention 模块
:class:`~fastNLP.modules.MLP` , 简单的多层感知器模块
:class:`~fastNLP.modules.ConditionalRandomField` , 条件随机场模块
:class:`~fastNLP.modules.viterbi_decode` , 给定一个特征矩阵以及转移分数矩阵,计算出最佳的路径以及对应的分数 (与 :class:`~fastNLP.modules.ConditionalRandomField` 配合使用)
    :class:`~fastNLP.modules.allowed_transitions` , 给定一个id到label的映射表返回所有可以跳转的列表(与 :class:`~fastNLP.modules.ConditionalRandomField` 配合使用)
:class:`~fastNLP.modules.TimestepDropout` , 简单包装过的Dropout 组件

View File

@ -0,0 +1,121 @@
===============================
使用Metric快速评测你的模型
===============================
在进行训练时fastNLP提供了各种各样的 :mod:`~fastNLP.core.metrics`
:doc:`/user/quickstart` 中所介绍的,:class:`~fastNLP.AccuracyMetric` 类的对象被直接传到 :class:`~fastNLP.Trainer` 中用于训练
.. code-block:: python
from fastNLP import Trainer, CrossEntropyLoss, AccuracyMetric
trainer = Trainer(model=model, train_data=train_data, dev_data=dev_data,
loss=CrossEntropyLoss(), metrics=AccuracyMetric())
trainer.train()
除了 :class:`~fastNLP.AccuracyMetric` 之外,:class:`~fastNLP.SpanFPreRecMetric` 也是一种常见的评价指标,
例如在序列标注问题中常以span的方式计算 F-measure, precision, recall。
另外fastNLP 还实现了用于抽取式QA如SQuAD的metric :class:`~fastNLP.ExtractiveQAMetric`
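以 :class:`~fastNLP.SpanFPreRecMetric` 为例,初始化时通常需要传入标签词表与编码方式,``pred`` 、``target`` 、``seq_len`` 等参数的命名规则与 :class:`~fastNLP.AccuracyMetric` 相同(示意代码,其中 ``tag_vocab`` 为假设的标签词表变量):
.. code-block:: python
    from fastNLP import SpanFPreRecMetric, Const
    # tag_vocab 为标签BIO/BIOES对应的 Vocabulary
    metric = SpanFPreRecMetric(tag_vocab=tag_vocab, encoding_type='bio',
                               pred=Const.OUTPUT, target=Const.TARGET, seq_len=Const.INPUT_LEN)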
用户可以参考下面这个表格,点击第一列查看各个 :mod:`~fastNLP.core.metrics` 的详细文档。
.. csv-table::
:header: 名称, 介绍
:class:`~fastNLP.core.metrics.MetricBase` , 自定义metrics需继承的基类
:class:`~fastNLP.core.metrics.AccuracyMetric` , 简单的正确率metric
:class:`~fastNLP.core.metrics.SpanFPreRecMetric` , "同时计算 F-measure, precision, recall 值的 metric"
    :class:`~fastNLP.core.metrics.ExtractiveQAMetric` , 用于抽取式QA任务的metric
更多的 :mod:`~fastNLP.core.metrics` 正在被添加到 fastNLP 当中,敬请期待。
------------------------------
定义自己的metrics
------------------------------
在定义自己的metrics类时需继承 fastNLP 的 :class:`~fastNLP.core.metrics.MetricBase`,
并重写 ``evaluate`` 和 ``get_metric`` 方法。
evaluate(xxx) 中传入一个批次的数据,将针对一个批次的预测结果做评价指标的累计
get_metric(xxx) 当所有数据处理完毕时调用该方法,它将根据 evaluate函数累计的评价指标统计量来计算最终的评价结果
以分类问题中Accuracy计算为例假设model的forward返回dict中包含 `pred` 这个key, 并且该key需要用于Accuracy::
class Model(nn.Module):
def __init__(xxx):
# do something
def forward(self, xxx):
# do something
return {'pred': pred, 'other_keys':xxx} # pred's shape: batch_size x num_classes
假设dataset中 `label` 这个field是需要预测的值并且该field被设置为了target
对应的AccMetric可以按如下的定义, version1, 只使用这一次::
class AccMetric(MetricBase):
def __init__(self):
super().__init__()
# 根据你的情况自定义指标
self.corr_num = 0
self.total = 0
def evaluate(self, label, pred): # 这里的名称需要和dataset中target field与model返回的key是一样的不然找不到对应的value
# dev或test时每个batch结束会调用一次该方法需要实现如何根据每个batch累加metric
self.total += label.size(0)
self.corr_num += label.eq(pred).sum().item()
def get_metric(self, reset=True): # 在这里定义如何计算metric
acc = self.corr_num/self.total
if reset: # 是否清零以便重新计算
self.corr_num = 0
self.total = 0
return {'acc': acc} # 需要返回一个dictkey为该metric的名称该名称会显示到Trainer的progress bar中
version2如果需要复用Metric比如下一次使用AccMetric时dataset中目标field不叫label而叫y或者model的输出不是pred::
class AccMetric(MetricBase):
def __init__(self, label=None, pred=None):
# 假设在另一场景使用时目标field叫ymodel给出的key为pred_y。则只需要在初始化AccMetric时
# acc_metric = AccMetric(label='y', pred='pred_y')即可。
# 当初始化为acc_metric = AccMetric()即label=None, pred=None, fastNLP会直接使用'label', 'pred'作为key去索取对
# 应的值
super().__init__()
self._init_param_map(label=label, pred=pred) # 该方法会注册label和pred. 仅需要注册evaluate()方法会用到的参数名即可
# 如果没有注册该映射则效果与version1是一样的
# 根据你的情况自定义指标
self.corr_num = 0
self.total = 0
def evaluate(self, label, pred): # 这里的参数名称需要和self._init_param_map()注册时一致。
# dev或test时每个batch结束会调用一次该方法需要实现如何根据每个batch累加metric
self.total += label.size(0)
self.corr_num += label.eq(pred).sum().item()
def get_metric(self, reset=True): # 在这里定义如何计算metric
acc = self.corr_num/self.total
if reset: # 是否清零以便重新计算
self.corr_num = 0
self.total = 0
return {'acc': acc} # 需要返回一个dictkey为该metric的名称该名称会显示到Trainer的progress bar中
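定义好 AccMetric 之后,它的使用方式与内置 metric 相同,直接传入 Trainer 或 Tester 的 ``metrics`` 参数即可(示意代码,假设 model、dev_data 已按前文准备好)::
    from fastNLP import Tester
    # 若目标 field 叫 y、模型输出的 key 为 pred_y则按 version2 的方式建立映射
    tester = Tester(dev_data, model, metrics=AccMetric(label='y', pred='pred_y'))
    tester.test()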
``MetricBase`` 将会在输入的字典 ``pred_dict````target_dict`` 中进行检查.
``pred_dict`` 是模型当中 ``forward()`` 函数或者 ``predict()`` 函数的返回值.
``target_dict`` 是DataSet当中的ground truth, 判定ground truth的条件是field的 ``is_target`` 被设置为True.
``MetricBase`` 会进行以下的类型检测:
1. self.evaluate当中是否有varargs, 这是不支持的.
2. self.evaluate当中所需要的参数是否既不在 ``pred_dict`` 也不在 ``target_dict`` .
3. self.evaluate当中所需要的参数是否既在 ``pred_dict`` 也在 ``target_dict`` .
除此以外在参数被传入self.evaluate以前这个函数会检测 ``pred_dict````target_dict`` 当中没有被用到的参数
如果kwargs是self.evaluate的参数则不会检测
self.evaluate将计算一个批次(batch)的评价指标,并累计。 没有返回值
self.get_metric将统计当前的评价指标并返回评价结果, 返回值需要是一个dict, key是指标名称value是指标的值

View File

@ -0,0 +1,67 @@
===================================================
使用Callback自定义你的训练过程
===================================================
在训练时我们常常要使用trick来提高模型的性能如调节学习率或者要打印训练中的信息。
这里我们提供Callback类在Trainer中插入代码完成一些自定义的操作。
我们使用和 :doc:`/user/quickstart` 中一样的任务来进行详细的介绍。
给出一段评价性文字预测其情感倾向是积极label=1、消极label=0还是中性label=2使用 :class:`~fastNLP.Trainer`:class:`~fastNLP.Tester` 来进行快速训练和测试。
关于数据处理Loss和Optimizer的选择可以看其他教程这里仅在训练时加入学习率衰减。
---------------------
Callback的构建和使用
---------------------
创建Callback
我们可以继承fastNLP :class:`~fastNLP.Callback` 类来定义自己的Callback。
这里我们实现一个让学习率线性衰减的Callback。
.. code-block:: python
import fastNLP
class LRDecay(fastNLP.Callback):
def __init__(self):
super(LRDecay, self).__init__()
self.base_lrs = []
self.delta = []
def on_train_begin(self):
# 初始化,仅训练开始时调用
self.base_lrs = [pg['lr'] for pg in self.optimizer.param_groups]
self.delta = [float(lr) / self.n_epochs for lr in self.base_lrs]
def on_epoch_end(self):
# 每个epoch结束时更新学习率
ep = self.epoch
lrs = [lr - d * ep for lr, d in zip(self.base_lrs, self.delta)]
self.change_lr(lrs)
def change_lr(self, lrs):
for pg, lr in zip(self.optimizer.param_groups, lrs):
pg['lr'] = lr
这里,:class:`~fastNLP.Callback` 中所有以 ``on_`` 开头的类方法会在 :class:`~fastNLP.Trainer` 的训练中在特定时间调用。
如 on_train_begin() 会在训练开始时被调用on_epoch_end() 会在每个 epoch 结束时调用。
具体有哪些类方法,参见文档 :class:`~fastNLP.Callback`
另外,为了使用方便,可以在 :class:`~fastNLP.Callback` 内部访问 :class:`~fastNLP.Trainer` 中的属性,如 optimizer, epoch, step分别对应训练时的优化器当前epoch数和当前的总step数。
具体可访问的属性,参见文档 :class:`~fastNLP.Callback`
使用Callback
在定义好 :class:`~fastNLP.Callback` 之后就能将它传入Trainer的 ``callbacks`` 参数,在实际训练时使用。
.. code-block:: python
"""
数据预处理,模型定义等等
"""
trainer = fastNLP.Trainer(
model=model, train_data=train_data, dev_data=dev_data,
optimizer=optimizer, metrics=metrics,
batch_size=10, n_epochs=100,
callbacks=[LRDecay()])
trainer.train()

View File

@ -0,0 +1,3 @@
===============
在代码中写文档
===============

View File

@ -0,0 +1,156 @@
======
大标题
======
.. note::
中文标题需要符号的数量至少是中文字数的两倍
.. warning::
符号的数量只可以多,不可以少。
小标题1
###########
小标题2
*********
小标题3(正常使用)
========================
小标题4
-------------------
推荐使用大标题、小标题3和小标题4
官方文档 http://docutils.sourceforge.net/docs/user/rst/quickref.html
`熟悉markdown的同学推荐参考这篇文章 <https://macplay.github.io/posts/cong-markdown-dao-restructuredtext/#id30>`_
\<\>内表示的是链接地址,\<\>外的是显示到外面的文字
常见语法
============
*emphasis*
**strong**
`text`
``inline literal``
http://docutils.sf.net/ 孤立的网址会自动生成链接
显示为特定的文字的链接 `sohu <http://www.sohu.com>`_
突出显示的
上面文字
正常缩进
形成锻炼
特殊模块
============
选项会自动识别
-v An option
-o file Same with value
--delta A long option
--delta=len Same with value
图片
.. image:: ../figures/procedures.PNG
:height: 200
:width: 560
:scale: 50
:alt: alternate text
:align: center
显示一个冒号的代码块::
中间要空一行
::
不显示冒号的代码块
.. code-block:: python
:linenos:
:emphasize-lines: 1,3
print("专业的代码块")
print("")
print("有行号和高亮")
数学块
==========
.. math::
H_2O + Na = NaOH + H_2 \uparrow
复杂表格
==========
+------------------------+------------+----------+----------+
| Header row, column 1 | Header 2 | Header 3 | Header 4 |
| (header rows optional) | | | |
+========================+============+==========+==========+
| body row 1, column 1 | column 2 | column 3 | column 4 |
+------------------------+------------+----------+----------+
| body row 2 | Cells may span columns. |
+------------------------+------------+---------------------+
| body row 3 | Cells may | - Table cells |
+------------------------+ span rows. | - contain |
| body row 4 | | - body elements. |
+------------------------+------------+---------------------+
简易表格
==========
===== ===== ======
Inputs Output
------------ ------
A B A or B
===== ===== ======
False False False
True True True
===== ===== ======
csv 表格
============
.. csv-table::
:header: sentence, target
This is the first instance ., 0
Second instance ., 1
Third instance ., 1
..., ...
[重要]各种链接
===================
各种链接帮助我们连接到fastNLP文档的各个位置
\<\>内表示的是链接地址,\<\>外的是显示到外面的文字
:doc:`根据文件名链接 </user/quickstart>`
:mod:`~fastNLP.core.batch`
:class:`~fastNLP.Batch`
~表示只显示最后一项
:meth:`fastNLP.DataSet.apply`

View File

@ -7,10 +7,12 @@
fastNLP 依赖如下包:: fastNLP 依赖如下包::
torch>=0.4.0 numpy>=1.14.2
numpy torch>=1.0.0
tqdm tqdm>=4.28.1
nltk nltk>=3.4.1
requests
spacy
其中torch的安装可能与操作系统及 CUDA 的版本相关,请参见 `PyTorch 官网 <https://pytorch.org/get-started/locally/>`_ 其中torch的安装可能与操作系统及 CUDA 的版本相关,请参见 `PyTorch 官网 <https://pytorch.org/get-started/locally/>`_
在依赖包安装完成的情况,您可以在命令行执行如下指令完成安装 在依赖包安装完成的情况,您可以在命令行执行如下指令完成安装
@ -18,3 +20,4 @@ fastNLP 依赖如下包::
.. code:: shell .. code:: shell
>>> pip install fastNLP >>> pip install fastNLP
>>> python -m spacy download en

View File

@ -49,7 +49,7 @@
.. code-block:: python .. code-block:: python
from fastNLP.models import CNNText from fastNLP.models import CNNText
model = CNNText((len(vocab),50), num_classes=5, padding=2, dropout=0.1) model = CNNText((len(vocab),50), num_classes=5, dropout=0.1)
:class:`~fastNLP.models.CNNText` 的网络结构如下:: :class:`~fastNLP.models.CNNText` 的网络结构如下::
@ -121,4 +121,4 @@
In Epoch:6/Step:12, got best dev performance:AccuracyMetric: acc=0.8 In Epoch:6/Step:12, got best dev performance:AccuracyMetric: acc=0.8
Reloaded the best model. Reloaded the best model.
这份教程只是简单地介绍了使用 fastNLP 工作的流程,具体的细节分析见 :doc:`/user/tutorial_one` 这份教程只是简单地介绍了使用 fastNLP 工作的流程,更多的教程分析见 :doc:`/user/tutorials`

View File

@ -1,371 +0,0 @@
===============
详细指南
===============
我们使用和 :doc:`/user/quickstart` 中一样的任务来进行详细的介绍。给出一段文字预测它的标签是0~4中的哪一个
(数据来源 `kaggle <https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews>`_ )。
--------------
数据处理
--------------
数据读入
我们可以使用 fastNLP :mod:`fastNLP.io` 模块中的 :class:`~fastNLP.io.CSVLoader` 类,轻松地从 csv 文件读取我们的数据。
这里的 dataset 是 fastNLP 中 :class:`~fastNLP.DataSet` 类的对象
.. code-block:: python
from fastNLP.io import CSVLoader
loader = CSVLoader(headers=('raw_sentence', 'label'), sep='\t')
dataset = loader.load("./sample_data/tutorial_sample_dataset.csv")
除了读取数据外fastNLP 还提供了读取其它文件类型的 Loader 类、读取 Embedding的 Loader 等。详见 :doc:`/fastNLP.io`
Instance 和 DataSet
fastNLP 中的 :class:`~fastNLP.DataSet` 类对象类似于二维表格,它的每一列是一个 :mod:`~fastNLP.core.field`
每一行是一个 :mod:`~fastNLP.core.instance` 。我们可以手动向数据集中添加 :class:`~fastNLP.Instance` 类的对象
.. code-block:: python
from fastNLP import Instance
dataset.append(Instance(raw_sentence='fake data', label='0'))
此时的 ``dataset[-1]`` 的值如下,可以看到,数据集中的每个数据包含 ``raw_sentence````label`` 两个
:mod:`~fastNLP.core.field` ,他们的类型都是 ``str`` ::
{'raw_sentence': fake data type=str, 'label': 0 type=str}
field 的修改
我们使用 :class:`~fastNLP.DataSet` 类的 :meth:`~fastNLP.DataSet.apply` 方法将 ``raw_sentence`` 中字母变成小写,并将句子分词。
同时也将 ``label`` :mod:`~fastNLP.core.field` 转化为整数并改名为 ``target``
.. code-block:: python
dataset.apply(lambda x: x['raw_sentence'].lower(), new_field_name='sentence')
dataset.apply_field(lambda x: x.split(), field_name='sentence', new_field_name='words')
dataset.apply(lambda x: int(x['label']), new_field_name='target')
``words````target`` 已经足够用于 :class:`~fastNLP.models.CNNText` 的训练了,但我们从其文档
:class:`~fastNLP.models.CNNText` 中看到,在 :meth:`~fastNLP.models.CNNText.forward` 的时候,还可以传入可选参数 ``seq_len``
所以,我们再使用 :meth:`~fastNLP.DataSet.apply_field` 方法增加一个名为 ``seq_len``:mod:`~fastNLP.core.field`
.. code-block:: python
dataset.apply_field(lambda x: len(x), field_name='words', new_field_name='seq_len')
观察可知: :meth:`~fastNLP.DataSet.apply_field`:meth:`~fastNLP.DataSet.apply` 类似,
但所传入的 `lambda` 函数是针对一个 :class:`~fastNLP.Instance` 中的一个 :mod:`~fastNLP.core.field` 的;
:meth:`~fastNLP.DataSet.apply` 所传入的 `lambda` 函数是针对整个 :class:`~fastNLP.Instance` 的。
.. note::
`lambda` 函数即匿名函数,是 Python 的重要特性。 ``lambda x: len(x)`` 和下面的这个函数的作用相同::
def func_lambda(x):
return len(x)
你也可以编写复杂的函数做为 :meth:`~fastNLP.DataSet.apply_field`:meth:`~fastNLP.DataSet.apply` 的参数
Vocabulary 的使用
我们再用 :class:`~fastNLP.Vocabulary` 类来统计数据中出现的单词,并使用 :meth:`~fastNLP.Vocabularyindex_dataset`
将单词序列转化为训练可用的数字序列。
.. code-block:: python
from fastNLP import Vocabulary
vocab = Vocabulary(min_freq=2).from_dataset(dataset, field_name='words')
vocab.index_dataset(dataset, field_name='words',new_field_name='words')
数据集分割
除了修改 :mod:`~fastNLP.core.field` 之外,我们还可以对 :class:`~fastNLP.DataSet` 进行分割,以供训练、开发和测试使用。
下面这段代码展示了 :meth:`~fastNLP.DataSet.split` 的使用方法(但实际应该放在后面两段改名和设置输入的代码之后)
.. code-block:: python
train_dev_data, test_data = dataset.split(0.1)
train_data, dev_data = train_dev_data.split(0.1)
len(train_data), len(dev_data), len(test_data)
---------------------
使用内置模型训练
---------------------
内置模型的输入输出命名
fastNLP内置了一些完整的神经网络模型详见 :doc:`/fastNLP.models` , 我们使用其中的 :class:`~fastNLP.models.CNNText` 模型进行训练。
为了使用内置的 :class:`~fastNLP.models.CNNText`,我们必须修改 :class:`~fastNLP.DataSet`:mod:`~fastNLP.core.field` 的名称。
在这个例子中模型输入 (forward方法的参数) 为 ``words````seq_len`` ; 预测输出为 ``pred`` ;标准答案为 ``target``
具体的命名规范可以参考 :doc:`/fastNLP.core.const`
如果不想查看文档,您也可以使用 :class:`~fastNLP.Const` 类进行命名。下面的代码展示了给 :class:`~fastNLP.DataSet`
:mod:`~fastNLP.core.field` 改名的 :meth:`~fastNLP.DataSet.rename_field` 方法,以及 :class:`~fastNLP.Const` 类的使用方法。
.. code-block:: python
from fastNLP import Const
dataset.rename_field('words', Const.INPUT)
dataset.rename_field('seq_len', Const.INPUT_LEN)
dataset.rename_field('target', Const.TARGET)
在给 :class:`~fastNLP.DataSet`:mod:`~fastNLP.core.field` 改名后,我们还需要设置训练所需的输入和目标,这里使用的是
:meth:`~fastNLP.DataSet.set_input`:meth:`~fastNLP.DataSet.set_target` 两个函数。
.. code-block:: python
dataset.set_input(Const.INPUT, Const.INPUT_LEN)
dataset.set_target(Const.TARGET)
快速训练
现在我们可以导入 fastNLP 内置的文本分类模型 :class:`~fastNLP.models.CNNText` ,并使用 :class:`~fastNLP.Trainer` 进行训练了
(其中 ``loss````metrics`` 的定义,我们将在后续两段代码中给出)。
.. code-block:: python
from fastNLP.models import CNNText
from fastNLP import Trainer
model = CNNText((len(vocab),50), num_classes=5, padding=2, dropout=0.1)
trainer = Trainer(model=model_cnn, train_data=train_data, dev_data=dev_data,
loss=loss, metrics=metrics)
trainer.train()
训练过程的输出如下::
input fields after batch(if batch size is 2):
words: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 26])
target fields after batch(if batch size is 2):
target: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2])
training epochs started 2019-05-09-10-59-39
Evaluation at Epoch 1/10. Step:2/20. AccuracyMetric: acc=0.333333
Evaluation at Epoch 2/10. Step:4/20. AccuracyMetric: acc=0.533333
Evaluation at Epoch 3/10. Step:6/20. AccuracyMetric: acc=0.533333
Evaluation at Epoch 4/10. Step:8/20. AccuracyMetric: acc=0.533333
Evaluation at Epoch 5/10. Step:10/20. AccuracyMetric: acc=0.6
Evaluation at Epoch 6/10. Step:12/20. AccuracyMetric: acc=0.8
Evaluation at Epoch 7/10. Step:14/20. AccuracyMetric: acc=0.8
Evaluation at Epoch 8/10. Step:16/20. AccuracyMetric: acc=0.733333
Evaluation at Epoch 9/10. Step:18/20. AccuracyMetric: acc=0.733333
Evaluation at Epoch 10/10. Step:20/20. AccuracyMetric: acc=0.733333
In Epoch:6/Step:12, got best dev performance:AccuracyMetric: acc=0.8
Reloaded the best model.
损失函数
训练模型需要提供一个损失函数, 下面提供了一个在分类问题中常用的交叉熵损失。注意它的 **初始化参数**
``pred`` 参数对应的是模型的 forward 方法返回的 dict 中的一个 key 的名字。
``target`` 参数对应的是 :class:`~fastNLP.DataSet` 中作为标签的 :mod:`~fastNLP.core.field` 的名字。
这里我们用 :class:`~fastNLP.Const` 来辅助命名,如果你自己编写模型中 forward 方法的返回值或
数据集中 :mod:`~fastNLP.core.field` 的名字与本例不同, 你可以把 ``pred`` 参数和 ``target`` 参数设定符合自己代码的值。
.. code-block:: python
from fastNLP import CrossEntropyLoss
# loss = CrossEntropyLoss() 在本例中与下面这行代码等价
loss = CrossEntropyLoss(pred=Const.OUTPUT, target=Const.TARGET)
评价指标
训练模型需要提供一个评价指标。这里使用准确率做为评价指标。参数的 `命名规则` 跟上面类似。
``pred`` 参数对应的是模型的 forward 方法返回的 dict 中的一个 key 的名字。
``target`` 参数对应的是 :class:`~fastNLP.DataSet` 中作为标签的 :mod:`~fastNLP.core.field` 的名字。
.. code-block:: python
from fastNLP import AccuracyMetric
# metrics=AccuracyMetric() 在本例中与下面这行代码等价
metrics=AccuracyMetric(pred=Const.OUTPUT, target=Const.TARGET)
快速测试
:class:`~fastNLP.Trainer` 对应fastNLP 也提供了 :class:`~fastNLP.Tester` 用于快速测试,用法如下
.. code-block:: python
from fastNLP import Tester
tester = Tester(test_data, model_cnn, metrics=AccuracyMetric())
tester.test()
---------------------
编写自己的模型
---------------------
因为 fastNLP 是基于 `PyTorch <https://pytorch.org/>`_ 开发的框架,所以我们可以基于 PyTorch 模型编写自己的神经网络模型。
与标准的 PyTorch 模型不同fastNLP 模型中 forward 方法返回的是一个字典,字典中至少需要包含 "pred" 这个字段。
而 forward 方法的参数名称必须与 :class:`~fastNLP.DataSet` 中用 :meth:`~fastNLP.DataSet.set_input` 设定的名称一致。
模型定义的代码如下:
.. code-block:: python
import torch
import torch.nn as nn
class LSTMText(nn.Module):
def __init__(self, vocab_size, embedding_dim, output_dim, hidden_dim=64, num_layers=2, dropout=0.5):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=num_layers, bidirectional=True, dropout=dropout)
self.fc = nn.Linear(hidden_dim * 2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, words):
# (input) words : (batch_size, seq_len)
words = words.permute(1,0)
# words : (seq_len, batch_size)
embedded = self.dropout(self.embedding(words))
# embedded : (seq_len, batch_size, embedding_dim)
output, (hidden, cell) = self.lstm(embedded)
# output: (seq_len, batch_size, hidden_dim * 2)
# hidden: (num_layers * 2, batch_size, hidden_dim)
# cell: (num_layers * 2, batch_size, hidden_dim)
hidden = torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)
hidden = self.dropout(hidden)
# hidden: (batch_size, hidden_dim * 2)
pred = self.fc(hidden.squeeze(0))
# result: (batch_size, output_dim)
return {"pred":pred}
模型的使用方法与内置模型 :class:`~fastNLP.models.CNNText` 一致
.. code-block:: python
model_lstm = LSTMText(len(vocab),50,5)
trainer = Trainer(model=model_lstm, train_data=train_data, dev_data=dev_data,
loss=loss, metrics=metrics)
trainer.train()
tester = Tester(test_data, model_lstm, metrics=AccuracyMetric())
tester.test()
.. todo::
使用 :doc:`/fastNLP.modules` 编写模型
--------------------------
自己编写训练过程
--------------------------
如果你想用类似 PyTorch 的使用方法,自己编写训练过程,你可以参考下面这段代码。其中使用了 fastNLP 提供的 :class:`~fastNLP.Batch`
来获得小批量训练的小批量数据,使用 :class:`~fastNLP.BucketSampler` 做为 :class:`~fastNLP.Batch` 的参数来选择采样的方式。
这段代码中使用了 PyTorch 的 `torch.optim.Adam` 优化器 和 `torch.nn.CrossEntropyLoss` 损失函数,并自己计算了正确率
.. code-block:: python
from fastNLP import BucketSampler
from fastNLP import Batch
import torch
import time
model = CNNText((len(vocab),50), num_classes=5, padding=2, dropout=0.1)
def train(epoch, data):
optim = torch.optim.Adam(model.parameters(), lr=0.001)
lossfunc = torch.nn.CrossEntropyLoss()
batch_size = 32
train_sampler = BucketSampler(batch_size=batch_size, seq_len_field_name='seq_len')
train_batch = Batch(batch_size=batch_size, dataset=data, sampler=train_sampler)
start_time = time.time()
for i in range(epoch):
loss_list = []
for batch_x, batch_y in train_batch:
optim.zero_grad()
output = model(batch_x['words'])
loss = lossfunc(output['pred'], batch_y['target'])
loss.backward()
optim.step()
loss_list.append(loss.item())
print('Epoch {:d} Avg Loss: {:.2f}'.format(i, sum(loss_list) / len(loss_list)),end=" ")
print('{:d}ms'.format(round((time.time()-start_time)*1000)))
loss_list.clear()
train(10, train_data)
tester = Tester(test_data, model, metrics=AccuracyMetric())
tester.test()
这段代码的输出如下::
Epoch 0 Avg Loss: 2.76 17ms
Epoch 1 Avg Loss: 2.55 29ms
Epoch 2 Avg Loss: 2.37 41ms
Epoch 3 Avg Loss: 2.30 53ms
Epoch 4 Avg Loss: 2.12 65ms
Epoch 5 Avg Loss: 2.16 76ms
Epoch 6 Avg Loss: 1.88 88ms
Epoch 7 Avg Loss: 1.84 99ms
Epoch 8 Avg Loss: 1.71 111ms
Epoch 9 Avg Loss: 1.62 122ms
[tester]
AccuracyMetric: acc=0.142857
----------------------------------
使用 Callback 增强 Trainer
----------------------------------
如果你不想自己实现繁琐的训练过程,只希望在训练过程中实现一些自己的功能(比如:输出从训练开始到当前 batch 结束的总时间),
你可以使用 fastNLP 提供的 :class:`~fastNLP.Callback` 类。下面的例子中,我们继承 :class:`~fastNLP.Callback` 类实现了这个功能。
.. code-block:: python
from fastNLP import Callback
start_time = time.time()
class MyCallback(Callback):
def on_epoch_end(self):
print('Sum Time: {:d}ms\n\n'.format(round((time.time()-start_time)*1000)))
model = CNNText((len(vocab),50), num_classes=5, padding=2, dropout=0.1)
trainer = Trainer(model=model, train_data=train_data, dev_data=dev_data,
loss=CrossEntropyLoss(), metrics=AccuracyMetric(), callbacks=[MyCallback()])
trainer.train()
训练输出如下::
input fields after batch(if batch size is 2):
words: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 16])
seq_len: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2])
target fields after batch(if batch size is 2):
target: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2])
training epochs started 2019-05-12-21-38-40
Evaluation at Epoch 1/10. Step:2/20. AccuracyMetric: acc=0.285714
Sum Time: 51ms
…………………………
Evaluation at Epoch 10/10. Step:20/20. AccuracyMetric: acc=0.857143
Sum Time: 212ms
In Epoch:10/Step:20, got best dev performance:AccuracyMetric: acc=0.857143
Reloaded the best model.
这个例子只是介绍了 :class:`~fastNLP.Callback` 类的使用方法。实际应用比如负采样、Learning Rate Decay、Early Stop 等)中
很多功能已经被 fastNLP 实现了。你可以直接 import 它们使用,详细请查看文档 :doc:`/fastNLP.core.callback`

View File

@ -0,0 +1,20 @@
========================
fastNLP 详细使用教程
========================
这里是更详细的使用教程。对于大部分的用户,我们建议你从第一篇开始顺序阅读;如果你只想了解其中的一部分,也可以进行选读。
.. toctree::
:maxdepth: 1
使用DataSet预处理文本 </tutorials/tutorial_1_data_preprocess>
使用DataSetLoader加载数据集 </tutorials/tutorial_2_load_dataset>
使用Embedding模块将文本转成向量 </tutorials/tutorial_3_embedding>
动手实现一个文本分类器I-使用Trainer和Tester快速训练和测试 </tutorials/tutorial_4_loss_optimizer>
动手实现一个文本分类器II-使用DataSetIter实现自定义训练过程 </tutorials/tutorial_5_datasetiter>
快速实现序列标注模型 </tutorials/tutorial_6_seq_labeling>
使用Modules和Models快速搭建自定义模型 </tutorials/tutorial_7_modules_models>
使用Metric快速评测你的模型 </tutorials/tutorial_8_metrics>
使用Callback自定义你的训练过程 </tutorials/tutorial_9_callback>
使用fitlog 辅助 fastNLP 进行科研 </tutorials/tutorial_10_fitlog>

View File

@ -1,18 +1,23 @@
""" """
fastNLP :mod:`~fastNLP.core` :mod:`~fastNLP.io` :mod:`~fastNLP.modules`:mod:`~fastNLP.models` fastNLP :mod:`~fastNLP.core` :mod:`~fastNLP.io` :mod:`~fastNLP.embeddings` :mod:`~fastNLP.modules`
等子模块组成你可以点进去查看每个模块的文档 :mod:`~fastNLP.models` 等子模块组成你可以查看每个模块的文档
- :mod:`~fastNLP.core` 是fastNLP 的核心模块包括 DataSet Trainer Tester 等组件详见文档 :doc:`/fastNLP.core` - :mod:`~fastNLP.core` 是fastNLP 的核心模块包括 DataSet Trainer Tester 等组件详见文档 :doc:`/fastNLP.core`
- :mod:`~fastNLP.io` 是实现输入输出的模块包括了数据集的读取模型的存取等功能详见文档 :doc:`/fastNLP.io` - :mod:`~fastNLP.io` 是实现输入输出的模块包括了数据集的读取模型的存取等功能详见文档 :doc:`/fastNLP.io`
- :mod:`~fastNLP.embeddings` 提供用于构建复杂网络模型所需的各种embedding详见文档 :doc:`/fastNLP.embeddings`
- :mod:`~fastNLP.modules` 包含了用于搭建神经网络模型的诸多组件可以帮助用户快速搭建自己所需的网络详见文档 :doc:`/fastNLP.modules` - :mod:`~fastNLP.modules` 包含了用于搭建神经网络模型的诸多组件可以帮助用户快速搭建自己所需的网络详见文档 :doc:`/fastNLP.modules`
- :mod:`~fastNLP.models` 包含了一些使用 fastNLP 实现的完整网络模型包括CNNTextSeqLabeling等常见模型详见文档 :doc:`/fastNLP.models` - :mod:`~fastNLP.models` 包含了一些使用 fastNLP 实现的完整网络模型包括 :class:`~fastNLP.models.CNNText` :class:`~fastNLP.models.SeqLabeling` 等常见模型详见文档 :doc:`fastNLP.models`
fastNLP 中最常用的组件可以直接从 fastNLP 包中 import 他们的文档如下 fastNLP 中最常用的组件可以直接从 fastNLP 包中 import 他们的文档如下
""" """
__all__ = [ __all__ = [
"Instance", "Instance",
"FieldArray", "FieldArray",
"Batch",
"DataSetIter",
"BatchIter",
"TorchLoaderIter",
"Vocabulary", "Vocabulary",
"DataSet", "DataSet",
"Const", "Const",
@ -33,7 +38,7 @@ __all__ = [
"AccuracyMetric", "AccuracyMetric",
"SpanFPreRecMetric", "SpanFPreRecMetric",
"SQuADMetric", "ExtractiveQAMetric",
"Optimizer", "Optimizer",
"SGD", "SGD",
@ -52,8 +57,10 @@ __all__ = [
"cache_results" "cache_results"
] ]
__version__ = '0.4.0' __version__ = '0.4.5'
from .core import * from .core import *
from . import models from . import models
from . import modules from . import modules
from . import embeddings
from .io import data_loader

View File

@ -1,12 +1,12 @@
""" """
core 模块里实现了 fastNLP 的核心框架常用的功能都可以从 fastNLP 包中直接 import当然你也同样可以从 core 模块的子模块中 import core 模块里实现了 fastNLP 的核心框架常用的功能都可以从 fastNLP 包中直接 import当然你也同样可以从 core 模块的子模块中 import
例如 Batch 组件有两种 import 的方式:: 例如 :class:`~fastNLP.DataSetIter` 组件有两种 import 的方式::
# 直接从 fastNLP 中 import # 直接从 fastNLP 中 import
from fastNLP import Batch from fastNLP import DataSetIter
# 从 core 模块的子模块 batch 中 import # 从 core 模块的子模块 batch 中 import DataSetIter
from fastNLP.core.batch import Batch from fastNLP.core.batch import DataSetIter
对于常用的功能你只需要在 :doc:`fastNLP` 中查看即可如果想了解各个子模块的具体作用您可以在下面找到每个子模块的具体文档 对于常用的功能你只需要在 :doc:`fastNLP` 中查看即可如果想了解各个子模块的具体作用您可以在下面找到每个子模块的具体文档
@ -14,14 +14,14 @@ core 模块里实现了 fastNLP 的核心框架,常用的功能都可以从 fa
介绍core 的子模块的分工好像必要性不大 介绍core 的子模块的分工好像必要性不大
""" """
from .batch import Batch from .batch import DataSetIter, BatchIter, TorchLoaderIter
from .callback import Callback, GradientClipCallback, EarlyStopCallback, TensorboardCallback, LRScheduler, ControlC from .callback import Callback, GradientClipCallback, EarlyStopCallback, TensorboardCallback, LRScheduler, ControlC
from .const import Const from .const import Const
from .dataset import DataSet from .dataset import DataSet
from .field import FieldArray, Padder, AutoPadder, EngChar2DPadder from .field import FieldArray, Padder, AutoPadder, EngChar2DPadder
from .instance import Instance from .instance import Instance
from .losses import LossFunc, CrossEntropyLoss, L1Loss, BCELoss, NLLLoss, LossInForward from .losses import LossFunc, CrossEntropyLoss, L1Loss, BCELoss, NLLLoss, LossInForward
from .metrics import AccuracyMetric, SpanFPreRecMetric, SQuADMetric from .metrics import AccuracyMetric, SpanFPreRecMetric, ExtractiveQAMetric
from .optimizer import Optimizer, SGD, Adam from .optimizer import Optimizer, SGD, Adam
from .sampler import SequentialSampler, BucketSampler, RandomSampler, Sampler from .sampler import SequentialSampler, BucketSampler, RandomSampler, Sampler
from .tester import Tester from .tester import Tester

View File

@ -0,0 +1,88 @@
import threading
import torch
from torch.nn.parallel.parallel_apply import get_a_var
from torch.nn.parallel.scatter_gather import scatter_kwargs, gather
from torch.nn.parallel.replicate import replicate
def parallel_apply(modules, func_name, inputs, kwargs_tup=None, devices=None):
r"""Applies each `module` in :attr:`modules` in parallel on arguments
contained in :attr:`inputs` (positional) and :attr:`kwargs_tup` (keyword)
on each of :attr:`devices`.
:attr:`modules`, :attr:`inputs`, :attr:`kwargs_tup` (if given), and
:attr:`devices` (if given) should all have same length. Moreover, each
element of :attr:`inputs` can either be a single object as the only argument
to a module, or a collection of positional arguments.
"""
assert len(modules) == len(inputs)
if kwargs_tup is not None:
assert len(modules) == len(kwargs_tup)
else:
kwargs_tup = ({},) * len(modules)
if devices is not None:
assert len(modules) == len(devices)
else:
devices = [None] * len(modules)
lock = threading.Lock()
results = {}
grad_enabled = torch.is_grad_enabled()
def _worker(i, module, input, kwargs, device=None):
torch.set_grad_enabled(grad_enabled)
if device is None:
device = get_a_var(input).get_device()
try:
with torch.cuda.device(device):
# this also avoids accidental slicing of `input` if it is a Tensor
if not isinstance(input, (list, tuple)):
input = (input,)
output = getattr(module, func_name)(*input, **kwargs)
with lock:
results[i] = output
except Exception as e:
with lock:
results[i] = e
if len(modules) > 1:
threads = [threading.Thread(target=_worker,
args=(i, module, input, kwargs, device))
for i, (module, input, kwargs, device) in
enumerate(zip(modules, inputs, kwargs_tup, devices))]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
else:
_worker(0, modules[0], inputs[0], kwargs_tup[0], devices[0])
outputs = []
for i in range(len(inputs)):
output = results[i]
if isinstance(output, Exception):
raise output
outputs.append(output)
return outputs
def _data_parallel_wrapper(func_name, device_ids, output_device):
"""
这个函数是用于对需要多卡执行的函数的wrapper函数参考的nn.DataParallel的forward函数
:param str, func_name: 对network中的这个函数进行多卡运行
:param device_ids: nn.DataParallel中的device_ids
:param output_device: nn.DataParallel中的output_device
:return:
"""
def wrapper(network, *inputs, **kwargs):
inputs, kwargs = scatter_kwargs(inputs, kwargs, device_ids, dim=0)
if len(device_ids) == 1:
return getattr(network, func_name)(*inputs[0], **kwargs[0])
replicas = replicate(network, device_ids[:len(inputs)])
outputs = parallel_apply(replicas, func_name, inputs, kwargs, device_ids[:len(replicas)])
return gather(outputs, output_device)
return wrapper

View File

@ -1,19 +1,22 @@
""" """
batch 模块实现了 fastNLP 所需的 Batch batch 模块实现了 fastNLP 所需的 :class:`~fastNLP.core.batch.DataSetIter`
""" """
__all__ = [ __all__ = [
"Batch" "BatchIter",
"DataSetIter",
"TorchLoaderIter",
] ]
import atexit import atexit
from queue import Empty, Full
import numpy as np import numpy as np
import torch import torch
import torch.multiprocessing as mp import torch.utils.data
from numbers import Number
from .sampler import RandomSampler from .sampler import SequentialSampler
from .dataset import DataSet
_python_is_exit = False _python_is_exit = False
@ -26,160 +29,189 @@ def _set_python_is_exit():
atexit.register(_set_python_is_exit) atexit.register(_set_python_is_exit)
class Batch(object): class DataSetGetter:
""" def __init__(self, dataset: DataSet, as_numpy=False):
别名:class:`fastNLP.Batch` :class:`fastNLP.core.batch.Batch` self.dataset = dataset
self.inputs = {n: f for n, f in dataset.get_all_fields().items() if f.is_input}
self.targets = {n: f for n, f in dataset.get_all_fields().items() if f.is_target}
self.as_numpy = as_numpy
self.idx_list = list(range(len(dataset)))
Batch 用于从 `DataSet` 中按一定的顺序, 依次按 ``batch_size`` 的大小将数据取出 def __getitem__(self, idx: int):
# mapping idx to sampled idx
idx = self.idx_list[idx]
inputs = {n:f.get(idx) for n, f in self.inputs.items()}
targets = {n:f.get(idx) for n, f in self.targets.items()}
return idx, inputs, targets
def __len__(self):
return len(self.dataset)
def collate_fn(self, batch: list):
# TODO 支持在DataSet中定义collate_fn因为有时候可能需要不同的field之间融合比如BERT的场景
batch_x = {n:[] for n in self.inputs.keys()}
batch_y = {n:[] for n in self.targets.keys()}
indices = []
for idx, x, y in batch:
indices.append(idx)
for n, v in x.items():
batch_x[n].append(v)
for n, v in y.items():
batch_y[n].append(v)
def pad_batch(batch_dict, field_array):
for n, vlist in batch_dict.items():
f = field_array[n]
if f.padder is None:
batch_dict[n] = np.array(vlist)
else:
data = f.pad(vlist)
if not self.as_numpy:
try:
data, flag = _to_tensor(data, f.dtype)
except TypeError as e:
print(f"Field {n} cannot be converted to torch.tensor.")
raise e
batch_dict[n] = data
return batch_dict
return (indices,
pad_batch(batch_x, self.inputs),
pad_batch(batch_y, self.targets))
def set_idx_list(self, idx_list):
if len(idx_list) != len(self.idx_list):
raise ValueError
self.idx_list = idx_list
def __getattr__(self, item):
if hasattr(self.dataset, item):
return getattr(self.dataset, item)
else:
raise AttributeError("'DataSetGetter' object has no attribute '{}'".format(item))
class SamplerAdapter(torch.utils.data.Sampler):
def __init__(self, sampler, dataset):
self.sampler = sampler
self.dataset = dataset
def __iter__(self):
return iter(self.sampler(self.dataset))
class BatchIter:
def __init__(self):
self.dataiter = None
self.num_batches = None
self.cur_batch_indices = None
self.batch_size = None
def init_iter(self):
pass
@staticmethod
def get_num_batches(num_samples, batch_size, drop_last):
num_batches = num_samples // batch_size
if not drop_last and (num_samples % batch_size > 0):
num_batches += 1
return num_batches
def __iter__(self):
self.init_iter()
for indices, batch_x, batch_y in self.dataiter:
self.cur_batch_indices = indices
yield batch_x, batch_y
def get_batch_indices(self):
return self.cur_batch_indices
def __len__(self):
return self.num_batches
@property
def dataset(self):
return self.dataiter.dataset
class DataSetIter(BatchIter):
"""
别名:class:`fastNLP.DataSetIter` :class:`fastNLP.core.batch.DataSetIter`
DataSetIter 用于从 `DataSet` 中按一定的顺序, 依次按 ``batch_size`` 的大小将数据取出
组成 `x` `y`:: 组成 `x` `y`::
batch = Batch(data_set, batch_size=16, sampler=SequentialSampler()) batch = DataSetIter(data_set, batch_size=16, sampler=SequentialSampler())
num_batch = len(batch) num_batch = len(batch)
for batch_x, batch_y in batch: for batch_x, batch_y in batch:
# do stuff ... # do stuff ...
:param dataset: :class:`~fastNLP.DataSet` 对象, 数据集 :param dataset: :class:`~fastNLP.DataSet` 对象, 数据集
:param int batch_size: 取出的batch大小 :param int batch_size: 取出的batch大小
:param sampler: 规定使用的 :class:`~fastNLP.Sampler` 方式. 若为 ``None`` , 使用 :class:`~fastNLP.RandomSampler`. :param sampler: 规定使用的 :class:`~fastNLP.Sampler` 方式. 若为 ``None`` , 使用 :class:`~fastNLP.SequentialSampler`.
Default: ``None`` Default: ``None``
:param bool as_numpy: 若为 ``True`` , 输出batch为 numpy.array. 否则为 :class:`torch.Tensor`. :param bool as_numpy: 若为 ``True`` , 输出batch为 numpy.array. 否则为 :class:`torch.Tensor`.
Default: ``False``
:param bool prefetch: 若为 ``True`` 使用多进程预先取出下一batch.
Default: ``False`` Default: ``False``
:param int num_workers: 使用多少个进程来预处理数据
:param bool pin_memory: 是否将产生的tensor使用pin memory, 可能会加快速度
:param bool drop_last: 如果最后一个batch没有batch_size这么多sample就扔掉最后一个
:param timeout:
:param worker_init_fn: 在每个worker启动时调用该函数会传入一个值该值是worker的index
""" """
def __init__(self, dataset, batch_size=1, sampler=None, as_numpy=False,
def __init__(self, dataset, batch_size, sampler=None, as_numpy=False, prefetch=False): num_workers=0, pin_memory=False, drop_last=False,
self.dataset = dataset timeout=0, worker_init_fn=None):
super().__init__()
assert isinstance(dataset, DataSet)
sampler = SamplerAdapter(sampler=sampler or SequentialSampler(), dataset=dataset)
dataset = DataSetGetter(dataset, as_numpy)
collate_fn = dataset.collate_fn if hasattr(dataset, 'collate_fn') else None
self.dataiter = torch.utils.data.DataLoader(
dataset=dataset, batch_size=batch_size, sampler=sampler,
collate_fn=collate_fn, num_workers=num_workers,
pin_memory=pin_memory, drop_last=drop_last,
timeout=timeout, worker_init_fn=worker_init_fn)
self.num_batches = self.get_num_batches(len(dataset), batch_size, drop_last)
self.batch_size = batch_size self.batch_size = batch_size
if sampler is None:
sampler = RandomSampler()
self.sampler = sampler
self.as_numpy = as_numpy
self.idx_list = None
self.curidx = 0
self.num_batches = len(dataset) // batch_size + int(len(dataset) % batch_size != 0)
self.cur_batch_indices = None
self.prefetch = prefetch
self.lengths = 0
def fetch_one(self):
if self.curidx >= len(self.idx_list):
return None
else:
endidx = min(self.curidx + self.batch_size, len(self.idx_list))
batch_x, batch_y = {}, {}
indices = self.idx_list[self.curidx:endidx]
self.cur_batch_indices = indices
for field_name, field in self.dataset.get_all_fields().items():
if field.is_target or field.is_input:
batch = field.get(indices)
if not self.as_numpy and field.padder is not None:
batch = _to_tensor(batch, field.dtype)
if field.is_target:
batch_y[field_name] = batch
if field.is_input:
batch_x[field_name] = batch
self.curidx = endidx
return batch_x, batch_y
def __iter__(self):
"""
Iterate on dataset, fetch batch data. Fetch process don't block the iterate process
:return:
"""
if self.prefetch:
return self._run_batch_iter(self)
def batch_iter():
self.init_iter()
while 1:
res = self.fetch_one()
if res is None:
break
yield res
return batch_iter()
def init_iter(self):
self.idx_list = self.sampler(self.dataset)
self.curidx = 0
self.lengths = self.dataset.get_length()
def __len__(self):
return self.num_batches
def get_batch_indices(self):
"""
取得当前batch在DataSet中所在的index下标序列
:return list(int) indexes: 下标序列
"""
return self.cur_batch_indices
@staticmethod
def _run_fetch(batch, q):
try:
global _python_is_exit
batch.init_iter()
# print('start fetch')
while 1:
res = batch.fetch_one()
# print('fetch one')
while 1:
try:
q.put(res, timeout=3)
break
except Full:
if _python_is_exit:
return
if res is None:
# print('fetch done, waiting processing')
break
# print('fetch exit')
except Exception as e:
q.put(e)
finally:
q.join()
@staticmethod
def _run_batch_iter(batch):
q = mp.JoinableQueue(maxsize=10)
fetch_p = mp.Process(target=Batch._run_fetch, args=(batch, q))
fetch_p.daemon = True
fetch_p.start()
# print('fork fetch process')
while 1:
try:
res = q.get(timeout=1)
q.task_done()
# print('get fetched')
if res is None:
break
elif isinstance(res, Exception):
raise res
yield res
except Empty as e:
if fetch_p.is_alive():
continue
else:
break
fetch_p.terminate()
fetch_p.join()
# print('iter done')
def _to_tensor(batch, dtype): class TorchLoaderIter(BatchIter):
def __init__(self, dataset):
super().__init__()
assert isinstance(dataset, torch.utils.data.DataLoader)
self.dataiter = dataset
self.num_batches = self.get_num_batches(len(dataset), dataset.batch_size, dataset.drop_last)
self.batch_size = dataset.batch_size
class OnlineDataGettter:
# TODO
pass
class OnlineDataIter(BatchIter):
# TODO
def __init__(self, dataset, batch_size=1, buffer_size=10000, sampler=None, as_numpy=False,
num_workers=0, pin_memory=False, drop_last=False,
timeout=0, worker_init_fn=None, **kwargs):
super().__init__()
def _to_tensor(batch, field_dtype):
try: try:
if dtype in (int, np.int8, np.int16, np.int32, np.int64): if field_dtype is not None and isinstance(field_dtype, type)\
batch = torch.LongTensor(batch) and issubclass(field_dtype, Number) \
if dtype in (float, np.float32, np.float64): and not isinstance(batch, torch.Tensor):
batch = torch.FloatTensor(batch) if issubclass(batch.dtype.type, np.floating):
except: new_batch = torch.as_tensor(batch).float() # 默认使用float32
pass elif issubclass(batch.dtype.type, np.integer):
return batch new_batch = torch.as_tensor(batch).long() # 复用内存地址,避免复制
else:
new_batch = torch.as_tensor(batch)
return new_batch, True
else:
return batch, False
except Exception as e:
raise e

View File

@ -2,11 +2,11 @@ r"""
callback模块实现了 fastNLP 中的许多 callback 用于增强 :class:`~fastNLP.Trainer` callback模块实现了 fastNLP 中的许多 callback 用于增强 :class:`~fastNLP.Trainer`
虽然Trainer本身已经集成了一些功能但仍然不足以囊括训练过程中可能需要到的功能 虽然Trainer本身已经集成了一些功能但仍然不足以囊括训练过程中可能需要到的功能
比如负采样learning rate decay, Early Stop等 比如负采样learning rate decay early stop等
为了解决这个问题fastNLP引入了callback的机制Callback 是一种在Trainer训练过程中特定阶段会运行的函数集合 为了解决这个问题fastNLP引入了callback的机制:class:`~fastNLP.Callback` 是一种在Trainer训练过程中特定阶段会运行的函数集合
关于Trainer的详细文档请参见 :doc:`trainer 模块<fastNLP.core.trainer>` 关于 :class:`~fastNLP.Trainer` 的详细文档请参见 :doc:`trainer 模块<fastNLP.core.trainer>`
我们将 :meth:`~fastNLP.Train.train` 这个函数内部分为以下的阶段在对应阶段会触发相应的调用:: 我们将 :meth:`~fastNLP.Trainer.train` 这个函数内部分为以下的阶段在对应阶段会触发相应的调用::
callback.on_train_begin() # 开始进行训练 callback.on_train_begin() # 开始进行训练
for i in range(1, n_epochs+1): for i in range(1, n_epochs+1):
@ -31,8 +31,8 @@ callback模块实现了 fastNLP 中的许多 callback 类,用于增强 :class:
callback.on_train_end() # 训练结束 callback.on_train_end() # 训练结束
callback.on_exception() # 这是一个特殊的步骤在训练过程中遭遇exception会跳转到这里。 callback.on_exception() # 这是一个特殊的步骤在训练过程中遭遇exception会跳转到这里。
如下面的例子所示我们可以使用内置的 callback 或者继承 :class:`~fastNLP.core.callback.Callback` 如下面的例子所示我们可以使用内置的 callback 组件或者继承 :class:`~fastNLP.core.callback.Callback`
定义自己的 callback :: 定义自己的 callback 组件::
from fastNLP import Callback, EarlyStopCallback, Trainer, CrossEntropyLoss, AccuracyMetric from fastNLP import Callback, EarlyStopCallback, Trainer, CrossEntropyLoss, AccuracyMetric
from fastNLP.models import CNNText from fastNLP.models import CNNText
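As a hedged illustration of the hook stages listed above (the full docstring example is elided by the diff context), a minimal custom callback only needs to override the `on_*` methods it cares about; everything else falls back to the no-op defaults in `Callback`:

```python
from fastNLP import Callback

class PrintProgressCallback(Callback):
    """A toy callback: report progress at the epoch boundaries described above."""

    def on_epoch_begin(self):
        # self.epoch / self.n_epochs are properties provided by Callback
        print(f"starting epoch {self.epoch}/{self.n_epochs}")

    def on_epoch_end(self):
        print(f"finished epoch {self.epoch}, {self.step} steps so far")

# trainer = Trainer(..., callbacks=[PrintProgressCallback()])  # passed in like the built-ins
```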
@ -66,6 +66,8 @@ import os
import torch import torch
from copy import deepcopy from copy import deepcopy
import sys
from .utils import _save_model
try: try:
from tensorboardX import SummaryWriter from tensorboardX import SummaryWriter
@ -113,7 +115,7 @@ class Callback(object):
@property @property
def n_steps(self): def n_steps(self):
"""Trainer一共会运行多少步""" """Trainer一共会采多少个batch。当Trainer中update_every设置为非1的值时该值不等于update的次数"""
return self._trainer.n_steps return self._trainer.n_steps
@property @property
@ -181,7 +183,7 @@ class Callback(object):
:param dict batch_x: DataSet中被设置为input的field的batch :param dict batch_x: DataSet中被设置为input的field的batch
:param dict batch_y: DataSet中被设置为target的field的batch :param dict batch_y: DataSet中被设置为target的field的batch
:param list(int) indices: 这次采样使用到的indices可以通过DataSet[indices]获取出这个batch采出的Instance在一些 :param list(int) indices: 这次采样使用到的indices可以通过DataSet[indices]获取出这个batch采出的Instance在一些
情况下可以帮助定位是哪个Sample导致了错误在Trainer的prefetch为False时可用 情况下可以帮助定位是哪个Sample导致了错误当num_workers=0时有效
:return: :return:
""" """
pass pass
@ -399,10 +401,11 @@ class GradientClipCallback(Callback):
self.clip_value = clip_value self.clip_value = clip_value
def on_backward_end(self): def on_backward_end(self):
if self.parameters is None: if self.step%self.update_every==0:
self.clip_fun(self.model.parameters(), self.clip_value) if self.parameters is None:
else: self.clip_fun(self.model.parameters(), self.clip_value)
self.clip_fun(self.parameters, self.clip_value) else:
self.clip_fun(self.parameters, self.clip_value)
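The change above makes the clip fire only on steps where an optimizer update actually happens (`step % update_every == 0`), which matters when gradients are accumulated over several batches. A plain-PyTorch sketch of the same idea, with hypothetical model and data:

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
update_every = 4  # accumulate gradients over 4 batches before each optimizer step

for step in range(1, 17):
    x, y = torch.randn(8, 4), torch.randn(8, 2)
    loss = torch.nn.functional.mse_loss(model(x), y) / update_every
    loss.backward()
    if step % update_every == 0:
        # clip once per real update, after all partial gradients are accumulated
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
        optimizer.step()
        optimizer.zero_grad()
```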
class EarlyStopCallback(Callback): class EarlyStopCallback(Callback):
@ -445,10 +448,10 @@ class FitlogCallback(Callback):
并将验证结果写入到fitlog中这些数据集的结果是根据dev上最好的结果报道的即如果dev在第3个epoch取得了最佳 并将验证结果写入到fitlog中这些数据集的结果是根据dev上最好的结果报道的即如果dev在第3个epoch取得了最佳
fitlog中记录的关于这些数据集的结果就是来自第三个epoch的结果 fitlog中记录的关于这些数据集的结果就是来自第三个epoch的结果
:param DataSet,dict(DataSet) data: 传入DataSet对象会使用多个Trainer中的metric对数据进行验证如果需要传入多个 :param ~fastNLP.DataSet,Dict[~fastNLP.DataSet] data: 传入DataSet对象会使用多个Trainer中的metric对数据进行验证如果需要传入多个
DataSet请通过dict的方式传入dict的key将作为对应dataset的name传递给fitlog若tester不为None时data需要通过 DataSet请通过dict的方式传入dict的key将作为对应dataset的name传递给fitlog若tester不为None时data需要通过
dict的方式传入如果仅传入DataSet, 则被命名为test dict的方式传入如果仅传入DataSet, 则被命名为test
:param Tester tester: Tester对象将在on_valid_end时调用tester中的DataSet会被称为为`test` :param ~fastNLP.Tester tester: Tester对象将在on_valid_end时调用tester中的DataSet会被称为为`test`
:param int log_loss_every: 多少个step记录一次loss(记录的是这几个batch的loss平均值)如果数据集较大建议将该值设置得 :param int log_loss_every: 多少个step记录一次loss(记录的是这几个batch的loss平均值)如果数据集较大建议将该值设置得
大一些不然会导致log文件巨大默认为0, 即不要记录loss 大一些不然会导致log文件巨大默认为0, 即不要记录loss
:param int verbose: 是否在终端打印evaluation的结果0不打印 :param int verbose: 是否在终端打印evaluation的结果0不打印
@ -548,7 +551,7 @@ class LRScheduler(Callback):
else: else:
raise ValueError(f"Expect torch.optim.lr_scheduler for LRScheduler. Got {type(lr_scheduler)}.") raise ValueError(f"Expect torch.optim.lr_scheduler for LRScheduler. Got {type(lr_scheduler)}.")
def on_epoch_begin(self): def on_epoch_end(self):
self.scheduler.step(self.epoch) self.scheduler.step(self.epoch)
@ -671,7 +674,7 @@ class TensorboardCallback(Callback):
.. warning:: .. warning::
fastNLP 已停止对此功能的维护请等待 fastNLP 兼容 PyTorch1.1 的下一个版本 fastNLP 已停止对此功能的维护请等待 fastNLP 兼容 PyTorch1.1 的下一个版本
或者使用和 fastNLP 高度配合的 fitlog参见 :doc:`/user/with_fitlog` 或者使用和 fastNLP 高度配合的 fitlog参见 :doc:`/tutorials/tutorial_10_fitlog`
""" """
@ -736,6 +739,132 @@ class TensorboardCallback(Callback):
del self._summary_writer del self._summary_writer
class WarmupCallback(Callback):
"""
按给定的schedule调整learning rate的大小用于实现learning rate warmup
:param int,float warmup: 如果warmup为int则在该step之前learning rate根据schedule的策略变化; 如果warmup为float
如0.1, 则前10%的step是按照schedule策略调整learning rate
    :param str schedule: 以哪种方式调整learning rate。linear: 前warmup的step逐渐上升到指定的learning rate(从Trainer中的optimizer处获取),
        之后的step线性下降到0; constant: 前warmup的step上升到指定learning rate之后的step保持learning rate不变.
"""
def __init__(self, warmup=0.1, schedule='constant'):
super().__init__()
self.warmup = max(warmup, 0.)
self.initial_lrs = [] # 存放param_group的learning rate
if schedule == 'constant':
self.get_lr = self._get_constant_lr
elif schedule == 'linear':
self.get_lr = self._get_linear_lr
else:
raise RuntimeError("Only support 'linear', 'constant'.")
def _get_constant_lr(self, progress):
if progress<self.warmup:
return progress/self.warmup
return 1
def _get_linear_lr(self, progress):
if progress<self.warmup:
return progress/self.warmup
return max((progress - 1.) / (self.warmup - 1.), 0.)
def on_train_begin(self):
self.t_steps = (len(self.trainer.train_data) // (self.batch_size*self.update_every) +
int(len(self.trainer.train_data) % (self.batch_size*self.update_every)!= 0)) * self.n_epochs
if self.warmup>1:
self.warmup = self.warmup/self.t_steps
self.t_steps = max(2, self.t_steps) # 不能小于2
# 获取param_group的初始learning rate
for group in self.optimizer.param_groups:
self.initial_lrs.append(group['lr'])
def on_backward_end(self):
if self.step%self.update_every==0:
progress = (self.step/self.update_every)/self.t_steps
for lr, group in zip(self.initial_lrs, self.optimizer.param_groups):
group['lr'] = lr * self.get_lr(progress)
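Both schedules reduce to a multiplier on each parameter group's initial learning rate as a function of training progress (`progress = step / t_steps`). A quick standalone check of the two multiplier curves, mirroring `_get_constant_lr` / `_get_linear_lr` with `warmup=0.1`:

```python
def constant_lr(progress, warmup=0.1):
    return progress / warmup if progress < warmup else 1.0

def linear_lr(progress, warmup=0.1):
    if progress < warmup:
        return progress / warmup
    return max((progress - 1.0) / (warmup - 1.0), 0.0)

for p in (0.0, 0.05, 0.1, 0.5, 1.0):
    print(f"progress={p:.2f}  constant={constant_lr(p):.2f}  linear={linear_lr(p):.2f}")
# progress=0.05 -> 0.50 / 0.50; progress=0.10 -> 1.00 / 1.00;
# progress=0.50 -> 1.00 / 0.56; progress=1.00 -> 1.00 / 0.00
```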
class SaveModelCallback(Callback):
"""
由于Trainer在训练过程中只会保存最佳的模型 该callback可实现多种方式的结果存储
会根据训练开始的时间戳在save_dir下建立文件夹再在文件夹下存放多个模型
-save_dir
-2019-07-03-15-06-36
-epoch:0_step:20_{metric_key}:{evaluate_performance}.pt # metric是给定的metric_key, evaluate_performance是性能
-epoch:1_step:40_{metric_key}:{evaluate_performance}.pt
-2019-07-03-15-10-00
-epoch:0_step:20_{metric_key}:{evaluate_performance}.pt # metric是给定的metric_key, evaluate_performance是性能
:param str save_dir: 将模型存放在哪个目录下会在该目录下创建以时间戳命名的目录并存放模型
:param int top: 保存dev表现top多少模型-1为保存所有模型
:param bool only_param: 是否只保存模型的权重
:param save_on_exception: 发生exception时是否保存一份发生exception的模型模型名称为epoch:x_step:x_Exception:{exception_name}.
"""
def __init__(self, save_dir, top=3, only_param=False, save_on_exception=False):
super().__init__()
if not os.path.isdir(save_dir):
raise NotADirectoryError("{} is not a directory.".format(save_dir))
self.save_dir = save_dir
if top < 0:
self.top = sys.maxsize
else:
self.top = top
self._ordered_save_models = [] # List[Tuple], Tuple[0]是metric Tuple[1]是path。metric是依次变好的所以从头删
self.only_param = only_param
self.save_on_exception = save_on_exception
def on_train_begin(self):
self.save_dir = os.path.join(self.save_dir, self.trainer.start_time)
def on_valid_end(self, eval_result, metric_key, optimizer, is_better_eval):
metric_value = list(eval_result.values())[0][metric_key]
self._save_this_model(metric_value)
def _insert_into_ordered_save_models(self, pair):
# pair:(metric_value, model_name)
# 返回save的模型pair与删除的模型pair. pair中第一个元素是metric的值第二个元素是模型的名称
index = -1
for _pair in self._ordered_save_models:
if _pair[0]>=pair[0] and self.trainer.increase_better:
break
if not self.trainer.increase_better and _pair[0]<=pair[0]:
break
index += 1
save_pair = None
if len(self._ordered_save_models)<self.top or (len(self._ordered_save_models)>=self.top and index!=-1):
save_pair = pair
self._ordered_save_models.insert(index+1, pair)
delete_pair = None
if len(self._ordered_save_models)>self.top:
delete_pair = self._ordered_save_models.pop(0)
return save_pair, delete_pair
def _save_this_model(self, metric_value):
name = "epoch:{}_step:{}_{}:{:.6f}.pt".format(self.epoch, self.step, self.trainer.metric_key, metric_value)
save_pair, delete_pair = self._insert_into_ordered_save_models((metric_value, name))
if save_pair:
try:
_save_model(self.model, model_name=name, save_dir=self.save_dir, only_param=self.only_param)
except Exception as e:
print(f"The following exception:{e} happens when save model to {self.save_dir}.")
if delete_pair:
try:
delete_model_path = os.path.join(self.save_dir, delete_pair[1])
if os.path.exists(delete_model_path):
os.remove(delete_model_path)
except Exception as e:
print(f"Fail to delete model {name} at {self.save_dir} caused by exception:{e}.")
def on_exception(self, exception):
if self.save_on_exception:
name = "epoch:{}_step:{}_Exception:{}.pt".format(self.epoch, self.step, exception.__class__.__name__)
_save_model(self.model, model_name=name, save_dir=self.save_dir, only_param=self.only_param)
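A hedged usage sketch for the callback above; only the `SaveModelCallback` signature comes from the code, the Trainer arguments are placeholders and the import path is an assumption:

```python
import os
# SaveModelCallback keeps the `top` best checkpoints according to the Trainer's metric_key;
# the import path below may differ across fastNLP versions
from fastNLP.core.callback import SaveModelCallback

save_dir = "checkpoints"
os.makedirs(save_dir, exist_ok=True)  # the callback requires an existing directory

callback = SaveModelCallback(save_dir=save_dir, top=3, only_param=True, save_on_exception=True)
# trainer = Trainer(train_data=..., model=..., dev_data=..., metrics=..., callbacks=[callback])
# trainer.train()
```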
class CallbackException(BaseException): class CallbackException(BaseException):
""" """
当需要通过callback跳出训练的时候可以通过抛出CallbackException并在on_exception中捕获这个值 当需要通过callback跳出训练的时候可以通过抛出CallbackException并在on_exception中捕获这个值

View File

@ -1,7 +1,7 @@
""" """
:class:`~fastNLP.core.dataset.DataSet` 是fastNLP中用于承载数据的容器可以将DataSet看做是一个表格 :class:`~fastNLP.core.dataset.DataSet` 是fastNLP中用于承载数据的容器可以将DataSet看做是一个表格
每一行是一个sample (在fastNLP中被称为 :mod:`~.instance` ) 每一行是一个sample (在fastNLP中被称为 :mod:`~fastNLP.core.instance` )
每一列是一个feature (在fastNLP中称为 :mod:`.field` ) 每一列是一个feature (在fastNLP中称为 :mod:`~fastNLP.core.field` )
.. csv-table:: Following is a demo layout of DataSet .. csv-table:: Following is a demo layout of DataSet
:header: "sentence", "words", "seq_len" :header: "sentence", "words", "seq_len"
@ -13,57 +13,64 @@
在fastNLP内部每一行是一个 :class:`~fastNLP.Instance` 对象 每一列是一个 :class:`~fastNLP.FieldArray` 对象 在fastNLP内部每一行是一个 :class:`~fastNLP.Instance` 对象 每一列是一个 :class:`~fastNLP.FieldArray` 对象
1 DataSet的创建 ----------------------------
创建DataSet主要有以下的3种方式 1.DataSet的创建
----------------------------
创建DataSet主要有以下的3种方式
1.1 传入dict 1.1 传入dict
----------------------------
Example:: .. code-block::
from fastNLP import DataSet from fastNLP import DataSet
data = {'sentence':["This is the first instance .", "Second instance .", "Third instance ."], data = {'sentence':["This is the first instance .", "Second instance .", "Third instance ."],
'words': [['this', 'is', 'the', 'first', 'instance', '.'], ['Second', 'instance', '.'], ['Third', 'instance', '.']], 'words': [['this', 'is', 'the', 'first', 'instance', '.'], ['Second', 'instance', '.'], ['Third', 'instance', '.']],
'seq_len': [6, 3, 3]} 'seq_len': [6, 3, 3]}
dataset = DataSet(data) dataset = DataSet(data)
# 传入的dict的每个key的value应该为具有相同长度的list # 传入的dict的每个key的value应该为具有相同长度的list
1.2 通过构建Instance 1.2 通过 Instance 构建
----------------------------
Example:: .. code-block::
from fastNLP import DataSet from fastNLP import DataSet
from fastNLP import Instance from fastNLP import Instance
dataset = DataSet() dataset = DataSet()
instance = Instance(sentence="This is the first instance", instance = Instance(sentence="This is the first instance",
words=['this', 'is', 'the', 'first', 'instance', '.'], words=['this', 'is', 'the', 'first', 'instance', '.'],
seq_len=6) seq_len=6)
dataset.append(instance) dataset.append(instance)
# 可以继续append更多内容但是append的instance应该和第一个instance拥有完全相同的field # 可以继续append更多内容但是append的instance应该和第一个instance拥有完全相同的field
1.3 通过list(Instance) 1.3 通过 List[Instance] 构建
--------------------------------------
Example:: .. code-block::
from fastNLP import DataSet from fastNLP import DataSet
from fastNLP import Instance from fastNLP import Instance
instances = [] instances = []
instances.append(Instance(sentence="This is the first instance", instances.append(Instance(sentence="This is the first instance",
words=['this', 'is', 'the', 'first', 'instance', '.'], words=['this', 'is', 'the', 'first', 'instance', '.'],
seq_len=6)) seq_len=6))
instances.append(Instance(sentence="Second instance .", instances.append(Instance(sentence="Second instance .",
words=['Second', 'instance', '.'], words=['Second', 'instance', '.'],
seq_len=3)) seq_len=3))
dataset = DataSet(instances) dataset = DataSet(instances)
--------------------------------------
2.DataSet与预处理
--------------------------------------
2 DataSet与预处理 常见的预处理有如下几种
常见的预处理有如下几种
2.1 从某个文本文件读取内容 # 2.1 从某个文本文件读取内容
--------------------------------------
.. todo:: .. code-block::
引用DataLoader
Example::
from fastNLP import DataSet from fastNLP import DataSet
from fastNLP import Instance from fastNLP import Instance
@ -78,21 +85,13 @@
sent, label = line.strip().split('\t') sent, label = line.strip().split('\t')
dataset.append(Instance(sentence=sent, label=label)) dataset.append(Instance(sentence=sent, label=label))
2.2 index, 返回结果为对DataSet对象的浅拷贝 .. note::
直接读取特定数据集的数据请参考 :doc:`/tutorials/tutorial_2_load_dataset`
Example:: 2.2 对DataSet中的内容处理
--------------------------------------
import numpy as np .. code-block::
from fastNLP import DataSet
dataset = DataSet({'a': np.arange(10), 'b': [[_] for _ in range(10)]})
d[0] # 使用一个下标获取一个instance
>>{'a': 0 type=int,'b': [2] type=list} # 得到一个instance
d[1:3] # 使用slice获取一个新的DataSet
>>DataSet({'a': 1 type=int, 'b': [2] type=list}, {'a': 2 type=int, 'b': [2] type=list})
2.3 对DataSet中的内容处理
Example::
from fastNLP import DataSet from fastNLP import DataSet
data = {'sentence':["This is the first instance .", "Second instance .", "Third instance ."]} data = {'sentence':["This is the first instance .", "Second instance .", "Third instance ."]}
@ -108,9 +107,10 @@
return words return words
dataset.apply(get_words, new_field_name='words') dataset.apply(get_words, new_field_name='words')
2.4 删除DataSet的内容 2.3 删除DataSet的内容
--------------------------------------
Example:: .. code-block::
from fastNLP import DataSet from fastNLP import DataSet
dataset = DataSet({'a': list(range(-5, 5))}) dataset = DataSet({'a': list(range(-5, 5))})
@ -124,16 +124,18 @@
dataset.delete_field('a') dataset.delete_field('a')
2.5 遍历DataSet的内容 2.4 遍历DataSet的内容
--------------------------------------
Example:: .. code-block::
for instance in dataset: for instance in dataset:
# do something # do something
2.6 一些其它操作 2.5 一些其它操作
--------------------------------------
Example:: .. code-block::
# 检查是否存在名为'a'的field # 检查是否存在名为'a'的field
dataset.has_field('a') # 或 ('a' in dataset) dataset.has_field('a') # 或 ('a' in dataset)
@ -141,21 +143,25 @@
dataset.rename_field('a', 'b') dataset.rename_field('a', 'b')
# DataSet的长度 # DataSet的长度
len(dataset) len(dataset)
--------------------------------------
3.DataSet与自然语言处理(NLP)
--------------------------------------
3 DataSet与自然语言处理(NLP) 在目前深度学习的模型中大都依赖于随机梯度下降法(SGD)进行模型的优化随机梯度下降需要将数据切分成一个个的 batch
在目前深度学习的模型中大都依赖于随机梯度下降法(SGD)进行模型的优化随机梯度下降需要将数据切分成一个一个的Batch 一个batch进行一次前向计算(forward)与梯度后向传播(backward)在自然语言处理的场景下往往还需要对数据进行pad这是
一个Batch进行一次前向计算(forward)与梯度后向传播(backward)在自然语言处理的场景下往往还需要对数据进行pad这是 由于句子的长度一般是不同的但是一次batch中的每个field都必须是一个tensor所以需要将所有句子都补齐到相同的长度
由于句子的长度一般是不同的但是一次Batch中的每个field都必须是一个tensor所以需要将所有句子都补齐到相同的长度
3.1 DataSet与Batch 3.1 DataSet与DataSetIter
--------------------------------------
我们先看fastNLP中如何将数据分成一个一个的Batch的例子, 这里我们使用随机生成的数据来模拟一个二分类文本分类任务 我们先看fastNLP中如何将数据分成一个一个的batch的例子, 这里我们使用随机生成的数据来模拟一个二分类文本分类任务
words和characters是输入labels是文本类别 words和characters是输入labels是文本类别
Example:: .. code-block::
from fastNLP import DataSet from fastNLP import DataSet
from fastNLP import Batch from fastNLP import DataSetIter
from fastNLP import SequentialSampler from fastNLP import SequentialSampler
from fastNLP import EngChar2DPadder from fastNLP import EngChar2DPadder
@ -175,7 +181,7 @@
d.set_target('label') d.set_target('label')
d.set_input('words', 'chars') d.set_input('words', 'chars')
for batch_x, batch_y in Batch(d, sampler=SequentialSampler(), batch_size=2): for batch_x, batch_y in DataSetIter(d, sampler=SequentialSampler(), batch_size=2):
print("batch_x:", batch_x) print("batch_x:", batch_x)
print("batch_y:", batch_y) print("batch_y:", batch_y)
break break
@ -194,23 +200,26 @@
# [ 0, 0, 0, 0, 0]]])} # [ 0, 0, 0, 0, 0]]])}
# {'label': tensor([0, 0])} # {'label': tensor([0, 0])}
其中 :class:`~fastNLP.Batch` 是用于从DataSet中按照batch_size为大小取出batch的迭代器 其中 :class:`~fastNLP.DataSetIter` 是用于从DataSet中按照batch_size为大小取出batch的迭代器
:class:`~fastNLP.SequentialSampler` 用于指示 Batch 以怎样的 :class:`~fastNLP.SequentialSampler` 用于指示 :class:`~fastNLP.DataSetIter` 以怎样的
顺序从DataSet中取出instance以组成一个batch 顺序从DataSet中取出instance以组成一个batch
更详细的说明请参照 :class:`~fastNLP.Batch` :class:`~fastNLP.SequentialSampler` 文档 更详细的说明请参照 :class:`~fastNLP.DataSetIter` :class:`~fastNLP.SequentialSampler` 文档
通过DataSet.set_input('words', 'chars'), fastNLP将认为'words''chars'这两个field都是input并将它们都放入迭代器 通过 ``DataSet.set_input('words', 'chars')`` , fastNLP将认为 `words` `chars` 这两个field都是input并将它们都放入迭代器
生成的第一个dict中; DataSet.set_target('labels'), fastNLP将认为'labels'这个field是target并将其放入到迭代器的第 生成的第一个dict中; ``DataSet.set_target('labels')`` , fastNLP将认为 `labels` 这个field是target并将其放入到迭代器的第
二个dict中如上例中所打印结果分为input和target的原因是由于它们在被 :class:`~fastNLP.Trainer` 所使用时会有所差异 二个dict中如上例中所打印结果分为input和target的原因是由于它们在被 :class:`~fastNLP.Trainer` 所使用时会有所差异
详见 :class:`~fastNLP.Trainer` 详见 :class:`~fastNLP.Trainer`
当把某个field设置为'target'或者'input'的时候(两者不是互斥的可以同时设为input和target)fastNLP不仅仅只是将其放 当把某个field设置为 `target` 或者 `input` 的时候(两者不是互斥的可以同时设为两种)fastNLP不仅仅只是将其放
置到不同的dict中而还会对被设置为input或target的field进行类型检查类型检查的目的是为了看能否把该field转为 置到不同的dict中而还会对被设置为 `input` `target` field 进行类型检查类型检查的目的是为了看能否把该 field 转为
pytorch的torch.LongTensor或torch.FloatTensor类型(也可以在Batch中设置输出numpy类型参考 :class:`~fastNLP.Batch` )如上例所示 pytorch的 :class:`torch.LongTensor` :class:`torch.FloatTensor` 类型
fastNLP已将wordschars和label转为了Tensor类型如果field在每个instance都拥有相同的维度(不能超过两维)且最内层 (也可以在 :class:`~fastNLP.DataSetIter` 中设置输出numpy类型参考 :class:`~fastNLP.DataSetIter` )
的元素都为相同的type(int, float, np.int*, np.float*)则fastNLP默认将对该field进行pad也支持全为str的field作为
target和input这种情况下fastNLP默认不进行pad另外当某个field已经被设置为了target或者input后之后append的 如上例所示fastNLP已将 `words` `chars` `label` 转为了 :class:`Tensor` 类型
instance对应的field必须要和前面已有的内容一致否则会报错 如果 field 在每个 `instance` 都拥有相同的维度(不能超过两维)且最内层的元素都为相同的 type(int, float, np.int*, np.float*)
则fastNLP默认将对该 field 进行pad也支持全为str的field作为target和input这种情况下fastNLP默认不进行pad
另外当某个 field 已经被设置为了 target 或者 input 之后 `append`
`instance` 对应的 field 必须要和前面已有的内容一致否则会报错
可以查看field的dtype:: 可以查看field的dtype::
@ -229,6 +238,7 @@
错误:: 错误::
from fastNLP import DataSet from fastNLP import DataSet
d = DataSet({'data': [1, 'a']}) d = DataSet({'data': [1, 'a']})
d.set_input('data') d.set_input('data')
>> RuntimeError: Mixed data types in Field data: [<class 'str'>, <class 'int'>] >> RuntimeError: Mixed data types in Field data: [<class 'str'>, <class 'int'>]
@ -243,6 +253,7 @@
当某个field被设置为忽略type之后fastNLP将不对其进行pad 当某个field被设置为忽略type之后fastNLP将不对其进行pad
3.2 DataSet与pad 3.2 DataSet与pad
--------------------------------------
在fastNLP里pad是与一个field绑定的即不同的field可以使用不同的pad方式比如在英文任务中word需要的pad和 在fastNLP里pad是与一个field绑定的即不同的field可以使用不同的pad方式比如在英文任务中word需要的pad和
character的pad方式往往是不同的fastNLP是通过一个叫做 :class:`~fastNLP.Padder` 的子类来完成的 character的pad方式往往是不同的fastNLP是通过一个叫做 :class:`~fastNLP.Padder` 的子类来完成的
@ -252,7 +263,7 @@
如果 :class:`~fastNLP.AutoPadder` :class:`~fastNLP.EngChar2DPadder` 无法满足需求 如果 :class:`~fastNLP.AutoPadder` :class:`~fastNLP.EngChar2DPadder` 无法满足需求
也可以自己写一个 :class:`~fastNLP.Padder` 也可以自己写一个 :class:`~fastNLP.Padder`
Example:: .. code-block::
from fastNLP import DataSet from fastNLP import DataSet
from fastNLP import EngChar2DPadder from fastNLP import EngChar2DPadder
@ -285,7 +296,8 @@ from .field import AutoPadder
from .field import FieldArray from .field import FieldArray
from .instance import Instance from .instance import Instance
from .utils import _get_func_signature from .utils import _get_func_signature
from .field import AppendToTargetOrInputException
from .field import SetInputOrTargetException
class DataSet(object): class DataSet(object):
""" """
@ -416,13 +428,13 @@ class DataSet(object):
""" """
将一个instance对象append到DataSet后面 将一个instance对象append到DataSet后面
:param instance: :class:`~fastNLP.Instance` 类型若DataSet不为空则instance应该拥有和DataSet完全一样的field :param ~fastNLP.Instance instance: 若DataSet不为空则instance应该拥有和DataSet完全一样的field
""" """
if len(self.field_arrays) == 0: if len(self.field_arrays) == 0:
# DataSet has no field yet # DataSet has no field yet
for name, field in instance.fields.items(): for name, field in instance.fields.items():
field = field.tolist() if isinstance(field, np.ndarray) else field # field = field.tolist() if isinstance(field, np.ndarray) else field
self.field_arrays[name] = FieldArray(name, [field]) # 第一个样本必须用list包装起来 self.field_arrays[name] = FieldArray(name, [field]) # 第一个样本必须用list包装起来
else: else:
if len(self.field_arrays) != len(instance.fields): if len(self.field_arrays) != len(instance.fields):
@ -431,14 +443,18 @@ class DataSet(object):
.format(len(self.field_arrays), len(instance.fields))) .format(len(self.field_arrays), len(instance.fields)))
for name, field in instance.fields.items(): for name, field in instance.fields.items():
assert name in self.field_arrays assert name in self.field_arrays
self.field_arrays[name].append(field) try:
self.field_arrays[name].append(field)
except AppendToTargetOrInputException as e:
print(f"Cannot append to field:{name}.")
raise e
def add_fieldarray(self, field_name, fieldarray): def add_fieldarray(self, field_name, fieldarray):
""" """
将fieldarray添加到DataSet中. 将fieldarray添加到DataSet中.
:param str field_name: 新加入的field的名称 :param str field_name: 新加入的field的名称
:param fieldarray: :class:`~fastNLP.FieldArray` 类型需要加入DataSet的field的内容 :param ~fastNLP.core.FieldArray fieldarray: 需要加入DataSet的field的内容
:return: :return:
""" """
if not isinstance(fieldarray, FieldArray): if not isinstance(fieldarray, FieldArray):
@ -454,8 +470,7 @@ class DataSet(object):
:param str field_name: 新增的field的名称 :param str field_name: 新增的field的名称
:param list fields: 需要新增的field的内容 :param list fields: 需要新增的field的内容
:param None, padder: :class:`~fastNLP.Padder` 类型 :param None,~fastNLP.Padder padder: 如果为None,则不进行pad默认使用 :class:`~fastNLP.AutoPadder` 自动判断是否需要做pad
如果为None,则不进行pad默认使用 :class:`~fastNLP.AutoPadder` 自动判断是否需要做pad
:param bool is_input: 新加入的field是否是input :param bool is_input: 新加入的field是否是input
:param bool is_target: 新加入的field是否是target :param bool is_target: 新加入的field是否是target
:param bool ignore_type: 是否忽略对新加入的field的类型检查 :param bool ignore_type: 是否忽略对新加入的field的类型检查
@ -517,7 +532,7 @@ class DataSet(object):
""" """
返回一个dictkey为field_name, value为对应的 :class:`~fastNLP.FieldArray` 返回一个dictkey为field_name, value为对应的 :class:`~fastNLP.FieldArray`
:return: dict: 返回如上所述的字典 :return dict: 返回如上所述的字典
""" """
return self.field_arrays return self.field_arrays
@ -525,7 +540,7 @@ class DataSet(object):
""" """
返回一个list包含所有 field 的名字 返回一个list包含所有 field 的名字
:return: list: 返回如上所述的列表 :return list: 返回如上所述的列表
""" """
return sorted(self.field_arrays.keys()) return sorted(self.field_arrays.keys())
@ -549,6 +564,7 @@ class DataSet(object):
self.field_arrays[new_name].name = new_name self.field_arrays[new_name].name = new_name
else: else:
raise KeyError("DataSet has no field named {}.".format(old_name)) raise KeyError("DataSet has no field named {}.".format(old_name))
return self
def set_target(self, *field_names, flag=True): def set_target(self, *field_names, flag=True):
""" """
@ -565,7 +581,11 @@ class DataSet(object):
assert isinstance(flag, bool), "Only bool type supported." assert isinstance(flag, bool), "Only bool type supported."
for name in field_names: for name in field_names:
if name in self.field_arrays: if name in self.field_arrays:
self.field_arrays[name].is_target = flag try:
self.field_arrays[name].is_target = flag
except SetInputOrTargetException as e:
print(f"Cannot set field:{name} as target.")
raise e
else: else:
raise KeyError("{} is not a valid field name.".format(name)) raise KeyError("{} is not a valid field name.".format(name))
@ -581,7 +601,11 @@ class DataSet(object):
""" """
for name in field_names: for name in field_names:
if name in self.field_arrays: if name in self.field_arrays:
self.field_arrays[name].is_input = flag try:
self.field_arrays[name].is_input = flag
except SetInputOrTargetException as e:
print(f"Cannot set field:{name} as input, exception happens at the {e.index} value.")
raise e
else: else:
raise KeyError("{} is not a valid field name.".format(name)) raise KeyError("{} is not a valid field name.".format(name))
@ -610,7 +634,7 @@ class DataSet(object):
dataset.set_padder('chars', padder) # 则chars这个field会使用EngChar2DPadder进行pad操作 dataset.set_padder('chars', padder) # 则chars这个field会使用EngChar2DPadder进行pad操作
:param str field_name: 设置field的padding方式为padder :param str field_name: 设置field的padding方式为padder
:param None, Padder padder: 设置为None即删除padder, 即对该field不进行pad操作 :param None,~fastNLP.Padder padder: 设置为None即删除padder, 即对该field不进行pad操作
""" """
if field_name not in self.field_arrays: if field_name not in self.field_arrays:
raise KeyError("There is no field named {}.".format(field_name)) raise KeyError("There is no field named {}.".format(field_name))
@ -658,7 +682,7 @@ class DataSet(object):
2. is_target: bool, 如果为True则将名为 `new_field_name` 的field设置为target 2. is_target: bool, 如果为True则将名为 `new_field_name` 的field设置为target
3. ignore_type: bool, 如果为True则将名为 `new_field_name` 的field的ignore_type设置为true, 忽略其类型 3. ignore_type: bool, 如果为True则将名为 `new_field_name` 的field的ignore_type设置为true, 忽略其类型
:return: list(Any), 里面的元素为func的返回值所以list长度为DataSet的长度 :return List[Any]: 里面的元素为func的返回值所以list长度为DataSet的长度
""" """
assert len(self) != 0, "Null DataSet cannot use apply_field()." assert len(self) != 0, "Null DataSet cannot use apply_field()."
@ -685,7 +709,7 @@ class DataSet(object):
""" """
将results作为加入到新的field中field名称为new_field_name 将results作为加入到新的field中field名称为new_field_name
:param list(str) results: 一般是apply*()之后的结果 :param List[str] results: 一般是apply*()之后的结果
:param str new_field_name: 新加入的field的名称 :param str new_field_name: 新加入的field的名称
:param dict kwargs: 用户apply*()时传入的自定义参数 :param dict kwargs: 用户apply*()时传入的自定义参数
:return: :return:
@ -728,7 +752,7 @@ class DataSet(object):
3. ignore_type: bool, 如果为True则将 `new_field_name` 的field的ignore_type设置为true, 忽略其类型 3. ignore_type: bool, 如果为True则将 `new_field_name` 的field的ignore_type设置为true, 忽略其类型
:return: list(Any), 里面的元素为func的返回值所以list长度为DataSet的长度 :return List[Any]: 里面的元素为func的返回值所以list长度为DataSet的长度
""" """
assert len(self) != 0, "Null DataSet cannot use apply()." assert len(self) != 0, "Null DataSet cannot use apply()."
idx = -1 idx = -1
@ -748,7 +772,20 @@ class DataSet(object):
self._add_apply_field(results, new_field_name, kwargs) self._add_apply_field(results, new_field_name, kwargs)
return results return results
def add_seq_len(self, field_name:str, new_field_name='seq_len'):
"""
将使用len()直接对field_name中每个元素作用将其结果作为sequence length, 并放入new_field_name这个field
:param str field_name: 需要计算长度的field的名称
:param str new_field_name: 新的field名称默认为 seq_len
:return: self
"""
if self.has_field(field_name=field_name):
self.apply_field(len, field_name, new_field_name=new_field_name)
else:
raise KeyError(f"Field:{field_name} not found.")
return self
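For example (a small hedged sketch), `add_seq_len` just runs `len` over an existing field and stores the result under the new field name:

```python
from fastNLP import DataSet

dataset = DataSet({'words': [['this', 'is', 'fine', '.'], ['short', '.']]})
dataset.add_seq_len('words')                         # applies len() to every cell of 'words'
print(dataset[0]['seq_len'], dataset[1]['seq_len'])  # 4 2
```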
def drop(self, func, inplace=True): def drop(self, func, inplace=True):
""" """
func接受一个Instance返回bool值返回值为True时该Instance会被移除或者加入到返回的DataSet中 func接受一个Instance返回bool值返回值为True时该Instance会被移除或者加入到返回的DataSet中
@ -774,17 +811,19 @@ class DataSet(object):
else: else:
return DataSet() return DataSet()
def split(self, ratio): def split(self, ratio, shuffle=True):
""" """
将DataSet按照ratio的比例拆分返回两个DataSet 将DataSet按照ratio的比例拆分返回两个DataSet
:param float ratio: 0<ratio<1, 返回的第一个DataSet拥有 `ratio` 这么多数据第二个DataSet拥有 `(1-ratio)` 这么多数据 :param float ratio: 0<ratio<1, 返回的第一个DataSet拥有 `(1-ratio)` 这么多数据第二个DataSet拥有`ratio`这么多数据
:return: [DataSet, DataSet] :param bool shuffle: 在split前是否shuffle一下
:return: [ :class:`~fastNLP.读取后的DataSet` , :class:`~fastNLP.读取后的DataSet` ]
""" """
assert isinstance(ratio, float) assert isinstance(ratio, float)
assert 0 < ratio < 1 assert 0 < ratio < 1
all_indices = [_ for _ in range(len(self))] all_indices = [_ for _ in range(len(self))]
np.random.shuffle(all_indices) if shuffle:
np.random.shuffle(all_indices)
split = int(ratio * len(self)) split = int(ratio * len(self))
dev_indices = all_indices[:split] dev_indices = all_indices[:split]
train_indices = all_indices[split:] train_indices = all_indices[split:]
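With the new signature, a hedged usage sketch (per the updated docstring the first returned DataSet holds the `(1-ratio)` share):

```python
from fastNLP import DataSet

dataset = DataSet({'a': list(range(100))})
train_data, dev_data = dataset.split(0.1)                 # shuffled split by default
print(len(train_data), len(dev_data))                     # 90 10
train_data, dev_data = dataset.split(0.1, shuffle=False)  # keep the original order
```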
@ -802,7 +841,7 @@ class DataSet(object):
@classmethod @classmethod
def read_csv(cls, csv_path, headers=None, sep=",", dropna=True): def read_csv(cls, csv_path, headers=None, sep=",", dropna=True):
""" r"""
.. warning:: .. warning::
此方法会在下个版本移除请使用 :class:`fastNLP.io.CSVLoader` 此方法会在下个版本移除请使用 :class:`fastNLP.io.CSVLoader`
@ -813,7 +852,7 @@ class DataSet(object):
与csv文件中每行的元素个数相同 与csv文件中每行的元素个数相同
:param str sep: 分割符 :param str sep: 分割符
:param bool dropna: 是否忽略与header数量不一致行 :param bool dropna: 是否忽略与header数量不一致行
:return: 一个 :class:`~fastNLP.DataSet` 类型的对象 :return: 读取后的 :class:`~fastNLP.DataSet`
""" """
warnings.warn('DataSet.read_csv is deprecated, use CSVLoader instead', warnings.warn('DataSet.read_csv is deprecated, use CSVLoader instead',
category=DeprecationWarning) category=DeprecationWarning)
@ -853,11 +892,11 @@ class DataSet(object):
@staticmethod @staticmethod
def load(path): def load(path):
""" r"""
从保存的DataSet pickle文件的路径中读取DataSet 从保存的DataSet pickle文件的路径中读取DataSet
:param str path: 从哪里读取DataSet :param str path: 从哪里读取DataSet
:return: 一个 :class:`~fastNLP.DataSet` 类型的对象 :return: 读取后的 :class:`~fastNLP.DataSet`
""" """
with open(path, 'rb') as f: with open(path, 'rb') as f:
d = pickle.load(f) d = pickle.load(f)

View File

@ -1,251 +1,164 @@
"""
field模块实现了 FieldArray 和若干 Padder FieldArray :class:`~fastNLP.DataSet` 中一列的存储方式
原理部分请参考 :doc:`fastNLP.core.dataset`
"""
__all__ = [
"FieldArray",
"Padder",
"AutoPadder",
"EngChar2DPadder"
]
from copy import deepcopy
from numbers import Number
import torch
import numpy as np import numpy as np
from typing import Any
from abc import abstractmethod
from copy import deepcopy
from collections import Counter
class SetInputOrTargetException(Exception):
def __init__(self, msg, index=None, field_name=None):
super().__init__(msg)
self.msg = msg
self.index = index # 标示在哪个数据遭遇到问题了
self.field_name = field_name # 标示当前field的名称
class FieldArray(object): class AppendToTargetOrInputException(Exception):
""" def __init__(self, msg, index=None, field_name=None):
别名:class:`fastNLP.FieldArray` :class:`fastNLP.core.field.FieldArray` super().__init__(msg)
self.msg = msg
self.index = index # 标示在哪个数据遭遇到问题了
self.field_name = field_name # 标示当前field的名称
FieldArray 是用于保存 :class:`~fastNLP.DataSet` 中一个field的类型 class FieldArray:
def __init__(self, name, content, is_target=False, is_input=False, padder=None, ignore_type=False):
:param str name: FieldArray的名称 if len(content)==0:
:param list,numpy.ndarray content: 列表的元素可以为listintfloat raise RuntimeError("Empty fieldarray is not allowed.")
:param bool is_target: 这个field是否是一个target field _content = content
:param bool is_input: 这个field是否是一个input field try:
:param padder: :class:`~fastNLP.Padder` 类型赋值给fieldarray的padder的对象会被deepcopy一份需要修改padder参数必须通过 _content = list(_content)
fieldarray.set_pad_val()默认为None即使用 :class:`~fastNLP.AutoPadder` except BaseException as e:
:param bool ignore_type: 是否忽略该field的type一般如果这个field不需要转为torch.FloatTensor或torch.LongTensor, print(f"Cannot convert content(of type:{type(content)}) into list.")
就可以设置为True具体意义请参考 :class:`~fastNLP.DataSet` raise e
"""
def __init__(self, name, content, is_target=None, is_input=None, padder=None, ignore_type=False):
self.name = name self.name = name
if isinstance(content, list): self.content = _content
# 如果DataSet使用dict初始化, content 可能是二维list/二维array/三维list self._ignore_type = ignore_type
# 如果DataSet使用list of Instance 初始化, content可能是 [list]/[array]/[2D list] # 根据input的情况设置inputtarget等
for idx, item in enumerate(content): self._cell_ndim = None # 多少维度
# 这是使用list of Instance 初始化时第一个样本FieldArray(name, [field]) self.dtype = None # 最内层的element都是什么类型的
# 将[np.array] 转化为 list of list self._is_input = False
# 也可以支持[array, array, array]的情况 self._is_target = False
if isinstance(item, np.ndarray):
content[idx] = content[idx].tolist() if is_input:
elif isinstance(content, np.ndarray): self.is_input = is_input
content = content.tolist() # convert np.ndarray into 2-D list if is_target:
else: self.is_target = is_target
raise TypeError("content in FieldArray can only be list or numpy.ndarray, got {}.".format(type(content)))
if len(content) == 0:
raise RuntimeError("Cannot initialize FieldArray with empty list.")
self.content = content # 1维 或 2维 或 3维 list, 形状可能不对齐
self.content_dim = None # 表示content是多少维的list
if padder is None: if padder is None:
padder = AutoPadder(pad_val=0) padder = AutoPadder(pad_val=0)
else: else:
assert isinstance(padder, Padder), "padder must be of type Padder." assert isinstance(padder, Padder), "padder must be of type fastNLP.Padder."
padder = deepcopy(padder) padder = deepcopy(padder)
self.set_padder(padder) self.set_padder(padder)
self.ignore_type = ignore_type
@property
self.BASIC_TYPES = (int, float, str) # content中可接受的Python基本类型这里没有np.array def ignore_type(self):
return self._ignore_type
self.pytype = None
self.dtype = None @ignore_type.setter
self._is_input = None def ignore_type(self, value):
self._is_target = None if value:
self._cell_ndim = None
if is_input is not None or is_target is not None: self.dtype = None
self.is_input = is_input self._ignore_type = value
self.is_target = is_target
def _set_dtype(self):
if self.ignore_type is False:
self.pytype = self._type_detection(self.content)
self.dtype = self._map_to_np_type(self.pytype)
@property @property
def is_input(self): def is_input(self):
return self._is_input return self._is_input
@is_input.setter @is_input.setter
def is_input(self, value): def is_input(self, value):
""" """
field_array.is_input = True / False 时被调用 field_array.is_input = True / False 时被调用
""" """
if value is True: # 如果(value为True)且(_is_input和_is_target都是False)且(ignore_type为False)
self._set_dtype() if value is True and \
self._is_target is False and \
self._ignore_type is False:
self._check_dtype_and_ndim()
if value is False and self._is_target is False:
self.dtype = None
self._cell_ndim = None
self._is_input = value self._is_input = value
@property @property
def is_target(self): def is_target(self):
return self._is_target return self._is_target
@is_target.setter @is_target.setter
def is_target(self, value): def is_target(self, value):
""" """
field_array.is_target = True / False 时被调用 field_array.is_target = True / False 时被调用
""" """
if value is True: if value is True and \
self._set_dtype() self._is_input is False and \
self._ignore_type is False:
self._check_dtype_and_ndim()
if value is False and self._is_input is False:
self.dtype = None
self._cell_ndim = None
self._is_target = value self._is_target = value
def _type_detection(self, content):
"""
当该field被设置为is_input或者is_target时被调用
def _check_dtype_and_ndim(self):
""" """
if len(content) == 0: 检查当前content所有的element是否是同一个类型且是否每个元素具有相同的维度通过的话设置_cell_ndim与_ele_type属性没有
raise RuntimeError("Empty list in Field {}.".format(self.name)) 通过将直接报错.
type_set = set([type(item) for item in content])
if list in type_set:
if len(type_set) > 1:
# list 跟 非list 混在一起
raise RuntimeError("Mixed data types in Field {}: {}".format(self.name, list(type_set)))
# >1维list
inner_type_set = set()
for l in content:
[inner_type_set.add(type(obj)) for obj in l]
if list not in inner_type_set:
# 二维list
self.content_dim = 2
return self._basic_type_detection(inner_type_set)
else:
if len(inner_type_set) == 1:
# >2维list
inner_inner_type_set = set()
for _2d_list in content:
for _1d_list in _2d_list:
[inner_inner_type_set.add(type(obj)) for obj in _1d_list]
if list in inner_inner_type_set:
raise RuntimeError("FieldArray cannot handle 4-D or more-D list.")
# 3维list
self.content_dim = 3
return self._basic_type_detection(inner_inner_type_set)
else:
# list 跟 非list 混在一起
raise RuntimeError("Mixed data types in Field {}: {}".format(self.name, list(inner_type_set)))
else:
# 一维list
for content_type in type_set:
if content_type not in self.BASIC_TYPES:
raise RuntimeError("Unexpected data type in Field '{}'. Expect one of {}. Got {}.".format(
self.name, self.BASIC_TYPES, content_type))
self.content_dim = 1
return self._basic_type_detection(type_set)
def _basic_type_detection(self, type_set):
"""
:param type_set: a set of Python types
:return: one of self.BASIC_TYPES
"""
if len(type_set) == 1:
return type_set.pop()
elif len(type_set) == 2:
# 有多个basic type; 可能需要up-cast
if float in type_set and int in type_set:
# up-cast int to float
return float
else:
# str 跟 int 或者 float 混在一起
raise RuntimeError("Mixed data types in Field {}: {}".format(self.name, list(type_set)))
else:
# str, int, float混在一起
raise RuntimeError("Mixed data types in Field {}: {}".format(self.name, list(type_set)))
def _1d_list_check(self, val):
"""如果不是1D list就报错
"""
type_set = set((type(obj) for obj in val))
if any(obj not in self.BASIC_TYPES for obj in type_set):
raise ValueError("Mixed data types in Field {}: {}".format(self.name, list(type_set)))
self._basic_type_detection(type_set)
# otherwise: _basic_type_detection will raise error
return True
def _2d_list_check(self, val):
"""如果不是2D list 就报错
"""
type_set = set(type(obj) for obj in val)
if list(type_set) != [list]:
raise ValueError("Mixed data types in Field {}: {}".format(self.name, type_set))
inner_type_set = set()
for l in val:
for obj in l:
inner_type_set.add(type(obj))
self._basic_type_detection(inner_type_set)
return True
@staticmethod
def _map_to_np_type(basic_type):
type_mapping = {int: np.int64, float: np.float64, str: np.str, np.ndarray: np.ndarray}
return type_mapping[basic_type]
def __repr__(self):
return "FieldArray {}: {}".format(self.name, self.content.__repr__())
def append(self, val):
"""将val append到这个field的尾部。如果这个field已经被设置为input或者target则在append之前会检查该类型是否与已有
的内容是匹配的
:param Any val: 需要append的值 :return:
""" """
if self.ignore_type is False: cell_0 = self.content[0]
if isinstance(val, list): index = 0
pass try:
elif isinstance(val, tuple): # 确保最外层是list type_0, dim_0 = _get_ele_type_and_dim(cell_0)
val = list(val) for cell in self.content[1:]:
elif isinstance(val, np.ndarray): index += 1
val = val.tolist() type_i, dim_i = _get_ele_type_and_dim(cell)
elif any((isinstance(val, t) for t in self.BASIC_TYPES)): if type_i!=type_0:
pass raise SetInputOrTargetException("Type:{} in index {} is different from the first element with type:{}."
else: ".".format(type_i, index, type_0))
raise RuntimeError( if dim_0!=dim_i:
"Unexpected data type {}. Should be list, np.array, or {}".format(type(val), self.BASIC_TYPES)) raise SetInputOrTargetException("Dimension:{} in index {} is different from the first element with "
"dimension:{}.".format(dim_i, index, dim_0))
if self.is_input is True or self.is_target is True: self._cell_ndim = dim_0
if type(val) == list: self.dtype = type_0
if len(val) == 0: except SetInputOrTargetException as e:
raise ValueError("Cannot append an empty list.") e.index = index
if self.content_dim == 2 and self._1d_list_check(val): raise e
# 1维list检查
pass def append(self, val:Any):
elif self.content_dim == 3 and self._2d_list_check(val): """
# 2维list检查 :param val: 把该val append到fieldarray
pass :return:
else: """
raise RuntimeError( if (self._is_target or self._is_input) and self._ignore_type is False:
"Dimension not matched: expect dim={}, got {}.".format(self.content_dim - 1, val)) type_, dim_ = _get_ele_type_and_dim(val)
elif type(val) in self.BASIC_TYPES and self.content_dim == 1: if self.dtype!=type_:
# scalar检查 raise AppendToTargetOrInputException(f"Value(type:{type_}) are of different types with "
if type(val) == float and self.pytype == int: f"previous values(type:{self.dtype}).")
self.pytype = float if self._cell_ndim!=dim_:
self.dtype = self._map_to_np_type(self.pytype) raise AppendToTargetOrInputException(f"Value(dim:{dim_}) are of different dimensions with "
else: f"previous values(dim:{self._cell_ndim}).")
raise RuntimeError( self.content.append(val)
"Unexpected data type {}. Should be list, np.array, or {}".format(type(val), self.BASIC_TYPES)) else:
self.content.append(val) self.content.append(val)
def __getitem__(self, indices): def __getitem__(self, indices):
return self.get(indices, pad=False) return self.get(indices, pad=False)
def __setitem__(self, idx, val): def __setitem__(self, idx, val):
assert isinstance(idx, int) assert isinstance(idx, int)
if (self._is_target or self._is_input) and self.ignore_type is False: # 需要检测类型
type_, dim_ = _get_ele_type_and_dim(val)
if self.dtype!=type_:
raise RuntimeError(f"Value(type:{type_}) are of different types with "
f"other values(type:{self.dtype}).")
if self._cell_ndim!=dim_:
raise RuntimeError(f"Value(dim:{dim_}) are of different dimensions with "
f"previous values(dim:{self._cell_ndim}).")
self.content[idx] = val self.content[idx] = val
def get(self, indices, pad=True): def get(self, indices, pad=True):
""" """
根据给定的indices返回内容 根据给定的indices返回内容
@ -257,14 +170,17 @@ class FieldArray(object):
if isinstance(indices, int): if isinstance(indices, int):
return self.content[indices] return self.content[indices]
if self.is_input is False and self.is_target is False: if self.is_input is False and self.is_target is False:
raise RuntimeError("Please specify either is_input or is_target is True for {}".format(self.name)) raise RuntimeError("Please specify either is_input or is_target to True for {}".format(self.name))
contents = [self.content[i] for i in indices] contents = [self.content[i] for i in indices]
if self.padder is None or pad is False: if self.padder is None or pad is False:
return np.array(contents) return np.array(contents)
else: else:
return self.padder(contents, field_name=self.name, field_ele_dtype=self.dtype) return self.pad(contents)
def pad(self, contents):
return self.padder(contents, field_name=self.name, field_ele_dtype=self.dtype, dim=self._cell_ndim)
def set_padder(self, padder): def set_padder(self, padder):
""" """
设置padder在这个field进行pad的时候用这个padder进行pad如果为None则不进行pad 设置padder在这个field进行pad的时候用这个padder进行pad如果为None则不进行pad
@ -276,7 +192,7 @@ class FieldArray(object):
self.padder = deepcopy(padder) self.padder = deepcopy(padder)
else: else:
self.padder = None self.padder = None
def set_pad_val(self, pad_val): def set_pad_val(self, pad_val):
""" """
修改padder的pad_val. 修改padder的pad_val.
@ -286,7 +202,7 @@ class FieldArray(object):
if self.padder is not None: if self.padder is not None:
self.padder.set_pad_val(pad_val) self.padder.set_pad_val(pad_val)
return self return self
def __len__(self): def __len__(self):
""" """
Returns the size of FieldArray. Returns the size of FieldArray.
@ -294,7 +210,7 @@ class FieldArray(object):
:return int length: :return int length:
""" """
return len(self.content) return len(self.content)
def to(self, other): def to(self, other):
""" """
将other的属性复制给本FieldArray(other必须为FieldArray类型). 将other的属性复制给本FieldArray(other必须为FieldArray类型).
@ -303,22 +219,225 @@ class FieldArray(object):
:param other: :class:`~fastNLP.FieldArray` 从哪个field拷贝属性 :param other: :class:`~fastNLP.FieldArray` 从哪个field拷贝属性
:return: :class:`~fastNLP.FieldArray` :return: :class:`~fastNLP.FieldArray`
""" """
assert isinstance(other, FieldArray), "Only support FieldArray type, not {}.".format(type(other)) assert isinstance(other, FieldArray), "Only supports fastNLP.FieldArray type, not {}.".format(type(other))
self.ignore_type = other.ignore_type
self.is_input = other.is_input self.is_input = other.is_input
self.is_target = other.is_target self.is_target = other.is_target
self.padder = other.padder self.padder = other.padder
self.ignore_type = other.ignore_type
return self return self
def split(self, sep:str=None, inplace:bool=True):
"""
依次对自身的元素使用.split()方法应该只有当本field的元素为str时该方法才有用
def _is_iterable(content): :param sep: 分割符如果为None则直接调用str.split()
:param inplace: 如果为True则将新生成值替换本field否则返回list
:return: List[List[str]] or self
"""
new_contents = []
for index, cell in enumerate(self.content):
try:
new_contents.append(cell.split(sep))
except Exception as e:
print(f"Exception happens when process value in index {index}.")
raise e
return self._after_process(new_contents, inplace=inplace)
def int(self, inplace:bool=True):
"""
将本field中的值调用int(cell). 支持field中内容为以下两种情况(1)['1', '2', ...](即field中每个值为str的)
(2) [['1', '2', ..], ['3', ..], ...](即field中每个值为一个listlist中的值会被依次转换)
:param inplace: 如果为True则将新生成值替换本field否则返回list
:return: List[int], List[List[int]], self
"""
new_contents = []
for index, cell in enumerate(self.content):
try:
if isinstance(cell, list):
new_contents.append([int(value) for value in cell])
else:
new_contents.append(int(cell))
except Exception as e:
print(f"Exception happens when process value in index {index}.")
raise e
return self._after_process(new_contents, inplace=inplace)
def float(self, inplace=True):
"""
将本field中的值调用float(cell). 支持field中内容为以下两种情况(1)['1', '2', ...](即field中每个值为str的)
(2) [['1', '2', ..], ['3', ..], ...](即field中每个值为一个listlist中的值会被依次转换)
:param inplace: 如果为True则将新生成值替换本field否则返回list
:return:
"""
new_contents = []
for index, cell in enumerate(self.content):
try:
if isinstance(cell, list):
new_contents.append([float(value) for value in cell])
else:
new_contents.append(float(cell))
except Exception as e:
print(f"Exception happens when process value in index {index}.")
raise e
return self._after_process(new_contents, inplace=inplace)
def bool(self, inplace=True):
"""
将本field中的值调用bool(cell). 支持field中内容为以下两种情况(1)['1', '2', ...](即field中每个值为str的)
(2) [['1', '2', ..], ['3', ..], ...](即field中每个值为一个listlist中的值会被依次转换)
:param inplace: 如果为True则将新生成值替换本field否则返回list
:return:
"""
new_contents = []
for index, cell in enumerate(self.content):
try:
if isinstance(cell, list):
new_contents.append([bool(value) for value in cell])
else:
new_contents.append(bool(cell))
except Exception as e:
print(f"Exception happens when process value in index {index}.")
raise e
return self._after_process(new_contents, inplace=inplace)
def lower(self, inplace=True):
"""
将本field中的值调用cell.lower(). 支持field中内容为以下两种情况(1)['1', '2', ...](即field中每个值为str的)
(2) [['1', '2', ..], ['3', ..], ...](即field中每个值为一个listlist中的值会被依次转换)
:param inplace: 如果为True则将新生成值替换本field否则返回list
:return: List[str], List[List[str]], self
"""
new_contents = []
for index, cell in enumerate(self.content):
try:
if isinstance(cell, list):
new_contents.append([value.lower() for value in cell])
else:
new_contents.append(cell.lower())
except Exception as e:
print(f"Exception happens when process value in index {index}.")
raise e
return self._after_process(new_contents, inplace=inplace)
def upper(self, inplace=True):
"""
将本field中的值调用cell.upper(). 支持field中内容为以下两种情况(1)['1', '2', ...](即field中每个值为str的)
(2) [['1', '2', ..], ['3', ..], ...](即field中每个值为一个listlist中的值会被依次转换)
:param inplace: 如果为True则将新生成值替换本field否则返回list
:return: List[str], List[List[str]], self
"""
new_contents = []
for index, cell in enumerate(self.content):
try:
if isinstance(cell, list):
new_contents.append([value.upper() for value in cell])
else:
new_contents.append(cell.upper())
except Exception as e:
print(f"Exception happens when process value in index {index}.")
raise e
return self._after_process(new_contents, inplace=inplace)
def value_count(self):
"""
返回该field下不同value的数量多用于统计label数量
:return: Counter, key是labelvalue是出现次数
"""
count = Counter()
def cum(cell):
if _is_iterable(cell) and not isinstance(cell, str):
for cell_ in cell:
cum(cell_)
else:
count[cell] += 1
for cell in self.content:
cum(cell)
return count
def _after_process(self, new_contents, inplace):
"""
当调用处理函数之后决定是否要替换field
:param new_contents:
:param inplace:
:return: self或者生成的content
"""
if inplace:
self.content = new_contents
try:
self.is_input = self.is_input
self.is_target = self.is_target
except SetInputOrTargetException as e:
print("The newly generated field cannot be set as input or target.")
raise e
return self
else:
return new_contents
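The helper methods above (`split`, `int`, `float`, `bool`, `lower`, `upper`, `value_count`) all walk the content cell by cell and either replace it in place or return the converted list. A hedged sketch on a raw `FieldArray`:

```python
from fastNLP import FieldArray

fa = FieldArray('raw', ["Hello World", "Second Sentence", "Hello Again"])
fa.split()                                # inplace=True: content becomes [['Hello', 'World'], ...]
lowered = fa.lower(inplace=False)         # returns the converted list, original content untouched
print(lowered[0])                         # ['hello', 'world']
print(fa.value_count().most_common(1))    # [('Hello', 2)]
```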
def _get_ele_type_and_dim(cell:Any, dim=0):
"""
识别cell的类别与dimension的数量
numpy scalar type:https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.scalars.html
:param cell:
:param dim:
:return:
"""
if isinstance(cell, (str, Number, np.bool_)):
if hasattr(cell, 'dtype'):
return cell.dtype.type, dim
return type(cell), dim
elif isinstance(cell, list):
dim += 1
res = [_get_ele_type_and_dim(cell_i, dim) for cell_i in cell]
types = set([i for i,j in res])
dims = set([j for i,j in res])
if len(types)>1:
raise SetInputOrTargetException("Mixed types detected: {}.".format(list(types)))
elif len(types)==0:
raise SetInputOrTargetException("Empty value encountered.")
if len(dims)>1:
raise SetInputOrTargetException("Mixed dimension detected: {}.".format(list(dims)))
return types.pop(), dims.pop()
elif isinstance(cell, torch.Tensor):
return cell.dtype, cell.dim() + dim # 如果是torch.mean的结果是0
elif isinstance(cell, np.ndarray):
if cell.dtype != np.dtype('O'): # 如果不是object的话说明是well-formatted的了
return cell.dtype.type, cell.ndim + dim # dtype.type返回的会是np.int32, np.float等
# 否则需要继续往下iterate
dim += 1
res = [_get_ele_type_and_dim(cell_i, dim) for cell_i in cell]
types = set([i for i,j in res])
dims = set([j for i,j in res])
if len(types)>1:
raise SetInputOrTargetException("Mixed types detected: {}.".format(list(types)))
elif len(types)==0:
raise SetInputOrTargetException("Empty value encountered.")
if len(dims)>1:
raise SetInputOrTargetException("Mixed dimension detected: {}.".format(list(dims)))
return types.pop(), dims.pop()
else: # 包含tuple, set, dict以及其它的类型
raise SetInputOrTargetException(f"Cannot process type:{type(cell)}.")
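`_get_ele_type_and_dim` recursively resolves the innermost element type and the nesting depth, and raises `SetInputOrTargetException` when cells mix types or dimensions. A few hedged example calls (this is a module-private helper, so the import path is an assumption):

```python
import numpy as np
from fastNLP.core.field import _get_ele_type_and_dim  # private helper, path may change

print(_get_ele_type_and_dim(3))                 # (<class 'int'>, 0)
print(_get_ele_type_and_dim([1, 2, 3]))         # (<class 'int'>, 1)
print(_get_ele_type_and_dim([[1, 2], [3]]))     # (<class 'int'>, 2)  ragged is fine, depth is what counts
print(_get_ele_type_and_dim(np.zeros((2, 3))))  # (<class 'numpy.float64'>, 2)
# _get_ele_type_and_dim([1, 'a'])  -> raises SetInputOrTargetException (mixed types)
```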
def _is_iterable(value):
# 检查是否是iterable的, duck typing
try: try:
_ = (e for e in content) iter(value)
except TypeError: return True
except BaseException as e:
return False return False
return True
class Padder: class Padder:
@ -327,32 +446,36 @@ class Padder:
所有padder都需要继承这个类并覆盖__call__方法 所有padder都需要继承这个类并覆盖__call__方法
用于对batch进行padding操作传入的element是inplace的即直接修改element可能导致数据变化建议inplace修改之前deepcopy一份 用于对batch进行padding操作传入的element是inplace的即直接修改element可能导致数据变化建议inplace修改之前deepcopy一份
.. py:function:: __call__(self, contents, field_name, field_ele_dtype): .. py:function:: __call__(self, contents, field_name, field_ele_dtype):
传入的是List内容假设有以下的DataSet
:param list(Any) contents: 传入的element是inplace的即直接修改element可能导致数据变化建议inplace修改之前 传入的是List内容假设有以下的DataSet
:param List[Any] contents: 传入的element是inplace的即直接修改element可能导致数据变化建议inplace修改之前
deepcopy一份 deepcopy一份
:param str, field_name: field的名称 :param str, field_name: field的名称
:param np.int64,np.float64,np.str,None, field_ele_dtype: 该field的内层元素的类型如果该field的ignore_type为True则这个值为None :param np.int64,np.float64,np.str,None, field_ele_dtype: 该field的内层元素的类型如果该field的ignore_type为True则这个值为None
:return: np.array([padded_element]) :return: np.array([padded_element])
""" """
def __init__(self, pad_val=0, **kwargs): def __init__(self, pad_val=0, **kwargs):
self.pad_val = pad_val self.pad_val = pad_val
def set_pad_val(self, pad_val): def set_pad_val(self, pad_val):
self.pad_val = pad_val self.pad_val = pad_val
def __call__(self, contents, field_name, field_ele_dtype): @abstractmethod
def __call__(self, contents, field_name, field_ele_dtype, dim:int):
""" """
传入的是List内容假设有以下的DataSet 传入的是List内容假设有以下的DataSet
:param list(Any) contents: 传入的element是inplace的即直接修改element可能导致数据变化建议inplace修改之前 :param List[Any] contents: 传入的element是inplace的即直接修改element可能导致数据变化建议inplace修改之前
deepcopy一份 deepcopy一份
:param str, field_name: field的名称 :param str, field_name: field的名称
:param np.int64,np.float64,np.str,None, field_ele_dtype: 该field的内层元素的类型如果该field的ignore_type为True该这个值为None :param np.int64,np.float64,np.str,None, field_ele_dtype: 该field的内层元素的类型如果该field的ignore_type为True
则这个值为None
:param dim: 这个field的维度当ignore_type为True时该值为None
:return: np.array([padded_element]) :return: np.array([padded_element])
Example:: Example::
@ -394,50 +517,86 @@ class AutoPadder(Padder):
根据contents的数据自动判定是否需要做padding 根据contents的数据自动判定是否需要做padding
1 如果元素类型(元素类型是指field中最里层元素的数据类型, 可以通过FieldArray.dtype查看比如['This', 'is', ...]的元素类 1 如果元素类型(元素类型是指field中最里层元素的数据类型, 可以通过FieldArray.dtype查看比如['This', 'is', ...]的元素类
型为np.str, [[1,2], ...]的元素类型为np.int64)的数据不为(np.int64, np.float64)则不会进行pad 型为str, [[1,2], ...]的元素类型为int)的数据不为数值类型则不会进行pad
2 如果元素类型为(np.int64, np.float64), 2 如果元素类型为数值类型,比如np.int64, np.float64, int, float, torch.int64等
2.1 如果该field的内容为(np.int64, np.float64)比如为seq_len, 则不进行padding 2.1 如果该field的内容为数值类型(包括int, float等)比如为seq_len, 则不进行padding
2.2 如果该field的内容为List, 那么会将Batch中的List pad为一样长若该List下还有里层的List需要padding请使用其它padder 2.2 如果该field的内容等价于一维list, 那么会将Batch中的List pad为一样长
即如果Instance中field形如[1, 2, 3, ...]则可以pad若为[[1,2], [3,4, ...]]则不能进行pad
2.3 如果该field的内容等价于二维list那么会按照英语character padding的方式进行padding如果是character padding建议使用
:class: fastNLP.EngChar2DPadder.
2.4 如果该field的内容等价于三维list则如果每个instance在每个维度上相等会组成一个batch的tensor返回这种情况应该是为图片
的情况
3 其它情况不进行处理返回一个np.array类型
""" """
def __init__(self, pad_val=0): def __init__(self, pad_val=0):
"""
:param pad_val: int, padding的位置使用该index
"""
super().__init__(pad_val=pad_val) super().__init__(pad_val=pad_val)
def _is_two_dimension(self, contents): def __call__(self, contents, field_name, field_ele_dtype, dim):
""" if field_ele_dtype:
判断contents是不是只有两个维度[[1,2], [3]]是两个维度. [[[1,2], [3, 4, 5]], [[4,5]]]有三个维度 if dim>3:
:param contents: return np.array(contents)
:return: if isinstance(field_ele_dtype, type) and \
""" (issubclass(field_ele_dtype, np.number) or issubclass(field_ele_dtype, Number)):
value = contents[0] if dim==0:
if isinstance(value, (np.ndarray, list)): array = np.array(contents, dtype=field_ele_dtype)
value = value[0] elif dim==1:
if isinstance(value, (np.ndarray, list)): max_len = max(map(len, contents))
return False array = np.full((len(contents), max_len), self.pad_val, dtype=field_ele_dtype)
return True for i, content_i in enumerate(contents):
return False array[i, :len(content_i)] = content_i
elif dim==2:
def __call__(self, contents, field_name, field_ele_dtype): max_len = max(map(len, contents))
max_word_len = max([max([len(content_ii) for content_ii in content_i]) for
if not _is_iterable(contents[0]): content_i in contents])
array = np.array([content for content in contents], dtype=field_ele_dtype) array = np.full((len(contents), max_len, max_word_len), self.pad_val, dtype=field_ele_dtype)
elif field_ele_dtype in (np.int64, np.float64) and self._is_two_dimension(contents): for i, content_i in enumerate(contents):
max_len = max([len(content) for content in contents]) for j, content_ii in enumerate(content_i):
array = np.full((len(contents), max_len), self.pad_val, dtype=field_ele_dtype) array[i, j, :len(content_ii)] = content_ii
for i, content in enumerate(contents): else:
array[i][:len(content)] = content shape = np.shape(contents)
elif field_ele_dtype is None: if len(shape)==4: # 说明各dimension是相同的大小
array = np.array(contents) # 当ignore_type=True时直接返回contents array = np.array(contents, dtype=field_ele_dtype)
else: # should only be str else:
array = np.array([content for content in contents]) raise RuntimeError(f"Field:{field_name} has 3 dimensions, every sample should have the same shape.")
return array return array
elif str(field_ele_dtype).startswith('torch'):
if dim==0:
tensor = torch.tensor(contents).to(field_ele_dtype)
elif dim==1:
max_len = max(map(len, contents))
tensor = torch.full((len(contents), max_len), fill_value=self.pad_val, dtype=field_ele_dtype)
for i, content_i in enumerate(contents):
tensor[i, :len(content_i)] = torch.tensor(content_i)
elif dim==2:
max_len = max(map(len, contents))
max_word_len = max([max([len(content_ii) for content_ii in content_i]) for
content_i in contents])
tensor = torch.full((len(contents), max_len, max_word_len), fill_value=self.pad_val,
dtype=field_ele_dtype)
for i, content_i in enumerate(contents):
for j, content_ii in enumerate(content_i):
tensor[i, j, :len(content_ii)] = torch.tensor(content_ii)
else:
shapes = set([np.shape(content_i) for content_i in contents])
if len(shapes)>1:
raise RuntimeError(f"Field:{field_name} has 3 dimensions, every sample should have the same shape.")
shape = shapes.pop()
if len(shape)==3:
tensor = torch.full([len(contents)]+list(shape), fill_value=self.pad_val, dtype=field_ele_dtype)
for i, content_i in enumerate(contents):
tensor[i] = torch.tensor(content_i, dtype=field_ele_dtype)
else:
raise RuntimeError(f"Field:{field_name} has 3 dimensions, every sample should have the same shape.")
return tensor
else:
return np.array(contents) # 不进行任何操作
else:
return np.array(contents)
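A hedged example of the dimension-driven behaviour: for a batch of variable-length integer sequences (`dim=1`), `AutoPadder` pads to the longest length with `pad_val`, while non-numeric content is returned as a plain array:

```python
from fastNLP import AutoPadder

padder = AutoPadder(pad_val=0)
contents = [[1, 2, 3], [4, 5]]
print(padder(contents, field_name='words', field_ele_dtype=int, dim=1))
# [[1 2 3]
#  [4 5 0]]

print(padder(['a', 'bc'], field_name='raw', field_ele_dtype=str, dim=0))
# ['a' 'bc']  -- strings are not padded, just turned into an array
```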
class EngChar2DPadder(Padder): class EngChar2DPadder(Padder):
@ -463,7 +622,7 @@ class EngChar2DPadder(Padder):
dataset.set_padder('chars', padder) # chars这个field的设置为了EnChar2DPadder dataset.set_padder('chars', padder) # chars这个field的设置为了EnChar2DPadder
""" """
def __init__(self, pad_val=0, pad_length=0): def __init__(self, pad_val=0, pad_length=0):
""" """
:param pad_val: int, pad的位置使用该index :param pad_val: int, pad的位置使用该index
@ -471,32 +630,10 @@ class EngChar2DPadder(Padder):
都pad或截取到该长度. 都pad或截取到该长度.
""" """
super().__init__(pad_val=pad_val) super().__init__(pad_val=pad_val)
self.pad_length = pad_length self.pad_length = pad_length
def _exactly_three_dims(self, contents, field_name): def __call__(self, contents, field_name, field_ele_dtype, dim):
"""
检查传入的contents是否刚好是3维如果不是3维就报错理论上第一个维度是batch第二个维度是word第三个维度是character
:param contents:
:param field_name: str
:return:
"""
if not isinstance(contents, list):
raise TypeError("contents should be a list, not {}.".format(type(contents)))
value = contents[0]
try:
value = value[0]
except:
raise ValueError("Field:{} only has one dimension.".format(field_name))
try:
value = value[0]
except:
raise ValueError("Field:{} only has two dimensions.".format(field_name))
if _is_iterable(value):
raise ValueError("Field:{} has more than 3 dimension.".format(field_name))
def __call__(self, contents, field_name, field_ele_dtype):
""" """
期望输入类似于 期望输入类似于
[ [
@ -510,11 +647,11 @@ class EngChar2DPadder(Padder):
:param field_ele_dtype :param field_ele_dtype
:return: :return:
""" """
if field_ele_dtype not in (np.int64, np.float64): if field_ele_dtype not in (np.int64, np.float64, int, float):
raise TypeError('dtype of Field:{} should be np.int64 or np.float64 to do 2D padding, get {}.'.format( raise TypeError('dtype of Field:{} should be np.int64 or np.float64 to do 2D padding, get {}.'.format(
field_name, field_ele_dtype field_name, field_ele_dtype
)) ))
self._exactly_three_dims(contents, field_name) assert dim==2, f"Field:{field_name} has {dim}, EngChar2DPadder only supports input with 2 dimensions."
if self.pad_length < 1: if self.pad_length < 1:
max_char_length = max([max(len(char_lst) for char_lst in word_lst) for word_lst in contents]) max_char_length = max([max(len(char_lst) for char_lst in word_lst) for word_lst in contents])
else: else:
@ -522,12 +659,12 @@ class EngChar2DPadder(Padder):
max_sent_length = max(len(word_lst) for word_lst in contents) max_sent_length = max(len(word_lst) for word_lst in contents)
batch_size = len(contents) batch_size = len(contents)
dtype = type(contents[0][0][0]) dtype = type(contents[0][0][0])
padded_array = np.full((batch_size, max_sent_length, max_char_length), fill_value=self.pad_val, padded_array = np.full((batch_size, max_sent_length, max_char_length), fill_value=self.pad_val,
dtype=dtype) dtype=dtype)
for b_idx, word_lst in enumerate(contents): for b_idx, word_lst in enumerate(contents):
for c_idx, char_lst in enumerate(word_lst): for c_idx, char_lst in enumerate(word_lst):
chars = char_lst[:max_char_length] chars = char_lst[:max_char_length]
padded_array[b_idx, c_idx, :len(chars)] = chars padded_array[b_idx, c_idx, :len(chars)] = chars
return padded_array return padded_array
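With the `dim` argument replacing the old `_exactly_three_dims` check, calling the padder directly looks like the sketch below; the char ids are hypothetical and the import path is an assumption.

```python
import numpy as np
from fastNLP.core.field import EngChar2DPadder   # import path assumed

padder = EngChar2DPadder(pad_val=0, pad_length=0)   # pad_length=0 -> use the longest word
chars = [[[1, 2, 3], [4, 5]],                       # sentence 1: char ids per word
         [[6]]]                                     # sentence 2
out = padder(chars, field_name='chars', field_ele_dtype=np.int64, dim=2)
print(out.shape)   # (2, 2, 3): batch x max_sent_len x max_char_len
```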

View File

@ -20,12 +20,14 @@ from collections import defaultdict
import torch import torch
import torch.nn.functional as F import torch.nn.functional as F
from ..core.const import Const
from .utils import _CheckError from .utils import _CheckError
from .utils import _CheckRes from .utils import _CheckRes
from .utils import _build_args from .utils import _build_args
from .utils import _check_arg_dict_list from .utils import _check_arg_dict_list
from .utils import _check_function_or_method from .utils import _check_function_or_method
from .utils import _get_func_signature from .utils import _get_func_signature
from .utils import seq_len_to_mask
class LossBase(object): class LossBase(object):
@ -34,14 +36,23 @@ class LossBase(object):
""" """
def __init__(self): def __init__(self):
self.param_map = {} self._param_map = {} # key是fun的参数value是以该值从传入的dict取出value
self._checked = False self._checked = False
@property
def param_map(self):
if len(self._param_map) == 0: # 如果为空说明还没有初始化
func_spect = inspect.getfullargspec(self.get_loss)
func_args = [arg for arg in func_spect.args if arg != 'self']
for arg in func_args:
self._param_map[arg] = arg
return self._param_map
def get_loss(self, *args, **kwargs): def get_loss(self, *args, **kwargs):
raise NotImplementedError raise NotImplementedError
def _init_param_map(self, key_map=None, **kwargs): def _init_param_map(self, key_map=None, **kwargs):
"""检查key_map和其他参数map并将这些映射关系添加到self.param_map """检查key_map和其他参数map并将这些映射关系添加到self._param_map
:param dict key_map: 表示key的映射关系 :param dict key_map: 表示key的映射关系
:param kwargs: key word args里面的每一个的键-值对都会被构造成映射关系 :param kwargs: key word args里面的每一个的键-值对都会被构造成映射关系
@ -53,30 +64,30 @@ class LossBase(object):
raise TypeError("key_map must be `dict`, got {}.".format(type(key_map))) raise TypeError("key_map must be `dict`, got {}.".format(type(key_map)))
for key, value in key_map.items(): for key, value in key_map.items():
if value is None: if value is None:
self.param_map[key] = key self._param_map[key] = key
continue continue
if not isinstance(key, str): if not isinstance(key, str):
raise TypeError(f"key in key_map must be `str`, not `{type(key)}`.") raise TypeError(f"key in key_map must be `str`, not `{type(key)}`.")
if not isinstance(value, str): if not isinstance(value, str):
raise TypeError(f"value in key_map must be `str`, not `{type(value)}`.") raise TypeError(f"value in key_map must be `str`, not `{type(value)}`.")
self.param_map[key] = value self._param_map[key] = value
value_counter[value].add(key) value_counter[value].add(key)
for key, value in kwargs.items(): for key, value in kwargs.items():
if value is None: if value is None:
self.param_map[key] = key self._param_map[key] = key
continue continue
if not isinstance(value, str): if not isinstance(value, str):
raise TypeError(f"in {key}={value}, value must be `str`, not `{type(value)}`.") raise TypeError(f"in {key}={value}, value must be `str`, not `{type(value)}`.")
self.param_map[key] = value self._param_map[key] = value
value_counter[value].add(key) value_counter[value].add(key)
for value, key_set in value_counter.items(): for value, key_set in value_counter.items():
if len(key_set) > 1: if len(key_set) > 1:
raise ValueError(f"Several parameters:{key_set} are provided with one output {value}.") raise ValueError(f"Several parameters:{key_set} are provided with one output {value}.")
# check consistence between signature and param_map # check consistence between signature and _param_map
func_spect = inspect.getfullargspec(self.get_loss) func_spect = inspect.getfullargspec(self.get_loss)
func_args = [arg for arg in func_spect.args if arg != 'self'] func_args = [arg for arg in func_spect.args if arg != 'self']
for func_param, input_param in self.param_map.items(): for func_param, input_param in self._param_map.items():
if func_param not in func_args: if func_param not in func_args:
raise NameError( raise NameError(
f"Parameter `{func_param}` is not in {_get_func_signature(self.get_loss)}. Please check the " f"Parameter `{func_param}` is not in {_get_func_signature(self.get_loss)}. Please check the "
@ -86,22 +97,7 @@ class LossBase(object):
# if func_spect.varargs: # if func_spect.varargs:
# raise NameError(f"Delete `*{func_spect.varargs}` in {get_func_signature(self.get_loss)}(Do not use " # raise NameError(f"Delete `*{func_spect.varargs}` in {get_func_signature(self.get_loss)}(Do not use "
# f"positional argument.).") # f"positional argument.).")
def _fast_param_map(self, pred_dict, target_dict):
"""Only used as inner function. When the pred_dict, target is unequivocal. Don't need users to pass key_map.
such as pred_dict has one element, target_dict has one element
:param pred_dict:
:param target_dict:
:return: dict, if dict is not {}, pass it to self.evaluate. Otherwise do mapping.
"""
fast_param = {}
if len(self.param_map) == 2 and len(pred_dict) == 1 and len(target_dict) == 1:
fast_param['pred'] = list(pred_dict.values())[0]
fast_param['target'] = list(target_dict.values())[0]
return fast_param
return fast_param
def __call__(self, pred_dict, target_dict, check=False): def __call__(self, pred_dict, target_dict, check=False):
""" """
:param dict pred_dict: 模型的forward函数返回的dict :param dict pred_dict: 模型的forward函数返回的dict
@ -109,55 +105,43 @@ class LossBase(object):
:param Boolean check: 每一次执行映射函数的时候是否检查映射表默认为不检查 :param Boolean check: 每一次执行映射函数的时候是否检查映射表默认为不检查
:return: :return:
""" """
fast_param = self._fast_param_map(pred_dict, target_dict)
if fast_param:
loss = self.get_loss(**fast_param)
return loss
if not self._checked: if not self._checked:
# 1. check consistence between signature and param_map # 1. check consistence between signature and _param_map
func_spect = inspect.getfullargspec(self.get_loss) func_spect = inspect.getfullargspec(self.get_loss)
func_args = set([arg for arg in func_spect.args if arg != 'self']) func_args = set([arg for arg in func_spect.args if arg != 'self'])
for func_arg, input_arg in self.param_map.items(): for func_arg, input_arg in self._param_map.items():
if func_arg not in func_args: if func_arg not in func_args:
raise NameError(f"`{func_arg}` not in {_get_func_signature(self.get_loss)}.") raise NameError(f"`{func_arg}` not in {_get_func_signature(self.get_loss)}.")
# 2. only part of the param_map are passed, left are not # 2. only part of the _param_map are passed, left are not
for arg in func_args: for arg in func_args:
if arg not in self.param_map: if arg not in self._param_map:
self.param_map[arg] = arg # This param does not need mapping. self._param_map[arg] = arg # This param does not need mapping.
self._evaluate_args = func_args self._evaluate_args = func_args
self._reverse_param_map = {input_arg: func_arg for func_arg, input_arg in self.param_map.items()} self._reverse_param_map = {input_arg: func_arg for func_arg, input_arg in self._param_map.items()}
# need to wrap inputs in dict.
mapped_pred_dict = {} mapped_pred_dict = {}
mapped_target_dict = {} mapped_target_dict = {}
duplicated = [] for input_arg, mapped_arg in self._reverse_param_map.items():
for input_arg in set(list(pred_dict.keys()) + list(target_dict.keys())):
not_duplicate_flag = 0
if input_arg in self._reverse_param_map:
mapped_arg = self._reverse_param_map[input_arg]
not_duplicate_flag += 1
else:
mapped_arg = input_arg
if input_arg in pred_dict: if input_arg in pred_dict:
mapped_pred_dict[mapped_arg] = pred_dict[input_arg] mapped_pred_dict[mapped_arg] = pred_dict[input_arg]
not_duplicate_flag += 1
if input_arg in target_dict: if input_arg in target_dict:
mapped_target_dict[mapped_arg] = target_dict[input_arg] mapped_target_dict[mapped_arg] = target_dict[input_arg]
not_duplicate_flag += 1
if not_duplicate_flag == 3:
duplicated.append(input_arg)
# missing # missing
if not self._checked: if not self._checked:
duplicated = []
for input_arg, mapped_arg in self._reverse_param_map.items():
if input_arg in pred_dict and input_arg in target_dict:
duplicated.append(input_arg)
check_res = _check_arg_dict_list(self.get_loss, [mapped_pred_dict, mapped_target_dict]) check_res = _check_arg_dict_list(self.get_loss, [mapped_pred_dict, mapped_target_dict])
# replace missing. # replace missing.
missing = check_res.missing missing = check_res.missing
replaced_missing = list(missing) replaced_missing = list(missing)
for idx, func_arg in enumerate(missing): for idx, func_arg in enumerate(missing):
# Don't delete `` in this information, nor add `` # Don't delete `` in this information, nor add ``
replaced_missing[idx] = f"{self.param_map[func_arg]}" + f"(assign to `{func_arg}` " \ replaced_missing[idx] = f"{self._param_map[func_arg]}" + f"(assign to `{func_arg}` " \
f"in `{self.__class__.__name__}`)" f"in `{self.__class__.__name__}`)"
check_res = _CheckRes(missing=replaced_missing, check_res = _CheckRes(missing=replaced_missing,
@ -170,6 +154,8 @@ class LossBase(object):
if check_res.missing or check_res.duplicated: if check_res.missing or check_res.duplicated:
raise _CheckError(check_res=check_res, raise _CheckError(check_res=check_res,
func_signature=_get_func_signature(self.get_loss)) func_signature=_get_func_signature(self.get_loss))
self._checked = True
refined_args = _build_args(self.get_loss, **mapped_pred_dict, **mapped_target_dict) refined_args = _build_args(self.get_loss, **mapped_pred_dict, **mapped_target_dict)
loss = self.get_loss(**refined_args) loss = self.get_loss(**refined_args)
@ -204,15 +190,11 @@ class LossFunc(LossBase):
super(LossFunc, self).__init__() super(LossFunc, self).__init__()
_check_function_or_method(func) _check_function_or_method(func)
self.get_loss = func
if key_map is not None: if key_map is not None:
if not isinstance(key_map, dict): if not isinstance(key_map, dict):
raise RuntimeError(f"Loss error: key_map except a {type({})} but got a {type(key_map)}") raise RuntimeError(f"Loss error: key_map except a {type({})} but got a {type(key_map)}")
self.param_map = key_map self._init_param_map(key_map, **kwargs)
if len(kwargs) > 0:
for key, val in kwargs.items():
self.param_map.update({key: val})
self.get_loss = func
class CrossEntropyLoss(LossBase): class CrossEntropyLoss(LossBase):
@ -223,7 +205,10 @@ class CrossEntropyLoss(LossBase):
:param pred: 参数映射表中 `pred` 的映射关系None表示映射关系为 `pred` -> `pred` :param pred: 参数映射表中 `pred` 的映射关系None表示映射关系为 `pred` -> `pred`
:param target: 参数映射表中 `target` 的映射关系None表示映射关系为 `target` -> `target` :param target: 参数映射表中 `target` 的映射关系None表示映射关系为 `target` -> `target`
:param padding_idx: padding的index在计算loss时将忽略target中标号为padding_idx的内容 :param seq_len: 句子的长度, 长度之外的token不会计算loss
:param padding_idx: padding的index在计算loss时将忽略target中标号为padding_idx的内容, 可以通过该值代替
传入seq_len.
:param str reduction: 支持 `mean` `sum` `none` .
Example:: Example::
@ -231,15 +216,25 @@ class CrossEntropyLoss(LossBase):
""" """
def __init__(self, pred=None, target=None, padding_idx=-100): def __init__(self, pred=None, target=None, seq_len=None, padding_idx=-100, reduction='mean'):
# TODO 需要做一些检查F.cross_entropy在计算时如果pred是(16, 10, 4), target的形状按道理应该是(16, 10), 但实际需要(16, 4)
super(CrossEntropyLoss, self).__init__() super(CrossEntropyLoss, self).__init__()
self._init_param_map(pred=pred, target=target) self._init_param_map(pred=pred, target=target, seq_len=seq_len)
self.padding_idx = padding_idx self.padding_idx = padding_idx
assert reduction in ('mean', 'sum', 'none')
self.reduction = reduction
def get_loss(self, pred, target): def get_loss(self, pred, target, seq_len=None):
if pred.dim() > 2:
if pred.size(1) != target.size(1):
pred = pred.transpose(1, 2)
pred = pred.reshape(-1, pred.size(-1))
target = target.reshape(-1)
if seq_len is not None:
mask = seq_len_to_mask(seq_len).reshape(-1).eq(0)
target = target.masked_fill(mask, self.padding_idx)
return F.cross_entropy(input=pred, target=target, return F.cross_entropy(input=pred, target=target,
ignore_index=self.padding_idx) ignore_index=self.padding_idx, reduction=self.reduction)
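A standalone sketch of what the new `seq_len` handling does: positions past the real length are re-labelled with `padding_idx` so `F.cross_entropy` ignores them. The shapes are illustrative and the hand-rolled mask stands in for fastNLP's `seq_len_to_mask`.

```python
import torch
import torch.nn.functional as F

batch, max_len, n_class = 2, 4, 5
pred = torch.randn(batch, max_len, n_class)
target = torch.randint(0, n_class, (batch, max_len))
seq_len = torch.tensor([4, 2])            # second sample only has 2 real tokens
padding_idx = -100

pad_mask = (torch.arange(max_len)[None, :] >= seq_len[:, None]).reshape(-1)  # True on padding
flat_pred = pred.reshape(-1, n_class)
flat_target = target.reshape(-1).masked_fill(pad_mask, padding_idx)
loss = F.cross_entropy(flat_pred, flat_target,
                       ignore_index=padding_idx, reduction='mean')
```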
class L1Loss(LossBase): class L1Loss(LossBase):
@ -250,15 +245,18 @@ class L1Loss(LossBase):
:param pred: 参数映射表中 `pred` 的映射关系None表示映射关系为 `pred` -> `pred` :param pred: 参数映射表中 `pred` 的映射关系None表示映射关系为 `pred` -> `pred`
:param target: 参数映射表中 `target` 的映射关系None表示映射关系为 `target` >`target` :param target: 参数映射表中 `target` 的映射关系None表示映射关系为 `target` >`target`
:param str reduction: 支持'mean''sum''none'.
""" """
def __init__(self, pred=None, target=None): def __init__(self, pred=None, target=None, reduction='mean'):
super(L1Loss, self).__init__() super(L1Loss, self).__init__()
self._init_param_map(pred=pred, target=target) self._init_param_map(pred=pred, target=target)
assert reduction in ('mean', 'sum', 'none')
self.reduction = reduction
def get_loss(self, pred, target): def get_loss(self, pred, target):
return F.l1_loss(input=pred, target=target) return F.l1_loss(input=pred, target=target, reduction=self.reduction)
class BCELoss(LossBase): class BCELoss(LossBase):
@ -267,16 +265,19 @@ class BCELoss(LossBase):
二分类交叉熵损失函数 二分类交叉熵损失函数
:param pred: 参数映射表中`pred`的映射关系None表示映射关系为`pred`->`pred` :param pred: 参数映射表中 `pred` 的映射关系None表示映射关系为 `pred` -> `pred`
:param target: 参数映射表中`target`的映射关系None表示映射关系为`target`->`target` :param target: 参数映射表中 `target` 的映射关系None表示映射关系为 `target` -> `target`
:param str reduction: 支持 `mean` `sum` `none` .
""" """
def __init__(self, pred=None, target=None): def __init__(self, pred=None, target=None, reduction='mean'):
super(BCELoss, self).__init__() super(BCELoss, self).__init__()
self._init_param_map(pred=pred, target=target) self._init_param_map(pred=pred, target=target)
assert reduction in ('mean', 'sum', 'none')
self.reduction = reduction
def get_loss(self, pred, target): def get_loss(self, pred, target):
return F.binary_cross_entropy(input=pred, target=target) return F.binary_cross_entropy(input=pred, target=target, reduction=self.reduction)
class NLLLoss(LossBase): class NLLLoss(LossBase):
@ -285,16 +286,22 @@ class NLLLoss(LossBase):
负对数似然损失函数 负对数似然损失函数
:param pred: 参数映射表中`pred`的映射关系None表示映射关系为`pred`->`pred` :param pred: 参数映射表中 `pred` 的映射关系None表示映射关系为 `pred` -> `pred`
:param target: 参数映射表中`target`的映射关系None表示映射关系为`target`->`target` :param target: 参数映射表中 `target` 的映射关系None表示映射关系为 `target` -> `target`
:param ignore_idx: ignore的index在计算loss时将忽略target中标号为ignore_idx的内容, 可以通过该值代替
传入seq_len.
:param str reduction: 支持 `mean` `sum` `none` .
""" """
def __init__(self, pred=None, target=None): def __init__(self, pred=None, target=None, ignore_idx=-100, reduction='mean'):
super(NLLLoss, self).__init__() super(NLLLoss, self).__init__()
self._init_param_map(pred=pred, target=target) self._init_param_map(pred=pred, target=target)
assert reduction in ('mean', 'sum', 'none')
self.reduction = reduction
self.ignore_idx = ignore_idx
def get_loss(self, pred, target): def get_loss(self, pred, target):
return F.nll_loss(input=pred, target=target) return F.nll_loss(input=pred, target=target, ignore_index=self.ignore_idx, reduction=self.reduction)
class LossInForward(LossBase): class LossInForward(LossBase):
@ -306,7 +313,7 @@ class LossInForward(LossBase):
:param str loss_key: 在forward函数中loss的键名默认为loss :param str loss_key: 在forward函数中loss的键名默认为loss
""" """
def __init__(self, loss_key='loss'): def __init__(self, loss_key=Const.LOSS):
super().__init__() super().__init__()
if not isinstance(loss_key, str): if not isinstance(loss_key, str):
raise TypeError(f"Only str allowed for loss_key, got {type(loss_key)}.") raise TypeError(f"Only str allowed for loss_key, got {type(loss_key)}.")

View File

@ -6,7 +6,7 @@ __all__ = [
"MetricBase", "MetricBase",
"AccuracyMetric", "AccuracyMetric",
"SpanFPreRecMetric", "SpanFPreRecMetric",
"SQuADMetric" "ExtractiveQAMetric"
] ]
import inspect import inspect
@ -22,18 +22,19 @@ from .utils import _check_arg_dict_list
from .utils import _get_func_signature from .utils import _get_func_signature
from .utils import seq_len_to_mask from .utils import seq_len_to_mask
from .vocabulary import Vocabulary from .vocabulary import Vocabulary
from abc import abstractmethod
class MetricBase(object): class MetricBase(object):
""" """
所有metrics的基类,所有的传入到Trainer, Tester的Metric需要继承自该对象需要覆盖写入evaluate(), get_metric()方法 所有metrics的基类,所有的传入到Trainer, Tester的Metric需要继承自该对象需要覆盖写入evaluate(), get_metric()方法
evaluate(xxx)中传入的是一个batch的数据 evaluate(xxx)中传入的是一个batch的数据
get_metric(xxx)当所有数据处理完毕调用该方法得到最终的metric值 get_metric(xxx)当所有数据处理完毕调用该方法得到最终的metric值
以分类问题中Accuracy计算为例 以分类问题中Accuracy计算为例
假设model的forward返回dict中包含'pred'这个key, 并且该key需要用于Accuracy:: 假设model的forward返回dict中包含 `pred` 这个key, 并且该key需要用于Accuracy::
class Model(nn.Module): class Model(nn.Module):
def __init__(xxx): def __init__(xxx):
@ -42,7 +43,7 @@ class MetricBase(object):
# do something # do something
return {'pred': pred, 'other_keys':xxx} # pred's shape: batch_size x num_classes return {'pred': pred, 'other_keys':xxx} # pred's shape: batch_size x num_classes
假设dataset中'label'这个field是需要预测的值并且该field被设置为了target 假设dataset中 `label` 这个field是需要预测的值并且该field被设置为了target
对应的AccMetric可以按如下的定义, version1, 只使用这一次:: 对应的AccMetric可以按如下的定义, version1, 只使用这一次::
class AccMetric(MetricBase): class AccMetric(MetricBase):
@ -115,17 +116,28 @@ class MetricBase(object):
""" """
def __init__(self): def __init__(self):
self.param_map = {} # key is param in function, value is input param. self._param_map = {} # key is param in function, value is input param.
self._checked = False self._checked = False
@property
def param_map(self):
if len(self._param_map) == 0: # 如果为空说明还没有初始化
func_spect = inspect.getfullargspec(self.evaluate)
func_args = [arg for arg in func_spect.args if arg != 'self']
for arg in func_args:
self._param_map[arg] = arg
return self._param_map
@abstractmethod
def evaluate(self, *args, **kwargs): def evaluate(self, *args, **kwargs):
raise NotImplementedError raise NotImplementedError
@abstractmethod
def get_metric(self, reset=True): def get_metric(self, reset=True):
raise NotImplemented raise NotImplemented
def _init_param_map(self, key_map=None, **kwargs): def _init_param_map(self, key_map=None, **kwargs):
"""检查key_map和其他参数map并将这些映射关系添加到self.param_map """检查key_map和其他参数map并将这些映射关系添加到self._param_map
:param dict key_map: 表示key的映射关系 :param dict key_map: 表示key的映射关系
:param kwargs: key word args里面的每一个的键-值对都会被构造成映射关系 :param kwargs: key word args里面的每一个的键-值对都会被构造成映射关系
@ -137,30 +149,30 @@ class MetricBase(object):
raise TypeError("key_map must be `dict`, got {}.".format(type(key_map))) raise TypeError("key_map must be `dict`, got {}.".format(type(key_map)))
for key, value in key_map.items(): for key, value in key_map.items():
if value is None: if value is None:
self.param_map[key] = key self._param_map[key] = key
continue continue
if not isinstance(key, str): if not isinstance(key, str):
raise TypeError(f"key in key_map must be `str`, not `{type(key)}`.") raise TypeError(f"key in key_map must be `str`, not `{type(key)}`.")
if not isinstance(value, str): if not isinstance(value, str):
raise TypeError(f"value in key_map must be `str`, not `{type(value)}`.") raise TypeError(f"value in key_map must be `str`, not `{type(value)}`.")
self.param_map[key] = value self._param_map[key] = value
value_counter[value].add(key) value_counter[value].add(key)
for key, value in kwargs.items(): for key, value in kwargs.items():
if value is None: if value is None:
self.param_map[key] = key self._param_map[key] = key
continue continue
if not isinstance(value, str): if not isinstance(value, str):
raise TypeError(f"in {key}={value}, value must be `str`, not `{type(value)}`.") raise TypeError(f"in {key}={value}, value must be `str`, not `{type(value)}`.")
self.param_map[key] = value self._param_map[key] = value
value_counter[value].add(key) value_counter[value].add(key)
for value, key_set in value_counter.items(): for value, key_set in value_counter.items():
if len(key_set) > 1: if len(key_set) > 1:
raise ValueError(f"Several parameters:{key_set} are provided with one output {value}.") raise ValueError(f"Several parameters:{key_set} are provided with one output {value}.")
# check consistence between signature and param_map # check consistence between signature and _param_map
func_spect = inspect.getfullargspec(self.evaluate) func_spect = inspect.getfullargspec(self.evaluate)
func_args = [arg for arg in func_spect.args if arg != 'self'] func_args = [arg for arg in func_spect.args if arg != 'self']
for func_param, input_param in self.param_map.items(): for func_param, input_param in self._param_map.items():
if func_param not in func_args: if func_param not in func_args:
raise NameError( raise NameError(
f"Parameter `{func_param}` is not in {_get_func_signature(self.evaluate)}. Please check the " f"Parameter `{func_param}` is not in {_get_func_signature(self.evaluate)}. Please check the "
@ -175,7 +187,7 @@ class MetricBase(object):
:return: dict, if dict is not {}, pass it to self.evaluate. Otherwise do mapping. :return: dict, if dict is not {}, pass it to self.evaluate. Otherwise do mapping.
""" """
fast_param = {} fast_param = {}
if len(self.param_map) == 2 and len(pred_dict) == 1 and len(target_dict) == 1: if len(self._param_map) == 2 and len(pred_dict) == 1 and len(target_dict) == 1:
fast_param['pred'] = list(pred_dict.values())[0] fast_param['pred'] = list(pred_dict.values())[0]
fast_param['target'] = list(target_dict.values())[0] fast_param['target'] = list(target_dict.values())[0]
return fast_param return fast_param
@ -204,42 +216,35 @@ class MetricBase(object):
if not self._checked: if not self._checked:
if not callable(self.evaluate): if not callable(self.evaluate):
raise TypeError(f"{self.__class__.__name__}.evaluate has to be callable, not {type(self.evaluate)}.") raise TypeError(f"{self.__class__.__name__}.evaluate has to be callable, not {type(self.evaluate)}.")
# 1. check consistence between signature and param_map # 1. check consistence between signature and _param_map
func_spect = inspect.getfullargspec(self.evaluate) func_spect = inspect.getfullargspec(self.evaluate)
func_args = set([arg for arg in func_spect.args if arg != 'self']) func_args = set([arg for arg in func_spect.args if arg != 'self'])
for func_arg, input_arg in self.param_map.items(): for func_arg, input_arg in self._param_map.items():
if func_arg not in func_args: if func_arg not in func_args:
raise NameError(f"`{func_arg}` not in {_get_func_signature(self.evaluate)}.") raise NameError(f"`{func_arg}` not in {_get_func_signature(self.evaluate)}.")
# 2. only part of the param_map are passed, left are not # 2. only part of the _param_map are passed, left are not
for arg in func_args: for arg in func_args:
if arg not in self.param_map: if arg not in self._param_map:
self.param_map[arg] = arg # This param does not need mapping. self._param_map[arg] = arg # This param does not need mapping.
self._evaluate_args = func_args self._evaluate_args = func_args
self._reverse_param_map = {input_arg: func_arg for func_arg, input_arg in self.param_map.items()} self._reverse_param_map = {input_arg: func_arg for func_arg, input_arg in self._param_map.items()}
# need to wrap inputs in dict. # need to wrap inputs in dict.
mapped_pred_dict = {} mapped_pred_dict = {}
mapped_target_dict = {} mapped_target_dict = {}
duplicated = [] for input_arg, mapped_arg in self._reverse_param_map.items():
for input_arg in set(list(pred_dict.keys()) + list(target_dict.keys())):
not_duplicate_flag = 0
if input_arg in self._reverse_param_map:
mapped_arg = self._reverse_param_map[input_arg]
not_duplicate_flag += 1
else:
mapped_arg = input_arg
if input_arg in pred_dict: if input_arg in pred_dict:
mapped_pred_dict[mapped_arg] = pred_dict[input_arg] mapped_pred_dict[mapped_arg] = pred_dict[input_arg]
not_duplicate_flag += 1
if input_arg in target_dict: if input_arg in target_dict:
mapped_target_dict[mapped_arg] = target_dict[input_arg] mapped_target_dict[mapped_arg] = target_dict[input_arg]
not_duplicate_flag += 1
if not_duplicate_flag == 3:
duplicated.append(input_arg)
# missing # missing
if not self._checked: if not self._checked:
duplicated = []
for input_arg, mapped_arg in self._reverse_param_map.items():
if input_arg in pred_dict and input_arg in target_dict:
duplicated.append(input_arg)
check_res = _check_arg_dict_list(self.evaluate, [mapped_pred_dict, mapped_target_dict]) check_res = _check_arg_dict_list(self.evaluate, [mapped_pred_dict, mapped_target_dict])
# only check missing. # only check missing.
# replace missing. # replace missing.
@ -247,7 +252,7 @@ class MetricBase(object):
replaced_missing = list(missing) replaced_missing = list(missing)
for idx, func_arg in enumerate(missing): for idx, func_arg in enumerate(missing):
# Don't delete `` in this information, nor add `` # Don't delete `` in this information, nor add ``
replaced_missing[idx] = f"{self.param_map[func_arg]}" + f"(assign to `{func_arg}` " \ replaced_missing[idx] = f"{self._param_map[func_arg]}" + f"(assign to `{func_arg}` " \
f"in `{self.__class__.__name__}`)" f"in `{self.__class__.__name__}`)"
check_res = _CheckRes(missing=replaced_missing, check_res = _CheckRes(missing=replaced_missing,
@ -260,10 +265,10 @@ class MetricBase(object):
if check_res.missing or check_res.duplicated: if check_res.missing or check_res.duplicated:
raise _CheckError(check_res=check_res, raise _CheckError(check_res=check_res,
func_signature=_get_func_signature(self.evaluate)) func_signature=_get_func_signature(self.evaluate))
self._checked = True
refined_args = _build_args(self.evaluate, **mapped_pred_dict, **mapped_target_dict) refined_args = _build_args(self.evaluate, **mapped_pred_dict, **mapped_target_dict)
self.evaluate(**refined_args) self.evaluate(**refined_args)
self._checked = True
return return
@ -409,6 +414,37 @@ def _bmeso_tag_to_spans(tags, ignore_labels=None):
] ]
def _bioes_tag_to_spans(tags, ignore_labels=None):
"""
给定一个tags的list比如['O', 'B-singer', 'I-singer', 'E-singer', 'O', 'O']
返回[('singer', (1, 4))] (左闭右开区间)
:param tags: List[str],
:param ignore_labels: List[str], 在该list中的label将被忽略
:return: List[Tuple[str, List[int, int]]]. [(label, [start, end])]
"""
ignore_labels = set(ignore_labels) if ignore_labels else set()
spans = []
prev_bioes_tag = None
for idx, tag in enumerate(tags):
tag = tag.lower()
bioes_tag, label = tag[:1], tag[2:]
if bioes_tag in ('b', 's'):
spans.append((label, [idx, idx]))
elif bioes_tag in ('i', 'e') and prev_bioes_tag in ('b', 'i') and label == spans[-1][0]:
spans[-1][1][1] = idx
elif bioes_tag == 'o':
pass
else:
spans.append((label, [idx, idx]))
prev_bioes_tag = bioes_tag
return [(span[0], (span[1][0], span[1][1] + 1))
for span in spans
if span[0] not in ignore_labels
]
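A quick sanity check of the new BIOES decoder, assuming `_bioes_tag_to_spans` (a private helper in `fastNLP.core.metrics`) is in scope:

```python
tags = ['O', 'B-singer', 'I-singer', 'E-singer', 'O', 'S-loc']
print(_bioes_tag_to_spans(tags))
# [('singer', (1, 4)), ('loc', (5, 6))]
```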
def _bio_tag_to_spans(tags, ignore_labels=None): def _bio_tag_to_spans(tags, ignore_labels=None):
""" """
给定一个tags的list比如['O', 'B-singer', 'I-singer', 'I-singer', 'O', 'O'] 给定一个tags的list比如['O', 'B-singer', 'I-singer', 'I-singer', 'O', 'O']
@ -442,7 +478,7 @@ class SpanFPreRecMetric(MetricBase):
别名:class:`fastNLP.SpanFPreRecMetric` :class:`fastNLP.core.metrics.SpanFPreRecMetric` 别名:class:`fastNLP.SpanFPreRecMetric` :class:`fastNLP.core.metrics.SpanFPreRecMetric`
在序列标注问题中以span的方式计算F, pre, rec. 在序列标注问题中以span的方式计算F, pre, rec.
比如中文Part of speech中会以character的方式进行标注句子'中国在亚洲'对应的POS可能为(以BMES为例) 比如中文Part of speech中会以character的方式进行标注句子 `中国在亚洲` 对应的POS可能为(以BMES为例)
['B-NN', 'E-NN', 'S-DET', 'B-NN', 'E-NN']该metric就是为类似情况下的F1计算 ['B-NN', 'E-NN', 'S-DET', 'B-NN', 'E-NN']该metric就是为类似情况下的F1计算
最后得到的metric结果为:: 最后得到的metric结果为::
@ -466,15 +502,15 @@ class SpanFPreRecMetric(MetricBase):
:param tag_vocab: 标签的 :class:`~fastNLP.Vocabulary` 支持的标签为"B"(没有label)"B-xxx"(xxx为某种label比如POS中的NN) :param tag_vocab: 标签的 :class:`~fastNLP.Vocabulary` 支持的标签为"B"(没有label)"B-xxx"(xxx为某种label比如POS中的NN)
在解码时会将相同xxx的认为是同一个label比如['B-NN', 'E-NN']会被合并为一个'NN'. 在解码时会将相同xxx的认为是同一个label比如['B-NN', 'E-NN']会被合并为一个'NN'.
:param str pred: 用该key在evaluate()时从传入dict中取出prediction数据 为None则使用'pred'取数据 :param str pred: 用该key在evaluate()时从传入dict中取出prediction数据 为None则使用 `pred` 取数据
:param str target: 用该key在evaluate()时从传入dict中取出target数据 为None则使用'target'取数据 :param str target: 用该key在evaluate()时从传入dict中取出target数据 为None则使用 `target` 取数据
:param str seq_len: 用该key在evaluate()时从传入dict中取出sequence length数据为None则使用'seq_len'取数据 :param str seq_len: 用该key在evaluate()时从传入dict中取出sequence length数据为None则使用 `seq_len` 取数据
:param str encoding_type: 目前支持bio, bmes :param str encoding_type: 目前支持bio, bmes, bmeso, bioes
:param list ignore_labels: str 组成的list. 这个list中的class不会被用于计算例如在POS tagging时传入['NN']则不会计算'NN' :param list ignore_labels: str 组成的list. 这个list中的class不会被用于计算例如在POS tagging时传入['NN']则不会计算'NN'
个label 个label
:param bool only_gross: 是否只计算总的f1, precision, recall的值如果为False不仅返回总的f1, pre, rec, 还会返回每个 :param bool only_gross: 是否只计算总的f1, precision, recall的值如果为False不仅返回总的f1, pre, rec, 还会返回每个
label的f1, pre, rec label的f1, pre, rec
:param str f_type: 'micro''macro'. 'micro':通过先计算总体的TPFN和FP的数量再计算f, precision, recall; 'macro': :param str f_type: `micro` `macro` . `micro` :通过先计算总体的TPFN和FP的数量再计算f, precision, recall; `macro` :
分布计算每个类别的f, precision, recall然后做平均各类别f的权重相同 分布计算每个类别的f, precision, recall然后做平均各类别f的权重相同
:param float beta: f_beta分数 :math:`f_{beta} = \frac{(1 + {beta}^{2})*(pre*rec)}{({beta}^{2}*pre + rec)}` . :param float beta: f_beta分数 :math:`f_{beta} = \frac{(1 + {beta}^{2})*(pre*rec)}{({beta}^{2}*pre + rec)}` .
常用为beta=0.5, 1, 2. 若为0.5则精确率的权重高于召回率若为1则两者平等若为2则召回率权重高于精确率 常用为beta=0.5, 1, 2. 若为0.5则精确率的权重高于召回率若为1则两者平等若为2则召回率权重高于精确率
@ -497,6 +533,8 @@ class SpanFPreRecMetric(MetricBase):
self.tag_to_span_func = _bio_tag_to_spans self.tag_to_span_func = _bio_tag_to_spans
elif self.encoding_type == 'bmeso': elif self.encoding_type == 'bmeso':
self.tag_to_span_func = _bmeso_tag_to_spans self.tag_to_span_func = _bmeso_tag_to_spans
elif self.encoding_type == 'bioes':
self.tag_to_span_func = _bioes_tag_to_spans
else: else:
raise ValueError("Only support 'bio', 'bmes', 'bmeso' type.") raise ValueError("Only support 'bio', 'bmes', 'bmeso' type.")
@ -698,11 +736,11 @@ def _pred_topk(y_prob, k=1):
return y_pred_topk, y_prob_topk return y_pred_topk, y_prob_topk
class SQuADMetric(MetricBase): class ExtractiveQAMetric(MetricBase):
r""" r"""
别名:class:`fastNLP.SQuADMetric` :class:`fastNLP.core.metrics.SQuADMetric` 别名:class:`fastNLP.ExtractiveQAMetric` :class:`fastNLP.core.metrics.ExtractiveQAMetric`
SQuAD数据集metric 抽取式QA如SQuAD的metric.
:param pred1: 参数映射表中 `pred1` 的映射关系None表示映射关系为 `pred1` -> `pred1` :param pred1: 参数映射表中 `pred1` 的映射关系None表示映射关系为 `pred1` -> `pred1`
:param pred2: 参数映射表中 `pred2` 的映射关系None表示映射关系为 `pred2` -> `pred2` :param pred2: 参数映射表中 `pred2` 的映射关系None表示映射关系为 `pred2` -> `pred2`
@ -718,7 +756,7 @@ class SQuADMetric(MetricBase):
def __init__(self, pred1=None, pred2=None, target1=None, target2=None, def __init__(self, pred1=None, pred2=None, target1=None, target2=None,
beta=1, right_open=True, print_predict_stat=False): beta=1, right_open=True, print_predict_stat=False):
super(SQuADMetric, self).__init__() super(ExtractiveQAMetric, self).__init__()
self._init_param_map(pred1=pred1, pred2=pred2, target1=target1, target2=target2) self._init_param_map(pred1=pred1, pred2=pred2, target1=target1, target2=target2)

View File

@ -5,10 +5,14 @@ optimizer 模块定义了 fastNLP 中所需的各种优化器,一般做为 :cl
__all__ = [ __all__ = [
"Optimizer", "Optimizer",
"SGD", "SGD",
"Adam" "Adam",
"AdamW"
] ]
import torch import torch
import math
import torch
from torch.optim.optimizer import Optimizer as TorchOptimizer
class Optimizer(object): class Optimizer(object):
@ -36,6 +40,23 @@ class Optimizer(object):
""" """
return [param for param in params if param.requires_grad] return [param for param in params if param.requires_grad]
class NullOptimizer(Optimizer):
"""
当不希望Trainer更新optimizer时传入本optimizer但请确保通过callback的方式对参数进行了更新
"""
def __init__(self):
super().__init__(None)
def construct_from_pytorch(self, model_params):
pass
def __getattr__(self, item):
def pass_func(*args, **kwargs):
pass
return pass_func
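Because of the `__getattr__` fallback, every method call on a `NullOptimizer` silently does nothing, which is what lets the real parameter updates be delegated to a callback as the docstring notes. A minimal check, assuming the class above is in scope:

```python
opt = NullOptimizer()
opt.construct_from_pytorch(None)   # explicit no-op
opt.step()                         # resolved by __getattr__ -> no-op
opt.zero_grad()                    # likewise a no-op
```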
class SGD(Optimizer): class SGD(Optimizer):
""" """
@ -80,3 +101,117 @@ class Adam(Optimizer):
return torch.optim.Adam(self._get_require_grads_param(model_params), **self.settings) return torch.optim.Adam(self._get_require_grads_param(model_params), **self.settings)
else: else:
return torch.optim.Adam(self._get_require_grads_param(self.model_params), **self.settings) return torch.optim.Adam(self._get_require_grads_param(self.model_params), **self.settings)
class AdamW(TorchOptimizer):
r"""
别名:class:`fastNLP.AdamW` :class:`fastNLP.core.optimizer.AdamW`
对AdamW的实现该实现应该会在pytorch更高版本中出现https://github.com/pytorch/pytorch/pull/21250这里提前加入
.. todo::
翻译成中文
The original Adam algorithm was proposed in `Adam: A Method for Stochastic Optimization`_.
The AdamW variant was proposed in `Decoupled Weight Decay Regularization`_.
:param params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
:param lr (float, optional): learning rate (default: 1e-3)
:param betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
:param eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
:param weight_decay (float, optional): weight decay coefficient (default: 1e-2)
:param amsgrad (boolean, optional): whether to use the AMSGrad variant of this
algorithm from the paper `On the Convergence of Adam and Beyond`_
(default: False)
.. _Adam\: A Method for Stochastic Optimization:
https://arxiv.org/abs/1412.6980
.. _Decoupled Weight Decay Regularization:
https://arxiv.org/abs/1711.05101
.. _On the Convergence of Adam and Beyond:
https://openreview.net/forum?id=ryQu7f-RZ
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
weight_decay=1e-2, amsgrad=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay, amsgrad=amsgrad)
super(AdamW, self).__init__(params, defaults)
def __setstate__(self, state):
super(AdamW, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('amsgrad', False)
def step(self, closure=None):
"""Performs a single optimization step.
:param closure: (callable, optional) A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
# Perform stepweight decay
p.data.mul_(1 - group['lr'] * group['weight_decay'])
# Perform optimization step
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
amsgrad = group['amsgrad']
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
if amsgrad:
# Maintains max of all exp. moving avg. of sq. grad. values
state['max_exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
if amsgrad:
max_exp_avg_sq = state['max_exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
if amsgrad:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = max_exp_avg_sq.sqrt().add_(group['eps'])
else:
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
p.data.addcdiv_(-step_size, exp_avg, denom)
return loss
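Usage is the same as for any torch optimizer. A toy sketch follows; the model and data are placeholders and the top-level import is assumed.

```python
import torch
from torch import nn
from fastNLP import AdamW            # top-level import assumed

model = nn.Linear(4, 2)              # toy model
optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()                     # decoupled weight decay is applied inside step()
```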

View File

@ -6,20 +6,20 @@ from collections import defaultdict
import torch import torch
from . import Batch from . import DataSetIter
from . import DataSet from . import DataSet
from . import SequentialSampler from . import SequentialSampler
from .utils import _build_args from .utils import _build_args, _move_dict_value_to_device, _get_model_device
class Predictor(object): class Predictor(object):
""" """
An interface for predicting outputs based on trained models. 一个根据训练模型预测输出的预测器Predictor
It does not care about evaluations of the model, which is different from Tester. 与测试器Tester不同的是predictor不关心模型性能的评价指标只做inference
This is a high-level model wrapper to be called by FastNLP. 这是一个fastNLP调用的高级模型包装器它与TrainerTester不共享任何操作
This class does not share any operations with Trainer and Tester.
Currently, Predictor does not support GPU. :param torch.nn.Module network: 用来完成预测任务的模型
""" """
def __init__(self, network): def __init__(self, network):
@ -30,22 +30,23 @@ class Predictor(object):
self.batch_size = 1 self.batch_size = 1
self.batch_output = [] self.batch_output = []
def predict(self, data, seq_len_field_name=None): def predict(self, data: DataSet, seq_len_field_name=None):
"""Perform inference using the trained model. """用已经训练好的模型进行inference.
:param data: a DataSet object. :param fastNLP.DataSet data: 待预测的数据集
:param str seq_len_field_name: field name indicating sequence lengths :param str seq_len_field_name: 表示序列长度信息的field名字
:return: list of batch outputs :return: dict dict里面的内容为模型预测的结果
""" """
if not isinstance(data, DataSet): if not isinstance(data, DataSet):
raise ValueError("Only Dataset class is allowed, not {}.".format(type(data))) raise ValueError("Only Dataset class is allowed, not {}.".format(type(data)))
if seq_len_field_name is not None and seq_len_field_name not in data.field_arrays: if seq_len_field_name is not None and seq_len_field_name not in data.field_arrays:
raise ValueError("Field name {} not found in DataSet {}.".format(seq_len_field_name, data)) raise ValueError("Field name {} not found in DataSet {}.".format(seq_len_field_name, data))
prev_training = self.network.training
self.network.eval() self.network.eval()
network_device = _get_model_device(self.network)
batch_output = defaultdict(list) batch_output = defaultdict(list)
data_iterator = Batch(data, batch_size=self.batch_size, sampler=SequentialSampler(), as_numpy=False, data_iterator = DataSetIter(data, batch_size=self.batch_size, sampler=SequentialSampler(), as_numpy=False)
prefetch=False)
if hasattr(self.network, "predict"): if hasattr(self.network, "predict"):
predict_func = self.network.predict predict_func = self.network.predict
@ -54,6 +55,7 @@ class Predictor(object):
with torch.no_grad(): with torch.no_grad():
for batch_x, _ in data_iterator: for batch_x, _ in data_iterator:
_move_dict_value_to_device(batch_x, _, device=network_device)
refined_batch_x = _build_args(predict_func, **batch_x) refined_batch_x = _build_args(predict_func, **batch_x)
prediction = predict_func(**refined_batch_x) prediction = predict_func(**refined_batch_x)
@ -73,4 +75,5 @@ class Predictor(object):
else: else:
batch_output[key].append(value) batch_output[key].append(value)
self.network.train(prev_training)
return batch_output return batch_output
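A small end-to-end sketch of the reworked `Predictor`; the toy model, the random data and the import path are all assumptions.

```python
import numpy as np
from torch import nn
from fastNLP import DataSet
from fastNLP.core.predictor import Predictor   # import path assumed

class TinyModel(nn.Module):                     # hypothetical model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return {'pred': self.fc(x)}

data = DataSet({'x': np.random.rand(4, 10).astype('float32')})
data.set_input('x')

predictor = Predictor(TinyModel())
output = predictor.predict(data)                # dict of collected outputs, e.g. output['pred']
```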

View File

@ -62,16 +62,27 @@ class BucketSampler(Sampler):
带Bucket的 `Random Sampler`. 可以随机地取出长度相似的元素 带Bucket的 `Random Sampler`. 可以随机地取出长度相似的元素
:param int num_buckets: bucket的数量 :param int num_buckets: bucket的数量
:param int batch_size: batch的大小 :param int batch_size: batch的大小. 默认为NoneTrainer在调用BucketSampler时会将该值正确设置如果是非Trainer场景使用
要显示传递该值
:param str seq_len_field_name: 对应序列长度的 `field` 的名字 :param str seq_len_field_name: 对应序列长度的 `field` 的名字
""" """
def __init__(self, num_buckets=10, batch_size=32, seq_len_field_name='seq_len'): def __init__(self, num_buckets=10, batch_size=None, seq_len_field_name='seq_len'):
self.num_buckets = num_buckets self.num_buckets = num_buckets
self.batch_size = batch_size self.batch_size = batch_size
self.seq_len_field_name = seq_len_field_name self.seq_len_field_name = seq_len_field_name
def set_batch_size(self, batch_size):
"""
:param int batch_size: 每个batch的大小
:return:
"""
self.batch_size = batch_size
def __call__(self, data_set): def __call__(self, data_set):
if self.batch_size is None:
raise RuntimeError("batch_size is None.")
seq_lens = data_set.get_all_fields()[self.seq_len_field_name].content seq_lens = data_set.get_all_fields()[self.seq_len_field_name].content
total_sample_num = len(seq_lens) total_sample_num = len(seq_lens)

View File

@ -1,7 +1,7 @@
""" """
tester模块实现了 fastNLP 所需的Tester类能在提供数据模型以及metric的情况下进行性能测试 tester模块实现了 fastNLP 所需的Tester类能在提供数据模型以及metric的情况下进行性能测试
Example:: .. code-block::
import numpy as np import numpy as np
import torch import torch
@ -32,12 +32,10 @@ Tester在验证进行之前会调用model.eval()提示当前进入了evaluation
""" """
import warnings
import torch import torch
import torch.nn as nn import torch.nn as nn
from .batch import Batch from .batch import BatchIter, DataSetIter
from .dataset import DataSet from .dataset import DataSet
from .metrics import _prepare_metrics from .metrics import _prepare_metrics
from .sampler import SequentialSampler from .sampler import SequentialSampler
@ -48,6 +46,8 @@ from .utils import _move_dict_value_to_device
from .utils import _get_func_signature from .utils import _get_func_signature
from .utils import _get_model_device from .utils import _get_model_device
from .utils import _move_model_to_device from .utils import _move_model_to_device
from ._parallel_utils import _data_parallel_wrapper
from functools import partial
__all__ = [ __all__ = [
"Tester" "Tester"
@ -60,15 +60,14 @@ class Tester(object):
Tester是在提供数据模型以及metric的情况下进行性能测试的类需要传入模型数据以及metric进行验证 Tester是在提供数据模型以及metric的情况下进行性能测试的类需要传入模型数据以及metric进行验证
:param data: 需要测试的数据集 :class:`~fastNLP.DataSet` 类型 :param ~fastNLP.DataSet data: 需要测试的数据集
:param torch.nn.module model: 使用的模型 :param torch.nn.module model: 使用的模型
:param metrics: :class:`~fastNLP.core.metrics.MetricBase` 或者一个列表的 :class:`~fastNLP.core.metrics.MetricBase` :param ~fastNLP.core.metrics.MetricBase,List[~fastNLP.core.metrics.MetricBase] metrics: 测试时使用的metrics
:param int batch_size: evaluation时使用的batch_size有多大 :param int batch_size: evaluation时使用的batch_size有多大
:param str,int,torch.device,list(int) device: 将模型load到哪个设备默认为None即Trainer不对模型 :param str,int,torch.device,list(int) device: 将模型load到哪个设备默认为None即Trainer不对模型
的计算位置进行管理支持以下的输入: 的计算位置进行管理支持以下的输入:
1. str: ['cpu', 'cuda', 'cuda:0', 'cuda:1', ...] 依次为'cpu', 可见的第一个GPU中, 可见的第一个GPU中, 1. str: ['cpu', 'cuda', 'cuda:0', 'cuda:1', ...] 依次为'cpu', 可见的第一个GPU中,可见的第一个GPU中,可见的第二个GPU中;
可见的第二个GPU中;
2. torch.device将模型装载到torch.device上 2. torch.device将模型装载到torch.device上
@ -82,7 +81,7 @@ class Tester(object):
:param int verbose: 如果为0不输出任何信息; 如果为1打印出验证结果 :param int verbose: 如果为0不输出任何信息; 如果为1打印出验证结果
""" """
def __init__(self, data, model, metrics, batch_size=16, device=None, verbose=1): def __init__(self, data, model, metrics, batch_size=16, num_workers=0, device=None, verbose=1):
super(Tester, self).__init__() super(Tester, self).__init__()
if not isinstance(data, DataSet): if not isinstance(data, DataSet):
@ -96,23 +95,35 @@ class Tester(object):
self._model = _move_model_to_device(model, device=device) self._model = _move_model_to_device(model, device=device)
self.batch_size = batch_size self.batch_size = batch_size
self.verbose = verbose self.verbose = verbose
# 如果是DataParallel将没有办法使用predict方法 if isinstance(data, DataSet):
if isinstance(self._model, nn.DataParallel): self.data_iterator = DataSetIter(
if hasattr(self._model.module, 'predict') and not hasattr(self._model, 'predict'): dataset=data, batch_size=batch_size, num_workers=num_workers, sampler=SequentialSampler())
warnings.warn("Cannot use DataParallel to test your model, because your model offer predict() function," elif isinstance(data, BatchIter):
" while DataParallel has no predict() function.") self.data_iterator = data
self._model = self._model.module
# check predict
if hasattr(self._model, 'predict'):
self._predict_func = self._model.predict
if not callable(self._predict_func):
_model_name = model.__class__.__name__
raise TypeError(f"`{_model_name}.predict` must be callable to be used "
f"for evaluation, not `{type(self._predict_func)}`.")
else: else:
self._predict_func = self._model.forward raise TypeError("data type {} not support".format(type(data)))
# check predict
if (hasattr(self._model, 'predict') and callable(self._model.predict)) or \
(isinstance(self._model, nn.DataParallel) and hasattr(self._model.module, 'predict') and
callable(self._model.module.predict)):
if isinstance(self._model, nn.DataParallel):
self._predict_func_wrapper = partial(_data_parallel_wrapper('predict',
self._model.device_ids,
self._model.output_device),
network=self._model.module)
self._predict_func = self._model.module.predict
else:
self._predict_func = self._model.predict
self._predict_func_wrapper = self._model.predict
else:
if isinstance(self._model, nn.DataParallel):
self._predict_func_wrapper = self._model.forward
self._predict_func = self._model.module.forward
else:
self._predict_func = self._model.forward
self._predict_func_wrapper = self._model.forward
def test(self): def test(self):
"""开始进行验证,并返回验证结果。 """开始进行验证,并返回验证结果。
@ -124,7 +135,7 @@ class Tester(object):
self._model_device = _get_model_device(self._model) self._model_device = _get_model_device(self._model)
network = self._model network = self._model
self._mode(network, is_test=True) self._mode(network, is_test=True)
data_iterator = Batch(self.data, self.batch_size, sampler=SequentialSampler(), as_numpy=False) data_iterator = self.data_iterator
eval_results = {} eval_results = {}
try: try:
with torch.no_grad(): with torch.no_grad():
@ -169,7 +180,7 @@ class Tester(object):
def _data_forward(self, func, x): def _data_forward(self, func, x):
"""A forward pass of the model. """ """A forward pass of the model. """
x = _build_args(func, **x) x = _build_args(func, **x)
y = func(**x) y = self._predict_func_wrapper(**x)
return y return y
def _format_eval_results(self, results): def _format_eval_results(self, results):

View File

@ -11,288 +11,310 @@ Trainer在fastNLP中用于组织单任务的训练过程可以避免用户在
(5) 保存获得更好验证性能的模型 (5) 保存获得更好验证性能的模型
1 Trainer的基本使用
下面的例子是使用神经网络来进行预测一个序列中是否有偶数个1
Example:: ----------------------------
1. Trainer的基本使用
----------------------------
import numpy as np 下面的例子是使用神经网络来进行预测一个序列中是否有偶数个1
from torch import nn
import torch
import torch.nn.functional as F
from torch.optim import SGD
from fastNLP import DataSet .. code-block:: python
from fastNLP import Trainer
from fastNLP import CrossEntropyLoss
from fastNLP import AccuracyMetric
from fastNLP.modules.decoder import MLP
# 模型 import numpy as np
class Model(nn.Module): from torch import nn
def __init__(self, input_num): import torch
super().__init__() import torch.nn.functional as F
self.fcs = MLP([input_num, 40, 40, 2], 'relu') from torch.optim import SGD
def forward(self, x): from fastNLP import DataSet
x = self.fcs(x) from fastNLP import Trainer
return {'pred': x} from fastNLP import CrossEntropyLoss
model = Model(10) from fastNLP import AccuracyMetric
from fastNLP.modules.decoder import MLP
# 生成数据 # 模型
def generate_psedo_dataset(num_samples): class Model(nn.Module):
dataset = DataSet() def __init__(self, input_num):
data = np.random.randint(2, size=(num_samples, 10)) super().__init__()
label = np.sum(data, axis=1)%2 self.fcs = MLP([input_num, 40, 40, 2], 'relu')
dataset = DataSet({'x':data.astype(float), 'label': label})
dataset.set_input('x')
dataset.set_target('label')
return dataset
tr_dataset = generate_psedo_dataset(1000)
dev_data = generate_psedo_dataset(100)
# 训练 def forward(self, x):
trainer = Trainer(tr_dataset, model, loss=CrossEntropyLoss(target='label'), x = self.fcs(x)
optimizer=SGD(model.parameters(), lr=0.1),n_epochs=1000, return {'pred': x}
dev_data = dev_data, metrics=AccuracyMetric(target='label')) model = Model(10)
trainer.train()
由上面的例子可以看出通过使用Trainer可以使得训练部分的代码大幅减少 # 生成数据
使用Trainer需要满足以下几个条件: def generate_psedo_dataset(num_samples):
dataset = DataSet()
data = np.random.randint(2, size=(num_samples, 10))
label = np.sum(data, axis=1)%2
dataset = DataSet({'x':data.astype(float), 'label': label})
dataset.set_input('x')
dataset.set_target('label')
return dataset
tr_dataset = generate_psedo_dataset(1000)
dev_data = generate_psedo_dataset(100)
# 训练
trainer = Trainer(tr_dataset, model, loss=CrossEntropyLoss(target='label'),
optimizer=SGD(model.parameters(), lr=0.1),n_epochs=1000,
dev_data = dev_data, metrics=AccuracyMetric(target='label'))
trainer.train()
由上面的例子可以看出通过使用Trainer可以使得训练部分的代码大幅减少
使用Trainer需要满足以下几个条件:
1.1 模型 1.1 模型
1 模型的forward()的参数名需要与DataSet中的名字对应实际上fastNLP在将DataSet中的数据传递给模型forward() ----------------------------
通过匹配名称实现的所以上例中如果Model的forward函数修改为forward(self, data), 则DataSet中的'x'这个field就应该
改名为'data'
2 传递给forward()的参数是DataSet中被设置为input的那些field但如果forward()中没有对应的参数则不会将数据传递 1 模型的forward()的参数名需要与DataSet中的名字对应实际上fastNLP在将DataSet中的数据传递给模型forward()
给forward()例如DataSet中'x1', 'x2'都是input但是模型的函数为forward(self, x1), 那么'x2'不会传递给forward() 通过匹配名称实现的所以上例中如果Model的forward函数修改为forward(self, data), 则DataSet中的'x'这个field就应该
改名为'data'
3 模型的forward()返回值需要为一个dict 2 传递给forward()的参数是DataSet中被设置为input的那些field但如果forward()中没有对应的参数则不会将数据传递
给forward()例如DataSet中'x1', 'x2'都是input但是模型的函数为forward(self, x1), 那么'x2'不会传递给forward()
3 模型的forward()返回值需要为一个dict
1.2 Loss 1.2 Loss
fastNLP中的为了不限制forward函数的返回内容数量(比如一些复杂任务需要返回多个内容如Dependency Parsing ----------------------------
:mod:`Loss<fastNLP.core.losses>` :mod:`Metric<fastNLP.core.metrics>` 都使用了通过名称来匹配相应内容的策略如上面的例子中
Example:: fastNLP中的为了不限制forward函数的返回内容数量(比如一些复杂任务需要返回多个内容如Dependency Parsing
:mod:`Loss<fastNLP.core.losses>` :mod:`Metric<fastNLP.core.metrics>` 都使用了通过名称来匹配相应内容的策略如上面的例子中
trainer = Trainer(tr_dataset, model, loss=CrossEntropyLoss(target='label'), .. code-block:: python
optimizer=SGD(model.parameters(), lr=0.1),n_epochs=1000,
dev_data = dev_data, metrics=AccuracyMetric(target='label'))
loss被设置为了 :class:`~fastNLP.CrossEntropyLoss` , 但在初始化的时候传入了target='label'这个参数 trainer = Trainer(tr_dataset, model, loss=CrossEntropyLoss(target='label'),
:class:`~fastNLP.CrossEntropyLoss` 的初始化参数为(pred=None, target=None, padding_idx=-100) optimizer=SGD(model.parameters(), lr=0.1),n_epochs=1000,
dev_data = dev_data, metrics=AccuracyMetric(target='label'))
这里的两个参数分别为计算CrossEntropy时需要使用到的模型的预测值与真实值
其中 `pred` 一般来自于模型forward()的返回结果`target` 一般是来自于DataSet中被设置为target的field
由于每个人对真实值或者model的返回值取名并不一样所以fastNLP的 :mod:`Loss<fastNLP.core.losses>` 提供一种类似于映射的机制来匹配对应的值
比如这里 :class:`~fastNLP.CrossEntropyLoss` 将尝试找到名为'label'的内容来作为真实值得到loss
而pred=None, :class:`~fastNLP.CrossEntropyLoss` 使用'pred'作为名称匹配预测值
正好forward的返回值也叫pred所以这里不需要申明pred
尽管fastNLP使用了映射机制来使得loss的计算变得比较灵活但有些情况下loss必须在模型中进行计算比如使用了CRF的模型 loss被设置为了 :class:`~fastNLP.CrossEntropyLoss` , 但在初始化的时候传入了target='label'这个参数
fastNLP中提供了 :class:`~fastNLP.LossInForward` 这个loss :class:`~fastNLP.CrossEntropyLoss` 的初始化参数为(pred=None, target=None, padding_idx=-100)
这个loss的原理是直接在forward()的返回结果中找到loss_key(默认寻找'loss')指定的那个tensor并使用它作为loss
如果Trainer初始化没有提供loss则默认使用 :class:`~fastNLP.LossInForward` 这里的两个参数分别为计算CrossEntropy时需要使用到的模型的预测值与真实值
其中 `pred` 一般来自于模型forward()的返回结果`target` 一般是来自于DataSet中被设置为target的field
.. todo:: 由于每个人对真实值或者model的返回值取名并不一样所以fastNLP的 :mod:`Loss<fastNLP.core.losses>` 提供一种类似于映射的机制来匹配对应的值
补充一个例子 详细例子可以参照 比如这里 :class:`~fastNLP.CrossEntropyLoss` 将尝试找到名为'label'的内容来作为真实值得到loss
而pred=None, :class:`~fastNLP.CrossEntropyLoss` 使用'pred'作为名称匹配预测值
正好forward的返回值也叫pred所以这里不需要申明pred
尽管fastNLP使用了映射机制来使得loss的计算变得比较灵活但有些情况下loss必须在模型中进行计算比如使用了CRF的模型
fastNLP中提供了 :class:`~fastNLP.LossInForward` 这个loss
这个loss的原理是直接在forward()的返回结果中找到loss_key(默认寻找'loss')指定的那个tensor并使用它作为loss
如果Trainer初始化没有提供loss则默认使用 :class:`~fastNLP.LossInForward`
.. todo::
补充一个例子 详细例子可以参照
1.3 Metric 1.3 Metric
:mod:`Metric<fastNLP.core.metrics>` 使用了与上述Loss一样的策略即使用名称进行匹配 ----------------------------
AccuracyMetric(target='label')的情况与CrossEntropyLoss 是同理的
在进行验证时可能用到的计算与forward()中不太一致没有办法直接从forward()的结果中得到预测值这时模型可以提供一个predict()方法
如果提供的模型具有predict方法则在模型验证时将调用predict()方法获取预测结果
传入到predict()的参数也是从DataSet中被设置为input的field中选择出来的;
与forward()一样返回值需要为一个dict
.. todo::
补充一个例子 具体例子可以参考
2 Trainer的代码检查 :mod:`Metric<fastNLP.core.metrics>` 使用了与上述Loss一样的策略即使用名称进行匹配
由于在fastNLP中采取了映射的机制所以难免可能存在对应出错的情况Trainer提供一种映射检查机制可以通过check_code_level来进行控制 AccuracyMetric(target='label')的情况与CrossEntropyLoss 是同理的
比如下面的例子中由于各种原因产生的报错
在进行验证时可能用到的计算与forward()中不太一致没有办法直接从forward()的结果中得到预测值这时模型可以提供一个predict()方法
如果提供的模型具有predict方法则在模型验证时将调用predict()方法获取预测结果
传入到predict()的参数也是从DataSet中被设置为input的field中选择出来的;
与forward()一样返回值需要为一个dict
.. todo::
补充一个例子 具体例子可以参考
----------------------------
2. Trainer code check
----------------------------

Because fastNLP relies on this name-mapping mechanism, mismatches are bound to occur occasionally. Trainer therefore provides a mapping check, whose strictness is controlled via check_code_level.
The examples below show errors raised for various reasons.

Example2.1
----------------------------

.. code-block:: python

    import numpy as np
    from torch import nn
    import torch
    from torch.optim import SGD
    from fastNLP import Trainer
    from fastNLP import DataSet

    class Model(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(1, 1)
        def forward(self, x, b):
            loss = torch.mean((self.fc(x)-b)**2)
            return {'loss': loss}
    model = Model()

    dataset = DataSet({'a': np.arange(10), 'b':np.arange(10)*2})
    dataset.set_input('a', 'b')

    trainer = Trainer(dataset, model, loss=None, optimizer=SGD(model.parameters(), lr=0.001))

    trainer = Trainer(dataset, model, SGD(model.parameters()))
    # the following error is raised
    # input fields after batch(if batch size is 2):
    #     a: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2])
    #     b: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2])
    # There is no target field.
    # ....
    # NameError:
    # Problems occurred when calling Model.forward(self, x, b)
    #     missing param: ['x']
    #     unused field: ['a']
    #     Suggestion: You need to provide ['x'] in DataSet and set it as input.

The reason is that, when the Trainer is initialized, fastNLP tries to run one forward() and backward() pass with a batch of batch_size=2. Two kinds of
information in the output can guide you here:

1 'input fields after batch...' shows, for every field of the train dataset after the Batch operation, its type and shape. Nothing is shown for targets here
because the train dataset has no target. This part lets you verify that everything needed has indeed been set as input or target.

2 NameError: a NameError occurs when the mapping fails. Here it is raised because, while trying to run forward (which you can tell from Model.forward(self, x, b)),
the 'x' required by forward() could not be found; the message also points out that 'x' is missing while 'a' is unused, so most likely
a field name is wrong. Renaming the field 'a' in the dataset to 'x', or renaming the model parameter from 'x' to 'a', both fix the problem.

The next example fails because the value needed for the loss computation cannot be found.
Example2.2
----------------------------

.. code-block:: python

    import numpy as np
    from torch import nn
    from torch.optim import SGD
    from fastNLP import Trainer
    from fastNLP import DataSet
    from fastNLP import L1Loss
    import torch

    class Model(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(1, 1)
        def forward(self, a):
            return {'pred_b': self.fc(a.unsqueeze(1)).squeeze(1), 'No use':1}

    model = Model()

    dataset = DataSet({'a': np.arange(10, dtype=float), 'b':np.arange(10, dtype=float)*2})

    dataset.set_input('a')
    dataset.set_target('b')

    trainer = Trainer(dataset, model, loss=L1Loss(target='label'), optimizer=SGD(model.parameters(), lr=0.001))
    # the error message is as follows
    # input fields after batch(if batch size is 2):
    #     a: (1)type:torch.Tensor (2)dtype:torch.float32, (3)shape:torch.Size([2])
    # target fields after batch(if batch size is 2):
    #     b: (1)type:torch.Tensor (2)dtype:torch.float32, (3)shape:torch.Size([2])
    # ....
    # NameError:
    # Problems occurred when calling L1Loss.get_loss(self, pred, target)
    #     missing param: ['pred(assign to `pred` in `L1Loss`)', 'label(assign to `target` in `L1Loss`)']
    #     unused field: ['b']
    #     unused param: ['pred_b', 'No use']
    #     target field: ['b']
    #     param from Model.forward(self, a): ['pred_b', 'No use']
    #     Suggestion: (1). Check key assignment for `target` when initialize L1Loss. Or provide `label` in DataSet or output of Model.forward(self, a).
    #             (2). Check key assignment for `pred` when initialize L1Loss. Or provide `pred` in DataSet or output of Model.forward(self, a).

The error message again has two parts:

1 The first part is the same as above.

2 The error is raised because the values needed for the loss computation cannot be found (which you can tell from L1Loss.get_loss(self, pred, target));
neither `pred` nor `label` (we set target to 'label' when initializing L1Loss) was found.
'unused field' lists fields that exist in the DataSet but were set neither as input nor as target;
'unused param' lists keys returned by forward() that were not used; 'target field' lists the fields set as target;
'param from Model.forward(self, a)' lists all keys returned by forward(). "Suggestion" gives advice on handling the current error.

In some situations, however, for example when forward() returns exactly one value and there is exactly one target, fastNLP does not perform the matching and directly uses the forward() result as pred
and the DataSet target as target; the extra 'No use' key in the return value above exists only to force the Loss to go through the matching.

The following shows the error raised when a dev dataset is present and something is set up incorrectly.
Example2.3
----------------------------

.. code-block:: python

    import numpy as np
    from torch import nn
    from torch.optim import SGD
    from fastNLP import Trainer
    from fastNLP import DataSet
    from fastNLP import AccuracyMetric
    import torch

    class Model(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(1, 1)
        def forward(self, a, b):
            loss = torch.mean((self.fc(a.float().unsqueeze(1))-b.float())**2)
            return {'loss': loss}
        def predict(self, a):  # predict() is used for evaluation
            return {'output':self.fc(a.float().unsqueeze(1))} # the returned dict does not contain the key 'pred'
    model = Model()

    dataset = DataSet({'a': np.arange(10), 'b':np.arange(10)*2})
    dev_data = DataSet({'a': np.arange(10, 20), 'b':np.arange(10, 20)*2})

    dataset.set_input('a', 'b')
    dev_data.set_input('a')  # no target is set here

    trainer = Trainer(dataset, model, loss=None, optimizer=SGD(model.parameters(), lr=0.001),
                     dev_data=dev_data, metrics=AccuracyMetric())

    # error message
    # ...
    # NameError:
    # Problems occurred when calling AccuracyMetric.evaluate(self, pred, target, seq_len=None)
    #     missing param: ['pred(assign to `pred` in `AccuracyMetric`)', 'target(assign to `target` in `AccuracyMetric`)']
    #     unused param: ['output']
    #     target field: []
    #     param from Model.predict(self, a): ['output']
    #     Suggestion: (1). Check key assignment for `pred` when initialize AccuracyMetric. Or provide `pred` in DataSet or output of Model.predict(self, a).
    #             (2). Check key assignment for `target` when initialize AccuracyMetric. Or provide `target` in DataSet or output of Model.predict(self, a).

The error message is similar to the previous ones, but 'AccuracyMetric.evaluate(self, pred, target, seq_len=None)' tells you that the error happened during evaluation;
this way you do not need to train for a whole epoch before discovering that the evaluation is set up incorrectly. The fix here is to state, when initializing the metric, that
`pred` should be taken from 'output', i.e. AccuracyMetric(pred='output').

The strictness of the check can be adjusted through check_code_level; the default is 0, meaning the check is performed.
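Continuing the sketch above (this part is not from the patch), one explicit way to repair the mapping is to mark 'b' as the dev target and name both keys when constructing the metric; check_code_level=-1 would skip the start-up check entirely:

.. code-block:: python

    dev_data.set_target('b')
    trainer = Trainer(dataset, model, loss=None, optimizer=SGD(model.parameters(), lr=0.001),
                      dev_data=dev_data, metrics=AccuracyMetric(pred='output', target='b'),
                      check_code_level=0)  # -1 disables the check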
----------------------------
3. Trainer and callbacks
----------------------------

Although Trainer already integrates quite a few features, it cannot cover everything a training run may need, such as negative sampling, learning rate decay or early stopping.
To address this, fastNLP introduces a callback mechanism: a :class:`~fastNLP.Callback` is a collection of functions that run at specific stages of the Trainer's training loop.
Every :class:`~fastNLP.Callback` has hooks of the form on_* (e.g. on_train_start, on_backward_begin);
if a Callback implements such a hook, the Trainer calls it when it reaches the corresponding stage, for example::

    from fastNLP import Callback, EarlyStopCallback, Trainer, CrossEntropyLoss, AccuracyMetric
    from fastNLP.models import CNNText
    import time

    start_time = time.time()

    class MyCallback(Callback):
        def on_epoch_end(self):
            print('{:d}ms\n\n'.format(round((time.time()-start_time)*1000)))

    model = CNNText((len(vocab),50), num_classes=5, padding=2, dropout=0.1)
    trainer = Trainer(model=model, train_data=train_data, dev_data=dev_data, loss=CrossEntropyLoss(),
                      metrics=AccuracyMetric(), callbacks=[MyCallback(),EarlyStopCallback(10)])
    trainer.train()

Here we defined our own callback by subclassing :class:`~fastNLP.Callback` and passed it, together with the built-in :class:`~fastNLP.EarlyStopCallback`,
to the :class:`~fastNLP.Trainer`, extending what the :class:`~fastNLP.Trainer` can do.

fastNLP already ships with many ready-to-use callbacks; see :doc:`fastNLP.core.callback`.
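As one more hypothetical sketch (not from this patch), a callback can also hook into the backward pass, assuming the Callback base class exposes self.model and an on_backward_end hook as its built-in callbacks do, e.g. for gradient clipping:

.. code-block:: python

    import torch
    from fastNLP import Callback

    class ClipCallback(Callback):
        def on_backward_end(self):
            # runs after loss.backward() and before the optimizer step
            torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=5.0)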
""" """
__all__ = [ __all__ = [
@ -311,8 +333,9 @@ try:
from tqdm.auto import tqdm from tqdm.auto import tqdm
except: except:
from .utils import _pseudo_tqdm as tqdm from .utils import _pseudo_tqdm as tqdm
import warnings
from .batch import Batch from .batch import DataSetIter, BatchIter
from .callback import CallbackManager, CallbackException from .callback import CallbackManager, CallbackException
from .dataset import DataSet from .dataset import DataSet
from .losses import _prepare_losser from .losses import _prepare_losser
@ -320,7 +343,6 @@ from .metrics import _prepare_metrics
from .optimizer import Optimizer from .optimizer import Optimizer
from .sampler import Sampler from .sampler import Sampler
from .sampler import RandomSampler from .sampler import RandomSampler
from .sampler import SequentialSampler
from .tester import Tester from .tester import Tester
from .utils import _CheckError from .utils import _CheckError
from .utils import _build_args from .utils import _build_args
@ -351,6 +373,8 @@ class Trainer(object):
:param int batch_size: 训练和验证的时候的batch大小 :param int batch_size: 训练和验证的时候的batch大小
:param loss: 使用的 :class:`~fastNLP.core.losses.LossBase` 对象当为None时默认使用 :class:`~fastNLP.LossInForward` :param loss: 使用的 :class:`~fastNLP.core.losses.LossBase` 对象当为None时默认使用 :class:`~fastNLP.LossInForward`
:param sampler: Batch数据生成的顺序 :class:`~fastNLP.Sampler` 类型如果为None默认使用 :class:`~fastNLP.RandomSampler` :param sampler: Batch数据生成的顺序 :class:`~fastNLP.Sampler` 类型如果为None默认使用 :class:`~fastNLP.RandomSampler`
:param drop_last: 如果最后一个batch没有正好为batch_size这么多数据就扔掉最后一个batch
:param num_workers: int, 有多少个线程来进行数据pad处理
:param update_every: int, 多少步更新一次梯度用于希望累计梯度的场景比如需要128的batch_size, 但是直接设为128 :param update_every: int, 多少步更新一次梯度用于希望累计梯度的场景比如需要128的batch_size, 但是直接设为128
会导致内存不足通过设置batch_size=32, update_every=4达到目的当optimizer为None时该参数无效 会导致内存不足通过设置batch_size=32, update_every=4达到目的当optimizer为None时该参数无效
:param int n_epochs: 需要优化迭代多少次 :param int n_epochs: 需要优化迭代多少次
@ -367,7 +391,6 @@ class Trainer(object):
:param int validate_every: 多少个step在验证集上验证一次; 如果为-1则每个epoch结束验证一次仅在传入dev_data时有效 :param int validate_every: 多少个step在验证集上验证一次; 如果为-1则每个epoch结束验证一次仅在传入dev_data时有效
:param str,None save_path: 将模型保存路径如果为None则不保存模型如果dev_data为None则保存最后一次迭代的模型 :param str,None save_path: 将模型保存路径如果为None则不保存模型如果dev_data为None则保存最后一次迭代的模型
保存的时候不仅保存了参数还保存了模型结构即便使用DataParallel这里也只保存模型 保存的时候不仅保存了参数还保存了模型结构即便使用DataParallel这里也只保存模型
:param prefetch: bool, 是否使用额外的进程对产生batch数据理论上会使得Batch迭代更快
:param bool use_tqdm: 是否使用tqdm来显示训练进度; 如果为False则将loss打印在终端中 :param bool use_tqdm: 是否使用tqdm来显示训练进度; 如果为False则将loss打印在终端中
:param str,int,torch.device,list(int) device: 将模型load到哪个设备默认为None即Trainer不对模型 :param str,int,torch.device,list(int) device: 将模型load到哪个设备默认为None即Trainer不对模型
的计算位置进行管理支持以下的输入: 的计算位置进行管理支持以下的输入:
@ -394,16 +417,17 @@ class Trainer(object):
""" """
def __init__(self, train_data, model, optimizer=None, loss=None, def __init__(self, train_data, model, optimizer=None, loss=None,
batch_size=32, sampler=None, update_every=1, batch_size=32, sampler=None, drop_last=False, update_every=1,
n_epochs=10, print_every=5, num_workers=0, n_epochs=10, print_every=5,
dev_data=None, metrics=None, metric_key=None, dev_data=None, metrics=None, metric_key=None,
validate_every=-1, save_path=None, validate_every=-1, save_path=None, use_tqdm=True, device=None, prefetch=False,
prefetch=False, use_tqdm=True, device=None, callbacks=None, check_code_level=0):
callbacks=None, if prefetch and num_workers==0:
check_code_level=0): num_workers = 1
if prefetch:
warnings.warn("prefetch is deprecated, will be removed in version 0.5.0, please use num_workers instead.")
super(Trainer, self).__init__() super(Trainer, self).__init__()
if not isinstance(train_data, DataSet):
raise TypeError(f"The type of train_data must be fastNLP.DataSet, got {type(train_data)}.")
if not isinstance(model, nn.Module): if not isinstance(model, nn.Module):
raise TypeError(f"The type of model must be torch.nn.Module, got {type(model)}.") raise TypeError(f"The type of model must be torch.nn.Module, got {type(model)}.")
@ -430,25 +454,37 @@ class Trainer(object):
if metric_key is not None: if metric_key is not None:
self.increase_better = False if metric_key[0] == "-" else True self.increase_better = False if metric_key[0] == "-" else True
self.metric_key = metric_key[1:] if metric_key[0] == "+" or metric_key[0] == "-" else metric_key self.metric_key = metric_key[1:] if metric_key[0] == "+" or metric_key[0] == "-" else metric_key
elif len(metrics) > 0: else:
self.metric_key = metrics[0].__class__.__name__.lower().strip('metric') self.metric_key = None
# prepare loss # prepare loss
losser = _prepare_losser(loss) losser = _prepare_losser(loss)
# sampler check # sampler check
if sampler is not None and not isinstance(sampler, Sampler): if sampler is not None and not isinstance(sampler, Sampler):
raise ValueError("The type of sampler should be fastNLP.BaseSampler, got {}.".format(type(sampler))) raise ValueError("The type of sampler should be fastNLP.BaseSampler, got {}.".format(type(sampler)))
if check_code_level > -1: if sampler is None:
sampler = RandomSampler()
elif hasattr(sampler, 'set_batch_size'):
sampler.set_batch_size(batch_size)
if isinstance(train_data, DataSet):
self.data_iterator = DataSetIter(
dataset=train_data, batch_size=batch_size, num_workers=num_workers, sampler=sampler, drop_last=drop_last)
elif isinstance(train_data, BatchIter):
self.data_iterator = train_data
else:
raise TypeError("train_data type {} not support".format(type(train_data)))
if check_code_level > -1 and isinstance(self.data_iterator, DataSetIter):
_check_code(dataset=train_data, model=model, losser=losser, metrics=metrics, dev_data=dev_data, _check_code(dataset=train_data, model=model, losser=losser, metrics=metrics, dev_data=dev_data,
metric_key=metric_key, check_level=check_code_level, metric_key=self.metric_key, check_level=check_code_level,
batch_size=min(batch_size, DEFAULT_CHECK_BATCH_SIZE)) batch_size=min(batch_size, DEFAULT_CHECK_BATCH_SIZE))
# _check_code 是 fastNLP 帮助你检查代码是否正确的方法 。如果你在错误栈中看到这行注释,请认真检查你的代码 # _check_code 是 fastNLP 帮助你检查代码是否正确的方法 。如果你在错误栈中看到这行注释,请认真检查你的代码
self.model = _move_model_to_device(model, device=device)
self.train_data = train_data self.train_data = train_data
self.dev_data = dev_data # If None, No validation. self.dev_data = dev_data # If None, No validation.
self.model = model
self.losser = losser self.losser = losser
self.metrics = metrics self.metrics = metrics
self.n_epochs = int(n_epochs) self.n_epochs = int(n_epochs)
@ -460,26 +496,22 @@ class Trainer(object):
self.best_dev_epoch = None self.best_dev_epoch = None
self.best_dev_step = None self.best_dev_step = None
self.best_dev_perf = None self.best_dev_perf = None
self.sampler = sampler if sampler is not None else RandomSampler()
self.prefetch = prefetch
self.n_steps = (len(self.train_data) // self.batch_size + int( self.n_steps = (len(self.train_data) // self.batch_size + int(
len(self.train_data) % self.batch_size != 0)) * self.n_epochs len(self.train_data) % self.batch_size != 0)) * int(drop_last==0) * self.n_epochs
self.model = _move_model_to_device(self.model, device=device)
if isinstance(optimizer, torch.optim.Optimizer): if isinstance(optimizer, torch.optim.Optimizer):
self.optimizer = optimizer self.optimizer = optimizer
elif isinstance(optimizer, Optimizer): elif isinstance(optimizer, Optimizer):
self.optimizer = optimizer.construct_from_pytorch(model.parameters()) self.optimizer = optimizer.construct_from_pytorch(self.model.parameters())
elif optimizer is None: elif optimizer is None:
self.optimizer = torch.optim.Adam(model.parameters(), lr=4e-3) self.optimizer = torch.optim.Adam(self.model.parameters(), lr=4e-3)
else: else:
raise TypeError("optimizer can only be torch.optim.Optimizer type, not {}.".format(type(optimizer))) raise TypeError("optimizer can only be torch.optim.Optimizer type, not {}.".format(type(optimizer)))
self.use_tqdm = use_tqdm self.use_tqdm = use_tqdm
self.pbar = None self.pbar = None
self.print_every = abs(self.print_every) self.print_every = abs(self.print_every)
if self.dev_data is not None: if self.dev_data is not None:
self.tester = Tester(model=self.model, self.tester = Tester(model=self.model,
data=self.dev_data, data=self.dev_data,
@ -493,15 +525,16 @@ class Trainer(object):
self.callback_manager = CallbackManager(env={"trainer": self}, self.callback_manager = CallbackManager(env={"trainer": self},
callbacks=callbacks) callbacks=callbacks)
def train(self, load_best_model=True, on_exception='ignore'): def train(self, load_best_model=True, on_exception='auto'):
""" """
使用该函数使Trainer开始训练 使用该函数使Trainer开始训练
:param bool load_best_model: 该参数只有在初始化提供了dev_data的情况下有效如果True, trainer将在返回之前重新加载dev表现 :param bool load_best_model: 该参数只有在初始化提供了dev_data的情况下有效如果True, trainer将在返回之前重新加载dev表现
最好的模型参数 最好的模型参数
:param str on_exception: 在训练过程遭遇exception并被 :py:class:Callback 的on_exception()处理后是否继续抛出异常 :param str on_exception: 在训练过程遭遇exception并被 :py:class:Callback 的on_exception()处理后是否继续抛出异常
支持'ignore''raise': 'ignore'将捕获异常写在Trainer.train()后面的代码将继续运行; 'raise'将异常抛出 支持'ignore','raise', 'auto': 'ignore'将捕获异常写在Trainer.train()后面的代码将继续运行; 'raise'将异常抛出;
'auto'将ignore以下两种Exception: CallbackException与KeyboardInterrupt, raise其它exception.
:return dict: 返回一个字典类型的数据, :return dict: 返回一个字典类型的数据,
内含以下内容:: 内含以下内容::
@ -530,12 +563,16 @@ class Trainer(object):
self.callback_manager.on_train_begin() self.callback_manager.on_train_begin()
self._train() self._train()
self.callback_manager.on_train_end() self.callback_manager.on_train_end()
except (CallbackException, KeyboardInterrupt, Exception) as e:
except BaseException as e:
self.callback_manager.on_exception(e) self.callback_manager.on_exception(e)
if on_exception=='raise': if on_exception == 'auto':
if not isinstance(e, (CallbackException, KeyboardInterrupt)):
raise e
elif on_exception == 'raise':
raise e raise e
if self.dev_data is not None and hasattr(self, 'best_dev_perf'): if self.dev_data is not None and self.best_dev_perf is not None:
print( print(
"\nIn Epoch:{}/Step:{}, got best dev performance:".format(self.best_dev_epoch, self.best_dev_step) + "\nIn Epoch:{}/Step:{}, got best dev performance:".format(self.best_dev_epoch, self.best_dev_step) +
self.tester._format_eval_results(self.best_dev_perf), ) self.tester._format_eval_results(self.best_dev_perf), )
@ -563,12 +600,14 @@ class Trainer(object):
self.step = 0 self.step = 0
self.epoch = 0 self.epoch = 0
start = time.time() start = time.time()
if isinstance(self.model, nn.DataParallel):
self._forward_func = self.model.module.forward
else:
self._forward_func = self.model.forward
with inner_tqdm(total=self.n_steps, postfix='loss:{0:<6.5f}', leave=False, dynamic_ncols=True) as pbar: with inner_tqdm(total=self.n_steps, postfix='loss:{0:<6.5f}', leave=False, dynamic_ncols=True) as pbar:
self.pbar = pbar self.pbar = pbar
avg_loss = 0 avg_loss = 0
data_iterator = Batch(self.train_data, batch_size=self.batch_size, sampler=self.sampler, as_numpy=False, data_iterator = self.data_iterator
prefetch=self.prefetch)
self.batch_per_epoch = data_iterator.num_batches self.batch_per_epoch = data_iterator.num_batches
for epoch in range(1, self.n_epochs + 1): for epoch in range(1, self.n_epochs + 1):
self.epoch = epoch self.epoch = epoch
@ -600,7 +639,7 @@ class Trainer(object):
if self.step % self.print_every == 0: if self.step % self.print_every == 0:
avg_loss = float(avg_loss) / self.print_every avg_loss = float(avg_loss) / self.print_every
if self.use_tqdm: if self.use_tqdm:
print_output = "loss:{0:<6.5f}".format(avg_loss) print_output = "loss:{:<6.5f}".format(avg_loss)
pbar.update(self.print_every) pbar.update(self.print_every)
else: else:
end = time.time() end = time.time()
@ -664,15 +703,15 @@ class Trainer(object):
"""Perform weight update on a model. """Perform weight update on a model.
""" """
if self.optimizer is not None and (self.step + 1) % self.update_every == 0: if self.step % self.update_every == 0:
self.optimizer.step() self.optimizer.step()
def _data_forward(self, network, x): def _data_forward(self, network, x):
x = _build_args(network.forward, **x) x = _build_args(self._forward_func, **x)
y = network(**x) y = network(**x)
if not isinstance(y, dict): if not isinstance(y, dict):
raise TypeError( raise TypeError(
f"The return value of {_get_func_signature(network.forward)} should be dict, got {type(y)}.") f"The return value of {_get_func_signature(self._forward_func)} should be dict, got {type(y)}.")
return y return y
def _grad_backward(self, loss): def _grad_backward(self, loss):
@ -682,7 +721,7 @@ class Trainer(object):
For PyTorch, just do "loss.backward()" For PyTorch, just do "loss.backward()"
""" """
if self.step % self.update_every == 0: if (self.step-1) % self.update_every == 0:
self.model.zero_grad() self.model.zero_grad()
loss.backward() loss.backward()
@ -741,7 +780,9 @@ class Trainer(object):
:return bool value: True means current results on dev set is the best. :return bool value: True means current results on dev set is the best.
""" """
indicator_val = _check_eval_results(metrics, self.metric_key, self.metrics) indicator, indicator_val = _check_eval_results(metrics, self.metric_key, self.metrics)
if self.metric_key is None:
self.metric_key = indicator
is_better = True is_better = True
if self.best_metric_indicator is None: if self.best_metric_indicator is None:
# first-time validation # first-time validation
@ -780,15 +821,34 @@ def _get_value_info(_dict):
strs.append(_str) strs.append(_str)
return strs return strs
from numbers import Number
from .batch import _to_tensor
def _check_code(dataset, model, losser, metrics, batch_size=DEFAULT_CHECK_BATCH_SIZE, def _check_code(dataset, model, losser, metrics, batch_size=DEFAULT_CHECK_BATCH_SIZE,
dev_data=None, metric_key=None, dev_data=None, metric_key=None,
check_level=0): check_level=0):
# check get_loss 方法 # check get_loss 方法
model_devcie = model.parameters().__next__().device model_devcie = _get_model_device(model=model)
batch = Batch(dataset=dataset, batch_size=batch_size, sampler=SequentialSampler()) def _iter():
for batch_count, (batch_x, batch_y) in enumerate(batch): start_idx = 0
while start_idx<len(dataset):
batch_x = {}
batch_y = {}
for field_name, field in dataset.get_all_fields().items():
indices = list(range(start_idx, min(start_idx+batch_size, len(dataset))))
if field.is_target or field.is_input:
batch = field.get(indices)
if field.dtype is not None and \
issubclass(field.dtype, Number) and not isinstance(batch, torch.Tensor):
batch, _ = _to_tensor(batch, field.dtype)
if field.is_target:
batch_y[field_name] = batch
if field.is_input:
batch_x[field_name] = batch
yield (batch_x, batch_y)
start_idx += batch_size
for batch_count, (batch_x, batch_y) in enumerate(_iter()):
_move_dict_value_to_device(batch_x, batch_y, device=model_devcie) _move_dict_value_to_device(batch_x, batch_y, device=model_devcie)
# forward check # forward check
if batch_count == 0: if batch_count == 0:
@ -810,8 +870,11 @@ def _check_code(dataset, model, losser, metrics, batch_size=DEFAULT_CHECK_BATCH_
print(info_str) print(info_str)
_check_forward_error(forward_func=model.forward, dataset=dataset, _check_forward_error(forward_func=model.forward, dataset=dataset,
batch_x=batch_x, check_level=check_level) batch_x=batch_x, check_level=check_level)
if isinstance(model, nn.DataParallel):
refined_batch_x = _build_args(model.forward, **batch_x) forward_func = model.module.forward
else:
forward_func = model.forward
refined_batch_x = _build_args(forward_func, **batch_x)
pred_dict = model(**refined_batch_x) pred_dict = model(**refined_batch_x)
func_signature = _get_func_signature(model.forward) func_signature = _get_func_signature(model.forward)
if not isinstance(pred_dict, dict): if not isinstance(pred_dict, dict):
@ -856,26 +919,16 @@ def _check_eval_results(metrics, metric_key, metric_list):
loss, metrics = metrics loss, metrics = metrics
if isinstance(metrics, dict): if isinstance(metrics, dict):
if len(metrics) == 1: metric_dict = list(metrics.values())[0] # 取第一个metric
# only single metric, just use it
metric_dict = list(metrics.values())[0]
metrics_name = list(metrics.keys())[0]
else:
metrics_name = metric_list[0].__class__.__name__
if metrics_name not in metrics:
raise RuntimeError(f"{metrics_name} is chosen to do validation, but got {metrics}")
metric_dict = metrics[metrics_name]
if len(metric_dict) == 1: if metric_key is None:
indicator_val, indicator = list(metric_dict.values())[0], list(metric_dict.keys())[0] indicator_val, indicator = list(metric_dict.values())[0], list(metric_dict.keys())[0]
elif len(metric_dict) > 1 and metric_key is None:
raise RuntimeError(
f"Got multiple metric keys: {metric_dict}, but metric_key is not set. Which one to use?")
else: else:
# metric_key is set # metric_key is set
if metric_key not in metric_dict: if metric_key not in metric_dict:
raise RuntimeError(f"metric key {metric_key} not found in {metric_dict}") raise RuntimeError(f"metric key {metric_key} not found in {metric_dict}")
indicator_val = metric_dict[metric_key] indicator_val = metric_dict[metric_key]
indicator = metric_key
else: else:
raise RuntimeError("Invalid metrics type. Expect {}, got {}".format((tuple, dict), type(metrics))) raise RuntimeError("Invalid metrics type. Expect {}, got {}".format((tuple, dict), type(metrics)))
return indicator_val return indicator, indicator_val
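The hunks above add drop_last and num_workers, deprecate prefetch, and change the default of train()'s on_exception to 'auto'. A minimal usage sketch under those assumptions (train_ds and model are placeholders, not taken from the diff):

.. code-block:: python

    from torch.optim import Adam
    from fastNLP import Trainer

    trainer = Trainer(train_ds, model,
                      optimizer=Adam(model.parameters(), lr=1e-3),
                      batch_size=32, update_every=4,   # accumulate to an effective batch size of 128
                      num_workers=2, drop_last=False,  # replaces the deprecated prefetch=True
                      check_code_level=0)
    trainer.train(on_exception='auto')  # 'auto' swallows CallbackException/KeyboardInterrupt, re-raises the rest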

View File

@ -4,7 +4,6 @@ utils模块实现了 fastNLP 内部和外部所需的很多工具。其中用户
__all__ = [ __all__ = [
"cache_results", "cache_results",
"seq_len_to_mask", "seq_len_to_mask",
"Example",
] ]
import _pickle import _pickle
@ -16,34 +15,35 @@ from collections import Counter, namedtuple
import numpy as np import numpy as np
import torch import torch
import torch.nn as nn import torch.nn as nn
from typing import List
_CheckRes = namedtuple('_CheckRes', ['missing', 'unused', 'duplicated', 'required', 'all_needed', _CheckRes = namedtuple('_CheckRes', ['missing', 'unused', 'duplicated', 'required', 'all_needed',
'varargs']) 'varargs'])
class Example(dict): class Option(dict):
"""a dict can treat keys as attributes""" """a dict can treat keys as attributes"""
def __getattr__(self, item): def __getattr__(self, item):
try: try:
return self.__getitem__(item) return self.__getitem__(item)
except KeyError: except KeyError:
raise AttributeError(item) raise AttributeError(item)
def __setattr__(self, key, value): def __setattr__(self, key, value):
if key.startswith('__') and key.endswith('__'): if key.startswith('__') and key.endswith('__'):
raise AttributeError(key) raise AttributeError(key)
self.__setitem__(key, value) self.__setitem__(key, value)
def __delattr__(self, item): def __delattr__(self, item):
try: try:
self.pop(item) self.pop(item)
except KeyError: except KeyError:
raise AttributeError(item) raise AttributeError(item)
def __getstate__(self): def __getstate__(self):
return self return self
def __setstate__(self, state): def __setstate__(self, state):
self.update(state) self.update(state)
@ -164,6 +164,31 @@ def cache_results(_cache_fp, _refresh=False, _verbose=1):
return wrapper_ return wrapper_
def _save_model(model, model_name, save_dir, only_param=False):
""" 存储不含有显卡信息的state_dict或model
:param model:
:param model_name:
:param save_dir: 保存的directory
:param only_param:
:return:
"""
model_path = os.path.join(save_dir, model_name)
if not os.path.isdir(save_dir):
os.makedirs(save_dir, exist_ok=True)
if isinstance(model, nn.DataParallel):
model = model.module
if only_param:
state_dict = model.state_dict()
for key in state_dict:
state_dict[key] = state_dict[key].cpu()
torch.save(state_dict, model_path)
else:
_model_device = _get_model_device(model)
model.cpu()
torch.save(model, model_path)
model.to(_model_device)
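A usage sketch for the helper above (the path and file name are made up); with only_param=True only a CPU state_dict is written, otherwise the whole model is moved to CPU, saved, and moved back:

.. code-block:: python

    _save_model(model, model_name='best_model.pkl', save_dir='./checkpoints', only_param=True)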
# def save_pickle(obj, pickle_path, file_name): # def save_pickle(obj, pickle_path, file_name):
# """Save an object into a pickle file. # """Save an object into a pickle file.
# #
@ -285,6 +310,7 @@ def _get_model_device(model):
:param model: nn.Module :param model: nn.Module
:return: torch.device,None 如果返回值为None说明这个模型没有任何参数 :return: torch.device,None 如果返回值为None说明这个模型没有任何参数
""" """
# TODO 这个函数存在一定的风险因为同一个模型可能存在某些parameter不在显卡中比如BertEmbedding. 或者跨显卡
assert isinstance(model, nn.Module) assert isinstance(model, nn.Module)
parameters = list(model.parameters()) parameters = list(model.parameters())
@ -295,6 +321,13 @@ def _get_model_device(model):
def _build_args(func, **kwargs): def _build_args(func, **kwargs):
"""
根据func的初始化参数从kwargs中选择func需要的参数
:param func: callable
:param kwargs: 参数
:return:dict. func中用到的参数
"""
spect = inspect.getfullargspec(func) spect = inspect.getfullargspec(func)
if spect.varkw is not None: if spect.varkw is not None:
return kwargs return kwargs
@ -635,13 +668,13 @@ def _check_forward_error(forward_func, batch_x, dataset, check_level):
warnings.warn(message=_unused_warn) warnings.warn(message=_unused_warn)
def seq_len_to_mask(seq_len): def seq_len_to_mask(seq_len, max_len=None):
""" """
将一个表示sequence length的一维数组转换为二维的mask不包含的位置为0 将一个表示sequence length的一维数组转换为二维的mask不包含的位置为0
转变 1-d seq_len到2-d mask. 转变 1-d seq_len到2-d mask.
Example:: .. code-block::
>>> seq_len = torch.arange(2, 16) >>> seq_len = torch.arange(2, 16)
>>> mask = seq_len_to_mask(seq_len) >>> mask = seq_len_to_mask(seq_len)
@ -651,20 +684,26 @@ def seq_len_to_mask(seq_len):
>>> mask = seq_len_to_mask(seq_len) >>> mask = seq_len_to_mask(seq_len)
>>> print(mask.shape) >>> print(mask.shape)
(14, 15) (14, 15)
>>> seq_len = torch.arange(2, 16)
>>> mask = seq_len_to_mask(seq_len, max_len=100)
>>>print(mask.size())
torch.Size([14, 100])
:param np.ndarray,torch.LongTensor seq_len: shape将是(B,) :param np.ndarray,torch.LongTensor seq_len: shape将是(B,)
:return: np.ndarray or torch.Tensor, shape将是(B, max_length) 元素类似为bool或torch.uint8 :param int max_len: 将长度pad到这个长度默认(None)使用的是seq_len中最长的长度但在nn.DataParallel的场景下可能不同卡的seq_len会有
区别所以需要传入一个max_len使得mask的长度是pad到该长度
:return: np.ndarray, torch.Tensor shape将是(B, max_length) 元素类似为bool或torch.uint8
""" """
if isinstance(seq_len, np.ndarray): if isinstance(seq_len, np.ndarray):
assert len(np.shape(seq_len)) == 1, f"seq_len can only have one dimension, got {len(np.shape(seq_len))}." assert len(np.shape(seq_len)) == 1, f"seq_len can only have one dimension, got {len(np.shape(seq_len))}."
max_len = int(seq_len.max()) max_len = int(max_len) if max_len else int(seq_len.max())
broad_cast_seq_len = np.tile(np.arange(max_len), (len(seq_len), 1)) broad_cast_seq_len = np.tile(np.arange(max_len), (len(seq_len), 1))
mask = broad_cast_seq_len < seq_len.reshape(-1, 1) mask = broad_cast_seq_len < seq_len.reshape(-1, 1)
elif isinstance(seq_len, torch.Tensor): elif isinstance(seq_len, torch.Tensor):
assert seq_len.dim() == 1, f"seq_len can only have one dimension, got {seq_len.dim() == 1}." assert seq_len.dim() == 1, f"seq_len can only have one dimension, got {seq_len.dim() == 1}."
batch_size = seq_len.size(0) batch_size = seq_len.size(0)
max_len = seq_len.max().long() max_len = int(max_len) if max_len else seq_len.max().long()
broad_cast_seq_len = torch.arange(max_len).expand(batch_size, -1).to(seq_len) broad_cast_seq_len = torch.arange(max_len).expand(batch_size, -1).to(seq_len)
mask = broad_cast_seq_len.lt(seq_len.unsqueeze(1)) mask = broad_cast_seq_len.lt(seq_len.unsqueeze(1))
else: else:
@ -698,3 +737,54 @@ class _pseudo_tqdm:
def __exit__(self, exc_type, exc_val, exc_tb): def __exit__(self, exc_type, exc_val, exc_tb):
del self del self
def iob2(tags: List[str]) -> List[str]:
"""
检查数据是否是合法的IOB数据如果是IOB1会被自动转换为IOB2两者的差异见
https://datascience.stackexchange.com/questions/37824/difference-between-iob-and-iob2-format
:param tags: 需要转换的tags, 需要为大写的BIO标签
"""
for i, tag in enumerate(tags):
if tag == "O":
continue
split = tag.split("-")
if len(split) != 2 or split[0] not in ["I", "B"]:
raise TypeError("The encoding schema is not a valid IOB type.")
if split[0] == "B":
continue
elif i == 0 or tags[i - 1] == "O": # conversion IOB1 to IOB2
tags[i] = "B" + tag[1:]
elif tags[i - 1][1:] == tag[1:]:
continue
else: # conversion IOB1 to IOB2
tags[i] = "B" + tag[1:]
return tags
def iob2bioes(tags: List[str]) -> List[str]:
"""
将iob的tag转换为bioes编码
:param tags: List[str]. 编码需要是大写的
:return:
"""
new_tags = []
for i, tag in enumerate(tags):
if tag == 'O':
new_tags.append(tag)
else:
split = tag.split('-')[0]
if split == 'B':
if i + 1 != len(tags) and tags[i + 1].split('-')[0] == 'I':
new_tags.append(tag)
else:
new_tags.append(tag.replace('B-', 'S-'))
elif split == 'I':
if i + 1 < len(tags) and tags[i + 1].split('-')[0] == 'I':
new_tags.append(tag)
else:
new_tags.append(tag.replace('I-', 'E-'))
else:
raise TypeError("Invalid IOB format.")
return new_tags
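A quick sketch of how the two helpers above compose (tags chosen for illustration; in this patch the functions live in fastNLP.core.utils):

.. code-block:: python

    # IOB1 allows an entity to start with an I- tag; iob2() normalises such sequences to IOB2
    tags = iob2(["I-ORG", "I-ORG", "O", "I-PER"])   # ['B-ORG', 'I-ORG', 'O', 'B-PER']
    print(iob2bioes(tags))                          # ['B-ORG', 'E-ORG', 'O', 'S-PER']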

View File

@ -4,12 +4,14 @@ __all__ = [
] ]
from functools import wraps from functools import wraps
from collections import Counter from collections import Counter, defaultdict
from .dataset import DataSet from .dataset import DataSet
from .utils import Example from .utils import Option
from functools import partial
import numpy as np
class VocabularyOption(Example): class VocabularyOption(Option):
def __init__(self, def __init__(self,
max_size=None, max_size=None,
min_freq=None, min_freq=None,
@ -89,41 +91,88 @@ class Vocabulary(object):
self.word2idx = None self.word2idx = None
self.idx2word = None self.idx2word = None
self.rebuild = True self.rebuild = True
# 用于承载不需要单独创建entry的词语具体见from_dataset()方法
self._no_create_word = Counter()
@_check_build_status @_check_build_status
def update(self, word_lst): def update(self, word_lst, no_create_entry=False):
"""依次增加序列中词在词典中的出现频率 """依次增加序列中词在词典中的出现频率
:param list word_lst: a list of strings :param list word_lst: a list of strings
:param bool no_create_entry: 在使用fastNLP.TokenEmbedding加载预训练模型时没有从预训练词表中找到这个词的处理方式
如果为True则不会有这个词语创建一个单独的entry它将一直被指向unk的表示; 如果为False则为这个词创建一个单独
的entry如果这个word来自于dev或者test一般设置为True如果来自与train一般设置为False以下两种情况: 如果新
加入一个word且no_create_entry为True但这个词之前已经在Vocabulary中且并不是no_create_entry的则还是会为这
个词创建一个单独的vector; 如果no_create_entry为False但这个词之前已经在Vocabulary中且并不是no_create_entry的
则这个词将认为是需要创建单独的vector的
""" """
self._add_no_create_entry(word_lst, no_create_entry)
self.word_count.update(word_lst) self.word_count.update(word_lst)
return self
@_check_build_status @_check_build_status
def add(self, word): def add(self, word, no_create_entry=False):
""" """
增加一个新词在词典中的出现频率 增加一个新词在词典中的出现频率
:param str word: 新词 :param str word: 新词
:param bool no_create_entry: 在使用fastNLP.TokenEmbedding加载预训练模型时没有从预训练词表中找到这个词的处理方式
如果为True则不会有这个词语创建一个单独的entry它将一直被指向unk的表示; 如果为False则为这个词创建一个单独
的entry如果这个word来自于dev或者test一般设置为True如果来自与train一般设置为False以下两种情况: 如果新
加入一个word且no_create_entry为True但这个词之前已经在Vocabulary中且并不是no_create_entry的则还是会为这
个词创建一个单独的vector; 如果no_create_entry为False但这个词之前已经在Vocabulary中且并不是no_create_entry的
则这个词将认为是需要创建单独的vector的
""" """
self._add_no_create_entry(word, no_create_entry)
self.word_count[word] += 1 self.word_count[word] += 1
return self
def _add_no_create_entry(self, word, no_create_entry):
"""
在新加入word时检查_no_create_word的设置
:param str, List[str] word:
:param bool no_create_entry:
:return:
"""
if isinstance(word, str):
word = [word]
for w in word:
if no_create_entry and self.word_count.get(w, 0) == self._no_create_word.get(w, 0):
self._no_create_word[w] += 1
elif not no_create_entry and w in self._no_create_word:
self._no_create_word.pop(w)
@_check_build_status @_check_build_status
def add_word(self, word): def add_word(self, word, no_create_entry=False):
""" """
增加一个新词在词典中的出现频率 增加一个新词在词典中的出现频率
:param str word: 新词 :param str word: 新词
:param bool no_create_entry: 在使用fastNLP.TokenEmbedding加载预训练模型时没有从预训练词表中找到这个词的处理方式
如果为True则不会有这个词语创建一个单独的entry它将一直被指向unk的表示; 如果为False则为这个词创建一个单独
的entry如果这个word来自于dev或者test一般设置为True如果来自与train一般设置为False以下两种情况: 如果新
加入一个word且no_create_entry为True但这个词之前已经在Vocabulary中且并不是no_create_entry的则还是会为这
个词创建一个单独的vector; 如果no_create_entry为False但这个词之前已经在Vocabulary中且并不是no_create_entry的
则这个词将认为是需要创建单独的vector的
""" """
self.add(word) self.add(word, no_create_entry=no_create_entry)
@_check_build_status @_check_build_status
def add_word_lst(self, word_lst): def add_word_lst(self, word_lst, no_create_entry=False):
""" """
依次增加序列中词在词典中的出现频率 依次增加序列中词在词典中的出现频率
:param list[str] word_lst: 词的序列 :param list[str] word_lst: 词的序列
:param bool no_create_entry: 在使用fastNLP.TokenEmbedding加载预训练模型时没有从预训练词表中找到这个词的处理方式
如果为True则不会有这个词语创建一个单独的entry它将一直被指向unk的表示; 如果为False则为这个词创建一个单独
的entry如果这个word来自于dev或者test一般设置为True如果来自与train一般设置为False以下两种情况: 如果新
加入一个word且no_create_entry为True但这个词之前已经在Vocabulary中且并不是no_create_entry的则还是会为这
个词创建一个单独的vector; 如果no_create_entry为False但这个词之前已经在Vocabulary中且并不是no_create_entry的
则这个词将认为是需要创建单独的vector的
""" """
self.update(word_lst) self.update(word_lst, no_create_entry=no_create_entry)
return self
def build_vocab(self): def build_vocab(self):
""" """
@ -133,10 +182,10 @@ class Vocabulary(object):
""" """
if self.word2idx is None: if self.word2idx is None:
self.word2idx = {} self.word2idx = {}
if self.padding is not None: if self.padding is not None:
self.word2idx[self.padding] = len(self.word2idx) self.word2idx[self.padding] = len(self.word2idx)
if self.unknown is not None: if self.unknown is not None:
self.word2idx[self.unknown] = len(self.word2idx) self.word2idx[self.unknown] = len(self.word2idx)
max_size = min(self.max_size, len(self.word_count)) if self.max_size else None max_size = min(self.max_size, len(self.word_count)) if self.max_size else None
words = self.word_count.most_common(max_size) words = self.word_count.most_common(max_size)
@ -148,13 +197,15 @@ class Vocabulary(object):
self.word2idx.update({w: i + start_idx for i, (w, _) in enumerate(words)}) self.word2idx.update({w: i + start_idx for i, (w, _) in enumerate(words)})
self.build_reverse_vocab() self.build_reverse_vocab()
self.rebuild = False self.rebuild = False
return self
def build_reverse_vocab(self): def build_reverse_vocab(self):
""" """
基于 "word to index" dict, 构建 "index to word" dict. 基于 `word to index` dict, 构建 `index to word` dict.
""" """
self.idx2word = {i: w for w, i in self.word2idx.items()} self.idx2word = {i: w for w, i in self.word2idx.items()}
return self
@_check_build_vocab @_check_build_vocab
def __len__(self): def __len__(self):
@ -205,9 +256,9 @@ class Vocabulary(object):
# remember to use `field_name` # remember to use `field_name`
vocab.index_dataset(train_data, dev_data, test_data, field_name='words') vocab.index_dataset(train_data, dev_data, test_data, field_name='words')
:param datasets: 需要转index的 class:`~fastNLP.DataSet` , 支持一个或多个list :param ~fastNLP.DataSet,List[~fastNLP.DataSet] datasets: 需要转index的一个或多个数据集
:param str field_name: 需要转index的field, 若有多个 DataSet, 每个DataSet都必须有此 field. :param str field_name: 需要转index的field, 若有多个 DataSet, 每个DataSet都必须有此 field.
目前仅支持 ``str`` , ``list(str)`` , ``list(list(str))`` 目前仅支持 ``str`` , ``List[str]`` , ``List[List[str]]``
:param str new_field_name: 保存结果的field_name. 若为 ``None`` , 将覆盖原field. :param str new_field_name: 保存结果的field_name. 若为 ``None`` , 将覆盖原field.
Default: ``None`` Default: ``None``
""" """
@ -240,19 +291,31 @@ class Vocabulary(object):
raise e raise e
else: else:
raise RuntimeError("Only DataSet type is allowed.") raise RuntimeError("Only DataSet type is allowed.")
return self
def from_dataset(self, *datasets, field_name): @property
def _no_create_word_length(self):
return len(self._no_create_word)
def from_dataset(self, *datasets, field_name, no_create_entry_dataset=None):
""" """
使用dataset的对应field中词构建词典:: 使用dataset的对应field中词构建词典::
# remember to use `field_name` # remember to use `field_name`
vocab.from_dataset(train_data1, train_data2, field_name='words') vocab.from_dataset(train_data1, train_data2, field_name='words')
:param datasets: 需要转index的 class:`~fastNLP.DataSet` , 支持一个或多个list :param ~fastNLP.DataSet,List[~fastNLP.DataSet] datasets: 需要转index的一个或多个数据集
:param field_name: 可为 ``str`` ``list(str)`` . :param str,List[str] field_name: 可为 ``str`` ``List[str]`` .
构建词典所使用的 field(s), 支持一个或多个field 构建词典所使用的 field(s), 支持一个或多个field
若有多个 DataSet, 每个DataSet都必须有这些field. 若有多个 DataSet, 每个DataSet都必须有这些field.
目前仅支持的field结构: ``str`` , ``list(str)`` , ``list(list(str))`` 目前仅支持的field结构: ``str`` , ``List[str]`` , ``list[List[str]]``
:param no_create_entry_dataset: 可以传入DataSet, List[DataSet]或者None(默认)该选项用在接下来的模型会使用pretrain
的embedding(包括glove, word2vec, elmo与bert)且会finetune的情况如果仅使用来自于train的数据建立vocabulary会导致test与dev
中的数据无法充分利用到来自于预训练embedding的信息所以在建立词表的时候将test与dev考虑进来会使得最终的结果更好
如果一个词出现在了train中但是没在预训练模型中embedding会为它用unk初始化但它是单独的一个vector如果
finetune embedding的话这个词在更新之后可能会有更好的表示; 而如果这个词仅出现在了dev或test中那么就不能为它们单独建立vector
而应该让它指向unk这个vector的值所以只位于no_create_entry_dataset中的token将首先从预训练的词表中寻找它的表示
如果找到了就使用该表示; 如果没有找到则认为该词的表示应该为unk的表示
:return self: :return self:
""" """
if isinstance(field_name, str): if isinstance(field_name, str):
@ -260,18 +323,21 @@ class Vocabulary(object):
elif not isinstance(field_name, list): elif not isinstance(field_name, list):
raise TypeError('invalid argument field_name: {}'.format(field_name)) raise TypeError('invalid argument field_name: {}'.format(field_name))
def construct_vocab(ins): def construct_vocab(ins, no_create_entry=False):
for fn in field_name: for fn in field_name:
field = ins[fn] field = ins[fn]
if isinstance(field, str): if isinstance(field, str):
self.add_word(field) self.add_word(field, no_create_entry=no_create_entry)
elif isinstance(field, list): elif isinstance(field, (list, np.ndarray)):
if not isinstance(field[0], list): if not isinstance(field[0], (list, np.ndarray)):
self.add_word_lst(field) for word in field:
self.add_word(word, no_create_entry=no_create_entry)
else: else:
if isinstance(field[0][0], list): if isinstance(field[0][0], (list, np.ndarray)):
raise RuntimeError("Only support field with 2 dimensions.") raise RuntimeError("Only support field with 2 dimensions.")
[self.add_word_lst(w) for w in field] for words in field:
for word in words:
self.add_word(word, no_create_entry=no_create_entry)
for idx, dataset in enumerate(datasets): for idx, dataset in enumerate(datasets):
if isinstance(dataset, DataSet): if isinstance(dataset, DataSet):
@ -281,13 +347,30 @@ class Vocabulary(object):
print("When processing the `{}` dataset, the following error occurred.".format(idx)) print("When processing the `{}` dataset, the following error occurred.".format(idx))
raise e raise e
else: else:
raise RuntimeError("Only DataSet type is allowed.") raise TypeError("Only DataSet type is allowed.")
if no_create_entry_dataset is not None:
partial_construct_vocab = partial(construct_vocab, no_create_entry=True)
if isinstance(no_create_entry_dataset, DataSet):
no_create_entry_dataset.apply(partial_construct_vocab)
elif isinstance(no_create_entry_dataset, list):
for dataset in no_create_entry_dataset:
if not isinstance(dataset, DataSet):
raise TypeError("Only DataSet type is allowed.")
dataset.apply(partial_construct_vocab)
return self return self
def _is_word_no_create_entry(self, word):
"""
判断当前的word是否是不需要创建entry的具体参见from_dataset的说明
:param word: str
:return: bool
"""
return word in self._no_create_word
def to_index(self, w): def to_index(self, w):
""" """
将词转为数字. 若词不再词典中被记录, 将视为 unknown, ``unknown=None`` , 将抛出 将词转为数字. 若词不再词典中被记录, 将视为 unknown, ``unknown=None`` , 将抛出``ValueError``::
``ValueError``::
index = vocab.to_index('abc') index = vocab.to_index('abc')
# equals to # equals to
@ -338,6 +421,8 @@ class Vocabulary(object):
self.word2idx = None self.word2idx = None
self.idx2word = None self.idx2word = None
self.rebuild = True self.rebuild = True
self._no_create_word.clear()
return self
def __getstate__(self): def __getstate__(self):
"""Use to prepare data for pickle. """Use to prepare data for pickle.
@ -359,5 +444,7 @@ class Vocabulary(object):
def __repr__(self): def __repr__(self):
return "Vocabulary({}...)".format(list(self.word_count.keys())[:5]) return "Vocabulary({}...)".format(list(self.word_count.keys())[:5])
@_check_build_vocab
def __iter__(self): def __iter__(self):
return iter(list(self.word_count.keys())) for word, index in self.word2idx.items():
yield word, index
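A small sketch (the field values are made up) of the no_create_entry flow added above: words that only occur in dev/test still receive indices, but are flagged so that a pretrained TokenEmbedding can later point them at the unk vector when they are absent from the pretrained vocabulary:

.. code-block:: python

    from fastNLP import DataSet, Vocabulary

    train_ds = DataSet({'words': [["fastnlp", "is", "nice"], ["another", "sentence"]]})
    dev_ds = DataSet({'words': [["only", "in", "dev"]]})

    vocab = Vocabulary()
    vocab.from_dataset(train_ds, field_name='words', no_create_entry_dataset=[dev_ds])

    print(vocab._is_word_no_create_entry("only"))  # True: the word never appeared in train
    print(vocab._is_word_no_create_entry("nice"))  # False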

View File

@ -0,0 +1,26 @@
"""
embeddings 模块主要用于从各种预训练的模型中获取词语的分布式表示目前支持的预训练模型包括word2vec, glove, ELMO, BERT等这里所有
embedding的forward输入都是形状为 ``(batch_size, max_len)`` 的torch.LongTensor输出都是 ``(batch_size, max_len, embedding_dim)``
torch.FloatTensor所有的embedding都可以使用 `self.num_embedding` 获取最大的输入index范围, `self.embeddig_dim` `self.embed_size` 获取embedding的
输出维度
"""
__all__ = [
"Embedding",
"StaticEmbedding",
"ElmoEmbedding",
"BertEmbedding",
"StackEmbedding",
"LSTMCharEmbedding",
"CNNCharEmbedding",
"get_embeddings"
]
from .embedding import Embedding
from .static_embedding import StaticEmbedding
from .elmo_embedding import ElmoEmbedding
from .bert_embedding import BertEmbedding
from .char_embedding import CNNCharEmbedding, LSTMCharEmbedding
from .stack_embedding import StackEmbedding
from .utils import get_embeddings
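The input/output contract described in the module docstring above can be exercised with the generic Embedding wrapper exported here (a sketch; it assumes the tuple form (num_embeddings, dim) is accepted, as get_embeddings does in this patch):

.. code-block:: python

    import torch
    from fastNLP import Vocabulary
    from fastNLP.embeddings import Embedding

    vocab = Vocabulary().add_word_lst("this is a test .".split())

    embed = Embedding((len(vocab), 50))   # randomly initialised lookup table
    words = torch.LongTensor([[vocab.to_index(w) for w in "this is a test .".split()]])
    print(embed(words).size())            # torch.Size([1, 5, 50])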

View File

@ -0,0 +1,334 @@
import os
import collections
from torch import nn
import torch
import numpy as np
from itertools import chain
from ..core.vocabulary import Vocabulary
from ..io.file_utils import _get_base_url, cached_path, PRETRAINED_BERT_MODEL_DIR
from ..modules.encoder.bert import _WordPieceBertModel, BertModel, BertTokenizer
from .contextual_embedding import ContextualEmbedding
class BertEmbedding(ContextualEmbedding):
"""
别名:class:`fastNLP.embeddings.BertEmbedding` :class:`fastNLP.embeddings.bert_embedding.BertEmbedding`
使用BERT对words进行编码的Embedding建议将输入的words长度限制在430以内而不要使用512(根据预训练模型参数可能有变化)这是由于
预训练的bert模型长度限制为512个token而因为输入的word是未进行word piece分割的(word piece的分割有BertEmbedding在输入word
时切分)在分割之后长度可能会超过最大长度限制
BertEmbedding可以支持自动下载权重当前支持的模型有以下的几种(待补充):
Example::
>>> import torch
>>> from fastNLP import Vocabulary
>>> vocab = Vocabulary().add_word_lst("The whether is good .".split())
>>> embed = BertEmbedding(vocab, model_dir_or_name='en-base-uncased', requires_grad=False, layers='4,-2,-1')
>>> words = torch.LongTensor([[vocab.to_index(word) for word in "The whether is good .".split()]])
>>> outputs = embed(words)
>>> outputs.size()
>>> # torch.Size([1, 5, 2304])
:param ~fastNLP.Vocabulary vocab: 词表
:param str model_dir_or_name: 模型所在目录或者模型的名称当传入模型所在目录时目录中应该包含一个词表文件(.txt作为后缀名),
权重文件(.bin作为文件后缀名), 配置文件(.json作为后缀名)
:param str layers: 输出embedding表示来自于哪些层不同层的结果按照layers中的顺序在最后一维concat起来','隔开层数可以以负数
去索引倒数几层
:param str pool_method: 因为在bert中每个word会被表示为多个word pieces, 当获取一个word的表示的时候怎样从它的word pieces
中计算得到它对应的表示支持 ``last`` , ``first`` , ``avg`` , ``max``
:param float word_dropout: 以多大的概率将一个词替换为unk这样既可以训练unk也是一定的regularize
:param float dropout: 以多大的概率对embedding的表示进行Dropout0.1即随机将10%的值置为0
:param bool include_cls_sep: bool在bert计算句子的表示的时候需要在前面加上[CLS][SEP], 是否在结果中保留这两个内容 这样
会使得word embedding的结果比输入的结果长两个token如果该值为True则在使用 :class::StackEmbedding 可能会与其它类型的
embedding长度不匹配
:param bool requires_grad: 是否需要gradient以更新Bert的权重
"""
def __init__(self, vocab: Vocabulary, model_dir_or_name: str='en-base-uncased', layers: str='-1',
pool_method: str='first', word_dropout=0, dropout=0, requires_grad: bool=False,
include_cls_sep: bool=False):
super(BertEmbedding, self).__init__(vocab, word_dropout=word_dropout, dropout=dropout)
# 根据model_dir_or_name检查是否存在并下载
if model_dir_or_name.lower() in PRETRAINED_BERT_MODEL_DIR:
PRETRAIN_URL = _get_base_url('bert')
model_name = PRETRAINED_BERT_MODEL_DIR[model_dir_or_name]
model_url = PRETRAIN_URL + model_name
model_dir = cached_path(model_url)
# 检查是否存在
elif os.path.isdir(os.path.expanduser(os.path.abspath(model_dir_or_name))):
model_dir = model_dir_or_name
else:
raise ValueError(f"Cannot recognize {model_dir_or_name}.")
self.model = _WordBertModel(model_dir=model_dir, vocab=vocab, layers=layers,
pool_method=pool_method, include_cls_sep=include_cls_sep)
self.requires_grad = requires_grad
self._embed_size = len(self.model.layers)*self.model.encoder.hidden_size
def _delete_model_weights(self):
del self.model
def forward(self, words):
"""
计算words的bert embedding表示计算之前会在每句话的开始增加[CLS]在结束增加[SEP], 并根据include_cls_sep判断要不要
删除这两个token的表示
:param torch.LongTensor words: [batch_size, max_len]
:return: torch.FloatTensor. batch_size x max_len x (768*len(self.layers))
"""
words = self.drop_word(words)
outputs = self._get_sent_reprs(words)
if outputs is not None:
return self.dropout(outputs)
outputs = self.model(words)
outputs = torch.cat([*outputs], dim=-1)
return self.dropout(outputs)
@property
def requires_grad(self):
"""
Embedding的参数是否允许优化True: 所有参数运行优化; False: 所有参数不允许优化; None: 部分允许优化部分不允许
:return:
"""
requires_grads = set([param.requires_grad for name, param in self.named_parameters()
if 'word_pieces_lengths' not in name])
if len(requires_grads) == 1:
return requires_grads.pop()
else:
return None
@requires_grad.setter
def requires_grad(self, value):
for name, param in self.named_parameters():
if 'word_pieces_lengths' in name: # 这个不能加入到requires_grad中
continue
param.requires_grad = value
class BertWordPieceEncoder(nn.Module):
"""
读取bert模型读取之后调用index_dataset方法在dataset中生成word_pieces这一列
:param str model_dir_or_name: 模型所在目录或者模型的名称默认值为 ``en-base-uncased``
:param str layers: 最终结果中的表示','隔开层数可以以负数去索引倒数几层
:param bool requires_grad: 是否需要gradient
"""
def __init__(self, model_dir_or_name: str='en-base-uncased', layers: str='-1',
requires_grad: bool=False):
super().__init__()
PRETRAIN_URL = _get_base_url('bert')
if model_dir_or_name in PRETRAINED_BERT_MODEL_DIR:
model_name = PRETRAINED_BERT_MODEL_DIR[model_dir_or_name]
model_url = PRETRAIN_URL + model_name
model_dir = cached_path(model_url)
# 检查是否存在
elif os.path.isdir(model_dir_or_name):
model_dir = model_dir_or_name
else:
raise ValueError(f"Cannot recognize {model_dir_or_name}.")
self.model = _WordPieceBertModel(model_dir=model_dir, layers=layers)
self._embed_size = len(self.model.layers) * self.model.encoder.hidden_size
self.requires_grad = requires_grad
@property
def requires_grad(self):
"""
Embedding的参数是否允许优化True: 所有参数运行优化; False: 所有参数不允许优化; None: 部分允许优化部分不允许
:return:
"""
requires_grads = set([param.requires_grad for name, param in self.named_parameters()])
if len(requires_grads) == 1:
return requires_grads.pop()
else:
return None
@requires_grad.setter
def requires_grad(self, value):
for name, param in self.named_parameters():
param.requires_grad = value
@property
def embed_size(self):
return self._embed_size
def index_datasets(self, *datasets, field_name):
"""
使用bert的tokenizer新生成word_pieces列加入到datasets中并将他们设置为input如果首尾不是
[CLS][SEP]会在首尾额外加入[CLS][SEP], 且将word_pieces这一列的pad value设置为了bert的pad value
:param datasets: DataSet对象
:param field_name: 基于哪一列的内容生成word_pieces列这一列中每个数据应该是List[str]的形式
:return:
"""
self.model.index_dataset(*datasets, field_name=field_name)
def forward(self, word_pieces, token_type_ids=None):
"""
计算words的bert embedding表示传入的words中应该自行包含[CLS][SEP]的tag
:param words: batch_size x max_len
:param token_type_ids: batch_size x max_len, 用于区分前一句和后一句话
:return: torch.FloatTensor. batch_size x max_len x (768*len(self.layers))
"""
outputs = self.model(word_pieces, token_type_ids)
outputs = torch.cat([*outputs], dim=-1)
return outputs
class _WordBertModel(nn.Module):
def __init__(self, model_dir:str, vocab:Vocabulary, layers:str='-1', pool_method:str='first', include_cls_sep:bool=False):
super().__init__()
self.tokenzier = BertTokenizer.from_pretrained(model_dir)
self.encoder = BertModel.from_pretrained(model_dir)
# 检查encoder_layer_number是否合理
encoder_layer_number = len(self.encoder.encoder.layer)
self.layers = list(map(int, layers.split(',')))
for layer in self.layers:
if layer<0:
assert -layer<=encoder_layer_number, f"The layer index:{layer} is out of scope for " \
f"a bert model with {encoder_layer_number} layers."
else:
assert layer<encoder_layer_number, f"The layer index:{layer} is out of scope for " \
f"a bert model with {encoder_layer_number} layers."
assert pool_method in ('avg', 'max', 'first', 'last')
self.pool_method = pool_method
self.include_cls_sep = include_cls_sep
# 将所有vocab中word的wordpiece计算出来, 需要额外考虑[CLS]和[SEP]
print("Start to generate word pieces for words.")
# 第一步统计出需要的word_piece, 然后创建新的embed和word_piece_vocab, 然后填入值
word_piece_dict = {'[CLS]':1, '[SEP]':1} # 用到的word_piece以及新增的
found_count = 0
for word, index in vocab:
if index == vocab.padding_idx: # pad是个特殊的符号
word = '[PAD]'
elif index == vocab.unknown_idx:
word = '[UNK]'
word_pieces = self.tokenzier.wordpiece_tokenizer.tokenize(word)
if len(word_pieces)==1:
if not vocab._is_word_no_create_entry(word): # 如果是train中的值, 但是却没有找到
if index!=vocab.unknown_idx and word_pieces[0]=='[UNK]': # 说明这个词不在原始的word里面
word_piece_dict[word] = 1 # 新增一个值
continue
for word_piece in word_pieces:
word_piece_dict[word_piece] = 1
found_count += 1
original_embed = self.encoder.embeddings.word_embeddings.weight.data
# 特殊词汇要特殊处理
embed = nn.Embedding(len(word_piece_dict), original_embed.size(1)) # 新的embed
new_word_piece_vocab = collections.OrderedDict()
for index, token in enumerate(['[PAD]', '[UNK]']):
word_piece_dict.pop(token, None)
embed.weight.data[index] = original_embed[self.tokenzier.vocab[token]]
new_word_piece_vocab[token] = index
for token in word_piece_dict.keys():
if token in self.tokenzier.vocab:
embed.weight.data[len(new_word_piece_vocab)] = original_embed[self.tokenzier.vocab[token]]
else:
embed.weight.data[len(new_word_piece_vocab)] = original_embed[self.tokenzier.vocab['[UNK]']]
new_word_piece_vocab[token] = len(new_word_piece_vocab)
self.tokenzier._reinit_on_new_vocab(new_word_piece_vocab)
self.encoder.embeddings.word_embeddings = embed
word_to_wordpieces = []
word_pieces_lengths = []
for word, index in vocab:
if index == vocab.padding_idx: # pad是个特殊的符号
word = '[PAD]'
elif index == vocab.unknown_idx:
word = '[UNK]'
word_pieces = self.tokenzier.wordpiece_tokenizer.tokenize(word)
word_pieces = self.tokenzier.convert_tokens_to_ids(word_pieces)
word_to_wordpieces.append(word_pieces)
word_pieces_lengths.append(len(word_pieces))
print("Found(Or seg into word pieces) {} words out of {}.".format(found_count, len(vocab)))
self._cls_index = self.tokenzier.vocab['[CLS]']
self._sep_index = self.tokenzier.vocab['[SEP]']
self._pad_index = vocab.padding_idx
self._wordpiece_pad_index = self.tokenzier.vocab['[PAD]'] # needed when generating word_pieces
self.word_to_wordpieces = np.array(word_to_wordpieces)
self.word_pieces_lengths = nn.Parameter(torch.LongTensor(word_pieces_lengths), requires_grad=False)
print("Successfully generate word pieces.")
def forward(self, words):
"""
:param words: torch.LongTensor, batch_size x max_len
:return: num_layers x batch_size x max_len x hidden_size, or num_layers x batch_size x (max_len+2) x hidden_size when include_cls_sep is True
"""
batch_size, max_word_len = words.size()
seq_len = words.ne(self._pad_index).sum(dim=-1)
batch_word_pieces_length = self.word_pieces_lengths[words] # batch_size x max_len
word_pieces_lengths = batch_word_pieces_length.sum(dim=-1)
max_word_piece_length = word_pieces_lengths.max().item()
# +2 because [CLS] and [SEP] are added
word_pieces = words.new_full((batch_size, max_word_piece_length+2), fill_value=self._wordpiece_pad_index)
word_pieces[:, 0].fill_(self._cls_index)
batch_indexes = torch.arange(batch_size).to(words)
word_pieces[batch_indexes, word_pieces_lengths+1] = self._sep_index
attn_masks = torch.zeros_like(word_pieces)
# 1. get the word-piece ids of words and their corresponding spans
word_indexes = words.tolist()
for i in range(batch_size):
word_pieces_i = list(chain(*self.word_to_wordpieces[word_indexes[i]]))
word_pieces[i, 1:len(word_pieces_i)+1] = torch.LongTensor(word_pieces_i)
attn_masks[i, :len(word_pieces_i)+2].fill_(1)
# TODO truncate the parts that exceed the maximum length.
# 2. get the hidden states and pool them from word pieces back to words (see the sketch after this method)
# all_outputs: [batch_size x max_len x hidden_size, batch_size x max_len x hidden_size, ...]
bert_outputs, _ = self.encoder(word_pieces, token_type_ids=None, attention_mask=attn_masks,
output_all_encoded_layers=True)
# output_layers = [self.layers] # len(self.layers) x batch_size x max_word_piece_length x hidden_size
if self.include_cls_sep:
outputs = bert_outputs[-1].new_zeros(len(self.layers), batch_size, max_word_len + 2,
bert_outputs[-1].size(-1))
s_shift = 1
else:
outputs = bert_outputs[-1].new_zeros(len(self.layers), batch_size, max_word_len,
bert_outputs[-1].size(-1))
s_shift = 0
batch_word_pieces_cum_length = batch_word_pieces_length.new_zeros(batch_size, max_word_len + 1)
batch_word_pieces_cum_length[:, 1:] = batch_word_pieces_length.cumsum(dim=-1) # batch_size x max_len
for l_index, l in enumerate(self.layers):
output_layer = bert_outputs[l]
# collapse word-piece representations into word representations
truncate_output_layer = output_layer[:, 1:-1] # drop [CLS] and [SEP]; batch_size x len x hidden_size
outputs_seq_len = seq_len + s_shift
if self.pool_method == 'first':
for i in range(batch_size):
i_word_pieces_cum_length = batch_word_pieces_cum_length[i, :seq_len[i]] # start position of each word
outputs[l_index, i, s_shift:outputs_seq_len[i]] = truncate_output_layer[i, i_word_pieces_cum_length] # num_layer x batch_size x len x hidden_size
elif self.pool_method == 'last':
for i in range(batch_size):
i_word_pieces_cum_length = batch_word_pieces_cum_length[i, 1:seq_len[i]+1] - 1 # end position of each word
outputs[l_index, i, s_shift:outputs_seq_len[i]] = truncate_output_layer[i, i_word_pieces_cum_length]
elif self.pool_method == 'max':
for i in range(batch_size):
for j in range(seq_len[i]):
start, end = batch_word_pieces_cum_length[i, j], batch_word_pieces_cum_length[i, j+1]
outputs[l_index, i, j+s_shift], _ = torch.max(truncate_output_layer[i, start:end], dim=-2)
else:
for i in range(batch_size):
for j in range(seq_len[i]):
start, end = batch_word_pieces_cum_length[i, j], batch_word_pieces_cum_length[i, j+1]
outputs[l_index, i, j+s_shift] = torch.mean(truncate_output_layer[i, start:end], dim=-2)
if self.include_cls_sep:
outputs[l_index, :, 0] = output_layer[:, 0]
outputs[l_index, batch_indexes, seq_len+s_shift] = output_layer[batch_indexes, seq_len+s_shift]
# 3. the final embedding result
return outputs
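# --- illustrative sketch (added for clarity; not part of this module) ---
# The word-piece -> word pooling above in miniature, with made-up numbers: one sentence
# whose three words split into 1, 2 and 1 pieces. 'first' keeps the first piece of each
# word, 'last' the last one, and 'avg' averages over the word's pieces.
import torch
piece_reprs = torch.arange(4 * 3, dtype=torch.float).view(4, 3)   # 4 pieces x hidden size 3
piece_lengths = torch.tensor([1, 2, 1])                           # pieces per word
cum = torch.zeros(4, dtype=torch.long)
cum[1:] = piece_lengths.cumsum(dim=0)                             # [0, 1, 3, 4]
first_pool = piece_reprs[cum[:-1]]                                 # pool_method='first'
last_pool = piece_reprs[cum[1:] - 1]                               # pool_method='last'
avg_pool = torch.stack([piece_reprs[s:e].mean(dim=0) for s, e in zip(cum[:-1], cum[1:])])
assert first_pool.shape == last_pool.shape == avg_pool.shape == (3, 3)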

View File

@ -0,0 +1,295 @@
"""
This file mainly contains character Embeddings, including CNN- and LSTM-based character Embeddings. As with other
Embeddings, the input here is word indices; there is no need to pass the char indices of each word to obtain the representation.
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List
from ..modules.encoder.lstm import LSTM
from ..core.vocabulary import Vocabulary
from .embedding import TokenEmbedding
from .utils import _construct_char_vocab_from_vocab
class CNNCharEmbedding(TokenEmbedding):
"""
Alias: :class:`fastNLP.embeddings.CNNCharEmbedding` :class:`fastNLP.embeddings.char_embedding.CNNCharEmbedding`
Generate character embeddings with a CNN. The structure is: embed(x) -> Dropout(x) -> CNN(x) -> activation(x) -> pool -> fc -> Dropout.
The outputs of filters with different kernel sizes are concatenated and passed through a fully connected layer to produce the word representation.
Example::
>>> vocab = Vocabulary().add_word_lst("The weather is good .".split())
>>> embed = CNNCharEmbedding(vocab, embed_size=50)
>>> words = torch.LongTensor([[vocab.to_index(word) for word in "The weather is good .".split()]])
>>> outputs = embed(words)
>>> outputs.size()
>>> # torch.Size([1, 5, 50])
:param vocab: the word vocabulary
:param embed_size: size of the produced word embedding. Default: 50.
:param char_emb_size: size of the character embedding; the characters are derived from vocab. Default: 50.
:param float word_dropout: probability of replacing a word with unk; this both trains the unk representation and acts as a regularizer.
:param float dropout: dropout probability applied to the distributed representation and to the output of the char embedding.
:param filter_nums: numbers of filters; the length must match kernel_sizes. Default: [40, 30, 20].
:param kernel_sizes: kernel sizes. Default: [5, 3, 1].
:param pool_method: pooling method used to merge the character representations into a single one; supports 'avg' and 'max'.
:param activation: activation applied after the CNN; supports 'relu', 'sigmoid', 'tanh', or a custom callable.
:param min_char_freq: minimum number of occurrences of a character. Default: 2.
"""
def __init__(self, vocab: Vocabulary, embed_size: int=50, char_emb_size: int=50, word_dropout:float=0,
dropout:float=0.5, filter_nums: List[int]=(40, 30, 20), kernel_sizes: List[int]=(5, 3, 1),
pool_method: str='max', activation='relu', min_char_freq: int=2):
super(CNNCharEmbedding, self).__init__(vocab, word_dropout=word_dropout, dropout=dropout)
for kernel in kernel_sizes:
assert kernel % 2 == 1, "Only odd kernel is allowed."
assert pool_method in ('max', 'avg')
self.dropout = nn.Dropout(dropout)
self.pool_method = pool_method
# activation function
if isinstance(activation, str):
if activation.lower() == 'relu':
self.activation = F.relu
elif activation.lower() == 'sigmoid':
self.activation = F.sigmoid
elif activation.lower() == 'tanh':
self.activation = F.tanh
elif activation is None:
self.activation = lambda x: x
elif callable(activation):
self.activation = activation
else:
raise Exception(
"Undefined activation function: choose from: [relu, tanh, sigmoid, or a callable function]")
print("Start constructing character vocabulary.")
# build the character vocabulary
self.char_vocab = _construct_char_vocab_from_vocab(vocab, min_freq=min_char_freq)
self.char_pad_index = self.char_vocab.padding_idx
print(f"In total, there are {len(self.char_vocab)} distinct characters.")
# index every word in vocab at the character level
max_word_len = max(map(lambda x: len(x[0]), vocab))
self.words_to_chars_embedding = nn.Parameter(torch.full((len(vocab), max_word_len),
fill_value=self.char_pad_index, dtype=torch.long),
requires_grad=False)
self.word_lengths = nn.Parameter(torch.zeros(len(vocab)).long(), requires_grad=False)
for word, index in vocab:
# if index!=vocab.padding_idx: # pad used to keep the pad_value directly; changed to treat pad like any other word, so every <pad> shares the same embedding
self.words_to_chars_embedding[index, :len(word)] = \
torch.LongTensor([self.char_vocab.to_index(c) for c in word])
self.word_lengths[index] = len(word)
self.char_embedding = nn.Embedding(len(self.char_vocab), char_emb_size)
self.convs = nn.ModuleList([nn.Conv1d(
char_emb_size, filter_nums[i], kernel_size=kernel_sizes[i], bias=True, padding=kernel_sizes[i] // 2)
for i in range(len(kernel_sizes))])
self._embed_size = embed_size
self.fc = nn.Linear(sum(filter_nums), embed_size)
self.init_param()
def forward(self, words):
"""
Generate the representations of words from their indices.
:param words: [batch_size, max_len]
:return: [batch_size, max_len, embed_size]
"""
words = self.drop_word(words)
batch_size, max_len = words.size()
chars = self.words_to_chars_embedding[words] # batch_size x max_len x max_word_len
word_lengths = self.word_lengths[words] # batch_size x max_len
max_word_len = word_lengths.max()
chars = chars[:, :, :max_word_len]
# positions equal to 1 are padding and will be masked out (a standalone sketch of this masked pooling follows this method)
chars_masks = chars.eq(self.char_pad_index) # batch_size x max_len x max_word_len; 1 marks a padding position
chars = self.char_embedding(chars) # batch_size x max_len x max_word_len x embed_size
chars = self.dropout(chars)
reshaped_chars = chars.reshape(batch_size*max_len, max_word_len, -1)
reshaped_chars = reshaped_chars.transpose(1, 2) # B' x E x M
conv_chars = [conv(reshaped_chars).transpose(1, 2).reshape(batch_size, max_len, max_word_len, -1)
for conv in self.convs]
conv_chars = torch.cat(conv_chars, dim=-1).contiguous() # B x max_len x max_word_len x sum(filters)
conv_chars = self.activation(conv_chars)
if self.pool_method == 'max':
conv_chars = conv_chars.masked_fill(chars_masks.unsqueeze(-1), float('-inf'))
chars, _ = torch.max(conv_chars, dim=-2) # batch_size x max_len x sum(filters)
else:
conv_chars = conv_chars.masked_fill(chars_masks.unsqueeze(-1), 0)
chars = torch.sum(conv_chars, dim=-2)/chars_masks.eq(0).sum(dim=-1, keepdim=True).float()
chars = self.fc(chars)
return self.dropout(chars)
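# --- illustrative sketch (added for clarity; not part of this module) ---
# The masked pooling above in miniature: padding character positions are filled with -inf
# before the max (or 0 before the mean) so they never contribute. The tiny shapes stand in
# for batch_size x max_len x max_word_len x sum(filter_nums).
import torch
conv_chars = torch.randn(1, 2, 4, 3)                        # 2 words, 4 char slots, 3 filters
chars_masks = torch.tensor([[[False, False, True, True],    # word 1 has 2 real characters
                             [False, True, True, True]]])   # word 2 has 1 real character
max_pooled = conv_chars.masked_fill(chars_masks.unsqueeze(-1), float('-inf')).max(dim=-2)[0]
summed = conv_chars.masked_fill(chars_masks.unsqueeze(-1), 0).sum(dim=-2)
avg_pooled = summed / chars_masks.eq(0).sum(dim=-1, keepdim=True).float()
assert max_pooled.shape == avg_pooled.shape == (1, 2, 3)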
@property
def requires_grad(self):
"""
Whether the parameters of this Embedding are trainable. True: all parameters are optimized; False: none are; None: some are and some are not.
:return:
"""
params = []
for name, param in self.named_parameters():
if 'words_to_chars_embedding' not in name and 'word_lengths' not in name:
params.append(param.requires_grad)
requires_grads = set(params)
if len(requires_grads) == 1:
return requires_grads.pop()
else:
return None
@requires_grad.setter
def requires_grad(self, value):
for name, param in self.named_parameters():
if 'words_to_chars_embedding' in name or 'word_lengths' in name: # these must not take part in requires_grad
continue
param.requires_grad = value
def init_param(self):
for name, param in self.named_parameters():
if 'words_to_chars_embedding' in name or 'word_lengths' in name: # these must not be reset
continue
if param.data.dim()>1:
nn.init.xavier_uniform_(param, 1)
else:
nn.init.uniform_(param, -1, 1)
class LSTMCharEmbedding(TokenEmbedding):
"""
Alias: :class:`fastNLP.embeddings.LSTMCharEmbedding` :class:`fastNLP.embeddings.char_embedding.LSTMCharEmbedding`
Encode characters with an LSTM: embed(x) -> Dropout(x) -> LSTM(x) -> activation(x) -> pool -> Dropout.
Example::
>>> vocab = Vocabulary().add_word_lst("The weather is good .".split())
>>> embed = LSTMCharEmbedding(vocab, embed_size=50)
>>> words = torch.LongTensor([[vocab.to_index(word) for word in "The weather is good .".split()]])
>>> outputs = embed(words)
>>> outputs.size()
>>> # torch.Size([1, 5, 50])
:param vocab: the word vocabulary
:param embed_size: size of the produced embedding. Default: 50.
:param char_emb_size: size of the character embedding. Default: 50.
:param float word_dropout: probability of replacing a word with unk; this both trains the unk representation and acts as a regularizer.
:param dropout: dropout probability applied to the character embedding output and to the final word output.
:param hidden_size: hidden size of the LSTM; if bidirectional, each direction gets half of it. Default: 50.
:param pool_method: supports 'max' and 'avg'.
:param activation: activation function; supports 'relu', 'sigmoid', 'tanh', or a custom callable.
:param min_char_freq: minimum number of occurrences of a character. Default: 2.
:param bidirectional: whether to use a bidirectional LSTM for encoding. Default: True.
"""
def __init__(self, vocab: Vocabulary, embed_size: int=50, char_emb_size: int=50, word_dropout:float=0,
dropout:float=0.5, hidden_size=50,pool_method: str='max', activation='relu', min_char_freq: int=2,
bidirectional=True):
super(LSTMCharEmbedding, self).__init__(vocab, word_dropout=word_dropout, dropout=dropout)
assert hidden_size % 2 == 0, "Only even hidden_size is allowed."
assert pool_method in ('max', 'avg')
self.pool_method = pool_method
self.dropout = nn.Dropout(dropout)
# activation function
if isinstance(activation, str):
if activation.lower() == 'relu':
self.activation = F.relu
elif activation.lower() == 'sigmoid':
self.activation = F.sigmoid
elif activation.lower() == 'tanh':
self.activation = F.tanh
elif activation is None:
self.activation = lambda x: x
elif callable(activation):
self.activation = activation
else:
raise Exception(
"Undefined activation function: choose from: [relu, tanh, sigmoid, or a callable function]")
print("Start constructing character vocabulary.")
# build the character vocabulary
self.char_vocab = _construct_char_vocab_from_vocab(vocab, min_freq=min_char_freq)
self.char_pad_index = self.char_vocab.padding_idx
print(f"In total, there are {len(self.char_vocab)} distinct characters.")
# index every word in vocab at the character level
self.max_word_len = max(map(lambda x: len(x[0]), vocab))
self.words_to_chars_embedding = nn.Parameter(torch.full((len(vocab), self.max_word_len),
fill_value=self.char_pad_index, dtype=torch.long),
requires_grad=False)
self.word_lengths = nn.Parameter(torch.zeros(len(vocab)).long(), requires_grad=False)
for word, index in vocab:
# if index!=vocab.padding_idx: # pad used to keep the pad_value directly; changed to no longer distinguish pad
self.words_to_chars_embedding[index, :len(word)] = \
torch.LongTensor([self.char_vocab.to_index(c) for c in word])
self.word_lengths[index] = len(word)
self.char_embedding = nn.Embedding(len(self.char_vocab), char_emb_size)
self.fc = nn.Linear(hidden_size, embed_size)
hidden_size = hidden_size // 2 if bidirectional else hidden_size
self.lstm = LSTM(char_emb_size, hidden_size, bidirectional=bidirectional, batch_first=True)
self._embed_size = embed_size
self.bidirectional = bidirectional
def forward(self, words):
"""
Generate the representations of words from their indices (a standalone sketch of the char-level LSTM encoding follows this method).
:param words: [batch_size, max_len]
:return: [batch_size, max_len, embed_size]
"""
words = self.drop_word(words)
batch_size, max_len = words.size()
chars = self.words_to_chars_embedding[words] # batch_size x max_len x max_word_len
word_lengths = self.word_lengths[words] # batch_size x max_len
max_word_len = word_lengths.max()
chars = chars[:, :, :max_word_len]
# positions equal to 1 are padding and will be masked out
chars_masks = chars.eq(self.char_pad_index) # batch_size x max_len x max_word_len; 1 marks a padding position
chars = self.char_embedding(chars) # batch_size x max_len x max_word_len x embed_size
chars = self.dropout(chars)
reshaped_chars = chars.reshape(batch_size * max_len, max_word_len, -1)
char_seq_len = chars_masks.eq(0).sum(dim=-1).reshape(batch_size * max_len)
lstm_chars = self.lstm(reshaped_chars, char_seq_len)[0].reshape(batch_size, max_len, max_word_len, -1)
# B x M x M x H
lstm_chars = self.activation(lstm_chars)
if self.pool_method == 'max':
lstm_chars = lstm_chars.masked_fill(chars_masks.unsqueeze(-1), float('-inf'))
chars, _ = torch.max(lstm_chars, dim=-2) # batch_size x max_len x H
else:
lstm_chars = lstm_chars.masked_fill(chars_masks.unsqueeze(-1), 0)
chars = torch.sum(lstm_chars, dim=-2) / chars_masks.eq(0).sum(dim=-1, keepdim=True).float()
chars = self.fc(chars)
return self.dropout(chars)
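# --- illustrative sketch (added for clarity; not part of this module) ---
# The reshape trick above in isolation: the characters of every word in the batch are folded
# into a (batch_size*max_len) pseudo-batch so a single LSTM pass encodes all words at once.
# Plain nn.LSTM is used here as a stand-in; fastNLP's LSTM wrapper additionally handles
# the per-word character lengths.
import torch
import torch.nn as nn
batch_size, max_len, max_word_len, char_dim, hidden = 2, 3, 4, 5, 6
chars = torch.randn(batch_size, max_len, max_word_len, char_dim)
lstm = nn.LSTM(char_dim, hidden // 2, bidirectional=True, batch_first=True)
flat = chars.reshape(batch_size * max_len, max_word_len, char_dim)
out, _ = lstm(flat)                                     # (B*M) x max_word_len x hidden
out = out.reshape(batch_size, max_len, max_word_len, hidden)
assert out.shape == (2, 3, 4, 6)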
@property
def requires_grad(self):
"""
Whether the parameters of this Embedding are trainable. True: all parameters are optimized; False: none are; None: some are and some are not.
:return:
"""
params = []
for name, param in self.named_parameters():
if 'words_to_chars_embedding' not in name and 'word_lengths' not in name:
params.append(param.requires_grad)
requires_grads = set(params)
if len(requires_grads) == 1:
return requires_grads.pop()
else:
return None
@requires_grad.setter
def requires_grad(self, value):
for name, param in self.named_parameters():
if 'words_to_chars_embedding' in name or 'word_lengths' in name: # these must not take part in requires_grad
continue
param.requires_grad = value

View File

@ -0,0 +1,100 @@
from abc import abstractmethod
import torch
from ..core.vocabulary import Vocabulary
from ..core.dataset import DataSet
from ..core.batch import DataSetIter
from ..core.sampler import SequentialSampler
from ..core.utils import _move_model_to_device, _get_model_device
from .embedding import TokenEmbedding
class ContextualEmbedding(TokenEmbedding):
def __init__(self, vocab: Vocabulary, word_dropout:float=0.0, dropout:float=0.0):
super(ContextualEmbedding, self).__init__(vocab, word_dropout=word_dropout, dropout=dropout)
def add_sentence_cache(self, *datasets, batch_size=32, device='cpu', delete_weights: bool=True):
"""
Because generating contextual embeddings on the fly is time-consuming, the embedding of every sentence can be cached so that the generation does not have to be rerun each time (a standalone sketch of the idea follows this method).
:param datasets: DataSet objects
:param batch_size: int, batch size used when generating the cached sentence representations
:param device: see the device argument of :class::fastNLP.Trainer
:param delete_weights: whether to delete the model weights after the cache has been generated; when the contextual model does not need fine-tuning, deleting the weights greatly reduces memory usage
:return:
"""
for index, dataset in enumerate(datasets):
try:
assert isinstance(dataset, DataSet), "Only fastNLP.DataSet object is allowed."
assert 'words' in dataset.get_input_name(), "`words` field has to be set as input."
except Exception as e:
print(f"Exception happens at {index} dataset.")
raise e
sent_embeds = {}
_move_model_to_device(self, device=device)
device = _get_model_device(self)
pad_index = self._word_vocab.padding_idx
print("Start to calculate sentence representations.")
with torch.no_grad():
for index, dataset in enumerate(datasets):
try:
batch = DataSetIter(dataset, batch_size=batch_size, sampler=SequentialSampler())
for batch_x, batch_y in batch:
words = batch_x['words'].to(device)
words_list = words.tolist()
seq_len = words.ne(pad_index).sum(dim=-1)
max_len = words.size(1)
# some cases may include CLS and SEP, so it is safer to count from the end.
seq_len_from_behind = (max_len - seq_len).tolist()
word_embeds = self(words).detach().cpu().numpy()
for b in range(words.size(0)):
length = seq_len_from_behind[b]
if length==0:
sent_embeds[tuple(words_list[b][:seq_len[b]])] = word_embeds[b]
else:
sent_embeds[tuple(words_list[b][:seq_len[b]])] = word_embeds[b, :-length]
except Exception as e:
print(f"Exception happens at {index} dataset.")
raise e
print("Finish calculating sentence representations.")
self.sent_embeds = sent_embeds
if delete_weights:
self._delete_model_weights()
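# --- illustrative sketch (added for clarity; not part of this module) ---
# The caching idea above, reduced to plain torch: run the (expensive) embedder once under
# no_grad, key each sentence by the tuple of its non-pad word ids, and reuse the stored
# arrays instead of recomputing. nn.Embedding stands in for a real contextual embedder.
import torch
import torch.nn as nn
embedder = nn.Embedding(10, 4)            # stand-in for a slow contextual embedder
pad = 0
words = torch.tensor([[1, 2, 3, 0, 0]])   # one padded sentence
cache = {}
with torch.no_grad():
    seq_len = words.ne(pad).sum(dim=-1)
    reprs = embedder(words)
    for b in range(words.size(0)):
        key = tuple(words[b, :seq_len[b]].tolist())
        cache[key] = reprs[b, :seq_len[b]].cpu().numpy()
assert cache[(1, 2, 3)].shape == (3, 4)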
def _get_sent_reprs(self, words):
"""
Get the sentence representations. If a cache exists, return the cached values; otherwise return None.
:param words: torch.LongTensor
:return:
"""
if hasattr(self, 'sent_embeds'):
words_list = words.tolist()
seq_len = words.ne(self._word_pad_index).sum(dim=-1)
_embeds = []
for b in range(len(words)):
words_i = tuple(words_list[b][:seq_len[b]])
embed = self.sent_embeds[words_i]
_embeds.append(embed)
max_sent_len = max(map(len, _embeds))
embeds = words.new_zeros(len(_embeds), max_sent_len, self.embed_size, dtype=torch.float,
device=words.device)
for i, embed in enumerate(_embeds):
embeds[i, :len(embed)] = torch.FloatTensor(embed).to(words.device)
return embeds
return None
@abstractmethod
def _delete_model_weights(self):
"""删除计算表示的模型以节省资源"""
raise NotImplementedError
def remove_sentence_cache(self):
"""
Delete the cached sentence representations. After deletion, if the model weights have not been deleted, the representations will again be computed dynamically.
:return:
"""
del self.sent_embeds

Some files were not shown because too many files have changed in this diff.