mirror of
https://gitee.com/fastnlp/fastNLP.git
synced 2024-11-30 11:17:50 +08:00
ab55f25e20
1. Tester has a parameter "print_every_step" to control printing; print_every_step == 0 means no printing at all.
2. Tester's evaluate returns (a list of) floats rather than torch.cuda tensors.
3. Trainer also has a "print_every_step" parameter, with the same usage.
4. During training, validation steps are not shown.
5. Updated code comments.
6. fastnlp.py is ready for CWS; test_fastNLP.py works.
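The "print_every_step" semantics described above can be sketched as follows. This is a minimal illustration of the described behavior, not fastNLP's actual classes; the constructor and the shape of the metrics are assumptions based only on the commit message.

```python
# Sketch of the described semantics: print_every_step == 0 disables printing,
# and evaluate() returns plain Python floats instead of torch.cuda tensors.
# This mimics the behavior only; it is NOT fastNLP's real Tester class.

class Tester:
    def __init__(self, print_every_step=1):
        # print_every_step == 0 means NO printing at all
        self.print_every_step = print_every_step

    def evaluate(self, metric_values):
        results = []
        for step, value in enumerate(metric_values, start=1):
            # convert to a plain float (the real code would call .item()
            # on a torch tensor) so callers never see device tensors
            results.append(float(value))
            if self.print_every_step and step % self.print_every_step == 0:
                print(f"step {step}: {results[-1]:.4f}")
        return results

tester = Tester(print_every_step=0)   # 0 -> completely silent
scores = tester.evaluate([0.91, 0.93, 0.95])
```

With print_every_step=0 nothing is printed and `scores` is a plain list of floats, matching points 1 and 2 of the message.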
core
data_for_tests
loader
modules
__init__.py
ner_decode.py
ner.py
readme_example.py
seq_labeling.py
test_charlm.py
test_cws.py
test_fastNLP.py
test_loader.py
test_metrics.py
test_tester.py
test_trainer.py
text_classify.py