1. Tester has a parameter "print_every_step" to control printing; print_every_step == 0 means no printing at all (see the sketch after this list).
2. Tester's evaluate returns a (list of) plain Python floats rather than CUDA tensors.
3. Trainer also has a "print_every_step" parameter with the same semantics.
4. During training, validation steps are not printed.
5. Updates to code comments.
6. fastnlp.py is ready for CWS. test_fastNLP.py works.
- specify the name of the config file and the name of the corresponding section where the model's init params are stored.
- fastnlp.py needs load_pickle to get the dictionary size and the number of labels
- other minor adjustments
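
A minimal sketch of the "print_every_step" semantics and the float-valued evaluate results described above. The function name and signature here are illustrative assumptions, not the actual fastNLP Trainer/Tester API:

```python
# Sketch only: illustrates print_every_step gating and float-valued results;
# the real fastNLP Trainer/Tester signatures may differ.

def evaluate(n_steps, print_every_step=1):
    """Run n_steps of evaluation, printing every `print_every_step` steps
    (0 disables printing), and return plain Python floats."""
    metrics = []
    for step in range(1, n_steps + 1):
        loss = 1.0 / step  # stand-in for a real metric (e.g. tensor.item())
        if print_every_step > 0 and step % print_every_step == 0:
            print(f"[step {step}] loss={loss:.4f}")
        metrics.append(float(loss))  # floats, not CUDA tensors
    return metrics

print(evaluate(10, print_every_step=5))  # prints at steps 5 and 10
evaluate(10, print_every_step=0)         # prints nothing
```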
- add Loss, Optimizer
- change Trainer & Tester initialization interface: two styles of definition provided
- handle Optimizer construction and loss function definition in a hard manner
- add argparse in task-specific scripts. (seq_labeling.py & text_classify.py)
- seq_labeling.py & text_classify.py work
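
As a sketch of the two initialization styles, here is an initializer that accepts either plain keyword arguments or a single settings dict. The class body and parameter names are assumptions, not the real fastNLP argument list:

```python
# Sketch only: an __init__ accepting two styles of definition.
class Trainer:
    DEFAULTS = {"epochs": 3, "batch_size": 32, "print_every_step": 1}

    def __init__(self, *args, **kwargs):
        settings = dict(self.DEFAULTS)
        if args and isinstance(args[0], dict):  # style 2: one settings dict
            settings.update(args[0])
        settings.update(kwargs)                 # style 1: plain keyword args
        self.__dict__.update(settings)

t1 = Trainer(epochs=5, batch_size=16)          # style 1
t2 = Trainer({"epochs": 5, "batch_size": 16})  # style 2
assert t1.epochs == t2.epochs == 5
```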
- move preprocess.py from loader/ to core/
- changes to the preprocess interface:
  1. add a run method that performs the main processing
  2. add a cross validation split (see the sketch after this list)
  3. add a return value
  4. merge subclasses
- Trainer supports cross validation
- add data as arguments to Trainer.train & Tester.test
- add readme.example.py, which runs the example program shown in README.md
- other corresponding changes
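
A minimal sketch of a k-fold cross validation split like the one the preprocessor now performs; the helper name and signature are assumptions, not the actual preprocess API:

```python
# Sketch only: yield (train, dev) pairs, using each fold once as the dev set.
def cv_split(samples, n_folds=5):
    folds = [samples[i::n_folds] for i in range(n_folds)]
    for i in range(n_folds):
        dev = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, dev

data = list(range(10))
for train, dev in cv_split(data, n_folds=5):
    print(len(train), len(dev))  # 8 2 on each of the 5 folds
```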
- see fastNLP/saver/logger.py for how to create and use a logger (a minimal sketch follows this list)
- a log file named "train_test.log" will be created in the same directory as the main file from which the program starts
- this file records all important events that happen in Trainer's & Tester's methods
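
The logging pattern above can be reproduced with the standard logging module; the create_logger helper below is only a sketch, and fastNLP/saver/logger.py is the authoritative version:

```python
# Sketch only: a file logger writing to train_test.log, built on stdlib logging.
import logging

def create_logger(name, log_path="train_test.log"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(log_path)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s: %(message)s"))
    logger.addHandler(handler)
    return logger

logger = create_logger("trainer")
logger.info("training starts")  # appended to train_test.log
```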
- rename Inference to Predictor
- rename Trainer.prepare_input to Trainer.load_train_data, which loads data_train.pkl only
- add __contains__ method to config Section class
- more code comments
- more elegant make_batch & data_iterator: Samplers return batches of samples instead of batches of indices (sketched below)
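
A minimal sketch of a sampler that yields batches of samples directly rather than batches of indices; the class name and constructor are assumptions, not fastNLP's actual Sampler API:

```python
# Sketch only: a sampler whose iterator yields the samples themselves.
import random

class BatchSampler:
    def __init__(self, dataset, batch_size, shuffle=True):
        self.dataset = list(dataset)
        self.batch_size = batch_size
        self.shuffle = shuffle

    def __iter__(self):
        data = self.dataset[:]
        if self.shuffle:
            random.shuffle(data)
        for i in range(0, len(data), self.batch_size):
            yield data[i:i + self.batch_size]  # samples, not indices

for batch in BatchSampler(range(10), batch_size=4, shuffle=False):
    print(batch)  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
```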