Crawls web pages to build a suggestion (predictive-text) dictionary for the input method.

## Generating the data

Run the following in the current directory.

* Prepare the dependencies:

  ```sh
  npm install
  ```

* Crawl web pages and generate words.json. You can change `maxURLS` to adjust the maximum number of pages crawled (see the sketch after this list):

  ```sh
  node gen_words_json.js
  ```

* Generate the binary words.bin file. Edit words.json first if you need to adjust its contents:

  ```sh
  node to_words_bin.js
  ```
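For orientation, the crawl step conceptually works like the sketch below. Only the name `maxURLS` comes from this README; the use of the `segment` API and the frequency bookkeeping are assumptions about how gen_words_json.js operates, not its actual code.

```js
// Illustrative sketch only: a bounded crawl feeds page text to a Chinese
// segmenter and tallies word frequencies. Everything except the maxURLS
// name is an assumption, not taken from gen_words_json.js.
const Segment = require('segment');

const maxURLS = 100;          // cap on how many pages get crawled
const segment = new Segment();
segment.useDefault();         // load the default dictionaries

const freq = new Map();       // word -> occurrence count

// Count the words appearing in one page's text.
function addPageText(text) {
  // { simple: true } makes doSegment return an array of plain strings.
  for (const word of segment.doSegment(text, { simple: true })) {
    freq.set(word, (freq.get(word) || 0) + 1);
  }
}

// A real crawler would fetch pages and follow links until it has visited
// maxURLS pages, call addPageText() on each page's text, and finally
// write the accumulated frequencies out as words.json.
```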

## Using the existing data

chinese_with_freq.txt was downloaded from https://github.com/ling0322/webdict.

If you do not want to generate the data yourself, you can use this file directly:

```sh
node to_json.js
```
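As a rough picture of what a converter like to_json.js does, the sketch below parses one word-plus-frequency entry per line. The exact column layout of the webdict file and the output field names (`w`, `f`) are assumptions for illustration, not the script's actual format.

```js
// Hedged sketch: turn a "word <whitespace> frequency" list such as
// chinese_with_freq.txt into words.json. The w/f field names are
// assumptions, not necessarily to_json.js's real output schema.
const fs = require('fs');

const words = fs.readFileSync('chinese_with_freq.txt', 'utf8')
  .split('\n')
  .map(line => line.trim().split(/\s+/))
  .filter(cols => cols.length >= 2)
  .map(([w, f]) => ({ w, f: Number(f) }));

fs.writeFileSync('words.json', JSON.stringify(words));
```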

## Updating the data

Run the following from the awtk root directory:

```sh
cp tools/word_gen/words.bin demos/assets/default/raw/data/suggest_words_zh_cn.dat
```

If the target platform does not support a file system, you also need to run the script that regenerates the embedded resources:

```sh
python scripts/update_res.py all
```

## Notes

getChunks in node_modules/segment/lib/module/DictTokenizer.js can run out of memory (OOM).

If you hit this, bound the recursion depth and cap the size of chunks.length, for example at 5000 as in the patched version below:

```js
// Tracks recursion depth so a pathological sentence cannot recurse
// without bound and exhaust memory.
let getChunksCallsNr = 0;

var getChunks = function (wordpos, pos, text) {
  var words = wordpos[pos] || [];
  var ret = [];

  // Give up once the recursion gets too deep; the caller can catch
  // this and fall back to a simpler segmentation.
  if (getChunksCallsNr > 150) {
    throw new Error('getChunks: recursion too deep');
  }

  getChunksCallsNr++;
  try {
    for (var i = 0; i < words.length; i++) {
      var word = words[i];
      var nextcur = word.c + word.w.length;
      if (!wordpos[nextcur]) {
        ret.push([word]);
      } else {
        var chunks = getChunks(wordpos, nextcur, text);
        // Cap the number of combinations per position at 5000 so that
        // ret cannot grow without bound.
        for (var j = 0; j < chunks.length && j < 5000; j++) {
          ret.push([word].concat(chunks[j]));
        }
      }
    }
  } finally {
    // Decrement even when an inner call throws, so the depth counter
    // does not stay permanently elevated after one failure.
    getChunksCallsNr--;
  }

  return ret;
};
```
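Note that this change lives under node_modules, so it is lost whenever the dependencies are reinstalled; reapply it after each npm install (a tool such as patch-package can automate keeping such edits).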