English tokenization error when the text ends with the single character 's'

I just started using HanLP. With the default tokenizer, `tokenizer = hanlp.load('LARGE_ALBERT_BASE')`, English segmentation comes out wrong:

```python
tokenizer('lenovo lp1 tws bluetooth earbuds ipx4 waterproof s')
# ['lenovo lp1 tws bluetooth', ' earbuds ipx4 waterproof s']
```

The text ends with a lone 's' and the segmentation is wrong. I am not sure whether this is caused by preprocessing, the segmentation model, or the dictionary.

Removing the trailing ' s' gives a correct segmentation:

```
['lenovo', ' lp1', ' tws', ' bluetooth', ' ', 'earbuds', ' ipx4 waterproof']
```

I also tried replacing the trailing 's' with 'a', 'b', and 'c':

```python
tokenizer('lenovo lp1 tws bluetooth earbuds ipx4 waterproof a')
# ['lenovo lp1 tws bluetooth', ' earbuds ipx4 waterproof a']
tokenizer('lenovo lp1 tws bluetooth earbuds ipx4 waterproof b')
# ['lenovo', ' lp1 tws bluetooth', ' earbuds', ' ipx4 waterproof b']
tokenizer('lenovo lp1 tws bluetooth earbuds ipx4 waterproof c')
# ['lenovo lp1 tws bluetooth', ' earbuds ipx4 waterproof c']
```
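
In the meantime, since `LARGE_ABERT_BASE`-style models are trained for Chinese word segmentation, one possible stopgap (my own workaround, not a HanLP API) is to detect pure-English input and fall back to whitespace splitting:

```python
import re

def tokenize(text, tokenizer):
    """Hypothetical helper, not part of HanLP: split pure-ASCII English input
    on whitespace and only send mixed/Chinese text to the HanLP tokenizer."""
    if re.fullmatch(r"[A-Za-z0-9\s.,:;'\"!?()\-]+", text):
        return text.split()
    return tokenizer(text)

# tokenize('lenovo lp1 tws bluetooth earbuds ipx4 waterproof s', tokenizer)
# -> ['lenovo', 'lp1', 'tws', 'bluetooth', 'earbuds', 'ipx4', 'waterproof', 's']
```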

Please wait for the upcoming HanLP v2.1, which will produce:

```
['lenovo', 'lp1', 'tws', 'bluetooth', 'earbuds', 'ipx4', 'waterproof']
```
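
A sketch of what the v2.1 call might look like once it is released; the pretrained identifier `hanlp.pretrained.tok.COARSE_ELECTRA_SMALL_ZH` is an assumption based on pre-release documentation and may differ in the final release:

```python
import hanlp

# Assumed v2.1-style tokenizer loading; the model identifier is a guess and
# may change when v2.1 ships.
tok = hanlp.load(hanlp.pretrained.tok.COARSE_ELECTRA_SMALL_ZH)
print(tok('lenovo lp1 tws bluetooth earbuds ipx4 waterproof s'))
# Per the answer above, v2.1 should split each space-delimited English word
# into its own token instead of merging them.
```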