New error when segmenting and removing stopwords (searched old threads, no solution)

Running:

segment = DoubleArrayTrieSegment()
termlist = segment.seg("江西鄱阳湖干枯了中国最大的淡水湖变成了大草原")

raises the following error:

Traceback (most recent call last):
File "D:/temp.py", line 11, in <module>
segment.seg(text)
TypeError: Ambiguous overloads found for com.hankcs.hanlp.seg.Segment.seg(str) between:
public java.util.List com.hankcs.hanlp.seg.Segment.seg(java.lang.String)
public java.util.List com.hankcs.hanlp.seg.Segment.seg(char)

------------------------------------ Original code ------------------------
from pyhanlp import JClass  # pyhanlp bootstraps the JVM and exposes JClass

# DoubleArrayTrieSegment lives in HanLP's Java package
DoubleArrayTrieSegment = JClass('com.hankcs.hanlp.seg.Other.DoubleArrayTrieSegment')

def load_from_file(path):
    map = JClass('java.util.TreeMap')()  # create a TreeMap instance
    with open(path, encoding='utf8') as src:
        for word in src:
            word = word.strip()  # strip the '\n' Python reads in
            map[word] = word
    return JClass('com.hankcs.hanlp.collection.trie.DoubleArrayTrie')(map)

def remove_stopwords_termlist(termlist, trie):
    return [term.word for term in termlist if not trie.containsKey(term.word)]

trie = load_from_file('stopwords.txt')
segment = DoubleArrayTrieSegment()
termlist = segment.seg('江西鄱阳湖干枯了中国最大的淡水湖变成了大草原')
print('Before removing stopwords:', termlist)
print('After removing stopwords:', remove_stopwords_termlist(termlist, trie))
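The filtering step itself is plain Python and independent of the Java trie. A minimal pure-Python sketch, swapping HanLP's Java DoubleArrayTrie for a hypothetical stand-in `set` and a toy `Term` class (the real one is `com.hankcs.hanlp.seg.common.Term`), just to show the logic:

```python
# Pure-Python sketch of the stopword filtering above; `Term` is a
# hypothetical stand-in for com.hankcs.hanlp.seg.common.Term.
from dataclasses import dataclass

@dataclass
class Term:
    word: str

def remove_stopwords(termlist, stopwords):
    # Keep only terms whose surface form is not a stopword.
    return [term.word for term in termlist if term.word not in stopwords]

stopwords = {"了", "的"}
termlist = [Term("江西"), Term("鄱阳湖"), Term("干枯"), Term("了")]
print(remove_stopwords(termlist, stopwords))  # → ['江西', '鄱阳湖', '干枯']
```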

Cannot reproduce. Please make sure jpype1==0.7.0 is installed:

conda install -c conda-forge jpype1==0.7.0      
pip install pyhanlp

There is a trick to get around this error: pass the text to the segmenter as a list instead of a string, i.e. change

segment.seg(text)

to

segment.seg(list(text))
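For what it's worth, here is a toy pure-Python sketch (not JPype's actual resolver) of why `list(text)` sidesteps the ambiguity: Java's `Segment` declares both `seg(String)` and `seg(char)`, and newer JPype versions treat a Python `str` as convertible to either, so neither overload wins; a list of characters only plausibly matches the array-style overload. The resolver function below is purely illustrative:

```python
# Toy illustration of the overload ambiguity reported in the traceback.
# This is NOT JPype's real resolution algorithm, just the shape of it.

def seg_overloads_matching(arg):
    """Return the Java overload signatures a toy resolver would consider."""
    candidates = []
    if isinstance(arg, str):
        candidates.append("seg(java.lang.String)")
        candidates.append("seg(char)")      # a str is also coercible to char(s)
    if isinstance(arg, list):
        candidates.append("seg(char[])")    # a list of 1-char strings maps here
    return candidates

text = "江西鄱阳湖干枯了"
print(seg_overloads_matching(text))        # two candidates -> ambiguous
print(seg_overloads_matching(list(text)))  # one candidate  -> resolvable
```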

You probably installed a newer version of JPype1, which is incompatible. With a normal install there is no problem at all: Google Colab