File name: jieba-master
File size: 11.83MB
File format: ZIP
Updated: 2022-02-04 11:45:12
Tags: python3, word segmentation, nlp
"Jieba" (Chinese for "to stutter") Chinese text segmentation: built to be the best Python Chinese word segmentation module.
File preview:
jieba-master
----extra_dict/
--------stop_words.txt(222B)
--------dict.txt.big(8.19MB)
--------idf.txt.big(3.9MB)
--------dict.txt.small(1.48MB)
----MANIFEST.in(32B)
----.gitattributes(483B)
----Changelog(7KB)
----LICENSE(1KB)
----test/
--------test_pos.py(5KB)
--------test_change_dictpath.py(873B)
--------parallel/
--------test_tokenize_no_hmm.py(5KB)
--------test_bug.py(199B)
--------extract_tags_stop_words.py(658B)
--------test_pos_no_hmm.py(5KB)
--------test_lock.py(1KB)
--------test.txt(118B)
--------test.py(5KB)
--------test_file.py(383B)
--------extract_tags_idfpath.py(594B)
--------jieba_test.py(9KB)
--------test_pos_file.py(403B)
--------foobar.txt(11B)
--------jiebacmd.py(461B)
--------test_whoosh.py(2KB)
--------demo.py(3KB)
--------test_tokenize.py(5KB)
--------test_userdict.py(1KB)
--------lyric.txt(721B)
--------test_cut_for_search.py(5KB)
--------userdict.txt(151B)
--------extract_tags_with_weight.py(895B)
--------extract_tags.py(528B)
--------test_cutall.py(5KB)
--------extract_topic.py(1KB)
--------test_whoosh_file_read.py(826B)
--------test_multithread.py(830B)
--------test_no_hmm.py(5KB)
--------test_whoosh_file.py(1KB)
----setup.py(2KB)
----README.md(29KB)
----jieba/
--------_compat.py(1KB)
--------analyse/
--------dict.txt(4.84MB)
--------__init__.py(19KB)
--------__main__.py(2KB)
--------posseg/
--------finalseg/
----.gitignore(2KB)