
Using Stanford NLP with Chinese


The Stanford NLP tools include components for processing Chinese, notably a word segmenter and a parser. For details see:

http://nlp.stanford.edu/software/parser-faq.shtml#o


1. Word segmentation: the Chinese Segmenter

Download: http://nlp.stanford.edu/software/

Stanford Chinese Word Segmenter: a Java implementation of a CRF-based Chinese word segmenter.

The package is fairly large and needs a lot of memory at runtime, so if you run it from Eclipse you must increase the JVM heap size:

Run → Arguments → VM arguments → -Xmx800m (maximum heap of 800 MB)
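As a quick sanity check, the heap actually granted to the JVM can be queried at runtime with the standard Runtime API; the 800 MB threshold below simply mirrors the -Xmx800m setting above:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the largest heap the JVM will attempt to use
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
        if (maxMb < 800) {
            System.out.println("Heap is likely too small for the segmenter model; raise it with -Xmx800m");
        }
    }
}
```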

Demo code (modified; not fully tested):

    import java.util.Properties;
    import edu.stanford.nlp.ie.crf.CRFClassifier;

    Properties props = new Properties();
    props.setProperty("sighanCorporaDict", "data");
    // props.setProperty("NormalizationTable", "data/norm.simp.utf8");
    // props.setProperty("normTableEncoding", "UTF-8");
    // Needed because CTBSegDocumentIteratorFactory accesses it
    props.setProperty("serDictionary", "data/dict-chris6.ser.gz");
    // props.setProperty("testFile", args[0]);
    props.setProperty("inputEncoding", "UTF-8");
    props.setProperty("sighanPostProcessing", "true");

    CRFClassifier classifier = new CRFClassifier(props);
    classifier.loadClassifierNoExceptions("data/ctb.gz", props);
    // Flags must be re-set after the model is loaded
    classifier.flags.setProperties(props);
    // classifier.writeAnswers(classifier.test(args[0]));
    // classifier.testAndWriteAnswers(args[0]);

    String result = classifier.testString("我是中國人!");
    System.out.println(result);
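The segmenter returns the tokens of the sentence joined into a single space-delimited string. A minimal sketch of turning that into a word list; the sample string here is illustrative, not actual model output:

```java
public class SplitSegments {
    public static void main(String[] args) {
        // Illustrative segmenter output: tokens joined by spaces
        String result = "我 是 中國人 !";
        // Split on runs of whitespace to recover the individual words
        String[] words = result.split("\\s+");
        for (String w : words) {
            System.out.println(w);
        }
        System.out.println(words.length + " tokens");
    }
}
```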


2. Stanford Parser

See http://nlp.stanford.edu/software/parser-faq.shtml#o

http://blog.csdn.net/leeharry/archive/2008/03/06/2153583.aspx

Depending on the treebank model it is loaded with, the parser can handle English or Chinese. The input is a sentence that has already been word-segmented; the output is the part-of-speech tags and the sentence's parse tree (with dependency relations).

English demo (included in the downloaded archive):

    import java.util.Arrays;
    import java.util.Collection;
    import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
    import edu.stanford.nlp.trees.*;

    LexicalizedParser lp = new LexicalizedParser("englishPCFG.ser.gz");
    lp.setOptionFlags(new String[]{"-maxLength", "80", "-retainTmpSubcategories"});

    String[] sent = { "This", "is", "an", "easy", "sentence", "." };
    Tree parse = (Tree) lp.apply(Arrays.asList(sent));
    parse.pennPrint();
    System.out.println();

    TreebankLanguagePack tlp = new PennTreebankLanguagePack();
    GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
    GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
    Collection tdl = gs.typedDependenciesCollapsed();
    System.out.println(tdl);
    System.out.println();

    TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
    tp.printTree(parse);
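Each typed dependency prints in the form reln(governor-index, dependent-index), e.g. nsubj(sentence-5, This-1). If you only have the printed form, a small sketch of pulling the three parts out of such a string with a regular expression (the sample string is illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DepString {
    public static void main(String[] args) {
        // Illustrative typed-dependency string as printed by the parser
        String dep = "nsubj(sentence-5, This-1)";
        // Shape: relation(governorWord-index, dependentWord-index)
        Pattern p = Pattern.compile("(\\w+)\\((\\S+)-(\\d+), (\\S+)-(\\d+)\\)");
        Matcher m = p.matcher(dep);
        if (m.matches()) {
            System.out.println("relation:  " + m.group(1));
            System.out.println("governor:  " + m.group(2) + " at position " + m.group(3));
            System.out.println("dependent: " + m.group(4) + " at position " + m.group(5));
        }
    }
}
```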

For Chinese it is slightly different:

    import edu.stanford.nlp.trees.international.pennchinese.ChineseTreebankLanguagePack;

    // LexicalizedParser lp = new LexicalizedParser("englishPCFG.ser.gz");
    LexicalizedParser lp = new LexicalizedParser("xinhuaFactored.ser.gz");
    // lp.setOptionFlags(new String[]{"-maxLength", "80", "-retainTmpSubcategories"});

    // String[] sent = { "This", "is", "an", "easy", "sentence", "." };
    String[] sent = { "他", "和", "我", "在", "學校", "里", "常", "打", "桌球", "。" };
    String sentence = "他和我在學校里常打臺球。";
    Tree parse = (Tree) lp.apply(Arrays.asList(sent));
    // Tree parse = (Tree) lp.apply(sentence);

    parse.pennPrint();
    System.out.println();

    /*
    TreebankLanguagePack tlp = new PennTreebankLanguagePack();
    GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
    GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
    Collection tdl = gs.typedDependenciesCollapsed();
    System.out.println(tdl);
    System.out.println();
    */

    // English only:
    // TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
    // Chinese:
    TreePrint tp = new TreePrint("wordsAndTags,penn,typedDependenciesCollapsed", new ChineseTreebankLanguagePack());
    tp.printTree(parse);

Sometimes, however, we want more than just the printed dependency relations: we want programmatic access to the parse tree (graph). In that case use the following:
    String[] sent = { "他", "和", "我", "在", "學校", "里", "常", "打", "桌球", "。" };
    // ParserSentence is the author's own wrapper around LexicalizedParser, not part of the Stanford API
    ParserSentence ps = new ParserSentence();
    Tree parse = ps.parserSentence(sent);
    parse.pennPrint();
    TreebankLanguagePack tlp = new ChineseTreebankLanguagePack();
    GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
    GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
    Collection tdl = gs.typedDependenciesCollapsed();
    System.out.println(tdl);
    System.out.println();
    for (Object o : tdl) {
        // TypedDependency(GrammaticalRelation reln, TreeGraphNode gov, TreeGraphNode dep)
        TypedDependency td = (TypedDependency) o;
        System.out.println(td.toString());
    }

// The GrammaticalStructure method getGrammaticalRelation(TreeGraphNode gov, TreeGraphNode dep) returns the grammatical dependency relation between two words.
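With the (relation, governor, dependent) triples in hand, one can build a simple dependency graph for further processing. A sketch using plain Java collections; the triples here are hypothetical stand-ins for what a parse of the example sentence might yield:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DepGraph {
    public static void main(String[] args) {
        // Hypothetical (relation, governor, dependent) triples
        String[][] triples = {
            {"nsubj",  "打", "他"},
            {"dobj",   "打", "桌球"},
            {"advmod", "打", "常"},
        };
        // Adjacency map: governor -> list of "relation:dependent" edges
        Map<String, List<String>> graph = new HashMap<>();
        for (String[] t : triples) {
            graph.computeIfAbsent(t[1], k -> new ArrayList<>()).add(t[0] + ":" + t[2]);
        }
        System.out.println(graph.get("打"));
    }
}
```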
