  
  
===== Extracting and enriching data and metadata =====
==== Sequitur ====
  
==== Kea ====

[[http://www.nzdl.org/Kea/|Kea]] is a program for automatically extracting keywords and keyphrases from the full text of documents. Candidate keyphrases are identified using rudimentary lexical processing, features are computed for each candidate, and machine learning is used to determine which candidates should be assigned as keyphrases.
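
As a rough illustration of that pipeline (and not Kea's actual API), the sketch below generates candidate phrases from word n-grams and scores them with Kea's two classic features, TF×IDF and distance of first occurrence. Kea itself learns a Naive Bayes model over these features and filters candidates through stopword rules; the fixed scoring formula and hard-wired document frequencies here are stand-ins.

<code java>
import java.util.*;

/** Simplified sketch of a Kea-style candidate/feature pipeline.
 *  NOT Kea's real API: scoring is a fixed formula rather than a
 *  learned Naive Bayes model, and document frequencies are faked. */
public class KeyphraseSketch {

    // Treat every 1-3 word window as a candidate phrase, recording
    // each position at which it occurs.
    static Map<String, List<Integer>> candidates(String[] tokens) {
        Map<String, List<Integer>> occ = new HashMap<>();
        for (int n = 1; n <= 3; n++)
            for (int i = 0; i + n <= tokens.length; i++) {
                String phrase = String.join(" ",
                        Arrays.copyOfRange(tokens, i, i + n)).toLowerCase();
                occ.computeIfAbsent(phrase, k -> new ArrayList<>()).add(i);
            }
        return occ;
    }

    public static void main(String[] args) {
        String doc = "digital library software for building digital library collections";
        String[] tokens = doc.split("\\s+");

        // Corpus document frequencies would come from training data;
        // a single hand-set value stands in here.
        Map<String, Double> docFreq = Map.of("digital library", 0.05);

        List<Map.Entry<String, Double>> scored = new ArrayList<>();
        for (var e : candidates(tokens).entrySet()) {
            double tf = (double) e.getValue().size() / tokens.length;
            double idf = -Math.log(docFreq.getOrDefault(e.getKey(), 0.5));
            double firstPos = (double) e.getValue().get(0) / tokens.length;
            // Stand-in for the learned classifier: favour phrases with
            // high TFxIDF that first appear early in the document.
            scored.add(Map.entry(e.getKey(), tf * idf * (1.0 - firstPos)));
        }
        scored.sort((a, b) -> Double.compare(b.getValue(), a.getValue()));
        scored.stream().limit(5).forEach(e ->
                System.out.printf("%-25s %.4f%n", e.getKey(), e.getValue()));
    }
}
</code>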
  
==== Maui ====
  
[[https://code.google.com/archive/p/maui-indexer/|Maui]] is an indexing tool that automatically identifies main topics in text documents. Depending on the task, topics are tags, keywords, keyphrases, vocabulary terms, descriptors, index terms or titles of Wikipedia articles. Maui builds on the Kea algorithm, but adds functionality: it can assign topics to documents based on terms from Wikipedia, using Wikipedia Miner, and it includes many new features that help identify topics more accurately.
  
==== Wikipedia Miner ====

[[http://nzdl.org/wikipediaminer|Wikipedia Miner]] is an open-source software system that allows researchers and developers to integrate Wikipedia's rich semantics into their own applications. The toolkit creates databases that contain summarized versions of Wikipedia's content and structure, and includes a Java API to provide access to them.
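
As a hand-rolled illustration of the kind of lookups those databases support, the sketch below fakes two of them: a label table mapping anchor text to candidate articles with prior probabilities, and a link table used to score relatedness by link overlap. Everything in it, including the Jaccard relatedness measure, is a simplified stand-in invented for the example, not Wikipedia Miner's real API or data.

<code java>
import java.util.*;

/** Toy stand-in for Wikipedia Miner's summarized databases. */
public class WikipediaSemanticsSketch {

    // "Label" table: surface form -> candidate articles with the prior
    // probability that the label links to each article.
    static final Map<String, Map<String, Double>> LABELS = Map.of(
            "kiwi", Map.of("Kiwi (bird)", 0.65,
                           "Kiwifruit", 0.30,
                           "New Zealanders", 0.05));

    // "Link" table: outgoing links per article.
    static final Map<String, Set<String>> LINKS = Map.of(
            "Kiwi (bird)", Set.of("New Zealand", "Flightless bird"),
            "Kiwifruit", Set.of("New Zealand", "Fruit"));

    // Relatedness as Jaccard overlap of outgoing links, a simplification
    // of the link-based measures described in the toolkit's papers.
    static double relatedness(String a, String b) {
        Set<String> la = LINKS.getOrDefault(a, Set.of());
        Set<String> lb = LINKS.getOrDefault(b, Set.of());
        if (la.isEmpty() || lb.isEmpty()) return 0.0;
        Set<String> inter = new HashSet<>(la);
        inter.retainAll(lb);
        Set<String> union = new HashSet<>(la);
        union.addAll(lb);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        // Disambiguate "kiwi" by combining the label prior with
        // relatedness to an unambiguous context article.
        String context = "Kiwifruit";
        LABELS.get("kiwi").forEach((article, prior) ->
                System.out.printf("%-16s prior=%.2f relatedness=%.2f%n",
                        article, prior, relatedness(article, context)));
    }
}
</code>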
===== Browsing interfaces =====
  
    
It supports the PDF and DjVu document formats.

==== MAT: Metadata Analysis Tool ====

[[nzdl:mat|MAT]] is a tool for producing statistics and visualisations of repository metadata.
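
As a toy illustration of the kind of statistics such a tool can report, the sketch below tallies element usage, distinct values and completeness over a few invented Dublin Core records; MAT's actual analyses and visualisations go well beyond this.

<code java>
import java.util.*;

/** Toy metadata statistics over invented records; real input would be
 *  harvested repository metadata (e.g. Dublin Core). */
public class MetadataStatsSketch {
    public static void main(String[] args) {
        List<Map<String, String>> records = List.of(
                Map.of("dc.title", "Paper A", "dc.creator", "Smith"),
                Map.of("dc.title", "Paper B", "dc.creator", "Jones",
                       "dc.subject", "text segmentation"),
                Map.of("dc.title", "Paper C"));

        Map<String, Integer> usage = new TreeMap<>();       // records using element
        Map<String, Set<String>> values = new TreeMap<>();  // distinct values seen
        for (Map<String, String> rec : records)
            for (Map.Entry<String, String> e : rec.entrySet()) {
                usage.merge(e.getKey(), 1, Integer::sum);
                values.computeIfAbsent(e.getKey(), k -> new HashSet<>())
                      .add(e.getValue());
            }

        System.out.println("element      used-in  distinct  completeness");
        usage.forEach((elem, n) -> System.out.printf(
                "%-12s %7d  %8d  %11.0f%%%n",
                elem, n, values.get(elem).size(), 100.0 * n / records.size()));
    }
}
</code>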
  
==== Phind ====
  
===== Chinese Text Segmentation =====
  
Word segmentation is designed to find word boundaries in languages like Chinese and Japanese, which are (unlike English) written without spaces or other word delimiters, apart from punctuation marks. It plays a significant role in applications that use the word as the basic unit, because machine-readable Chinese text is invariably stored in unsegmented form.
  
We have implemented a WWW interface for segmenting Chinese text. A demo used to be available at www.nzdl.org/cgi-bin/congb, but it is no longer running. You can see an illustration of the transformation at [[http://www.nzdl.org/chinese-text-segmenter/demo1.htm]] (currently at [[http://community.nzdl.org/www/chinese-text-segmenter/demo1.htm]]).
  
(Note: the code can be found on community, in the chinese-text-segmenter directory.)
  
More information can be found in the paper: [[https://www.cs.waikato.ac.nz/~ihw/papers/00WT-YW-RMN-IHW-Comprsbased.pdf|A Compression-based Algorithm for Chinese Word Segmentation]]
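
As a drastically simplified sketch of the compression-based idea, the code below uses dynamic programming to pick the segmentation with the smallest total code length in bits. The paper prices candidate segmentations with a PPM character model trained on pre-segmented text; the toy word-frequency table here (and the Latin-letter example, to avoid font issues) is a stand-in for that model.

<code java>
import java.util.*;

/** Minimal compression-flavoured segmenter: choose the split that
 *  minimises total code length. A hand-made word table replaces the
 *  PPM character model used in the paper. */
public class SegmenterSketch {
    // Made-up word probabilities for a toy vocabulary.
    static final Map<String, Double> P = Map.of(
            "the", 0.04, "these", 0.002, "sea", 0.005,
            "search", 0.003, "arch", 0.0008);
    static final double UNKNOWN_BITS_PER_CHAR = 20.0; // fallback penalty

    // Code length in bits for emitting one word.
    static double bits(String w) {
        Double p = P.get(w);
        return p != null ? -Math.log(p) / Math.log(2)
                         : w.length() * UNKNOWN_BITS_PER_CHAR;
    }

    // Viterbi-style search for the cheapest segmentation of s.
    static List<String> segment(String s) {
        int n = s.length();
        double[] cost = new double[n + 1]; // cheapest encoding of s[0..i)
        int[] back = new int[n + 1];       // where the last word starts
        Arrays.fill(cost, Double.MAX_VALUE);
        cost[0] = 0.0;
        for (int i = 1; i <= n; i++)
            for (int j = Math.max(0, i - 8); j < i; j++) { // words <= 8 chars
                double c = cost[j] + bits(s.substring(j, i));
                if (c < cost[i]) { cost[i] = c; back[i] = j; }
            }
        LinkedList<String> words = new LinkedList<>();
        for (int i = n; i > 0; i = back[i])
            words.addFirst(s.substring(back[i], i));
        return words;
    }

    public static void main(String[] args) {
        // "the search" (about 13.0 bits) beats "these arch" (about 19.3).
        System.out.println(segment("thesearch")); // [the, search]
    }
}
</code>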
===== Others =====
  
[[http://nzdl.org/ELKB/|Electronic Lexical Knowledge Base (ELKB)]] is software for accessing and exploring Roget's Thesaurus. It also provides solutions for various natural language processing tasks. All scripts were originally developed as part of Mario Jarmasz's Master's thesis at the [[http://engineering.uottawa.ca/eecs/|University of Ottawa]], Canada.
  