Shallow parsing (also chunking or light parsing) is an analysis of a sentence that first identifies its constituent parts (nouns, verbs, adjectives, etc.) and then links them to higher-order units that have discrete grammatical meanings (noun groups or phrases, verb groups, etc.). While the most elementary chunking algorithms simply link constituent parts on the basis of elementary search patterns (e.g., as specified by regular expressions), approaches that use machine learning techniques (classifiers, topic modeling, etc.) can take contextual information into account and thus compose chunks in such a way that they better reflect the semantic relations between the basic constituents.[1] That is, these more advanced methods get around the problem that combinations of elementary constituents can have different higher-level meanings depending on the context of the sentence.
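The elementary, pattern-based style of chunking mentioned above can be sketched with a toy example. This is a minimal illustration, not a production chunker: the function name, the noun-phrase pattern (optional determiner, any adjectives, one or more nouns), and the Penn Treebank-style tags are assumptions chosen for the sketch.

```python
import re

def chunk_noun_phrases(tagged_tokens):
    """Group POS-tagged tokens into noun-phrase chunks using a
    regular expression over the tag sequence (a toy shallow parser)."""
    # Encode the tag sequence as a string so a regular expression
    # can match over it; each tag is wrapped in angle brackets.
    tag_string = "".join(f"<{tag}>" for _, tag in tagged_tokens)
    # Toy NP pattern: optional determiner, any adjectives, nouns.
    pattern = re.compile(r"(<DT>)?(<JJ>)*(<NN[SP]*>)+")
    chunks = []
    for m in pattern.finditer(tag_string):
        # Map the matched character span back to token indices by
        # counting tag delimiters before the match start and end.
        start = tag_string[: m.start()].count("<")
        end = tag_string[: m.end()].count("<")
        chunks.append([word for word, _ in tagged_tokens[start:end]])
    return chunks

tagged = [("the", "DT"), ("little", "JJ"), ("dog", "NN"),
          ("barked", "VBD"), ("at", "IN"),
          ("the", "DT"), ("cat", "NN")]
print(chunk_noun_phrases(tagged))
# → [['the', 'little', 'dog'], ['the', 'cat']]
```

Note that the chunker operates purely on the tag sequence, ignoring context; this is exactly the limitation that the machine-learning approaches described above address.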
It is a technique widely used in natural language processing. It is similar to the concept of lexical analysis for computer languages. Under the name "shallow structure hypothesis", it is also used as an explanation for why second language learners often fail to parse complex sentences correctly.[2]