Succinct data structure

In computer science, a succinct data structure is a data structure that uses an amount of space "close" to the information-theoretic lower bound, but (unlike other compressed representations) still supports efficient query operations. The concept was originally introduced by Jacobson[1] to encode bit vectors, (unlabeled) trees, and planar graphs. Unlike general lossless data compression, succinct data structures can be queried in place, without first decompressing them. A related notion is that of a compressed data structure, in which the size of the structure likewise depends on the particular data being represented.
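As a rough illustration of answering queries without decompression, the Python sketch below augments a plain bit vector with one cumulative popcount per block so that rank queries combine a stored prefix count with a short scan. It is a simplified, single-level version of the idea rather than Jacobson's original two-level scheme, and the class name and block size are invented for the example; with a block size on the order of log² n, the stored counts occupy only o(n) extra bits.

  # Simplified sketch: a bit vector that answers rank1(i) = number of 1s in
  # positions [0, i) using one cumulative count per block. The raw bits are
  # kept as-is; only small auxiliary counts are added.
  class RankBitVector:
      def __init__(self, bits, block_size=64):
          self.bits = bits                  # the raw bit sequence, unchanged
          self.block_size = block_size
          # cumulative number of 1s before the start of each block
          self.block_rank = [0]
          total = 0
          for start in range(0, len(bits), block_size):
              total += sum(bits[start:start + block_size])
              self.block_rank.append(total)

      def rank1(self, i):
          """Number of 1 bits in positions [0, i)."""
          block = i // self.block_size
          # stored prefix count for the block, plus a scan inside the block
          return self.block_rank[block] + sum(self.bits[block * self.block_size:i])

  bv = RankBitVector([1, 0, 1, 1, 0, 0, 1, 0] * 4)
  print(bv.rank1(10))   # number of 1s among the first 10 bits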

Suppose that Z is the information-theoretic optimal number of bits needed to store some data. A representation of this data is called:

  • implicit if it takes Z + O(1) bits of space,
  • succinct if it takes Z + o(Z) bits of space, and
  • compact if it takes O(Z) bits of space.

For example, a data structure that uses 2Z bits of storage is compact, Z + √Z bits is succinct, Z + lg Z bits is also succinct, and Z + 3 bits is implicit.
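To make the classification concrete, the short Python sketch below treats the data as a k-element subset of {0, ..., n−1}, computes the optimum Z = log2 C(n, k), and compares a few hypothetical representation sizes against it. The values of n and k are arbitrary choices for the example.

  # Worked example of the implicit / succinct / compact classification for a
  # k-element subset of an n-element universe.
  import math

  n, k = 1000, 50
  Z = math.log2(math.comb(n, k))        # information-theoretic optimum in bits

  candidates = {
      "Z + 3 bits (implicit)":       Z + 3,
      "Z + sqrt(Z) bits (succinct)": Z + math.sqrt(Z),
      "2 * Z bits (compact)":        2 * Z,
      "k 32-bit integers":           32 * k,   # a plain sorted array, for contrast
  }

  for name, size in candidates.items():
      print(f"{name:30s} {size:8.0f} bits  (overhead {size - Z:+.0f})")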

Implicit structures are thus usually reduced to storing information using some permutation of the input data; the most well-known example of this is the heap.
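For instance, a binary min-heap stored in a plain array encodes its tree shape purely through index arithmetic, so beyond the permuted input values only O(1) bookkeeping is needed. The following Python snippet, using the standard heapq module, illustrates this.

  # An implicit structure: a binary min-heap kept as a plain array. The
  # children of index i live at 2*i + 1 and 2*i + 2, so no pointers are stored.
  import heapq

  data = [9, 4, 7, 1, 8, 3]
  heapq.heapify(data)          # rearranges the list in place into heap order
  print(data)                  # the same values, just permuted
  print(data[0])               # the minimum is always at index 0

  i = 1
  left, right = 2 * i + 1, 2 * i + 2   # children located by arithmetic alone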

  1. ^ Jacobson, G. J. (1988). Succinct Static Data Structures (Ph.D. thesis). Carnegie Mellon University.