In computer science, an integer literal is a kind of literal for an integer whose value is directly represented in source code. For example, in the assignment statement x = 1, the string 1 is an integer literal indicating the value 1, while in the statement x = 0x10 the string 0x10 is an integer literal indicating the value 16, which is represented by 10 in hexadecimal (indicated by the 0x prefix).
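As an illustration of base prefixes, the following sketch (in Python, chosen only for its familiar literal syntax; other languages use similar but not identical prefixes) writes the same value 16 with four different integer literals:

```python
# The same value 16 expressed as integer literals in four bases.
decimal = 16        # plain decimal literal
hexadecimal = 0x10  # hexadecimal literal, 0x prefix
octal = 0o20        # octal literal, 0o prefix (Python 3 syntax)
binary = 0b10000    # binary literal, 0b prefix

# All four literals denote the same integer value.
assert decimal == hexadecimal == octal == binary == 16
```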
By contrast, in x = cos(0), the expression cos(0) evaluates to 1 (as the cosine of 0), but the value 1 is not literally included in the source code. More simply, in x = 2 + 2, the expression 2 + 2 evaluates to 4, but the value 4 is not literally included. Further, in x = "1" the "1" is a string literal, not an integer literal, because it is in quotes. The value of the string is 1, which happens to be an integer string, but this is semantic analysis of the string literal – at the syntactic level "1" is simply a string, no different from "foo".
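The distinctions above can be sketched as follows (again in Python, used only as a convenient notation):

```python
import math

x = 1             # integer literal: the value 1 appears directly in source
y = math.cos(0)   # expression: evaluates to 1.0, but no literal 1 appears
z = 2 + 2         # expression: evaluates to 4; only the literals 2 appear
s = "1"           # string literal: syntactically a string, not an integer

assert y == 1 and z == 4
assert isinstance(s, str)    # "1" is a string at the syntactic level
assert int(s) == 1           # recovering the integer requires a conversion,
                             # i.e. semantic work done at run time
```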