For most integers, the short integer type is a bit small, and any data type larger than int is unnecessarily large. Analogously, for most floating-point numbers, the float floating-point type is a bit small, and any data type larger than double is unnecessarily large. Consequently, most programmers use int and double more than they do any other integer and floating-point types, and all the programs in the rest of this book use int and double for all integers and floating-point numbers.
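
To make the convention concrete, here is a minimal sketch, written in C on the assumption that this is the book's language (the same declarations work essentially unchanged in C++ and Java). It declares one int and one double and, for illustration only, prints the storage sizes of the neighboring types; the exact sizes reported are implementation-dependent.

    #include <stdio.h>

    int main(void) {
        int count = 42;          /* int: the usual choice for whole numbers   */
        double average = 3.75;   /* double: the usual choice for real numbers */

        printf("count = %d, average = %g\n", count, average);

        /* Illustrative only: sizes vary by machine and compiler, but they
           show where int and double sit among the integer and
           floating-point types.                                             */
        printf("sizeof(short) = %zu, sizeof(int) = %zu\n",
               sizeof(short), sizeof(int));
        printf("sizeof(float) = %zu, sizeof(double) = %zu\n",
               sizeof(float), sizeof(double));
        return 0;
    }

On a typical desktop machine this reports int as larger than short and double as larger than float, which is exactly the trade-off described above: enough range and precision for most values without paying for more than the program needs.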