Tokenizers treat whitespace characters as delimiters that divide character sequences into tokens. Thus, a file containing the following characters is viewed as a stream of nine tokens divided by spaces and line-terminating characters:
4 7 3 8 8 7 2 10 5
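
As a sketch of how reading such a token stream might look in code, the following snippet uses Java's java.util.Scanner (an assumption here; the text does not name a specific tokenizer class), whose default delimiter is whitespace, to read the nine tokens above and sum them:

    import java.util.Scanner;

    public class TokenDemo {
        public static void main(String[] args) {
            // The same nine whitespace-separated tokens, split across two lines
            String data = "4 7 3 8 8\n7 2 10 5";
            Scanner tokens = new Scanner(data);
            int count = 0;
            int sum = 0;
            // hasNextInt/nextInt skip over any run of spaces or line terminators
            while (tokens.hasNextInt()) {
                sum += tokens.nextInt();
                count++;
            }
            System.out.println(count + " tokens, sum = " + sum);  // prints: 9 tokens, sum = 54
            tokens.close();
        }
    }

The same loop would work unchanged if the Scanner were constructed from a file rather than a string, since the tokenizer sees only a stream of characters in either case.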