Token Buffer

The simplest data structure for storing the token stream is the TokenBuffer.

It holds both the source code text and the token stream but has limited incremental rescanning capabilities, only allowing appending to the end of the source code. Token buffers are useful for loading large files from disk or network incrementally, particularly when data arrives in parts.

You are encouraged to use a token buffer if you don't need general incremental rescanning capabilities and only want to store the source code together with its tokens, or if you plan to load the source code initially and later transfer it to a general-purpose compilation unit storage such as Document.

The token buffer offers the fastest scanning implementation among Lady Deirdre's compilation unit storages, providing high performance when iterating through token chunks and source code substrings.

This object is also useful for examining the output of the lexical scanner.

use lady_deirdre::lexis::{SourceCode, TokenBuffer};

// `JsonToken` is a token type assumed to be defined elsewhere
// (e.g., derived with the Token macro).
let mut buffer = TokenBuffer::<JsonToken>::from("[1, 2, 3");

assert_eq!(buffer.substring(..), "[1, 2, 3");

buffer.append(", 4, 5, 6]");

assert_eq!(buffer.substring(..), "[1, 2, 3, 4, 5, 6]");

// Prints all tokens in the token buffer to the terminal.
for chunk in buffer.chunks(..) {
    println!("{:?}", chunk.token);
}