© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
C. Milanesi, Beginning Rust, https://doi.org/10.1007/978-1-4842-7208-4_21

21. Standard Library Collections

Carlo Milanesi
Bergamo, Italy
 
In this chapter, you will learn:
  • How to measure the time spent running portions of Rust code

  • How performance considerations suggest which kind of collection to use

  • Which is the best collection for various kinds of operations: sequential scan, insertion and removal of items at both ends, removal of the largest item, search by value, search by key, or scan in order

  • Which collections in the Rust standard library correspond to the collections in the C++ standard library

Collections

Arrays, vectors, structs, tuple-structs, tuples, and enums are data types whose objects may contain several other objects. However, for structs, tuple-structs, tuples, and enums, each contained object requires a specific clause both in the type declaration and in the object construction, so these types are impractical for holding hundreds of objects. Arrays and vectors, instead, can contain many objects while being defined and instantiated with simple expressions. Data types of this kind are named collections.

Arrays and vectors are optimal collections in many cases: they use memory efficiently, they are fast to scan, they use CPU caches efficiently, and they allow fast direct access through an integer offset (or index). Yet, for some operations, they are definitely inefficient, so in such cases it is appropriate to use other kinds of collections. The Rust standard library provides various kinds of them: VecDeque<T>, LinkedList<T>, BinaryHeap<T>, BTreeSet<T>, BTreeMap<K,V>, HashSet<T>, and HashMap<K,V>.

Speaking of collections, arrays are a separate case, because they are entirely stack allocated and their size is defined at compile time. All the other collections, including vectors, contain a variable number of items; they store a fixed-length header on the stack, while the data, if any, is allocated on the heap. We will call them dynamically sized collections.

Measuring Execution Time

Because the choice of a collection depends a lot on its performance, here we take a detour to see how to measure precisely the time spent by different portions of Rust code.

Performance is important for all software developers. However, those who program in a high-level language typically reason about how many milliseconds or how many seconds are spent by a command, while those who program in a low-level language, like Rust, often reason about how many microseconds or even nanoseconds are spent by a single function.

In the Rust standard library, there are functions that can precisely measure the time elapsed between two source code points:
let start_time = std::time::Instant::now();
for i in 0..10_000 {
    println!("{}", i);
}
println!("{:?}", start_time.elapsed());

This program first prints all the integer numbers from 0 to 9999, and then it prints the time spent printing them. A possible content of the last line is: 20.695631ms, meaning that the loop took a bit more than 20 milliseconds to run.

Such time depends mainly on the power of the computer used, on the operating system, and on the terminal emulator program. It depends little on the optimization level used by the Rust compiler, as most time is used by library code and operating system code.

However, some programs perform a lot of computations, and for such programs the level of optimizations performed by the compiler can dramatically affect the time to run a command.

At the beginning of the book, we saw that to compile a Rust program it was enough to write the rustc command, followed by the name of the source file, with no need for any other arguments. Yet, such a command does not activate the compiler optimizations, so it generates machine code that is good for debugging but not as efficient as it could be.

If you are interested in performance, you should activate the compiler optimizations. They are activated by the command-line option -O (an uppercase letter O, not a zero). If that option is omitted, optimizations are disabled.

Therefore, in this chapter it is assumed that the examples are compiled by the following command line:
rustc -O main.rs

If you go back to the previous example, you will notice that it uses the Instant type of the standard library, referred to by its full path, std::time::Instant. This type represents time instants with a resolution finer than one microsecond.

To measure a time span, you should invoke the now function of the Instant type. Every invocation of now returns an object of type Instant, representing the current time instant. Then, the elapsed method of the Instant type returns an object of type std::time::Duration, which represents the time elapsed from the creation of the Instant object to the current time. When a Duration is printed for debug (using the {:?} placeholder), the most appropriate unit of measurement is automatically chosen, and its symbol is printed as a suffix.

Performing Arbitrary Insertions and Removals

Now let’s get back to the operations on the collections.

The following program is very efficient:
use std::time::Instant;
const SIZE: usize = 100_000_000;
let start_time = Instant::now();
let mut v = Vec::<usize>::with_capacity(SIZE);
let t1 = start_time.elapsed();
for i in 0..SIZE {
    v.push(i);
}
let t2 = start_time.elapsed();
for _ in 0..SIZE {
    v.pop();
}
let t3 = start_time.elapsed();
print!("{:?} {:?} {:?}", t1, t2 - t1, t3 - t2);

Remember to compile it specifying the -O option.

This program will print three numbers that depend on the computer, on the operating system, and possibly even on the version of the compiler.

A possible result is: 3.517μs 449.095194ms 61.93267ms.

That means that to create a vector allocating room for one hundred million usize objects, which on a 64-bit system occupies 800 MB, only a few microseconds are spent; to put one hundred million values into that space, without ever allocating memory, less than half a second is spent; and to remove all those numbers, one at a time, proceeding from the last one, less than one tenth of a second is spent.

Instead, the following program is very inefficient:
use std::time::Instant;
const SIZE: usize = 100_000;
let start_time = Instant::now();
let mut v = Vec::<usize>::with_capacity(SIZE);
let t1 = start_time.elapsed();
for i in 0..SIZE {
    v.insert(0, i);
}
let t2 = start_time.elapsed();
for _ in 0..SIZE {
    v.remove(0);
}
let t3 = start_time.elapsed();
print!("{:?} {:?} {:?}", t1, t2 - t1, t3 - t2);

It could print: 6.663μs 1.151939082s 1.015383097s.

First of all, notice that this time only one hundred thousand items are processed, which is one thousandth of those processed by the previous example.

To create the 800 KB vector, a few microseconds are spent, but to insert one hundred thousand items at the beginning of the vector, more than one second is spent, and about as much to remove them, proceeding from the first one. So this insertion operation appears to be more than two thousand times slower than that of the previous example, and this removal operation more than sixteen thousand times slower.

The reason for such times is easy to explain.

To add an item at the end of a vector having enough capacity, all you need to do is check that there is enough space, copy the item into the vector's buffer, and increment the item count. On the computer used to measure these times, that takes less than five nanoseconds for a usize, including the time to advance the loop variable.

To remove an item from the end of a nonempty vector, all you need to do is to check that the vector is not empty, and to decrement the item count. That takes less than one nanosecond.

To insert an item at the beginning of a vector, you first need to shift all the items already in the vector one place forward, to free the first place for the new item. At the start of the loop the shift is fast, but as the item count grows, the time spent to insert an item at the beginning grows proportionally.

Similarly, to remove the first item, you must shift all the items but the first one, which will be overwritten by the second item.

Using the notation of the theory of computational complexity, we can say that inserting or removing an item at the end of a vector has O(1) complexity, that is, constant complexity, while inserting or removing an item at the beginning of a vector already containing N items has O(N) complexity, that is, linear complexity.

If the insertions and removals are not really at the beginning, but in an intermediate position, the performance will be better, but still much slower than insertion and removal at the end.

Queues

If you need to insert and remove items only at the beginning of a vector, it is enough to store the vector in reverse order, so that those operations are applied at the end.

Instead, if you need to insert items at one end of the sequence and remove them at the other end, the vector is not the optimal collection. This use case is that of a first-in-first-out (FIFO) queue. Here is an example of a queue implemented using a Vec, inserting items at the end and removing them from the beginning:
use std::time::Instant;
const SIZE: usize = 40_000;
let start_time = Instant::now();
let mut v = Vec::<usize>::new();
for i in 0..SIZE {
    v.push(i);
    v.push(SIZE + i);
    v.remove(0);
    v.push(SIZE * 2 + i);
    v.remove(0);
}
let t1 = start_time.elapsed();
while v.len() > 0 {
    v.remove(0);
}
let t2 = start_time.elapsed();
print!("{:?} {:?}", t1, t2 - t1);

It could print: 330.812316ms 163.214733ms.

The first timed code portion creates an empty vector and then, forty thousand times, inserts three numbers at the end of the vector and extracts two items from the beginning. The second code portion extracts all the remaining items from the beginning. The first portion takes about a third of a second, and the second about a sixth of a second. Actually, almost all the time is spent in the extractions, because the insertions are extremely fast.

We could try, instead, to insert items at the beginning and extract them from the end:
use std::time::Instant;
const SIZE: usize = 40_000;
let start_time = Instant::now();
let mut v = Vec::<usize>::new();
for i in 0..SIZE {
    v.insert(0, i);
    v.insert(0, SIZE + i);
    v.pop();
    v.insert(0, SIZE * 2 + i);
    v.pop();
}
let t1 = start_time.elapsed();
while v.len() > 0 {
    v.pop();
}
let t2 = start_time.elapsed();
print!("{:?} {:?}", t1, t2 - t1);

It could print: 505.627254ms 211ns.

Now the insertions are slow, and the removals spend virtually zero time. However, the sum of the two times hasn’t improved at all.

Now, instead of the Vec type, let’s use the VecDeque type:
use std::time::Instant;
const SIZE: usize = 40_000;
let start_time = Instant::now();
let mut vd = std::collections::VecDeque::<usize>::new();
for i in 0..SIZE {
    vd.push_back(i);
    vd.push_back(SIZE + i);
    vd.pop_front();
    vd.push_back(SIZE * 2 + i);
    vd.pop_front();
}
let t1 = start_time.elapsed();
while vd.len() > 0 {
    vd.pop_front();
}
let t2 = start_time.elapsed();
print!("{:?} {:?}", t1, t2 - t1);

It could print: 994.255μs 47.34μs.

The difference from the previous times is abysmal. Indeed, the whole program spends less than one millisecond, compared with around 500 milliseconds spent by the versions using a vector.

Notice that while the Vec type is automatically imported into the current namespace, the VecDeque type, like the other collection types, must be explicitly qualified or imported with a use statement.

The VecDeque name is shorthand for vector-based double-ended queue. Here, the word queue means a sequential collection into which items are inserted at one end and from which items are extracted at the other end. Being double-ended means that items can be inserted and extracted at both ends with no penalty. Finally, being vector-based means that the items are stored in a vector-like buffer, so that they can be accessed through an integer offset, similarly to arrays and vectors.

To insert or remove items at the end of a vector, you use the simple words push and pop, without specifying the point of insertion or extraction; it is understood that this point is the end, the only point where those operations are efficient. VecDeque collections, instead, handle both ends equally well; so the recommended functions to insert items at the beginning and at the end are, respectively, push_front and push_back, and the recommended functions to remove items from the beginning and from the end are, respectively, pop_front and pop_back. The VecDeque type also supports the insert and remove functions, similarly to vectors, but those functions are not recommended, because they can be inefficient.

Given that queues are so efficient, why shouldn’t we always use them, instead of vectors?

The reason is that for the most frequent operations on vectors, which are iteration and direct access, vectors are faster by a constant factor. Here is a program that first adds forty thousand numbers to a VecDeque collection and to a Vec collection, and then computes the sum of the inserted items by scanning such collections:
use std::time::Instant;
const SIZE: usize = 40_000;
let mut v = Vec::<usize>::new();
let mut vd = std::collections::VecDeque::<usize>::new();
let start_time = Instant::now();
for i in 0..SIZE {
    v.push(i);
}
let t1 = start_time.elapsed();
for i in 0..SIZE {
    vd.push_back(i);
}
let mut count = 0;
let t2 = start_time.elapsed();
for i in v.iter() {
    count += i;
}
let t3 = start_time.elapsed();
for i in vd.iter() {
    count += i;
}
let t4 = start_time.elapsed();
print!("{} {:?} {:?} {:?} {:?}", count,
    t1, t2 - t1, t3 - t2, t4 - t3);

It could print: 1599960000 495.299μs 466.374μs 16.642μs 89.7μs.

The count is printed just to force the compiler to do the actual computation. Otherwise, the optimizer could skip it all.

The times mean that to insert each item at the end of a collection, the Vec and VecDeque collection types have almost the same performance, but for scanning the whole collection, Vec is about five times as fast.

Linked Lists

For some applications, there is the need to frequently insert and remove items at intermediate positions. In such cases, both vectors and queues perform those operations inefficiently, so you can resort to another kind of collection, LinkedList. However, there are really few cases in which the use of a LinkedList is justified, because its items are allocated separately in the heap, which makes scanning it much slower than scanning a vector.

Binary Heaps

There can be another use case for a collection, the so-called priority queue. It happens when only two functions are used: one to insert an item and one to extract it. Every item is inserted with a priority value, and the function to extract an item must get the item with the highest value among those contained in the collection. Using vectors, you could start with the following program:
let data = vec![
    48, 18, 20, 35, 17, 13, 39, 12, 42, 33, 29, 27, 50, 16];
let mut v = Vec::<i32>::new();
fn add(v: &mut Vec<i32>, item: i32) {
    v.push(item);
}
fn extract(v: &mut Vec<i32>) -> i32 {
    v.pop().unwrap()
}
let mut i = 0;
loop {
    if i == data.len() { break; }
    add(&mut v, data[i]);
    i += 1;
    if i == data.len() { break; }
    add(&mut v, data[i]);
    i += 1;
    print!("{} ", extract(&mut v));
}
while ! v.is_empty() {
    print!("{} ", extract(&mut v));
}

It will print: 18 35 13 12 33 27 16 50 29 42 39 17 20 48 .

The data vector is used as an item provider. Its 14 objects are processed, two each iteration, in the seven iterations of the first loop. At each iteration, two items of the data vector are passed to the add function, which inserts them at the end of the v vector. Then, the last item is extracted from the v vector and printed. The last statement of the program repeatedly extracts and prints the last items of the vector, until the vector becomes empty.

The result is not the desired one, as the last inserted item is always extracted, even if the collection contains a larger item. For example, after inserting the numbers 48 and 18, the first extracted number is 18, even though 48 is greater.

The correct result can be obtained by replacing the add function with the following one:
fn add(v: &mut Vec<i32>, item: i32) {
    v.push(item);
    v.sort();
}
or by replacing the extract function with the following one:
fn extract(v: &mut Vec<i32>) -> i32 {
    v.sort();
    v.pop().unwrap()
}

In both cases, the program will print: 48 35 20 39 42 33 50 29 27 18 17 16 13 12 .

In the first case, each time a value is added to the v vector, that vector is sorted again so that the values are always in ascending order. In the second case, the vector is sorted just before a value is extracted from the v vector. Both techniques guarantee that the extracted value is always the largest among those contained in the collection.

Both correct versions have the drawback of invoking the sort function quite often, which has a significant cost.

An equivalent but much faster version is obtained if you replace the declarations of the v vector and of the add and extract functions, with the following code:
use std::collections::BinaryHeap;
let mut v = BinaryHeap::<i32>::new();
fn add(v: &mut BinaryHeap<i32>, item: i32) {
    v.push(item);
}
fn extract(v: &mut BinaryHeap<i32>) -> i32 {
    v.pop().unwrap()
}

As you can see, binary heaps are used by invoking the push and pop functions, similarly to vectors. However, while for vectors the pop function extracts the most recently inserted item among those still contained in the collection, for binary heaps the pop function extracts the item having the largest value among those still contained in the collection.

Regarding the implementation, it is enough to say here that both when an item is added and when an item is removed, some items are moved so that the last item is always the largest; however, the other items are not necessarily in ascending order.

Ordered Sets and Unordered Sets

Another way to use a collection is to insert an item only if it is not already contained in the collection. That realizes the concept of a mathematical set. Indeed, in mathematics it makes no sense to say that an item is contained several times in a set.

A way to implement this kind of insertion using a vector would be to scan the whole vector at every insertion, and insert the item only if it is not found. Of course, such an algorithm would be inefficient.

There are several techniques to create a collection that efficiently inserts an item only if it is not already contained, that is, a collection implementing the concept of a set without duplicates.

The most efficient technique to search for arbitrary objects uses a kind of data structure named hashtable; therefore, this type of collection is named HashSet.

However, such a collection has the drawback that its items are not ordered. If you need ordered items, you can use a collection based on a data structure named B-tree, and therefore named BTreeSet. Searching this data structure is definitely faster than scanning a large vector, but usually slower than searching a hashtable. Here is an example of both set implementations:
let arr = [6, 8, 2, 8, 4, 9, 6, 1, 8, 0];
let mut v = Vec::<_>::new();
let mut hs = std::collections::HashSet::<_>::new();
let mut bs = std::collections::BTreeSet::<_>::new();
for i in arr.iter() {
    v.push(i);
    hs.insert(i);
    bs.insert(i);
}
print!("Vec:");
for i in v.iter() { print!(" {}", i); }
println!(". {:?}", v);
print!("HashSet :");
for i in hs.iter() { print!(" {}", i); }
println!(". {:?}", hs);
print!("BTreeSet:");
for i in bs.iter() { print!(" {}", i); }
println!(". {:?}", bs);
It could print:
Vec: 6 8 2 8 4 9 6 1 8 0. [6, 8, 2, 8, 4, 9, 6, 1, 8, 0]
HashSet : 2 4 9 8 0 1 6. {2, 4, 9, 8, 0, 1, 6}
BTreeSet: 0 1 2 4 6 8 9. {0, 1, 2, 4, 6, 8, 9}

As you can see, the Vec collection contains all ten inserted items, in insertion order; instead, the HashSet collection and the BTreeSet collection contain only seven items, as there is only one of the two 6 numbers and only one of the three 8 numbers. Moreover, the BTreeSet collection is sorted, while the HashSet collection is in an arbitrary order. A different run can generate a different order.

Here is some code to measure the performance of such collections:
const SIZE: u32 = 40_000;
// Creating empty collections.
let mut v = Vec::<_>::new();
let mut hs = std::collections::HashSet::<_>::new();
let mut bs = std::collections::BTreeSet::<_>::new();
// Adding elements to the collections,
// measuring the elapsed time.
let start_time = std::time::Instant::now();
for i in 0..SIZE { v.push(i); }
let t1 = start_time.elapsed();
for i in 0..SIZE { hs.insert(i); }
let t2 = start_time.elapsed();
for i in 0..SIZE { bs.insert(i); }
let t3 = start_time.elapsed();
for i in 0..SIZE { if ! v.contains(&i) { return; } }
let t4 = start_time.elapsed();
// Performing binary searches on a sorted vector,
// measuring the elapsed time.
v.swap(10_000, 20_000);
v.sort();
let t5 = start_time.elapsed();
for i in 0..SIZE {
    if v.binary_search(&i).is_err() { return; }
}
// Searching the hashtable, measuring the elapsed time.
let t6 = start_time.elapsed();
for i in 0..SIZE { if ! hs.contains(&i) { return; } }
// Searching the BTree, measuring the elapsed time.
let t7 = start_time.elapsed();
for i in 0..SIZE { if ! bs.contains(&i) { return; } }
let t8 = start_time.elapsed();
// Printing the measured times.
println!("Pushes into Vec: {:?} per item", t1 / SIZE);
println!("Insertions into HashSet: {:?} per item", (t2 - t1) / SIZE);
println!("Insertions into BTreeSet: {:?} per item", (t3 - t2) / SIZE);
println!("Linear search in Vec: {:?} per item", (t4 - t3) / SIZE);
println!("Sorting of Vec: {:?}", t5 - t4);
println!("Binary search in Vec: {:?} per item", (t6 - t5) / SIZE);
println!("Search in HashSet: {:?} per item", (t7 - t6) / SIZE);
println!("Search in BTreeSet: {:?} per item", (t8 - t7) / SIZE);
It could print:
Pushes into Vec: 8ns per item
Insertions into HashSet: 121ns per item
Insertions into BTreeSet: 148ns per item
Linear search in Vec: 9.217μs per item
Sorting of Vec: 116.76μs
Binary search in Vec: 37ns per item
Search in HashSet: 24ns per item
Search in BTreeSet: 40ns per item

To insert a four-byte number at the end of a Vec, 8 nanoseconds are taken, while inserting it into a HashSet or a BTreeSet takes roughly 15 to 19 times as long.

To sequentially search that forty-thousand-item vector for an existing number, around 9 microseconds are taken on average. Then, the vector is sorted (actually, it was already sorted, so two items are swapped beforehand to make the timing realistic), in 116 microseconds. To search that sorted vector using the binary search algorithm, an item is found in less than 40 nanoseconds on average. It turns out that, if searches are at least 13 times as frequent as insertions, it is more efficient to sort the vector after every insertion and then use binary search.

To search the HashSet for a number, about 24 nanoseconds are taken; to search the BTreeSet, about 40 nanoseconds. So both are orders of magnitude faster than a linear search in a vector; the only case in which a sorted Vec with binary search is competitive with a BTreeSet is when all the insertions come first, the vector is then sorted once, and from then on there are only searches.

All this happens on a particular platform, with a particular item type, a particular number of items, and a particular sequence of operations. If you change any of these, the algorithm that used to be slower can become the fastest one.

The best way to optimize a data structure whose performance is critical is the following. First, declare a trait having all the operations that the data structure must offer to the rest of the application. Then, create the simplest possible implementation of that trait, and use the trait and its methods in the rest of the program. Then, measure whether the performance of those methods is good enough. If it is, there is no need to optimize. Otherwise, keep changing the implementation of the trait, measuring the performance of the application, until you find a satisfactory implementation.

Ordered Dictionaries and Unordered Dictionaries

Another commonly used kind of collection is the dictionary, which is a collection accessible by a search key.

Dictionaries can be considered sets of key-value pairs, with the peculiarity that the look-up is performed using only the key. As a consequence, a dictionary cannot contain two pairs having the same key, even if their values are different.

In this case too, there are several possible algorithms, and the two main ones are provided by the Rust standard library under the names HashMap and BTreeMap. The first one, similar to HashSet, is somewhat faster but does not keep the items ordered; the second one is slower, but keeps the items ordered by their key:
let arr = [(640, 'T'), (917, 'C'), (412, 'S'),
    (670, 'T'), (917, 'L')];
let mut v = Vec::<_>::new();
let mut hs = std::collections::HashMap::<_, _>::new();
let mut bs = std::collections::BTreeMap::<_, _>::new();
for &(key, value) in arr.iter() {
    v.push((key, value));
    hs.insert(key, value);
    bs.insert(key, value);
}
print!("Vec:");
for &(key, value) in v.iter() {
    print!(" {}: {},", key, value);
}
println!("     {:?}", v);
print!("HashMap:");
for (key, value) in hs.iter() {
    print!(" {}: {},", key, value);
}
println!("     {:?}", hs);
print!("BTreeMap:");
for (key, value) in bs.iter() {
    print!(" {}: {},", key, value);
}
println!("     {:?}", bs);
It could print:
Vec: 640: T, 917: C, 412: S, 670: T, 917: L,
    [(640, 'T'), (917, 'C'), (412, 'S'), (670, 'T'), (917, 'L')]
HashMap: 917: L, 412: S, 640: T, 670: T,
    {917: 'L', 412: 'S', 640: 'T', 670: 'T'}
BTreeMap: 412: S, 640: T, 670: T, 917: L,
    {412: 'S', 640: 'T', 670: 'T', 917: 'L'}

An array containing five pairs is used as a source of data. Three other containers are declared and initialized as empty: a Vec, a HashMap, and a BTreeMap. Then, each pair of the array is inserted into those containers.

In fact, all of them are inserted into the Vec, but only four of them end up in the HashMap and in the BTreeMap, because those collections do not admit duplicate keys. When the pair (917, 'L') is inserted, it replaces the pair (917, 'C') already contained in the dictionary. Duplicate values, instead, are allowed.

Notice that in the vector the items are in insertion order, in the BTreeMap collection they are in key order, and in the HashMap collection they are in an arbitrary order.

Their performance is similar to that of the corresponding HashSet and BTreeSet collections.

Collections in C++ and in Rust

Those who know the C++ standard library will find useful the following table, which maps C++ collections to the corresponding Rust collections.

For some C++ collections there is no exactly corresponding Rust collection. In such cases, the most similar collection is indicated, preceded by a tilde (~) to mark the approximate correspondence. This indication should be particularly useful to those who have C++ code to convert into Rust.

C++                        Rust
array<T>                   [T]
vector<T>                  Vec<T>
deque<T>                   VecDeque<T>
forward_list<T>            ~ LinkedList<T>
list<T>                    LinkedList<T>
stack<T>                   ~ Vec<T>
queue<T>                   ~ VecDeque<T>
priority_queue<T>          BinaryHeap<T>
set<T>                     BTreeSet<T>
multiset<T>                ~ BTreeMap<T,u32>
map<K,V>                   BTreeMap<K,V>
multimap<K,V>              ~ BTreeMap<K,(V,u32)>
unordered_set<T>           HashSet<T>
unordered_multiset<T>      ~ HashMap<T,u32>
unordered_map<K,V>         HashMap<K,V>
unordered_multimap<K,V>    ~ HashMap<K,(V,u32)>
