A Few Other Key Points

Although the items in this section are not quite Aha!-quality insights, you should consider them while exploring this chapter:

More examples

More examples come with Threading Building Blocks (for instance, examples/parallel_while/parallel_preorder, which uses parallel_while to do a parallel preorder traversal of a sparse graph); I simply did not have the time or space to include them all in this chapter. The examples are set up ready to build and try, although they currently include virtually no explanation of how they work internally. You just have to read the source code.

Even more examples

Be sure to regularly visit the web sites for Threading Building Blocks, as we hope to add more and more examples, user forums, and so on.

Use a scalable memory allocator

Have you ever analyzed the thread safety and scalability of your memory allocator? The results will likely send you looking for a replacement. See the “Memory Allocation” section in this chapter, and see Chapter 6.
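
As a quick, minimal sketch (not code from this book), the scalable allocator can be used either through the C interface declared in tbb/scalable_allocator.h or as an STL-compatible allocator template; the vector here is purely illustrative:

    #include <vector>
    #include "tbb/scalable_allocator.h"

    // An STL container whose memory comes from the TBB scalable allocator,
    // so allocations from different threads do not fight over one heap lock.
    typedef std::vector<float, tbb::scalable_allocator<float> > FloatVector;

    void allocator_demo() {
        FloatVector v;            // illustrative container, not book code
        v.reserve(1000);

        // The C-style interface is available as well.
        void* raw = scalable_malloc(1024);
        scalable_free(raw);
    }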

Create only programs that can run serially (threads=1) for debugging purposes

Always aim to be able to debug your code without concurrency as you write it. Experience shows that common mistakes, which have nothing to do with parallelism, are easier to debug when running without concurrency. See Chapter 2 and Chapter 12.
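
For example, here is a minimal sketch (assuming the task_scheduler_init interface used throughout this book) of forcing the scheduler to use a single thread while you debug:

    #include "tbb/task_scheduler_init.h"

    int main() {
        // Request exactly one thread; parallel_for, parallel_reduce, and
        // spawned tasks then run serially, which makes debugging far easier.
        tbb::task_scheduler_init init(1);

        // ... run the same code you normally run in parallel ...
        return 0;
    }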

The task scheduler is quite approachable

Spawning tasks (Chapter 9) can be a better alternative to parallel_for when boundaries and computation granularity change between function calls. Take a look at how efficient it can be when you create and spawn tasks from the loop. See the example in the “Open Dynamics Engine” section (and other task examples in this chapter).
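
Here is a minimal sketch of that pattern (the chunk of work is a placeholder, not the Open Dynamics Engine code): derive a task from tbb::task, allocate it as a root task, and spawn it.

    #include <cstddef>
    #include "tbb/task.h"

    // A trivial task; execute() is where the real work would go.
    class ChunkTask : public tbb::task {
        int chunk_;                        // placeholder work descriptor
    public:
        ChunkTask(int chunk) : chunk_(chunk) {}
        tbb::task* execute() {             // called by the scheduler
            // ... process chunk_ here ...
            return NULL;                   // no task to bypass to
        }
    };

    void run_chunk(int chunk) {
        // Tasks are allocated through the scheduler, not with plain new.
        ChunkTask& t = *new(tbb::task::allocate_root()) ChunkTask(chunk);
        tbb::task::spawn_root_and_wait(t); // run the task and wait for it
    }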

When in doubt, build a task graph that domain-decomposes your data structures from the top

And create about twice as many tasks as the number of cores you can imagine ever running upon. See “Game Threading Example,” later in the chapter.
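
As a rough sketch of that sizing rule (the splitting of a flat index range here is illustrative, not the game example's code), you might decompose [0, n) into about twice as many contiguous chunks as the default thread count, with each chunk becoming one top-level task:

    #include <cstddef>
    #include <utility>
    #include <vector>
    #include "tbb/task_scheduler_init.h"

    // Split [0, n) into about 2 * P contiguous chunks, where P is the
    // number of threads the scheduler would use by default.
    std::vector<std::pair<std::size_t, std::size_t> >
    top_level_chunks(std::size_t n) {
        std::size_t p = static_cast<std::size_t>(
            tbb::task_scheduler_init::default_num_threads());
        std::size_t chunks = 2 * p;
        std::vector<std::pair<std::size_t, std::size_t> > result;
        for (std::size_t i = 0; i < chunks; ++i) {
            std::size_t begin = i * n / chunks;
            std::size_t end   = (i + 1) * n / chunks;
            if (begin < end)               // skip empty chunks when n < chunks
                result.push_back(std::make_pair(begin, end));
        }
        return result;
    }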

Controlling access to the shared data

This is made easy with the help of the highly concurrent containers (Chapter 5): multiple threads can read a container without blocking each other, or a single writer can modify it with exclusive access. Moreover, if two writers modify different parts of a container, they will not block each other! See the example in the section “CountStrings: Using concurrent_hash_map,” later in this chapter.
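
As a minimal sketch of reader and writer access (the hashing traits and function names here are illustrative, not the CountStrings code):

    #include <cstddef>
    #include <string>
    #include "tbb/concurrent_hash_map.h"

    // Hashing traits required by concurrent_hash_map: hash() and equal().
    struct StringHashCompare {
        static size_t hash(const std::string& s) {
            size_t h = 0;
            for (const char* c = s.c_str(); *c; ++c)
                h = h * 3 + *c;
            return h;
        }
        static bool equal(const std::string& a, const std::string& b) {
            return a == b;
        }
    };

    typedef tbb::concurrent_hash_map<std::string, int, StringHashCompare>
        StringTable;

    void increment_count(StringTable& table, const std::string& key) {
        StringTable::accessor a;       // accessor acts as a writer lock
        table.insert(a, key);          // inserts the key with value 0 if new
        a->second += 1;                // element stays locked until a goes away
    }

    bool read_count(const StringTable& table, const std::string& key, int& out) {
        StringTable::const_accessor a; // const_accessor acts as a reader lock
        if (!table.find(a, key))
            return false;
        out = a->second;
        return true;
    }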
