List of Figures

Chapter 1. Clojure philosophy

Figure 1.1. Broad goals of Clojure: this figure shows some of the concepts that underlie the Clojure philosophy, and how they intersect.

Figure 1.2. The Runner: a child’s flip book serves to illustrate Clojure’s notions of state, time, and identity. The book itself represents the identity. Whenever you wish to show a change in the illustration, you draw another picture and add it to the end of your flip book. The act of flipping the pages therefore represents the states over time of the image within. Stopping at any given page and observing the particular picture represents the state of the Runner at that moment in time.

Figure 1.3. The Mutable Runner: modeling state change with mutation requires that you stock up on erasers. Your book becomes a single page: to model a change, you must physically erase and redraw the parts of the picture that change. Under this model, mutation destroys all notion of time, and state and identity become one.

Figure 1.4. The corresponding chessboard layout

Chapter 3. Dipping our toes in the pool

Figure 3.1. Visualization of xor. This is the graphic drawn by the six or so lines of code we’ve looked at so far—a visual representation of Clojure’s bit-xor function.

Figure 3.2. Three possible results from draw-values. The draw-values function you’ve written can be used to create a variety of graphics. Here are examples, from left to right, of bit-and, +, and *.

Chapter 5. Composite data types

Figure 5.1. Each cons-cell is a simple pair, a car and a cdr. A. A list with two cells, each of which has a value X and Y as the head (the car in Lisp terminology) and a list as the tail (the cdr). This is very similar to first and rest in Clojure sequences. B. A cons-cell with a simple value for both the head and tail. This is called a dotted pair but is not supported by any of Clojure’s built-in types.

Figure 5.2. Overtaking the smaller. In Big-O terms, a function with a higher order of growth eventually overtakes one with a lower order, regardless of constant factors and other ancillary costs.

Figure 5.3. The two collections used internally in a single queue. peek returns the front item of the seq, pop returns a new queue with the front of the seq left off, and conj adds a new item to the back of the vector.
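The peek/pop/conj behavior this caption describes can be sketched with Clojure's built-in persistent queue (the class name `clojure.lang.PersistentQueue` is an implementation detail, but it is the standard way to obtain an empty queue):

```clojure
;; A minimal sketch of the queue operations named in the caption.
(def q (into clojure.lang.PersistentQueue/EMPTY [1 2 3]))

(peek q)   ;=> 1, the item at the front
(pop q)    ;=> a queue of (2 3), the front item removed
(conj q 4) ;=> a queue of (1 2 3 4), the new item added at the back
```

Because the queue is persistent, `pop` and `conj` return new queues and leave `q` unchanged.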

Figure 5.4. Basic set operations. The three Venn diagrams show a graphical representation of Clojure’s set functions: intersection, union, and difference.
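The three set functions pictured in the Venn diagrams live in the `clojure.set` namespace:

```clojure
(require '[clojure.set :as set])

(set/intersection #{1 2 3} #{2 3 4}) ;=> #{2 3}
(set/union        #{1 2 3} #{2 3 4}) ;=> #{1 2 3 4}
(set/difference   #{1 2 3} #{2 3 4}) ;=> #{1}
```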

Chapter 6. Being lazy and set in your ways

Figure 6.1. Shared structure tree: no matter how big the left side of a tree’s root node is, something can be inserted on the right side without copying, changing, or even examining the left side. All those values will be included in the new tree, along with the inserted value.

Figure 6.2. Each step of a lazy seq may be in one of two states. If the step is unrealized, it’ll contain a function or closure of no arguments (a thunk) that can be called later to realize the step. When this happens, the thunk’s return value is cached instead, and the thunk itself is released as pictured in the first two lazy seq boxes, transitioning the step to the realized state. Note that although not shown here, a realized lazy seq may simply contain nothing at all, called nil, indicating the end of the seq.
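The unrealized-step-holding-a-thunk mechanism can be sketched with the `lazy-seq` macro; `ints-from` is a hypothetical helper name, not one from the book:

```clojure
;; Each call to lazy-seq wraps its body in a thunk. The body runs,
;; and its result is cached, only when that step is realized.
(defn ints-from [n]
  (lazy-seq (cons n (ints-from (inc n)))))

(take 3 (ints-from 5)) ;=> (5 6 7)
```

Only the three steps forced by `take` are ever realized; the rest of the infinite sequence remains an unrealized thunk.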

Figure 6.3. Lazy linked-list example. Each node of this linked list contains a value (the head) and a delay (the tail). The creation of the next part is forced by a call to tail—it doesn’t exist until then.

Figure 6.4. The qsort function shown earlier would use a structure like this for its work list when sorting the vector [2 1 4 3]. Note that all the parts described by a standard quicksort implementation are represented here.

Figure 6.5. Internal structure of qsort. Each filter and remove lazily returns items from its parent sequence only as required. So to return the first two items of the seq returned by qsort, no remove steps are required from either level A or B. To generate the sequence (4), a single remove step at level B would be needed to eliminate everything less than 3. As more items are forced from the seq returned by qsort, more of the internal filter and remove steps will be run.

Chapter 7. Functional programming

Figure 7.1. Generalized tail-call optimization: if you know that A calls B in the tail position, then you also know that A’s resources are no longer needed, allowing Scheme to deallocate them and defer to B for the return call instead.

Figure 7.2. Elevator trampoline: the trampoline function explicitly bounces between mutually recursive calls.
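The bouncing that the figure illustrates can be sketched with a classic mutual-recursion pair (the `my-even?`/`my-odd?` names are illustrative, not from the book):

```clojure
;; Each step returns a thunk instead of calling the other function
;; directly, so trampoline can bounce between them on a flat stack.
(declare my-odd?)

(defn my-even? [n]
  (if (zero? n) true #(my-odd? (dec n))))

(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))

(trampoline my-even? 100000) ;=> true, with no StackOverflowError
```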

Figure 7.3. A graphical representation of Z World clearly shows the optimal/only path.

Chapter 8. Macros

Figure 8.1. Arrow macro: each expression is inserted into the following one at compile time, allowing you to write the whole expression inside-out when that feels more natural.
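A small sketch of the threading the caption describes, using the `->` macro:

```clojure
;; Each expression is inserted as the first argument of the next
;; at macroexpansion time:
(-> 25 Math/sqrt int list) ;=> (5)
;; expands (roughly) to (list (int (Math/sqrt 25)))
```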

Chapter 9. Combining data and code

Figure 9.1. The logical layout of namespaces. Resolving the Var joy.ns/authors involves symbolic resolution of both the namespace and the Var name; the result is the Var itself. Aliases created with :as work as expected.

Figure 9.2. Namespace private directories: the directories layout for an illustrative joy.contracts namespace

Figure 9.3. Namespace private source: the top of the source file for the joy.contracts namespace

Figure 9.4. Private API directories: using the folder layout to hide namespace implementation details

Figure 9.5. Private API source: the client-facing API is located in contracts.clj and the private API in impl.clj.

Figure 9.6. Hierarchy conflict: most languages allowing type derivations use a built-in conflict-resolution strategy. In the case of CLOS, it’s fully customizable. Clojure requires conflicts to be resolved with prefer-method.

Figure 9.7. As opposed to the notion of monkey-patching and wrapping, the polymorphism in Clojure resides in the functions themselves and not in the classes worked with.

Chapter 10. Java.next

Figure 10.1. Proxy lookup: the instance returned by proxy is a proper proxy that does method dispatch to functions in a lookup table. These functions can therefore be swapped out with replacements as needed.

Figure 10.2. A simple use of DynaFrame: now that you’ve compiled the DynaFrame class, you can start using it to display simple GUIs.

Figure 10.3. A simple dynamic update of DynaFrame: we can update the DynaFrame on the fly without restarting.

Figure 10.4. Basic GUI containers: using only a handful of rudimentary containers, we can build neato GUI prototypes.

Figure 10.5. DynaFrame alerts: we can create slightly more complex GUIs and attach actions on the fly.

Figure 10.6. A much more elaborate DynaFrame GUI: there’s no limit to the complexity of this simple GUI model. Go ahead and experiment to your heart’s content.

Figure 10.7. Outside-in and inside-out error handling. There are two ways to handle errors in Clojure. The typical way is to let exceptions flow from the inner forms to the outer. The other way, discussed in section 13.4, uses dynamic bindings to “reach into” the inner forms to handle them immediately.

Chapter 11. Mutation

Figure 11.1. Illustrating an STM retry: Clojure’s STM works much like a database.

Figure 11.2. Clojure’s embedded transactions: a restart in any of Clojure’s embedded transactions A, B, b, and C causes a restart in the whole subsuming transaction. This is unlike a fully embedded transaction system where the subtransactions can be used to restrain the scope of restarts.

Figure 11.3. Clojure’s four reference types are listed across the top, with their features listed down the left. Atoms are for lone synchronous objects. Agents are for asynchronous actions. Vars are for thread-local storage. Refs are for synchronously coordinating multiple objects.

Figure 11.4. Alter path: the in-transaction value 9 for the Ref num-moves is retrieved in the body of the transaction and manipulated with the alter function inc. This resulting value 10 is eventually used for the commit-time value, unless a retry is required.

Figure 11.5. Splitting coordinated Refs: if Refs A and B should be coordinated, then splitting their updates across different transactions is dangerous. Value a? is eventually committed to A, but the update for B never commits due to retry and coordination is lost. Another error occurs if B’s change depends on A’s value and A and B are split across transactions. There are no guarantees that the dependent values refer to the same timeline.

Figure 11.6. Commute path: the in-transaction value 9 in the num-moves Ref is retrieved in the body of the transaction and manipulated with the commute function. But the commute function inc is again run at commit time with the current value 13 contained in the Ref. The result of this action serves as the committed value 14.
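The alter/commute distinction in figures 11.4 and 11.6 can be sketched directly; the Ref name `num-moves` is taken from the captions, and the starting value is chosen for illustration:

```clojure
(def num-moves (ref 9))

;; alter applies inc to the in-transaction value; that result is the
;; commit-time value unless a retry forces the transaction to rerun.
(dosync (alter num-moves inc))   ;=> 10

;; commute also applies inc in-transaction, but inc is run again at
;; commit time against whatever value the Ref holds by then.
(dosync (commute num-moves inc)) ;=> 11 here, single-threaded

@num-moves ;=> 11
```

The two are equivalent when no other transaction touches the Ref; commute only changes the outcome under contention, where its commit-time re-application lets transactions avoid retries.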

Figure 11.7. Clojure agents versus Erlang processes: each Agent and process starts with the value 1. Both receive an inc request simultaneously but can only process one at a time, so more are queued. Requests to the process are queued until a response can be delivered, whereas any number of simultaneous derefs can be done on an Agent. Despite what this illustration may suggest, an Agent is not just an actor with a hat on.

Figure 11.8. Agents using send versus send-off. When an Agent is idle, no CPU resources are being consumed. Each action is sent to an Agent using either send or send-off, which determines which thread pool will be used to dequeue and apply the action. Because actions queued with send are applied by a limited thread pool, the Agents queue up for access to these threads, a constraint that doesn’t apply to actions queued with send-off.

Figure 11.9. Thread-local Var bindings. This illustration depicts a single Var being used from three different threads. Each rounded box is a Var binding, either thread-local or root. Each star is the Var being deref’ed, with the solid arrow pointing to the binding used. The dotted lines point from a thread-local binding to the next binding on the stack.

Chapter 12. Performance

Figure 12.1. Clojure’s chunked sequences allow a windowed view of a sequence. This model is more efficient, in that it allows for larger swaths of memory to be reclaimed by the garbage collector and better cache locality in general. There’s a cost to total laziness, but often the benefit gained is worth the cost.

Figure 12.2. Using seq1, you can again reclaim the one-at-a-time sequence model. Though not as efficient as the chunked model, it does again provide total sequence laziness.
