List of Figures

Chapter 1. Clojure philosophy

Figure 1.1. Some of the concepts that underlie the Clojure philosophy, and how they intersect

Figure 1.2. Right-to-left shuffle: the r->lfix function shuffles math operators, moving the rightmost infix operations to the innermost nested parentheses to ensure that they execute first.

Figure 1.3. Left-to-right shuffle: the l->rfix function shuffles math operators, moving the leftmost infix operations to the innermost nested parentheses to ensure that they execute first.

Figure 1.4. The word REPL hints at three repeated or looped phases: read, eval, and print.

Figure 1.5. More REPL phases: Clojure also has macro-expansion and compilation phases.

Figure 1.6. The reader takes a textual representation of a Clojure program and produces the corresponding data structures.

Figure 1.7. The Runner. A child’s flip book serves to illustrate Clojure’s notions of state, time, and identity. The book itself represents the identity. Whenever you wish to show a change in the illustration, you draw another picture and add it to the end of your flip book. The act of flipping the pages therefore represents the states over time of the image within. Stopping at any given page and observing the particular picture represents the state of the Runner at that moment in time.

Figure 1.8. The Mutable Runner. Modeling state change with mutation requires that you stock up on erasers. Your book becomes a single page: in order to model changes, you must physically erase and redraw the parts of the picture requiring change. Using this model, you should see that mutation destroys all notion of time, and state and identity become one.

Figure 1.9. The corresponding chessboard layout

Chapter 2. Drinking from the Clojure fire hose

Figure 2.1. A graphical representation of the sum-down-from function

Chapter 3. Dipping your toes in the pool

Figure 3.1. Visualization of xor. This is the graphic drawn by the 10 or so lines of code we’ve looked at so far—a visual representation of Clojure’s bit-xor function.

Figure 3.2. The draw-values function you’ve written can be used to create a variety of graphics. Here are examples, from left to right, of bit-and, +, and *.

Chapter 5. Collection types

Figure 5.1. Each cons-cell is a simple pair, a car and a cdr. (A) A list with two cells, each of which has a value—x and y, respectively—as the head (the car in Lisp terminology) and a list as the tail (the cdr). This is very similar to first and rest in Clojure sequences. (B) A cons-cell with a simple value for both the head and tail. This is called a dotted pair but is not supported by any of Clojure’s built-in types.

Figure 5.2. In Big-O terms, regardless of the other ancillary costs, the higher order of magnitude eventually overtakes the lower.

Figure 5.3. The crosswise neighbors of cell 0,0

Figure 5.4. The crosswise neighbors of cell 1,1

Figure 5.5. The two collections used internally in a single queue. peek returns the front item of the seq, pop returns a new queue with the front of the seq left off, and conj adds a new item to the back of the vector.

Figure 5.6. The three Venn diagrams show a graphical representation of Clojure’s set functions: intersection, union, and difference.

Chapter 6. Being lazy and set in your ways

Figure 6.1. Shared structure tree: no matter how big the left side of a tree’s root node is, something can be inserted on the right side without copying, changing, or even examining the left side. All those values will be included in the new tree, along with the inserted value.

Figure 6.2. Each step of a lazy seq may be in one of two states. If the step is unrealized, it contains a function or closure of no arguments that can be called later to realize the step. When this happens, the thunk’s return value is cached instead, and the thunk itself is released as pictured in the first two lazy seq boxes, transitioning the step to the realized state. Note that although not shown here, a realized lazy seq may contain nothing at all, called nil, indicating the end of the seq.

Figure 6.3. Lazy linked-list example. Each node of this linked list contains a value (the head) and a delay (the tail). The creation of the next part is forced by a call to tail—it doesn’t exist until then.

Figure 6.4. The qsort function shown earlier uses a structure like this for its work list when sorting the vector [2 1 4 3]. Note that all the parts described by a standard quicksort implementation are represented here.

Figure 6.5. Internal structure of qsort. Each filter and remove lazily returns items from its parent sequence only as required. So, to return the first two items of the seq returned by qsort, no remove steps are required from either level A or level B. To generate the sequence (4), a single remove step at level B is needed to eliminate everything less than 3. As more items are forced from the seq returned by qsort, more of the internal filter and remove steps are run.

Chapter 7. Functional programming

Figure 7.1. Generalized tail-call optimization: if you know that A calls B in the tail position, then you also know that A’s resources are no longer needed, allowing Scheme to deallocate them and defer to B for the return call instead.

Figure 7.2. Elevator trampoline: the trampoline function explicitly bounces between mutually recursive calls.

Figure 7.3. A graphical representation of Z World clearly shows the optimal/only path.

Chapter 8. Macros

Figure 8.1. Arrow macro: each expression is inserted into the following one at compile time, allowing you to write the whole expression inside-out when that feels more natural.

Chapter 9. Combining data and code

Figure 9.1. The logical layout of namespaces. The process to resolve a var joy.ns/authors includes a symbolic resolution of the namespace and the var name. The result is the var itself. Aliases created with :use work as expected.

Figure 9.2. The directory layout for an illustrative joy.contracts namespace

Figure 9.3. The top of the source file for the joy.contracts namespace

Figure 9.4. Private API directories: using the folder layout to hide namespace implementation details

Figure 9.5. Private API source: the client-facing API is located in contracts.clj, and the private API in impl.clj.

Figure 9.6. Most languages allowing type derivations use a built-in conflict-resolution strategy. In the case of CLOS, it’s fully customizable. Clojure requires conflicts to be resolved with prefer-method.

Figure 9.7. As opposed to the notion of monkey-patching and wrapping, the polymorphism in Clojure resides in the functions themselves rather than in the classes being operated on.

Chapter 10. Mutation and concurrency

Figure 10.1. Tom, alone

Figure 10.2. Tom inserts data into the work queue

Figure 10.3. Tom inserts data into the work queue while Crow and Joel consume it at their leisure.

Figure 10.4. Clojure’s four reference types are listed across the top, with their features listed down the left. Atoms are for lone synchronous objects. Agents are for asynchronous actions. Vars are for thread-local storage. Refs are for synchronously coordinating multiple objects.

Figure 10.5. The king neighbors of cell 1,1

Figure 10.6. Illustrating an STM retry: Clojure’s STM works much like a database.

Figure 10.7. A restart in any of Clojure’s embedded transactions A, B, b, or C causes a restart in the entire subsuming transaction. This is unlike a fully embedded transaction system, where subtransactions can be used to limit the scope of restarts.

Figure 10.8. If refs A and B should be coordinated, then splitting their updates across different transactions is dangerous. Value a? is eventually committed to A, but the update for B never commits due to a retry, and coordination is lost. Another error occurs if B’s change depends on A’s value and A and B are split across transactions: there are no guarantees that the dependent values refer to the same timeline.

Figure 10.9. The in-transaction value 9 for the ref num-moves is retrieved in the body of the transaction and manipulated with the alter function inc. The resulting value 10 is eventually used as the commit-time value, unless a retry is required.

Figure 10.10. The in-transaction value 9 in the num-moves ref is retrieved in the body of the transaction and manipulated with the commute function. But the commute function inc is run again at commit time with the current value 13 contained in the ref. The result of this action serves as the committed value 14.

Figure 10.11. Clojure agents versus Erlang processes: each agent and process starts with the value 1. Both receive an inc request simultaneously but can process only one at a time, so more are queued. Requests to the process are queued until a response can be delivered, whereas any number of simultaneous derefs can be done on an agent. Despite what this illustration may suggest, an agent is not an actor with a hat on.

Figure 10.12. When an agent is idle, no CPU resources are being consumed. Each action is sent to an agent using either send or send-off, which determines which thread pool is used to dequeue and apply the action. Because actions queued with send are applied by a limited thread pool, the agents queue up for access to these threads—a constraint that doesn’t apply to actions queued with send-off.

Figure 10.13. Thread-local var bindings. This illustration depicts a single var being used from three different threads. Each rounded box is a var binding, either thread-local or root. Each star is the var being deref’ed, with the solid arrow pointing to the binding used. The dotted lines point from a thread-local binding to the next binding on the stack.

Chapter 11. Parallelism

Figure 11.1. The concurrent design with an intermediate work queue from the previous chapter is potentially parallelizable.

Figure 11.2. The concurrent design can be parallelized through the use of another work queue and producer as well as two more consumers.

Chapter 12. Java.next

Figure 12.1. The instance returned by proxy is a proper proxy that does method dispatch to functions in a lookup table. These functions can therefore be swapped out with replacements as needed.

Figure 12.2. A directory listing served by the simple web server

Figure 12.3. File details served by the simple web server

Figure 12.4. Now that you’ve compiled the DynaFrame class, you can start using it to display simple GUIs.

Figure 12.5. You can update the DynaFrame on the fly without restarting.

Figure 12.6. Using only a handful of rudimentary containers, you can build neato GUI prototypes.

Figure 12.7. DynaFrame alerts: you can create slightly more complex GUIs and attach actions on the fly.

Figure 12.8. A much more elaborate DynaFrame GUI. There’s no limit to the complexity of this simple GUI model. Go ahead and experiment to your heart’s content.

Figure 12.9. There are two ways to handle errors in Clojure. The typical way is to let exceptions flow from the inner forms to the outer. The other way, discussed in section 17.4, uses dynamic bindings to reach into the inner forms to handle errors immediately.

Chapter 14. Data-oriented programming

Figure 14.1. Accessing rectangular data in Java is often a chore, highlighting the vast difference between the data model and the code used to access it.

Figure 14.2. Via various programming magicks, an ORM provides a class-instance interface that maps to database tables on the back end for its property values.

Figure 14.3. Many programs can be viewed as an Engine of computation, taking an input and performing some action.

Figure 14.4. The data-programmable model is composed of an Engine taking a specification, performing some actions, and eventually returning or materializing a result.

Figure 14.5. The Ant engine: Ant takes a build specification and returns a build artifact.

Figure 14.6. The Lisp compiler as the ultimate computation engine. The Clojure compiler is a data-programmable engine taking Clojure data as input and returning Clojure data as a result.

Figure 14.7. Integrating polyglot systems using events as data

Figure 14.8. Clojure code is a data structure that Clojure can manipulate.

Chapter 15. Performance

Figure 15.1. Clojure’s chunked sequences allow a windowed view of a sequence. This model is more efficient, in that it allows for larger swaths of memory to be reclaimed by the garbage collector and better cache locality in general. There’s a cost to total laziness, but often the benefit gained is worth the cost.

Figure 15.2. Using seq1, you can reclaim the one-at-a-time sequence model. Although not as efficient as the chunked model, it does provide total sequence laziness.

Chapter 16. Thinking programs

Figure 16.1. Starting position of the example Sudoku board

Chapter 17. Clojure changes the way you think

Figure 17.1. Using defformula is akin to programming a spreadsheet.
